Image-making AIs are headed to your favorite fan universe

Originally appeared on Axios | written by Ina Fried

Today’s popular AI-driven image-generating apps can already costume a user’s selfie in fantasy-world garb. But some in the field are working on using similar tech to mimic brand-name characters and worlds from popular movies and video games.

Why it matters: Whether it’s “Star Wars,” the Marvel universe or “The Simpsons,” people have strong ties to their favorite fictional worlds and, presumably, might be willing to pay handsomely to see themselves in such environments.

The initial idea here is to let users create images a la Lensa AI, but instead of dropping a portrait into a broad category like science fiction or anime, you’d see yourself more specifically as a Muppet or a Klingon. Further down the road, these apps could evolve to let you join in the action of a movie or video game.

  • Novelty filters and web-based tools provided some of this fun in the past, but generative AI could make it cheaper, easier and more widespread.

Among those aiming to make that happen is Vermillio, a Chicago-based startup. The company has been in talks with a variety of content owners and has signed at least one major deal, expected to be announced early next year.

  • All content owners should be able to set up a generative AI system based on their intellectual property as a way to engage with fans, says Vermillio CEO Dan Neely.
  • “It should happen, it should be easy and you should get paid,” Neely said.

The big picture: The move to bring licensed content into generative AI comes as the technology is having a major moment, with the arrival of engines like DALL-E, Stable Diffusion and ChatGPT as well as commercial services and apps like Lensa AI.

  • Vermillio isn’t alone in aiming to combine generative AI with the work of specific creators. Adobe has also talked about wanting to enable generative AI to build in the style of artists who choose to help a program “learn” from their works.
  • Using a single set of licensed content to power an AI engine could help address some of the legal uncertainty over today’s engines, many of which were trained with mountains of data scraped from the web that can include lots of copyrighted material.

Between the lines: Authentication is a key aspect of Vermillio’s offering. The company says it can track the history of a piece of AI-generated content, from the engine that generated it to modifications made after its creation.

  • “There needs to be some kind of technology that understands creation, where it all came from,” Neely said.

Vermillio’s authentication system uses a blockchain for that purpose. But other techniques could play a role, too.

  • Adobe has led its own industry effort, called the Content Authenticity Initiative, which is designed to work across companies without relying on blockchain technology.
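The mechanics behind this kind of provenance tracking are not detailed in the article, but the general idea, whether implemented on a blockchain or not, is to chain together hashed records of each step in a piece of content's history so that later tampering is detectable. The sketch below is a hypothetical, simplified illustration of that concept only; it is not Vermillio's system, nor the Content Authenticity Initiative's specification, and all names in it are invented for the example.

```python
# Hypothetical illustration only: a minimal, tamper-evident provenance chain
# for an AI-generated image. Each record stores a hash of the content at that
# step plus a hash of the previous record, so altering any earlier entry
# breaks the chain. Not Vermillio's actual system or the CAI/C2PA spec.
import hashlib
import json
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class ProvenanceRecord:
    action: str                              # e.g. "generated", "edited"
    actor: str                               # the engine or tool responsible
    content_hash: str                        # hash of the image bytes at this step
    prev_record_hash: Optional[str] = None   # link back to the prior step

    def record_hash(self) -> str:
        payload = json.dumps(self.__dict__, sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()


def append_record(chain: List[ProvenanceRecord], action: str,
                  actor: str, content: bytes) -> None:
    """Add a new step to the content's history, chained to the previous one."""
    prev_hash = chain[-1].record_hash() if chain else None
    chain.append(ProvenanceRecord(
        action=action,
        actor=actor,
        content_hash=hashlib.sha256(content).hexdigest(),
        prev_record_hash=prev_hash,
    ))


def verify_chain(chain: List[ProvenanceRecord]) -> bool:
    """Check that no intermediate record has been altered or removed."""
    for prev, current in zip(chain, chain[1:]):
        if current.prev_record_hash != prev.record_hash():
            return False
    return True


# Example: an image is generated by an engine, then modified by an editing tool.
history: List[ProvenanceRecord] = []
append_record(history, "generated", "image-engine-v1", b"<original image bytes>")
append_record(history, "edited", "photo-editor", b"<modified image bytes>")
print(verify_chain(history))  # True unless a record has been tampered with
```

A blockchain-based approach like Vermillio's would anchor records such as these on a shared ledger, while an effort like the Content Authenticity Initiative instead attaches signed provenance metadata to the file itself; the sketch above only shows the common hash-chaining idea.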

Yes, but: For content owners, opening up their fictional worlds to generative AI carries the promise of added revenue but also holds significant dangers.

  • The biggest issue is the lack of control. There’s “brand risk” in the possibility that a generative AI could create works that are unpleasant or even offensive.
  • The unpredictability can be seen more clearly with text-based chatbots, though there have also been complaints of image generators like Lensa AI sexualizing their subjects, lightening skin, or providing other problematic results.
