Lessons from Scarlett Johansson and OpenAI – What Now?

I’m sure many of you have been following the drama around Scarlett Johansson and OpenAI. To create the voice for ChatGPT, OpenAI claims they went through an extensive casting process before recording voice actors. There are two hypothetical ways they could be telling the truth while still arriving at a voice that sounds distinctly like Scarlett in “Her”:

  1. They could have recorded the female voice actors and then prompted their foundation models to make the result sound more like Scarlett. For example, they could have prompted the voice to “sound more like Samantha, the AI in the film ‘Her’ starring Scarlett Johansson.” OpenAI’s foundation models are trained on vast swaths of culture, so they already encode the film alongside trillions of other pieces of data.
  2. They could have recorded the female voice actors and then used a series of prompts to arrive at a Scarlett-like voice more indirectly. For example, they could have prompted the voice to “sound more like a female AI in a movie that is a companion of sorts, kind but also sarcastic and warm,” iterating until they reached the desired result.

Regardless of how they arrived at the voice, there are plenty of unresolved questions about ownership in generative AI, largely because legislation has not kept pace with this rapidly developing technology. Because “Her” is an Annapurna Pictures and Warner Bros. film, the studios have copyright claims: they own both the film and the lines Samantha speaks within it. As the talent who voiced the AI in the film, Scarlett could claim ownership of any new outputs that were created (which would cover essentially all of OpenAI’s, since none were lines from the film). And because it developed the generative models that produce a novel voice, OpenAI can claim the training data falls under fair use and that the voice and the model are entirely its own.

Given all of this, how should we be thinking about the relationship between studios, talent, and new AI platforms like OpenAI? Hundreds of companies will be building models, which puts the onus on talent and IP holders to hold them accountable. But I believe we’re currently at a moment when IP holders and talent have the leverage: tons of valuable metadata and additional data inside their businesses that the new platforms and models need. Without this metadata, which they cannot scrape from the open internet, the platforms will not be able to continue building and refining their technology.

This moment provides an opportunity for labels, studios, talent, and other IP holders to address the imbalances that built social media. For example, YouTube would not be what it is today without music labels, and Instagram would not be what it is today without the biggest talent and creators. One could argue those businesses alone are worth $1 trillion each. It is time for a system that puts the power in the hands of those with the leverage: IP holders and talent should demand that platforms adopt Authenticated AI, so they can protect and license their data in a trusted manner, with contracts and technology that safeguard them and manage those licenses in perpetuity.

I also believe this is an existential moment for many of these IP holders. We can’t let these AI platforms police themselves. A third party can provide transparent audit rights that hold platforms accountable and help IP holders understand the value of their content in this new marketplace. That enables transparent, data-driven negotiations between rights holders and platforms, something studios and talent are only now getting, 17 years into the streaming era. Building on the work we have done thus far with our partners and customers, the technology offered by Vermillio ensures that labels, studios, talent, and other IP holders will get a fair deal, with the ability to audit it.
