OpenAI’s latest announcement—and subsequent reversal—has sent ripples through the entertainment and creative industries. Sora 2 is hands down the most advanced text-to-video AI model on the market. With nothing more than a few sentences, users can create hyper-realistic video, complete with audio. While Sora 2 represents a significant leap forward in synthetic media creation, it’s the rapidly evolving policies around celebrity likenesses and copyrighted content that should have talent and their representatives paying very close attention.
Sora 2 initially launched with an opt-out policy: celebrity likenesses and copyrighted characters could appear in generated videos unless individuals and rightsholders explicitly opted out. Just three days later, facing significant backlash, OpenAI reversed course. Rightsholders will now decide whether their copyrighted characters can be generated by Sora 2, with revenue sharing for those who opt in.
As our CEO, Dan Neely, told The Wall Street Journal: “For so many in the AI space, this move validates longstanding fears and underscores why we need guardrails.”
A Partial Victory—But the Battle Continues
OpenAI’s reversal on name, image, and likeness (NIL) rights represents a significant win for talent advocacy. The shift from opt-out to opt-in for copyrighted characters acknowledges what the industry has been demanding: consent should be required, not assumed.
However, this victory is incomplete.
While NIL is now opt-in for Sora 2, copyright protections remain opt-out—meaning your creative work, performances captured on video, musical compositions, and other copyrighted material can still be used to train AI models unless you actively exclude yourself. And that’s just OpenAI. Dozens of other AI companies continue developing models with varying policies, many still operating under opt-out frameworks or no clear policy at all.
The path to comprehensive protection is still a long way off.
The Tilly Norwood Factor: When AI Becomes the Talent
Adding fuel to this conversation is the rise of Tilly Norwood, the fully AI-generated actor who’s recently sent shockwaves through the entertainment industry. Tilly represents what’s possible when AI is trained on vast amounts of human performance data—a completely synthetic performer capable of “acting” in ways that mimic human talent.
While Tilly is openly presented as an AI creation, the technology that powers this digital actor was built on analysis of countless real human performances. And unlike Sora 2’s new NIL policy, talent were never given the chance to opt out or opt in to the use of their likenesses. This raises an uncomfortable question: what happens when AI-generated actors are trained on your likeness, your performance style, and your unique qualities—all without your explicit permission?
Understanding What’s Still at Risk
Even with OpenAI’s policy change, significant vulnerabilities remain:
What’s now protected:
- Your name, image, and likeness cannot be generated without your opt-in consent
What’s still at risk:
- Copyright Material: Your performances, creative work, and copyrighted content can still be used for AI training unless you opt out
- Other AI Companies: Dozens of AI models from other companies operate under different policies
- Training vs. Generation: Even if a model can’t generate your likeness, it may still train on your data to improve its general capabilities
What this means for different talent:
- For Actors: While Sora 2 won’t generate your likeness without permission, your performances can still be analyzed to teach AI acting techniques, emotional range, and timing.
- For Musicians: Your recordings and compositions can be studied to train AI on musical structure, vocal techniques, and artistic choices.
- For Athletes: Your game footage and signature moves can inform AI understanding of sports performance and technique.
- For Content Creators: Your videos, creative style, and content structure can be absorbed into AI models that learn from your approach.
Why Opt-Out Models Remain Problematic
The burden of protection shouldn’t fall on individuals to constantly monitor and opt out of every new AI system. Opt-out models assume permission unless explicitly denied—flipping traditional entertainment standards where consent is required before using someone’s likeness or work.
Many talent may not even be aware their data is being used, and with new AI models emerging constantly from companies worldwide, the ongoing burden is unrealistic and nearly impossible to manage alone.
At Vermillio, we believe that while OpenAI’s reversal is a step in the right direction, talent cannot afford to rely on voluntary corporate policy changes. AI use of your name, image, likeness, and IP will only continue and accelerate across the industry. Protecting your digital identity today isn’t just recommended—it’s essential.
What Talent Organizations Are Saying—And Why We Support Them
Industry unions like SAG-AFTRA and talent agencies like WME have been negotiating for protections around the AI use of clients’ likenesses, demanding stronger consent requirements and fair compensation frameworks.
OpenAI’s initial policy—and the swift backlash that forced its reversal—validates what many in the industry have been warning about. Corporate policies can change overnight, in either direction.
At Vermillio, we stand firmly with these talent organizations in advocating for stronger ethical guardrails around the use of talent data in AI training. Current opt-out models for copyrighted work are insufficient; the industry needs universal mandatory opt-in consent, transparent disclosure, and fair compensation across all AI companies and applications.
However, advocacy alone isn’t enough. While we actively support and lobby for these systemic changes, we recognize that policy moves slowly while AI technology moves fast. Talent cannot afford to wait for legal protections that may take years to materialize, or hope that every AI company will follow OpenAI’s example.
This is why we urge talent to take proactive steps to protect themselves today. Protection and advocacy must happen simultaneously.
What Talent Should Do Now
1. Implement Comprehensive Digital Identity Protection: Use services like Vermillio that provide 24/7 monitoring of your likeness across the internet. Vermillio’s AI-powered platform continuously scans for unauthorized use—including deepfakes, AI-generated content, or potential training data misuse—and takes immediate action to have it removed. This creates a protective barrier while larger industry changes take effect.
2. Stay Informed: Monitor announcements from major AI companies about their training data policies and opt-out procedures. Remember: policies can change quickly, as OpenAI just demonstrated.
3. Document Your Opt-Outs: Keep detailed records of when and where you’ve opted out of AI training programs across different companies.
4. Engage Your Union or Agency: Collective action through industry organizations is more effective than individual efforts. SAG-AFTRA and other unions have proven they can influence corporate behavior.
5. Consult Legal Counsel: Work with attorneys who understand both entertainment law and emerging AI regulations.
6. Speak Up: Use your platform to raise awareness about these issues and influence policy development. The Sora 2 reversal proves that public pressure works.
The Role of Proactive Protection: Why Waiting Isn’t an Option
OpenAI’s policy reversal is encouraging, but it’s one company among many. Given the difficulty of monitoring how your data is used across numerous AI systems from dozens of companies worldwide, proactive digital identity protection has become essential. By the time you discover your likeness has been used to train an AI model or appears in unauthorized content, significant damage may already be done.
Vermillio’s platform continuously monitors the internet for unauthorized use of your likeness across millions of websites, social media platforms, and emerging AI services. When unauthorized content is detected, we don’t just alert you—we take immediate action to have it removed, leveraging legal frameworks to protect your digital identity before content can spread or be incorporated into training datasets.
Think of Vermillio as your digital immune system: constantly scanning, detecting threats, and neutralizing them before they can cause harm. While we continue advocating for systemic change, we provide talent with the immediate protection they need today.
Moving Forward: Advocacy and Action Working Together
At Vermillio, we believe in a two-pronged approach: advocate for systemic change while taking immediate protective action.
OpenAI’s reversal on NIL demonstrates that advocacy works—but it also reveals how fragile these protections are. What one company grants through policy, another may ignore. What’s protected today could change tomorrow. And copyright protections for your creative work remain opt-out, even at OpenAI.
We support universal opt-in standards, fair compensation, transparency requirements, and legal protections that extend traditional likeness rights to AI training contexts across all companies and applications.
But the AI landscape will not pause while we wait for perfect legislation or hope every company follows OpenAI’s lead. Companies will continue developing new models, and your likeness will continue to be at risk unless you take proactive steps to protect it.
We’re committed to both efforts: lobbying alongside talent organizations for the ethical guardrails the industry desperately needs, while providing talent with the tools to protect themselves today.
Change takes time. Protection is immediate. You need both.
Ready to protect your digital identity? Learn more about how Vermillio provides comprehensive monitoring and protection for talent in the AI era.