The entertainment industry just woke up to something that’s been happening for a while.
AI models have been training on creative work and celebrity likenesses since the beginning. But OpenAI’s Sora 2 announcement last week made it explicit: we’re using your data, your face, your voice—unless you explicitly tell us not to. And they’re not alone. Grok, Perplexity, and dozens of other AI companies are doing the same thing with varying levels of transparency.
The response was fierce. OpenAI reversed course on celebrity likenesses after three days. But here’s what that headline doesn’t tell you: the opt-in controls OpenAI promised still have to be built, and the damage is already done: your likeness is in the model. Meanwhile, copyright protections remain opt-out, and dozens of other AI companies are still operating under whatever rules they choose.
At Vermillio, we’ve just had our busiest week in a year. And what we’re seeing on the ground is far more urgent than any policy debate.
What’s Actually Happening Right Now
This isn’t a theoretical problem. The threats are real, they’re happening daily, and they’re getting worse fast.
Fans Are Being Scammed Out of Thousands
We’re tracking a massive surge in sophisticated impersonation scams. Scammers create fake social media accounts and dating profiles using stolen photos and AI-generated videos of soap opera actors, Hallmark stars, and other beloved performers.
They build emotional connections with fans, then extract money through:
- Fake meet-and-greet opportunities
- Romance scams needing “plane ticket money” to meet in person
- Fraudulent personal video requests
- Bogus charity appeals
With tools like Sora 2, Grok, and others becoming more accessible, scammers can now create convincing deepfake videos of talent “speaking” directly to victims. It’s devastatingly believable.
When fans realize they’ve been conned? They blame the talent. Years of trust-building—destroyed by someone the performer has never met.
Your Work Is Training Your Replacement
Meanwhile, the copyright violations are accelerating:
- Hyperrealistic fake episodes of shows like Family Guy generated using AI
- “New episodes” of canceled shows using actual characters’ voices
- AI-generated music that sounds exactly like specific artists
- Creative styles and techniques absorbed into models that compete with the original creators
For musicians, designers, visual artists, and writers—every piece of work you publish becomes potential training data. The AI learns your style, replicates it, then undercuts you on price.
It’s Happening Across All Modalities
The violations we’re tracking aren’t staying in neat categories:
- Voice cloning for fake audio messages and phone scams
- Image generation for impersonation accounts and unauthorized endorsements
- Video deepfakes for romance scams and fake testimonials
- Text generation trained on writers’ distinctive styles and copyrighted work
One person can face all of these threats simultaneously. And most targets don’t even know it’s happening until significant damage is done.
The Real Problem: You Can’t Monitor This Alone
Here’s why this is spiraling out of control:
You’re not dealing with one AI company making one policy change. You’re dealing with:
- Dozens of AI companies with different (or no) policies
- Thousands of scammers using those tools
- Millions of websites and platforms where violations appear
- New AI models launching constantly
Even if you wanted to opt out of everything, you’d need to:
- Track every AI company developing models (good luck)
- Monitor their policy announcements
- Submit opt-out requests to each one
- Verify those requests were honored
- Repeat this process as new companies emerge weekly
It’s impossible. Which is exactly why the violations are accelerating.
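To see why, consider what even a bare-bones tracking system would have to model. The sketch below is purely hypothetical: the company names, status fields, and 30-day review interval are illustrative assumptions, since no central opt-out registry or API actually exists.

```python
# A hypothetical sketch of the bookkeeping a manual opt-out effort requires.
# No central registry or API like this exists; every field must be kept
# up to date by hand, per company.
from dataclasses import dataclass
from datetime import date, timedelta

REVIEW_INTERVAL = timedelta(days=30)  # assumed re-check cadence

@dataclass
class OptOutRequest:
    company: str        # each AI company needs its own request
    submitted: date     # when the opt-out was filed
    verified: bool      # did the company confirm it honored the request?
    last_checked: date  # policies change, so verification goes stale

def needs_attention(req: OptOutRequest, today: date) -> bool:
    """Flag requests that were never verified or haven't been re-checked."""
    return (not req.verified) or (today - req.last_checked > REVIEW_INTERVAL)

# Illustrative data only: two of the "dozens" of companies you'd track.
requests = [
    OptOutRequest("ExampleAI", date(2025, 9, 1), True, date(2025, 9, 15)),
    OptOutRequest("OtherModelCo", date(2025, 10, 1), False, date(2025, 10, 1)),
]
today = date(2025, 10, 20)
print([r.company for r in requests if needs_attention(r, today)])
# -> ['ExampleAI', 'OtherModelCo']: stale or unverified within weeks
```

Even this toy version assumes you already know every company to file with. With new model providers launching weekly, the list itself is never complete.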
What Comprehensive Protection Actually Looks Like
This is why we built TraceID the way we did.
We don’t wait for you to discover violations. We find them first.
We search across everything:
- Social media platforms and dating apps where impersonators target fans
- AI training datasets where your copyrighted work is being scraped
- Websites, marketplaces, and forums where deepfakes appear
- Emerging AI services as they launch
We protect all modalities:
- Voice cloning and audio deepfakes
- Image generation and fake photos
- Video deepfakes and synthetic media
- Text generation trained on your work
We handle both NIL (name, image, and likeness) and copyright: most services focus on one or the other. The threats you face don’t stay in neat legal categories, and neither should your protection.
We enforce removal immediately: Detection without action is just surveillance. When we find violations, we get them removed through legal takedown notices, direct platform action, and whatever mechanisms actually work.
We monitor 24/7: New violations appear daily. Your protection can’t be a one-time audit.
What We’re Seeing This Week
Our phones haven’t stopped ringing. Talent, managers, agencies, and studios are all suddenly asking the same questions:
“Is my likeness being used without permission?”
Usually, yes.
“Can people really fake my voice that convincingly?”
Absolutely.
“Is my work training AI models?”
Almost certainly, across multiple companies.
“How can I stop this?”
Get proactive protection with Vermillio.
The Technology Already Exists
You don’t need to wait for better legislation or hope that every AI company does the right thing. The technology to protect yourself exists today.
AI-powered monitoring that scans millions of sites continuously. Detection systems that identify deepfakes and unauthorized use across voice, image, video, and text. Legal enforcement frameworks that actually get violations removed.
Vermillio’s platform does all of this because the threats you face require all of this.
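For readers curious about the mechanics, here is a minimal sketch of one widely used detection building block: perceptual image hashing, which can flag re-uploads and lightly edited copies of a known photo. It is a generic illustration built on the open-source imagehash library, not a description of Vermillio’s pipeline; the file names and distance threshold are assumptions.

```python
# A minimal sketch of perceptual-hash matching with the open-source
# imagehash library (pip install imagehash pillow). Illustrative only;
# file names and the distance threshold are assumptions.
from PIL import Image
import imagehash

def load_hash(path: str) -> imagehash.ImageHash:
    """Compute a perceptual hash that survives resizing and re-encoding."""
    return imagehash.phash(Image.open(path))

def is_probable_match(reference: imagehash.ImageHash,
                      candidate: imagehash.ImageHash,
                      max_distance: int = 8) -> bool:
    """Subtracting two hashes gives their Hamming distance;
    a small distance means the images are likely the same."""
    return (reference - candidate) <= max_distance

# Usage: hash a rights holder's reference photo once, then compare
# every scraped image against it.
reference = load_hash("reference_photo.jpg")
suspect = load_hash("scraped_profile_pic.jpg")
if is_probable_match(reference, suspect):
    print("Possible unauthorized reuse; queue for takedown review.")
```

Production systems layer many such signals (audio fingerprints, face embeddings, text similarity) and add human review, but the principle is the same: reduce each asset to a comparable signature, then scan at scale.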
Why Now Matters
The AI tools creating these violations are getting better and cheaper every week. What requires technical skill today will be point-and-click simple tomorrow. The scams that seem sophisticated now will be automated at scale soon.
Dozens of models are training on whatever data they can access. More companies are launching similar tools constantly. The gap between what AI can do and what protections exist isn’t narrowing—it’s widening.
If you’re a creator, talent, an IP owner, or anyone with a digital presence, waiting isn’t a strategy. It’s just hoping the problem doesn’t find you before you find it.
The Bottom Line
The Sora 2 announcement was a wake-up call. But it’s what’s happening beyond the headlines that should concern you.
Your fans are being scammed. Your work is training your competition. Your likeness is being used without permission. And it’s happening right now, across dozens of platforms and AI systems, faster than any one person could possibly track.
The good news? Comprehensive protection exists. You don’t have to monitor this alone.
The question is: will you wait to discover violations after the damage is done, or will you find and stop them before they spread?
Ready to see what unauthorized use of your likeness already exists?
Contact us at hello@vermill.io or visit vermill.io/protection