The Rise of AI Scams: What You Need to Know This Holiday Season

AI-generated scams aren’t coming. They’re already here and they’re getting worse.

According to the Federal Trade Commission, consumers reported losing over $2.7 billion to imposter scams in 2023 alone. By 2024, reports of AI-enabled fraud had increased by 25% compared to the previous year. And as we head into the 2025 holiday season, experts warn we’re about to see an unprecedented surge.

The technology that was once limited to tech labs is now accessible to anyone with an internet connection. Deepfakes that used to take days to produce can now be generated in minutes. Voice clones that once required hours of audio samples now need just seconds.

And scammers are taking full advantage.

Political AI: Manipulating Democracy

The 2025 political landscape has become a testing ground for AI-generated content. Politicians are using AI in campaign videos – some transparently, others less so.

Consider New York City’s mayoral race, where Andrew Cuomo briefly released an AI-generated campaign video falsely depicting Zohran Mamdani supporters in a negative light. The video trafficked in crude stereotypes and was quickly taken down. Donald Trump also recently shared an AI-generated video depicting poop being dropped on “No Kings” protesters, using manipulated content to mock political opponents.

A recent study found that 58% of Americans have encountered AI-generated political misinformation, and 38% couldn’t tell it was fake.

When voters can’t trust what they see or hear, democracy itself is at stake.

Health AI Scams: Exploiting Trust in Experts

Some of the most insidious AI scams target people’s health concerns.

Deepfakes of prominent doctors and health experts – people like Dr. Andrew Huberman, Dr. Peter Attia, and even celebrities like Oprah – are being used to sell supplements, weight loss products, and miracle cures they never endorsed.

These aren’t crude fakes. They’re sophisticated videos where the person appears to speak directly to camera, using their real voice, facial expressions, and mannerisms. The only problem? They never said any of it. They never promoted any of these products.

According to the Better Business Bureau, health-related scams using AI-generated endorsements increased by 500% in 2025. Victims aren’t just losing money; they’re making health decisions based on fraudulent medical advice from people who never gave it.

The trust people place in medical professionals is being weaponized against them.

Financial Scams: The Voice Clone Epidemic

“Mom, I need help. I’ve been in an accident.”

It sounds like your daughter. It has her voice, her speech patterns, even her way of pausing when she’s upset. But it’s not her. It’s an AI clone, and someone is using it to steal your money.

Voice cloning scams have exploded in the past year. The FBI reports that complaints about these scams increased by 300% in 2024, with losses exceeding $900 million.

Crypto investment scams are particularly prevalent. Fraudsters create deepfake videos of prominent investors like Elon Musk or well-known financial advisors, promising guaranteed returns on cryptocurrency investments. They clone voices to make urgent phone calls requesting wire transfers.

UK-based Starling Bank found that 46% of people surveyed had been targeted by an AI voice cloning scam, and 1 in 4 said they would send money if they received a call that sounded like a loved one in distress. In response, Starling recently launched its own AI-powered tool, Scam Intelligence, to combat these scams.

Misinformation: Rewriting Reality

Perhaps most disturbing is AI’s ability to create entirely fabricated “evidence” of events that never happened.

When Hurricane Melissa hit Jamaica last week, hundreds of videos circulated online depicting damage that wasn’t real. Many were AI-generated, designed to spread panic and drive engagement.

AI-generated disaster footage, fake rescue operations, fabricated celebrity incidents, manipulated historical events – these aren’t isolated incidents. They’re becoming routine.

An MIT study found that false information spreads six times faster than accurate information on social media. Now add AI’s ability to create photorealistic “proof” of those false claims, and we have a serious problem.

When seeing is no longer believing, how do we know what’s real?

The Holiday Season Surge

As we head into the holiday season, experts predict a massive increase in AI-enabled scams.

Why? Because scammers know people are:

  • Shopping more (fake product endorsements, fraudulent deals)
  • Donating to charities (fake fundraisers using celebrity deepfakes)
  • Connecting with family (voice clone scams targeting grandparents)
  • Distracted and rushed (easier to miss red flags)
  • Emotional and generous (more likely to respond to urgent appeals)

The FBI has issued specific warnings about AI scams during the holidays, noting that criminals are already creating deepfakes of public figures promoting fake charities and investment opportunities.

Europol estimates that AI-enabled fraud during the 2025 holiday season could exceed $10 billion globally.

What You Can Do

The scale of the problem is daunting, but you’re not powerless:

Verify before you trust. If a video seems shocking, check multiple credible sources before sharing.

Create a family code word. If someone calls claiming to be in trouble, ask for the code word before sending money.

Question urgency. Scammers use time pressure to override your judgment. If someone is rushing you, that’s a red flag.

Report what you see. If you encounter an AI scam using someone’s likeness without consent, report it. Platforms like Vermillio exist specifically to help get this content removed.

Protect your own content. Be aware of what videos and audio of yourself exist online. Scammers use publicly available content to create deepfakes.

The Bigger Picture

AI scams represent more than just financial fraud. They’re eroding trust in media, institutions, and even our own senses. When anyone’s face and voice can be convincingly faked, and any “evidence” can be fabricated, we’re facing a crisis of truth itself.

At Vermillio, we’re working to help people protect their identity and fight back against unauthorized use of their likeness. But technology alone won’t solve this. We need stronger laws, better platform policies, improved AI detection tools, and most importantly, public awareness.

Because right now, the scammers are ahead. And as this holiday season approaches, they’re counting on you not knowing what to look for.

Stay vigilant. Stay informed. And stay protected.


Vermillio – Protecting your identity in the age of AI deception.

Get our free REPORT

THE “i” in Generative AI


Protect My Content with TraceID by Vermillio

By submitting this form, you agree to our Privacy Policy
