How to Spot AI Scams: Deepfakes, Voice Cloning, and the Red Flags Most People Miss

Last updated: March 2026

Artificial intelligence is transforming productivity across nearly every part of our lives, reshaping how we communicate, learn, work, create, and solve complex problems. But like any powerful technology, its benefits are matched by the determination of those who use it for harm.

The era of obvious phishing emails filled with broken English and suspicious links is largely over. AI scams have evolved far beyond that, enabling cybercriminals to craft sophisticated, highly targeted attacks that can fool even cautious and tech-savvy individuals. With just a few seconds of audio pulled from social media, scammers can clone a child’s voice. They can stage convincing video calls where a familiar face appears to speak in real time. They can even sustain months-long fake romantic relationships using AI-generated photos, videos, and messages, operating nonstop without human effort.

These are not distant possibilities. They are happening right now to people across the country, and the financial losses are mind-boggling.

What Is an AI Scam?

An AI scam is any fraud that uses artificial intelligence tools to make deception more convincing, more personal, and harder to detect. This includes using AI to clone someone’s voice, generate a realistic fake video, write targeted phishing messages, or create entirely fictional identities complete with photos, backstories, and ongoing conversation. What sets AI scams apart from traditional fraud is not just the technology involved but the scale and realism it enables. A single scammer can now impersonate a loved one’s voice, maintain a fake romantic relationship for months, and craft personalized messages for thousands of targets simultaneously, all with minimal effort and at very low cost.

The Numbers Behind the Problem

According to the FBI’s 2024 Internet Crime Report, released in April 2025, Americans over the age of 60 lost nearly $4.8 billion to cybercrime in 2024 alone, a staggering 33% jump from the year before, with total reported losses across all age groups hitting a record $16.6 billion. The Federal Trade Commission reports that romance scams alone cost U.S. consumers over $1.14 billion in reported losses in 2023, and the FBI estimates that number is significantly undercounted because many victims are too embarrassed to report what happened. Deepfake fraud in the U.S. surged 1,100% in the first months of 2025 according to fraud detection firm Sumsub.

What is driving this explosion is not just better technology. It is cost. Creating a convincing voice clone now costs a few dollars. Building a fake profile with AI-generated photos takes minutes. Maintaining dozens of fake romantic relationships simultaneously is something an AI chatbot can do without stopping. Scammers have essentially eliminated the labor barrier that once limited how many people they could target at once.

Real People, Real Losses

Statistics tell part of the story. Real cases tell the rest.

Sharon Brightwell, Dover, Florida: $15,000 Lost in a Single Phone Call

Sharon received a phone call late at night that no parent ever wants to get. On the other end was what sounded exactly like her daughter’s voice, crying and saying she had been in a car accident, lost her unborn child, and was now facing criminal charges. A man claiming to be a lawyer then took over the call and told Sharon that $15,000 in cash was needed immediately to keep her daughter out of jail.

Sharon sent the money. Her real daughter had no idea any of this was happening. The voice Sharon heard was an AI-generated clone built from audio scraped from her daughter’s social media. By the time the fraud was uncovered, the cash was gone. Source: FOX 13 Tampa Bay

Abigail Ruvalcaba, Southern California: Her Home and $81,000

Abigail, 66, connected with someone on Facebook who appeared to be Steve Burton, a well-known actor from the TV show General Hospital. Over the following months, she received personalized video messages where the person on screen looked directly at her, used her name, and expressed deep affection. The videos were AI deepfakes built using the actor’s public footage.

When Abigail’s daughter Vivian finally saw one of the videos, she said it was not grainy or obviously fake. To the naked eye, it looked completely real. By the time the scam unraveled, Abigail had sent over $81,000 through gift cards, cash, Zelle, and Bitcoin. Then, even after her accounts were drained, the scammer pushed her to sell her home. She sold her paid-off condo for $350,000, well below market value. The LAPD documented the case but told the family there was little chance of recovering anything. Source: KTLA Los Angeles

A French Woman and a Fake Brad Pitt: $850,000 Gone

A woman in France spent 18 months in what she believed was a real romantic relationship with actor Brad Pitt. Scammers sent her AI-generated photos, including what appeared to be personal selfies taken just for her. She performed a reverse image search and could not find the photos anywhere online, which she took as proof they were real and private. They were AI-generated images that did not exist anywhere else because they had just been created.

She ultimately transferred approximately $850,000 before the fraud came to light. As one security researcher put it, this woman was sophisticated enough to try to verify the photos and still got fooled. That is the point. These tools are now good enough to fool careful people. Source: CBS News

Frank and Alice Boren, Alabama: The Cloned Great-Grandson

Alice Boren picked up the phone and heard what sounded like their great-grandson Cameron’s voice saying he was in pain, that his nose was broken, and that he was being taken to jail after a car accident. A follow-up call from someone claiming to be an attorney said the family needed to pay over $11,000 in bail or Cameron would stay locked up.

Frank told the caller he did not have that kind of money. The scammer immediately asked how much he did have. It was only later that the family confirmed Cameron was completely safe and had made no such call. The voice they heard was manufactured using AI voice cloning technology. Source: WBRC News

Steve Beauchamp, United States: $690,000 in Retirement Savings

Steve Beauchamp, an 82-year-old retiree, came across a video online that appeared to show Elon Musk personally endorsing an AI-powered investment platform. The video looked legitimate. The face was right. The voice matched. The presentation was professional.

Beauchamp transferred his entire retirement savings, $690,000, into the platform over several weeks. The investment platform was completely fake. The Musk video was a deepfake. There was no investment. The money disappeared. Source: Incode – Top 5 Cases of AI Deepfake Fraud From 2024

How Deepfake and Voice Cloning Scams Actually Work

Understanding the mechanics makes these attacks easier to recognize before they hit.

Voice Cloning — Modern AI can recreate a convincing version of someone’s voice from as little as three seconds of audio. That audio does not need to come from a private source. A voicemail greeting, a TikTok video, a YouTube clip, or an Instagram reel is more than enough. Once the clone exists, the scammer can make it say anything. Research published in 2024 found that these clones can match the original person’s accent, tone, and speech patterns with around 85% accuracy.

Deepfake Video — Creating a convincing fake video used to require expensive equipment and technical skill. That is no longer true. Scammers now use consumer-grade software to generate personalized video messages, fake video calls, and fake romantic video chats. Research suggests that 68% of deepfake videos currently cannot be reliably distinguished from real footage by the average viewer. The face moves naturally. The expressions track. The voice matches.

AI-Written Messages — Old scam messages were easy to spot because they were poorly written. AI writing tools have eliminated that tell. For a deeper look at what to watch for in your inbox, see our guide on how to spot scam emails. Scammers can now produce personalized, emotionally intelligent messages that match your interests, reference things you have shared publicly, and build rapport over time. An AI chatbot can maintain multiple fake romantic relationships simultaneously, never forgetting a detail, never sounding tired or off.

Fake Profiles with AI-Generated Photos — AI can generate realistic photos of people who do not exist. The images show up in reverse image searches as nothing because they were just created. Scammers layer these images with fabricated life stories and AI-powered conversation to build fake identities convincing enough to fool people over months of contact.

Who Gets Targeted Most Often

While anyone can fall victim to an AI scam, certain groups are targeted more frequently than others. Older adults and seniors are disproportionately targeted, particularly those who may be less familiar with voice cloning or AI-generated video. Widowed or divorced individuals on dating apps or social media are also common targets, since they may be more open to new connections. Anyone who posts regular public video or audio content is at higher risk simply because that material feeds the cloning tools directly. People actively looking for investment opportunities or who have recently come into retirement money are frequently targeted through deepfake celebrity endorsement scams. And it is worth noting that family members of primary victims can become secondary targets as well, since scammers sometimes push victims to sell property or drain joint accounts, pulling others into the fraud in the process.

Why These Scams Work So Well

The technology is only half of the equation. These scams work because they are engineered to shut down your ability to think clearly before you act.

Urgency is the primary weapon. When someone you love appears to be in danger, or when a romantic partner you have grown attached to is in a crisis, the brain shifts into a reactive, emotional mode. Critical thinking takes a back seat. Scammers design every element of the interaction to keep you in that state.

Trust in familiar voices and faces is deeply wired into how humans process the world. When something sounds like your grandson or looks like someone you have been talking to for months, skepticism does not come naturally. That familiarity is the exact vulnerability these tools exploit.

Secrecy is the third pillar. Scammers almost universally insist on keeping the situation between you and them. Do not tell your family. Do not call anyone else. Do not hang up. This isolation cuts off the people who might otherwise talk you out of acting. In the Ruvalcaba case, her daughter noticed her mother had become secretive and defensive before the full scope of the scam became clear. That isolation is not an accident. It is a deliberate strategy.

How to Protect Yourself and Your Family

Create a family code word. Agree on a word or phrase in advance that only your immediate family knows. If you ever receive an urgent call from someone claiming to be a relative in trouble, ask for the code word before doing anything. The FTC highlights this as one of the most effective tools available. One important caution: never volunteer the word yourself. Scammers, aware that code words exist, may claim to have forgotten it and wait for you to offer it instead.

Hang up and call back on a number you already have. If you receive any urgent call from a family member, a bank, or anyone requesting money or personal information, hang up and call them back on a number stored in your phone. Do not use a number the caller gives you. Do not rely on caller ID, which can be faked.

Slow down on purpose. Urgency is a manipulation tactic, not a real emergency signal. If someone is pressuring you to act immediately, to not hang up, to not tell anyone, treat that pressure as a warning sign. Legitimate emergencies can wait three minutes while you verify through a separate channel.

Reverse image search any photos from new online contacts. Drag the image into Google Images or visit images.google.com and upload it. AI-generated photos often cannot be found anywhere else on the internet, which may actually be a red flag rather than reassurance, as it was for the Brad Pitt scam victim. A dedicated tool like FaceCheck.ID searches specifically for face matches.

Be careful with what you post publicly. The more public audio and video your family posts, the more raw material scammers have available. A three-second clip of a voice is genuinely enough to build a clone. This is especially worth considering for parents and grandparents who post regularly featuring children or grandchildren.

Never send money based on emotion or pressure. Wire transfers, gift cards, cryptocurrency, and cash sent by courier are all essentially untraceable and unrecoverable. No legitimate emergency requires payment through these methods. No real attorney, bail bondsman, or law enforcement officer will demand cash via courier or gift card.

Talk to elderly family members directly. Many older adults do not know that voice cloning is possible or that AI can generate a realistic video of a person saying something they never said. A clear, calm conversation about how these tools work is one of the most practical things you can do to protect the people you love.

AI Scam Red Flags to Watch For

  • Any caller insisting on secrecy and urging you not to tell anyone or hang up
  • Requests for payment using gift cards, wire transfers, cryptocurrency, or cash sent by courier
  • A video or voice call where something feels slightly off but you cannot quite explain why
  • A new online romantic interest who refuses to do a live, unscripted video call or always has a technical excuse
  • Investment opportunities promoted through video endorsements from celebrities, especially involving guaranteed returns
  • A story that keeps evolving under questioning or shifts when you push back on specific details
  • Any request to move a conversation off a dating app to WhatsApp, Telegram, or another encrypted app early in the relationship

A Note on Recovery

One fact from this area of fraud is worth stating clearly because people often do not find it out until it is too late. Once money moves through wire transfer, cryptocurrency, gift cards, or cash, recovery is extremely rare. Law enforcement can document cases and open investigations, but as the Ruvalcaba family learned directly from the LAPD, there is often very little that can be done once the funds are gone.

This is why prevention matters so much more in this space than response. Unlike a fraudulent credit card charge where the bank can reverse the transaction, money you authorize and send yourself is in an entirely different legal category. Scammers know this and count on it.

What Legitimate Requests Will Never Look Like

No real police officer, attorney, bail bondsman, bank, or government agency will call you out of the blue and demand immediate payment in gift cards, wire transfer, or cash. They will never insist that you keep the situation secret from your family. They will never pressure you to act before you have had a chance to verify through your own channels. If someone is doing any of those things, it does not matter how much the voice sounds like your grandchild or how real the face on the video looks. Those behaviors are the scam, not a real emergency.

Where to Report AI Scams

Reporting a scam will not always get your money back, but it matters. Reports help law enforcement identify patterns, build cases, and warn others before they become victims. If you or someone you know has been targeted by an AI scam, here is where to go.

Federal Trade Commission (FTC). The FTC is the primary federal agency handling consumer fraud. File a report at ReportFraud.ftc.gov. Reports are shared with law enforcement agencies across the country and help the FTC track emerging scam trends. The process takes about ten minutes and does not require a police report first.

FBI Internet Crime Complaint Center (IC3). For internet-based fraud including deepfake scams, romance scams, and AI-generated investment fraud, file a complaint with the FBI at IC3.gov. The IC3 accepts complaints from both victims and third parties and forwards actionable reports to federal, state, and local law enforcement. Include as much detail as possible, including dates, dollar amounts, and any usernames, phone numbers, or email addresses involved.

Local Law Enforcement. File a report with your local police department, especially if you lost money or were threatened. Ask for a copy of the report number for your records. While local departments may have limited resources for online fraud, a filed report creates an official record that can support any financial institution disputes or insurance claims.

Your Bank or Financial Institution. If money moved through a bank transfer, contact your bank immediately. While wire transfers are rarely reversed, acting quickly gives you the best chance. If gift cards were used, contact the card issuer directly as some have fraud recovery programs. Report credit card fraud to your card issuer right away, as those disputes have stronger consumer protections.

AI Scam Prevention Checklist

    • Set up a family code word that only immediate family members know, and never volunteer it if asked

    • If you receive an urgent call from a family member in trouble, hang up and call them back on a number already saved in your phone

    • Never send money via wire transfer, gift card, cryptocurrency, or cash courier based on a phone call or message alone

    • Reverse image search photos from anyone you meet online before trusting them

    • Tighten privacy settings on social media and limit who can see videos and audio clips of you and your family

    • Be suspicious of any caller or contact who insists on secrecy or tells you not to hang up or talk to family

    • Treat extreme urgency as a warning sign, not a reason to act faster

    • Never click links in unexpected emails or texts; type the website address directly into your browser instead

    • Talk to elderly parents and grandparents about voice cloning and deepfakes before they encounter them

    • If you are targeted, report it to the FTC at ReportFraud.ftc.gov and the FBI at IC3.gov

Final Thoughts

AI scams are more personal than anything that came before them. They can use the voices of people you love, the faces of people you trust, and the emotional weight of situations you cannot afford to get wrong.

The most powerful defense is not a piece of software. It is slowing down before you act, verifying through a channel you already control, and building a few simple habits before you ever need them. A family code word agreed on today costs nothing. The absence of one could cost everything.

The technology behind these scams will keep improving. But the core tactics of urgency, secrecy, and emotional manipulation are not new. Scammers have always used those levers. Knowing that does not make you immune, but it does make you a harder target.

Share this with your family, especially anyone who might not know that a voice on the phone is no longer reliable proof of who you are actually talking to.

Explore more Online Security guides for related tips, tools, and reviews.

FAQ

Can scammers really clone someone’s voice from a short video or voicemail?

Yes, and it takes far less than most people expect. Modern AI voice cloning tools can build a convincing replica of someone’s voice from as little as three seconds of audio. That audio does not need to come from a private source. A voicemail greeting, a TikTok video, a YouTube clip, or even a short Instagram reel is more than enough raw material. Once the clone exists, the scammer can make it say anything they want.

How hard is it to spot a deepfake video?

It is getting harder, and that is the honest answer. Research suggests that 68% of deepfake videos currently cannot be reliably distinguished from real footage by the average viewer. That said, there are things to watch for. Look for slightly unnatural blinking, edges around the hairline or face that look slightly soft or blurry, and audio that feels just slightly out of sync with the mouth movements. If something feels off but you cannot explain why, trust that instinct. The safest move is to hang up and initiate a call back through a number you already have saved.

What is a family code word and how does it work?

A family code word is a word or short phrase that only your immediate family members know in advance. If you ever receive an urgent call from someone claiming to be a relative in trouble, you ask for the code word before doing anything else. If they cannot provide it, you hang up and call the real person back on a number already in your phone. One important rule: never volunteer the word yourself. Scammers who know about code words may claim to have forgotten it and wait for you to offer it instead.

Can you get your money back after an AI scam?

In most cases, unfortunately no. This is one of the hardest facts about AI scams and fraud in general. Once money moves through a wire transfer, cryptocurrency, gift cards, or cash sent by courier, recovery is extremely rare. Unlike a fraudulent credit card charge where a bank can reverse the transaction, money you authorize and send yourself falls into a different legal category. This is exactly why scammers insist on those payment methods. Acting quickly and contacting your bank immediately gives you the best possible chance, but prevention is far more reliable than recovery.

Are older adults really targeted more often?

The data backs it up, though it is worth understanding why. According to the FBI’s 2024 Internet Crime Report, Americans over 60 lost nearly $4.8 billion to cybercrime in 2024 alone. Older adults are often targeted because they may be less familiar with voice cloning and deepfake technology, may be more open to new connections after losing a spouse, and are more likely to have accessible retirement savings. That said, AI scams are fooling tech-savvy people too. The Brad Pitt scam victim actually tried to verify the photos through a reverse image search and still got fooled. Awareness and a few simple habits matter far more than age.

AI scams are spreading faster than awareness of them. If this article helped you understand what to watch for, consider sharing it on your favorite platform, or send it directly to someone you care about. It only takes a second and it might make a real difference.


Michael Kendrick

Director of IT and former Certified Registered Locksmith with 27 years in technology and cybersecurity. Practical, everyday guidance to help you protect everything from the locks on your doors to the logins on your accounts.
