How to Spot AI Scams: Deepfakes, Voice Cloning, and the Red Flags Most People Miss

Last updated: April 2026

Artificial intelligence is transforming nearly every part of our lives, reshaping how we communicate, learn, work, create, and solve complex problems. But like any powerful technology, its benefits are matched by the determination of those who use it for harm.

The era of obvious phishing emails filled with broken English and suspicious links is largely over. AI scams have evolved far beyond that, enabling cybercriminals to craft sophisticated, highly targeted attacks that can fool even cautious and tech-savvy individuals. With just a few seconds of audio pulled from social media, scammers can clone a child’s voice. They can stage convincing video calls where a familiar face appears to speak in real time. They can even sustain months-long fake romantic relationships using AI-generated photos, videos, and messages, operating nonstop without human effort.

These are not distant possibilities. They are happening right now to people across the country, and the financial losses are mind-boggling.

What Is an AI Scam?

An AI scam is any fraud that uses artificial intelligence to make deception more convincing, more personalized, and harder to detect. The three most common types are:

  • Voice cloning: AI replicates a person’s voice from as little as three seconds of audio, then makes it say anything a scammer wants.
  • Deepfake video: AI generates realistic fake video of a real person, used in video calls, fake ads, or romantic video messages.
  • AI-written phishing: AI generates personalized, grammatically perfect messages at scale, eliminating the spelling errors that once made scams easy to spot.

What sets AI scams apart from traditional fraud is scale and realism. A single scammer can now maintain hundreds of fake relationships simultaneously, impersonate any voice instantly, and generate believable content for almost nothing.

The Numbers Behind the Problem

According to the FBI’s 2024 Internet Crime Report, released in April 2025, Americans over the age of 60 lost nearly $4.8 billion to cybercrime in 2024 alone, a staggering 33% jump from the year before, with total reported losses across all age groups hitting a record $16.6 billion. The Federal Trade Commission reports that romance scams alone cost U.S. consumers over $1.14 billion in reported losses in 2023, and the FBI estimates that number is significantly undercounted because many victims are too embarrassed to report what happened. Deepfake fraud in the U.S. surged 1,100% in the first months of 2025 according to fraud detection firm Sumsub.

What is driving this explosion is not just better technology. It is cost. Creating a convincing voice clone now costs a few dollars. Building a fake profile with AI-generated photos takes minutes. Maintaining dozens of fake romantic relationships simultaneously is something an AI chatbot can do without stopping. Scammers have essentially eliminated the labor barrier that once limited how many people they could target at once.

Real People, Real Losses

Statistics tell part of the story. Real cases tell the rest.

Sharon Brightwell, Dover, Florida: $15,000 Lost in a Single Phone Call

Sharon received a phone call late at night that no parent ever wants to get. On the other end was what sounded exactly like her daughter’s voice, crying and saying she had been in a car accident, lost her unborn child, and was now facing criminal charges. A man claiming to be a lawyer then took over the call and told Sharon that $15,000 in cash was needed immediately to keep her daughter out of jail.

Sharon sent the money. Her real daughter had no idea any of this was happening. The voice Sharon heard was an AI-generated clone built from audio scraped from her daughter’s social media. By the time the fraud was uncovered, the cash was gone. Source: FOX 13 Tampa Bay

Abigail Ruvalcaba, Southern California: Lost Her Home and $81,000

Abigail, 66, connected with someone on Facebook who appeared to be Steve Burton, a well-known actor from the TV show General Hospital. Over the following months, she received personalized video messages where the person on screen looked directly at her, used her name, and expressed deep affection. The videos were AI deepfakes built using the actor’s public footage.

When Abigail’s daughter Vivian finally saw one of the videos, she said it was not grainy or obviously fake. To the naked eye, it looked completely real. By the time the scam unraveled, Abigail had sent over $81,000 through gift cards, cash, Zelle, and Bitcoin. Then, even after her accounts were drained, the scammer pushed her to sell her home. She sold her paid-off condo for $350,000, well below market value. The LAPD documented the case but told the family there was little chance of recovering anything. Source: KTLA Los Angeles

A French Woman and a Fake Brad Pitt: $850,000 Gone

A woman in France spent 18 months in what she believed was a real romantic relationship with actor Brad Pitt. Scammers sent her AI-generated photos, including what appeared to be personal selfies taken just for her. She performed a reverse image search and could not find the photos anywhere online, which she took as proof they were real and private. They were AI-generated images that did not exist anywhere else because they had just been created.

She ultimately transferred approximately $850,000 before the fraud came to light. As one security researcher put it, this woman was sophisticated enough to try to verify the photos and still got fooled. That is the point. These tools are now good enough to fool careful people. Source: CBS News

Steve Beauchamp, United States: $690,000 in Retirement Savings

Steve Beauchamp, an 82-year-old retiree, came across a video online that appeared to show Elon Musk personally endorsing an AI-powered investment platform. The video looked legitimate. The face was right. The voice matched. The presentation was professional.

Beauchamp transferred his entire retirement savings, $690,000, into the platform over several weeks. The investment platform was completely fake. The Musk video was a deepfake. There was no investment. The money disappeared. Source: Incode – Top 5 Cases of AI Deepfake Fraud From 2024

How Deepfake and Voice Cloning Scams Actually Work

Understanding the mechanics makes these attacks easier to recognize before they hit.

Voice Cloning — Modern AI can recreate a convincing version of someone’s voice from as little as three seconds of audio. That audio does not need to come from a private source. A voicemail greeting, a TikTok video, a YouTube clip, or an Instagram reel is more than enough. Once the clone exists, the scammer can make it say anything. Research published in 2024 found that these clones can match the original person’s accent, tone, and speech patterns with around 85% accuracy.

Deepfake Video — Creating a convincing fake video used to require expensive equipment and technical skill. That is no longer true. Scammers now use consumer-grade software to generate personalized video messages, fake video calls, and fake romantic video chats. Research suggests that 68% of deepfake videos currently cannot be reliably distinguished from real footage by the average viewer. The face moves naturally. The expressions track. The voice matches.

AI-Written Messages — Old scam messages were easy to spot because they were poorly written. AI writing tools have eliminated that tell. For a deeper look at what to watch for in your inbox, see our guide on how to spot scam emails. Scammers can now produce personalized, emotionally intelligent messages that match your interests, reference things you have shared publicly, and build rapport over time. An AI chatbot can maintain multiple fake romantic relationships simultaneously, never forgetting a detail, never sounding tired or off.

Fake Profiles with AI-Generated Photos — AI can generate realistic photos of people who do not exist. The images show up in reverse image searches as nothing because they were just created. Scammers layer these images with fabricated life stories and AI-powered conversation to build fake identities convincing enough to fool people over months of contact.

AI Job Scams: The New Wave of Employment Fraud

Most people think of AI scams as phone calls from fake grandchildren or fake celebrity investment pitches. But one of the fastest-growing categories is the AI job scam, and it is catching people at a moment when they are already stressed and hopeful.

Here is how it works. A scammer scrapes your LinkedIn profile or resume from a job board. An AI generates a perfectly written job offer that references your actual skills, your real work history, and the exact type of role you are looking for. The email comes from a domain that looks legitimate at a glance. The interview happens over chat, sometimes with a video call using a deepfake of a real company’s HR representative. Once you are ‘hired,’ you are asked to complete onboarding steps that include providing your Social Security number, banking information for direct deposit, or purchasing equipment with a reimbursement promised later.

The equipment reimbursement never comes. The company either does not exist or the real company has no idea their brand is being used. Your financial and identity information is now in the hands of criminals.

Common AI job scam red flags:

  • The offer comes without you applying
  • The salary is unusually high for the role
  • The interview is entirely text-based, uses low-quality video, or feels rushed
  • Communication avoids official company channels
  • You are asked for financial or sensitive information before day one
  • You are told to purchase equipment or send money

Important:
Any request to buy gift cards, wire money, or pay out of pocket is almost certainly a scam. Legitimate employers do not operate this way.
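One red flag from the list above, communication that avoids official company channels, can be checked mechanically. The sketch below, in Python, compares a recruiter's email domain against a company's real domain. Here "example-corp.com" is a hypothetical stand-in: look up the real domain on the company's official website yourself, never from the suspicious email. This is a rough filter under those assumptions, not a guarantee, since scammers can also compromise real accounts.

```python
# Hypothetical official domain -- replace with the domain from the
# company's own website, located independently of the email.
OFFICIAL_DOMAIN = "example-corp.com"

def sender_domain(email_address: str) -> str:
    """Return the domain portion of an email address, lowercased."""
    return email_address.rsplit("@", 1)[-1].lower()

def is_official(email_address: str, official: str = OFFICIAL_DOMAIN) -> bool:
    """True only for the exact official domain or one of its subdomains."""
    domain = sender_domain(email_address)
    return domain == official or domain.endswith("." + official)

print(is_official("recruiter@example-corp.com"))      # True
print(is_official("hr@careers.example-corp.com"))     # True (real subdomain)
print(is_official("hr@example-corp-careers.com"))     # False (lookalike)
print(is_official("hr@example-corp.com.hiring.net"))  # False (lookalike)
```

Note that the last two addresses both contain the real company name yet fail the exact-match test, which is precisely the trick "looks legitimate at a glance" relies on.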

Who Gets Targeted Most Often

While anyone can fall victim to an AI scam, certain groups are targeted more frequently than others:

  • Older adults and seniors, particularly those less familiar with voice cloning or AI-generated video
  • Widowed or divorced individuals on dating apps or social media, who may be more open to new connections
  • Anyone who posts regular public video or audio content, because that material feeds the cloning tools directly
  • People actively seeking investment opportunities, or who have recently come into retirement money, often reached through deepfake celebrity endorsement scams
  • Family members of primary victims, who can become secondary targets when scammers push victims to sell property or drain joint accounts

Why These Scams Work So Well

The technology is only half of the equation. These scams work because they are engineered to shut down your ability to think clearly before you act.

Urgency is the primary weapon. When someone you love appears to be in danger, or when a romantic partner you have grown attached to is in a crisis, the brain shifts into a reactive, emotional mode. Critical thinking takes a back seat. Scammers design every element of the interaction to keep you in that state.

Trust in familiar voices and faces is deeply wired into how humans process the world. When something sounds like your grandson or looks like someone you have been talking to for months, skepticism does not come naturally. That familiarity is the exact vulnerability these tools exploit.

Secrecy is the third pillar. Scammers almost universally insist on keeping the situation between you and them. Do not tell your family. Do not call anyone else. Do not hang up. This isolation cuts off the people who might otherwise talk you out of acting. In the Ruvalcaba case, her daughter noticed her mother had become secretive and defensive before the full scope of the scam became clear. That isolation is not an accident. It is a deliberate strategy.

How to Protect Yourself and Your Family

Create a family code word. Agree on a word or phrase in advance that only your immediate family knows. If you ever receive an urgent call from someone claiming to be a relative in trouble, ask for the code word before doing anything. The FTC highlights this as one of the most effective tools available. One important caution: never volunteer the word yourself. Scammers, aware that code words exist, may claim to have forgotten it and wait for you to offer it instead.

Hang up and call back on a number you already have. If you receive any urgent call from a family member, a bank, or anyone requesting money or personal information, hang up and call them back on a number stored in your phone. Do not use a number the caller gives you. Do not rely on caller ID, which can be faked.

Slow down on purpose. Urgency is a manipulation tactic, not a real emergency signal. If someone is pressuring you to act immediately, to not hang up, to not tell anyone, treat that pressure as a warning sign. Legitimate emergencies can wait three minutes while you verify through a separate channel.

Reverse image search any photos from new online contacts. Drag the image into Google Images or visit images.google.com and upload it. AI-generated photos often cannot be found anywhere else on the internet, which may actually be a red flag rather than reassurance, as it was for the Brad Pitt scam victim. A dedicated tool like FaceCheck.ID searches specifically for face matches.

Be careful with what you post publicly. The more public audio and video your family posts, the more raw material scammers have available. A three-second clip of a voice is genuinely enough to build a clone. This is especially worth considering for parents and grandparents who post regularly featuring children or grandchildren.

Never send money based on emotion or pressure. Wire transfers, gift cards, cryptocurrency, and cash sent by courier are all essentially untraceable and unrecoverable. No legitimate emergency requires payment through these methods. No real attorney, bail bondsman, or law enforcement officer will demand cash via courier or gift card.

Talk to elderly family members directly. Many older adults do not know that voice cloning is possible or that AI can generate a realistic video of a person saying something they never said. A clear, calm conversation about how these tools work is one of the most practical things you can do to protect the people you love.

Free Tools That Can Help You Spot AI Fakes

You do not need expensive software to add some protection against AI-generated content. These free or low-cost tools are worth knowing about.

FaceCheck.ID searches for face matches across the internet and is specifically designed for identifying whether a person’s face appears in known scam profiles or elsewhere online. It is more effective than standard reverse image search for romantic scam verification.

Google Reverse Image Search remains useful for finding whether a photo appears elsewhere. Simply drag an image into the search bar and run a search. One limitation is that AI-generated photos often do not exist anywhere else online, which can be a false comfort.

Hive Moderation’s AI Image Detector lets you upload an image and receive a probability score for whether it was AI-generated. It is not perfect and should be used as just one data point among several.

Sensity AI offers a free deepfake detection tool that can analyze video clips for signs of synthetic generation.

None of these tools are foolproof. The technology that creates deepfakes and the technology that detects them are in an arms race, and creation is currently winning. Use these tools to add a layer of scrutiny, not as a final verdict.

AI Scam Red Flags to Watch For

  • Any caller insisting on secrecy and urging you not to tell anyone or hang up
  • Requests for payment using gift cards, wire transfers, cryptocurrency, or cash sent by courier
  • A video or voice call where something feels slightly off but you cannot quite explain why
  • A new online romantic interest who refuses to do a live, unscripted video call or always has a technical excuse
  • Investment opportunities promoted through video endorsements from celebrities, especially involving guaranteed returns
  • A story that keeps evolving under questioning or shifts when you push back on specific details
  • Any request to move a conversation off a dating app to WhatsApp, Telegram, or another encrypted app early in the relationship

Immediate Steps to Take After Realizing It’s a Scam

1. Cut off contact immediately

  • Hang up
  • Do not reply or engage in any way, no matter how urgent they make it sound
  • Block the number or account

2. Secure your accounts right away (do this first if you shared login info)

  • Change the password on any account whose login details you shared, and on any other account that reuses that password
  • Turn on two-factor authentication wherever it is available

3. Contact your bank or financial institutions

  • Report the fraud immediately
  • Use only official numbers from your card or your bank's website
  • Ask about reversing or stopping transactions
  • Wire transfers are hard to recover, but acting quickly gives you a small chance
  • If gift cards were used, contact the issuer; some can freeze unused balances

4. Preserve evidence before it disappears

  • Screenshot messages, emails, and transaction records
  • Save phone numbers, usernames, and URLs
  • Write down exactly what happened while it is still fresh

5. Report the incident

  • File reports with the FTC at ReportFraud.ftc.gov and the FBI at IC3.gov
  • File a local police report, especially if you lost money (see Where to Report AI Scams below)

Prepare Now, Panic Less Later

If identity theft happens, the last thing you want is to be scrambling to figure out who to call. Download the free Identity Theft & Fraud Response Sheet to keep every phone number, website, and recovery step in one printable reference — and if you want to understand how identity theft works before it happens, start with our Identity Theft 101 guide.

A Note on Recovery

One fact from this area of fraud is worth stating clearly because people often do not find it out until it is too late. Once money moves through wire transfer, cryptocurrency, gift cards, or cash, recovery is extremely rare. Law enforcement can document cases and open investigations, but as the Ruvalcaba family learned directly from the LAPD, there is often very little that can be done once the funds are gone.

This is why prevention matters so much more in this space than response. Unlike a fraudulent credit card charge where the bank can reverse the transaction, money you authorize and send yourself is in an entirely different legal category. Scammers know this and count on it.

What Legitimate Requests Will Never Look Like

No real police officer, attorney, bail bondsman, bank, or government agency will call you out of the blue and demand immediate payment in gift cards, wire transfer, or cash. They will never insist that you keep the situation secret from your family. They will never pressure you to act before you have had a chance to verify through your own channels. If someone is doing any of those things, it does not matter how much the voice sounds like your grandchild or how real the face on the video looks. Those behaviors are the scam, not a real emergency.

Where to Report AI Scams

Reporting a scam will not always get your money back, but it matters. Reports help law enforcement identify patterns, build cases, and warn others before they become victims. If you or someone you know has been targeted by an AI scam, here is where to go.

Federal Trade Commission (FTC). The FTC is the primary federal agency handling consumer fraud. File a report at ReportFraud.ftc.gov. Reports are shared with law enforcement agencies across the country and help the FTC track emerging scam trends. The process takes about ten minutes and does not require a police report first.

FBI Internet Crime Complaint Center (IC3). For internet-based fraud including deepfake scams, romance scams, and AI-generated investment fraud, file a complaint with the FBI at IC3.gov. The IC3 accepts complaints from both victims and third parties and forwards actionable reports to federal, state, and local law enforcement. Include as much detail as possible, including dates, dollar amounts, and any usernames, phone numbers, or email addresses involved.

Local Law Enforcement. File a report with your local police department, especially if you lost money or were threatened. Ask for a copy of the report number for your records. While local departments may have limited resources for online fraud, a filed report creates an official record that can support any financial institution disputes or insurance claims.

Your Bank or Financial Institution. If money moved through a bank transfer, contact your bank immediately. While wire transfers are rarely reversed, acting quickly gives you the best chance. If gift cards were used, contact the card issuer directly as some have fraud recovery programs. Report credit card fraud to your card issuer right away, as those disputes have stronger consumer protections.

AI Scam Prevention Checklist

  • Set up a family code word that only immediate family members know, and never volunteer it if asked
  • If you receive an urgent call from a family member in trouble, hang up and call them back on a number already saved in your phone
  • Never send money via wire transfer, gift card, cryptocurrency, or cash courier based on a phone call or message alone
  • Reverse image search photos from anyone you meet online before trusting them
  • Tighten privacy settings on social media and limit who can see videos and audio clips of you and your family
  • Be suspicious of any caller or contact who insists on secrecy or tells you not to hang up or talk to family
  • Treat extreme urgency as a warning sign, not a reason to act faster
  • Never click links in unexpected emails or texts, type the website address directly into your browser instead
  • Talk to elderly parents and grandparents about voice cloning and deepfakes before they encounter them
  • If you are targeted, report it to the FTC at ReportFraud.ftc.gov and the FBI at IC3.gov
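One reason typing an address yourself beats clicking a link is that scam domains can contain characters that look identical to Latin letters but are not. As a rough illustration only, this Python sketch uses the standard library's unicodedata module to name every non-Latin character hiding in a domain; the fake domain shown is a classic hypothetical example where the first letter is Cyrillic rather than Latin.

```python
import unicodedata

def suspicious_characters(domain: str) -> list[str]:
    """Name every non-ASCII character in a domain; any hit is a red flag."""
    return [f"{ch!r}: {unicodedata.name(ch)}" for ch in domain if not ch.isascii()]

real = "apple.com"       # every letter is ordinary ASCII Latin
fake = "\u0430pple.com"  # first letter is CYRILLIC SMALL LETTER A -- looks identical

print(suspicious_characters(real))  # [] -- nothing flagged
print(suspicious_characters(fake))  # ["'а': CYRILLIC SMALL LETTER A"]
```

The two domains render almost identically on screen, yet they lead to entirely different servers. Browsers defend against some of these lookalikes, but not all, which is why a bookmarked or hand-typed address is the safer habit.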

Final Thoughts

AI scams are more personal than anything that came before them. They can use the voices of people you love, the faces of people you trust, and the emotional weight of situations you cannot afford to get wrong.

The most powerful defense is not a piece of software. It is slowing down before you act, verifying through a channel you already control, and building a few simple habits before you ever need them.

Start with one thing today: set up a family code word. Text it to your family members right now. It costs nothing and takes about 60 seconds. That one step breaks the grandparent scam, the emergency voice clone call, and most other impersonation attacks that rely on urgency and trust.

If you want to go further, check out our guides on Password Security, Two-Factor Authentication, and How to Spot Scam Emails, all part of the same practical approach to staying safe without becoming paranoid.

And if this article helped you understand the threat, please share it. Send it to a parent, a grandparent, or anyone you know who might not yet know that a voice on the phone is no longer reliable proof of who you are actually talking to.

Explore more Online Security guides for related tips, tools, and reviews.

AI Scams FAQ

Can scammers really clone someone's voice from a short video or voicemail?

Yes, and it takes far less than most people expect. Modern AI voice cloning tools can build a convincing replica of someone’s voice from as little as three seconds of audio. That audio does not need to come from a private source. A voicemail greeting, a TikTok video, a YouTube clip, or even a short Instagram reel is more than enough raw material. Once the clone exists, the scammer can make it say anything they want.

How can you tell whether a video or video call is a deepfake?

It is getting harder, and that is the honest answer. Research suggests that 68% of deepfake videos currently cannot be reliably distinguished from real footage by the average viewer. That said, there are things to watch for. Look for slight unnatural blinking, edges around the hairline or face that look slightly soft or blurry, and audio that feels just slightly out of sync with the mouth movements. If something feels off but you cannot explain why, trust that instinct. The safest move is to hang up and initiate a call back through a number you already have saved.

What is a family code word and how does it work?

A family code word is a word or short phrase that only your immediate family members know in advance. If you ever receive an urgent call from someone claiming to be a relative in trouble, you ask for the code word before doing anything else. If they cannot provide it, you hang up and call the real person back on a number already in your phone. One important rule: never volunteer the word yourself. Scammers who know about code words may claim to have forgotten it and wait for you to offer it instead.

Can you get your money back after an AI scam?

In most cases, unfortunately no. This is one of the hardest facts about AI scams and fraud in general. Once money moves through a wire transfer, cryptocurrency, gift cards, or cash sent by courier, recovery is extremely rare. Unlike a fraudulent credit card charge where a bank can reverse the transaction, money you authorize and send yourself falls into a different legal category. This is exactly why scammers insist on those payment methods. Acting quickly and contacting your bank immediately gives you the best possible chance, but prevention is far more reliable than recovery.

Are older adults really the main targets of AI scams?

The data backs it up, though it is worth understanding why. According to the FBI’s 2024 Internet Crime Report, Americans over 60 lost nearly $4.8 billion to cybercrime in 2024 alone. Older adults are often targeted because they may be less familiar with voice cloning and deepfake technology, may be more open to new connections after losing a spouse, and are more likely to have accessible retirement savings. That said, AI scams are fooling tech-savvy people too. The Brad Pitt scam victim actually tried to verify the photos through a reverse image search and still got fooled. Awareness and a few simple habits matter far more than age.

What should I do if I get a call that sounds like a family member asking for money?

Trust your instinct. If something feels wrong about the voice, like being slightly flat, unnaturally smooth, or the emotion does not quite land the way it normally would, trust that feeling. Do not share financial information or confirm any personal details. Tell the caller you need to call them back. Hang up. Call the real person back on a number already saved in your phone, not a number the caller gives you. Even if you feel embarrassed and the voice sounded exactly like your family member, making that verification call costs nothing. Getting it wrong could cost everything.

How can I protect my elderly parents from these scams?

The most effective steps are proactive rather than reactive. Set up a family code word before anything happens. Sit down with your parents and agree on a word or phrase that only immediate family knows, and explain that they should ask for it any time someone calls claiming to be family in trouble. Explain in simple terms that AI can now copy someone’s voice from a short video. Walk them through the concept with a real example, like the Sharon Brightwell case above, so it is not abstract. Make sure they know that no real attorney, bail bondsman, or law enforcement officer will ever demand cash, gift cards, or cryptocurrency over the phone. Finally, set up a group chat or a simple rule: before sending any money for any emergency, call one other family member first. That one extra step breaks the scam almost every time.

AI scams are spreading faster than awareness of them. If this article helped you understand what to watch for, consider sharing it on your favorite platform, or send it directly to someone you care about. It only takes a second and it might make a real difference.

Michael Kendrick

Director of IT and former Certified Registered Locksmith with 27 years in technology and cybersecurity. Practical, everyday guidance to help you protect everything from the locks on your doors to the logins on your accounts.
