5 AI-powered scams you should know about and how to avoid them
Tero Vesalainen // Shutterstock
Online scams are certainly not a new phenomenon, but unfortunately, they are on the rise. As of July 2025, the Pew Research Center reports that 73% of U.S. adults have experienced an online scam or attack, a figure based on a survey of 9,397 panelists conducted between April 14 and April 20, 2025. AI-powered scams are a newer development, and 76% of the population is now concerned about them, according to a February 2025 Statista study.
The most effective way to protect yourself from these scams is to become aware of the most popular ones. In this article, Lifeguard breaks down the five most prevalent AI scams going around today and how you can protect yourself from these threats.
1. Voice-cloned grandparent emergency scams
Scammers are increasingly using AI to clone individuals' voices, then using those clones to impersonate family members and request money. This scam has been around for a while, but the technology has made it far more convincing.
Mini Case Study: Scammers may take audio recordings of a grandchild from social media and use them to create a voice clone. They then use that cloned voice to call a grandparent, claim there has been an emergency, and ask for money.
Prevention Tip: If you ever receive a call from a family member claiming they're in an emergency and need financial assistance, verify it first. Hang up and call the family member directly, or reach out to other relatives.
2. Spear phishing of executives and officials
Spear phishing is when scammers send targeted emails to an individual using tailored information. With the prevalence of social media, this is easier than ever, as scammers can find personal information online with just a few clicks.
AI makes this even simpler, as it allows an attacker to automate the research process and return data that they can use in their scam.
Mini Case Study: Cisco outlines a convincing example of how this scam can operate. A scammer researches a company and identifies its CEO and a low-level employee. They duplicate the CEO's email signature and spoof their address. They then send the employee a request to go to the store, buy $500 in gift cards, and send back the codes.
Prevention Tip: Be sure to check the full email address of any person sending you a link or a strange request.
3. AI-enhanced social engineering and phishing
A scammer who is experienced with AI can have their bots scrape personal data from social media sites like Facebook and find connections they might have. Using this information, they can craft specific, detailed emails that are very convincing.
Mini Case Study: For example, an automated AI bot finds a list of mid-level employees working at a finance company on LinkedIn. It identifies colleagues of these individuals. Using AI, a personalized email is then created, appearing as if it’s from one of the trusted colleagues. This email, containing a link, is sent to the mid-level employee. When opened, it unleashes malware on an internal computer system.
Prevention Tip: NPR recommends setting social media profiles to private, since scraping bots cannot access private data.
4. Fake customer support numbers on Google’s AI overview
Google’s AI overview at the top of the search results page gives quick summaries of the most useful results. However, scammers have learned to manipulate it through generative engine optimization (GEO): they build web pages that AI overviews are likely to crawl and draw data from, then populate those pages with fake phone numbers for the customer service departments of legitimate companies. When users search for a support number on Google, the AI overview may surface the fraudulent one.
Mini Case Study: Say you are having problems with your phone bill. You go to Google and type in the company name and “customer support.” A convincing 1-800 number shows up in the AI overview, and you give it a call.
On the other end, a “representative” of the company answers the phone and discusses your issue. They state that you have an overdue balance and request that you pay it right there. You give them your credit card information. Unfortunately, this person was a scammer.
Prevention Tip: Never call a phone number that’s published in the AI overview. Always go to the company’s website and verify the actual number.
5. Deepfake videos, images, and voice fraud
AI has become so advanced that it can create entirely fake video recordings from just a few basic prompts and reference materials. Deepfakes have reportedly caused $200 million in financial losses in 2025 alone.
Mini Case Study: There are countless examples of people being persuaded to buy a certain cryptocurrency because they saw a video of a prominent politician or business leader endorsing it. New York Attorney General Letitia James issued an investor alert in 2024 warning people about this very attack.
Prevention Tip: The MIT Media Lab gives specific advice on identifying deepfake videos. Some questions to ask yourself include:
- Are the facial movements natural?
- Does the person’s blinking seem too fast or too slow?
- Is there an unnatural glare or lack of glare on glasses?
- Do the person’s facial hair and other features look real?
Don’t let AI scams get the best of you
With AI-powered fraud on the rise, it pays to stay cautious. Scams grow more sophisticated every day, as voice cloning, automated spear phishing, deepfakes, and fake customer support numbers show.
Take protective action and follow the prevention tips in this article to keep yourself and your finances safe. With a little scrutiny, you should have no trouble spotting fraud before it makes you a target.
This story was produced by Lifeguard and reviewed and distributed by Stacker.