How AI Is Making Romance Scams Even More Dangerous

Most of us think that we will never fall for a scam. We think we know the “tells,” like a poorly worded message that conveys false urgency. Unfortunately, social engineering – tactics that prey on human emotions and instincts to get us to act against our own interests – can work on anyone.
Romance scams are a classic example of emotional manipulation in which the perpetrator exploits a victim’s feelings of loneliness, love, or desire for connection to build long-term trust. Beyond the emotional toll, the financial consequences are significant: The FBI’s Internet Crime Complaint Center (IC3) reported $672 million in losses from romance scams in 2024, and that figure is almost certainly only a fraction of the true total.
Fraudsters are increasingly using AI tools in romance scams, making these campaigns even harder to detect and therefore even more dangerous for targets. Experian predicts that AI-powered romance scams will be among the top fraud threats in 2026.
How a romance scam works
As McAfee describes in a recent report on the state of romance scams, this type of fraud is nothing new. A romance scam usually starts with a “hook,” such as a DM, a follow request, a “wrong number” text message, or a match on a dating app. Once a scammer gets a response, they engage in love bombing to quickly build intimacy and trust while encouraging you to keep the relationship a secret. Over time, they build out a credible persona, which usually includes a job or lifestyle that conveniently prevents them from meeting you in person.
Next comes a minor request for financial support, which can escalate to opening an account, “investing” in a business, or co-signing a loan. Increasingly, these schemes involve fraudulent cryptocurrency investments, a variant known as “pig butchering.” Once they get what they want, the scammers disappear, leaving their victims to deal with the consequences.
Romance scams work because they don’t start with obvious exploitation. Scammers build trust over weeks and months, so the interaction feels like a genuine relationship rather than a scam until victims are already deeply involved.
AI makes romance scams worse
AI makes romance scams even easier for fraudsters to pull off. In a review of recent research, Bitdefender notes that fraudsters have traditionally had to devote significant time and attention to each individual target in order to build trust. While this long game is often worth playing (the payoffs can be large), it limits the number of potential victims a scammer can reach.
AI removes these barriers. Large language models (LLMs) can maintain natural conversations without the classic red flags of a scam, such as poor grammar and spelling mistakes. AI can mirror personality, reflect emotions, and match tone, and it is less likely than a human to feel pressured or rushed. Chatbots can retain and weave in personal details from past conversations, all with very little effort across a very large number of victims.
Automated chatbots are particularly adept at handling the early stages of a romance scam, and humans are only required to intervene at critical moments to provide reassurance or initiate a financial demand. Since scammers can maintain multiple conversations at once, they can also test different tactics and quickly refine them based on what works best to keep victims engaged. As the Global Cyber Alliance puts it, AI adds “speed, scale and consistency” to the traditional romance scam.
Research suggests that victims may actually find AI more trustworthy than a human. McAfee found that a third of American adults believe it is possible to develop romantic feelings toward an AI robot. Deepfake audio and video make these AI-powered scams even more credible, because victims can no longer rely on a scammer’s refusal to speak to them as a red flag.
How to detect a romance scam
Even a well-trained chatbot has its limits. According to McAfee, the most common clues that you are interacting with a bot or fake profile include scripted or repetitive responses, instant (and perfectly crafted) replies, and obviously AI-generated photos. Other red flags include a contact who avoids voice and video calls, as well as unusual requests early in the relationship.
To avoid getting sucked into an AI-powered romance scam, slow down. Be wary of perfectly worded responses, which may indicate automation. Try asking unexpected questions or creating friction, which can confuse a chatbot. Remember that relationships should not be based on secrecy or depend on financial support. Social media and dating sites are full of fake profiles, so seeing is not always believing.




