
Victim loses almost $28,000 in AI romance scam

May 13, 2026  Twila Rosenbaum

A man in Shanghai, China, has lost nearly $28,000 after falling victim to an AI-powered romance scam, according to reports from Chinese state media. The scammers used generative artificial intelligence to create realistic videos and photos of a young woman they called 'Ms. Jiao.' The victim, who became romantically interested in the fictional persona, transferred close to 200,000 yuan (approximately $28,000) to a bank account controlled by the fraudsters.

The scam was further bolstered by the creation of a fake identity and even fabricated medical records. These documents were used to manipulate the victim into believing that the woman urgently needed financial assistance for medical bills. Romance scams have existed for years, but the integration of advanced AI technology has made them far more convincing and dangerous. Deepfake videos and AI-generated images now enable scammers to create lifelike personas that are nearly indistinguishable from real people.

The Mechanics of AI Romance Scams

Generative AI tools built on deep learning models can produce high-quality images, videos, and voice recordings that mimic human appearance and speech. In the Shanghai case, the scammers likely used software that can generate a full set of realistic photos and short video clips of a nonexistent person. They then constructed a backstory, including a name, occupation, and personal history, to make the persona seem credible. By adding fake medical records and bills, they created a sense of urgency and emotional pressure that prompted the victim to send money.

This type of fraud is not limited to China. A report released by the cybersecurity firm McAfee on February 11, 2025, found that more than half (52%) of people have either been scammed out of money or pressured to send money or gifts by someone they met online. The company noted an 'explosion of online romance fraud' across social media, messaging platforms, and AI chatbots. In the same study, 26% of respondents said they—or someone they know—had been approached by an AI chatbot posing as a real person on a dating app or social media platform.

Additionally, 21% of people reported being contacted by someone pretending to be a well-known public figure. Among those who fell for such schemes, 33% lost money, with an average reported loss of $1,985. McAfee also blocked 321,509 fraudulent URLs designed to lure victims in the seven weeks leading up to Valentine’s Day, demonstrating the scale of the problem.

Global Cases Highlight the Growing Threat

One of the most shocking incidents occurred in France earlier this year, where a woman was duped out of €830,000 (about $850,000) after believing she was in a relationship with Hollywood actor Brad Pitt. Scammers used AI-generated images and videos to impersonate the actor, communicating with the victim over several months. The fraudsters exploited the victim’s emotional attachment and trust, eventually convincing her to transfer large sums of money.

These cases illustrate how AI is enabling a new wave of highly targeted and believable scams. Unlike traditional romance scams that relied on text-based conversations and stolen photos, modern fraudsters can now create custom content that appears authentic. Deepfake technology can even mimic a person’s voice in real time, making phone calls and video chats seem genuine. As the cost of generative AI tools drops and their quality improves, the barrier to entry for scammers continues to lower.

The Role of Social Media and Dating Apps

Social media platforms and dating apps are the primary hunting grounds for romance scammers, who often create fake profiles with AI-generated images to avoid detection. In the Shanghai scam, the persona of 'Ms. Jiao' was likely built from a combination of AI-generated photos and a fabricated identity. The victim probably met the persona on a dating app or social media, where routine conversation gradually escalated into a romantic relationship.

Once trust is established, scammers introduce a crisis—often a medical emergency, travel problems, or legal issues—that requires immediate financial help. The use of fake documents, such as hospital bills, police reports, or plane ticket confirmations, adds a layer of credibility. With AI, these documents can be generated quickly and accurately, making it harder for victims to spot the fraud.

Law enforcement agencies around the world are struggling to keep pace with these evolving tactics. The anonymity provided by the internet, combined with the ability to create convincing deepfakes, makes it difficult to trace scammers. Many operate from countries with weak cybersecurity laws, further complicating prosecution. Victims often feel embarrassed and may be reluctant to report the crime, leading to underreported statistics.

How to Protect Yourself from AI Romance Scams

While AI technology makes scams more sophisticated, there are still warning signs that can help potential victims avoid falling prey. First, be cautious of anyone who quickly professes strong feelings or talks about a future together without having met in person. Scammers often rush to establish an emotional bond to lower your guard. Second, never send money to someone you have only met online, regardless of the story they tell. Legitimate emergencies do not require wire transfers, gift cards, or cryptocurrency.

Another red flag is a reluctance to video chat or meet in person. While scammers can now fake video calls using deepfakes, many still avoid real-time interaction. If they do agree to a video call, look for inconsistencies in the background, facial movements, or audio quality. AI-generated videos may have slight glitches, such as unnatural blinking or lip-sync errors. Use reverse image search tools to check if profile photos appear elsewhere on the internet, as scammers often reuse images.
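The reverse-image check mentioned above typically works via perceptual hashing: similar photos produce similar fingerprints even after resizing or recompression. Below is a minimal, illustrative sketch of an "average hash" in plain Python. It assumes images have already been decoded into flattened 8x8 grayscale thumbnails; the pixel data is invented for demonstration, and real services use far more robust techniques.

```python
def average_hash(pixels):
    """Simple average hash: one bit per pixel, set when the pixel is
    brighter than the thumbnail's mean brightness."""
    mean = sum(pixels) / len(pixels)
    return [1 if p > mean else 0 for p in pixels]

def hamming_distance(h1, h2):
    """Count of differing bits; a small distance suggests the same image."""
    return sum(a != b for a, b in zip(h1, h2))

# Hypothetical 8x8 grayscale thumbnails (values 0-255), flattened.
original = [10] * 32 + [200] * 32
recompressed = [12] * 32 + [198] * 32   # same photo, slightly re-encoded
different = [200] * 32 + [10] * 32      # an unrelated image

h_orig = average_hash(original)
dist_same = hamming_distance(h_orig, average_hash(recompressed))   # 0
dist_other = hamming_distance(h_orig, average_hash(different))     # 64
```

The re-encoded copy hashes identically despite its slightly shifted pixel values, while the unrelated image differs in every bit, which is why a stolen profile photo can be matched even after light editing.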

Additionally, educate yourself about common scam tactics. The McAfee report highlights that a significant number of people have encountered AI chatbots posing as humans. If a conversation feels robotic or perfectly scripted, it might be a bot. Trust your instincts—if something feels off, it probably is. Financial institutions and dating platforms are also implementing AI to detect fraudulent behavior, but individual vigilance remains the best defense.
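The recurring pattern in these scams (manufactured urgency paired with a request for money) can be illustrated with a toy screener. The sketch below is purely illustrative: the keyword lists are invented for this example and are nowhere near a real fraud-detection model, which would use far richer signals.

```python
# Toy red-flag screener for the pattern described above: messages that
# combine urgency with a request for money. Keyword lists are invented
# for illustration only, not a real detection model.
URGENCY = {"urgent", "emergency", "immediately", "hospital"}
MONEY = {"wire", "transfer", "gift card", "crypto", "bitcoin", "loan"}

def red_flags(message):
    """Return which red-flag categories a message triggers."""
    text = message.lower()
    flags = []
    if any(word in text for word in URGENCY):
        flags.append("urgency")
    if any(word in text for word in MONEY):
        flags.append("money request")
    return flags

msg = "It's urgent -- my mother is in hospital, can you wire me money?"
print(red_flags(msg))  # -> ['urgency', 'money request']
```

Even this crude heuristic catches the Shanghai scenario's core move, a sudden medical emergency plus a payment request, which is exactly the combination victims are advised to treat as a warning sign.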

As generative AI continues to advance, the threat of romance scams will likely grow. The technology that powers creative content and virtual assistants can also be weaponized for fraud. Ongoing public awareness campaigns, improved detection tools, and stronger legal frameworks are necessary to combat this emerging cybercrime. The case in Shanghai serves as a stark reminder that even tech-savvy individuals can be deceived by the persuasive power of AI.


Source: ReadWrite News

