AI is helping scammers outsmart you—and your bank

ChatGPT and other AI tools can even enable scammers to create an imitation of your voice and identity. (Pixabay)

Summary

Your “spidey sense” is no match for the new wave of scammers.

Artificial intelligence is making scammers tougher to spot.

Gone are the poorly worded messages that easily tipped off authorities as well as the grammar police. The bad guys are now better writers and more convincing conversationalists, able to carry on a conversation without revealing they are bots, say the bank and tech investigators who spend their days tracking the latest schemes.

ChatGPT and other AI tools can even enable scammers to create an imitation of your voice and identity. In recent years, criminals have used AI-based software to impersonate senior executives and demand wire transfers.

“Your spidey senses are no longer going to prevent you from being victimized,” said Matt O’Neill, a former Secret Service agent and co-founder of cybersecurity firm 5OH Consulting.

The frauds themselves are often variations on old scams. But AI has enabled scammers to target much larger groups and use more personal information to convince you the scam is real.

Fraud-prevention officials say these tactics are often harder to spot because they bypass traditional indicators of scams, such as malicious links and poor wording and grammar. Criminals today fake driver’s licenses and other identification to open new bank accounts, and add computer-generated faces and graphics to pass identity-verification checks. All of these methods are difficult to defend against, the officials say.

JPMorgan Chase has begun using large-language models to fight identity fraud. Carisma Ramsey Fields, vice president of external communications at JPMorgan Chase, said the bank has also stepped up its efforts to educate customers about scams.

And while banks stop some fraud, the last line of defense will always be you. These security officials say to never share financial or personal information unless you’re certain about who’s on the receiving end. If you do pay, use a credit card because it offers the most protection.

“Somebody who tells you to pay by crypto, cash, gold, wire transfer or a payment app is likely a scam,” said Lois Greisman, an associate director of the Federal Trade Commission.

Tailored targeting

With AI as an accomplice, fraudsters are reaping more money from victims of all ages. People reported losing a record $10 billion to scams in 2023, up from $9 billion a year prior, according to the FTC. Since the FTC estimates only 5% of fraud victims report their losses, the actual number could be closer to $200 billion.

Joey Rosati, who owns a small cryptocurrency firm, never thought he could fall for a scam until a man he believed to be a police officer called him in May.

The man told Rosati he had missed jury duty. The man seemed to know all about him, including his Social Security number and that he had just moved to a new house. Rosati followed the officer’s instruction to come down to the station in Hillsborough County, Fla., which didn’t seem like something a scammer would suggest.

On the drive over, Rosati was asked to wire $4,500 to take care of the fine before he arrived. It was then that Rosati realized it was a scam and hung up.

“I’m not uneducated, young, immature. I have my head on my shoulders,” Rosati said. “But they were perfect.”

Social-engineering attacks like the jury-duty scam have grown more sophisticated with AI. Scammers use AI tools to unearth details about targets from social media and data breaches, cybersecurity experts say. AI can help them adapt their schemes in real time by generating personalized messages that convincingly mimic trusted individuals, persuading targets to send money or divulge sensitive information.

David Wenyu’s LinkedIn profile displayed an “open to work” banner when he received an email in May offering a job opportunity. It appeared to be from SmartLight Analytics, a legitimate company, and came six months after he had lost his job.

He accepted the offer, even though he noticed the email address differed slightly from those on the company’s website. The company issued him a check to purchase work-from-home equipment from a specific website. When he was told to buy the supplies before the money showed up in his account, he knew it was a scam.

“I was just emotionally too desperate, so I ignored those red flags,” Wenyu said.

In an April survey of 600 fraud-management officials at banks and financial institutions by banking-software company BioCatch, 70% said criminals are more skilled at using AI for financial crime than banks are at using it for prevention. Kimberly Sutherland, vice president of fraud and identity strategy at LexisNexis Risk Solutions, said there has been a noticeable rise in fraud attempts that appear to be AI-related in 2024.

Password risks, amplified

Criminals used to have to guess or steal passwords through phishing attacks or data breaches, often targeting high-value accounts one by one. Now, scammers can quickly cross-reference and test reused passwords across platforms. They can use AI systems to write code that automates various aspects of their ploys, O’Neill said.

If scammers obtain your email and a commonly used password from a tech company data breach, AI tools can swiftly check if the same credentials unlock your bank, social media or shopping accounts.
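The mechanics behind that risk fit in a few lines of code. Here is a minimal, hypothetical sketch of the reuse check described above, run defensively against a made-up breach sample (the `BREACHED_SHA1` set and sample passwords are invented for illustration; real breach corpora and services such as Have I Been Pwned hold billions of entries):

```python
import hashlib

# Hypothetical sample of password hashes recovered from a breach dump.
BREACHED_SHA1 = {
    hashlib.sha1(p.encode()).hexdigest().upper()
    for p in ["password123", "letmein", "qwerty2024"]
}

def is_compromised(password: str) -> bool:
    """Return True if the password's SHA-1 hash appears in the breach set.

    A reused password that shows up in a dump like this can be tested
    automatically against bank, email and shopping logins within seconds,
    which is why a password leaked from one site endangers every account
    that shares it.
    """
    digest = hashlib.sha1(password.encode()).hexdigest().upper()
    return digest in BREACHED_SHA1
```

Comparing hashes rather than raw passwords mirrors how breach-checking services work: the password itself never needs to leave your machine.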

Outsmarting scams

Financial institutions are taking new steps—and tapping AI themselves—to shield your money and data.

Banks monitor how you enter credentials, whether you tend to use your left or right hand when swiping on the app, and your device’s IP address to build a profile on you. If a login attempt doesn’t match your typical behavior, it is flagged, and you may be prompted to provide more information before proceeding.

They can tell when you’re being coerced into filling out information because of shifts in your typing cadence. If digits are copied and pasted, if the voice verification is too perfect, or if text is too evenly spaced and grammatically correct, that is a red flag, said Jim Taylor, chief product officer at RSA Security, a firm whose fraud-detection tech is used by Wells Fargo, Citibank and others.
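A toy version of the cadence check described above can be sketched with a simple z-score test over inter-keystroke intervals. This is an illustrative assumption, not any bank's actual model; production systems combine many richer behavioral signals:

```python
import statistics

def flag_login(historical_ms: list, current_ms: list,
               z_threshold: float = 3.0) -> bool:
    """Flag a login whose typing cadence deviates from the user's profile.

    historical_ms: past inter-keystroke intervals (milliseconds).
    current_ms: intervals observed during this login attempt.
    Returns True when the attempt looks anomalous: pasted text arrives
    with near-zero intervals, and a coerced or scripted session tends
    to fall far outside the account holder's normal rhythm.
    """
    mean = statistics.mean(historical_ms)
    stdev = statistics.stdev(historical_ms)
    z = abs(statistics.mean(current_ms) - mean) / stdev
    return z > z_threshold

# Hypothetical profile of one user's typical human cadence.
profile = [110, 95, 130, 105, 120, 98, 115]
```

A flagged attempt would not block the login outright; as the article notes, it triggers a request for more information before proceeding.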

Self-defense

Consumers paid scammers $1.4 billion in cryptocurrency in 2023, up more than 250% from 2019, according to FTC data.

As a result, security officials suggest that you turn on two-factor authentication, so you get a text or email whenever someone tries logging into one of your accounts. If anything feels off during a potential money exchange, take a beat.

Pressing pause on a potentially fraudulent situation is also important psychologically. Many scammers try to create a false urgency or confuse victims to manipulate them. If all the information about a transaction or account is coming from one person, that is a red flag. Get a second opinion from a trusted contact.

“If it’s going to hurt if you lose it, validate it,” said O’Neill, the former Secret Service agent.

Write to Dalvin Brown at dalvin.brown@wsj.com and Katherine Hamilton at katherine.hamilton@wsj.com
