As generative AI platforms such as ChatGPT-4o, Grok 3, and Midjourney become more advanced and accessible, cyber experts are warning of a disturbing rise in misuse—one that poses serious risks to users, institutions and democratic processes, especially in regions like South Asia.
Madhu Srinivas, Chief Risk Officer at Signzy, a global RegTech company specialising in AI-powered risk and compliance solutions, sounded the alarm in an exclusive interaction, noting how bad actors are increasingly exploiting these tools beyond simple document forgery.
"These platforms are now being used to create fake human faces, hyper-realistic scenes, and emotionally manipulative visuals that feed into scams, smear campaigns, and even political disinformation," said Srinivas.
The most pressing dangers for common users lie in sophisticated phishing attacks enhanced by fake imagery, the risk of personal photos being scraped and turned into deepfakes, and the psychological toll of being targeted by manipulated content on social media and messaging platforms.
"Victims often don’t realise they’ve been targeted until the damage is done," Srinivas added. "We’re seeing a rise in non-consensual content and AI-powered fraud, both of which are escalating rapidly."
Srinivas listed five real-world scenarios demonstrating the threat posed by AI-generated visuals:
Generative AI’s ability to mimic facial and iris biometrics is also causing deep concern in sectors like fintech and surveillance. In financial services, synthetic identities can be used to bypass Know Your Customer (KYC) protocols, while in public surveillance, fake faces make it easier to evade detection or commit fraud.
"When a fake face can pass as real, the integrity of biometrics itself comes under threat," Srinivas warned.
The danger is particularly pronounced in South Asia, where political tension, rapid information spread via WhatsApp and Telegram, and emotional polarisation make AI-generated images an ideal vector for disinformation.
"A single fake image of a politician at a fabricated protest or a violent incident can go viral in minutes," Srinivas said. “These visuals influence public opinion far faster than facts can catch up, making them powerful tools of manipulation.”
He also highlighted the weaponisation of deepfakes in cyber extortion schemes targeting journalists, activists, and women, a tactic foreign actors are increasingly using to sow discord and mistrust.
While platforms such as OpenAI and xAI have introduced basic safeguards like content filters and limited watermarking, Srinivas believes these efforts are not yet sufficient.
"The technology is far ahead of the safety mechanisms. Anyone can generate convincing fake IDs or human faces with minimal effort," he noted.
He proposed a multi-pronged strategy to mitigate the risks:
Srinivas emphasised the need for law enforcement, journalists, and educators to urgently adapt.
Law enforcement agencies must strengthen their cyber forensic capabilities, update legal frameworks to handle synthetic content crimes, and collaborate internationally to track cross-border campaigns.
Journalists must integrate tools that verify visuals and metadata into everyday reporting. "In today's media ecosystem, seeing is no longer believing," Srinivas said. "Newsrooms must treat visual verification as seriously as fact-checking."
Educators, he stressed, have a pivotal role in shaping the next generation's ability to distinguish real from fake. "AI literacy and visual critical thinking must be part of the school curriculum. Students and teachers alike need to be equipped to understand how these tools work—and how they can be abused."
In a world increasingly shaped by synthetic content, Srinivas believes that truth itself is under threat—but not beyond defence.
"With foresight, collaboration, and investment in digital resilience, we can equip our societies to protect the public from misinformation and restore trust in what we see," he said.