Technology alone is not enough to combat deepfakes; we need a concerted effort

We need stringent regulations and laws at global and national levels to combat deepfakes.

Summary

  • Technology, regulation, education and social action must join forces to tackle this serious problem. We must become savvy media consumers and question the authenticity of any suspicious content.

We know that a technology concern has become truly serious when the leader of the world’s most populated country complains publicly about it. Narendra Modi, the Prime Minister of India, said in a recent speech that he himself had become the victim of a deepfake: a video showed him performing a folk dance he had never taken part in. He added that he had personally spoken to ChatGPT’s maker OpenAI about it. A few days before that, India’s IT minister had warned AI and social media firms that they needed to weed out deepfakes, and earlier that week three of India’s most popular film actors had suffered deepfake scandals.

While there seems to have been a spate of deepfakes recently, the phenomenon is not new. There is the infamous case of a Christmas broadcast in which Queen Elizabeth appeared to deliver a message, only for it to be revealed as a deepfake designed to highlight the technology’s dangers. Another example is a manipulated video of US politician Nancy Pelosi that made her appear intoxicated during an official speech. More recently, an image of a debonair-looking Pope in a white Balenciaga puffer jacket went viral.

The world’s first ‘certified’ deepfake was probably of an AI expert, Nina Schick, delivering a warning about how “the lines between reality and fiction are becoming blurred.” Manipulated or concocted videos are called ‘deepfakes,’ a term coined on Reddit in 2017 by combining ‘deep learning’ and ‘fake.’ The same Reddit community morphed the faces of actors such as Gal Gadot and Taylor Swift onto porn performers, opening a Pandora’s box. Around 95% of all deepfakes are estimated to be pornographic, causing untold distress to women.

The technology behind deepfakes is called Generative Adversarial Networks (GANs), invented by the AI scientist Ian Goodfellow. A GAN pits two AI models against each other: one forges an image, while the other tries to detect the forgery. Each time the ‘adversarial’ detector catches a fake, the forger adapts and improves, and the cycle repeats, producing ever more convincing fakes. Initially, this was a fun way to flaunt the prowess of AI. However, the idea took a dark turn, with political leaders’ speeches manipulated to cause unrest and ‘revenge porn’ clips made and circulated by jilted boyfriends. There is now fear that the 2024 elections in India, the US and the UK, among other countries, could be undermined by deepfakes, and that democracy itself could be subverted.
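For the technically curious, here is a rough, illustrative sketch of that forger-versus-detector loop in Python using PyTorch. The article names no particular library, and the toy data, model sizes and training settings below are my own assumptions; a real deepfake GAN works on images and is vastly larger, but the adversarial back-and-forth is the same.

    # Minimal GAN sketch: a "forger" (generator) vs. an "adversary" (discriminator).
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Toy "real" data: points drawn from a 2-D Gaussian centred at (2, 2).
    def real_batch(n=128):
        return torch.randn(n, 2) * 0.5 + 2.0

    generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
    discriminator = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))

    g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
    d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
    loss_fn = nn.BCEWithLogitsLoss()

    for step in range(2000):
        # 1) Train the adversary to label real samples 1 and forged samples 0.
        real = real_batch()
        fake = generator(torch.randn(128, 8)).detach()
        d_loss = loss_fn(discriminator(real), torch.ones(128, 1)) + \
                 loss_fn(discriminator(fake), torch.zeros(128, 1))
        d_opt.zero_grad()
        d_loss.backward()
        d_opt.step()

        # 2) Train the forger to make the adversary call its fakes "real".
        fake = generator(torch.randn(128, 8))
        g_loss = loss_fn(discriminator(fake), torch.ones(128, 1))
        g_opt.zero_grad()
        g_loss.backward()
        g_opt.step()

    # After training, forged points should cluster near (2, 2), like the real data.
    print(generator(torch.randn(5, 8)))

Each round of this loop makes the detector slightly better at catching forgeries and the forger slightly better at evading detection, which is exactly why the fakes keep improving.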

Deepfakes pre-date generative AI, but the latter has put the menace on steroids. Sophisticated deepfake production software has made detecting and stopping fakes very difficult. Deep learning algorithms are very good at analysing facial expressions and body movements, making these fakes incredibly realistic. Fakes can sometimes be detected through visual and auditory irregularities, and there are AI tools to identify them. But, much like the virus-antivirus scenario, this is an arms race: experts design a deepfake detector, and someone builds a better deepfake to evade it. It is a battle of AI against AI, and it can continue forever. Blockchain-based solutions are another option, establishing provenance so that content can be traced back to its origin. Big Tech firms and innovative startups are racing to develop digital watermarks and classifiers, and AI and social media companies are under pressure to ramp up efforts to weed deepfakes out.
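To give a flavour of the provenance idea, here is a deliberately simplified Python sketch. It is only an assumed, toy design: real content-credential systems use public-key signatures and tamper-evident ledgers rather than the shared secret key used here, but the principle is the same. A publisher signs a fingerprint of the original file, and anyone can later check whether a copy still matches that signed record; a manipulated or regenerated copy fails the check.

    # Toy provenance check: sign a hash of the original media, verify copies later.
    import hashlib
    import hmac

    SECRET_KEY = b"publisher-signing-key"  # hypothetical key held by the publisher

    def sign_media(data: bytes) -> str:
        """Publisher side: produce a provenance tag for the original file."""
        digest = hashlib.sha256(data).digest()
        return hmac.new(SECRET_KEY, digest, hashlib.sha256).hexdigest()

    def verify_media(data: bytes, tag: str) -> bool:
        """Consumer side: does this file still match the publisher's record?"""
        digest = hashlib.sha256(data).digest()
        expected = hmac.new(SECRET_KEY, digest, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, tag)

    original = b"...original video bytes..."
    tag = sign_media(original)

    print(verify_media(original, tag))                # True: untouched file
    print(verify_media(original + b"edited", tag))    # False: manipulated copy

Watermarking works from the other direction: the AI tool embeds a hidden signal in everything it generates, so that detectors can later flag the content as machine-made even without the original on hand.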

Technology by itself will not solve the problem; we need regulation and education. We need stringent regulations and laws at global and national levels. Awareness and education, at both a societal and a school level, are also a must. We must become savvy media consumers and question the authenticity of suspicious content. In some countries, children are formally taught in school to distinguish between real and fake content; perhaps we need that too.

One of my more horrific memories is of reading about innocent women disfigured for life by jilted spouses or boyfriends who poured acid on their faces to shame them and ruin their social lives. Many women fought back bravely, while others cut themselves off from society or even tried to kill themselves. Deepfakes, in a sense, are similar: an online ‘acid attack’ meant to take cheap revenge and ‘dishonour’ a person. Acid attacks came to be treated as a severe criminal offence, and society turned against their perpetrators. As our online and offline lives merge, we need something similar to combat those who attempt the same thing with deepfakes.
