Before worrying about a killer bot, regulators must take on human abuse of AI


Summary

  • Anxiety over AI outsmarting and killing humans mustn’t distract us from its here-and-now risks. We should worry more about immediate AI threats posed by rogue humans, such as propagandists deploying deepfakes or dictators unleashing autonomous weapons.

Every week sees a slew of launch announcements in artificial intelligence (AI). Last week, however, was marked by a rush of declarations on how to regulate it. It started with the US springing a surprise: Joe Biden’s executive order requiring AI majors to be more transparent and careful in their development. This was followed by the AI Safety Summit convened by Rishi Sunak at Bletchley Park; attended by 28 countries (China included) and boasting the star power of Elon Musk, Demis Hassabis and Sam Altman, it led to a joint communique on regulating Frontier AI. The EU is racing to be next, China has issued rules of its own, and India is making the right noises. OpenAI announced a team to tackle Super Alignment, declaring: “We need scientific and technical breakthroughs to steer and control AI systems much smarter than us.”

The race to develop AI has turned into a race to regulate it. There is certainly some optimism here—that governments and tech companies are awake to the dangers this remarkable technology can pose to humankind—and one cannot help but applaud the fact that they are being proactive about managing the risks. Perhaps they have learnt their lessons from the ills that social media begat, and want to do better this time. Hopefully, we will not need an AI Hiroshima before people wake up to its dangers.

However, I am not so sanguine about this. On closer look, most of this concern and regulation seems to be directed towards what is loosely called Frontier AI—that time in the future when AI will become more powerful than humans and perhaps escape our control. The Bletchley Park AI Summit was very clear on this; it focused on Frontier AI. The OpenAI initiative is also about alignment between human and superintelligent AI values—thus the term ‘super alignment.’ Most of the narrative around regulating AI seems to be focused on this future worry. My belief, however, is that we need to worry far more about the here-and-now, and the current issues that AI has. Today’s large language models (LLMs) often hallucinate, playing fast and loose with the truth. AI-powered ‘driverless’ cars cause accidents, killing people. Most GenAI models are riddled with racial and gender biases, having been trained on biased supersets of data. Copyright and plagiarism problems abound, with disgruntled human creators filing lawsuits in courts for redressal. And then, the training of these humongous LLMs spews out CO2 and degrades the environment (bit.ly/3QsM2Wx).

Gary Marcus, noted AI scientist and author, echoes this sentiment: “…the (UK AI) summit appears to be focusing primarily on long-term AI risk—the risk of future machines that might in some way go beyond our control…. We need to stop worrying (just) about Skynet and robots taking over the world, and think a lot more about what criminals, including terrorists, might do with LLMs, and what, if anything, we might do to stop them.” (bit.ly/3tQXVy5). A Politico article (politi.co/3tUSzli) has an interesting take. It talks about a deliberate effort by Silicon Valley AI billionaires lobbying the US government to focus on just ‘one slice of the overall AI problem’—“the long-term threats that future AI systems might pose to human survival.” Critics say that focusing on this ‘science fiction’ shifts the policy narrative away from pressing here-and-now issues, ones that leading AI firms might want kept off the policy agenda. “There’s a push being made that the only thing we should care about is long-term risk because ‘It’s going to take over the world, Terminator, blah blah blah,’” says AI professor Suresh Venkatasubramanian in Politico. “I think it’s important to ask, what is the basis for these claims? What is the likelihood of these claims coming to pass? And how certain are we about all this?”

This is exactly my point. Instead of doomsday scenarios caused by a future superintelligence, which carry a comparatively tiny probability, we need to focus on the many immediate threats that AI poses.

It is not a Terminator robot arising from a data centre that will cause the destruction of humanity. More likely, it will be a malevolent state actor who uses deepfakes and false content at scale to subvert democracy, or a cornered dictator who turns to AI-based lethal autonomous weapons to win a war he is losing. Moreover, an unbridled race to build the next massive LLM will further accelerate global warming. And then there is the deluge of fake, provocative news that could turn communities against each other.

AI might not harm us, but a human using AI could. We need to regulate humans using AI, not AI itself.
