Utopia or dystopia: Can artificial intelligence really be regulated?

Analogies of AI regulation with nuclear weapon control regimes are flawed, but we could take a cue from Asimov’s prescience and formulate a basic set of rules aimed at protecting people.

Summary

  • We should perhaps turn to Isaac Asimov’s fictional ‘three laws of robotics’ to protect us from robots that are smarter than us

Since OpenAI, an artificial intelligence (AI) company, launched ChatGPT in November 2022, followed by GPT-4, a more advanced version of its underlying large language model, AI has taken the world by storm. People at large are confused: awestruck by the immense possibilities outlined in the utopian narrative, yet at the same time "a little bit scared," as OpenAI chief executive officer Sam Altman put it, by the alternative dystopian narrative. Politicians also appear confused, though they pretend to be in control.

In my simple lay person’s understanding, generative AI is essentially a family of algorithms (computer programs) that use artificial neural networks to learn language and answer virtually any question, based on amassing and mining huge amounts of data. The greater the volume of data they are trained on, the greater the capacity of these large language models. The reliability or adequacy of their answers is, of course, another question.

In the utopian narrative, these models open up vast opportunities in everything from the creation of literature, music and art to the extension of fundamental scientific knowledge, such as determining the structures of all proteins, with consequent breakthroughs in medical science, agriculture and manufacturing. AI will enable new production processes across industry and services with much higher productivity, new forms of mobility and communications, new ways of monitoring and mitigating climate change, and more. In short, it will transform the technological foundations of modern human society as we know it.

But there are also threats that come along with these opportunities. Mint columnist Anurag Behar recently reported on studies of the shocking health consequences for teens of the increasing use of smartphones, the internet and social media: a rising frequency of self-harm, hospitalization and suicide (Mint, 22 June 2023). He also noted the adverse impact of digital reading on attention: shallow reading, reduced comprehension and so on. If that has been the impact of just digital reading, smartphones and social media, how much worse would the effects of increasing human dependence on AI be, especially in education? With such outsourcing of our thinking, would our capacity to think wither away over time?

A Wall Street Journal article reported even more frightening consequences for workers in Kenya engaged in filtering dark content out of the masses of text, visual and audio material used to train large language models like ChatGPT. For hours on end, day after day, they were required to review awful material: toxic and violent language, videos of rape, beheadings and suicides, child abuse and bestiality. Not surprisingly, many of these workers ended up with mental illness and broken families.

These were unintended consequences. But the intended consequences of AI deployment are also deeply worrying. In their just-published volume, Power and Progress, tracing a 1,000-year history of technological change, Daron Acemoglu and Simon Johnson point out that technical progress has mostly been driven by the dual goals of maximizing productivity while minimizing the share of labour, for the benefit of a small controlling elite. In a candid keynote address at a conference organized by the Institute of Human Development last week, Johnson stated that AI is taking this dual pattern of maximizing productivity while minimizing the role of labour to a whole different level. The role of labour will not just be reduced but possibly eliminated altogether in some branches of production, which would further increase inequality.

However, Acemoglu and Johnson are not entirely pessimistic about AI. They believe AI can be directed to augment human labour rather than replace it, if labour organizations and civil society can be mobilized to nudge public regulatory policy in that direction. But therein lies the key question: Can AI be regulated? A global regulatory regime similar to the nuclear regulatory regime is sometimes suggested. This is not surprising, since both technologies pose existential threats. Also, the world has successfully fended off nuclear war for over 75 years. But is that because of the regulatory regime or the fear of mutually assured destruction? The Cuban missile crisis suggests it is the latter, and in a context where nuclear arsenals were controlled by two rival states. That analogy breaks down in the case of AI, where the technology is controlled by a group of private corporations, mostly in the US.

There is also a more fundamental difference that defies regulation. Geoffrey Hinton, widely known as a godfather of AI, recognized the existential threat that AI poses and resigned from Google to sound the alarm. He warned that as AI systems develop and become more powerful, they also become more dangerous; "killer robots" is the term he is reported to have used. This is probably also recognized by Altman and the other heads of AI technology firms. What happens when AI surpasses human intelligence, as many in the field believe is likely to happen in the near future?

When we are lost for answers in the real world, it is tempting to turn to fiction. Most breakthrough inventions were anticipated in science fiction long before they became reality. The same is true of smarter-than-us robots.

In a landmark short story titled Runaround, published in 1942, Isaac Asimov spelt out three laws of robotics to protect human beings from robots smarter than themselves: a robot may not injure a human being or, through inaction, allow one to come to harm; it must obey human orders except where these conflict with the first law; and it must protect its own existence except where that conflicts with the first two. Can we conceive of such laws being built into the foundations of all large language models, or is that just a lay person’s desperate imagination?

These are the author’s personal views.
