AI needs a globally coordinated effort for its effective regulation

The European Union has adopted the AI Act, an approach similar to its much-vaunted General Data Protection Regulation (GDPR)

Summary

  • A globally coordinated effort may be the best of the various proposed approaches, even if fractured geopolitics poses challenges

Last week, I was delivering a keynote on Generative AI at the leadership retreat of an industry organization when its president raised a question: “We all talk about regulating AI. But can a technology like this even be regulated?” I have been asked this a few times before. In fact, when I started my Masters in AI and Ethics at Cambridge University in September 2021, ChatGPT was not yet out, but even then we had the feeling that the horse had already left the barn, and that ethicists and regulators were chasing a fast-moving target. With Generative AI and ChatGPT moving so fast, we sometimes cannot even see the horse.

So, while we all have noble intentions, are there ways we can regulate AI? There are five proposed ways (bit.ly/3LwbNn9). The first is licensing, which came straight from the horse’s mouth, in this case OpenAI CEO Sam Altman’s; he suggested that AI companies should need some kind of licence to operate and that all such licensed bodies should be regulated. The second is use-case-led regulation, much like the US FDA regulating new drugs at their point of use. The third is for countries to get together and work cooperatively on AI, as they did on the discovery of the Higgs boson at the CERN accelerator; with fractured geopolitics, though, this might be like flogging a dead horse. The fourth is an ‘isolated island’ approach, where AI research on superintelligence happens in an isolated, ‘air-gapped’ manner before it is released into the wild, something like how new aircraft are built and approved. The fifth, also suggested by Altman, is akin to regulating nuclear energy through the International Atomic Energy Agency and the Non-Proliferation Treaty.

None of these is perfect, and all would require countries and companies to come together. In the meantime, major jurisdictions are deploying a horses-for-courses approach. The EU’s is predictably the most stringent. Its AI Act takes an approach similar to its much-vaunted General Data Protection Regulation (GDPR): under it, creators of large language models (LLMs) would be responsible and liable for how their technology is used, even if another party licenses or buys it and puts it to some other use. This has drawn howls of protest from OpenAI, with Altman threatening to withdraw ChatGPT from the EU, and from Google, whose Kent Walker said, “You wouldn’t expect the maker of a typewriter to be responsible for something libellous.”

The US approach has been more laissez-faire, with the AI industry being asked to regulate itself, aided by some not-so-gentle prodding from the White House. US President Joe Biden got major AI players such as OpenAI, Microsoft, Google and Meta together and made them sign a set of ‘voluntary commitments’, which include internal and external testing of AI systems before release, clear labelling of AI-generated content, and a promise of increased transparency on the capabilities and limitations of their models. The US is very reluctant to stifle innovation in its world-leading AI industry; it understandably does not want to look a gift horse in the mouth.

China, on the other hand, has been the earliest and strictest regulator of AI. Its approach has been very targeted and specific, with an intent to control the flow of information, especially information it disapproves of, requiring AI products to adhere to the “core values of socialism.” Companies have to submit their models for approval before they are released, as we have seen in the recent cases of Tencent and Baidu. In many ways, a Great Wall is being built around Generative AI too. Much like the splintered internet, we will likely see a splintered AI world as well.

The UK has proclaimed a ‘pro-innovation’ approach. Its idea is to regulate usage rather than the technology. It is also striving to take a leadership position with its Global AI Summit in London scheduled this October, getting countries together to hammer out a global framework.

This is the path countries like India seem to favour for borderless technologies like crypto and AI, where a global framework would be more effective than country-by-country regulation. A technology like AI crosses borders effortlessly and can be built by a developer with a laptop in a far-off basement. Much like it did with nuclear power, the world needs to come together to harness and regulate a technology as existential as AI. The state of the world, however, does not lend itself to much optimism on this front. After all, you can lead a horse to water, but you can’t make it drink.
