Why IT firms are treating artificial intelligence as a poisoned chalice

Cognizant said that as with many innovations, AI presents risks and challenges that could adversely impact its business. Photo: iStock

Summary

  • Generative AI is now part of every IT major’s toolkit, but the red flags in their recent annual reports show they are yet to fully grasp the risks and rewards of this revolutionary technology.

In the 18 months or so since ChatGPT was unveiled in November 2022, the IT industry has embraced generative AI. But as its penetration has increased, IT consultants have begun warning clients and investors about the potential downsides of AI.

Companies such as Accenture, Cognizant and Capgemini have flagged potential risks in the management discussions published in their annual reports and in conference calls with investors. India’s IT majors, including TCS, Infosys, Wipro and HCL Tech, are likely to follow suit in their upcoming management discussions.

Red flags

Cognizant said, “Our use of AI technologies may present business, financial, legal and reputational risks. As with many innovations, AI presents risks and challenges that could adversely impact our business.” The company added, “Failure to appropriately conform to this evolving landscape may result in legal liability, regulatory action, or brand and reputational harm.”

French IT giant Capgemini also spoke about the technology’s potentially disruptive effects. “Generative AI will transform numerous jobs and create new ones... From a societal point of view, generative AI threatens to exacerbate social divides and raises challenges for democracy and human rights, particularly as its workings lack transparency and results may be inappropriate or biased,” it said.


Accenture warned about security risks and the potential impact on profitability. “[GenAI] could reduce our ability to obtain favourable pricing… As cyberattacks become increasingly sophisticated (e.g. deepfakes and AI-generated social engineering), the risk of security incidents and cyberattacks has increased. Such incidents could lead to shutdowns, or disruptions of or damage… and unauthorised disclosure of sensitive or confidential information.”

Gen AI is creating new revenue streams and is now part of every IT major’s toolkit. Microsoft (in partnership with OpenAI), Meta and Google have deployed gen AI tools, while Apple has reportedly explored using Google’s Gemini in its devices. Hyperscalers and telecom service providers that run cloud operations and data centres are also benefiting, as AI deployment has brought them new business.

Accenture said gen AI will contribute about $1.1 billion to its order book for January-June 2024, or about 2.8% of the total. TCS estimated it had gen AI contracts worth $900 million, about 2.1% of its order book as of March. The numbers may seem negligible, but these revenue streams did not exist 18 months ago, and IT services firms are also increasing their use of AI internally.

Tectonic shifts

AI’s disruptive impact will be felt across every industry and across society. It will cause tectonic shifts in employment and work patterns. Call centres, for example, will need fewer human agents. Basic coding is also becoming AI-dominated, so there will be less need for “ant farms” of coders. Translation and basic news reporting are also being done by AI. On the flip side, more and smarter humans will be needed in areas such as cybersecurity to tackle the risks AI poses.

Many professions could become redundant even as new ones are created, requiring the retraining and upskilling of employees. A rough analogy is the emergence of the internal combustion engine: within a few decades, cars generated millions of jobs and transformed society, but horse-breeders, carriage-drivers and others went out of business. So yes, AI could “exacerbate social divides”, as Capgemini pointed out.


The use of AI on the battlefield, in law enforcement and in surveillance – all potential revenue opportunities – is also a cause for concern. In terms of reputational harm, Google has already seen employees protest against Project Nimbus, through which the company provides AI, cloud computing and other resources to Israel’s military.

Israel has also reportedly used locally developed AI systems called ‘Lavender’ and ‘Where’s Daddy’ to identify suspected Hamas members in Gaza and mark them as targets for strikes. AI also controls military drones and other autonomous and semi-autonomous weaponry on modern battlefields.

Regulators playing catch-up

Since AI also enables accurate face recognition and can analyse complex information for hidden patterns and connections, its potential uses range from driving cars to identifying rare cancers. However, the same technology could be used by repressive regimes to track dissidents and crush opposition.

Policymakers have started to address some of these concerns. The EU, for instance, has put restrictions on the use of face recognition and on AI that targets minors. Its AI Act, which classifies AI systems by levels of risk, could serve as a template for other lawmakers, but the technology is always likely to outpace regulation.

Like any other technology, AI is in itself neither good nor bad; its effects depend on how it is used. As it is trained to perform more tasks, businesses at the cutting edge of AI R&D and deployment will have to walk a tightrope. So far, however, we have fully grasped neither the risks nor the rewards.

