GenAI’s ability to hallucinate may turn out helpful if we use these tools well

The efficacy of a large language model (LLM) is often measured by how much it does or does not hallucinate, with research companies introducing hallucination indexes.

Summary

  • What Generative AI throws up may lack factual accuracy, and this has many people worried. But we should start viewing hallucinations as an aid to human creativity.

German chemist Friedrich Kekulé had a reverie, or daydream, of a snake biting its own tail, and began to wonder whether the six carbon atoms of the benzene molecule had a similar structure. This hallucinatory experience led to the discovery of the hexagonal ring structure with alternating single and double bonds, a ground-breaking concept in organic chemistry. Kekulé was not the only one. Dmitri Mendeleev reportedly had a vision of the periodic table, and Thomas Edison claimed to mine his dreams for material. Writer Stephen King claimed to have dreamt up his novel Misery during a somnolent flight, and the masterpieces of Van Gogh and Salvador Dali were often inspired by hallucinations.

The word ‘hallucinate’ entered the technology lexicon after the launch of ChatGPT and the realization that these Generative AI chatbots were inventing, or ‘dreaming up’, a lot of false and weird stuff. Bing’s alter ego Sydney famously expressed its undying love for a New York Times reporter. A US lawyer relied on ChatGPT to file a case against an airline, only for the judge to find that all the cases cited were dreamt up by the chatbot. When I was writing a paper on Indian philosophy and privacy for a Cambridge University course, ChatGPT authoritatively gave me five research papers to cite—all of them wrong. This hallucinatory tendency of GenAI has people worried, especially in enterprise use cases or applications in healthcare and education. In fact, the efficacy of a large language model (LLM) is often measured by how much it does or does not hallucinate, with research companies introducing hallucination indexes. Recent Cornell research (bit.ly/48gko5Y) revealed that GPT-3.5 hallucinated 69% of the time, and Meta’s Llama 2 hit an astounding 88%. While later versions of these models have improved substantially, companies worry that the nonsense these models spew out could hurt their brand and stock price, anger customers and pose a legal threat.

However, we need to think differently about this. What if hallucinations in LLMs are a feature, not a bug? The probabilistic construct of these models promotes this behaviour, and it may be impossible for Generative AI to be accurate all the time. What if we start leveraging this human-like creativity (and, yes, hallucination) the same way Kekulé and Dali did? Sundar Pichai of Google backs the thought. He suggests that hallucinating could be both a feature and a bug, and that a GenAI experience should be “imaginative, like a child who doesn’t know what the constraints are when they’re imagining something." Marc Andreessen of a16z remarked: “When we like the answer, we call it creativity; when we do not, we call it hallucination."
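The ‘probabilistic construct’ mentioned above can be made concrete. An LLM does not look up facts; it samples the next token from a probability distribution, and a ‘temperature’ setting controls how adventurous that sampling is. The toy sketch below (illustrative only; the logit values and vocabulary are invented, not from any real model) shows how a higher temperature flattens the distribution, making less likely continuations—the raw material of both creativity and hallucination—more probable.

```python
import math
import random

def sample_with_temperature(logits, temperature, rng):
    """Sample a token index from raw logits after temperature scaling.

    Temperature < 1 sharpens the distribution (safer, more 'factual');
    temperature > 1 flattens it (more surprising, more 'hallucinated').
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)                           # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Inverse-CDF sampling from the softmax distribution.
    r = rng.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return i
    return len(probs) - 1

# Hypothetical 4-token vocabulary: token 0 is the most likely ('factual')
# continuation, tokens 1-3 are unlikely alternatives.
logits = [5.0, 1.0, 1.0, 1.0]
```

At a low temperature the model picks the high-probability token almost every time; at a high temperature the unlikely tokens are chosen far more often. This is why the same architecture can be tuned towards either reliable answers or imaginative output.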

Artists and creators have caught on. John Thornhill has written in the Financial Times (bit.ly/42E8FNm) about Martin Puchner, a Harvard professor who “loves hallucinations." Puchner talks about how humans mix cultures and inputs from previous generations to generate new things, and how civilizations advance that way. “Culture," he says, “is a huge recycling project." This is precisely what GenAI is doing. It is borrowing, stealing, copying and mashing up different inputs from humans to create something new. Thus, says Thornhill, “Hallucinations may not be so much an algorithmic aberration as a reflection of human culture." If we stop looking at GenAI as a forecasting tool and instead see it as one that enhances our creative prowess by giving us innovative ideas and content, ‘hallucinations’ would be welcome. Modern artists and creators have started harnessing this power. Visual Electric, a California-based firm, encourages hallucinations to create new visuals and ideas (bit.ly/49A1wAd). Austin Carr has written in BusinessWeek about a film director, Paul Trillo, who used GenAI to create an acclaimed short film with psychedelic effects. Inworld AI uses the creativity of GenAI to help video game developers build interactive computer characters.

We need to see GenAI for what it is, not confuse it with machine or deep learning (which are also AI) and expect it to make high-accuracy predictions. Think of GenAI as a writer of fiction, not non-fiction. It is the ‘creative’ side of GenAI that enables idea generation, art production and better work outcomes with tools like Copilot. If we think like Stephen King or Van Gogh, it can become an immensely powerful creative tool. As for use cases that require exact answers, we need to be careful until these models improve. Until then, as John Thornhill concludes his FT article: “Caveat prompter." GenAI users, beware.
