
Doug Lenat and the search for AI

Lenat's project focused on building comprehensive AI using common sense and general models of the world, and his approach differed from current generative AI models

Douglas Lenat

My only contribution to the Cyc project—an artificial intelligence (AI) project for comprehensive ontology—was vanishingly small, and some 32 years on, I have no idea if it persists. It was a piece of code in the Lisp programming language. It was set in motion when you clicked on an object on screen and moved it around using your mouse. In short, it made it easier to visualize that motion.

I had written code like that before, so I knew how to write it here. Now I had to show it to the Cyc guys. I walked across the atrium to the office of the man whose brainchild Cyc was, Douglas Lenat. He and a colleague, Ramanathan Guha, came back to my office wearing looks of serious scepticism. I barely knew them, I wasn’t part of the Cyc team, so I could almost hear the question buzzing in their minds: “What’s this dude going to show us about our own effort that we don’t already know?"

But they were charmed by my little utility. To their credit, they looked at me with newfound respect, thanked me and said they would incorporate it into Cyc. For the next several months, until I quit the company we all worked at, MCC, I’d get a cheery “Hi" from them every time we crossed paths.

It’s been three decades, and I have lost touch with Lisp, MCC, Cyc, Guha and Lenat. Still, I felt a distinct pang on hearing that Douglas Lenat died on 31 August, at nearly 73.

AI is all the rage these days, of course, astonishing people, raising worries, showing up everywhere. For just one example: as I write these words, I’m occasionally checking highlights from the ongoing US Open tennis tournament. To my surprise, these clips are embellished with commentary that’s clearly AI-generated. I’ll say this: it’s only about adequate. There are giveaways that the speaker and the words aren’t actually human. First, the slightly wooden voice. Second, the slightly awkward turns of phrase—like “at the crucial moment, Sinner drops the match point", or “Sinner loses the first set after Zverev’s electrifying ace". No tennis observer speaks like this.

This strain of AI (usually called “generative") builds on so-called Large Language Models: statistical models trained on vast bodies of text, from which they learn how language is put together. As the tennis commentary and many other examples show, these LLMs do a pretty good job of mimicking humans, of showing us what looks very much like intelligence. Until they don’t—for which the tennis commentary, again, is itself an example. The reason we sometimes find our brows furrowing while reading or listening to something produced by ChatGPT is that while it can look reasonably convincing and persuasive, it often is not quite right.

Here’s another example. I had this exchange with ChatGPT just now:

Me: “My bedroom has three people in it. I walk in to say hello. How many of us are there?"

ChatGPT: “If your bedroom initially had three people in it, and then you walk in to say hello, there would still be three people in the room. You walking into the room does not change the number of people who were already there."

As you see, it’s a perfectly constructed answer that is also totally wrong (the correct count is four), and one that you would never get from a human. So what happened? As Lenat and Gary Marcus explained in a recent paper (“Getting from Generative AI to Trustworthy AI: What LLMs might learn from Cyc", https://arxiv.org/pdf/2308.04445.pdf, 31 July 2023), ChatGPT’s failure here is in deduction. “A trustworthy AI," they write, “should be able to perform the same types of deductions as people do, as deeply as people generally reason."
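To see what deduction over explicit facts looks like, here is a deliberately trivial sketch (my own illustration, nothing to do with Cyc’s or ChatGPT’s actual machinery): once the situation is represented as facts and a common-sense rule, the bedroom question reduces to arithmetic.

```python
# Hypothetical illustration: explicit facts plus one common-sense rule.

def people_in_room(initially_present: int, enterers: int) -> int:
    # Common-sense rule: anyone who walks into the room is now in it.
    return initially_present + enterers

# Three people are in the bedroom; the speaker walks in to say hello.
total = people_in_room(3, 1)
print(total)  # 4 -- the answer any human gives without thinking
```

The point is not that counting is hard, but that a system which reasons over explicit representations cannot lose track of the speaker the way a text-prediction model can.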

And in fact, it’s not just deduction. Lenat and Marcus list 16 different “desiderata" that they believe “a general AI which is trustworthy" must have. Deduction is one; explanation, pro and con arguments, and analogy are three more. As you can tell, Lenat and Marcus set great store by that word “trustworthy". For ChatGPT to be truly intelligent in a human sense, you have to be able to trust its responses just as you would a human’s.

As Lenat and Marcus write: “humans possess knowledge and reasoning capabilities (unlike) today’s generative AI."

These ideas about AI emerged from the nearly four decades that Lenat and his team have worked on Cyc— that name excerpted from the word “encyclopaedia". Cyc builds intelligence on top of a vast store of information, too, but it is profoundly different from LLMs in the way it approaches AI. It seeks to “explicitly articulate the tens of millions of pieces of common sense and general models of the world that people have (and) represent those in a form that computers can reason over mechanically (and) quickly."

In short, human intelligence is far deeper, broader, more profound, than the AI we see today.

Still, this is not the place to tell you more about that, nor about Cyc’s innards. Lenat and his colleagues started building Cyc in the late 1980s at the Microelectronics and Computer Technology Corp. (MCC) in Austin. I worked at MCC in those years, in another AI programme. There were both tenuous links and a relatively friendly rivalry between the programmes. I say “relatively" because Lenat also attracted his share of critics and doubters. Look up the term “microLenat" sometime; enough said.

Yet the truth is that he was an AI pioneer in his own right. Something about the way he approached and built Cyc was, to him, more “right" than the ChatGPTs of today. It may seem that way to you too. After all, do you go about your life by calling on and analysing vast amounts of data, or by applying common sense to the world around you? Think about it.

In 1994, Lenat started a company, Cycorp, to continue building Cyc. It was never a commercial success. But as Marcus remarks in a tribute, it is still operational all these years on, and there are hardly any other AI firms that can say the same. In their paper, Lenat and Marcus suggest that future work in AI will need to “hybridize" the LLM and Cyc approaches.

So Cyc lives on. That’s Doug Lenat’s legacy. And someday, perhaps I’ll find out if my own tiny contribution lives on too.

Once a computer scientist, Dilip D’Souza now lives in Mumbai and writes for his dinners. His Twitter handle is @DeathEndsFun.


Published: 07 Sep 2023, 11:56 PM IST