Will AI ever grasp quantum mechanics? Don’t bet on it

One might think that understanding the most mysterious theory in physics should not be difficult for artificial intelligence (AI). But the trouble isn’t clever software; it’s something AI doesn’t have: consciousness.
Artificial intelligence (AI) is moving fast—faster than many of us ever imagined. It can diagnose diseases from images, write complex computer programs, predict market trends and help simulate the birth of galaxies in just a few seconds. It would not be far-fetched to say that one day it may uncover the final secrets of the universe—perhaps even of quantum mechanics (QM), the most puzzling theory in modern physics.
As a physicist, I’ve used AI tools myself and been impressed by what they can do in seconds—things that used to take us years and huge amounts of funding. One might think that cracking the most mysterious theory in modern physics should not be difficult for AI, which is already helping scientists solve complicated equations and design quantum computers.
But I have a serious doubt: AI may never truly ‘understand’ quantum mechanics. And it is not a matter of computing power or programming. It is about something AI doesn’t have: consciousness.
Let me take you back to my student days. I was sitting in a quantum physics lecture, listening to my professor talk about the famous double-slit experiment. It showed something interesting: tiny particles like electrons behave like waves—until we try to observe them. The moment we ‘watch,’ their behaviour changes.
This strange result led to a shocking idea: the act of observing something can change reality itself. It is rather like a person at a gathering who behaves freely when unobserved but stiffens the moment they are noticed: electrons act like waves when no one is watching, yet show particle-like behaviour once a measurement is made.
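For readers who like to see the claim on paper, here is the standard textbook way of writing it (a generic sketch, with ψ₁ and ψ₂ denoting the wave contributions from the two slits, not figures from any particular experiment):

```latex
% Both slits open, no record of the path taken: the two amplitudes
% add before squaring, and the cross term produces the interference fringes.
P_{\text{unobserved}}(x) = \left| \psi_1(x) + \psi_2(x) \right|^2
  = |\psi_1(x)|^2 + |\psi_2(x)|^2 + 2\,\mathrm{Re}\left[ \psi_1^*(x)\, \psi_2(x) \right]

% Record which slit the electron went through and the cross term is lost,
% leaving the particle-like sum of two separate probabilities.
P_{\text{observed}}(x) = |\psi_1(x)|^2 + |\psi_2(x)|^2
```

Observation does not merely reveal the pattern; it removes the very cross term that produced it.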
Kurt Gödel’s incompleteness theorems, proven in the 1930s, drew attention to whether a formal system (like those AI is built on) could capture all mathematical truths. In any consistent system rich enough to express arithmetic, there will always be true statements the system cannot prove. This limitation applies to AI, which ultimately operates within algorithmic bounds.
The British physicist and mathematician Roger Penrose—winner of the 2020 Nobel Prize in Physics, whose singularity theorems reshaped modern general relativity—went where few dared. He extended Gödel’s incompleteness theorems to the mind itself. In The Emperor’s New Mind and Shadows of the Mind, Penrose argued that no algorithm, no matter how sophisticated, can truly mimic human consciousness. Why? Because consciousness, he suggested, does not arise from classical computation but from quantum processes inside the brain.
I tend to agree. AI can mimic quantum behaviour, but it does not experience it. It calculates probabilities but never truly observes. It outputs solutions without reflecting on their philosophical implications. It is like a brilliant student solving the Navier–Stokes equations (famously tied to a million-dollar prize) without ever sensing the turbulence of the flows they describe.
Modern AI, especially models built on machine learning and neural networks, rests on pattern recognition. Such models take large datasets and find patterns, extracting and optimizing statistical regularities. This works well for visual recognition, language generation and even some physics problems, such as calculating energy levels in molecular systems.
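To make the point about statistical regularities concrete, here is a minimal sketch of my own (not drawn from any particular system): a plain polynomial fit, standing in for a neural network, learns the ground-state energy of a toy two-level Hamiltonian purely from examples, with no notion of what a Hamiltonian means.

```python
import numpy as np

# Toy two-level Hamiltonian H(g) = [[1, g], [g, -1]]; its lowest
# eigenvalue is the "energy level" the model will learn to predict.
rng = np.random.default_rng(0)

def ground_state_energy(g: float) -> float:
    """Exact lowest eigenvalue of the 2x2 Hamiltonian H(g)."""
    h = np.array([[1.0, g], [g, -1.0]])
    return np.linalg.eigvalsh(h)[0]

# "Training data": couplings g and their exact ground-state energies.
g_train = rng.uniform(-2.0, 2.0, size=200)
e_train = np.array([ground_state_energy(g) for g in g_train])

# Fit a plain polynomial regression, a stand-in for a neural network.
coeffs = np.polyfit(g_train, e_train, deg=6)

# The fit predicts an unseen case well...
g_test = 1.3
print("predicted:", np.polyval(coeffs, g_test))
print("exact:    ", ground_state_energy(g_test))  # -sqrt(1 + g^2)
```

The fit is accurate, yet nothing in it knows what an energy level is.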
However, QM goes beyond problem-solving. It is a philosophical challenge. It asks questions that go beyond ‘what happens’ to ‘why does it happen this way’ and ‘what does it mean for something to happen at all?’ The debates between Einstein, Bohr and Schrödinger were not about the output of calculations; they were about the nature of reality.
Simulation isn’t comprehension: There is a subtle but critical difference between mimicking a phenomenon and understanding it. AI can simulate quantum systems with precision, especially with hybrid quantum-classical algorithms.
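Loosely in the spirit of such hybrid algorithms, here is a purely classical sketch (my own illustration, with a made-up 2x2 Hamiltonian) of the variational loop they rely on: a parameterised trial state is tuned until its energy matches the exact ground state.

```python
import numpy as np
from scipy.optimize import minimize

# A classical caricature of the variational loop behind hybrid
# quantum-classical algorithms: the trial state |psi(theta)> =
# (cos theta, sin theta) is tuned to minimise <psi|H|psi>.
H = np.array([[1.0, 0.5],
              [0.5, -1.0]])

def energy(theta: np.ndarray) -> float:
    """Expectation value <psi(theta)|H|psi(theta)>."""
    psi = np.array([np.cos(theta[0]), np.sin(theta[0])])
    return float(psi @ H @ psi)

result = minimize(energy, x0=np.array([0.1]))

print("variational minimum:", result.fun)
print("exact ground energy:", np.linalg.eigvalsh(H)[0])  # they agree
```

The optimiser lands on the right number every time; comprehension never enters the loop.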
Take the Schrödinger’s Cat thought experiment. A large language model can cite the Copenhagen interpretation, but it does not wrestle with the paradox of a cat that is neither dead nor alive the way a physicist does. AI doesn’t lose sleep over it. It doesn’t search for an explanation that ‘feels’ right in the deepest sense. Its answers are born not of genuine curiosity but of statistical association. Understanding requires not just prediction, but a leap of abstraction and an act of belief—something deeply human.
QM inherently resists ordinary logic. It is probabilistic, contextual and often counter-intuitive. AI, by contrast, is built atop layers of statistical models and optimization processes.
Ironically, this might make AI more naturally aligned with the probabilistic character of quantum physics than classical human thinking is. Yet quantum mechanics itself was a revolutionary conceptual leap. Will AI ever generate one of its own? Can it doubt the axioms it is trained on? Quantum theory is not just mathematics; it is a window onto the basic nature of existence. Whether it reveals a multiverse or a single, still-unresolved reality, it poses questions that touch on consciousness, causality and the limits of knowledge itself.
AI is a powerful tool—perhaps the most powerful we have ever built. But as a physicist, I remain cautious. Until AI can construct questions it was not trained to ask, challenge the patterns it was built upon and develop a sense of awe about its place in the universe, it will remain an assistant, not an originator of quantum understanding. In the end, it is not just about the numbers, but about tackling the mysteries of the universe with curiosity, humility and imagination. And that, for now, remains uniquely human.
These are the author’s personal views.
The author is a theoretical physicist at the University of North Carolina at Chapel Hill, US. He posts on X @NishantSahdev
