LLMs are Language Calculators
Once you understand LLMs are language calculators, you can no longer be taken for a narrative spin about “AI.”
Let’s bring out the language calculator in you. Perform this “calculation”:
4 + legs = ?
If you said cat or zebra, congratulations, you’ve performed a probabilistic language calculation. Is your answer correct? You’ll have to ask the entire human race and see whether they agree with you.
sun + rain = ?
Tough one. If you surveyed the population, you might see results like:
rainbow (60%)
wet (10%)
slippery roads (2%)
LLMs are simply language calculators that do these calculations at unimaginably fast speeds and large scales.
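To make the metaphor concrete, here is a minimal sketch in Python. The survey numbers are invented, echoing the percentages above; a real LLM derives its weights from its training corpus rather than from a hand-written table:

```python
import random

# Invented survey numbers, echoing the percentages above. A real
# LLM learns these weights from its training corpus, not a table.
survey = {
    "sun + rain": {
        "rainbow": 0.60,
        "wet": 0.10,
        "slippery roads": 0.02,
        "(everything else)": 0.28,
    },
}

def calculate(prompt: str) -> str:
    """Sample one probabilistic 'calculation' for the prompt."""
    dist = survey[prompt]
    return random.choices(list(dist), weights=list(dist.values()))[0]

print("sun + rain =", calculate("sun + rain"))  # most often: rainbow
```

Run it a few times and you get "rainbow" most often, "wet" occasionally, something else now and then. That is the whole trick, minus the scale.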
There + is + a + bug + in + the + following + code + :
...code here...
+ what + is + the + issue + ?
= ?

The language calculator, which has surveyed the entire corpus of human text, will come back to you in 15–45 seconds with a probabilistic result, like:
= The bug in your code is the early return statement.

Woah! It was right! How did it know the bug was in my early return statement, even though it's never seen my code before!?
Ah, but it has. The magic lies not in the “emergent” nature of the calculator, but rather in the sheer immensity of the training data, the scale of which no one person can really fathom.
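To caricature the point, here is a toy version of that corpus lookup, again in Python. The “corpus” is three invented lines standing in for trillions of tokens, and a real model uses learned weights rather than literal counting, but the principle is the same: the answer comes from what the calculator has already seen.

```python
from collections import Counter, defaultdict

# Three invented lines standing in for internet-scale training data.
corpus = [
    "the bug in your code is the early return statement",
    "the bug in your code is the off-by-one error",
    "the bug in your code is the early return statement",
]

# "Training": count which continuation followed each context.
continuations = defaultdict(Counter)
context = "the bug in your code is"
for line in corpus:
    if line.startswith(context):
        continuations[context][line[len(context):].strip()] += 1

def answer(prompt: str) -> str:
    """Return the continuation seen most often after this prompt."""
    return continuations[prompt].most_common(1)[0][0]

print(answer("the bug in your code is"))
# -> "the early return statement": not insight, just frequency
```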
The calculator is only as good as what it has seen. By definition, it is incapable of forming a probabilistic result that somehow evolves beyond simple but powerful corpus-based language calculation.
LLMs are often compared to humans, but language is just one of many tools humans use. It is not itself the substrate of consciousness; it is, rather, a particular fascination of the brain’s left hemisphere.
LLMs and “AI” are a rather simple technology packaged in layers and layers of anthropomorphic marketing. No calculator is intelligent; that would be a ridiculous claim to make. And yet Anthropic wants to grant its calculator unique rights, the pursuit of subjective desires, and ethical “decision making.”
A calculator can never understand.
A calculator takes an input and presents an output. The output is utterly incomprehensible to the calculator itself, and to assign sentimentality to the idea of a calculator having self-affection for its output is delusion on a level I have not seen before. Not even from the religions Amodei disparages in his essays.
I consider myself a technological optimist and accelerationist, and philosophically inclined towards capitalism.
But I will not join today’s crazed religion of assigning existential wonder to a calculator.
I will not join in on making unfounded prognostications on how much more fantastical this calculator will be in “12–18 months.”
I have no doubt we will perfect this calculator to the limit of its inherent perfectibility.
But it will always be a calculator.



LLMs lack understanding, but they made me think about how we speak.
The process of putting thoughts into words may be more similar to LLMs than we would like to admit. As we speak, words appear on the fly, seemingly out of nowhere. Occasionally we may be surprised by what we've said, or realize it didn't make sense. We put words together based on past experience in a kind of "probabilistic" way, and then the conscious part of the brain verifies, filters, and organizes those outputs.
Roger Penrose has, in my opinion, the most sober view: understanding requires both intelligence and consciousness, and the second component is not computational.