LLMs lack understanding, but they made me think about how we speak.
The process of putting thoughts into words may be more similar to what an LLM does than we would like to admit. As we speak, words appear on the fly, seemingly out of nowhere. Occasionally we may be surprised by what we've said, or realize it didn't make sense. We string words together based on past experience in a kind of "probabilistic" way, and then the conscious part of the brain verifies, filters, and organizes those outputs.
Roger Penrose has, in my opinion, the most sober take on this: understanding requires both intelligence and consciousness, and the second component is not computational.
There is no doubt that LLMs make for a great metaphor for how humans process language. They were, after all, designed to model precisely that process. (Language *is* the act of stringing words together.)
But even if you were to grant that the metaphor is 1:1 (it's not), humans do far more than probabilistic language calculation. That's a tiny subset of what makes us human.
The interesting part is that before the mass adoption of LLMs, putting sentences together (the way LLMs now do) was also considered a human-only skill.
Now we learn that it's not the essential part.
Obviously neurons are not transistors, but right now I'm leaning toward the belief that our job is mainly applying meaning to things. A camera can capture an image, and an LLM can write working code, but what we add is mainly picking the right photo or the right code out of ten unfiltered examples, and deciding what those choices mean.
Yes, and to me, THAT'S the one thing a machine will never be able to do. Not unless it's trained on what meaning already is.