When it Comes to AI: Think Inside the Box

James Somers recently published an interesting essay in The New Yorker titled “The Case That A.I. Is Thinking.” He starts by presenting a specific definition of thinking, attributed in part to Eric B. Baum’s 2003 book What Is Thought?, that describes this act as deploying a “compressed model of the world” to make predictions about what you expect to happen. (Jeff Hawkins’s 2004 exercise in amateur neuroscience, On Intelligence, makes a similar case.)

Somers then talks to experts who study how modern large language models operate, and notes that the mechanics of LLMs’ next-token prediction resemble this existing definition of thinking. Somers is careful to constrain his conclusions, but still finds cause for excitement:

“I do not believe that ChatGPT has an inner life, and yet it seems to know what it’s talking about. Understanding – having a grasp of what’s going on – is an underappreciated kind of thinking.”

Compare this thoughtful and illuminating discussion to another recent description of AI, delivered by biologist Bret Weinstein on an episode of Joe Rogan’s podcast.

Weinstein starts by (correctly) noting that the way a language model learns the meaning of words through exposure to text is analogous to how a baby picks up parts of language by listening to conversations.

But he then builds on this analogy to confidently present a dramatic description of how these models operate:

“It is running little experiments and it is discovering what it should say if it wants certain things to happen, etc. That’s an LLM. At some point, we know that that baby becomes a conscious creature. We don’t know when that is. We don’t even know precisely what we mean. But that is our relationship to the AI. Is the AI conscious? I don’t know. If it’s not now, it will be, and we won’t know when that happens, right? We don’t have a good test.”

This description conflates and confuses many realities about how language models actually function. The most obvious is that once trained, language models are static; they consist of a fixed stack of transformer layers, each combining attention and feed-forward neural networks. Every word of every response that ChatGPT produces is generated by the same unchanging network.

Contrary to what Weinstein implies, a deployed language model cannot run “little experiments,” or “want” things to happen, or have any notion of an outcome being desirable or not. It doesn’t plot or plan or learn. It has no spontaneous or ongoing computation, and no updatable model of its world – all of which implies it certainly cannot be considered conscious.
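To make this concrete, here is a minimal sketch of what a deployed language model actually does during generation. (This assumes the Hugging Face transformers library and the small open GPT-2 model, with an arbitrary prompt and simple greedy decoding; it illustrates the general mechanism, not any particular production system.)

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load a trained model: its weights are now fixed.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()  # inference mode: no training, no learning

tokens = tokenizer("The weather tomorrow will be", return_tensors="pt").input_ids

with torch.no_grad():  # no gradients, so no weight updates are even possible here
    for _ in range(20):
        logits = model(tokens).logits        # the same frozen network, every step
        next_id = logits[0, -1].argmax()     # pick the most likely next token
        tokens = torch.cat([tokens, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(tokens[0]))

Whatever intelligence the output displays emerges from this one fixed computation, repeated a token at a time. There is no mechanism in this loop for the model to run experiments, pursue goals, or carry anything forward from one conversation to the next.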

As James Somers argues, these fixed networks can still encode an impressive amount of understanding and knowledge that is applied when generating their output, but the computation that accesses this information is nothing like the self-referential, motivated, sustained internal voices that humans often associate with cognition.

(Indeed, Somers specifically points out that our common conceptualization of thinking as “something conscious, like a Joycean inner monologue or the flow of sense memories in a Proustian daydream” has confused our attempts to understand artificial cognition, which operates nothing like this.)

~~~

I mention these two examples because they represent two differing styles of talking about AI.

In Somers’s thoughtful article, we experience a fundamentally modern approach. He looks inside the proverbial black box to understand the actual mechanisms within LLMs that create the behavior he observed. He then uses this understanding to draw interesting conclusions about the technology.

Weinstein’s approach, by contrast, is fundamentally pre-modern in the sense that he never attempts to open the box and ask how the model actually works. He instead observes its behavior (it’s fluent with language), crafts a story to explain this behavior (maybe language models operate like a child’s mind), and then extrapolates conclusions from his story (children eventually become autonomous and conscious beings, therefore language models will too).

This is not unlike how pre-modern people told stories to explain natural phenomena, and then reacted to the implications of those tales; e.g., lightning comes from the gods, so we need to make regular sacrifices to keep the gods from striking us with a bolt from the heavens.

Language model-based AI is an impressive technology that is accompanied by implications and risks that will require cool-headed responses. All of this is too important for pre-modern thinking. When it comes to AI, it’s time to start our most serious conversations by thinking inside the box.
