Tacit Knowledge and Life in the Swamp


Yes, I’m talking about generative AI again. Sorry.

It’s just that our old friend and elder tech overlord, Bill Gates, came out with some astonishing words last week, generated by his own mouth parts and brain, presumably, and not an AI assistant. You don’t have to look further than the headline to be taken aback: “Within 10 years, AI will replace many doctors and teachers—humans won't be needed for most things.”

Well, now. I have my doubts about that, even with my ambivalent-leaning-towards-positive view of generative AI. So, I pasted the article into ChatGPT and asked what it thought (or “thought”) of Gates's prediction. Here’s a selection of what it gave me:


I think we need to separate the technical feasibility from the human reality.


Yes, AI will assist—and in some contexts, even outperform—humans in tasks. But full replacement? That underestimates the social, emotional, and ethical dimensions of these roles.


Gates's call to young people—this is the frontier, go build something—is vintage Gates: optimistic, forward-looking, and pro-innovation. And he’s right: this is the most profound tech shift since the internet, maybe even since the printing press.


But what’s missing is a parallel call to ethics, design for equity, and human-centered governance. If we don’t build those into the foundation of this AI age, we’ll be reacting to crises rather than steering the future.


Gates paints a world where intelligence is cheap, fast, and everywhere. But the world we should aim for isn’t just smarter—it’s also wiser. That requires keeping people in the loop, not just for efficiency, but for humanity.


Not bad, bot. Not bad at all.


Three Practitioners

Let me give you three anecdotes to frame how I think about all of this.

Anecdote 1: The major league pitcher stands at the mound, runner on first. He stares past the batter at the catcher’s mitt, his target. All of his focus is straight ahead of him. But then he whips around and throws to the first baseman, catching the runner, who was about to steal second, off base.

Anecdote 2: The classroom teacher has handed out an assignment and is watching her students work. One girl at the back of the room is twirling her pigtails and looking up and to the right of her desk. The boy sitting in front of her has his head down on the desk. The girl sitting in front of him has her pencil in hand and is busy writing. The teacher walks over to the first girl, taps her on the shoulder, and tells her she needs to start working on the assignment, not draw pictures of horses. She knew this before she could even see the paper. But she leaves the other two students alone, because she knows they’re just thinking in preparation for working.

Anecdote 3: Dr. House is leading his team of young doctors in an exercise of differential diagnosis. He stands in front of a whiteboard and lists all of the symptoms of a mystery patient whose ailment none of them can figure out. As they look at the list of symptoms, the young doctors call out possible causes. They are bright, ambitious, highly educated doctors, and they know their stuff. Their heads are filled with facts about a hundred different diseases. As they call out possibilities, Dr. House shoots down each idea for one reason or another—two of the symptoms fit, but a third doesn’t, and so on. They’re left with nothing. House, alone, stares at the board for a long period of time, bouncing a ball against the wall. And then he gets it.

If you asked any of these three practitioners how they knew what they knew, they wouldn’t have an answer for it. They’d just say that they “felt it.”

This is called tacit knowledge. It’s knowing without knowing how you know it. It’s knowing deep in your bones.

Why Tacit Knowledge Matters

When people like Malcolm Gladwell talk about the 10,000 hours required to become expert in something, it’s not simply the piling up of factual or procedural information that makes the difference. It’s the internalizing of all of that information and the development of “schema,” or structures of meaning in the mind.

Researchers like John Bransford and Donald Schon studied what expertise and expert practice really entailed, and they found that it wasn’t simply that experts knew more things than novices; it was that they had organized those things into structures and patterns that allowed them to access information differently and make meaning from it.

One of my educational gurus, Grant Wiggins, used to refer to schema as “conceptual Velcro.” He imagined hundreds of little, discrete balls of information attaching themselves to a big idea or a concept in your mind, enlarging and enriching it over time as more and more details attach themselves to it. Without a big idea, the little bits of information have nothing to cling to and are easily forgotten. This is why teaching without an overarching narrative or context can be ineffective (and feel like irrelevant lists of facts). And it’s why a high school student’s schema of, say, the American Civil War, even in a well-structured class, will be thinner and simpler than a seasoned historian’s. It’s not just the additional facts that the historian knows; it’s how all of those facts interrelate and interplay over the years, how they reinforce or challenge each other to create a deep, nuanced understanding.

That richness of understanding becomes critical when expertise moves out of the library and informs the work of a practitioner in an important field, like teaching, or medicine, or construction, or government.

As Schon wrote:

There is a high, hard ground where practitioners can make effective use of research-based theory and technique, and there is a swampy lowland where situations are confusing 'messes' incapable of technical solution... In the swamp are the problems of greatest human concern.

This is why Dr. House can solve a mystery that the younger doctors can’t. He’s been around longer; he’s lived in the swamp. He’s seen diseases manifest themselves in so many different ways, with so many extenuating circumstances intervening and muddying the waters, that he has a vast repertoire he can call on when he needs to make sense of a strange, new situation.

The same holds true for the veteran teacher. She can look out at a classroom of new students on the first day of school and quickly suss out what kinds of kids she’s dealing with. She’s seen so many come and go over the years that she can discern patterns: “Oh, this new boy is a lot like Jeremy from last year, but also a little bit like Ahmed from three years ago. I think I know what to do with him.”

Will it be a perfect fit? No—each student is a unique individual. So is each patient. But you don’t have to start from zero. You call upon the old patterns and your repertoire of “plays,” how you handled those patterns in the past. And you can cut and paste and adapt to make things fit the details of the new situation. And then, that new adaptation becomes part of your repertoire to inform the next new challenge.

The same holds true for the veteran ballplayer. He can read the room, so to speak. He can feel the vibes. He develops a spidey-sense. Like the classroom teacher, it seems like he has eyes in the back of his head. But it’s just that he’s been here before. He’s faced hundreds of runners. He can read their body language. He can figure the odds of trying for a steal at this particular point in the game, with this particular setup of runners on base, with a runner like this particular one. He doesn’t have to calculate any of this consciously; it just happens.

That’s why he’s valuable. That’s why all of them are valuable. What they are able to do can’t quickly or easily be replaced. When they leave the field, or the classroom, they leave a void.

What AI Can and Can’t Do

Could generative AI take the place of veteran practitioners when there’s a gap in the workforce? Could it remove the need for having veteran practitioners at all? I’m not so sure.

A neural net trained on vast data certainly might notice statistical patterns that most of us wouldn’t see. There’s value in that. For example, some AIs have predicted patient mortality better than doctors—not because they understood death, but because they detected subtle correlations in lab results or word choices in notes.

But here’s the key difference: AI can’t explain why it knows what it knows. And more importantly, it doesn’t inhabit the knowledge. It doesn’t care if it’s right. It doesn’t feel the stakes. It can’t say: I’ve been here before, and I know how this tends to go, the way a teacher or doctor can.

The right decision—the right call—depends on more than cold analysis of the available data. It also requires an understanding of the stakes involved and the cost of making or missing the call.

So, even if AI imitates some outcomes of tacit knowledge, it lacks the sense of narrative continuity and responsibility that human practitioners bring. It’s not just what you know—it’s knowing that you’re the one who has to act on it, and you’ll have to live with that action.

When Gates talks about AI providing "great medical advice" or "great tutoring," what’s missing is that advice doesn’t live in isolation. It’s tangled up in:

Who the person is

What they've been through

What they fear, hope for, resist, or misunderstand

What this moment reminds the practitioner of, and how that memory guides action

It’s not just a task to be completed—it’s a moment to be navigated. And that navigation depends on memory, attention, humility, and care.

AI may get very good at simulating certain expert behaviors. But simulation is not the same as being situated in the social, moral, and temporal web that gives actions their meaning. A large language model might “know” what to say to a grieving parent, but only a human knows what it means to be saying it to that person, in that situation, in that moment.

If you’re a fan of the TV show I mentioned above, you might argue that the character of Gregory House doesn’t care about any of this stuff, that he prefers to make his diagnoses without ever meeting patients and getting mixed up in all of their feelings and fears and lies about themselves—that he behaves more or less the way AI would. That’s kind of true on the show, but not entirely. In episode after episode, House breaks the rules and ignores his superiors in order to do what he thinks is right for his patients—even if he’s never met them. He gets frustrated. He gets angry. He understands the stakes…even if he’s insufferable and snarly about it.

As I said in a post a few weeks ago:

When you’re breakable and mortal, you can have empathy for other creatures who share those traits. You can sympathize. You can exercise care. You can love….[and] you can love only when you understand what you stand to lose.

When the stakes are high, we need the best data and the best analysis we can get. We need cool heads to examine and correlate and connect the dots in that data. Sometimes, those cool heads might be large language models. But we will always need to filter that information through the breakable, mortal, lived-in meat of our human selves, to know not only what we can do, but also what we should do.

Published on April 04, 2025 05:07


Scenes from a Broken Hand

Andrew Ordover
Thoughts on teaching, writing, living, loving, and whatever else comes to mind