Do Tools Think? Automation, Intuition, and the Designer’s Role

Who (or What) Is Designing?

Somewhere between the mouse click and the model update, a question sometimes lingers, one that makes architects react with violence. Who is really designing here? “Me,” the architect will answer. “The computer is Just A Tool and I’m in Control.” Of course I agree. Except that many architects aren’t in control of jack shit, but that’s a different story. Still, the underlying question is interesting enough to deserve some reflection. Who’s responding to a set of predefined rules, constraints, and scripts? The system, or the human behind the screen?

Design has always been a negotiation between intention and material, purpose and medium, and many many other things. But now that negotiation might include another party: automation. In Revit, in Rhino, in Grasshopper, in the silent judgment of a machine-learning model suggesting the next move, automation is no longer a passive extension of the designer’s will. It curates options. It filters noise. Sometimes, it even surprises. As Cedric Price prophetically mused in the 1960s, “Technology is the answer, but what was the question?”

Yes, I’m in love with the work done by this guy.

We’re past the point of asking whether machines can draw. Apparently, they do. The real anxiety — and the real opportunity — lies in how they think alongside us. Or ahead of us. Or instead of us. When does a parametric script become more than a tool? When does it become a co-author? And if authorship is now distributed across layers of logic and learning, where do we locate intuition?

Marshall McLuhan, in his 1964 provocation, told us that media (and by extension, tools) are “extensions of man.” But what happens when those extensions begin to offer suggestions of their own? When your Revit family auto-populates parameters based on predictive rules you wrote two weeks ago and now barely remember? When your AI-powered concept sketcher proposes a form you would never have drawn, but now you can’t unsee?

In science fiction, this is where the tool becomes sentient. In architectural practice, it’s more subtle and perhaps more insidious. The automation of design decision-making isn’t some far-off dystopia; it’s a default setting. The danger isn’t that we’ll be replaced. It’s that we’ll stop noticing when we’ve ceded control. Revit seems to be making decisions about how a wall should behave, so people stop thinking about how a wall works.

Of course this isn’t a Luddite call to arms. It’s a design brief. Automation doesn’t erase authorship: it reshapes it. The creative act is still there, but it’s refracted through scripts, interfaces, presets, and algorithms. The challenge is to trace that act, to own it even when it appears in distributed, delegated form.

With genAI being super-hyped, the imperative may not be to draw a line between humans and machines. That binary has expired. It’s about recognising that we are designing with systems that contain fragments of our past thinking, our encoded preferences, and yes — occasionally — alien logics. If anything, the frontier of design today is less about form, and more about responsibility.

So let’s ask again, not out of fear but out of curiosity, ambition, and a refusal to let default settings define creative practice: Do tools think? And when they do, do we still think with them?

1. Thinking with Tools: from the Drafting Table to the Script

Before there were machines that could design, there were architects who designed machines. Cedric Price was one of them, of course, and my students have heard his name repeated over and over. His unrealised projects — the Fun Palace and the Potteries Thinkbelt — weren’t just designs: they were systems. Circuits of possibility, not monuments. Architecture, for Price, was not a noun but a verb, a process, a mechanism for adaptation. Technology could quickly become a solution in search of a problem, as his famous quote encapsulates, and that line still slices uncomfortably through today’s AI hype.

Price didn’t want to design buildings: he wanted to shape behaviours. He sketched not spaces but feedback loops. His work anticipated a world where the designer no longer “composes” space in the classical sense, but configures the conditions for it to emerge, transform, respond. In that sense, he didn’t just predict automation: he demanded it.

Enter Marshall McLuhan, another provocateur of his era. While Price reimagined buildings as intelligent environments, McLuhan reframed the entire idea of tools. His premise was deceptively simple: tools are extensions of man. The wheel extends the foot. The book extends the eye. The computer extends — and transforms — the nervous system. Every tool, McLuhan argued, externalises a part of the self. And when we externalise a function, we also alter it. We don’t just offload effort: we mutate perception, behaviour, and judgment.

For architects and designers, this should hit particularly close to home. Even if some people fight against the very concept, for years I’ve been pushing the idea that the transition from the drafting table to the digital model wasn’t just about speed or precision: it was a shift in worldview. The graphite line required a steady hand and a practised eye. The CAD line requires similar skills, plus the ability to snap to grid and an understanding of Boolean logic. Now, with parametric design and AI assistance, we’re shifting again: from drawing things to designing systems that generate things. The tool is no longer a passive medium. It’s a collaborator, a filter, a suggestion engine.

This isn’t a linear progression, though. It’s a messed-up lineage of designer-tool entanglements. The compass didn’t eliminate geometry: it invited more of it. The early use of perspective in Renaissance drawing didn’t reduce architecture to optics: it opened a spatial revolution. Similarly, the algorithm doesn’t end design; it reconfigures it. But each leap has consequences.

Sometimes leaps have beautiful consequences. Other times… well, we’ll see.

Today’s design tools don’t just follow orders: they anticipate needs, enforce standards, and sometimes obscure intentions. A Revit family loaded with pre-programmed behaviours might help you model faster, but it also nudges you toward certain assumptions about how walls, doors, and systems should behave. As European users know very well, sometimes these assumptions are rooted in the way things work in the developer’s country of origin, and when this doesn’t align with how things work in the rest of the world… things get messy. In the same way, a Grasshopper script gives you flexibility but also offloads your memory, intuition, and sometimes even your authorship onto a nested logic you may not fully control, or even remember, once you’re deep into the project.

We’re no longer just drawing with tools. We’re delegating decisions to them. The question is no longer just what we can do with them, but what they do with us, ’cause in the lineage of designer-tool interactions, this is the moment where agency gets messy. Tools have always shaped thought. But now, they also feed back into it. Faster, deeper, and with increasing autonomy. Cedric Price sketched the provocation, McLuhan mapped the feedback loop, but we’re the ones living in the recursive moment, caught between intention and automation, authorship and abstraction.

And if our tools now whisper back — with predictions, completions, scripts, and scores — we have to learn to listen carefully. Not just to what they say, but to what they assume, because behind every “automated” suggestion is a history of human thought: encoded, structured, sometimes forgotten, and tremendously biased.

2. Scripts, Routines, and Ghosts in the Shell

Automation doesn’t go south with a boom. It slides in quietly, line by line, wrapped in good intentions: efficiency, consistency, error reduction. But make no mistake — every script you write, every node you connect in Dynamo, might be the last thing you do on your model. That’s not what concerns us here, though. What concerns us is that you’re not just modelling walls or automating schedules: you’re encoding decisions, preferences, hierarchies. You’re capturing pieces of your design logic and handing them over to a machine that will execute them without hesitation, hesitation being a deeply human feature.

And it won’t even ask you to evaluate its performance. The nerve.

This is what we call procedural thinking: the shift from product to process, from result to recipe. Parametric design invited us into this logic, but BIM formalised it, rendered it a necessity because of the extra effort you need to put into… well, everything. It also made it easier, since the model isn’t a visual artefact but a database of rules and, once rules are in play, all you need for automation are standards. Routines proliferate in this framework: batch-renaming sheets, auditing worksets, reconciling type parameters across linked models. These aren’t design activities in the traditional sense, yet they structure the design process at scale. And here’s the twist: the more we automate, the more we rely on the machine’s memory rather than our own. Which is fine. That Dynamo script we wrote three months ago to sort room tags by department is doing the work now. The trick is… do we remember exactly how it works? Do our teammates? Who’s the author of that decision today?
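
To make this concrete, here is a minimal sketch of one of those routines, in the spirit of the batch-renaming example above. It assumes a pyRevit-style context where `__revit__` exposes the active document; the "A-" prefix convention is a hypothetical office standard, not anything Revit ships with.

```python
# Minimal sketch of a batch sheet-renaming routine, assuming a pyRevit
# context. The "A-" prefix is a hypothetical office convention.
from Autodesk.Revit.DB import FilteredElementCollector, ViewSheet, Transaction

doc = __revit__.ActiveUIDocument.Document
PREFIX = "A-"

sheets = FilteredElementCollector(doc).OfClass(ViewSheet).ToElements()

t = Transaction(doc, "Batch-rename sheets")
t.Start()
for sheet in sheets:
    # Encoded decision: every sheet number gets the discipline prefix.
    # Future teammates will inherit this assumption without ever seeing it.
    if not sheet.SheetNumber.startswith(PREFIX):
        sheet.SheetNumber = PREFIX + sheet.SheetNumber
t.Commit()
```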

There’s a ghost in the shell, and it might be you. A past version of yourself, frozen in code, still shaping outcomes long after you’ve mentally moved on.

Consider this: you build a Dynamo graph to auto-place fire extinguishers based on local code logic, spatial thresholds, and occupancy loads. On the surface, it’s a time-saver. But under the hood, it’s a snapshot of your interpretation of regulation, spatial standards, and project priorities. That script becomes part of the firm’s operational DNA. Others run it, unaware of the assumptions baked into it. If the regulation changes, the script doesn’t. Unless someone thinks to check.
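
Here is a hedged sketch of what such a snapshot might look like. Every number below is an illustrative assumption, not a real code citation; the point is that the interpretation, once written, stops evolving on its own.

```python
# Hypothetical snapshot of one interpretation of regulation, frozen in code.
# All thresholds are illustrative assumptions, not real code values.
MAX_TRAVEL_DISTANCE_M = 23.0   # assumed local code: max travel to an extinguisher
MIN_PER_FLOOR = 2              # assumed project standard, not regulation
OCCUPANTS_PER_UNIT = 200       # assumed occupancy ratio

def extinguishers_needed(corridor_length_m, occupancy_load):
    """One author's reading of the rules. If the regulation changes,
    this function doesn't, unless someone thinks to check."""
    by_distance = max(1, int(corridor_length_m // MAX_TRAVEL_DISTANCE_M))
    by_occupancy = max(1, occupancy_load // OCCUPANTS_PER_UNIT)
    return max(MIN_PER_FLOOR, by_distance, by_occupancy)

print(extinguishers_needed(corridor_length_m=60.0, occupancy_load=650))  # -> 3
```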

This is where decision-making gets displaced. Not eliminated: just moved, hidden in scripts, buried in custom nodes, abstracted behind UI buttons labelled “Update Data.” The human still decides what to automate, but not always how the automation plays out over time.

There’s a risk here, and it’s not dystopian: it’s banal. It’s the slow erosion of agency under the weight of convenience. It’s when design becomes reactive — responding to what the system can do — rather than what the designer envisions. And this happens subtly, not through command but through suggestion. A pop-up. A default setting. A template that doesn’t push the project but gently nudges it in a particular direction, before anyone sketches a line.

But there’s also potential — and power — in recognising this. Scripts aren’t inherently reductive. They can encode nuance, adaptability, even elegance. I’ve built Dynamo graphs that don’t just crunch data but generate responses based on thresholds of daylight availability (it’s a regulation in places like Russia). These scripts didn’t replace my thinking: they extended it. They acted as a reminder (and my ADHD sorely needs it), but they also challenged it. They made visible certain patterns I hadn’t fully grasped, and offered my designers alternate paths I hadn’t drawn.
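
As a toy illustration of that kind of threshold logic (the numbers and field names here are invented, not the actual graph):

```python
# Toy sketch of threshold-driven feedback, loosely in the spirit of the
# daylight checks described above. The 0.5% minimum is an assumption.
DAYLIGHT_FACTOR_MIN = 0.5  # percent; illustrative regulatory threshold

rooms = [
    {"name": "Bedroom 1", "daylight_factor": 0.8},
    {"name": "Bedroom 2", "daylight_factor": 0.3},
]

# The script doesn't decide for you: it surfaces the rooms worth rethinking.
flagged = [r["name"] for r in rooms if r["daylight_factor"] < DAYLIGHT_FACTOR_MIN]
print("Review daylight strategy for:", ", ".join(flagged))
```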

Automation becomes most interesting when it becomes an enlarging mirror, when the output surprises you not because it’s alien, but because it reveals a blind spot in your own logic. That’s the moment when the tool isn’t just executing: it’s provoking.

So yes, there are ghosts in the shell. But they’re not rogue AIs: they’re fragments of you, your team, your office culture, your assumptions. Scripts carry memory, and automation carries intent. The question is: do you still recognise yours?

Let’s see how to deal with this.

3. Remembering the Author: Strategies for Designing Automation with Memory and Intent

Automation should not be amnesia. If design is a cultural act, automation must be annotated, not just executed. Off the top of my head, I’ll give you five ideas on how to embed authorship, decision rationale, and institutional memory within automated processes, ensuring the designer’s hand remains legible even when the process is procedural.

1. Comment Like a Human: Writing Code That Thinks Out Loud

Most automation scripts are haunted houses: full of poltergeists, things that move by themselves, old blood trickling down the wall for no reason, and decisions made while running for your life. Variables named “x” or “temp.” Conditional logic written for a specific exception, long forgotten. What’s left is a script that maybe works but certainly doesn’t speak, not to others, not even to your future self.

To comment like a human means to narrate your thinking. Not just what the code does, but why you made the choice. Think of comments as pebble trails through your design logic, meant to carry you back home when you’re lost in the woods. A well-commented Dynamo graph or Python snippet should read like a manifesto in miniature: this is what I knew, this is what I assumed, this is what I chose.

Best practices:

- use plain language before code: “This step reorders rooms by program adjacency, not alphabetically”;
- declare exceptions explicitly: “This override is only for the 3rd-floor fire core: DO NOT APPLY ELSEWHERE”;
- document external logic sources: “Thresholds based on EN ISO 7730:2006 comfort criteria” (spoiler: it’ll soon be outdated);
- annotate decisions, not just syntax: “Chose list flattening here to prevent nested geometries: watch for broken grouping.”
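
Put together, a fragment that follows these practices might read like this. It’s a hypothetical snippet; the names, rankings, and thresholds are invented.

```python
# Hypothetical snippet showing comments that narrate intent, not just syntax.

# Programme adjacency ranking agreed at the 2025-03 client review:
# circulation first, storage last. NOT alphabetical, on purpose.
ADJACENCY_RANK = {"circulation": 0, "office": 1, "meeting": 2, "storage": 3}

# Thresholds based on EN ISO 7730 comfort criteria (check for revisions).
PPD_LIMIT = 10  # percent of people predicted dissatisfied; project target

rooms = [("102", "circulation"), ("101", "office"), ("103", "storage")]

# This step reorders rooms by programme adjacency, not alphabetically.
rooms.sort(key=lambda r: ADJACENCY_RANK[r[1]])

# Exception declared explicitly: the 3rd-floor fire core keeps its own
# sequencing. DO NOT APPLY THIS SORT ELSEWHERE without checking.
print(rooms)
```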

Well-commented automation is generous. It invites participation, critique, and future revision. Uncommented code is authoritarian: it demands obedience but reveals nothing of its reasoning. And, of course, I’m bound to dislike it.

2. Design Diaries and Automation Logs

Scripts don’t evolve in a vacuum: they respond to shifting needs, updated standards, new project types, and the occasional “why did we do this?” fire drill. Capturing these evolutions is not just project hygiene: it’s part of design authorship.

A design diary is an informal changelog with memory. It can be embedded in script headers, external text files, or even version-controlled documentation platforms. The goal: record not only what changed, but why.

Automation logs can track:

- version number and author/editor;
- date of update;
- description of the change;
- reason for the change (e.g., “New local code standard for corridor width”);
- known limitations or warnings.

Tool tip: markdown-based README files can live alongside scripts, hosted in cloud directories or Git-based systems. Version comments should be written in a conversational style, such as: “v3.1: Simplified the occupancy filter: previous method was overfitting small rooms.”
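
For instance, a script header can carry its own diary. This is a hypothetical example; the tool name, initials, and dates are invented.

```python
"""Room Tag Sorter - Dynamo/Python node.

Changelog (version, date, author/editor, change, reason):
  v3.1  2025-05-02  FB  Simplified the occupancy filter: previous
        method was overfitting small rooms.
  v3.0  2025-03-18  FB  Raised corridor threshold from 1.2 m to 1.4 m:
        new local code standard for corridor width.

Known limitations: unplaced or unbounded rooms are skipped silently.
"""
```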

This is less about compliance and more about creating a time machine for your thought process. Scripts are living artefacts, and they deserve a living memory.

3. Embedded Documentation in the Model

In digital workflows, documentation often gets exiled to external files — PDFs, checklists, emails — while the actual product — the model, in our case — is treated as neutral data. That’s a mistake. Even in this context, we can leverage the model and bring it back to its original purpose as a design medium, turning it into an interface.

Using Revit as an example, strategies for embedded documentation include:

- Parameter notes: use Revit’s shared/project parameters to hold rationale for specific values. Example: a “Design_Rationale” parameter explaining why certain types were used (see the sketch after this list).
- Annotated views: create dedicated views (e.g., “Script Logic Overview” or “Design Intent Map”) where documentation is visualised alongside geometry.
- Legends as narratives: use schedules and colour-coded legends (yes, filters do work in legends) not just for data but for storytelling (e.g., phases of automation, levels of manual override).
- “Ghost views”: as scary as it might sound, create views or sheets not for the delivery of the design intent, but for internal communication: logic flows, decision forks, exceptions.
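
A minimal sketch of the first strategy, assuming a pyRevit-style context and a shared parameter named “Design_Rationale” that has already been added to the project (both assumptions, not Revit defaults):

```python
# Minimal sketch: write a rationale note into a "Design_Rationale" shared
# parameter on the currently selected elements. Assumes pyRevit, and that
# the parameter already exists in the project.
from Autodesk.Revit.DB import Transaction

uidoc = __revit__.ActiveUIDocument
doc = uidoc.Document
note = "W2 chosen for acoustic rating; see coordination notes, 2025-04."

t = Transaction(doc, "Annotate design rationale")
t.Start()
for eid in uidoc.Selection.GetElementIds():
    param = doc.GetElement(eid).LookupParameter("Design_Rationale")
    if param and not param.IsReadOnly:
        param.Set(note)  # the rationale now travels with the model
t.Commit()
```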

Documentation inside the model travels with the model itself. It makes your thinking legible at the point of action, not buried in a folder. And it’s Agile as hell.

4. Health Checks and Intent Validators

Automation is not “fire and forget.” It’s “fire, monitor, correct.” Scripts and routines can drift, either from changing requirements or from silent misuse. Embedding validation logic into your tools helps maintain alignment between what the tool does and what you intended it to do.

Examples of Health Checks include:

- Sanity checks: do all generated elements conform to standards such as naming? Are parameter values within expected ranges?
- Ghost errors: are there model elements created but not placed? Filled parameters that don’t drive anything?
- Scope alerts: if the number of elements affected by a script doubles suddenly, should it pause and ask for confirmation?

These are validator strategies. They require a little more code, but they perform internal checks within the automation tools, so that the original intent is preserved and a failsafe switch is implemented.

Write scripts that report before they act, through a preview mode with warnings or flags. Implement visual diagnostics (e.g., temporary filters or colour overlays). Use dashboards (e.g., Power BI hooked to Revit exports) to audit performance over time. In short: give your scripts some humility. They should check themselves before they wreck themselves (and your project).
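
Here is a minimal sketch of a scope alert of the kind just described. The state file name and the doubling factor are illustrative assumptions.

```python
# Minimal "scope alert" sketch: pause for confirmation when a script's
# footprint suddenly doubles. File name and factor are assumptions.
import json
import os

STATE_FILE = "last_run_stats.json"

def scope_alert(affected_count, factor=2.0):
    """Report before acting: compare this run's footprint with the last one."""
    previous = None
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE) as f:
            previous = json.load(f).get("affected_count")
    if previous and affected_count > factor * previous:
        answer = input(
            "This run touches %d elements (last time: %d). Continue? [y/N] "
            % (affected_count, previous))
        if answer.strip().lower() != "y":
            raise SystemExit("Aborted: scope drift detected.")
    with open(STATE_FILE, "w") as f:
        json.dump({"affected_count": affected_count}, f)

scope_alert(affected_count=120)  # first run just records the footprint
```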

5. Product Ownership for Scripts and Models

Borrowing from Agile methodologies, consider treating key scripts and models not just as assets, but as products with maintainers, roadmaps, user feedback loops, and lifecycles.

Appointing Product Owners means:

- assigning responsibility for a script/model’s integrity and evolution;
- gathering input from its “users” (e.g., project teams, BIM coordinators);
- managing versioning, retirement, and onboarding;
- tracking change requests like features or bugs.

Treating scripts as products isn’t about adding more bureaucracy — God knows that’s the last thing we need in BIM — it’s about cultivating a design culture that values clarity, continuity, and care. When automation routines are owned, documented, and maintained like real tools rather than throwaway hacks, they stop being orphaned fragments of past projects and start functioning as trusted components of your practice. Versioning becomes meaningful: “We use v4.2 of the Lighting Zone Allocator: here’s what it does, here’s who to ask.” That clarity reduces the risk of misapplication and helps teams build confidence in using shared tools. It also encourages thoughtful refactoring over hasty patches, allowing systems to evolve deliberately instead of being duct-taped into irrelevance. In short, scripts deserve the same critical attention we give to buildings, because in the digital layer of design, they are the architecture.
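
What might that ownership look like in practice? Something as simple as a product card kept next to the script. Everything below, including the tool itself, is a hypothetical example.

```python
# Hypothetical "product card" kept alongside a shared script, so ownership,
# lifecycle, and open requests are visible to anyone who runs it.
PRODUCT_CARD = {
    "name": "Lighting Zone Allocator",
    "version": "4.2",
    "owner": "BIM coordinator, interiors team",  # here's who to ask
    "users": ["project teams", "BIM coordinators"],
    "status": "active",  # active | deprecated | retired
    "change_requests": [
        {"id": 17, "type": "bug", "summary": "fails on unbounded rooms"},
        {"id": 18, "type": "feature", "summary": "support linked models"},
    ],
}
```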

4. Reclaiming Intuition: the Designer’s Evolving Role

If scripts are to carry memory, then designers must remember themselves not just as coders or modellers — or technically inept artists, which is worse — but as authors in the broader sense. The practical strategies we’ve just explored — from annotated scripts to product ownership — aren’t side notes: they’re management skills needed to survive and thrive in an era where authorship risks dissolving into automation. They remind us that design intent doesn’t vanish in digital workflows; it simply mutates. And if we want to remain designers — not just operators — we need to reframe intuition not as an endangered species, but as a new form of literacy.

Too often, intuition is pitched as the opposite of automation, as if there were a clean divide between gut feeling and digital logic. But in practice, they interlace. When a seasoned designer glances at a Dynamo graph and knows it won’t work before running it… that’s intuition. When someone selects a default value not because the software suggested it, but because they recall a hard-earned lesson from a failed submission… that’s intuition too.

When you pause for a snack and your sandwich bites back, that’s a mimic (but that’s also another story).

As we learn from the theories of fast and slow thinking, intuition is compressed experience. And in automated environments, it becomes the skill of reading the system, the ability to recognise when a script is silently misaligned with project goals. It’s knowing when to trust the output and when to question the assumptions embedded in the algorithm. Like code fluency, intuition becomes a kind of pattern recognition, not in opposition to technical reasoning, but inseparable from it.

This is where reflexive authorship enters the frame. In post-automation design, authorship isn’t about total control: it’s about strategic intervention. You don’t handcraft every detail: you shape the conditions that generate detail. You build tools that can surprise you, then you respond to those surprises with judgment, not rigidity. You don’t just edit models; you edit systems that produce models. In that loop, authorship isn’t erased but multiplied.

This means that to be a designer is to navigate layered systems of agency. You are no longer the sole originator of form. You are a reader of constraints, a tuner of parameters, a negotiator between human intention and machine interpretation. And if that sounds abstract, consider the very real act of stepping back from an AI-generated product and asking: is this what we meant to build? That’s not just QA. That’s authorship doing its job.

The new creative stance is not about reclaiming the drafting table. It’s about learning to operate in a space where decisions are distributed across scripts, teams, platforms, and time. The designer becomes a curator of systems and a guardian of quality in environments that increasingly “design themselves.” This in itself is a form of leadership, not managerial but conceptual. Less about knowing all the answers, more about knowing which questions must remain yours. If this is the landscape, intuition is not a romantic escape from machines: it’s the only thing that lets us keep thinking with them, without being consumed by them.

So, reclaim your intuition. Write it into the code. Annotate it in the model. Speak it in reviews. Build it into the scripts that others will run long after you’ve moved on. Because that’s what authors do. They don’t just design outcomes: they leave behind systems that can still think.

Conclusion: Thinking Tools and Thinking Designers

Let’s come back to the question that started all of this:

Do tools think?

Not in the sentient, sci-fi sense. Not yet. But they do encode decisions. They propose options. They operationalise patterns. And more importantly, they remember even when we don’t. In that sense, yes: tools think. But only in the way that we have taught them to, which is precisely why the real issue isn’t whether tools can think: it’s whether we still think with them.

Hello? Is the brain still connected?

In the age of automation, authorship has not vanished but shifted, and now hides in naming conventions, in nested conditions, in scripts that propagate judgment across dozens of models. It lives in parameter schemas and in Dynamo routines built to reduce labour but capable of reducing meaning if left unexamined. And the more we separate design intent from technical excellence — the more we think we can flood studios with dozens of underpaid BIM specialists as long as we have supposedly brilliant project leaders — the more things will get messy. Automation reshapes ownership and displaces it. It spreads it across interfaces, fragments it across files, and embeds it in silent decisions that eventually will shape how buildings get drawn, calculated, modelled, built. This isn’t a lament, for me. It’s a liberation. But only if we acknowledge the new terrain.

We must stop imagining automation as an enemy of design. The real danger is something quieter: becoming passive in the presence of systems that offer too much, too fast. When suggestions become assumptions. When defaults become doctrine. When speed becomes strategy because we’re too greedy to pause and check.

So what now?

We write code as narrative. We annotate logic like we once sketched over butter paper — as a way of thinking, not just outputting. We train ourselves and our teams not just to use tools, but to interpret them, to interrogate them, to shape them back. In short: we become thinking designers for an era of thinking tools. Because design has always been about choices. And if the tools now help us choose, then our responsibility is even greater: to choose how we use them, to know when to listen and when to override, and to leave behind not just buildings, but ways of working that still think when we’re no longer there.
