Josh Clark's Blog, page 4

October 14, 2024

This AI Pioneer Thinks AI Is Dumber Than a Cat

Christopher Mims of the Wall Street Journal profiles Yann LeCun, AI pioneer and chief AI scientist at Meta. As you’d expect, LeCun is a big believer in machine intelligence, but he has no illusions about the limitations of the current crop of generative AI models. Their talent for language distracts us from their shortcomings:


Today’s models are really just predicting the next word in a text, he says. But they’re so good at this that they fool us. And because of their enormous memory capacity, they can seem to be reasoning, when in fact they’re merely regurgitating information they’ve already been trained on.


“We are used to the idea that people or entities that can express themselves, or manipulate language, are smart, but that’s not true,” says LeCun. “You can manipulate language and not be smart, and that’s basically what LLMs are demonstrating.”


As I’m fond of saying, these are not answer machines, they’re dream machines: “When you ask generative AI for an answer, it’s not giving you the answer; it knows only how to give you something that looks like an answer.”

LLMs are fact-challenged and reasoning-incapable. But they are fantastic at language and communication. Instead of relying on them to give answers, the best bet is to rely on them to drive interfaces and interactions. Treat machine-generated results as signals, not facts. Communicate with them as interpreters, not truth-tellers.

This AI Pioneer Thinks AI Is Dumber Than a Cat | WSJ
Published on October 14, 2024 05:47

October 13, 2024

Beware of Botshit

botshit noun: hallucinated chatbot content that is uncritically used by a human for communication and decision-making tasks. “The company withdrew the whitepaper due to excessive botshit, after the authors relied on unverified machine-generated research summaries.”

From this academic paper on managing the risks of using generated content to perform tasks:

Generative chatbots do this work by “predicting” responses rather than “knowing” the meaning of their responses. This means chatbots can produce coherent-sounding but inaccurate or fabricated content, referred to as “hallucinations”. When humans use this untruthful content for tasks, it becomes what we call “botshit”.

See also: slop.

Beware of Botshit: How to Manage the Epistemic Risks of Generative Chatbots
Published on October 13, 2024 08:21

A Radically Adaptive World Model

Ethan Mollick posted this nifty little demo of a research project that generates a world based on Counter-Strike, frame by frame in response to your actions. What’s around that corner at the end of the street? Nothing; that portion of the world hasn’t been created yet. It comes into being only when you turn in that direction, and the world is created just for you in that moment.

This is not a post proposing that this is the future of gaming, or that the tech will replace well-crafted game worlds and the people who make them. This proof of concept is nowhere near ready or good enough for that, except perhaps as a tool to assist and support game authors.

Instead, it’s interesting as a remarkable example of a radically adaptive interface, a core aspect of Sentient Design experiences. The demo and the research paper behind it show a whole world being conceived, compiled, and delivered in real time. What happens when you apply this thinking to a web experience? To a data dashboard? To a chat interface? To a calculator app that lets you turn a blank canvas into a one-of-a-kind on-demand interface?

The risk of radically adaptive interfaces is that they turn into robot fever dreams without shape or destination. That’s where design comes in: to conceive and apply thoughtful constraints and guardrails. It’s weird and hairy and different from what’s come before.

Far from replacing designers (or game creators), these experiences require designers more than ever. But we have to learn some new skills and point them in new directions.

Ethan Mollick's post on LinkedIn
Published on October 13, 2024 07:21

Exploring the AI Solution Space

Jorge Arango explores what it means for machine intelligence to be “used well” and, in particular, questions the current fascination with general-purpose, open-ended chat interfaces.


There are obvious challenges here. For one, this is the first time we’ve interacted with systems that match our linguistic abilities while lacking other attributes of intelligence: consciousness, theory of mind, pride, shame, common sense, etc. AIs’ eloquence tricks us into accepting their output when we have no competence to do so.


The AI-written contract may be better than a human-written one. But can you trust it? After all, if you’re not a lawyer, you don’t know what you don’t know. And the fact that the AI contract looks so similar to a human one makes it easy for you to take its provenance for granted. That is, the better the outcome looks to your non-specialist eyes, the more likely you are to give up your agency.


Another challenge is that ChatGPT’s success has driven many people to equate AIs with chatbots. As a result, the current default approach to adding AI to products entails awkwardly grafting chat onto existing experiences, either for augmenting search (possibly good) or replacing human service agents (generally bad).


But these “chatbot” scenarios only cover a portion of the possibility space, and not even the most interesting one.


I’m grateful for the call to action to think beyond chat and general-purpose, open-ended interfaces. Those have their place, but there’s so much more to explore here.

The popular imagination has equated intelligence with convincing conversation since Alan Turing proposed his “imitation game” in 1950. The concept is simple: if a system can fool you into thinking you’re talking to a human, it can be considered intelligent. For the better part of a century, the Turing Test has shaped popular expectations of machine intelligence from science fiction to Silicon Valley. Chat is an interaction cliché for AI that we have to escape (or at least question), but it has a powerful gravitational force. “Speaks well = thinks well” is a hard perception to break. We fall for it with people, too.

The “AI can make mistakes” labels don’t cut it.

Given the outsized trust we place in systems that speak so confidently, designers have a big challenge when crafting intelligent interfaces: how can you engage the user’s agency and judgment when the answer is not as solid as the LLM’s confident delivery suggests? Communicating the accuracy and confidence of results is a design job. The “AI can make mistakes” labels don’t cut it.
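Here’s a rough sketch of what that design job might look like at the code level. The confidence field, the thresholds, and the treatment names are my own placeholder assumptions (they’re not from Jorge’s post or any particular product); the point is to map whatever reliability signal the system has onto the presentation, instead of rendering every result in the same confident voice.

```ts
// Sketch only: the confidence signal here is an assumption. In practice it might
// come from retrieval coverage, token log-probabilities, or a self-check pass;
// none of those are specified in the post.

type Treatment = "present-with-sources" | "present-as-suggestion" | "ask-before-acting";

interface ModelResult {
  text: string;
  confidence: number; // assumed 0..1 signal attached upstream
  sources: string[];  // citations gathered during retrieval, if any
}

// Map the signal to a presentation decision instead of printing the text as fact.
function chooseTreatment(result: ModelResult): Treatment {
  if (result.confidence >= 0.8 && result.sources.length > 0) {
    return "present-with-sources";  // strong signal: show the answer with its citations
  }
  if (result.confidence >= 0.5) {
    return "present-as-suggestion"; // hedge the copy: "This looks like…"
  }
  return "ask-before-acting";       // weak signal: keep the user's judgment in the loop
}

// Example: a shaky contract-review result is framed as a question, not an answer.
const draft: ModelResult = { text: "Clause 7 waives liability.", confidence: 0.42, sources: [] };
console.log(chooseTreatment(draft)); // -> "ask-before-acting"
```

Even a crude mapping like this gives the interface a vocabulary for doubt, which is more than a blanket “AI can make mistakes” label offers.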

This isn’t a new challenge. I’ve been writing about systems smart enough to know they’re not smart enough for years. But the problem gets steeper as the systems appear outwardly smarter and lull us into false confidence.

Jorge’s 2x2 matrix of AI control vs AI accuracy is a helpful tool to at least consider the risks as you explore solutions.

Jorge Arango’s model of the AI possibility space is a two-by-two matrix of AI control vs. AI accuracy. Source: Jorge Arango

This is a tricky time. It’s natural to seek grounding in times of change, which can cause us to cling too tightly to assumptions or established patterns. Loyalty to the long-held idea that conflates conversation with intelligence is doing us a disservice. Conversation between human and machine doesn’t have to mean literal dialogue. Let’s be far more expansive in what we consider “chat” and unpack the broad forms these interactions can take.

Exploring the AI Solution Space | Jorge Arango
Published on October 13, 2024 06:58

October 8, 2024

Introducing Generative Canvas

On-demand UI! Salesforce announced its pilot of “generative canvas,” a radically adaptive interface for CRM users. It’s a dynamically generated dashboard that uses AI to assemble the right content and UI elements based on your specific context or request. Look out, enterprise, here comes Sentient Design.


I love to see big players doing this. Here at Big Medium, we’re building on similar foundations to help our clients build their own AI-powered interfaces. It’s exciting stuff! Sentient Design is about creating AI-mediated experiences that are aware of context and intent so that they can adapt in real time to specific needs. Veronika Kindred and I call these radically adaptive interfaces, and they show that machine-intelligent experiences can be so much more than chat. This new Salesforce experience offers a good example.

For Salesforce, generative canvas is an intelligent interface that animates traditional UI in new and effective ways. It’s a perfect example of a first-stage radically adaptive interface, and one that’s well suited to the sturdy reliability of enterprise software. Generative canvas uses all of the same familiar data sources as a traditional Salesforce experience might, but it assembles and presents that data on the fly. Instead of relying on static templates built through a painstaking manual process, generative canvas is conceived and compiled in real time. That presentation is tailored to context: it pulls data from the user’s calendar to give suggested prompts and relevant information tailored to their needs. Every new prompt or new context gives you a new layout. (In Sentient Design’s triangle framework, we call this the Bespoke UI experience posture.)

So the benefits are: 1) highly tailored content and presentation to deliver the most relevant content in the most relevant format (better experience), and 2) elimination or reduction of manual configuration processes (efficiency).

In Sentient Design, we call this the Bespoke UI experience posture.

Never fear: you’re not turning your dashboard into a hallucinating robot fever dream. The UI stays on the rails by selecting from a collection of vetted components from Salesforce’s Lightning design system: tables, charts, trends, etc. AI provides radical adaptivity; the design system provides grounded consistency. The concept promises a stable set of data sources and design patterns, remixed into an experience that matches your needs in the moment.
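Salesforce hasn’t published its implementation in this kind of detail, so treat the following as a minimal illustrative sketch of the general “design system as guardrail” idea rather than how generative canvas actually works. The component names, the layout schema, and the proposeLayout stub are placeholders: the model proposes a layout as structured data, and the renderer keeps only vetted components bound to trusted data sources.

```ts
// Minimal sketch of "AI proposes, the design system disposes."
const VETTED_COMPONENTS = ["table", "chart", "trend", "metric", "list"] as const;
type VettedComponent = (typeof VETTED_COMPONENTS)[number];

interface LayoutBlock {
  component: VettedComponent;
  dataSource: string; // must reference an existing, trusted source
}

// Stand-in for a real model call; a production version would send the user's
// context to an LLM and ask for a layout proposal as JSON.
async function proposeLayout(context: string): Promise<string> {
  return JSON.stringify([
    { component: "chart", dataSource: "opportunity-pipeline" },
    { component: "hologram", dataSource: "opportunity-pipeline" }, // not a vetted component
  ]);
}

async function generateCanvas(context: string, trustedSources: string[]): Promise<LayoutBlock[]> {
  const proposed: Array<{ component: string; dataSource: string }> = JSON.parse(
    await proposeLayout(context)
  );

  // The guardrail: anything outside the vetted component set or trusted data sources is dropped.
  return proposed.filter(
    (block): block is LayoutBlock =>
      (VETTED_COMPONENTS as readonly string[]).includes(block.component) &&
      trustedSources.includes(block.dataSource)
  );
}

generateCanvas("prep for my 2pm pipeline review", ["opportunity-pipeline"]).then(console.log);
// -> only the vetted "chart" block survives
```

The interesting design decision is the filter: the model is free to be creative about arrangement, but it can only spend vocabulary the design system already trusts.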

This is a tidy example of what happens when you sprinkle machine intelligence onto a familiar traditional UI. It starts to dance and move. And this is just the beginning. Adding AI to the UX/UI layer lets you generate experiences, not just artifacts (images, text, etc.). And that can go beyond traditional UI to yield entirely new UX and interaction paradigms. That’s a big focus of Big Medium’s product work with clients these days, and of course of the Sentient Design book. Stay tuned, lots more to come.

Introducing Generative Canvas: Dynamically Generated UX, Grounded in Trusted Data and Workflows
Published on October 08, 2024 14:06

September 28, 2024

Your Sparkles Are Fizzling

This essay is part of a series about Sentient Design, the already-here future of intelligent interfaces and AI-mediated experiences.

The book

Sentient Design by Josh Clark with Veronika Kindred will be published by Rosenfeld Media.

The talk

Photo of Josh Clark speaking in front of a screen that says, "Get cozy with casual intelligence"

Watch Josh Clark's talk “Sentient Design”

The workshop

Book a workshop for designing AI-powered experiences.

Need help?

If you’re working on strategy, design, or development of AI-powered products, we do that! Get in touch.

Please put the “sparkles” away. They’re sprinkled everywhere these days, drizzled on every AI tool, along with a heaping dollop of purple hues and rainbow gradients. It’s magic, it’s special, it’s sparkly.

It’s really not magic, of course; it’s just software. And because so many companies have heedlessly bolted AI features onto existing products, the shipped features are often half-baked, experimental, frequently shoddy. Instead of “magic” or “new,” sparkles are fast becoming the new badge for “beta.” The message sparkles now convey is, “This feature is weird and probably broken. Good luck!” It would be more honest to use this emoji instead: ����

So what symbol should you use to represent machine-intelligent tools? How about no symbol at all. No need to segregate AI features or advertise their AI-ness as anything special. What matters to the user is what it does, not how it’s implemented.

Image of buttons before and after having their sparkles removed. The buttons are labeled "Create," "Summarize," and "Revise."

We give no special billing to spell checkers or spam filters or other algorithmic tools; we just expect them to work. That’s the expectation we should set for all features, whether they’re enabled by machine intelligence or not. A feature should just do what it says on the tin, without the sparkle-shaped asterisk. You’re revising text, generating images, assembling insights, making music, recommending content, predicting next steps, etc. These are simply product features, like every other powerful feature your experience offers, so present them that way. No special label required, no jazz hands needed.

Drop the sparkles. No need to segregate AI features or advertise their AI-ness as anything special. What matters to the user is what it does, not how it’s implemented.

One of the essential characteristics of Sentient Design is that it is deferential. It’s a posture that suggests humility. Put the disco ball away. Your goal should be to make machine intelligence a helpful and seamless part of the user experience, not a novelty or an afterthought.

If the feature actually is weird, unreliable, or broken, that’s a bigger problem that sparkles won’t solve. Maybe your feature isn’t ready to ship. Or better, perhaps you need to find a way to make the weirdness an asset instead of a liability. Now that’s a fun design challenge to solve, and one that engages with the grain of machine intelligence as a design material. Take a measured, pragmatic approach to what machine intelligence is good at and what it isn’t, and be equally measured in how you label it. Be cool. Put away the sparkles.

Is your organization trying to understand the role of AI in your mission and practice? We can help! Big Medium does design, development, and product strategy for AI-mediated experiences; we facilitate Sentient Design sprints; we teach AI workshops; and we offer executive briefings. Get in touch to learn more.

Published on September 28, 2024 14:23

September 27, 2024

Data Whisperers, Pinocchios, and Sentient Design

This essay is part of a series about Sentient Design, the already-here future of intelligent interfaces and AI-mediated experiences.

The book

Sentient Design by Josh Clark with Veronika Kindred will be published by Rosenfeld Media.

The talk

Watch Josh Clark's talk “Sentient Design”

The workshop

Book a workshop for designing AI-powered experiences.

Need help?

If you’re working on strategy, design, or development of AI-powered products, we do that! Get in touch.

It took all of three minutes for Google’s NotebookLM to create this lively 12-minute podcast-style conversation about Sentient Design:

Bonkers, right? The script and the voices, along with the very human pauses, ums, and mm-hmms: all of it is machine-generated. It even nails the emotional tenor of the podcast format. I gave NotebookLM several chapters of the Sentient Design book manuscript, pushed a button, and this just popped out.[1]

The speed, quality, and believability of this podcast are remarkable, but they’re not even the most interesting thing about it. Here’s the bit that gets me excited: by transforming data from one format into another, this gives the content new life, enabling a new use case and context. It’s a whole new experience for content that was previously frozen into a different shape. Instead of 100 pages of PDFs that require your eyes and a few hours of time and attention, you’ve got a casual, relatable conversation that you can listen to on the go to get the gist in just a few minutes. It’s a new format, new mindset, new context, new user persona… and a new level of accessibility.

Listen to the data whisperer

In Sentient Design, Veronika and I call this the data whisperer experience pattern. The data whisperer shifts content or data from one format to another. One super-pragmatic example is extracting structured data from a mess of unstructured content: turn blobs of text into JSON or XML so that they can be shared among systems. Machine intelligence is great at doing things like this: translation among formats. But this can go so much farther than file types.
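Here’s a minimal sketch of that super-pragmatic case. The prompt, the field names, and the completeText stub are placeholders rather than any particular model API; the point is that the model’s output gets parsed and validated before any downstream system treats it as structured data.

```ts
// A minimal data-whisperer sketch: turn a blob of unstructured text into validated JSON.
interface Contact {
  name: string;
  email: string;
}

// Stand-in for a real LLM call (e.g., a chat-completion request that asks for strict JSON).
async function completeText(prompt: string): Promise<string> {
  return JSON.stringify({ name: "Ada Lovelace", email: "ada@example.com" });
}

async function extractContact(blob: string): Promise<Contact | null> {
  const raw = await completeText(
    `Extract {"name": string, "email": string} as strict JSON from:\n${blob}`
  );

  // Treat the output as a signal, not a fact: parse defensively and validate
  // before anything downstream trusts it.
  try {
    const parsed = JSON.parse(raw);
    if (
      typeof parsed.name === "string" &&
      typeof parsed.email === "string" &&
      parsed.email.includes("@")
    ) {
      return { name: parsed.name, email: parsed.email };
    }
  } catch {
    // malformed output is rejected rather than passed along
  }
  return null;
}

extractContact("Met Ada Lovelace yesterday; reach her at ada@example.com").then(console.log);
```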

If you listen to the podcast, you’ll hear the robot hosts talk about the Sentient Triangle, a way to describe different postures for machine-intelligent experiences. Data whisperers stake out the interoperable point of the triangle.

Triangle diagram of Sentient Design experiences across three attributes: grounded, interoperable, and radically adaptive. Data whisperers stake out the interoperable corner of the Sentient Design triangle.

Interoperability is typically associated with portability among systems. But data whisperers become even more powerful when they enable portability among experiences. Instead of focusing only on the artifacts that machine intelligence can generate with these transformations, consider the new interaction paradigms they could enable.

As a designer, ask yourself: what becomes possible when I liberate this content from its current form, and how can machine intelligence help me do that? And even better: what if you think beyond rote translation of modes or formats (text to speech, English to Chinese, PDF to JSON) to translation of manner, interaction, or even meaning? That’s what the podcast example hints at; it’s doing something far more than converting 100 PDF pages into audio. It’s reinterpreting the content for a whole new context.

Tim Paul of GOV.UK put together this side project showing another example of using AI to rescue content from PDFs. His experiment translates static government forms into interactive multi-step web forms rendered in the gov.uk design system.

Screenshot of Tim Paul’s AI experiment converting a PDF to a web form. Tim Paul’s AI experiment rescues forms trapped inside PDFs and converts them to web forms using GOV.UK’s design system.

Here at Big Medium, we did something similar when we built a Figma plugin for one of our clients. The plugin takes a sketch (or text prompt or screenshot) and creates a first-pass Figma layout using the company’s design system components. Here it is in action, acting as a sous chef to organize the components so that the designer “chef” can take over and create the final refined result:

Figma plugin generates a design layout from a wireframe sketch.

The Pinocchio interaction pattern

This plugin is a tight example of the data whisperer pattern: it thaws meaning and agency from a frozen artifact. But it’s also doing something more: it transforms content from low fidelity to high fidelity.

In Sentient Design, we call this the Pinocchio interaction pattern: turning the puppet into a real boy. Use the Pinocchio pattern to flesh an outline into text, zap a sketch into an artwork, or transform a wireframe into working code. Pinocchio is a data whisperer that elevates an idea into something functional.

Here’s another Pinocchio example. tldraw is a framework for creating digital whiteboard apps. The tldraw team created a “Make Real” feature that transforms hand-drawn sketches or wireframes into interactive elements built with functional code. Just draw a sketch, select it, and click the Make Real button to insert a working web view into the canvas. It’s an inline tool to enable quick and easy prototyping, turning a Pinocchio sketch into “real boy” markup.
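For a sense of the general shape of that interaction (this is not tldraw’s actual code, and describeSketchAsHtml stands in for a request to a vision-capable model), the flow is roughly: capture the selected sketch as an image, ask the model for self-contained HTML, and drop the result into a sandboxed frame on the canvas.

```ts
// Stand-in for a vision-model request ("turn this wireframe into working HTML").
async function describeSketchAsHtml(pngDataUrl: string): Promise<string> {
  return `<button onclick="alert('hi')">Sign up</button>`;
}

async function makeReal(pngDataUrl: string, container: HTMLElement): Promise<void> {
  const html = await describeSketchAsHtml(pngDataUrl);

  // Sandbox the generated markup so a hallucinated script can't touch the host page.
  const frame = document.createElement("iframe");
  frame.sandbox.add("allow-scripts");
  frame.srcdoc = html;
  container.appendChild(frame);
}
```

The sandbox is the quiet design decision here: generated markup is a guess, so it gets the same trust level as any other untrusted embed.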

Photo of an interface sketch in tldraw, next to the generated working prototype.

When you bake the Pinocchio pattern into the fabric of an application, its interface becomes a radically adaptive surface: a free-form canvas that adapts to your behavior and context. You can see Apple working this angle in the iPad applications it previewed earlier this year. In Notes for iPad, Apple’s demo shows how you can circle a sketch, and the app creates a high-fidelity drawing based on the sketch and its surrounding content.

Demo of Notes for iPad: circle a sketch and it transforms into a high-fidelity image

Or the Math Notes feature of iPad’s new calculator app lets you scrawl equations on the screen, and it transforms them into working math: spreadsheet functionality in sketch format. Scribbled numbers become variables. A column of figures becomes a sum. An equation becomes an interactive graph.

Demo of Math Notes in Calculator for iPad

Experience over artifact

With generative AI, the generation tends to get all the attention. That’s a missed opportunity. With the data whisperer and Pinocchio patterns, you can create not only new content artifacts but new experience paradigms. You can liberate content and interaction from frozen formats, unlock new use cases, and help people move from rough idea to refined concept. Lean into it, friends, there’s much to explore here.

Is your organization trying to understand the role of AI in your mission and practice? We can help! Big Medium does design, development, and product strategy for AI-mediated experiences; we facilitate Sentient Design sprints; we teach AI workshops; and we offer executive briefings. Get in touch to learn more.

My favorite part of the podcast is when one of the AI-generated hosts says, “Sometimes I even forget I’m talking to a computer.” The other responds, “Tell me about it.” ↩

Published on September 27, 2024 06:00

September 24, 2024

Workshop: Craft AI-Powered Experiences with Sentient Design

Reserve your workshop

Contact Josh Clark to inquire about availability and pricing.

This workshop teaches and explores the Sentient Design methodology for designing intelligent interfaces and AI-mediated experiences. Learn more about Sentient Design:

The book

Sentient Design by Josh Clark with Veronika Kindred will be published by Rosenfeld Media.

The talk

Photo of Josh Clark speaking in front of a screen that says, "Get cozy with casual intelligence"

Watch Josh Clark's talk “Sentient Design”

Need help?

If you’re working on strategy, design, or development of AI-powered products, we do that! Get in touch.

Now available! We offer private workshops to teach product and design teams to imagine, design, and deliver AI-powered features and products. It’s all backed by Big Medium’s Sentient Design methodology, a practical framework for designing intelligent interfaces that are radically adaptive to user needs and intent. We’ve been doing these workshops at conferences and with our clients, and now we want to share them with you.

These measured, pragmatic workshops are zero hype. They provide real-world AI literacy and practical techniques that your team can use today (like right now) to imagine surprising new experiences or to improve existing products. You and your team will discover entirely new interaction paradigms, along with new challenges and responsibilities, too. This demands fresh perspective, technique, and process; Sentient Design provides the framework for delivering this new kind of experience.

What’s in the workshop?

Our Sentient Design workshops are immersive, hands-on experiences that guide participants through the whole life cycle of designing machine-intelligent experiences. Here’s what we’ll do together:

Prototype a new product: identify, imagine, and design AI-powered features that solve real problems (not just “because AI”).
Learn to use machine-generated content and interaction as design material in your everyday work.
Use machine intelligence to deliver entirely new interactions or simply elevate traditional interfaces.
Explore radically adaptive interfaces that are conceived in real-time based on user context and intent.
Get your hands dirty working with models directly to learn their strengths and quirks.
Discover emerging UX patterns and postures that go way beyond “slap a chatbot on it.”
Learn the art of defensive design. AI can be… unpredictable (and weird, wrong, and biased, too). Make its weirdness an asset instead of a liability.
Adopt techniques to set user expectations and guide behavior to match the system’s ability.
Use responsible practices that build trust and transparency.

The workshops mix it up with lecture (the fun kind), discussion, and hands-on exercises. You and your team will put theory into practice through collaborative design sessions, prototyping exercises, and critical analysis of real-world AI applications. By the end of the workshop, your team will have a solid foundation in Sentient Design principles and a toolkit of techniques and design patterns to apply in your work.

Who it’s for

This workshop is perfect for designers, product owners/managers, and design-minded developers who want to stay ahead of the curve in AI-powered experiences. If you’re curious about how to make AI work for people (instead of the other way around), this is for you.

The workshop can flex to accommodate groups from 10 to 100 people.

Who’s teaching?

The workshops are led by Big Medium’s Josh Clark and Veronika Kindred, authors of Sentient Design, the forthcoming book from Rosenfeld Media.

Reserve your workshop

Get in touch with Josh Clark to explore availability and pricing.

We offer flexible formats to suit your team’s needs:

Half-day introduction
Full-day immersion
Two-day comprehensive program

The workshops may be offered online via video conference or in person. No special preparation is required for your team. No prior AI or machine-learning experience is required; only a healthy mix of imagination and skepticism. Bring your human intelligence, and we’ll supply the artificial kind.

Workshops are flat-rate (not by number of attendees). Half-day workshops start at $5000, and pricing varies for length and for remote vs in-person.

Our workshops are also available at our frequent speaking events. Here’s what’s coming up:

Published on September 24, 2024 05:56

September 20, 2024

The Already-Here Future of Prototyping

I’ve spent pretty much my entire career helping teams design software rather than pictures of software. Development is design. Frontend design. Front-of-front-end code. Front-end workshop environments. Death to the waterfall. The Hot Potato process. Designer/developer collaboration. Pattern Lab. I’ve shared all of these concepts, words, and tools to help bring the worlds of design and development closer to one another so that people can make great things together.

Unfortunately, all these years later it still feels like our design processes have calcified into a one-dimensional, one-directional waterfall workflow. Design tooling has gotten significantly better over the years, which in my view is truly a double-edged sword. On one hand, getting designers using components, variants, autolayout, and variables is way better than Photoshop smart objects, and these developments have been extremely welcome. On the other hand, the tooling has gotten to a “good enough” state that allows designers to stay in their comfort zone, insulated from the mediums for which they’re ultimately designing. The old limitations of design tools used to require designers to enlist the help of developers, which, while not ideal, still created opportunities for cross-disciplinary collaboration and shared understanding.

I’m still hopeful that we can restore a collaborative process, get teams working better together, and get designers closer to real software. It’s critical for designers to capitalize on the opportunities and confront the constraints of the medium for which they’re designing, which is why cross-disciplinary collaboration and prototyping in code are still so important.

A tweet that reads: “Telling web designers they don’t need to worry about code is like telling architects they don’t need to worry about steel, wood, or physics.”

At the end of the day, design tools are abstractions that don’t paint the full picture of what a digital experience is. I dig into this in Atomic Design:


Working in HTML, CSS, and presentational JavaScript allows teams to not only create aesthetically beautiful designs, but demonstrates those uniquely digital design considerations like:

flexibility
impact of the network
interaction
motion
ergonomics
color and text rendering
pixel density
scrolling performance
device and browser quirks
user preferences

Crucially, jumping into the browser faster also kick-starts the creation of the patterns that will make up the living, breathing design system.


Atomic Design, Development Is Design


Every medium has its affordances, its opportunities, and its limitations, and the sooner we factor those into our design process the better. Thankfully, expressing design in code is easier than ever.

“Should designers code?” is a funny question now

“Should designers code?” has always been a hilariously charged question in the world of design. But LLM tools like ChatGPT and Claude have made it easier than ever for anyone (including and perhaps especially non-developers) to develop ideas in code. Designers, product managers, and even lay people don’t need to know how to code in order to create something and explore ideas. This democratization of code opens many doors and unlocks so many opportunities for creativity and collaboration.

You can create a functioning prototype of an idea in the time it would take you to draw 4 rectangles in Figma.

This new generation of tools erases many of the excuses used to avoid involving developers and exploring ideas in code during the “design phase” of an effort. “Oh, we can’t free up time for our developers,” or “this will take too long.” Those excuses don’t hold water anymore. In fact, you can create a functioning prototype of an idea in the time it would take you to draw 4 rectangles in Figma. Sure, the window dressing might not be perfect, but there are really no excuses left for code not to be a key part of the design process.

Flow and a focus on results

As I’ve adopted these tools in my own workflow, I’m struck by how much they get me out of my own head. I always seem to have a million ideas buzzing around, and I often talk myself out of pursuing them by mentally stressing about getting (the right!) environments stood up, choosing (the right!) tools and (the right!) code syntax, and creating (the right!) features.

Rather than getting bogged down by all of these hypotheticals, I can instead use these tools to focus my attention on getting my ideas out of my head and into the world. That’s why I wanted to put this video together to show a real-life example of turning a frequently occurring idea (in this case, “I should have a ‘request song’ page on my website”) into actual working software:

This immediacy has already proven to be so valuable both in my personal practice as well as in our professional work. We’ll be on a Zoom call with a client discussing an idea, and when the conversation ventures into hypothetical back-and-forth, we’ll spin up a quick prototype to give everyone something tangible to latch onto and iterate over together. It really is a powerful tool in the creative toolkit.

The design process needs a serious shake-up

The ability to rapidly prototype in code, now supercharged by the introduction of LLM tools, brings us closer to truly designing for the web medium. It democratizes the process, allows more people to express ideas in code, makes it easier to consider the medium’s opportunities and constraints, and gets ideas further down the line faster than ever.

There’s a time and a place for traditional design processes, and sure, creating a prototype is not the same as creating a ready-to-ship product. It’s important to note that these approaches are not mutually exclusive; you can create and advance prototypes while sweating the design details and code architecture elsewhere. But too often we don’t create the practice or the room for designers to do this type of exploration. Does everything need to originate in Figma and be fully articulated there? Increasingly, the answer is “no.” There’s a big world outside of Figma, and I recommend exploring it!

Rapid iteration and solid foundations

Of course, putting together a quick prototype detached from your design and development reality is all well and good. Cute, even! But at Big Medium, we’re helping our client teams integrate these new workflows and tools into their organization’s architecture, workflows, and culture. We’re finding that coupling this generative creative approach with the sturdy foundations of a design system yields some truly impressive results. Design systems and AI are a potent combination.

Sure, these tools help save time and money, but they also blow open the doors to new creative terrain. Ideas that would have taken days, weeks, or months to pursue can now be explored over the course of a meeting. It’s truly wild. And when coupled with an organization’s solid design foundations, the ideas don’t seem like distractions or impossible-to-implement experiments; instead they feel like… the new design process.

Using these tools in our behind-the-scenes design process unlocks new opportunities, but it doesn’t need to stop there. With Sentient Design, my partner Josh Clark demonstrates how these tools can be used to create radically adaptive user experiences that morph and shift based on the user’s specific context and needs.

We’re entering a new era of design, and it’s been fun to explore and help our client teams navigate this new terrain. If your team would welcome help integrating these tools into your practice, feel free to reach out! We’re offering AI for designers and developers workshops and engagements to help organizations successfully integrate these tools into their design practices.

Now get out there and get into some code!

Published on September 20, 2024 06:17
