Josh Clark's Blog, page 14
August 29, 2017
Designing with Artificial Intelligence: The Dinosaur on a Surfboard
It was a real treat to chat with one of my design heroes, Jeff Veen, as we mulled over the role of designers in a world of algorithms and machine learning. Jeff interviewed me for his podcast Presentable, a must-listen series of conversations about designing the future. Tune in to the episode: Designing with Artificial Intelligence: The Dinosaur on a Surfboard
And yes, we do indeed talk about dinosaurs on surfboards, as well as using AI in your everyday work, the one-true-answer problem, comically evil search suggestions, the approaching golden age of AI mashups, the battle against bias, and the training of algorithms as UX research at massive scale.
All of this is in the service of exploring the meaty design problems that have so far gone unaddressed in machine-generated content and interaction. There’s much to be done here for designers. As I said to Jeff, a new kind of content requires a new kind of presentation:
It is “intelligence,” but it’s not ours. These machines think in a different way. … They’re weird. The machine logic is weird and surprising. One of the things I’m finding as we begin to design for data-generated content and interaction is that much of the work is actually just trying to anticipate the weirdness that comes out of these machines. […]
The presentation of the data-generated content is just as important as the algorithm. We’ve made incredible engineering gains in how machine learning and artificial intelligence work, especially with deep learning, almost month to month—it’s these incredible leaps. The algorithms are becoming more and more accurate, and yet the presentation of them is busted.
Listen in to hear us talk about the techniques, design principles, and even the concepts of social justice that we can put to use today as we work with this new design material.
August 16, 2017
Four SXSW Panels for Your Consideration
I proposed a solo talk for the SXSW Interactive Festival and, wow, was also flattered to be invited to two other panel proposals on top of that. My favorite person Liza Kindred also proposed a talk on Mindful Technology, which is both timely and actionable. Help me usher these panels into the bright, bright Texas sunlight. Voting closes August 25. I’d be mighty obliged if you’d vote up these four proposals at the SXSW PanelPicker:
Josh Clark: Design in the Era of the Algorithm

I’m super-excited to expand on the techniques and design principles I outlined earlier this year in my long-form essay about designing with machine learning. Data-driven content and interfaces are at the core of every single emerging consumer technology right now. I’ve got a crisp point of view about design’s role and responsibility in this developing future:
Designers have an urgent role to play in crafting the emerging generation of AI interfaces. This hour explores a rich set of examples—both entertaining and sobering—that unearth 10 design principles for creating responsible machine-learning applications. Learn to use AI as design material in your everyday work. Anticipate the weird, unexpected or incorrect conclusions the machines sometimes deliver. Above all, scrub data for bias to create a respectful and inclusive world we all want to live in.
Questions answered
I’m a designer, not a data scientist. What’s my role in AI? How can I start using machine learning in my everyday work, starting today?
How can design set better expectations and confidence in the accuracy and perspective of AI interfaces? (We’re still bad at this.)
What are the ethical responsibilities—and actionable tasks—to root out data bias and create inclusive services? (We’re bad at this, too.)
Liza Kindred: Mindful Technology
My astonishing wife Liza Kindred brought down the house at this year’s Interaction conference when she debuted her framework of design principles for mindful technology. How do we bend technology to our lives instead of the reverse? What are our responsibilities as designers to help focus attention instead of trap it? Friends, I’m the luckiest person on the planet that I get to be inspired by Liza every day. You’ll do the SXSW audience a big favor by treating them to even an hour of her insights:
Staying connected does matter—but to each other, not our devices. Instead of views and attention grabs, let’s design for purpose, calm & compassion. Instead of engagement with interfaces, let’s design to engage with what we truly care about. Instead of simply building more tech, let’s build more human connection. With a host of practical examples, Liza Kindred shares a framework for how we can make and use new technologies that create real insight, joy, and utility—while still getting the job done.
Questions answered
What is the role of mindfulness in a technology-saturated world, and how are some of the world’s biggest businesses already putting it to practical use?
As tech settles into the most intimate spaces of our lives, homes, & bodies, what are our responsibilities for creating respectful experiences?
What are the practical principles for & examples of creating (and using!) technologies that are respectful, calm, purposeful, & humane?
Panel: Designing Our Robot Overlords

In a few short weeks, I’m headed deep into the northern forests of Norway to huddle with a group of intimidatingly smart people to consider our future alongside robots and algorithms. My friend Andy Budd of Clearleft organized this retreat and also had the good idea to share what comes of it at SXSW. I’m all atingle to join in this conversation with Andy, Amber Case (Harvard, MIT, Calm Technology), and Karen Kaushansky (futurist and hardware designer):
In the summer of 2017, a diverse group of designers, makers, artists, academics and sci-fi authors hired the house where Ex Machina was filmed to explore our emerging relationship with robots. What will that future really look like, and how can we use our existing design skills to make it more human?
In this session we will share the findings of this retreat, along with 8 key principles for designing better human-robot interactions.
Questions answered
With the rise of the robots, what does the future of human-computer interaction look like?
How can I apply my existing design skills to this new frontier?
What new skills will I need to learn in order to thrive in this environment?
Panelists
Andy Budd, Managing Director, Clearleft
Josh Clark, Designer, Developer, & Author, Big Medium
Amber Case, Research Fellow, MIT Media Lab
Karen Kaushansky, Experience Design Leader, Independent Consultant
Panel: Killing The Design Review: Atomic Design At Scale
I spent much of last year helping the venerable internet brand About.com become Dotdash. Big Medium helped the sprawling how-to site reinvent itself as a set of six new sites, each with a new brand and look and feel. The project was crazy successful. More than that, though, we also helped the company completely overhaul its design and development process. I was asked to moderate this proposed panel on what we did and how:
Imagine building 6 brands in one year while simultaneously changing the underlying product development process. In this session you’ll hear about Dotdash’s journey—both successes and mistakes—to embrace Atomic Design, leverage pattern libraries, and kill the design review process from 3 different perspectives: product, design, and engineering.
Questions answered
How do you avoid meetings where the CEO tells you to scrap your prototype and start over with 2 weeks to go?
How can you build – and maintain! – a tool that will be used by designers, developers, product, and an army of robots?
Can you keep a whole company interested in a site buildout from start to finish?
Panelists
Adam McClean, SVP, Product, Dotdash
Ben Cochran, VP, Front End Development, Dotdash
Hetal Rathod, Senior Designer, UX/UI, Dotdash
Josh Clark, Founder, Big Medium
August 2, 2017
The Pop-Up Employer: Build a Team, Do the Job, Say Goodbye
Big Medium is what my friend and collaborator Dan Mall calls a design collaborative. Dan runs his studio Superfriendly the same way I run Big Medium: rather than carry a full-time staff, we both spin up bespoke teams from a tight-knit network of well-known domain experts. Those teams are carefully chosen to meet the specific demands of each project. It’s a very human, very personal way to source project teams.
And so I was both intrigued and skeptical to read about an automated system designed to do just that at a far larger scale. Noam Scheiber reporting for The New York Times:
True Story was a case study in what two Stanford professors call “flash organizations” — ephemeral setups to execute a single, complex project in ways traditionally associated with corporations, nonprofit groups or governments. […]
And, in fact, intermediaries are already springing up across industries like software and pharmaceuticals to assemble such organizations. They rely heavily on data and algorithms to determine which workers are best suited to one another, and also on decidedly lower-tech innovations, like middle management. […]
“One of our animating goals for the project was, would it be possible for someone to summon an entire organization for something you wanted to do with just a click?” Mr. Bernstein said.
The fascinating question here is how systems might develop algorithmic proxies for the measures of trust, experience, and quality that weave the fabric of our professional networks. But even more intriguing: how might such models help to connect underrepresented groups with work they might otherwise never have access to? For that matter, how might those models introduce me to designers outside my circle who might introduce more diverse perspectives into my own work?
New York Times | The Pop-Up Employer: Build a Team, Do the Job, Say Goodbye
The BBQ and the Errant Butler
Marek Pawlowski shares a tale of a dinner party taken hostage by a boorish Alexa hell-bent on selling the guests music.
Amid the flashy marketing campaigns and rapid technological advances surrounding virtual assistants like Alexa, Cortana and Siri, few end users seem willing to question how the motivation of their creators is likely to affect the overall experience. Amazon has done much to make Alexa smart, cheap and useful. However, it has done so in service of an over-arching purpose: retailing. Of course, Google, Microsoft and Apple have ulterior motives for their own assistants, but it should come as no surprise that Alexa is easily sidetracked by her desire to sell you things.
MEX | User Story: the BBQ and the Errant Butler
Toy Story Lessons for the Internet of Things
Dan Gårdenfors ponders how to handle “leadership conflicts” in IoT devices:
In future smart homes, many interactions will be complex and involve combinations of different devices. People will need to know not only what goes on but also why. For example, when smart lights, blinds and indoor climate systems adjust automatically, home owners should be able to know what triggered it. Was it weather forecast data or the behaviour of people at home that made the thermostat lower the temperature? Which device made the decision and told the others to react? Especially when things don’t end up the way we want them to, smart objects need to communicate more, not less.
As we introduce more sensors, services, and smart gadgets into our life, some of them will inevitably collide. Which one “wins”? And how do we as users see the winner (or even understand that there was a conflict in the first place)?
UX design gets complicated when you introduce multiple triggers from multiple opinionated systems. And of course all those opinionated systems should bow to the most important opinion of all: the user’s. But even that is complicated in a smart-home environment where there are multiple users who have changing needs, desires, and contexts throughout the day. Fun!
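One way to make that "who wins, and why" question tractable is to treat arbitration as an explicit, inspectable step rather than letting the last trigger silently win. Here's a minimal sketch of that idea; the `Trigger` shape, device names, and simple numeric priority scheme are my own illustrative assumptions, not anything from Gårdenfors's article:

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass(frozen=True)
class Trigger:
    source: str    # which system fired, e.g. "weather-forecast" (hypothetical names)
    action: str    # what it wants to do, e.g. "lower-thermostat"
    priority: int  # higher wins; explicit user requests would rank highest

def arbitrate(triggers):
    """Pick one winner per action, but keep the overruled triggers
    so the interface can explain not just what happened but which
    competing systems lost out (and that a conflict occurred at all)."""
    by_action = defaultdict(list)
    for t in triggers:
        by_action[t.action].append(t)
    decisions = {}
    for action, candidates in by_action.items():
        candidates.sort(key=lambda t: t.priority, reverse=True)
        decisions[action] = {"winner": candidates[0], "overruled": candidates[1:]}
    return decisions

events = [
    Trigger("weather-forecast", "lower-thermostat", 2),
    Trigger("occupancy-sensor", "lower-thermostat", 5),
    Trigger("schedule", "close-blinds", 1),
]
decisions = arbitrate(events)
print(decisions["lower-thermostat"]["winner"].source)  # occupancy-sensor
```

The design point is the `overruled` list: surfacing it is what lets a smart-home UI answer "why did the thermostat drop?" instead of leaving the conflict invisible.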
Hacker Noon | Toy Story lessons for the Internet of Things
Politics Are a Design Constraint
Designers, if you believe that politics don’t belong at work, guess what: your work is itself political. Software channels behavior, and that means that it’s freighted with values.
Ask yourself: as a designer, what are the behaviors I’m shaping, for what audience, to what end, and for whose benefit? Those questions point up the fact that software is ideological. The least you can do is own that fact and make sure that your software’s politics line up with your own. John Warren Hanawalt explains why:
Designers have a professional responsibility to consider what impact their work has—whether the project is explicitly “political” or not. Design can empower or disenfranchise people through the layout of ballots or UX of social network privacy settings.
Whose voices are amplified or excluded by the platforms we build, who profits from or is exploited by the service apps we code, whether we have created space for self-expression or avenues for abuse: these are all political design considerations because they decide who is represented, who can participate and at what cost, and who has power. […]
If you’re a socially conscious designer, you don’t need to quit your job; you need to do it. That means designing solutions that benefit people without marginalizing or harming others. When your boss or client asks you to do something that might do harm, you have to say no. And if you see unethical behavior happening in other areas of your company, fight for something better. If you find a problem, you have a problem. Good thing solving problems is your job.
John Warren Hanawalt | Politics Are a Design Constraint
AI First—with UX
When mobile exploded a decade ago, many of us wrestled with designing for the new context of freshly portable interfaces. In fact, we often became blinded by that context, assuming that mobile interfaces should be optimized strictly for on-the-go users: we overdialed on location-based interactions, short attention spans, micro-tasks. The “lite” mobile version ruled.
It turned out that the physical contexts of mobile gadgets—device and environment—were largely red herrings. The notion of a single “mobile context” was a myth that distracted from the more meaningful range of “softer” contexts these devices introduced by unchaining us from the desktop. The truth was that we now had to design for a huge swath of temporal, behavioral, emotional, and social contexts. When digital interfaces can penetrate any moment of our lives, the designer can no longer assume any single context in which they will be used.
This already challenging contextual landscape is even more complicated for predictive AI assistants that constantly run in the background looking for moments to provide just-in-time info. How much do they need to know about current context to judge the right moment to interrupt with (hopefully) useful information?
In an essay for O’Reilly, Mike Loukides explores that question, concluding that it’s less a concern of algorithm design than of UX design:
What’s the experience I want in being “assisted”? How is that experience designed? A design that requires me to expend more effort to take advantage of the assistant’s capabilities is a step backward.
The design problem becomes more complex when we think about how assistance is delivered. Norvig’s “reminders” are frequently delivered in the form of asynchronous notifications. That’s a problem: with many applications running on every device, users are subjected to a constant cacophony of notifications. Will AI be smart enough to know what notifications are actually wanted, and which are just annoyances? A reminder to buy milk? That’s one thing. But on any day, there are probably a dozen or so things I need, or could possibly use, if I have time to go to the store. You and I probably don’t want reminders about all of them. And when do we want these reminders? When we’re driving by a supermarket, on the way to the aforementioned doctor’s appointment? Or would it just order it from Amazon? If so, does it need your permission? Those are all UX questions, not AI questions.
We’ve made lots of fast progress in just the last few years—months, even—in crafting remarkably accurate algorithms. We’re still getting started, though, in crafting the experiences we wrap around them. There’s lots of work to be done right now by designers, including UX research at unprecedented scale, to understand how to put machine learning to use as design material. I have ideas and design principles about how to get started. In the meantime, I really like the way Mike frames the problem:
In a future where humans and computers are increasingly in the loop together, understanding context is essential. But the context problem isn’t solved by more AI. The context is the user experience. What we really need to understand, and what we’ve been learning all too slowly for the past 30 years, is that technology is the easy part.
O'Reilly Media | AI First—with UX
Making Software with Casual Intelligence
The most broadly impactful technologies tend to be the ones that become mundane—cheap, expected, part of the fabric of everyday life. We absorb them into our lives, their presence assumed, their costs negligible. Electricity, phones, televisions, internet, refrigeration, remote controls, power windows—once-remarkable technologies that now quietly improve our lives.
That’s why the aspects of machine learning that excite me most right now are the small and mundane interventions that designers and developers can deploy today in everyday projects. As I wrote in Design in the Era of the Algorithm, there are so many excellent (and free!) machine-learning APIs just waiting to be integrated into our digital products. Machine learning is the new design material, and it’s ready today, even for the most modest product features.
If you’re not sure what AI APIs could bring to your products, think about the impact of predictive text on typing. Many such opportunities
— John Allsopp (@johnallsopp) April 26, 2017
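To make the "design material" idea concrete, integrating one of these APIs is often just a matter of packaging text for a request and mapping the returned score onto something a user can read. This sketch is purely illustrative: the endpoint URL and response shape are hypothetical, and real services (Google Cloud Natural Language, Azure Text Analytics, and the like) differ in the details:

```python
import json

# Hypothetical endpoint; stands in for whichever sentiment API you choose.
SENTIMENT_URL = "https://api.example.com/v1/sentiment"

def build_request(text, lang="en"):
    """Package user text as a JSON body for a sentiment-analysis call."""
    return {"url": SENTIMENT_URL,
            "body": json.dumps({"document": text, "language": lang})}

def label_from_score(score, threshold=0.25):
    """Map an assumed [-1, 1] sentiment score to a coarse UI label.
    The threshold is a design decision, not an API property."""
    if score > threshold:
        return "positive"
    if score < -threshold:
        return "negative"
    return "neutral"

req = build_request("I love this new feature!")
print(label_from_score(0.8))  # positive
```

Note where the design work lives: the API hands back a raw score, but choosing the threshold and the labels — how confident the interface should sound — is exactly the presentation problem discussed above.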
All of this reminds me of an essay my friend Evan Prodromou wrote last year about making software with casual intelligence. It’s a wonderful call to action for designers and developers to start integrating machine learning into everyday design projects.
Programmers in the next decade are going to make huge strides in applying artificial intelligence techniques to software development. But those advances aren’t all going to be in moonshot projects like self-driving cars and voice-operated services. They’re going to be billions of incremental intelligent updates to our interfaces and back-end systems.
I call this _casual intelligence_—making everything we do a little smarter, and making all of our software that much easier and more useful. It’s casual because it makes the user’s experience less stressful, calmer, more leisurely. It’s also casual because the developer or designer doesn’t think twice about using AI techniques. Intelligence becomes part of the practice of software creation.
Evan touches on one of the most intriguing implications of designing data-driven interfaces. When machines generate both content and interaction, they will often create experiences that designers didn’t imagine (both for better and for worse). The designer’s role may evolve into one of corralling the experience in broad directions, rather than down narrow paths. (See conversational interfaces and open-ended, Alexa/Siri-style interactions, for example.)
Designers need to stop thinking in terms of either-or interfaces—either we do it this way, or we do it that way. Casual intelligence lets interfaces become _and-also_—different users have different experiences. Some users will have experiences never dreamed of in your wireframes—and those may be the best ones of all.
Evan Prodromou | Making Software with Casual Intelligence
In the AI Age, “Being Smart” Will Mean Something Completely Different
As machines become better than people at so many things, the natural question is what’s left for humans—and indeed what makes us human in the first place? Or more practically: what is the future of work for humans if machines are smarter than us in so many ways? Writing for Harvard Business Review, Ed Hess suggests that the answer is in shifting the meaning of human smarts away from information recall, pattern-matching, fast learning—and even accuracy.
What is needed is a new definition of being smart, one that promotes higher levels of human thinking and emotional engagement. The new smart will be determined not by what or how you know but by the quality of your thinking, listening, relating, collaborating, and learning. Quantity is replaced by quality. And that shift will enable us to focus on the hard work of taking our cognitive and emotional skills to a much higher level.
We will spend more time training to be open-minded and learning to update our beliefs in response to new data. We will practice adjusting after our mistakes, and we will invest more in the skills traditionally associated with emotional intelligence. The new smart will be about trying to overcome the two big inhibitors of critical thinking and team collaboration: our ego and our fears. Doing so will make it easier to perceive reality as it is, rather than as we wish it to be. In short, we will embrace humility. That is how we humans will add value in a world of smart technology.
Harvard Business Review | In the AI Age, “Being Smart” Will Mean Something Completely Different
July 20, 2017
Designing for Touch—in French! And Chinese!

Designing for Touch is now available in Chinese, French, and the original English.
I always feel a giddy cosmopolitan flush when one of my books is published in a new language. (Hey, if I can’t manage a cosmopolitan swagger, at least my books can look the part.)
And so I’m feeling especially je ne sais quoi about the publication of Designing for Touch in France and—new this summer—in China. Many thanks to translators Charles Robert and Zou Zheng (aka “C7210”) for their remarkable work on these two editions.
You can snap up any of these editions at the links below:
French: Design Tactile
Chinese: Designing for Touch (Chinese edition)
English: Designing for Touch
Designing for Touch explores all the ways that touchscreen UX goes way beyond making buttons bigger for fat fingers. Designers have to revisit—and in many cases chuck out—the common solutions of the last thirty years of traditional interface design. In this book, you’ll discover entirely new methods, including design patterns, touchscreen metrics, ergonomic guidelines, and interaction metaphors that you can use in your websites and apps right now. The future is in your hands.