Josh Clark's Blog, page 13

September 10, 2018

Consider the Beer Can

Once upon a time, beer cans had no tab. They were sealed cans, and you used a church key to punch holes in them. In 1962, the “zip top” tab was invented, letting you open the can by peeling off a (razor-sharp) tab. John Updike was not impressed:




This seems to be an era of gratuitous inventions & negative improvements. Consider the beer can – it was beautiful as a clothespin, as inevitable as the wine bottle, as dignified & reassuring as the fire hydrant. A tranquil cylinder of delightfully resonant metal, it could be opened in an instant, requiring only the application of a handy gadget freely dispensed by every grocer… Now we are given, instead, a top beetling with an ugly, shmoo-shaped “tab,” which, after fiercely resisting the tugging, bleeding fingers of the thirsty man, threatens his lips with a dangerous & hideous hole. However, we have discovered a way to thwart Progress… Turn the beer can upside down and open the bottom. The bottom is still the way the top used to be. This operation gives the beer an unsettling jolt, and the sight of a consistently inverted beer can makes some people edgy. But the latter difficulty could be cleared up if manufacturers would design cans that looked the same whichever end was up, like playing cards. Now, that would be progress.




I love this. It conjures lots of questions for designers as we seek to improve existing experiences:



What do innovations cost in social and physical pleasures when they disrupt familiar experiences? What price do we pay (or extract from others) when we design for efficiency? Whose efficiency are we designing for anyway? How do we distinguish nostalgia from real loss (and does the distinction matter)? How can we take useful lessons from the hacks our customers employ to work around our designs?



Related: Eater covers the history of beer-can design. You’re welcome.




The New Yorker | Consider the Beer Can
Published on September 10, 2018 06:21

How to have a healthy relationship with tech

Liza Kindred of Mindful Technology


At Well+Good, the wonderful Liza Kindred describes how to make personal technology serve you, instead of the reverse. It all starts with realizing that your inability to put down your phone isn’t a personal failing; it’s something that’s been done to you:




“The biggest problem with how people engage with technology is technology, not the people,” she says. “Our devices and favorite apps are all designed to keep us coming back for more. That being said, there are many ways for us to intervene in our own relationships with tech, so that we can live this aspect of our lives in a way we can be proud of.”




Liza offers several pointers for putting personal technology in its place. My personal favorite:




Her biggest recommendation is turning off all notifications not sent by a human. See ya, breaking news, Insta likes, and emails. “Your time is more valuable than that,” Kindred says.




Alas, these strategies are akin to learning self-defense skills during a crime wave. They’re helpful (critical, even), but the core problem remains. In this case, the “crime wave” is the cynical, engagement-hungry strategies that too many companies employ to keep people clicking and tapping. And clicking and tapping. And clicking and tapping.



Liza’s on the case there, too. Her company Mindful Technology helps organizations craft products and business strategies that are kind and respectful while still serving the bottom line. I’ve participated in her Mindful Technology workshops, and they’re mind-opening. Liza demonstrates that design patterns and business models you might take for granted as best practices do more damage than you realize. She has a collection of these anti-patterns, and product designers should take note.



Meanwhile, we’ll have to continue to sharpen those self-defense skills.




Well+Good | How to have a healthy relationship with tech
Published on September 10, 2018 06:03

July 1, 2018

“Trigger for a rant”

In his excellent Four Short Links daily feature, Nat Torkington has something to say about innovation poseurs – in the mattress industry:




Why So Many Online Mattress Brands – trigger for a rant: software is eating everything, but that doesn’t make everything an innovative company. If you’re applying the online sales playbook to product X (kombucha, mattresses, yoga mats) it doesn’t make you a Level 9 game-changing disruptive TechCo, it makes you a retail business keeping up with the times. I’m curious where the next interesting bits of tech are.





O'Reilly Media | Four short links
Published on July 01, 2018 05:58

June 30, 2018

Should computers serve humans, or should humans serve computers?

Nolan Lawson considers dystopian and utopian possibilities for the future, with a gentle suggestion that front-line technologists have some agency here. What kind of world do you want to help build?




The core question we technologists should be asking ourselves is: do we want to live in a world where computers serve humans, or where humans serve computers?

Or to put it another way: do we want to live in a world where the users of technology are in control of their devices? Or do we want to live in a world where the owners of technology use it as yet another means of control over those without the resources, the knowledge, or the privilege to fight back?





Read the Tea Leaves | Should computers serve humans, or should humans serve computers?
Published on June 30, 2018 05:27

May 20, 2018

s5e11: Things That Have Caught My Attention

In a recent edition of his excellent stream-of-consciousness newsletter, Dan Hon considers the Alexa Kids Edition, in which, among other things, Alexa encourages kids to say “please.” There are challenges and pitfalls, Dan writes, in designing a one-size-fits-all system that talks to children and, especially, teaches them new behaviors.




Parenting is a very personal subject. As I have become a parent, I have discovered (and validated through experimental data) that parents have very specific views about how to do things! Many parents do not agree with each other! Parents who agree with each other on some things do not agree on other things! In families where there are two parents there is much scope for disagreement on both desired outcome and method!

All of which is to say is that the current design, architecture and strategy of Alexa for Kids indicates one sort of one-size-fits-all method and that there’s not much room for parental customization. This isn’t to say that Amazon are actively preventing it and might not add it down the line - it’s just that it doesn’t really exist right now. Honan’s got a great point that:

“[For example,] take the magic word we mentioned earlier. There is no universal norm when it comes to what’s polite or rude. Manners vary by family, culture, and even region. While ‘yes, sir’ may be de rigueur in Alabama, for example, it might be viewed as an element of the patriarchy in parts of California.”





Dan Hon | s5e11: Things That Have Caught My Attention
Published on May 20, 2018 08:48

AI Is Harder Than You Think

In the New York Times opinion section, Gary Marcus and Ernest Davis suggest that today’s data-crunching model for artificial intelligence is not panning out. Instead of truly understanding logic or language, today’s machine learning identifies patterns in data to recognize and reflect human behavior. The systems this approach creates tend to mimic more than think. As a result, we have some impressive but incredibly narrow applications of AI. The culmination of artificial intelligence appears to be making salon appointments.



Decades ago, the approach was different. The AI field tried to understand the elements of human thought and teach machines to actually think. The goal proved elusive, and the field drifted instead to what machines were already better at: pattern recognition. Marcus and Davis say the detour has not proved helpful:




Once upon a time, before the fashionable rise of machine learning and “big data,” A.I. researchers tried to understand how complex knowledge could be encoded and processed in computers. This project, known as knowledge engineering, aimed not to create programs that would detect statistical patterns in huge data sets but to formalize, in a system of rules, the fundamental elements of human understanding, so that those rules could be applied in computer programs. Rather than merely imitating the results of our thinking, machines would actually share some of our core cognitive abilities.

That job proved difficult and was never finished. But “difficult and unfinished” doesn’t mean misguided. A.I. researchers need to return to that project sooner rather than later, ideally enlisting the help of cognitive psychologists who study the question of how human cognition manages to be endlessly flexible.

Today’s dominant approach to A.I. has not worked out. Yes, some remarkable applications have been built from it, including Google Translate and Google Duplex. But the limitations of these applications as a form of intelligence should be a wake-up call. If machine learning and big data can’t get us any further than a restaurant reservation, even in the hands of the world’s most capable A.I. company, it is time to reconsider that strategy.
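
The contrast is easier to see in miniature. Here’s a toy sketch of my own (purely illustrative, not from the article or any real system): knowledge engineering writes the reasoning down as explicit rules, while machine learning just tallies patterns from labeled examples.

from collections import Counter

# Knowledge engineering, in miniature: "understanding" encoded directly
# as hand-written rules. (Illustrative only, not from the article.)
def rule_based_sentiment(sentence):
    negations = {"not", "never", "no"}
    positives = {"good", "great", "delightful"}
    words = sentence.lower().split()
    if any(w in positives for w in words):
        return "negative" if any(w in negations for w in words) else "positive"
    return "unknown"  # the rules can admit they don't know

# Machine learning, in miniature: patterns tallied from labeled examples,
# with no notion of meaning at all.
def train(examples):
    counts = {}
    for sentence, label in examples:
        for word in sentence.lower().split():
            counts.setdefault(word, Counter())[label] += 1
    return counts

def pattern_sentiment(counts, sentence):
    tally = Counter()
    for word in sentence.lower().split():
        tally.update(counts.get(word, Counter()))
    return tally.most_common(1)[0][0] if tally else "unknown"

model = train([("great beer", "positive"), ("not great beer", "negative")])
print(rule_based_sentiment("not a great can"))      # "negative", by explicit rule
print(pattern_sentiment(model, "not a great can"))  # whatever the counts say

The rule-based toy can at least say why it answered; the counter can only report what it has seen. That gap, scaled up, is the authors’ complaint.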





The New York Times | AI Is Harder Than You Think
Published on May 20, 2018 08:04

May 9, 2018

Google Duplicitous

Jeremy Keith comments on Google’s announcement of Google Duplex:




The visionaries of technology – Douglas Engelbart, J.C.R. Licklider – have always recognised the potential for computers to augment humanity, to be bicycles for the mind. I think they would be horrified to see the increasing trend of using humans to augment computers.





Adactio | Google Duplicitous
Published on May 09, 2018 05:37

April 27, 2018

Do You Have “Advantage Blindness”?

At Harvard Business Review, Ben Fuchs, Megan Reitz, and John Higgins consider the responsibility of identifying our own blind spots – the biases, privileges, and disadvantages we haven’t admitted to ourselves. It’s important (and sometimes bruising) work, all the more so if you’re in a privileged position that gives you the leverage to make a difference for others.




To address inequality of opportunity, we need to acknowledge and address the systemic advantages and disadvantages that people experience daily. For leaders, recognizing their advantage blindness can help to reduce the impact of bias and create a more level playing field for everyone. Being advantaged through race and gender comes with a responsibility to do something about changing a system that unfairly disadvantages others.





Harvard Business Review | Do You Have “Advantage Blindness”?
Published on April 27, 2018 07:36

October 30, 2017

The Juvet Agenda

I had the privilege last month of joining 19 other designers, researchers, and writers to consider the future (both near and far) of artificial intelligence and machine learning. We headed into the woods – to the Juvet nature retreat in Norway – for several days of hard thinking. Under the northern lights, we considered the challenges and opportunities that AI presents for society, for business, for our craft, and for all of us individually.



Answers were elusive, but questions were plentiful. We decided to share those questions, and the result is the Juvet Agenda. The agenda lays out the urgent themes surrounding AI and presents a set of provocations for teasing out a future we want to live in:




Artificial intelligence? It’s complicated. It’s the here and now of hyper-efficient algorithms, but it’s also the heady possibility of sentient systems. It might be history’s greatest opportunity or its worst existential threat – or maybe it will only optimize what we’ve already got. Whatever it is and whatever it might become, the thing is moving too fast for any of us to sit still. AI demands that we rethink our methods, our business models, maybe even our cultures.



In September 2017, 20 designers, urbanists, researchers, writers, and futurists gathered at the Juvet nature retreat among the fjords and forests of Norway. We came together to consider AI from a humanist perspective, to step outside the engineering perspective that dominates the field. Could we sort out AI’s contradictions? Could we describe its trajectory? Could we come to any conclusions?

Across three intense days the group captured ideas, played games, drew diagrams, and snapped photos. In the end, we arrived at more questions than answers – and Big Questions at that. These are not topics we can or should address alone, so we share them here.

Together these questions ask how we can shape AI for a world we want to live in. If we don’t decide for ourselves what that world looks like, the technology will decide for us. The future should not be self-driving; let’s steer the course together.





The Juvet Agenda
Published on October 30, 2017 09:12

September 9, 2017

Stop Pretending You Really Know What AI Is

“Artificial intelligence” is broadly used in everything from science fiction to the marketing of mundane consumer goods, and it no longer has much practical meaning, bemoans John Pavlus at Quartz. He surveys practitioners about what the phrase does and doesn’t mean:




It’s just a suitcase word enclosing a foggy constellation of “things” – plural – that do have real definitions and edges to them. All the other stuff you hear about – machine learning, deep learning, neural networks, what have you – are much more precise names for the various scientific, mathematical, and engineering methods that people employ within the field of AI.

But what’s so terrible about using the phrase “artificial intelligence” to enclose all that confusing detail – especially for all us non-PhDs? The words “artificial” and “intelligent” sound soothingly commonsensical when put together. But in practice, the phrase has an uncanny almost-meaning that sucks adjacent ideas and images into its orbit and spaghettifies them.




Me, I prefer to use “machine learning” for most of the algorithmic software I see and work with, but “AI” is definitely a convenient (if overused) shorthand.




Quartz | Stop Pretending You Really Know What AI Is and Read This Instead
Published on September 09, 2017 03:28