Jeremy Keith's Blog, page 25

April 20, 2023

Read-only web apps

The most cartoonish misrepresentation of progressive enhancement is that it means making everything work without JavaScript.

No. Progressive enhancement means making sure your core functionality works without JavaScript.

In my book Resilient Web Design, I quoted Wilto:

Lots of cool features on the Boston Globe don’t work when JS breaks; “reading the news” is not one of them.


That’s an example where the core functionality is readily identifiable. It’s a newspaper. The core functionality is reading the news.

It isn’t always so straightforward though. A lot of services that self-identify as “apps” will claim that even their core functionality requires JavaScript.

Surely I don’t expect Gmail or Google Docs to provide core functionality without JavaScript?

In those particular cases, I actually do. I believe that a textarea in a form would do the job nicely. But I get it. That might take a lot of re-engineering.

So how about this compromise…

Your app should work in a read-only mode without JavaScript.

Without JavaScript I should still be able to read my email in Gmail, even if you don’t let me compose, reply, or organise my messages.

Without JavaScript I should still be able to view a document in Google Docs, even if you don’t let me comment or edit the document.

Even with something as interactive as Figma or Photoshop, I think I should still be able to view a design file without JavaScript.

Making this distinction between read-only mode and read/write mode could be very useful, especially at the start of a project.

Begin by creating the read-only mode that doesn’t require JavaScript. That alone will make for a solid foundation to build upon. Now you’ve built a fallback for any unexpected failures.
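
As a sketch of what that separation can look like in server-side rendering (every name here is hypothetical, not taken from any real codebase), the read-only page is plain HTML and the write path is a separate layer on top:

```javascript
// A minimal sketch: render a mailbox as plain HTML, with an optional
// read/write enhancement. All names are hypothetical.
function renderInbox(messages, { readOnly = true } = {}) {
  const items = messages
    .map((m) => `<li><a href="/message/${m.id}">${m.subject}</a></li>`)
    .join("");
  let html = `<ul>${items}</ul>`;
  if (!readOnly) {
    // The read/write mode adds a compose form. The reading part above
    // already works in any browser without a single line of script.
    html += `<form method="post" action="/compose"><textarea name="body"></textarea><button>Send</button></form>`;
  }
  return html;
}
```

Notice that even the compose form here is a plain form POST, so some of the write path could work without JavaScript too.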

Now start adding the read/write functionality. You’re enhancing what’s already there. Progressively.

Heck, you might even find some opportunities to provide some read/write functionality that doesn’t require JavaScript. But if JavaScript is needed, that’s absolutely fine.

So if you’re about to build a web app and you’re pretty sure it requires JavaScript, why not pause and consider whether you can provide a read-only version?

Published on April 20, 2023 05:20

April 15, 2023

Progressive disclosure with HTML

Robin penned a little love letter to the details element. I agree. It is a joyous piece of declarative power.

That said, don’t go overboard with it. It’s not a drop-in replacement for more complex widgets. But it is a handy encapsulation of straightforward progressive disclosure.

Just last week I added a couple of more details elements to The Session …kind of. There’s a bit of server-side conditional logic involved to determine whether details is the right element.

When you’re looking at a tune, one of the pieces of information you see is how many recordings there are of that tune. Now if there are a lot of recordings, then there’s some additional information about which other tunes this one gets recorded with. That information is extra. Mere details, if you will.

You can see it in action on this tune listing. Thanks to the details element, the extra information is available to those who want it, but by default that information is tucked away—very handy for not clogging up that part of the page.

There are 181 recordings of this tune. This tune has been recorded together with…

Likewise, each tune page includes any aliases for the tune (in Irish music, the same tune can have many different titles—and the same title can be attached to many different tunes). If a tune has just a handful of aliases, they’re displayed in situ. But once you start listing out more than twenty names, it gets overwhelming.

The details element rides to the rescue once again.
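
That kind of server-side conditional logic can be as small as a threshold check. This is an illustrative sketch in JavaScript; the function name, markup, and threshold are all invented rather than The Session’s actual code:

```javascript
// Sketch: only wrap the alias list in a details element once it gets long.
// The threshold and markup are illustrative, not The Session's real code.
function renderAliases(aliases, threshold = 20) {
  const count = `There are ${aliases.length} other names for this tune.`;
  const list = `<ul>${aliases.map((a) => `<li>${a}</li>`).join("")}</ul>`;
  if (aliases.length <= threshold) {
    // A handful of aliases: display them in situ.
    return `<p>${count}</p>${list}`;
  }
  // Lots of aliases: tuck them behind a summary, toggleable on demand.
  return `<details><summary>${count}</summary>${list}</details>`;
}
```

Either way the count is immediately visible; only the long list gets tucked away.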

Compare the tune I mentioned above, which only has a few aliases, to another tune that is known by many names.

Again, the main gist is immediately available to everyone—how many aliases are there? But if you want to go through them all, you can toggle that details element open.

You can effectively think of the summary element as the TL;DR of HTML.

There are 31 other names for this tune.

Also known as…

There’s another classic use of the details element: frequently asked questions. In the case of The Session, I’ve marked up the house rules and FAQs inside details elements, with the rule or question as the summary.

But there’s one house rule that’s most important (“Be civil”) so that details element gets an additional open attribute.

Be civil

Contributions should be constructive and polite, not mean-spirited or contributed with the intention of causing trouble.

Published on April 15, 2023 00:32

April 13, 2023

Browser history

I woke up today to a very annoying new bug in Firefox. The browser shits the bed in an unpredictable fashion when rounding up single pixel line widths in SVG. That’s quite a problem on The Session where all the sheet music is rendered in SVG. Those thin lines in sheet music are kind of important.

Browser bugs like these are very frustrating. There’s nothing you can do from your side other than filing a bug. The locus of control is very much with the developers of the browser.

Still, the occasional regression in a browser is a price I’m willing to pay for a plurality of rendering engines. Call me old-fashioned but I still value the ecological impact of browser diversity.

That said, I understand the argument for converging on a single rendering engine. I don’t agree with it but I understand it. It’s like this…

Back in the bad old days of the original browser wars, the browser companies just made shit up. That made life a misery for web developers. The Web Standards Project knocked some heads together. Netscape and Microsoft would agree to support standards.

So that’s where the bar was set: browsers agreed to work to the same standards, but competed by having different rendering engines.

There’s an argument to be made for raising that bar: browsers agree to work to the same standards, and have the same shared rendering engine, but compete by innovating in all other areas—the browser chrome, personalisation, privacy, and so on.

Like I said, I understand the argument. I just don’t agree with it.

One reason for zeroing in on a single rendering engine is that it’s just too damned hard to create or maintain an entirely different rendering engine now that web standards are incredibly powerful and complex. Only a very large company with very deep pockets can hope to be a rendering engine player. Google. Apple. Heck, even Microsoft threw in the towel and abandoned their rendering engine in favour of Blink and V8.

And yet. Andreas Kling recently wrote about the Ladybird browser. How we’re building a browser when it’s supposed to be impossible:

The ECMAScript, HTML, and CSS specifications today are (for the most part) stellar technical documents whose algorithms can be implemented with considerably less effort and guesswork than in the past.


I’ll be watching that project with interest. Not because I plan to use the browser. I’d just like to see some evidence against the complexity argument.

Meanwhile most other browser projects are building on the raised bar of a shared browser engine. Blisk, Brave, and Arc all use Chromium under the hood.

Arc is the most interesting one. Built by the wonderfully named Browser Company of New York, it’s attempting to inject some fresh thinking into everything outside of the rendering engine.

Experiments like Arc feel like they could have more in common with tools-for-thought software like Obsidian and Roam Research. Those tools build knowledge graphs of connected nodes. A kind of hypertext of ideas. But we’ve already got hypertext tools we use every day: web browsers. It’s just that they don’t do much with the accumulated knowledge of our web browsing. Our browsing history is a boring reverse chronological list instead of a cool-looking knowledge graph or timeline.

For inspiration we can go all the way back to Vannevar Bush’s genuinely seminal 1945 article, As We May Think. The device Bush imagined, the Memex, was a direct inspiration on Douglas Engelbart, Ted Nelson, and Tim Berners-Lee.

The article describes a kind of hypertext machine that worked with microfilm. Thanks to Tim Berners-Lee’s World Wide Web, we now have a global digital hypertext system that we access every day through our browsers.

But the article also described the idea of “associative trails”:

Wholly new forms of encyclopedias will appear, ready made with a mesh of associative trails running through them, ready to be dropped into the memex and there amplified.


Our browsing histories are a kind of associative trail. They’re as unique as fingerprints. Even if everyone in the world started on the same URL, our browsing histories would quickly diverge.

Bush imagined that these associative trails could be shared:

The lawyer has at his touch the associated opinions and decisions of his whole experience, and of the experience of friends and authorities.


Heck, making a useful browsing history could be a real skill:

There is a new profession of trail blazers, those who find delight in the task of establishing useful trails through the enormous mass of the common record.


Taking something personal and making it public isn’t a new idea. It was what drove the wave of web 2.0 startups. Before Flickr, your photos were private. Before Delicious, your bookmarks were private. Before Last.fm, what music you were listening to was private.

I’m not saying that we should all make our browsing histories public. That would be a security nightmare. But I am saying there’s a lot of untapped potential in our browsing histories.

Let’s say we keep our browsing histories private, but make better use of them.

From what I’ve seen of large language model tools, the people getting the most use out of them are training them on a specific corpus. Like, “take this book and then answer my questions about the characters and plot” or “take this codebase and then answer my questions about the code.” If you treat these chatbots as calculators for words they can be useful for some tasks.

Large language model tools are getting smaller and more portable. It’s not hard to imagine one getting bundled into a web browser. It feeds on your browsing history. The bigger your browsing history, the more useful it can be.

Except, y’know, for the times when it just makes shit up.

Vannevar Bush didn’t predict a Memex that would hallucinate bits of microfilm that didn’t exist.

Published on April 13, 2023 06:33

April 12, 2023

Scholarship sponsorship

I wrote a while back about the UX London 2023 scholarship programme. Applications are still open (until May 19th) so if you know someone who you think should apply, here’s the link. As I said then:

Wondering if you should apply? It’s hard to define exactly who qualifies for a diversity scholarship, but basically, the more your life experience matches mine, the less qualified you are. If you are a fellow able-bodied middle-aged heterosexual white dude with a comfortable income, do me a favour and don’t apply. Everyone else, go for it.


The response so far has been truly amazing—so many great applicants!

And therein lies the problem. Clearleft can only afford to sponsor a limited number of people. It’s going to be very, very, very hard to whittle this down.

But perhaps you can help. Do you work at a company that could afford to sponsor some places? If so, please get in touch!

Just to be clear, this would be different from the usual transactional sponsorship opportunities for UX London where we offer you a package of benefits in exchange for sponsorship. In the case of diversity scholarships, all we can offer you is our undying thanks.

I’ll admit I have an ulterior motive in wanting to get as many of the applicants as possible to UX London. The applications are positively aglow with the passion and fervour of the people applying. Frankly, that’s exactly who I want to hang out with at an event.

Anyway, on the off chance that your employer might consider this investment in the future of UX, spread the word that we’d love to have other companies involved in the UX London diversity scholarship programme.

Published on April 12, 2023 07:20

March 28, 2023

Design transformation on the Clearleft podcast

Boom! The Clearleft podcast is back!

The first episode of season four just dropped. It’s all about design transformation.

I’ve got to be honest, this episode is a little inside baseball. It’s a bit navel-gazey and soul-searching as I pick apart the messaging emblazoned on the Clearleft website:

The design transformation consultancy.


Whereas most of the previous episodes of the podcast would be of interest to our peers—fellow designers—this one feels like it might be of more interest to potential clients. But I hope it’s not too sales-y.

You’ll hear from Danish designer Maja Raunbak, and American in Amsterdam Nick Thiel, as well as Clearleft’s own Chris Pearce. And I’ve sampled a talk from the Leading Design archives by Stuart Frisby.

The episode clocks in at a brisk eighteen and a half minutes. Have a listen.

While you’re at it, take this opportunity to subscribe to the Clearleft podcast on Overcast, Spotify, Apple, Google or by using a good ol’-fashioned RSS feed. That way the next episodes in the season will magically appear in your podcatching software of choice.

But I’m not making any promises about when that will be. Previously, I released new episodes in a season on a weekly basis. This time I’m going to release each episode whenever it’s ready. That might mean there’ll be a week or two between episodes. Or there might be a month or so between episodes.

I realise that this unpredictable release cycle is the exact opposite of what you’re supposed to do, but it’s actually the most sensible way for me to make sure the podcast actually gets out. I was getting a bit overwhelmed with the prospect of having six episodes ready to launch over a six week period. What with curating UX London and other activities, it would’ve been too much for me to do.

So rather than delay this season any longer, I’m going to drop each episode whenever it’s done. Chaos! Anarchy! Dogs and cats living together!

Published on March 28, 2023 03:16

March 27, 2023

More speakers for UX London 2023

I’d like to play it cool when I announce the latest speakers for UX London 2023, like I could be all nonchalant and say, “oh yeah, did I not mention these people are also speaking…?”

But I wouldn’t be able to keep up that façade for longer than a second. The truth is I am excited to the point of skittish giggliness about this line-up.

Look, I’ll let you explore these speakers for yourself while I try to remain calm and simply enumerate the latest additions…

Ignacia Orellana, Service design and research consultant; Stefanie Posavec, Designer, artist and author; and David Dylan Thomas, Author, speaker, filmmaker.

The line-up is almost complete now! Just one more speaker to announce.

I highly recommend you get your UX London ticket if you haven’t already. You won’t want to miss this!

Published on March 27, 2023 07:13

March 23, 2023

Steam

Picture someone tediously going through a spreadsheet that someone has filled in by hand and finding yet another error.

“I wish to God these calculations had been executed by steam!” they cry.

The year was 1821 and technically the spreadsheet was a book of logarithmic tables. The frustrated cry came from Charles Babbage, who channelled his frustration into a scheme to create the world’s first computer.

His difference engine didn’t work out. Neither did his analytical engine. He’d spend his later years taking his frustrations out on street musicians, which—as a former busker myself—earns him a hairy eyeball from me.

But we’ve all been there, right? Some tedious task that feels soul-destroying in its monotony. Surely this is exactly what machines should be doing?

I have a hunch that this is where machine learning and large language models might turn out to be most useful. Not in creating breathtaking works of creativity, but in menial tasks that nobody enjoys.

Someone was telling me earlier today about how they took a bunch of haphazard notes in a client meeting. When the meeting was done, they needed to organise those notes into a coherent summary. Boring! But ChatGPT handled it just fine.

I don’t think that use-case is going to appear on the cover of Wired magazine anytime soon but it might be a truer glimpse of the future than any of the breathless claims being eagerly bandied about in Silicon Valley.

You know the way we no longer remember phone numbers, because, well, why would we now that we have machines to remember them for us? I’d be quite happy if machines did that for the annoying little repetitive tasks that nobody enjoys.

I’ll give you an example based on my own experience.

Regular expressions are my kryptonite. I’m rubbish at them. Any time I have to figure one out, the knowledge seeps out of my brain before long. I think that’s because I kind of resent having to internalise that knowledge. It doesn’t feel like something a human should have to know. “I wish to God these regular expressions had been calculated by steam!”

Now I can get a chatbot with a large language model to write the regular expression for me. I still need to describe what I want, so I need to write the instructions clearly. But all the gobbledygook that I’m writing for a machine now gets written by a machine. That seems fair.

Mind you, I wouldn’t blindly trust the output. I’d take that regular expression and run it through a chatbot, maybe a different chatbot running on a different large language model. “Explain what this regular expression does,” would be my prompt. If my input into the first chatbot matches the output of the second, I’d have some confidence in using the regular expression.
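
Another way to build that confidence is purely mechanical: run the generated expression against inputs whose answers you already know. A sketch, where the pattern is a stand-in for whatever a chatbot might return rather than actual chatbot output:

```javascript
// Suppose a chatbot claims this pattern matches four-digit years.
const candidate = /\b(19|20)\d{2}\b/;

// Before using it, test it against cases with known answers.
const shouldMatch = ["1999", "2023"];
const shouldNotMatch = ["123", "99999", "20235"];

const looksRight =
  shouldMatch.every((s) => candidate.test(s)) &&
  shouldNotMatch.every((s) => !candidate.test(s));
```

If looksRight comes back false, the expression goes back to the chatbot rather than into production.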

A friend of mine told me about using a large language model to help write SQL statements. He described his database structure to the chatbot, and then described what he wanted to select.

Again, I wouldn’t use that output without checking it first. But again, I might use another chatbot to do that checking. “Explain what this SQL statement does.”

Playing chatbots off against each other like this is kinda how machine learning works under the hood: generative adversarial networks.

Of course, the task of having to validate the output of a chatbot by checking it with another chatbot could get quite tedious. “I wish to God these large language model outputs had been validated by steam!”

Sounds like a job for machines.

Published on March 23, 2023 08:52

March 22, 2023

Disclosure

You know how when you’re on hold to any customer service line you hear a message that thanks you for calling and claims your call is important to them. The message always includes a disclaimer about calls possibly being recorded “for training purposes.”

Nobody expects that any training is ever actually going to happen—surely we would see some improvement if that kind of iterative feedback loop were actually in place. But we most certainly want to know that a call might be recorded. Recording a call without disclosure would be unethical and illegal.

Consider chatbots.

If you’re having a text-based (or maybe even voice-based) interaction with a customer service representative that doesn’t disclose that its output is the result of large language models, that too would be unethical. But, at the present moment in time, it would be perfectly legal.

That needs to change.

I suspect the necessary legislation will pass in Europe first. We’ll see if the USA follows.

In a way, this goes back to my obsession with seamful design. With something as inherently varied as the output of large language models, it’s vital that people have some way of evaluating what they’re told. I believe we should be able to see as much of the plumbing as possible.

The bare minimum amount of transparency is revealing that a machine is in the loop.

This shouldn’t be a controversial take. But I guarantee we’ll see resistance from tech companies trying to sell their “AI” tools as seamless, indistinguishable drop-in replacements for human workers.

Published on March 22, 2023 03:26

March 15, 2023

Another three speakers for UX London 2023

I know I’m being a tease, doling out these UX London speaker announcements in batches rather than one big reveal. Indulge me in my suspense-ratcheting behaviour.

Today I’d like to unveil three speakers whose surnames start with the letter H…

Stephen Hay, Creative Director at Rabobank; Asia Hoe, Senior Product Designer; and Amy Hupe, Design Systems consultant at Frankly Design.

Just look at how that line-up is coming together! There’ll be just one more announcement and then the roster will be complete.

But don’t wait for that. Grab your ticket now and I’ll see you in London on June 22nd and 23rd!

Published on March 15, 2023 08:45

March 14, 2023

Guessing

The last talk at the last dConstruct was by local clever clogs Anil Seth. It was called Your Brain Hallucinates Your Conscious Reality. It���s well worth a listen.

Anil covers a lot of the same ground in his excellent book, Being You. He describes a model of consciousness that inverts our intuitive understanding.

We tend to think of our day-to-day reality in a fairly mechanical cybernetic manner; we receive inputs through our senses and then make decisions about reality informed by those inputs.

As another former dConstruct speaker, Adam Buxton, puts it in his interview with Anil, it feels like that old Beano cartoon, the Numskulls, with little decision-making homunculi inside our heads.

But Anil posits that it works the other way around. We make a best guess of what the current state of reality is, and then we receive inputs from our senses, and then we adjust our model accordingly. There’s still a feedback loop, but cause and effect are flipped. First we predict or guess what’s happening, then we receive information. Rinse and repeat.

The book goes further and applies this to our very sense of self. We make a best guess of our sense of self and then adjust that model constantly based on our experiences.

There’s a natural tendency for us to balk at this proposition because it doesn’t seem rational. The rational model would be to make informed calculations based on available data—like computers do.

Maybe that’s what sets us apart from computers. Computers can make decisions based on data. But we can make guesses.

Enter machine learning and large language models. Now, for the first time, it appears that computers can make guesses.

The guess-making is not at all like what our brains do—large language models require enormous amounts of inputs before they can make a single guess—but still, this should be the breakthrough to be shouted from the rooftops: we’ve taught machines how to guess!

And yet. Almost every breathless press release touting some revitalised service that uses AI talks instead about accuracy. It would be far more honest to tout the really exceptional new feature: imagination.

Using AI, we will guess who should get a mortgage.

Using AI, we will guess who should get hired.

Using AI, we will guess who should get a strict prison sentence.


Reframed like that, it’s easy to see why technologists want to bury the lede.

Alas, this means that large language models are being put to use for exactly the wrong kind of scenarios.

(This, by the way, is also true of immersive “virtual reality” environments. Instead of trying to accurately recreate real-world places like meeting rooms, we should be leaning into the hallucinatory power of a technology that can generate dream-like situations where the pleasure comes from relinquishing control.)

Take search engines. They’re based entirely on trust and accuracy. Introducing a chatbot that confidently conflates truth and fiction doesn’t bode well for the long-term reputation of that service.

But what if this is an interface problem?

Currently facts and guesses are presented with equal confidence, hence the accurate descriptions of the outputs as bullshit or mansplaining as a service.

What if the more fanciful guesses were marked as such?

As it is, there’s a “temperature” control that can be adjusted when generating these outputs; the more the dial is cranked, the further the outputs will stray from the safest predictions. What if that could be reflected in the output?

I don’t know what that would look like. It could be typographic—some markers to indicate which bits should be taken with pinches of salt. Or it could be through content design—phrases like “Perhaps…”, “Maybe…” or “It’s possible but unlikely that…”
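
A content-design version of that could be a simple mapping from the model’s confidence in a statement to a caveat phrase. The thresholds and wording below are entirely invented, just to make the idea concrete:

```javascript
// Sketch: prefix a generated statement with a hedge based on how
// confident the system was in it. Thresholds and phrasing are illustrative.
function hedge(statement, confidence) {
  if (confidence > 0.9) return statement;
  if (confidence > 0.6) return `Perhaps… ${statement}`;
  if (confidence > 0.3) return `Maybe… ${statement}`;
  return `It's possible but unlikely that… ${statement}`;
}
```

The hard part, of course, is getting a trustworthy confidence number out of the model in the first place.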

I’m sure you’ve seen the outputs when people request that ChatGPT write their biography. Perfectly accurate statements are generated side-by-side with complete fabrications. This reinforces our scepticism of these tools. But imagine how differently the fabrications would read if they were preceded by some simple caveats.

A little bit of programmed humility could go a long way.

Right now, these chatbots are attempting to appear seamless. If 80% or 90% of their output is accurate, then blustering through the other 10% or 20% should be fine, right? But I think the experience for the end user would be immensely more empowering if these chatbots were designed seamfully. Expose the wires. Show the workings-out.

Mind you, that only works if there is some way to distinguish between fact and fabrication. If there’s no way to tell how much guessing is happening, then that’s a major problem. If you can’t tell me whether something is 50% true or 75% true or 25% true, then the only rational response is to treat the entire output as suspect.

I think there’s a fundamental misunderstanding behind the design of these chatbots that goes all the way back to the Turing test. There’s this idea that the way to make a chatbot believable and trustworthy is to make it appear human, attempting to hide the gears of the machine. But the real way to gain trust is through honesty.

I want a machine to tell me when it’s guessing. That won’t make me trust it less. Quite the opposite.

After all, to guess is human.

Published on March 14, 2023 05:59
