Jeremy Keith's Blog, page 137
February 2, 2012
Image-y nation
There's a great article by Wilto in the latest edition of A List Apart. It's called Responsive Images: How they Almost Worked and What We Need.
What I really like about the article is that it details the thought process that went into working out responsive images for the Boston Globe. Don't get me wrong: I like it when articles provide code, but I really like it when they provide an insight into how the code was created.
The Filament Group team working on the Boston Globe site were attempting to abide by the two rules of responsive images that I've outlined before:
The small image should be default.
Don't load images twice (in other words, don't load the small images and the larger images).
There are three reasons for this: performance, performance, performance. As Luke put it so succinctly:
Being a Web designer & not considering speed/performance is like being a print designer & not considering how your colors will print.
That said, I came across a situation recently where loading both images for desktop browsers could actually be a pretty good thing to do.
Wait, wait! Hear me out…
Okay, so the way that many of the responsive image techniques work is by means of a cookie. The basic challenge of responsive images is for the client to communicate with the server (and let it know the viewport size) before the server starts sending images. Because cookies can be used both by the client and the server, they offer a way to do that:
As the document begins to load, set a cookie on the client side with JavaScript recording the viewport width.
On the server side, when an image is requested, check for the contents of that cookie and serve up the appropriate image for the viewport size.
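The steps above can be sketched as follows. The cookie name, the breakpoint, and the file names are my own illustration, not taken from any particular technique:

```javascript
// In the browser, you'd record the viewport width in a cookie as early
// as possible, so that subsequent image requests carry it to the server:
//
//   document.cookie = 'vw=' + Math.max(screen.width, window.innerWidth) + '; path=/';
//
// On the server, each image request reads that cookie back and picks a
// size. The selection itself is just a comparison:
function imageForViewport(cookieWidth, smallSrc, largeSrc) {
  var breakpoint = 800; // roughly a 50em media query at 16px per em
  return cookieWidth >= breakpoint ? largeSrc : smallSrc;
}
```

The race condition described below comes from the fact that the cookie-setting script and the browser's image pre-fetcher are running at the same time.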
There are some variations on this: you could initially route all image requests to send back a 1x1 pixel blank .gif and then, after the page has loaded, use JavaScript to load in the appropriate image for the viewport size.
That's the theory anyway. As Mat outlined in his article, there's a bit of a race condition with the cookie being set by the client and the images being sent from the server. New browsers are doing some clever pre-fetching of images. That means they fetch the small images first, violating the second rule of responsive images.
But, like I said, in some situations that might not be so bad…
Josh is working on a responsive project at Clearleft right now—and doing a superb job of it—where he's deliberately cutting the server-side aspect of responsive images out of the picture. He's still starting with the small (mobile) images by default and then, after the page has loaded, swaps them out with JavaScript if the viewport is wide enough.
Suppose the small image is 20K and the large image is 60K. That means that desktop browsers are now loading 80K of images (instead of 60K). On the face of it, this sounds like really bad news for performance… but because that 60K image is downloaded after the page has loaded, the perceived performance isn't bad at all. In fact, the experience feels quite snappy. Here's what happens:
The markup contains the small image as well as some kind of indication where the larger size resides (either in a query string or in a data- attribute):
[code sample missing: an img element pointing at the small image, with the larger size indicated in a query string or data- attribute]
That's about 240 by 180 pixels. Now for the large-screen layout, we want those pictures to be more like 500 by 375 pixels:
@media screen and (min-width: 50em) {
.photo {
width: 500px;
height: 375px;
}
}
That results in a "blown up" pixely image.
Once the page has loaded, that small image is swapped out for the larger image specified in the data- attribute.
Large-screen browsers have now downloaded 20K more than they actually needed but the perceived performance of the page was actually pretty snappy:
Blown-up pixely images act as placeholders while the page is downloading.
Once the page has loaded, the full-sized images snap into place.
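A minimal sketch of that swap might look like this. The data-fullsrc attribute name and the 800-pixel breakpoint are my own illustration, not Josh's actual code:

```javascript
// Assumed markup (attribute name is illustrative):
//   <img src="photo-small.jpg" data-fullsrc="photo-large.jpg" class="photo">
//
// The decision is factored out so it can be exercised without a DOM:
function shouldSwap(viewportWidth) {
  return viewportWidth >= 800; // roughly 50em at the default 16px font size
}

// Swap each small image for its full-sized version if the viewport
// is wide enough; otherwise leave the small images in place.
function swapImages(images, viewportWidth) {
  if (!shouldSwap(viewportWidth)) return;
  images.forEach(function (img) {
    img.src = img.getAttribute('data-fullsrc');
  });
}

// In the browser you'd wire it up after the load event:
// window.addEventListener('load', function () {
//   swapImages(
//     Array.prototype.slice.call(document.querySelectorAll('img[data-fullsrc]')),
//     window.innerWidth
//   );
// });
```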
Does that sound familiar? This is exactly what the lowsrc attribute did.
I'm probably showing my age by even acknowledging the existence of lowsrc. It was a proprietary attribute created by Netscape back in the days of universally scarce bandwidth:
[code sample missing: an IMG tag using both SRC and LOWSRC attributes]
(See how I'm using unquoted attributes and uppercase tags and attributes for added nostalgic value?)
The lowsrc value would usually be a monochrome version of the image in the src attribute.
And we only had 256 colours to play with. You tell that to the web developers today …they wouldn't believe you.
Seriously though, it's funny how problems from the early days of the web have a habit of resurfacing. I remember when Ajax was getting popular, all the problems associated with frames rose from the grave: bookmarking, breaking the back button, etc. Now that we're in a time of small-screen devices on low-bandwidth networks, we're rediscovering a lot of the same issues we had when we were developing for 640 pixel wide screens with 28K or 56K modems.
Ultimately, I think that the great brainstorming around fixing the problems with the img element shows a fundamental impedance mismatch between the fluid nature of the web and the fixed pixel-based nature of bitmap images. We've got ems for setting type and percentages for specifying the proportions of our grids, but when it comes to photographic images, all we've got is the pixel—a unit that makes less and less sense every day.
Tagged with
responsive
images
development
lowsrc
February 1, 2012
Publishing Paranormal Interactivity
I've published the transcript of a talk I gave at An Event Apart in 2010. It's mostly about interaction design, with a couple of diversions into progressive enhancement and personality in products. It's called Paranormal Interactivity.
I had a lot of fun with this talk. It's interspersed with videos from The Hitchhiker's Guide To The Galaxy, Alan Partridge, and Super Mario, with special guest appearances from the existentialist chalkboard and Poshy's upper back torso.
If you don't feel like reading it, you can always watch the video or listen to the audio.
Adactio: Articles—Paranormal Interactivity on Huffduffer
You could even look at the slides but, as I always say, they won't make much sense without the context of the presentation.
Tagged with
transcript
conference
presentation
aea
aneventapart
aea2010
interaction
dConstruct Audio Archive
Clearleft has been running dConstruct since 2005. You can still visit the site for each year:
2005
2006
2007
2008
2009
2010
2011
Right from the first event, we recorded and released a podcast of the talks—thanks to Drew's l33t audio skillz—and all of those audio files are still online. That's quite a collection of aural goodies. So we decided to put them all together in one place. I give you…
Michelle came up with the visual design—evolving it from last year's dConstruct site—while I worked on the build. The small-screen and large-screen layouts were designed simultaneously and then I took a small-screen first approach to building it, progressively layering on the wider layouts and tweaking for the in-between states that didn't have mock-ups. It was a lot of fun.
There's nothing very complicated going on in the back end. I'm just using a JSON file to store all the info about the talks and I'm piggybacking on the dConstruct Huffduffer account to offer up podcast feeds by year and by category. The categories are fairly arbitrary and unscientific but they give a good indication of the kind of topics that dConstruct speakers have covered over the years …and you can see the trend of each topic over time in a sparkline on each category page, generated by Google's Chart API.
One tricky challenge was figuring out how to handle the images of speakers to make them responsive. Initially I was looking at Andy's context-aware responsive images because the small-screen single-column layout often displayed wider images than on a larger screen's multiple-column layout. In the end though, I decided that my time would be better spent optimising the images for every screen by getting the file sizes as low as I could, so I spent a lot of time in Photoshop blurring backgrounds and messing with export settings. So while the images are all 450 pixels wide by 300 pixels tall, the average file size is around 20K. That's not ideal for small-screen, low-bandwidth devices that are squishing the images down, but I figured it was a good start.
There's still lots more I'd like to tweak (I need to add links to slides, transcripts and videos where available) but rather than wait for everything to be perfect, I thought I might as well launch it now and continue to work on it.
So feel free to explore the archive, find some talks you like, subscribe to a podcast of your liking or huffduff anything that catches your ear.
And if listening to all the previous talks piques your interest, you'll be happy to hear that dConstruct will be back this year …and it's going to be splendid!
Tagged with
dconstruct
audio
responsive
design
January 31, 2012
Brighton Coffee
We've had a new intern at Clearleft for the past few weeks: Alex Jones. He likes a good coffee and as it's his first time in Brighton, I promised I'd tell him where he could find the best flat whites. So I made a map tale of Brighton Coffee.
January 30, 2012
Detection
When I wrote about responsible responsive images a few months back, I outlined my two golden rules when evaluating the various techniques out there:
The small image should be default.
Don't load images twice (in other words, don't load the small images and the larger images).
I also described why that led to my dissatisfaction with most server-side device libraries for user-agent sniffing:
When you consider the way that I'm approaching responsive images, those libraries are over-engineered. They contain a massive list of mobile user-agent strings that I'll never need. Remember, I'm taking a mobile-first approach and assuming a mobile browser by default. So if I'm going to overturn that assumption, all I need is a list of desktop user-agent strings.
I finished by asking:
Anybody fancy putting it together?
Well, it turns out that Brett Jankord is doing just that with a device-detection script called Categorizr:
Instead of assuming the device is a desktop, and detecting mobile and tablet device user agents, Categorizr is a mobile first based device detection. It assumes the device is mobile and sets up checks to see if it's a desktop or tablet. Desktops are fairly easy to detect, the user agents are known, and are not changing anytime soon.
It isn't ready for public consumption yet and there are plenty of known issues to iron out first, but I think the fundamental approach is spot-on:
By assuming devices are mobile from the beginning, Categorizr aims to be more future friendly. When new phones come out, you don't need to worry if their new user agent is in your device detection script since devices are assumed mobile from the start.
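That mobile-first approach can be sketched in outline. The patterns below are my own illustration, not Categorizr's actual tests (and real user-agent detection needs a far more careful list):

```javascript
// Mobile-first device detection: assume 'mobile' by default, and only
// reclassify when the user agent matches a known tablet or desktop
// pattern. Unknown and future devices fall through to 'mobile'.
function categorize(ua) {
  if (/ipad|tablet/i.test(ua)) {
    return 'tablet';
  }
  if (/windows nt|macintosh|x11.*linux/i.test(ua) && !/mobi/i.test(ua)) {
    return 'desktop';
  }
  return 'mobile'; // the future-friendly default
}
```

Note how the logic inverts the usual approach: there is no list of mobile devices to maintain, only the comparatively stable set of desktop user-agent strings.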
Tagged with
mobilefirst
futurefriendly
ffly
device
detection
categorizr
Making
There's definitely something stirring in the geek zeitgeist: something three-dimensional.
Tim Maly just published an article in Technology Review called Why 3-D Printing Isn't Like Virtual Reality:
Something interesting happens when the cost of tooling-up falls. There comes a point where your production runs are small enough that the economies of scale that justify container ships from China stop working.
Meanwhile The Atlantic interviewed Brendan for an article called Why Apple Should Start Making a 3D Printer Right Now:
3D Printing is unlikely to prove as satisfying to manual labor evangelists as an afternoon spent with a monkey wrench. But by bringing more and more people into the innovation process, 3D printers could usher in a new generation of builders and designers and tinkerers, just as Legos and erector sets turned previous generations into amateur engineers and architects.
Last month Anil Dash published his wishlist for the direction this technology could take: 3D Printing, Teleporters and Wishes:
Every 3D printer should seamlessly integrate a 3D scanner, even if it makes the device cost much more. The reason is simple: If you set the expectation that every device can both input and output 3D objects, you provide the necessary fundamentals for network effects to take off amongst creators. But no, these devices are not "3D fax machines". What you've actually made, when you have an internet-connected device that can both send and receive 3D-printed objects, is a teleporter.
Anil's frustrations and hopes echo a white paper from 2010 by Michael Weinberg called It Will Be Awesome if They Don't Screw it Up: 3D Printing, Intellectual Property, and the Fight Over the Next Great Disruptive Technology:
The ability to reproduce physical objects in small workshops and at home is potentially just as revolutionary as the ability to summon information from any source onto a computer screen.
Michael Weinberg also appears as one of the guests on an episode of ABC Radio's Future Tense, along with Tom Standage, one of my favourite non-fiction authors.
But my favourite piece of speculation on where this technology could take us comes from Russell Davies. He gave an excellent talk as part of the BBC's Four Thought series in which he talks not so much about The Internet Of Things, but The Geocities Of Things. I like that.
BBC - Podcasts - Four Thought: Russell M. Davies 21 Sept 2011 on Huffduffer
It's a short talk. Take the time to listen to it, then go grab a copy of Cory Doctorow's book Makers and have a poke around Thingiverse.
Tagged with
3dprinting
hacking
January 29, 2012
Eighteen
On Twitter the other day, Justin Hall wrote:
hah! 18 years ago today, I posted my home page on the public web; here's a 27 January 1994 version bit.ly/AraMW0
Eighteen years! That's quite something. For reference, Justin's site links.net is generally acknowledged to be the web's first blog, before Jorn Barger coined the term "weblog" (or Peter coined the more common contraction).
If you go right back to the start of links.net, Justin explains that he was inspired to start publishing online by a 1993 article in the New York Times—he has kept a copy on his site. What's fascinating about the article is that, although it's talking about the growth of the World Wide Web, it focuses on the rising popularity of Mosaic:
A new software program available free to companies and individuals is helping even novice computer users find their way around the global Internet, the network of networks that is rich in information but can be baffling to navigate.
From a journalistic point of view, this makes a lot of sense: focusing on the interface to the web, rather than trying to explain the more abstract nature of the web itself is a good human-centric approach. When the author does get around to writing about the web, there's a lot that must be explained for the audience of the time:
With hypertext, highlighted key words and images are employed to point a user to related sources of information.
"I realized that if everyone had the same information as me, my life would be easier," Mr. Berners-Lee said.
From a small electronic community of physicists, the World-Wide Web has grown into an international system of data base "server" computers offering diverse information.
Links, servers, the World Wide Web …these were actually pretty tricky concepts to explain, and unlikely to elicit excitement. But explaining the browser gets straight to the heart of how it felt to surf the web:
Mosaic lets computer users simply click a mouse on words or images on their computer screens to summon text, sound and images from many of the hundreds of data bases on the Internet that have been configured to work with Mosaic.
Click the mouse: there's a NASA weather movie taken from a satellite high over the Pacific Ocean. A few more clicks, and one is reading a speech by President Clinton, as digitally stored at the University of Missouri. Click-click: a sampler of digital music recordings as compiled by MTV. Click again, et voila: a small digital snapshot reveals whether a certain coffee pot in a computer science laboratory at Cambridge University in England is empty or full.
These days we take it for granted that we have the ability to surf around from website to website (and these days we do so on many more devices). I think it's good to remember just how remarkable that ability is.
Thanks, Tim Berners-Lee for dreaming up the web. Thanks, Marc Andreessen for giving us a tool to navigate the web. Thanks, Justin Hall for publishing on the web.
Tagged with
web
history
blogging
publishing
January 26, 2012
Cool your eyes don't change
At last November's Build conference I gave a talk on digital preservation called All Our Yesterdays:
Our communication methods have improved over time, from stone tablets, papyrus, and vellum through to the printing press and the World Wide Web. But while the web has democratised publishing, allowing anyone to share ideas with a global audience, it doesn't appear to be the best medium for preserving our cultural resources: websites and documents disappear down the digital memory hole every day. This presentation will look at the scale of the problem and propose methods for tackling our collective data loss.
The audio has been huffduffed.
Adactio: Articles—All Our Yesterdays on Huffduffer
I've published a transcription over in the "articles" section.
I blogged a list of relevant links shortly after the presentation.
You can also download the slides or view them on speakerdeck but, as usual, they won't make much sense out of context.
I hope you'll enjoy watching or reading or listening to the talk as much as I enjoyed presenting it.
Tagged with
buildconf
conference
belfast
digital
preservation
transcript
video
audio
presentation
January 17, 2012
One moment
I use my walk to and from work every day as an opportunity to catch up on my Huffduffer podcast. Today I started listening to a talk I've really been looking forward to. It's a Long Now seminar called Universal Access To All Knowledge by one of my heroes: Brewster Kahle, founder of The Internet Archive.
Brewster Kahle: Universal Access to All Knowledge — The Long Now on Huffduffer
As expected, it's an excellent talk. I caught the start of it on my walk in to work this morning and I picked up where I left off on my walk home this evening. In fact, I deliberately didn't get the bus home—despite the cold weather—so that I'd get plenty of listening done.
Round about the 23 minute mark he starts talking about Open Library, the fantastic project that George worked on to provide a web page for every book. He describes how it works as a lending library where an electronic version of a book can be checked out by one person at a time:
You can click on: hey! there's this HTML5 For Web Designers. We bought this book—we bought this book from a publisher such that we could lend it. So you can say "Oh, I want to borrow this book" and it says "Oh, it's checked out." Darn! And you can add it to your list and remind yourself to go and get it some other time.
Holy crap! Did Brewster Kahle just use my book to demonstrate Open Library‽
It literally stopped me in my tracks. I stopped walking and stared at my phone, gobsmacked.
It was a very surreal moment. It was also a very happy moment.
Now I'm documenting that moment—and I don't just mean on a third-party service like Twitter or Facebook. I want to be able to revisit that moment in the future so I'm documenting it at my own URL …though I'm very happy that the Internet Archive will also have a copy.
Tagged with
huffduffer
podcast
longnow
archive
books
html5forwebdesigners
January 16, 2012
Audio Update
Aral recently released the videos from last September's Update conference. You can watch the video of my talk if you like or, if video isn't your bag, I've published a transcription of the talk.
It's called One Web, Many Devices and I'm pretty happy with how it turned out. It's a short talk—just under 17 minutes—but I think I made my point well, without any um-ing and ah-ing. At the time I described the talk like this:
I went in to the lion's den to encourage the assembled creative minds to forego the walled garden of Apple's app store in favour of the open web.
It certainly got people talking. Addy Osmani wrote an op-ed piece in .net magazine after seeing the talk.
The somewhat contentious talk was followed by an even more contentious panel, which Amber described as Jeremy Keith vs. Everyone Else. The video of that panel has been published too. My favourite bit is around the five-minute mark where I nailed my colours to the mast.
Me: I'm not going to create something specifically for Windows Phone 7. I'm not going to create a specific Windows Phone 7 app. I'm not going to create a specific iPhone app or a specific Android app because I have as much interest in doing that as I do in creating a CD-ROM or a Laserdisc…
Aral: I don't think that's a valid analogy.
Me: Give it time.
But I am creating stuff that can be accessed on all those devices because an iPhone and Windows Phone 7 and Android—they all come with web browsers.
I was of course taking a deliberately extreme stance and, as I said at the time, the truthful answer to most of the questions raised during the panel discussion is "it depends" …but that would've made for a very dull panel.
Unfortunately the audio of the talks and panels from Update hasn't been published—just videos. I've managed to extract an mp3 file of my talk which involved going to some dodgy warez sitez.
Adactio: Articles—One Web, Many Devices on Huffduffer
I wish conference organisers would export the audio of any talks that they're publishing as video. Creating the sound file at that point is a simple one-click step. But once the videos are up online—be it on YouTube or Vimeo—it's a lot, lot harder to get just the audio.
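For anyone who does have the source video to hand, extracting the audio really is a one-liner with a free tool like ffmpeg (file names here are placeholders):

```shell
# Drop the video stream (-vn) and encode the audio track as MP3.
# -q:a 4 selects a reasonable VBR quality for spoken-word audio.
ffmpeg -i conference-talk.mp4 -vn -c:a libmp3lame -q:a 4 conference-talk.mp3
```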
Not everyone wants to watch video. In fact, I bet there are plenty of people who listen to conference talks by opening the video in a separate tab so they can listen to it while they do something else. That's one of the advantages of publishing conference audio: it allows people to catch up on talks without having to devote all their senses. I've written about this before:
Not that I have anything against the moving image; it's just that television, film and video demand more from your senses. Lend me your ears! and your eyes. With your ears and eyes engaged, it's pretty hard to do much else. So the default position for enjoying television is sitting down.
A purely audio channel demands only aural attention. That means that radio—and by extension, podcasts—can be enjoyed at the same time as other actions; walking around, working out at the gym. Perhaps it's this symbiotic, rather than parasitic, arrangement that I find engaging.
When I was chatting with Jesse from SFF Audio he told me how he often puts video podcasts (vodcasts?) on to his iPod/iPhone but then listens to them with the device in his pocket. That's quite a waste of bandwidth but if no separate audio is made available, the would-be listener is left with no choice.
SFFaudio with Jeremy Keith on Huffduffer
So conference organisers: please, please take a second or two to export an audio file if you're publishing a video. Thanks.
Tagged with
update
conference
audio
video
publishing
transcription
mobile
native
oneweb
