Jeremy Keith's Blog, page 129

January 22, 2013

Twitter permissions

Twitter has come in for a lot of (justifiable) criticism for changes to its API that make it somewhat developer-hostile. But it has to be said that developers don’t always behave responsibly when they’re using the API.



The classic example of this is the granting of permissions. James summed it up nicely: it’s just plain rude to ask for write-access to my Twitter account before I’ve even started to use your service. I could understand it if the service needed to post to my timeline, but most of the time these services claim that they want me to sign up via Twitter so that I can find my friends who are also using the service — that doesn’t require write access. Quite often, these requests to authenticate are accompanied by reassurances like “we’ll never tweet without your permission” …in which case, why ask for write-access in the first place?



To be fair, it used to be a lot harder to separate read and write permissions for Twitter authentication. But now it’s actually not that bad, although it’s still not as granular as it could be.
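For what it’s worth, the mechanism for this (as I recall from Twitter’s OAuth documentation at the time) was the `x_auth_access_type` parameter on the request-token step: pass `read` and the authorisation screen only asks for read access. Here’s a minimal sketch of the idea; the OAuth 1.0a signing is omitted, and the helper function is mine, not part of any API:

```javascript
// Sketch only: the OAuth 1.0a signing step is omitted for brevity.
// x_auth_access_type was Twitter's documented way of requesting a lower
// permission level than the app's default when obtaining a request token.
var REQUEST_TOKEN_URL = 'https://api.twitter.com/oauth/request_token';

function requestTokenParams(callbackUrl, accessType) {
  if (accessType !== 'read' && accessType !== 'write') {
    throw new Error('accessType must be "read" or "write"');
  }
  return {
    oauth_callback: callbackUrl,
    x_auth_access_type: accessType
  };
}
```

The point being: a service that only wants to find your friends can pass `read` here and never see the “post tweets on your behalf” permission at all.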



One of the services that used to require write-access to my Twitter account was Lanyrd. I gave it permission, but only because I knew the people behind the service (a decision-making process that doesn’t scale very well). I always felt uneasy that Lanyrd had write-access to my timeline. Eventually I decided that I couldn’t in good conscience allow the lovely Lanyrd people to be an exception just because I knew where they lived. Fortunately, they concurred with my unease. They changed their log-in system so that it only requires read-access. If and when they need write-access, that’s the point at which they ask for it:




We now ask for read-only permission the first time you sign in, and only ask to upgrade to write access later on when you do something that needs it; for example following someone on Twitter from our attendee directory.




Far too many services ask for write-access up front, without providing a justification. When asked for an explanation, I’m sure most of them would say “well, that’s how everyone else does it”, and they would, alas, be correct.



What’s worse is that users grant write-access so freely. I was somewhat shocked by the number of tech-savvy friends who unwittingly spammed my timeline with automated tweets from a service called Twitter Counter. Their reactions ranged from sheepish to embarrassed to angry.



I urge you to go through your Twitter settings and prune any services that currently have write-access that don’t actually need it. You may be surprised by the sheer volume of apps that can post to Twitter on your behalf. Do you trust them all? Are you certain that they won’t be bought up by a different, less trustworthy company?



If a service asks me to sign up but insists on having write-access to my Twitter account, it feels like being asked out on a date while insisting I sign a pre-nuptial agreement. Not only is it somewhat premature, it shows a certain lack of respect.



Not every service behaves so ungallantly. Done Not Done, 1001 Beers, and Mapalong all use Twitter for log-in, but none of them require write-access up-front.



Branch and Medium are typical examples of bad actors in this regard. The core functionality of these sites has nothing to do with posting to Twitter, but both sites want write-access so that they can potentially post to Twitter on my behalf later on. I know that I won’t ever want either service to do that. I can either trust them, or not use the service at all. Signing up without granting write-access to my Twitter account isn’t an option.



I sent some feedback to Branch and part of their response was to say the problem was with the way Twitter lumps permissions together. That used to be true, but Lanyrd’s exemplary use of Twitter for log-in makes that argument somewhat hollow.



In the case of Branch, Medium, and many other services, Twitter authentication is the only way to sign up and start using the service. Using a username and password isn’t an option. On the face of it, requiring Twitter for authentication doesn’t sound all that different to requiring an email address for authentication. But demanding write-access to Twitter is the equivalent of demanding the ability to send emails from your email address.



The way that so many services unnecessarily ask for write-access to Twitter—and the way that so many users unquestioningly grant it—reminds me of the password anti-pattern all over again. Because this rude behaviour is so prevalent, it has now become the norm. If we want this situation to change, we need to demand more respect.



The next time that a service demands unwarranted write-access to your Twitter account, refuse to grant it. Then tell the people behind that service why you’re refusing to sign up.



And please take a moment to go through the services you’ve already authorised.





Tagged with
twitter
authentication
permission
api
respect
antipattern
lanyrd
branch
medium

Published on January 22, 2013 10:09

January 21, 2013

Long time

A few years back, I was on a road trip in the States with my friend Dan. We drove through Maryland and Virginia to the sites of American Civil War battles—Gettysburg, Antietam. I was reading Tom Standage’s magnificent book The Victorian Internet at the time. When I was done with the book, I passed it on to Dan. He loved it. A few years later, he sent me a gift: a glass telegraph insulator.



Glass telegraph insulator from New York



Last week I received another gift from Dan: a telegraph key.



Telegraph key



It’s lovely. If my knowledge of basic electronics were better, I’d hook it up to an Arduino and tweet with it.



Dan came over to the UK for a visit last month. We had a lovely time wandering around Brighton and London together. At one point, we popped into the National Portrait Gallery. There was one painting he really wanted to see: the portrait of Samuel Pepys.



Pepys



“Were you reading the online Pepys diary?”, I asked.



“Oh, yes!”, he said.



“I know the guy who did that!”



The “guy who did that” is, of course, the brilliant Phil Gyford.



Phil came down to Brighton and gave a Skillswap talk all about the ten-year-long project.



The diary of Samuel Pepys: Telling a complex story online on Huffduffer



Now Phil has restarted the diary. He wrote a really great piece about what it’s like overhauling a site that has been online for a decade. Given that I spent a lot of my time last year overhauling The Session (which has been online in some form or another since the late nineties), I can relate to his perspective on trying to choose long-term technologies:




Looking ahead, how will I feel about this Django backend in ten years’ time? I’ve no idea what the state of the platform will be in a decade.




I was thinking about switching The Session over to Django, but I decided against it in the end. I figured that the pain involved in trying to retrofit an existing site (as opposed to starting a brand new project) would be too much. So the site is still written in the very uncool LAMP stack: Linux, Apache, MySQL, and PHP.



Mind you, Marco Arment makes the point in his Webstock talk that there’s a real value to using tried and tested “boring” technologies.



One area where I’ve found myself becoming increasingly wary over time is the use of third-party APIs. I say that with a heavy heart—back at dConstruct 2006 I was talking all about The Joy of API. But Yahoo, Google, Twitter …they’ve all deprecated or backtracked on their offerings to developers.



Anyway, this is something that has been on my mind a lot lately: evaluating technologies and services in terms of their long-term benefit instead of just their short-term hit. It’s something that we need to think about more as developers, and it’s certainly something that we need to think about more as users.



Compared with genuinely long-term projects like the 10,000 year Clock of the Long Now, making something long-lasting on the web shouldn’t be all that challenging. The real challenge is acknowledging that this is even an issue. As Phil puts it:




I don’t know how much individuals and companies habitually think about this. Is it possible to plan for how your online service will work over the next ten years, never mind longer?




As my Long Bet illustrates, I can be somewhat pessimistic about the longevity of our web creations:




The original URL for this prediction (www.longbets.org/601) will no longer be available in eleven years.




But I really hope I lose that bet. Maybe I’ll suggest to Matt (my challenger on the bet) that we meet up on February 22nd, 2022 at the Long Now Salon. It doesn’t exist yet. But give it time.





Tagged with
time
telegraph
pepys
thesession
development
longnow

Published on January 21, 2013 10:11

January 17, 2013

A question of time

Some of the guys at work occasionally provide answers to .net magazine’s “big question” feature. When they told me about the latest question that landed in their inboxes, I felt I just had to stick my oar in and provide my answer.



I’m publishing my response here, so that if they decide not to publish it in the magazine or on the website (or if they edit it down), I’ve got a public record of my stance on this very important topic.



The question is:




If you could send a message back to a younger designer or developer self, what would it say? What professional advice would you give a younger you?




This is my answer:



Rather than send a message back to my younger self, I would destroy the message-sending technology immediately. The potential for universe-ending paradoxes is too great.



I know that it would be tempting to give some sort of knowledge of the future to my younger self, but it would be the equivalent of attempting to kill Hitler—that never ends well.



Any knowledge I supplied to my past self would cause my past self to behave differently, thereby either:




destroying the timeline that my present self inhabits (assuming a branching many-worlds multiverse) or
altering my present self, possibly to the extent that the message-sending technology never gets invented. Instant paradox.


But to answer your question, if I could send a message back to a younger designer or developer self, the professional advice I would give would be:




Jeremy,



When, at some point in the future, you come across the technology capable of sending a message like this back to your past self, destroy it immediately!



But I know that you will not heed this advice. If you did, you wouldn’t be reading this.



On the other hand, I have no memory of ever receiving this message, so perhaps you did the right thing after all.



Jeremy






Tagged with
question
answer
timetravel
sci-fi
sciencefiction
paradox

Published on January 17, 2013 11:24

Clearleft.com past and present

We finally launched the long-overdue redesign of the Clearleft website last week. We launched it late on Friday afternoon, because, hey! that’s not a stupid time to push something live or anything.



The actual moment of launch was initiated by Josh, who had hacked together a physical launch button containing a Teensy USB development board.



The launch button
Preparing to launch



So nerdy.



Mind you, just because the site is now live doesn’t mean the work is done. Far from it, as Paul pointed out:



Keep the feedback and comments about the new @clearleft site coming. Launching a site is just the start; much to iterate upon and improve.

— Paul Lloyd (@paulrobertlloyd) January 11, 2013


But it’s nice to finally have something new up. We were all getting quite embarrassed by the old site.



Still, rather than throw the old design away and never speak of it again, we’ve archived it. We’ve archived every iteration of the site:




Version 1 launched in 2005. I wrote about it back then. It looked very much of its time. This was before responsive design, but it was, of course, nice and liquid.
Version 2 came a few years later. There were some little bits I liked about it, but it always felt a bit “off”.
Version 3 was more of a re-alignment than a full-blown redesign: an attempt to fix some of the things that felt “off” about the previous version.
Version 4 is where we are now. We don’t love it, but we don’t hate it either. Considering how long it took to finally get this one done, we should probably start planning the next iteration now.


I’m glad that we’ve kept the archived versions online. I quite enjoy seeing the progression of the visual design and the technologies used under the hood.





Tagged with
clearleft
redesign
iteration
archive
digital
preservation

Published on January 17, 2013 09:41

January 11, 2013

Dealing with IE again

People have been linking to—and saying nice things about—my musings on dealing with Internet Explorer. Thank you. You’re very kind. But I think I should clarify a few things.



If you’re describing the techniques I showed (using Sass and conditional comments) as practical or useful, I appreciate the sentiment but personally I wouldn’t describe them as either. Jake’s technique is genuinely useful and practical.



I wasn’t really trying to provide any practical “take-aways”. I was just thinking out loud. The only real point to my ramblings was at the end:




When we come up with clever hacks and polyfills for dealing with older versions of Internet Explorer, we shouldn’t feel pleased about it. We should feel angry.




My point is that we really shouldn’t have to do this. And, in fact, we don’t have to do this. We choose to do this.



Take the particular situation I was describing with a user of The Session who was using IE8 on Windows XP with a monitor set to 800x600 pixels. A lot of people picked up on this observation:




As a percentage, this demographic is tiny. But this isn’t a number. It’s a person. That person matters.




But here’s the thing: that person only started to experience problems when I chose to try to “help” IE8 users. If I had correctly treated IE8 as the legacy browser that it is, those users would have received the baseline experience …which was absolutely fine. Not great, but fine. But instead, I decided to jump in with my hacks, my preprocessor, my conditional comments, and worst of all, my assumptions about the viewport size.



In this case, I only have myself to blame. This is a personal project so I’m the client. I decided that I wanted to give IE8 and IE7 users the same kind of desktop navigation that more modern browsers were getting. All the subsequent pain for me as the developer, and for the particular user who had problems, is entirely my fault. If you’re working at a company where your boss or your client insists on parity for IE8 or IE7, I guess you can point the finger at them.



My point is: all the problems and workarounds that I talked about in that post were the result of me trying to crowbar modern features into a legacy browser. Now, don’t get me wrong—I’m not suggesting that IE8 or IE7 should be shut out or get a crap experience: “baseline” doesn’t mean “crap”. There’s absolutely nothing wrong with serving up a baseline experience to a legacy browser as long as your baseline experience is pretty good …and it should be.



So, please, don’t think that my post was a hands-on, practical example of how to give IE8 and IE7 users a similar experience to modern browsers. If anything, it was a cautionary tale about why trying to do that is probably a mistake.





Tagged with
browsers
development
internetexplorer
ie8
ie7
hack
standards

Published on January 11, 2013 06:21

January 10, 2013

Responsive Day Out updates

The Responsive Day Out is just seven weeks away. That’s only 49 sleeps!



Alas, Malarkey has had to drop out. Sad face. But, I’ve managed to find another talented web geek from Corby: Tom Maslen of Responsive News fame. Happy face!



Here’s another bit of news: there will be an after-party. Rejoice! Thanks to the generosity of the fine people behind Gridset—that would be Mark Boulton Design then—we’ll be heading to The Loft on Ship Street in the evening to have a few drinks and a good ol’ chinwag. That’s where Remy had the Full Frontal after-party and I thought it worked really, really well. Instead of shouting over blaring music, you could actually have a natter with people.



Many, many thanks to Gridset for making this possible.



In other news of generosity, Drew has offered to work his podcasting magic. Thank you, Drew! And Craig has very kindly offered to record and host video of the event, made possible with sponsorship from Mailchimp. Thank you, Craig! Thank you, Mailchimp!



And get this: A Book Apart are also getting behind the day and they’ll be sending on some books that I plan to give away to attendees who ask questions during the discussiony bits. Thank you, my friends apart!



I could still do with just one more sponsor: I’d really like to hire out the Small Batch coffee cart for the day, like we did at dConstruct. There will be coffee and tea available from the Dome bar anyway, but the coffee is pretty awful. Even though the Responsive Day Out is going to be a very shoestring, grassroots affair, I’d still like to maintain some standards. So if you know of a company that might be interested in earning the gratitude of a few hundred web geeks, let them know about this (it’s around a grand to caffeinate a conference’s worth of geeks for a day).



If you’ve got your ticket, great; I look forward to seeing you on March 1st. If you don’t, sorry. And before you ask, no, I’m afraid there is no waiting list. We’re not doing any refunds or transfers—if someone with a ticket can’t make it, they can simply give (or sell) their ticket to someone else. We won’t be making any lanyards so we don’t need to match up people to name badges.



So keep an eye on Twitter (especially closer to the day) in case anyone with a ticket is planning to bail.





Tagged with
responsiveconf
conference
event
brighton

Published on January 10, 2013 07:50

January 9, 2013

Dealing with IE

Laura asked a question on Twitter the other day about dealing with older versions of Internet Explorer when you’ve got your layout styles nested within media queries (that older versions of IE don’t understand):



Sass, media queries and support for (older) Internet Explorers. How do you do it? This StackOverflow question asks: stackoverflow.com/questions/1062…

— Laura Kalbag (@laurakalbag) January 3, 2013


It’s a fair question. It also raises another question: how do you define “dealing with” Internet Explorer 8 or 7?



You could justifiably argue that IE7 users should upgrade their damn browser. But that same argument doesn’t really hold for IE8 if the user is on Windows XP: IE8 is as high as they can go. Asking users to upgrade their browser is one thing. Asking them to upgrade their operating system feels different.



But this is the web and websites do not need to look the same in every browser. Is it acceptable to simply give Internet Explorer 8 the same baseline experience that any other old out-of-date browser would get? In other words, is it even a problem that older versions of Internet Explorer won’t parse media queries? If you’re building in a mobile-first way, they’ll get linearised content with baseline styles applied.



That’s the approach that Alex advocates in the Q&A after his excellent closing keynote at Fronteers. That’s what I’m doing here on adactio.com. Users of IE8 get the linearised layout and that’s just fine. One of the advantages of this approach is that you are then freed up to use all sorts of fancy CSS within your media query blocks without having to worry about older versions of IE crapping themselves.



On other sites, like Huffduffer, I make an assumption (always a dangerous thing to do) that IE7 and IE8 users are using a desktop or laptop computer and so they could get some layout styles. I outlined that technique in a post about Windows mobile media queries. Using that technique, I end up splitting my CSS into two files:








The downside to this technique is that now there are two HTTP requests for the CSS …even for users of modern browsers. The alternative is to maintain one stylesheet for modern browsers and a separate stylesheet for older versions of Internet Explorer. That sounds like a maintenance nightmare.



Pre-processors to the rescue. Using Sass or LESS you can write your CSS in separate files (e.g. one file for basic styles and another for layout styles) and then use the preprocessor to combine those files in two different ways: one with media queries (for modern browsers) and another without media queries (for older versions of Internet Explorer). Or, if you don’t want to have your media query styles all grouped together, you can use Jake’s excellent method.



When I relaunched The Session last month, I initially just gave Internet Explorer 8 and lower the linearised content—the same layout that small-screen browsers would get. For example, the navigation is situated at the bottom of each page and you get to it by clicking an internal link at the top of each page. It all worked fine and nobody complained.



But I thought that it was a bit of a shame that users of IE8 and IE7 weren’t getting the same navigation that users of other desktop browsers were getting. So I decided to use a preprocessor (Sass in this case) to spit out an extra stylesheet for IE8 and IE7.



So let’s say I’ve got .scss files like this:




base.scss
medium.scss
wide.scss


Then in my standard .scss file that’s going to generate the CSS for all browsers (called global.css), I can write:



@import "base";
@media all and (min-width: 30em) {
    @import "medium";
}
@media all and (min-width: 50em) {
    @import "wide";
}


But I can also generate a stylesheet for IE8 and IE7 (called legacy.css) that calls in those layout styles without the media query blocks:



@import "medium";
@import "wide";


IE8 and IE7 will be downloading some styles twice (all the styles within media queries) but in this particular case, that doesn’t amount to too much. Oh, and you’ll notice that I’m not even going to try to let IE6 parse those styles: it would do more harm than good.







So I did that (although I don’t really have .scss files named “medium” or “wide”—they’re actually given names like “navigation” or “columns” that more accurately describe what they do). I thought I was doing a good deed for any users of The Session who were still using Internet Explorer 8.



But then I read this. It turned out that someone was not only using IE8 on Windows XP, but they had their desktop’s resolution set to 800x600. That’s an entirely reasonable thing to do if your eyesight isn’t great. And, like I said, I can’t really ask him to upgrade his browser because that would mean upgrading the whole operating system.



Now there’s a temptation here to dismiss this particular combination of old browser + old OS + narrow resolution as an edge case. It’s probably just one person. But that one person is a prolific contributor to the site. This situation nicely highlights the problem of playing the numbers game: as a percentage, this demographic is tiny. But this isn’t a number. It’s a person. That person matters.



The root of the problem lay in my assumption that IE8 or IE7 users would be using desktop or laptop computers with a screen size of at least 1024 pixels. Serves me right for making assumptions.



So what could I do? I could remove the conditional comments and the IE-specific stylesheet and go back to just serving the linearised content. Or I could serve up just the medium-width styles to IE8 and IE7.



That’s what I ended up doing, but I also introduced a little bit of JavaScript in the conditional comments to serve up the widescreen styles if the browser width is above a certain size:
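A minimal sketch of that kind of conditional-comment script might look like the following. The breakpoint value, file path, and function name here are my own assumptions for illustration, not the actual code from The Session:

```javascript
// Sketch only: this would sit inside <!--[if lt IE 9]> conditional comments,
// after the link to the legacy (medium-width) stylesheet, so only IE8 and
// IE7 ever run it. The 800px cut-off and the file name are assumptions.
function wantsWideStyles(viewportWidth) {
  return viewportWidth >= 800;
}

if (typeof document !== 'undefined') {
  // clientWidth works in IE8/IE7, which lack window.matchMedia
  var width = document.documentElement.clientWidth;
  if (wantsWideStyles(width)) {
    var link = document.createElement('link');
    link.rel = 'stylesheet';
    link.href = '/css/wide.css'; // hypothetical stylesheet of widescreen rules
    document.getElementsByTagName('head')[0].appendChild(link);
  }
}
```

Because the script only ever runs inside downlevel conditional comments, modern browsers never execute it; they get the media-query version instead.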







It works …I guess. It’s not optimal but at least users of IE8 and IE7 are no longer just getting the small-screen styles. It’s a hack, and not a particularly clever one.



Was it worth it? Is it an improvement?



I think this is something to remember when we’re coming up with solutions to “dealing with” older versions of Internet Explorer: whether it’s a dumb solution like mine or a clever solution like Jake’s, we shouldn’t have to do this. We shouldn’t have to worry about IE7 just like we don’t have to worry about Netscape 4 or Mosaic or Lynx; we should be free to build according to the principles of progressive enhancement safe in the knowledge that older, less capable browsers won’t get all the bells and whistles, but they will be able to access our content. Instead we’re spending time coming up with hacks and polyfills to deal with one particular family of older, less capable browsers simply because of their disproportionate market share.



When we come up with clever hacks and polyfills for dealing with older versions of Internet Explorer, we shouldn’t feel pleased about it. We should feel angry.









Tagged with
browsers
css
responsive
development
internetexplorer
ie8
ie7
mediaqueries
hack
thesession

Published on January 09, 2013 10:50

January 7, 2013

New year, old year

2013 is one week old. This time of transition from one calendar year to another is the traditional time to cast an eye back over the year that has just come to a close. Who am I to stand in the way of tradition?



2012 was quite a jolly year for me. Nothing particularly earth-shattering happened, and that’s just fine with me. I made some websites. I did some travelling. It was grand.



I really enjoyed working on Matter by day and hacking away at relaunching The Session by night.



The trip to New Zealand at the start of 2012 was great. Not only was Webstock a great conference (and I’m very happy with the talk I gave, Of Time And The Network), but the subsequent road trip with Jessica, Ethan, Liz, Erin and Peter was a joyous affair.



Thinking about it, I went to lovely new places in 2012 like Newfoundland and Oslo as well as revisiting New York, Austin, Chicago, and Freiburg. And I went to CERN, which was a great experience.



But the highlight of my year was undoubtedly the first week of September right here in Brighton. The combination of Brighton SF followed by dConstruct was simply amazing. I feel very privileged to have been involved in both events. I’m still pinching myself.



Now it’s 2013, and I’m already starting to plan this year’s dConstruct: be sure to put Friday, September 6th, 2013 in your calendar. Before that, I’ve got the Responsive Day Out—more on that soon. I’ve got some speaking engagements lined up, mostly in the States in the latter half of the year at An Event Apart. Interestingly, apart from compering dConstruct and BrightonSF, I didn’t speak at all in the UK in 2012—the last talk I gave in the UK was All Our Yesterdays at Build 2011.



I’m going to continue hacking away on Huffduffer and The Session whenever I can in 2013. I find those personal projects immensely rewarding. I’m also hoping to have time to do some more writing.



Let’s see what this year brings.





Tagged with
2012
2013

Published on January 07, 2013 08:53

December 30, 2012

Canvas sparklines

I like sparklines a lot. Tufte describes a sparkline as:




…a small, intense, simple, word-sized graphic with typographic resolution.




Four years ago, I added sparklines to Huffduffer using Google’s chart API. That API comes in two flavours: a JavaScript API for client-side creation of graphs, and image charts for server-side rendering of charts as PNGs.



The image API is really useful: there’s no reliance on JavaScript, it works in every browser capable of displaying images, and it’s really flexible and customisable. Therefore it is, of course, being deprecated.



The death warrant for Google image charts sets the execution date for 2015. Time to start looking for an alternative.



I couldn’t find a direct equivalent to the functionality that Google provides, i.e. generating the images dynamically on the server. There are, however, plenty of client-side alternatives, many of them using canvas.



Most of the implementations I found were a little heavy-handed for my taste: they either required jQuery or Processing or both. I just wanted a quick little script for generating sparklines from a dataset of numbers. So I wrote my own.



I’ve put my code up on GitHub as Canvas Sparkline.



Here’s the JavaScript. You create a canvas element with the dimensions you want for the sparkline, then pass the ID of that element (along with your dataset) into the sparkline function:



sparkline ('canvasID', [12, 18, 13, 12, 11, 15, 17, 20, 15, 12, 8, 7, 9, 11], true);


(the final Boolean value just indicates whether you want a red dot at the end of the sparkline).



The script takes care of normalising the values, so it doesn’t matter how many numbers are in the dataset or whether the range of the numbers is in the tens, hundreds, thousands, or hundreds of thousands.
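As a rough illustration of what that normalisation involves, here’s a sketch of the general idea: map any dataset onto the pixel height of the canvas. This is my own sketch, not the actual Canvas Sparkline code:

```javascript
// Sketch of the normalisation step: scale an arbitrary dataset so it fits
// the pixel height of the canvas, whatever the range of the values.
function normalise(data, height) {
  var max = Math.max.apply(null, data);
  var min = Math.min.apply(null, data);
  var range = (max - min) || 1; // avoid dividing by zero for flat datasets
  return data.map(function (value) {
    // invert the result because canvas y-coordinates grow downwards
    return height - ((value - min) / range) * height;
  });
}
```

Each returned value is a y-coordinate (inverted, because canvas puts the origin at the top left); the drawing step then just walks the points with `moveTo` and `lineTo` before calling `stroke`.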



There’s plenty of room for improvement:




The colour of the sparkline is hardcoded (50% transparent black) but it could be passed in as a value.
All the values should probably be passed in as an array of options rather than individual parameters.


Feel free to fork, adapt, and improve.



The sparklines are working quite nicely, but I can’t help but feel that this isn’t the right tool for the job. Ideally, I’d like to keep using a server-side solution like Google’s image charts. But if I am going to use a client-side solution, I’m not sure that canvas is the right element. This should really be SVG: canvas is great for dynamic images and animations that need to update quite quickly, but sparklines are generally pretty static. If anyone fancies making a lightweight SVG solution for sparklines, that would be lovely.



In the meantime, you can see Canvas Sparkline in action on the member profiles at The Session, like here, here, here, or here.



Update: Ask and thou shalt receive. Check out this fantastic lightweight SVG solution from Stuart—bloody brilliant!





Tagged with
canvas
sparklines
javascript
scripting
code
github
tufte
dataviz
charts
graph

Published on December 30, 2012 11:10

December 22, 2012

Returning control

In his tap essay Fish, Robin Sloan said:




On the internet today, reading something twice is an act of love.




I’ve found a few services recently that encourage me to return to things I’ve already read.



Findings is looking quite lovely since its recent redesign. They may have screwed up with their email notification anti-pattern but they were quick to own up to the problem. I’ve been taking the time to read back through quotations I’ve posted, which in turn leads me to revisit the original pieces that the quotations were taken from.



Take, for example, this quote from Dave Winer:




We need to break out of the model where all these systems are monolithic and standalone. There’s art in each individual system, but there’s a much greater art in the union of all the systems we create.




…which leads me back to the beautifully-worded piece he wrote on Medium.



At the other end of the scale, reading this quote led me to revisit Rob’s review of Not Of This Earth on NotComing.com:




Not of This Earth is an early example of a premise conceivably determined by the proverbial writer’s room dartboard. In this case, the first two darts landed on “space” and “vampire.” There was no need to throw a third.




Although I think perhaps my favourite movie-related quotation comes from Gavin Rothery’s review of Saturn 3:




You could look at this film superficially and see it as a robot gone mental chasing Farrah Fawcett around a moonbase trying to get it on with her and killing everybody that gets in its way. Or, you could see through that into brilliance of this film and see that is in fact a story about a robot gone mental chasing Farrah Fawcett around a moonbase trying to get it on with her and killing everybody that gets in its way.




The other service that is encouraging me to revisit articles that I’ve previously read is Readlists. I’ve been using it to gather together pieces of writing that I’ve previously linked to about the Internet of Things, the infrastructure of the internet, digital preservation, or simply sci-fi short stories.



Frank mentioned Readlists when he wrote about The Anthologists:




Anthologies have the potential to finally make good on the purpose of all our automated archiving and collecting: that we would actually go back to the library, look at the stuff again, and, holy moses, do something with it. A collection that isn’t revisited might as well be a garbage heap.




I really like the fact that while Readlists is very much a tool that relies on the network, the collected content no longer requires a network connection: you can send a group of articles to your Kindle, or download them as one epub file.



I love tools like this—user style sheets, greasemonkey scripts, Readability, Instapaper, bookmarklets of all kinds—that allow the end user to exercise control over the content they want to revisit. Or, as Frank puts it:




…users gain new ways to select, sequence, recontextualize, and publish the content they consume.




I think the first technology that really brought this notion to the fore was RSS. The idea that the reader could choose not only to read your content at a later time, but also to read it in a different place—their RSS reader rather than your website—seemed quite radical. It was a bitter pill for the old guard to swallow, but once publishing RSS feeds became the norm, even the stodgiest of old media producers learned to let go of the illusion of control.
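For anyone who never lived through that shift, an RSS feed is just an XML file listing a site's recent posts; a minimal RSS 2.0 feed (with placeholder URLs) looks something like this:

```xml
<rss version="2.0">
  <channel>
    <title>Example Journal</title>
    <link>https://example.com/journal/</link>
    <description>Recent posts from an example blog</description>
    <item>
      <title>Returning control</title>
      <link>https://example.com/journal/returning-control</link>
      <description>The full content of the post can go here.</description>
    </item>
  </channel>
</rss>
```

Any reader that can parse that structure gets to decide how the content is presented, which is precisely the loss of presentational control at issue here.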



That’s why I was very surprised when Aral pushed back against RSS. I understand his reasoning for not providing a full RSS feed:




every RSS reader I tested it in displayed the articles differently — completely destroying my line widths, pull quotes, image captions, footers, and the layout of the high‐DPI images I was using.




…but that kind of illusory control just seems antithetical to the way the web works.



The heart of the issue, I think, is when Aral talks about:




the author’s moral rights over the form and presentation of their work.




I understand his point, but I also value the reader’s ideas about the form and presentation of the work they are going to be reading. The attempt to constrain and restrict the reader’s recontextualising reminds me of emails I used to read on Steve Champeon’s Webdesign-L mailing list back in the 90s that would begin:




How can I force the user to …?




or




How do I stop the user from …?




The questions usually involved attempts to stop users “getting at” images or viewing the markup source. Again, I understand where those views come from, but they just don’t fit comfortably with the spirit of the web.



And, of course, the truth was always that once something was out there on the web, users could always find a way to read it, alter it, store it, or revisit it. For Aral’s site, for example, although he refuses to provide a full RSS feed, all I have to do is use Reeder with its built-in Readability functionality to get the full content.



Breaking Things



This is an important point: attempting to exert too much control will be interpreted as damage and routed around. That’s exactly why RSS exists. That’s why Readability and Instapaper exist. That’s why Findings and Readlists exist. Heck, it’s why Huffduffer exists.



To paraphrase Princess Leia, the more you tighten your grip, the more content will slip through your fingers. Rather than trying to battle against the tide, go with the flow and embrace the reality of what Cameron Koczon calls Orbital Content and what Sara Wachter-Boettcher calls Future-Ready Content.



Both of those articles were published on A List Apart. But feel free to put them into a Readlist, or quote your favourite bits on Findings. And then, later, maybe you’ll return to them. Maybe you’ll read them twice. Maybe you’ll love them.





Tagged with
reading
publishing
content
rss
readlists
findings
readability
instapaper
control
anthologies

Published on December 22, 2012 07:51
