Jeremy Keith's Blog, page 119
January 8, 2014
Playing TAG
I was up in London yesterday to spend the day with the web developers of a Clearleft client, talking front-end architecture and strategies for implementing responsive design. ‘Twas a good day, although London always tires me out quite a bit.
On this occasion, I didn’t head straight back to Brighton. Instead I braved the subterranean challenges of the Tube to make my way across London to Google Campus, where a panel discussion was taking place. This was Meet The TAG.
TAG is the Technical Architecture Group at the W3C. It doesn’t work on any one particular spec. Instead, it’s a sort of meta-group to steer how standards get specified.
Gathered onstage yesterday evening were TAG members Anne van Kesteren, Tim Berners-Lee, Alex Russell, Yehuda Katz, and Daniel Appelquist (Henry Thompson and Sergey Konstantinov were also there, in the audience). Once we had all grabbed a (free!) beer and settled into our seats, Bruce kicked things off with an excellent question: in the intros, multiple TAG members mentioned their work as guiding emerging standards to make sure they matched the principles of the TAG …but what are those principles?
It seemed like a fairly straightforward question, but it prompted the first rabbit hole of the evening as Alex and Yehuda focussed in on the principle of “layering”—stacking technologies in a sensible way that provides the most power to web developers. It’s an important principle for sure, but it didn’t really answer Bruce’s question. I was tempted to raise my hand and reformulate Bruce’s question into three parts:
Does the Technical Architecture Group have design principles?
If so, what are they?
And are they written down somewhere?
There’s a charter and that contains a mission statement, but that’s not the same as documenting design principles. There is an extensible web manifesto—that does document design principles—which contains the signatures of many (but not all) TAG members …so does that represent the views of the TAG? I’d like to get some clarification on that.
The extensible web manifesto does a good job of explaining the thinking behind projects like web components. It’s all about approaching the design of new browser APIs in a sensible (and extensible) way.
I mentioned that the TAG were a kind of meta-standards body, and in a way, what the extensible web manifesto—and examples like web components—are proposing is a meta-approach to how browsers implement new features. Instead of browser makers (in collaboration with standards bodies) creating new elements, UI widgets and APIs, developers will create new elements and UI widgets.
When Yehuda was describing this process, he compared it with the current situation. Currently, developers have to petition standards bodies begging them to implement some new kind of widget and eventually, if you’re lucky, browsers might implement it. At this point I interrupted to ask—somewhat tongue-in-cheek—”So if we get web components, what do we need standards bodies for?” Alex had an immediate response for that: standards bodies can look at what developers are creating, find the most common patterns, and implement them as new elements and widgets.
“I see,” I said. “So browsers and standards bodies will have a kind of ‘rough consensus’ based on …running code?”
“Yes!”, said Alex, laughing. “Jeremy Keith, ladies and gentlemen!”
So the idea with web components (and more broadly, the extensible web) is that developers will be able to create new elements with associated JavaScript functionality. Currently developers are creating new widgets using nothing but JavaScript. Ideally, web components will result in more declarative solutions and reduce our current reliance on JavaScript to do everything. I’m all for that.
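To make the idea concrete, here’s a minimal sketch (the element name and behaviour are invented for illustration, and I’m using the customElements registration syntax):
class UserRating extends HTMLElement {
  // When the element lands in the document, give it its behaviour.
  connectedCallback() {
    var stars = parseInt(this.getAttribute('stars') || '0', 10);
    this.textContent = '★'.repeat(stars);
  }
}
// Register the new element so that <user-rating stars="3"></user-rating> just works in markup.
customElements.define('user-rating', UserRating);
The markup stays declarative; the behaviour travels along with the element definition.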
But one thing slightly puzzled me. The idea of everyone creating whatever new elements they want isn’t a new one. That’s the whole idea behind XML (and by extension, XHTML) and yet the very same people who hated the idea of that kind of extensibility are the ones who are most eager about web components.
Playing devil’s advocate, I asked “How come the same people who hated RDF love web components?” (although what I really meant was RDFa—a means of extending HTML).
I got two answers. The first one was from Alex. Crucially, he said, a web component comes bundled with instructions on how it works. So it’s useful. That’s a big, big difference to the Tower of Babel scenario where everyone could just make up their own names for elements, but browsers have no idea what those names mean so effectively they’re meaningless.
That was the serious answer. The other answer I got was from Tim Berners-Lee. With a twinkle in his eye and an elbow in Alex’s ribs he said, “Well, these youngsters who weren’t around when we were doing things with XML all want to do things with JSON now, which is a much cooler format because you can store number types in it. So that’s why they want to do everything in JavaScript.” Cheeky trickster!
Anyway, there was plenty of food for thought in the discussion of web components. This really is a radically new and different way of adding features to browsers. In theory, it shifts the balance of power much more to developers (who currently have to hack together everything using JavaScript). If it works, it will be A Good Thing and result in expanding HTML’s vocabulary with genuinely useful features. I fear there may be a rocky transition to this new way of thinking, and I worry about backwards compatibility, but I can’t help but admire the audacity of the plan.
The evening inevitably included a digression into the black hole of DRM. As always, the discussion got quite heated and I don’t think anybody was going to change their minds. I tried to steer things away from the ethical questions and back to the technical side of things by voicing my concerns with the security model of EME. Reading the excellent description by Henri, sentences like this should give you the heebie-jeebies:
Neither the browser nor the JavaScript program understand the bytes.
But the whole DRM discussion was, fortunately, curtailed by Anne, who was ostensibly moderating the panel. Before it was curtailed, though, Sir Tim made one final point. Because of the heat of the discussion, people were calling for us to separate the societal questions (around intellectual property and payment) from the technical ones (around encryption). But, Sir Tim pointed out, that separation isn’t really possible. Even something as simple as the hyperlink has political assumptions built in about the kind of society that would value being able to link resources together and share them around.
That’s an important point, well worth remembering: all software is political. That’s one of the reasons why I’d really appreciate an explicit documentation of design principles from the Technical Architecture Group.
Still, it was a very valuable event. Bruce has also written down his description of the evening. Many thanks to Dan and the rest of the TAG team for putting it together. I’m very glad I went along. As well as the panel discussion, it was really nice to chat to Paul and have the chance to congratulate Jeni in person on her OBE.
Alas, I couldn’t stick around too long—I had to start making the long journey back to Brighton—so I said my goodbyes and exited. I didn’t have the opportunity to speak to Tim Berners-Lee directly, which is probably just as well: I’m sure I would’ve embarrassed myself by being a complete fanboy.
Tagged with
tag
w3c
meetup
london
standards
extensible
components
browsers
drm
eme
January 2, 2014
New year
At the start of 2013, I wrote:
Let’s see what this year brings.
Well, it brought much the same as the year before. Here’s what I wrote about 2012:
Nothing particularly earth-shattering happened, and that’s just fine with me. I made some websites. I did some travelling. It was grand.
That’s also true of 2013.
The travelling was particularly nice. Work—specifically conference speaking—brought me to some beautiful locations: Porto, Dubrovnik, and Nürnberg to name just three. And not all of my travelling was work-related. Jessica and I went to the wonderful San Sebastián to celebrate her fortieth birthday. “I’ll take you to any restaurant in the world for your birthday”, I said. She chose Etxebarri. Good choice.
Conference-speaking took me back to some old favourites too: Freiburg, New York, San Francisco, Chicago, Amsterdam. I’m very lucky (and privileged) to have the opportunity to travel to interesting places, meet my peers, and get up on a stage to geek out to a captive audience. I enjoy the public speaking anyway, but it’s always an extra bonus when it takes me to a nice location. In fact, between you and me, that’s often the biggest criterion for me when it comes to speaking at an event …so if you want me to speak at an event you’re organising in some exotic location, give me a shout.
Mind you, two of my event highlights in 2013 didn’t involve any travelling at all: Responsive Day Out at the start of March, and dConstruct at the start of September, both of them right here in Brighton. I’m really, really pleased with how both of those events turned out. Everyone had a splendid time. I’m already starting to plan the next dConstruct: put Friday, September 5th 2014 in your calendar now. And who knows? …maybe there’ll even be a reprise of the Responsive Day Out in 2014.
Other highlights of the year include travelling to CERN for the line-mode browser dev days, and the inspiring Science Hack Day in San Francisco.
It was a big year for Clearleft. We moved into our lovely new building and hired quite a few new lovely people. So much change in such a short period of time was quite nerve-wracking, to be honest, but it’s all turning out just fine (touch wood).
Last year, I wrote:
I’m going to continue hacking away on Huffduffer and The Session whenever I can in 2013. I find those personal projects immensely rewarding.
Both projects continue to be immensely rewarding, although I probably neglected Huffduffer a bit; I definitely spent more time working on The Session. In 2014 I should really devote more time to adactio.com, because I also said:
I’m also hoping to have time to do some more writing.
I suppose I did a fair amount of wordsmithing here in my journal but perhaps in 2014 I might get my teeth stuck into something more bookish again. We’ll see.
So, all in all, a perfectly fine year for me personally and professionally. Like I said, it was grand.
Looking beyond my own personal sphere, 2013 was far from grand. The worst fears of even the most paranoid conspiracy theorist turned out to be nothing compared to what we found out about GCHQ and the NSA. It would be very easy to become despondent and fatalistic about the dystopian cyberpunk reality that we found ourselves living in.
Or we can look on the bright side, like Bruce Schneier, Glenn Greenwald, and Aral are doing. Schneier points out that the crypto works (it was routed around), Greenwald points to the Pinkerian positive overall trend in human history, and Aral reminds us that we have the power to build the kind of technologies we want to see in the world.
Whatever your reaction—despair, hope, or everything in between—we all owe Edward Snowden an enormous debt for his actions. I’m not sure that I would have had his courage were I in his situation. The year—perhaps the decade—belongs to Edward Snowden.
December 28, 2013
In dependence
Jason Kottke wrote an end-of-the-year piece for the Nieman Journalism Lab called The blog is dead, long live the blog:
Sometime in the past few years, the blog died. In 2014, people will finally notice.
But the second part of the article’s title is as important as the first:
Over the past 16 years, the blog format has evolved, had social grafted onto it, and mutated into Facebook, Twitter, and Pinterest and those new species have now taken over.
Jason’s piece prompted some soul-searching. John Scalzi wrote The Death of the Blog, Again, Again. Colin Devroe wrote The blog isn’t dead. It is just sleeping.:
The advantages to using Facebook should be brought out onto the web. There should be no real disadvantage to using one platform or another. In fact, there should be an advantage to using your own platform rather than those of a startup that could go out of business at any moment.
That’s a common thread amongst a number of the responses: the specific medium of the blog may certainly be waning, but the idea of independent publishing still burns brightly. Ben Werdmuller sums that feeling up, saying the blog might be dying, but the web’s about to fight back:
If you buy the idea that articles aren’t dying - and anecdotally, I know I read as much as I ever did online - then a blog is simply the delivery mechanism. It’s fine for that to die. Even welcome. In some ways, that death is due to the ease of use of the newer, siloed sites, and makes the way for new, different kinds of content consumption; innovation in delivery.
Kartik Prabhu writes about The Blogging Dead:
In any case, let’s not ‘blog’, let’s just write—on our own personal place on the Web.
In fact, Jason’s article was preceded by a lovely post from Jeffrey called simply This is a website:
Me, I regret the day I started calling what I do here “blogging.”
I know how he feels. I still call what I write here my “journal” rather than my “blog”. Call it what you like, publishing on your own website can be a very powerful move, now more than ever:
Blogging may have been a fad, a semi-comic emblem of a time, like CB Radio and disco dancing, but independent writing and publishing is not. Sharing ideas and passions on the only free medium the world has known is not a fad or joke.
One of the most overused buzzwords of today’s startup scene is the word “disruption”. Young tech upstarts like to proclaim how they’re going to “disrupt” some incumbent industry of the old world and sweep it away in a bright new networked way. But on today’s web of monolithic roach-motel silos like Facebook and Twitter, I can’t imagine a more disruptive act than choosing to publish on your own website.
It’s not a new idea. Far from it. Jeffrey launched a project called Independent’s Day in 2001:
No one is in control of this space. No one can tell you how to design it, how much to design it, when to “dial it down.” No one will hold your hand and structure it for you. No one will create the content for you.
Those words are twelve years old, but they sound pretty damn disruptive to me today.
Frank is planting his flag in his own sand with his minifesto Homesteading 2014:
I’m returning to a personal site, which flips everything on its head. Rather than teasing things apart into silos, I can fuse together different kinds of content.
So, I’m doubling down on my personal site in 2014.
He is not alone. Many of us are feeling an increasing unease, even disgust, with the sanitised, shrink-wrapped, handholding platforms that make it oh-so-easy to get your thoughts out there …on their terms …for their profit.
Of course independent publishing won’t be easy. Facebook, Pinterest, Medium, Twitter, and Tumblr are all quicker, easier, more seductive. But I take great inspiration from the work being done at Indie Web Camp. Little, simple formats and protocols—like webmentions—can have a powerful effect in aggregate. Small pieces, loosely joined.
Mind you, it’s worth remembering that not everybody wants to be independent. Tyler Fisher wrote about this on Medium—“because it is easier and hopefully more people will see it”—in a piece called I’m 22 years old and what is this.:
Fighting to get the open web back sounds great. But I don’t know what that means.
If we don’t care about how the web works, how can we understand why it is important to own our data? Why would we try if what we can do now is so easy?
Therein lies the rub. Publishing on your own website is still just too damn geeky. The siren-call of the silos is backed up with genuinely powerful, easy to use, well-designed tools. I don’t know if independent publishing can ever compete with that.
In all likelihood, the independent web will never be able to match the power and reach of the silos. But that won’t stop me (and others) from owning our own words. If nothing else, we can at least demonstrate that the independent path is an option—even if that option requires more effort.
Like Tyler Fisher, Josh Miller describes his experience with a web of silos—the only web he has ever known:
Some folks are adamant that you should own your own words when you publish online. For example, to explain why he doesn’t use services like Quora, Branch, and Google-Plus, Dave Winer says: “I’m not going to put my writing in spaces that I have no control over. I’m tired of playing the hamster.”
As someone who went through puberty with social media, it is hard to relate to this sentiment. I have only ever “leased,” from the likes of LiveJournal (middle school), Myspace (middle school), Facebook (high school), and Twitter (college).
There’s a wonderful response from Gina Trapani:
For me, publishing on a platform I have some ownership and control over is a matter of future-proofing my work. If I’m going to spend time making something I really care about on the web—even if it’s a tweet, brevity doesn’t mean it’s not meaningful—I don’t want to do it somewhere that will make it inaccessible after a certain amount of time, or somewhere that might go away, get acquired, or change unrecognizably.
This! This is why owning your own words matters.
I have a horrible feeling that many of the people publishing with the easy-to-use tools of today’s social networks don’t realise how fragile their repository is, not least because everyone keeps repeating the lie that “the internet never forgets.”
Stephanie Georgopulos wrote a beautiful piece called Blogging Ourselves to Live—published on Medium, alas—describing the power of that lie:
We were told — warned, even — that what we put on the internet would be forever; that we should think very carefully about what we commit to the digital page. And a lot of us did. We put thought into it, we put heart into, we wrote our truths. We let our real lives bleed onto the page, onto the internet, onto the blog. We were told, “Once you put this here, it will remain forever.” And we acted accordingly.
Sadly, when you uncover the deceit of that lie, it is usually through bitter experience:
Occasionally I become consumed by the idea that I can somehow find — somehow restore — all the droppings I’ve left on the internet over the last two decades. I want back the IMed conversations that caused tears to roll from my eyes, I want back the alt girl e-zines I subscribed to, wrote poetry for. I fill out AOL’s Reset Password form and send new passwords to email addresses I don’t own anymore; I use the Way Back Machine to search for the diary I kept in 1999. I am hunting for tracks of my former self so I can take a glimpse or kill it or I don’t know what. The end result is always the same, of course; these things are gone, they have been wiped away, they do not exist.
I’m going to continue to publish here on my own website, journal, blog, or whatever you want to call it. It’s still possible that I might lose everything, but I’d rather take responsibility for that than place my trust in “the cloud” (which is to say, someone else’s server). I’m owning my own words.
The problem is …I publish more than words. I publish pictures too, even the occasional video. I have the originals on my hard drive, but I’m very, very uncomfortable with the online home for my photos being in the hands of Yahoo, the same company that felt no compunction about destroying the cultural wealth of GeoCities.
Flickr has been a magnificent shining example of the web done right, but it is in an inevitable downward spiral. There are some good people still left there, but they are in the minority and I fear that they cannot fight off the douchetastic consultants of growth-hacking that have been called in to save the patient by killing it.
I’ve noticed that I’m taking fewer and fewer photos these days. I think that, subconsciously, I’ve started to feel that publishing my photos to a third-party site—even one as historically excellent as Flickr—is a fragile, hollow experience.
In 2014, I hope to figure out a straightforward way to publish my own photos to my own website …while still allowing third-party sites to have a copy. It won’t be easy—binary formats are trickier to work with than text—but I want that feeling of independence.
I hope that you too will be publishing on your own website in 2014.
Tagged with
indieweb
independent
publishing
writing
blogging
blogs
silos
December 26, 2013
That was my jam
Those lovely people at the jam factory have reprised their Jam Odyssey for 2013—this time it’s an underwater dive …through jam.
Looking back through my jams, I thought that they made for nice little snapshots of the year.
February 3rd: Meat Abstract by Therapy? …because apparently I had a dream about Therapy?
February 9th: Jubilee Street by Nick Cave And The Bad Seeds …because I had just been to the gig/rehearsal that Jessica earned us tickets to. That evening was definitely a musical highlight of the year.
February 20th: Atlanta Lie Low by Robert Forster …because I was in Atlanta for An Event Apart.
March 25th: Larsen B by British Sea Power …because I had just seen them play a gig (on their Brighton home turf) and this was the song they left us with.
April 8th: Tramp The Dirt Down by Elvis Costello …because it was either this or Ding Dong, The Witch Is Dead! (or maybe Margaret In A Guillotine). I had previously “jammed” it in August 2012, saying “Elvis Costello (Davy Spillane, Donal Lunny, and Steve Wickham) in 1989. Still waiting.”
May 7th: It’s A Shame About Ray by The Lemonheads …because Ray Harryhausen died.
July 5th: Summertime In England by Van Morrison …because it was a glorious Summer’s day and this was playing on the stereo in the coffee shop I popped into for my morning flat white.
August 13th: Spaceteam by 100 Robots …because Jim borrowed my space helmet for the video.
September 22nd: Higgs Boson Blues by Nick Cave And The Bad Seeds …because this was stuck in my head the whole time I was at hacking at CERN (most definitely a highlight of 2013).
October 7th: Hey, Manhattan by Prefab Sprout …because I was in New York.
October 15th: Pulsar by Vangelis …because I was writing about Jocelyn Bell Burnell.
October 27th: Romeo Had Juliette by Lou Reed …because Lou Reed died, and also: this song is pure poetry.
I like This Is My Jam. On the one hand, it’s a low-maintenance little snippet of what’s happening right now. On the other hand, it makes for a lovely collage over time.
Or, as Matt put it back in 2010:
We’ve all been so distracted by The Now that we’ve hardly noticed the beautiful comet tails of personal history trailing in our wake.
Without deliberate planning, we have created amazing new tools for remembering. The real-time web might just be the most elaborate and widely-adopted architecture for self-archival ever created.
Tagged with
music
2013
thisismyjam
archive
memory
December 17, 2013
Sasstraction
Emil has been playing around with CSS variables (or “custom properties” as they should more correctly be known), which have started landing in some browsers. It’s well worth a read. He does a great job of explaining the potential of this new CSS feature.
For now though, most of us will be using preprocessors like Sass to do our variabling for us. Sass was the subject of Chris’s talk at An Event Apart in San Francisco last week—an excellent event as always.
At one point, Chris briefly mentioned that he’s quite happy for variables (or constants, really) to remain in Sass and not to be part of the CSS spec. Alas, I didn’t get a chance to chat with Chris about that some more, but I wonder if his thinking aligns with mine. Because I too believe that CSS variables should remain firmly in the realm of preprocessors rather than browsers.
Hear me out…
There are a lot of really powerful programmatic concepts that we could add to CSS, all of which would certainly make it a more powerful language. But I think that power would come at an expense.
Right now, CSS is a relatively-straightforward language:
CSS isn’t voodoo, it’s a simple and straightforward language where you declare an element has a style and it happens.
That’s a somewhat-simplistic summation, and there’s definitely some complexity to certain aspects of CSS—like specificity or margin collapsing—but on the whole, it has a straightforward declarative syntax:
selector {
  property: value;
}
That’s it. I think that this simplicity is quite beautiful and surprisingly powerful.
Over at my collection of design principles, I’ve got a section on Bert Bos’s essay What is a good standard? In theory, it’s about designing standards in general, but it matches very closely to CSS in particular. Some of the watchwords are maintainability, modularity, extensibility, simplicity, and learnability. A lot of those principles are clearly connected. I think CSS does a pretty good job of balancing all of those principles, while still providing authors with quite a bit of power.
Going back to that fundamental pattern of CSS, you’ll notice that it is completely modular:
selector {
  property: value;
}
None of those pieces (selector, property, value) reference anything elsewhere in the style sheet. But as soon as you introduce variables, that modularity is snapped apart. Now you’ve got a value that refers to something defined elsewhere in the style sheet (or even in a completely different style sheet).
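For example, here’s roughly what that looks like with custom properties, using the var() syntax (the property name is invented):
:root {
  --brand-colour: #bada55;
}
selector {
  color: var(--brand-colour);
}
That second rule no longer stands on its own: to know what the value actually is, you have to go hunting for a declaration somewhere else.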
But variables aren’t the first addition to CSS that sacrifices modularity. CSS animations already do that. If you want to invoke a keyframe animation, you have to define it. The declaration and the invocation happen in separate blocks:
selector {
  animation-name: myanimation;
}
@keyframes myanimation {
  from {
    property: value;
  }
  to {
    property: value;
  }
}
I’m not sure that there’s any better way to provide powerful animations in CSS, but this feature does sacrifice modularity …and I believe that has a knock-on effect for learnability and readability.
So CSS variables (or custom properties) aren’t the first crack in the wall of the design principles behind CSS. To mix my metaphors, the slippery slope began with @keyframes (and maybe @font-face too).
But there’s no denying that having variables (or constants) in CSS would provide a lot of power. There are plenty of other programming ideas (like loops and functions) that would provide even more. I still don’t think it’s a good idea to mix up the declarative and the programmatic. That way lies XSLT—a strange hybrid beast that’s sort of a markup language and sort of a programming language.
I feel very strongly that HTML and CSS should remain learnable languages. I don’t just mean for professionals. I believe it’s really important that anybody should be able to write and style a web page.
Now does that mean that CSS must therefore remain hobbled? No, I don’t think so. Thanks to preprocessors like Sass, we can have our cake and eat it too. As professionals, we can use tools like Sass to wield the power of variables, functions (mixins) and other powerful concepts from the programming world.
Preprocessors cut the Gordian knot that’s formed from the tension in CSS between providing powerful features and remaining relatively easy to learn. That’s why I’m quite happy for variables, mixins, nesting and the like to remain firmly in the realm of Sass.
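To illustrate (with invented names), variables and mixins only ever exist in the .scss file:
$brand-colour: #bada55;
@mixin rounded($radius: 4px) {
  border-radius: $radius;
}
.callout {
  color: $brand-colour;
  @include rounded(8px);
}
…which compiles down to plain old declarative CSS before a browser ever sees it:
.callout {
  color: #bada55;
  border-radius: 8px;
}
All the power stays in the authoring tool; the language that everyone has to learn stays simple.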
Incidentally, at An Event Apart, Chris was making the case that Sass’s power comes from the fact that it’s an abstraction. I don’t think that’s necessarily true—I think the fact that it provides a layer of abstraction might be a red herring.
Chris made the case for abstractions being inherently A Good Thing. Certainly if you go far enough down the stack (to Assembly Language), that’s true. But not all abstractions are good abstractions, and I’m not just talking about Spolsky’s law of leaky abstractions.
Let’s take two different abstractions that share a common origin story:
Sass is an abstraction layer for CSS.
Haml is an abstraction layer for HTML.
If abstractions were inherently A Good Thing, then they would both provide value to some extent. But whereas Sass is a well-designed tool that allows CSS-savvy authors to write their CSS more easily, Haml is a steaming pile of poo.
Here’s the crucial difference: Sass doesn’t force you to write all your CSS in a completely new way. In fact, every .css file is automatically a valid .scss file. You are then free to use—or ignore—the features of Sass at your own pace.
Haml, on the other hand, forces you to use a completely new whitespace-significant syntax that maps on to HTML. There are no half-measures. It is an abstraction that is not only opinionated, it refuses to be reasoned with.
So I don’t think that Sass is good because it’s an abstraction; I think that Sass is good because it’s a well-designed abstraction. Crucially, it’s also easy to learn …just like CSS.
Tagged with
css
sass
design
principles
abstraction
modularity
learning
simplicity
haml
December 15, 2013
Tracking
Ajax was a really big deal six, seven, eight years ago. My second book was all about Ajax. I spoke about Ajax at conferences and gave workshops all about using Ajax and progressive enhancement.
During those workshops, I would often point out that Ajax had the potential to be abused terribly. Until the advent of Ajax, it was very clear to a user when data was being submitted to a server: you’d have to click a link or submit a form. As soon as you introduce asynchronous communication, it’s possible for the server to get information from the client even without a full-page refresh.
Imagine, for example, that you’re typing a message into a textarea. You might begin by typing, “Why, you stuck up, half-witted, scruffy-looking nerf…” before calming down and thinking better of it. Before Ajax, there was no way that what you had typed could ever reach the server. But now, it’s entirely possible to send data via Ajax with every key press.
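It doesn’t take much code either. Here’s a rough, hypothetical sketch (the endpoint and parameter name are made up):
var textarea = document.querySelector('textarea');
textarea.addEventListener('keyup', function () {
  // Quietly send whatever has been typed so far, whether or not it ever gets submitted.
  var request = new XMLHttpRequest();
  request.open('POST', '/log-draft', true);
  request.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
  request.send('draft=' + encodeURIComponent(textarea.value));
});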
It was just a thought experiment. I wasn’t actually that worried that anyone would ever do something quite so creepy.
Then I came across this article by Jennifer Golbeck in Slate all about Facebook tracking what’s entered—but then erased—within its status update form:
Unfortunately, the code that powers Facebook still knows what you typed—even if you decide not to publish it. It turns out that the things you explicitly choose not to share aren’t entirely private.
Initially I thought there must have been some mistake. I erroneously called out Jen Golbeck when I found the PDF of a paper called The Post that Wasn’t: Exploring Self-Censorship on Facebook. The methodology behind the sample group used for that paper was much more old-fashioned than using Ajax:
First, participants took part in a weeklong diary study during which they used SMS messaging to report all instances of unshared content on Facebook (i.e., content intentionally self-censored). Participants also filled out nightly surveys to further describe unshared content and any shared content they decided to post on Facebook. Next, qualified participants took part in in-lab interviews.
But the Slate article was referencing a different paper that does indeed use Ajax to track instances of deleted text:
This research was conducted at Facebook by Facebook researchers. We collected self-censorship data from a random sample of approximately 5 million English-speaking Facebook users who lived in the U.S. or U.K. over the course of 17 days (July 6-22, 2012).
So what I initially thought was a case of alarmism—conflating something as simple as a client-side character count with actual server-side monitoring—turned out to be a pretty accurate reading of the situation. I originally intended to write a scoffing post about Slate’s linkbaiting alarmism (and call it “The shocking truth behind the latest Facebook revelation”), but it turns out that my scoffing was misplaced.
That said, the article has been updated to reflect that the Ajax requests are only sending information about deleted characters—not the actual content. Still, as we learned very clearly from the NSA revelations, there’s not much practical difference between logging data and logging metadata.
The nerds among us may start firing up our developer tools to keep track of unexpected Ajax requests to the server. But what about everyone else?
This isn’t the first time that the power of JavaScript has been abused. Every browser now ships with an option to block pop-up windows. That’s because the ability to spawn new windows was so horribly misused. Maybe we’re going to see similar preference options to avoid firing Ajax requests on keypress.
It would be depressingly reductionist to conclude that any technology that can be abused will be abused. But as long as there are web developers out there who are willing to spawn pop-up windows or force persistent cookies or use Ajax to track deleted content, the depressingly reductionist conclusion looks like self-fulfilling prophecy.
Tagged with
facebook
ajax
privacy
surveillance
tracking
javascript
ethics
Defining the damn thang
Chris recently documented the results from his survey which asked:
Is it useful to distinguish between “web apps” and “web sites”?
His conclusion:
There is just nothing but questions, exemptions, and gray area.
This is something I wrote about a while back:
Like obscenity and brunch, web apps can be described but not defined.
The results of Chris’s poll are telling. The majority of people believe there is a difference between sites and apps …but nobody can agree on what it is. The comments make for interesting reading too. The more people chime in with an attempt to define exactly what a “web app” is, the more it proves the point that the term “web app” isn’t a useful word (in the sense that useful words should have an agreed-upon meaning).
Tyler Sticka makes a good point:
By this definition, web apps are just a subset of websites.
I like that. It avoids the false dichotomy that a product is either a site or an app.
But although it seems that the term “web app” can’t be defined, there are a lot of really smart people who still think it has some value.
Having spent years working on both apps & sites, I think there’s a significant difference. Hard to explain doesn’t mean non-existent.
— Cennydd Bowles (@Cennydd) December 6, 2013
I think Cennydd is right. I think the differences exist …but I also think we’re looking for those differences at the wrong scale. Rather than describing an entire product as either a website or a web app, I think it makes much more sense to distinguish between patterns.
Ok, I’ll try. An app’s primary material is behaviour. A website’s primary material is information.
— Cennydd Bowles (@Cennydd) December 6, 2013
Let’s take those two modifiers—behavioural and informational. But let’s apply them at the pattern level.
The “get stuff” sites that Jake describes will have a lot of informational patterns: how best to present a flow of text for reading, for example. Typography, contrast, whitespace; all of those attributes are important for an informational pattern.
The “do stuff” sites will probably have a lot of behavioural patterns: entering information or performing an action. Feedback, animation, speed; these are some of the possible attributes of a behavioural pattern.
But just about every product out there on the web contains a combination of both types of pattern. Like I said:
Is Wikipedia a website up until the point that I start editing an article? Are Twitter and Pinterest websites while I’m browsing through them but then flip into being web apps the moment that I post something?
Now you could make an arbitrary decision that any product with more than 50% informational patterns is a website, and any product with more than 50% behavioural patterns is a web app, but I don’t think that’s very useful.
Take a look at Brad’s collection of responsive patterns. Some of them are clearly informational (tables, images, etc.), while some of them are much more behavioural (carousels, notifications, etc.). But Brad doesn’t divide his collection into two, saying “Here are the patterns for websites” and “Here are the patterns for web apps.” That would be a dumb way to divide up his patterns, and I think it’s an equally dumb way to divide up the whole web.
What I’m getting at here is that, rather than trying to answer the question “what is a web app, anyway?”, I think it’s far more important to answer the other question I posed:
Why?
Why do you want to make that distinction? What benefit do you gain by arbitrarily dividing the entire web into two classes?
I think by making the distinction at the pattern level, that question starts to become a bit easier to answer. One possible answer is to do with the different skills involved.
For example, I know plenty of designers who are really, really good at informational patterns—they can lay out content in a beautiful, clear way. But they are less skilled when it comes to thinking through all the permutations involved in behavioural patterns—the “arrow of time” that’s part of so much interaction design. And vice-versa: a skilled interaction designer isn’t necessarily the best at old-school knowledge of type, margins, and hierarchy. But both skillsets will be required on almost every project on the web.
So I do believe there is value in distinguishing between behaviour and information …but I don’t believe there is value in trying to shoehorn entire products into just one of those categories. Making the distinction at the pattern level, though? That I can get behind.
Addendum
Incidentally, some of the respondents to Chris’s poll shared my feeling that the term “web app” was often used from a marketing perspective to make something sound more important and superior:
Perhaps it’s simply fashion. Perhaps “website” just sounds old-fashioned, and “web app” lends your product a more up-to-date, zingy feeling on par with the native apps available from the carefully-curated walled gardens of app stores.
Approaching things from the patterns perspective, I wonder if those same feelings of inferiority and superiority are driving the recent crop of behavioural patterns for informational content: parallaxy, snowfally, animation patterns are being applied on top of traditional informational patterns like hierarchy, measure, and art direction. I’m not sure that the juxtaposition is working that well. Taking the single interaction involved in long-form informational patterns (that interaction would be scrolling) and then using it as a trigger for all kinds of behavioural patterns feels …uncanny.
Tagged with
website
webapp
webthang
language
terminology
patterns
behaviour
information
December 14, 2013
Trust
My debit card is due to expire so my bank has sent me a new card to replace it. I’ve spent most of the day updating my billing details on various online services that I pay for with my card.
I’m sure I’ll forget about one or two. There’s the obvious stuff like Netflix and iTunes, but there are also the many services that I use to help keep my websites running smoothly:
hosting providers like Digital Ocean and Engine Hosting,
DNS managers like DNSimple,
email providers like Fastmail,
transactional email suppliers like Mailchimp and Postmark,
code repositories like Github,
and distributed storage providers like Amazon’s S3.
But there’s one company that will not be receiving my new debit card details: Adobe. That’s not because of any high-and-mighty concerns I might have about monopolies on the design software market—their software is, mostly, pretty darn good (‘though I’m not keen on their Mafia-style pricing policy). No, the reason why I won’t give Adobe my financial details is that they have proven that they cannot be trusted:
We also believe the attackers removed from our systems certain information relating to 2.9 million Adobe customers, including customer names, encrypted credit or debit card numbers, expiration dates, and other information relating to customer orders.
The story broke two months ago. Everyone has mostly forgotten about it, like it’s no big deal. It is a big deal. It is a very big deal indeed.
I probably won’t be able to avoid using Adobe products completely; I might have to use some of their software at work. But I’ll be damned if they’re ever getting another penny out of me.
Tagged with
adobe
software
security
trust
November 15, 2013
A map to build by
The fifth and final Build has just wrapped up in Belfast. As always, it delivered an excellent day of thought-provoking talks.
It felt like some themes emerged, not just from this year, but from the arc of the last five years. More than one speaker tapped into a feeling that I’ve had for a while that the web has changed. The web has grown up. Unfortunately, it has grown up to be kind of a dickhead.
There were many times during the day’s talks at Build that I was reminded of Anil Dash’s The Web We Lost. Both Jason and Frank pointed to the imbalance of power on the web, where the bottom line has become more important than the user. It’s a landscape dominated by The Stacks—Google, Facebook, et al.—and by fly-by-night companies who have no interest in being good web citizens, and even less interest in the data that they’re sucking from their users.
Don’t get me wrong: I’m not saying that companies shouldn’t be interested in making money—that’s what companies do. But prioritising profit above all else is not going to result in a stable society. And the web is very much part of the fabric of society now. Still, the web is young enough to have escaped the kind of regulation that “real world” companies would be subjected to. Again, don’t get me wrong: I don’t want top-down regulation. What I want is some common standards of decency amongst web companies. If the web ends up getting regulated because of repeated acts of abuse, it will be a tragedy of the commons on an unprecedented scale.
I realise that sounds very gloomy and doomy, and I don’t want to give the impression that Build was a downer—it really wasn’t. As the last ever speaker at Build, Frank ended on a note of optimism. Sure, the way we think about the web now is filled with negative connotations: it appears money-grabbing, shallow, and locked down. But that doesn’t mean that the web is inherently like that.
Harking back to Ethan’s fantastic talk at last year’s Build, Frank made the point that our map of the web makes it seem a grim place, but the territory of the web isn’t necessarily a lost cause. What we need is a better map. A map of openness, civility, and—something that’s gone missing from the web’s younger days—a touch of wildness.
I take comfort from that. I take comfort from that because we are the map makers. The worst thing that could happen would be for us to fatalistically accept the negative turn that the web has taken as inevitable, as “just the way things are.” If the web has grown up to be a dickhead, it’s because we shaped it that way, either through our own actions or inactions. But the web hasn’t finished growing. We can still shape it. We can make it less of a dickhead. At the very least, we can acknowledge that things can and should be better.
I’m not sure exactly how we go about making a better map for the web. I have a vague feeling that it involves tapping into the kind of spirit that informs places like CERN—the kind of spirit that motivated the creation of the web itself. I have a feeling that making a better map for the web doesn’t involve forming startups and taking venture capital. Neither do I think that a map for a better web will emerge from working at Google, Facebook, Twitter, or any of the current incumbents.
So where do we start? How do we begin to attempt to make a better web without getting overwhelmed by the enormity of the task?
Perhaps the answer comes from one of the other speakers at this year’s Build. In a beautifully-delivered presentation, Paul Soulellis spoke about resistance:
How do we, as an industry of creative professionals, reconcile the fact that so much of what we make is used to perpetuate the demands of a bloated marketplace? A monoculture?
He spoke about resisting the intangible nature of digital work with “thingness”, and resisting the breakneck speed of the network with slowness. Perhaps we need our own acts of resistance if we want to change the map of the web.
I don’t know what those acts of resistance are. Perhaps publishing on your own website is an act of resistance—one that’s more threatening to the big players than they’d like to admit. Perhaps engaging in civil discourse online is an act of resistance.
Like I said, I don’t know. But I really appreciate the way that this year’s Build has pushed me into asking these uncomfortable questions. Like the web, Build has grown up over the years. Unlike the web, Build turned out just fine.
Tagged with
buildconf
web
ethics
business
culture
build
conference
speaking
November 6, 2013
Icon fonts, unicode ranges, and IE8’s compatibility mode
While doing some browser testing this week, Mark came across a particularly wicked front-end problem. Something was triggering compatibility mode in Internet Explorer 8 and he couldn’t figure out what it was.
Compatibility mode was something introduced in IE8 to try not to “break the web”, as Microsoft kept putting it. Effectively it makes IE8 behave like IE7. Why would you ever want to do that? Well, if you make websites exactly the wrong way and code for a specific browser (like, say, IE7), then better, improved browsers are something to be feared and battled against. For the rest of us, better, improved browsers are something to be welcomed.
Shockingly, Microsoft originally planned to have compatibility mode enabled by default in Internet Explorer 8. It was bad enough that they were going to ship a browser with a built-in thermal exhaust port, they also contemplated bundling a proton torpedo with it too. Needless to say, right-minded people were upset at that possibility. I wrote about my concerns back in 2008.
Microsoft changed their mind about the default behaviour, but they still shipped IE8 with the compatibility mode “feature”, which Mark was very much experiencing as a bug. Something in the CSS was triggering compatibility mode, but frustratingly, there was no easy way of figuring out what was doing it. So he began removing chunks of CSS, reducing until he could focus in on the exact piece of CSS that was triggering IE8’s errant behaviour.
Finally, he found it. He was using an icon font. Now, that in itself isn’t enough to give IE8 its conniptions—an icon font is just a web font like any other. The only difference is that this font was using the private use area of the unicode range. That’s the default setting if you’re creating an icon font using the excellent icomoon service. There’s a good reason for that:
Using Latin letters is not recommended for icon fonts. Using the Private Use Area of Unicode is the best option for icon fonts. By using PUA characters, your icon font will be compatible with screen readers. But if you use Latin characters, the screen reader might read single, meaningless letters, which would be confusing.
Well, it turns out that assigning glyphs to this private use area was causing IE8 to flip into compatibility mode. Once Mark assigned the glyphs to different characters, IE8 started behaving itself.
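For reference, this is roughly what the private-use-area approach looks like in the CSS (the font name, class name, and code point here are just examples):
@font-face {
  font-family: 'icons';
  src: url('icons.woff') format('woff');
}
.icon-search:before {
  font-family: 'icons';
  /* U+E600 sits in the private use area (U+E000 to U+F8FF). */
  content: '\e600';
}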
Now, we haven’t tested to see if this is triggered by all of the 6,400 available slots in Unicode’s private use area. If someone wants to run that test (presumably using some kind of automation), ’twould be much appreciated.
Meantime, just be careful if you’re using the private use area for your icon fonts—you may just inadvertently wake the slumbering beast of compatibility mode.
Tagged with
browsers
ie8
compatibility
webfonts
icons
fonts
icomoon
standards
unicode