Tom Chatfield's Blog, page 3
December 10, 2013
Clouds, autocomplete and tweets
I’ve been experimenting with re-publishing a few of my recent columns on Medium. If you’ve arrived here in hope of reading them, simply follow (and share!) the links below.
I am the algorithm – on language, thought and ten more years of Twitter
Is autocomplete evil? – how a machine’s whispers in my ears are changing the way I think
The tyranny of the cloud – why you should think twice before you hit “upload”
Why computers will become invisible – our extraordinary intimacy with unseen technology
November 13, 2013
The meaning of Medium
I’ve been experimenting recently with the young writing space Medium. Brought to you by Twitter co-founder Ev Williams, it comes with the potted pitch of “a better place to read and write” – and it certainly delivers in terms of interface and ease of reading.
Behind the scenes, composing on Medium takes What You See Is What You Get to an elegant extreme, with as few options as sensibly possible left to the author: you can determine your text, title, subtitle, illustrations, bold, italics, quotations, links, and that’s about it. Almost every other aspect of formatting is automatic, with acres of white space in the best modern taste atop responsive design fit for any device.
All this, Williams explains, is about creating “a beautiful space for reading and writing — and little else. The words are central. They can be accompanied by images to help illustrate your point. But there are no gratuitous sidebars, plug-ins, or widgets. There is nothing to set up or customize.”
For a writer, it’s a little addictive. I’m typing this into the back end of my own website, which runs on WordPress. The interface is functional, the options and sidebars useful and not too overwhelming. I have a fairly good idea of what my post will look like once I hit “publish.” After Medium, though, it all feels a little weighty: a series of small barriers between these thoughts and their audience that, once lowered, loom disconcertingly large on their return.
In this, Medium is a bit like Twitter: a service I struggled to “get” initially, but which instilled a feeling it’s hard to shake off once experienced. This was the feeling of hardware and software getting out of the way, leaving me free to say pretty much anything that popped into my head, or to share anything I found interesting, to anyone who might be interested.
Today, I think of Twitter as a technology that asked the world a question even its creators didn’t quite know how to answer: what will you do with this novel brand of freedom and feeling? Over the last few years, the world has given its answer: we will do more than you could have dared imagine, together; and if you’re sensible, you’ll follow where we lead.
Similarly, I don’t quite “get” some aspects of Medium yet. Rather than comments, it uses nested Notes alongside particular paragraphs for collaboration and conversation. It doesn’t privilege novelty in the way that blogs and feeds do, by insisting new material tops the screen – instead it’s about discovery, themed Collections, and helping authors find an audience.
None of this has yet cohered into something I understand in the way I do the tweets snaking down another tab in my browser. But I’m not sure this matters. In fact, it might be the best thing of all: because, for all the clarity of its core experience, I don’t believe Medium’s creators have a clear idea of what it should or will look like in five years’ time either. They’re waiting for the rest of us to bring our words, and to decide together what they mean.
EDIT: A quick Twitter chat with @adrianhon following this post’s publication has confirmed one thing: we would both love to see what may end up as an extremely profitable platform find a way to pass some of that income on to its authors. Jaron Lanier’s idea of creating a new middle class based on micropayments may sound bonkers to commentators versed in digital “logic”, but something like it is a rather beautiful prospect: content valued not only in time and attention, but something that thus far only online giants have seen – remuneration.
October 7, 2013
The attention economy: what price do we pay?
I’ve written my third essay this year for Aeon magazine, exploring the idea of the attention economy – and what it may cost us to pay for content with our time, clicks, tweets and endlessly aggregated attention.
How many other things are you doing right now while you’re reading this piece? Are you also checking your email, glancing at your Twitter feed, and updating your Facebook page? What five years ago David Foster Wallace labelled ‘Total Noise’ — ‘the seething static of every particular thing and experience, and one’s total freedom of infinite choice about what to choose to attend to’ — is today just part of the texture of living on a planet that will, by next year, boast one mobile phone for each of its seven billion inhabitants. We are all amateur attention economists, hoarding and bartering our moments — or watching them slip away down the cracks of a thousand YouTube clips.
If you’re using a free online service, the adage goes, you are the product. It’s an arresting line, but one that deserves putting more precisely: it’s not you, but your behavioural data and the quantifiable facts of your engagement that are constantly blended for sale, with the aggregate of every single interaction (yours included) becoming a mechanism for ever-more-finely tuning the business of attracting and retaining users.
September 27, 2013
The main reason for IT project failures? Us.
My latest BBC column, republished here for UK readers, looks at some of the dispiritingly enduring human reasons behind IT project failures.
The UK’s National Health Service may seem like a parochial subject for this column. But with 1.7 million employees and a budget of over £100 billion, it is the world’s fifth biggest employer – beaten only by McDonald’s, Walmart, the Chinese Army, and the US Department of Defence. And this means its successes and failures tend to provide salutary lessons for institutions of all sizes.
Take the recent revelation that an abandoned attempt to upgrade its computer systems will cost over £9.8 billion – described by the Public Accounts Committee as one of the “worst and most expensive contracting fiascos” in the history of the public sector.
This won’t come as a surprise to anyone who has worked on large computing projects. Indeed, there’s something alarmingly monotonous to most litanies of tech project failure. Planning tends to be inadequate, with projected timings and budgets reflecting wishful thinking rather than a robust analysis of requirements. Communication breaks down, with side issues dominating discussions to the exclusion of core functions. And the world itself moves on, turning yesterday’s technical marvel into tomorrow’s white elephant, complete with endless administrative headaches and little scope for technical development.
Statistically, there are few fields more prone to extravagant failure. According to a 2011 study of 1,471 ICT projects by Alexander Budzier and Bent Flyvbjerg of Oxford’s Said Business School, one in every six ICT projects costs at least three times as much as initially estimated: around twenty times the rate at which projects in fields like construction go this wrong.
But if costly IT failures are a grimly unsurprising part of 21st-Century life, what’s revealing is not so much what went wrong this time as why the same mistakes continue to be repeated. Similar factors were, for example, in evidence during one of the first and most famous project management failures in computing history: the IBM 7030 Stretch supercomputer.
Begun in 1956, the project aimed to build a machine at least one hundred times more powerful than IBM’s previous system, the IBM 704. This target won a prestigious contract with the Los Alamos National Laboratory – and, in 1960, the machine’s price was set at $13.5 million, with negotiation beginning for other orders.
The only problem was that, when a working version was actually tested in 1961, it turned out to be just 30 times faster than its predecessor. Despite containing a number of innovations that would prove instrumental in the future of computing, the 7030 had dismally failed to meet its target – and IBM had failed to realise what was going on until too late. The company’s CEO announced that the price of the nine systems already ordered would be cut by almost $6 million each – below cost price – and that no further machines would be made or sold. Cheaper, nimbler competitors stepped into the gap.
Are organisations prone to a peculiar blindness around all things digital? Is there something special about information technology that invites unrealistic expectations?
I would suggest that there is – and that one reason is the disjunction between problems as a business sees them, and problems seen in terms of computer systems. Consider the health service. The idea of moving towards an entirely electronic system of patient records makes excellent sense – but bridging the gap between this pristine goal and the varied, interlocking ways in which 1.7 million employees currently work is a fiendish challenge. IBM faced a far simpler proposition, on paper: make a machine one hundred times faster than their previous best. But the transition from paper to reality entailed difficulties that didn’t even exist until new components had been built, complete with new dead ends and frustrations.
All projects face such challenges. With digital systems, though, the frame of reference is not so much the real world as an abstracted vision of what may be possible. The sky is the limit – and big talk has a good chance of winning contracts. Yet there’s an inherent divide between the real-world complexities of any situation and what’s required to get these onscreen. Computers rely on models, systems and simplifications which we have built in order to render ourselves comprehensible to them. And the great risk is that we simply don’t understand ourselves, or our situation, well enough to explain it to them.
We may think we do, of course, and propose astounding solutions to complex problems – only to discover that what we’ve “solved” looks very little like what we wanted or needed. In the case of almost every sufficiently large computing project, in fact, the very notion of solving a small number of enormous problems is an almost certain recipe for disaster, given that beneath such grandeur lurk countless conflicting requirements just waiting to be discovered.
If there is hope, it lies not in endlessly anatomizing those failures we seem fated to repeat, but in better understanding the fallibilities that push us towards them. And this means acknowledging that people often act like idiots when asked to explain themselves in terms machines can understand.
You might call it artificial stupidity: the tendency to scrawl our hopes and biases across a digital canvas without pausing to ask what reality itself will support. We, not our machines, are the problem – and any solution begins with embracing this.
Such modesty is a tough sell, especially when it’s up against polished solutionism and obfuscation – both staples of debate between managers and technicians since well before the digital era. The alternative, though, doesn’t bear thinking about: an eternity of over-promising and under-delivering. Not to mention wondering why the most powerful tools we’ve ever built only seem to offer more opportunities for looking stupid.
August 2, 2013
Blocking net porn: worse than pointless
My latest BBC Future column, reproduced here for UK readers, looked at the perversities of censorship and online pornography.
What is the most searched-for term on the web? Contrary to popular myth, it’s not “sex”, “porn”, “xxx”, or any other common search term for pornography. Instead, as a quick glance at services like Google Trends shows, terms like “Facebook” and “YouTube” comfortably beat all of the above – as does “Google” itself. Onscreen as in life, it’s sport, celebrities and global news that command the most attention.
In fact, looking at lists of the world’s most-visited websites compiled by companies like Alexa, there’s strikingly little “adult” content. Search engines, social media, news and marketplaces dominate, with the world’s top pornographic site coming in at number 34, and just six others breaking into the top one hundred. As an excellent analysis by the Ministry of Truth blog notes, “overall, adult websites account for no more than 2-3% of global Internet traffic, measured in terms of both individual visits to websites and page views.”
All of this sits slightly strangely alongside recent hysteria and headlines (and dubious maths) in Britain. If you missed it, Prime Minister David Cameron announced his intention to prevent online pornography from “corroding childhood” by making internet service providers automatically block pornographic websites. Britain is set to become a place where internet users have to “opt in” to view pornography – a moral beacon in a world increasingly alarmed by the filth pouring out of its screens.
Except, of course, it isn’t. As author and activist Cory Doctorow pointed out in the Guardian when a similar proposal surfaced last year, filtering online content either requires people to look at every page on the internet, or a piece of software algorithmically to identify and filter out “inappropriate” content. With trillions of pages online, the first option is clearly impossible – and the second certain to generate an immense amount of false positives (not to mention failing to block an equal number of undesirable sites).
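To make Doctorow’s point concrete, here is a toy sketch – a deliberately naive keyword blocklist, assumed purely for illustration and nothing like any real ISP’s or vendor’s system – showing how crude automated matching both over-blocks and under-blocks:

```python
# A toy keyword filter: the blocklist and matching rule are assumptions
# for illustration only, not a description of any real filtering system.
BLOCKED_TERMS = {"sex", "xxx", "porn"}

def naive_filter(page_text: str) -> bool:
    """Return True if simple substring matching would block this page."""
    words = page_text.lower().split()
    return any(term in word for word in words for term in BLOCKED_TERMS)

# A health-education page trips the filter (false positive)...
print(naive_filter("NHS guidance on sex education for teenagers"))    # True
# ...an explicit site that avoids the keywords does not (false negative)...
print(naive_filter("Adult entertainment: exclusive premium videos"))  # False
# ...and substring matching even catches innocent place names.
print(naive_filter("House prices in Essex and Middlesex"))            # True
```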
The result would be an opaque, piecemeal and ideologically incoherent mess. Should any site featuring nude images or videos be blocked automatically in an effort to shield the innocent? YouTube features extensive reserves of such content, as indeed do almost all image-sharing and social-media services; and that’s before you consider fiction, art and film containing material that’s explicit but not pornographic by any commonly understood measure (for instance, classical sculpture, Botticelli’s Venus, James Joyce’s Ulysses… the list is endless…). What about politically sensitive materials, controversial opinions, violence or discussions of any of the above “adult” topics? Censorship is a blunt instrument, rendered blunter still by automation – and there are few precedents to suggest that its wielding would either benefit those it’s supposed to protect, or deter the worst offenders it’s designed to suppress.
Indeed, the whole notion of an opt-in pornography register is in itself alarming. Would a list of households requesting an unfiltered internet remain secure and private – and could governments refrain from cross-referencing it with other potential indices of suspicion? How should citizens undertaking perfectly legal browsing of explicit materials feel about being listed on such a database – or about wanting to be free of arbitrary restrictions across countless sites and resources?
All of this also risks muddying the waters around the quite separate field of genuinely abusive images. Images of child abuse are unambiguously illegal across most of the world, and their creators and distributors are pursued by governments, internet service providers and corporations alike, via a mix of automated and investigative processes. Such images exist largely on peer-to-peer networks and covert forums, making any blocking service unlikely to be much help in their eradication – and possibly an unwelcome rival for resources and political attention.
None of this will be comforting news for parents and others trying to deal with one intractable problem that the internet itself poses: if a child is not under constant supervision, it is almost impossible to prevent them from accessing an almost infinite variety of immensely disturbing and inappropriate content. Indeed, it’s a kind of innocence to restrict these concerns to even the broadest definition of pornography. Social-media interactions, videos of real-world events, encounters in virtual worlds – all have the potential to be explicit, profoundly disturbing and damaging in a manner that any parent would desperately wish to prevent.
But the dream of a pristine onscreen realm – purged of all toxicity, as if that toxicity somehow originated there rather than in the world itself – is a dangerous fantasy. Not only is it unachievable; it offers false hope to those eager to believe that some safety is better than nothing, or that technology can be wiped clean by a magical meta-filter.
While it’s all very well to pour scorn on censorship, I have every sympathy for those who say that granting young people unrestricted access to the world’s most depraved outpourings demands action. It does. What it demands, though, is the same kind of preparedness that living among others has always required: the pursuit and prosecution of abusers; and the imperfect but steady effort to educate a next generation able to live within their era’s complexities. There are worse things out there than porn – and delegating your children’s safety, freedom and education to algorithms is one of them.
July 26, 2013
In conversation with Mark Cerny
Last week, I profiled the PlayStation 4’s lead system architect, Mark Cerny, for the Independent. The profile is online here, and outlines his background and role. For those interested in a little more detail, this is an edited transcript of some of the key points from our conversation when we met in London in July.
In 1982, aged 17, Mark Cerny quit university in his native California to work as a designer and programmer for the era’s most important games company, Atari. By 1984, he had created his first hit game, Marble Madness. By 1985, he had moved to Japan to work with gaming’s rising giant, Sega, where he worked on both games and the cutting edge of console design – a combination that saw him leave in the 1990s to develop for one of the world’s first CD-based consoles, the 3DO Interactive Multiplayer.
The 3DO failed to take off – but by 1994 Cerny had become one of the first non-Japanese developers to work on Sony’s new PlayStation, and a major player in Sony’s success. His games Crash Bandicoot (1996), Spyro the Dragon (1998) and their sequels sold over 30 million copies. Cerny founded his own consultancy in 1998, and has since helped produce, programme and design a gamut of key titles for three generations of PlayStations.
Perhaps his most important work of all, though, is only just coming to fruition: the PlayStation 4, the hardware on which many of Sony’s hopes for this decade rest. Since 2008, Cerny has worked as the machine’s lead system architect – a job he himself pitched to Sony Computer Entertainment’s senior management – as well as directing the development of one of its key launch titles, Knack.
Tom Chatfield: Historically, the original PlayStation came out in 1994. What’s your take on how it changed the games industry?
Mark Cerny: Oh, it had a huge effect on the games industry, but not for the reason that is usually voiced. So, historically, Trip Hawkins with the 3DO Multiplayer tried to move us from cartridges to CD-ROMs. Ken Kutaragi [with the PlayStation] succeeded, and I believe that is a tremendous part of his legacy, because back when we were making cartridge games we had to buy these cartridges, they had silicon chips in them, and it was ten or fifteen bucks just for the hardware costs in some of those cartridges.
You’re going out to retail and you have that inventory cost dragging you down: and you also have the time to build a cartridge, which in those days was three months. By moving to optical memory we suddenly had something like five times as much money to spend on the actual product development. Back in the cartridge days, typical spend was about a dollar a copy on the actual software, maybe only fifty cents. We have gone from that to closer to twenty dollars.
TC: That is interesting, because one narrative spun at the time was about CD-quality sound, fast racing games and iconic 3D titles: grown-up gaming for people with more disposable income. But you’re saying that this was really a developer revolution, led by media change?
MC: Yes.
TC: And then, with the PlayStation 2, we have eventually 150 million sales, still the best-selling console in history. Can another living room console sell over 150 million copies today?
MC: If you look at the world at large, a billion or two billion people are playing games. Consoles, a couple of hundred million. The key to selling that number of consoles will be bringing the larger game playing audience into the console world.
TC: Yet in today’s tech landscape, the console feels like something of an anomaly. For most products, upgrade cycles have accelerated to 18 months or less, but you’re building a machine with a 6 or 7 year lifespan. It’s a huge challenge technically, and I know you’ve done a lot to build something that has at least six or seven years of life in it, and that will work for developers.
MC: A lot of the strategy for PlayStation 4 came out of the experiences of the PlayStation 3, which is to say that the PlayStation 3 was challenging in many ways. And part of that was the hardware, and part of that was the development environment around it that the developers used when they created their games.
So with PlayStation 4, we wanted to be sure that we would have something much more accessible, where developers could quickly get going on their games, and really could focus more on the vision of the game than the minutiae of the hardware.
A number of the features we’ve put in the hardware are there so that developers have something to dig into in later years of the console life cycle: I think that’s probably 2016, 2017 when they’ll really start taking advantage of them, and you’ll see the benefits of that hardware in the actual games.
TC: In terms of consoles and opportunities, though, we have some people who are saying that the very idea of “owning the living room screen” is an anachronism in the age of the personal screen. What do you feel the console means, in 2013?
MC: I think consoles play a vital role. And part of what they provide is a stable target. A lot of the developers need five years to make a title, and because a console spec does not change, it allows them to engineer to that spec and ultimately bring out games that simply would never come out any other way. Why do you feel it’s an anachronism?
TC: I don’t, personally. But one argument goes that prices are very high, barriers to entry are high, and that owning personal screens simply matters more. It’s almost ideological, I think, in that some people see the living room as a dead tech paradigm: cheap and freemium and handheld and personal and mobile are the way of the present.
MC: I think we need to be very cautious. Today there are very many places that people can play games. Historically that hasn’t been true. If you go back to the early 1990s, the killer app for Game Boy was Tetris. Thirty million Tetris cartridges were sold. That is a testament to how few options you had in those days if you wanted to play a game. Well, today we can play a game just about anywhere, your PC, your tablet, your phone, your console, and all that.
That doesn’t mean that consoles don’t have a role to play, but it does mean we want to make the console an integrated experience, because we know that people have all of these devices that they own – and we want to make sure that playing the game isn’t strictly restricted to your time in front of the TV set.
So part of our approach is with companion applications. We have a companion application that lets you stay in touch with the world of PlayStation no matter where you are, and then a lot of the developers are creating very specific companion applications that can act as a second screen for the game as it is being played – but also be the game in and of itself. So you are on the road, you can check in on the status of the game world, or you can even interact with the game world via your smartphone or tablet.
TC: You’re involved very passionately in the hardware design and its principles, but you’re also building one of the big launch titles in Knack: could you tell me a little about what it means to you as a game?
MC: The hardware is very developer driven: it’s an extraordinarily broad collaboration. Knack is much more about my personal thoughts about games and consoles in general. The original idea was simply that I knew that there would be quality core games at launch, for the predominantly male core gamer, but I wanted to be sure that there would be something for the rest of the family: sons, daughters, spouses and the like.
Knack is really designed with two audiences in mind. One audience is core gamers: those people who enjoyed playing Crash Bandicoot or Sonic the Hedgehog back in the day. We are trying to speak to the nostalgia that they have for those experiences in the past – and something like Crash was actually a brutally difficult game. So when you play on the hard difficulty setting, even though the control scheme with Knack is pretty simple, it’s quite a challenging game.
The other audience is light or beginner players. There are one or two billion people playing games out there. If you have a smartphone or a tablet, pretty much, you’re playing games. But there is a bit of a gap between a child who plays tablet or smartphone games – or an adult who is just getting into Fruit Ninja or Candy Crush Saga – and the skillset that is required for a AAA console game, where the controller has 16 or so buttons on it and a AAA game uses almost all of them.
I started playing games in the era of the Atari 2600. That had one button. I had 30 years to get used to the increase in complexity of games as they evolved. If we look at the 1990s, the pattern was that children would start with handheld games consoles with much simpler control schemes, and then they would move to the home consoles.
But now it’s much more about somebody playing on their iPhone or tablet or the like, and so the idea with Knack was that that could be for this other audience, the on-ramp. It very much is a console game, it is a story-driven action adventure, but on the easy difficulty setting it is a game that pretty much anyone can play, regardless of their game-playing history.
TC: How do you strike this balance between audiences as a developer?
MC: Trying to make a game broadly accessible took us to some interesting places. One thing we looked at was children and their ability to play these AAA games, and part of the issue is not the control scheme, it’s simply the size of the controller. When you look at the size of children’s hands and the size of the controller, it’s hard for them to reach certain buttons.
And so we would have our producers’ children showing us how they were using the controller and the like. But we ultimately ended up making a – this is my prop – a giant controller [Cerny produces his famous giant controller]. Be very careful, if you push the joysticks too far they’ll snap off…
We also did a lot of play-testing with people who don’t regularly play games. We looked for people who don’t play games at all. And we couldn’t find them. That was very interesting for me. If we look for people who have never played games in their lives, which was our initial request for play-testers, they were extraordinarily hard to find. They would maybe have played games during the PlayStation One generation, maybe they played games on smartphones today, but it’s a testament to just how broad the appeal of games is that it’s very hard to find somebody who has never picked up a game.
We found people who claimed never to have played games. They would play a little bit, and they would do surprisingly well. I would say to them, hit the start button and pause the game so we can talk, and they would know where the start button was. That’s a clue that they had played games.
TC: There’s this word “gamers”, which I’m always ambivalent about, because it’s people self-defining and often being very possessive. I feel it’s a kind of immaturity in the industry: that if people were a little bit more relaxed about the labels, there might be more breadth to what’s on offer. But, as you say, some people will claim they don’t play video games, because they feel it’s not for them.
MC: If we look at the top ten sales for console, it is dominated by heavy content: Skyrim and Assassin’s Creed. Console players love this, and that’s why it’s dominating the top ten. But the console audience has also woken up to these smaller but equally compelling experiences. Walking Dead won a tremendous number of awards last year. And Journey, which is a two-and-a-half-hour game, was widely regarded as game of the year.
So it appears that, going forward to PlayStation 4, the ecosystem is a bit different. We are going to have the heavy content, but be balancing it out with these games that challenge your expectation of what a game can be, or provide a break from the larger titles, or simply show variety in how a game can be played.
TC: One bugbear of mine is that I love playing co-op games, but not just FPS ones. There’s a beautiful PS3 co-op tower defence game called PixelJunk Monsters, but for me it’s emblematic that there are so few. There are big gaps in the variety of games out there, where games aren’t being made that ought to be made.
MC: If you can quote yourself on that, that’s your piece!
TC: What for you are the areas where games aren’t being made on consoles; that you would like to see created to provide this variety?
MC: If we look at trends, there are a tremendous number of people working [today] to create games as part of a small team. Many of those have never known any other way of making games. Many of those, though, are also people who worked on AAA titles and had perhaps a very specialized role on a 100-person team, and are looking for a way to contribute more boldly and more deeply.
Due to the very accessible nature of the PS4 and more specifically to its supercharged PC architecture, it’s quite easy to bring those games over to PS4. Anecdotally, I’m hearing four weeks to convert a game from PC over to the PS4. So, what we’re going to see as a result of that is a broader variety of experiences on the platform.
When I look at that historically, if we go back all the way to the PS1, AAA development wasn’t like it is today. Typical game teams were quite small. Crash Bandicoot, which was a larger team, was seven people: so a game that sold six million units was made by seven people. It was very possible in those days to have an idea and go after it, and pursue it more through the idea of what would be fun to create than thinking so strongly about what particular part of the playing public you’re going after.
And so there were titles like PaRappa [the Rapper] and Devil Dice and Intelligent Cube, that were created by these small teams, and really found a home: each of those sold over a million units. And that was really part of the joy of PlayStation, that was very much in the PlayStation DNA in the early days.
When I was talking about a Renaissance of Gaming [at the Develop conference in Brighton], that is because I really believe that is where PlayStation 4 will be.
TC: And this can push back against the pressure simply to pour resources into developing visual assets, these polished immensely detailed titles with huge budgets and huge teams?
MC: Of course, those [older] titles were either not really about the assets at all, or they were asset-light. That was a big part of how those titles could be made for just a few hundred thousand dollars.
TC: So what is it about video games, for you, that is most unique and exciting?
MC: The simple answer is that games are interactive. I think the thread is that there are a tremendous number of ways to make a quality game and, just to hold up two, Portal and Braid.
In Braid, the rule set is different as you go through depending on what level you’re playing, and it is not explained, and you have to work it out for yourself. Portal was designed using extensive usability testing, and every one of those 50 or 100 challenges is just another brick or layer on all the previous challenges. And they’re both incredibly good games.
TC: When I think about Portal 2, its co-op mode is one of the supreme collaborative gaming experiences. I enjoy the challenge of talking about how that makes me feel, of what this excellence feels like. You can’t cheat with games. You’re either having fun or you’re not – fun is a very pure test.
MC: I guess for me, when I’m laying out levels in a game, I’m really as my target trying to create an experience that you can settle into, a groove that you can get into as a player. Many years ago I used to call this meditative. I think it’s the wrong word for it, but it’s a very smooth experience.
TC: I know that in your working method, you like to polish up one level to a high degree of completion before doing the whole thing.
MC: So that’s “Method”. As an industry, we got off track in the 1990s. There are stories of some of those game cartridges not even being playable to the end, because nobody had cared enough to make sure it was completable – they needed to be shipped on time, that was the most important thing. More commonly, we would sit around and make the games for ourselves, and never actually have a real player play it during the entire product cycle. And we had this belief that making a game was about making graphical assets and sounds, and putting them all together, and when they were all together it was done. Whereas that’s certainly something you need to do, but it also needs to be a game.
And so the two big things we focused on [to make things better] were, first, usability testing, using real players; and the idea was that you just watched them play. You could then choose to make the game Portal-like, and make sure it was this smooth experience; or intentionally make it Braid-like, where you wanted there to be that very specific challenge, with an epiphany.
And the second aspect was trying to make part of the game early on, spending maybe 25% to 30% of the budget making something that looked very polished, that was a small portion of the game, to understand whether or not it would be worthwhile to spend the time to make the whole game. Because if that wasn’t a sufficiently compelling experience, you were better off to just cut that project then and there.
So part of what I believe is that, if you have two games, probably one of them shouldn’t be made. It’s about the right ratio. If you complete every game that you start out to make, you probably aren’t making the best games that you can.
June 4, 2013
The price of living our lives onscreen
I wrote a comment piece for the Independent last week, looking at Sofia Coppola’s new movie The Bling Ring – and the culture of constant self-broadcasting it conjures. Here it is.
Sofia Coppola’s new movie, The Bling Ring, tells a true story for our times: how a glamorous young Hollywood gang stole millions of dollars by tracking celebrities’ movements online, then robbing their houses when they were out.
It’s a film loaded with images of hyper-mediated modernity – of constant texting, filming, social media sharing, and vicarious living through status updates. As Coppola commented in a recent interview, it also reflects an “almost sci-fi” view of the world, where “living does not count unless you are documenting it”.
On a planet where there will soon be as many mobile phones as people, Coppola’s comment reflects a growing unease with the intimacy of our relationships with technology. The average American teenager now sends and receives more than 3,000 text messages each month. Ever-accumulating data swirls beneath the surface of our lives. (IBM claims that 90 per cent of the world’s digital data was created within the past two years.) For many people, the first thing they touch when they wake in the morning – and the last thing they touch when they go to sleep at night – is the screen of their smartphone.
There’s much to celebrate in this, of course. Coppola’s adolescents may be drifters in the media maelstrom, but that doesn’t make them inherently different from previous generations, or mean that they deserve our pity. My own teenage years in the 1990s were a frequent agony of awkward silences and unexpressed desires: I would have given almost anything for the levelling convenience of a social life mediated on screen, or the opportunity to present myself to peers through text rather than just stammering small talk at parties.
Yet this appeal is also a central part of the problem – and I’m quietly relieved to have got through school and university before Facebook swept me up. As MIT professor Sherry Turkle puts it in her latest book, Alone Together: “Technology is seductive when what it offers meets our human vulnerabilities. And, as it turns out, we are very vulnerable indeed.” The mix of constant articulacy and control on offer on screen is a staggering opportunity – and a staggering temptation.
I’m now a man in my thirties, yet can still find it hard to drag myself away from the screen towards the messy, risky business of human interactions. Even when I’ve done so, the constant connectivity of the world in my pocket breeds a strange doubleness: one part of my mind waiting for the buzz of incoming mail, the minute endorsement of a retweet, the thrill of connection. No matter where I am, no matter what I’m doing, I need never miss out or feel myself ignored. The only thing worse than being tweeted about, to paraphrase Wilde, is not being tweeted about.
As so often, these themes are as old as civilisation – but the arena within which they’re playing out is violently new. Weightless, infinitely and instantaneously reproducible, digital data girdles the globe like nothing before it, confronting our all-too-human attention spans with endless opportunity. There’s no limit to its capacities – and social media may soon be the least of our concerns.
Witness Google’s “Glass” technology. Essentially a voice-activated computer built into a pair of glasses, complete with discreet display at the edges of your vision, it’s due for commercial release in late 2013. It also promises to be the first in a sequence of wearable technologies aimed at ever-more seamlessly integrating our daily lives with their digital media shadows. Apple’s rumoured iWatch is in a similar vein, with still more startling prospects: live monitoring of heart rate and blood pressure; altitude sensors and GPS combined to position users precisely in three-dimensional space.
Like social media and the always-on screens of our smartphones, these technologies promise an unprecedented species of control over our own lives: everything from location to social reputation made explicit, measurable, and manipulable. Little wonder that merely material reality seems insubstantial by comparison: a dataless desert within which nothing is preserved or personalised, ripe for abandonment.
While information and control are the great promises of mediation, however, they come at a price, as The Bling Ring elegantly illustrates: a constantly broadcast identity is owned not by you, but by other people.
One of the film’s victims is the heiress and professional party girl Paris Hilton – a glittering cypher for her robbers’ dreams, and for the fantasies of countless others. (Her greatest claim to fame is probably a sex tape viewed by over seven million people in two days.) Celebrities, publicists and politicians have long understood the bargain that a life like Hilton’s embodies: “you” are whatever the world says you are, and your job is to feed grist to its mill. Today, though, we’re all being thrust into the same position: not only citizens, but also the full-time narrators, curators, publicists, ambassadors and agents of our own lives.
None of these is a role we’re obliged to take on, of course. But there’s much to be gained as well as lost – and much that we desperately wish to assert. Which of us doesn’t long for some kind of status, certainty or connection, for a way of thickening our best moments into permanence, or sharing what we love?
The world is what we make of it. But it’s also, as a generation is increasingly realising, what others and posterity choose to make of us: a bargain that some may only realise they’ve struck when it’s too late.
June 1, 2013
I type, therefore I am
I’ve written another essay for the marvellous Aeon magazine, exploring a topic close to my heart: how the act of typing onto screens – and the sheer number of people now actively involved in doing this – is changing language and identity. The first few paras are below, the rest on the Aeon magazine site.
At some point in the past two million years, give or take half a million, the genus of great apes that would become modern humans crossed a unique threshold. Across unknowable reaches of time, they developed a communication system able to describe not only the world, but the inner lives of its speakers. They ascended — or fell, depending on your preferred metaphor — into language.
The vast bulk of that story is silence. Indeed, darkness and silence are the defining norms of human history. The earliest known writing probably emerged in southern Mesopotamia around 5,000 years ago but, for most of recorded history, reading and writing remained among the most elite human activities: the province of monarchs, priests and nobles who reserved for themselves the privilege of lasting words.
Mass literacy is a phenomenon of the past few centuries, and one that has reached the majority of the world’s adult population only within the past 75 years. In 1950, UNESCO estimated that 44 per cent of the people in the world aged 15 and over were illiterate; by 2012, that proportion had reduced to just 16 per cent, despite the trebling of the global population between those dates. However, while the full effects of this revolution continue to unfold, we find ourselves in the throes of another whose statistics are still more accelerated.
In the past few decades, more than six billion mobile phones and two billion internet-connected computers have come into the world. As a result of this, for the first time ever we live not only in an era of mass literacy, but also — thanks to the act of typing onto screens — in one of mass participation in written culture…
May 23, 2013
Apple: An end to skeuomorphic design?
My latest BBC Future column takes a look at Apple, the future of operating system design, and the vexed term “skeuomorphic design.”
Why do most smartphones make a clicking noise, like a camera shutter closing, when you take a picture with them? Why do the virtual pages of a book on a tablet appear to turn as you swipe across the screen?
The answer is skeuomorphic design, from the Greek words for a tool (skeuos) and shape (morph). It means designing a tool in a new medium that incorporates some of the features of its antecedents. These no longer perform any necessary function but – like the unfurling of virtual paper across a digital screen – forge an intuitive link with the past, not to mention being (hopefully) attractive in their own right.
Though it sounds obscure, skeuomorphism is everywhere around us – from “retro” detailing on clothes to electric kettles shaped like their stove-top ancestors. It’s also a topic of much hand-wringing and angst in the tech world, thanks to Apple CEO Tim Cook’s decision to shake up the design principles of his company’s iOS mobile operating system – one of the world’s touchstones for digital appearances.
The latest incarnation of iOS – version 7 – is likely to be previewed in June ahead of a September release. And its putative appearance is feeding a frenzy of speculation thanks to Cook’s decision in October last year to put hardware supremo Jonathan Ive – the designer responsible for iconic minimalist designs from the iMac and iPod to the iPad – in charge not only of the physical product, but also the look and feel of its software.
Ive replaces perhaps the world’s most influential exponent of skeuomorphic software, Scott Forstall, whose work at Apple included creating iOS as we know it – complete with a compass app that looks like a handheld orienteering compass, a notes app mimicking yellow sticky paper, a calculator app designed like an old-fashioned accountant’s pocket calculator, a game centre themed around wood and green baize, and analogue dials on its clocks.
Apple’s skeuomorphism has, over the last few years, divided the opinions of designers, to say the least. For author and tech design consultant Adam Greenfield, it’s inexplicable that the company has for so long saddled its exquisite devices with “the most awful and mawkish and flat-out tacky visual cues”; while software developer James Higgs has bluntly described it as “horrific, dishonest and childish crap.”
For more sympathetic critics like user interface designer Sacha Greif, meanwhile, the decision to launch the iPhone with such a textured, “realistic” interface was a sensible move given just how novel the device was in 2007. “Nobody,” he argues, “had seen such visual richness in an operating system’s user interface before (let alone on a phone)… Realism was a way to link the future with the past, and make people feel at ease with their new device.”
This ease has been important to the company’s success. Outside the rank of designers, few ordinary users are likely to give the subtle stylistic influences of their screens much thought. Yet these are crucial psychological components within a weightless, immaterial medium.
Unlike physical materials and the traditions surrounding them, digital pixels have no inherent aesthetic or “feel”. Everything onscreen must be fabricated from scratch – and, for many early users of Apple products, the textured tones of a skeuomorphic interface offered a democratic counterpoint to the elitism of their seamless exteriors.
Today, though, even the fanboys agree that some species of revamp is in order – and that Ive’s fondness for modernist minimalism is in line with trends elsewhere. Rather than fake wood and leather, a visual style has emerged in the last few years that is neatly embodied by Microsoft and its technicolour approach to Windows’s latest incarnation.
Here, bright square and rectangular panels match a crisp, two-dimensional aesthetic, with an emphasis on clarity and clean blocks of colour. It’s a deliberately “flat” look, embracing the pinpoint resolutions of modern screens rather than softer-edged illusions of weight and depth – and it was heralded by some on its release as “incredibly innovative”.
The only problem is that many users don’t especially seem to enjoy it – a backlash that has led to embarrassing recent reports of Microsoft preparing to restore some aspects of its older operating systems.
If Apple heads down this path, it will be seeking its own distinctive evolution of Ive’s design philosophy – a philosophy explicitly indebted to the great German designer Dieter Rams and his dictums, the most famous of which states that “good design is as little design as possible.”
Some change is likely to be welcomed. But – as Microsoft’s experience shows – any reinvention of a widely-used standard breeds a particular gamut of hazards, especially within the open and potentially unanchored spaces of an electronic medium.
All digital design is to some extent a game of metaphor and illusion. Yet, increasingly, some of the objects being gestured towards are vanishing from users’ remembered experience. Will the youngest generation of iPad users ever physically have handled analogue dials, desktop calendars or yellowed paper notepads? Will many of them, soon, even have turned the pages of a physical book?
These hollowed out metaphors haunt digital design – together with the fear that imitation and repetition risk shackling the present to an increasingly irrelevant past. Successful simplicity, as Ive and Rams have each shown, is about capturing the essence of an experience via the painstaking elimination of anything redundant. How far, though, are skeuomorphism’s visual echoes and references themselves essential?
Design can never go entirely without mimicry; not least because, if you’re not speaking some kind of common visual language, you cannot make yourself understood. Ive’s greatest triumphs at Apple pay explicit tribute to Rams’s work at Braun in the 1960s – and it seems unlikely that someone with such a deep sensitivity to its history will abandon onscreen dialogue with the manufactured world.
This dialogue is likely to be as much with Apple itself as its antecedents – and to draw deeply on its own aesthetic of industrial design. In the end, though, skeuomorphism is not about wood and leather any more than “flat” design is about colours or rectangles. Each aims at an experience that is its own justification – and that, if the experience should somehow fall short, cannot be saved by all the justifications in the world. As Ive himself put it in a 2012 interview, “we don’t really talk about design, we talk about developing ideas and making products.” Once you have a sufficiently complete understanding of what you wish to achieve, the rest is detail.


