Kindle Notes & Highlights
by James Bridle
Read between October 16 – December 30, 2018
Moore’s law is not merely technical or economic; it is libidinal.
Another effect, according to many in the industry, was the end of a culture of craft, care, and efficiency in software itself. While early software developers had to make a virtue of scarce resources, endlessly optimising their code and coming up with ever more elegant and economical solutions to complex calculation problems, the rapid advancement of raw computing power meant that programmers only had to wait eighteen months for a machine twice as powerful to come along. Why be parsimonious with one’s resources when biblical plenty is available in the next sales cycle? In time, the founder of
...more
For those with the most money – the drug companies – the impulse to feed these problems into the latest and fastest technologies is irresistible. As one report puts it: ‘Automation, systematisation and process measurement have worked in other industries. Why let a team of chemists and biologists go on a trial-and-error-based search of indeterminable duration, when one could quickly and efficiently screen millions of leads against a genomics-derived target, and then simply repeat the same industrial process for the next target, and the next?’
But it’s in the laboratory that the limitations of this approach are becoming starkly clear. High-throughput screening has accelerated Eroom’s law, rather than abated it. And some are starting to suspect that messy human empiricism may actually be more, not less, efficient than computation. Eroom’s law might even be the codification – with data – of something many leading scientists have been saying for some time.
The mechanism that is being enacted when the Optometrist goes to work is particularly interesting to those attempting to reconcile the opaque operation of complex computational problem solving with human needs and desires. On the one hand is a problem so fiendishly complicated that the human mind cannot fully grasp it, but one that a computer can ingest and operate upon. On the other is the necessity of bringing a human awareness of ambiguity, unpredictability, and apparent paradox to bear on the problem – an awareness that is itself paradoxical, because it all too often exceeds our ability to
...more
Admitting to the indescribable is one facet of a new dark age: an admission that the human mind has limits to what it can conceptualise. But not all problems in the sciences can be overcome even by the application of computation, however sympathetic. As more complex solutions are brought to bear on ever more complex problems, we risk even greater systemic problems being overlooked. Just as the accelerating progress of Moore’s law locked computation into a particular pathway, necessitating certain architecture and hardware, so the choice of these tools fundamentally shapes the way we can
...more
There is also a deeper cognitive pressure at work: the belief in the singular, inviolable answer, produced, with or without human intervention, by the alleged neutrality of the machine. As science becomes increasingly technologised, so does every domain of human thought and action, gradually revealing the extent of our unknowing, even as it reveals new possibilities.
Digitisation meant that trades within, as well as between, stock exchanges could happen faster and faster. As the actual trading passed into the hands of machines, it became possible to react almost instantaneously to any price change or new offer. But being able to react meant both understanding what was happening, and being able to buy a place at the table. Thus, as in everything else, digitisation made the markets both more opaque to noninitiates, and radically visible to those in the know.
Seen within the turmoil of the markets, it was rarely clear who actually operated these algorithms; and it is no more so today, because their primary tactic is stealth: masking their intentions and their origins while capturing a vast portion of all traded value. The result was an arms race: whoever could build the fastest software, reduce the latency of their connection to the exchanges, and best hide their true objective, made bank.
Lewis details a world in which the market has become a class system – a playground for those with the vast resources needed to access it, completely invisible to those who do not:

‘The haves paid for nanoseconds; the have-nots had no idea that a nanosecond had value. The haves enjoyed a perfect view of the market; the have-nots never saw the market at all. What had once been the world’s most public, most democratic, financial market had become, in spirit, something more like a private viewing of a stolen work of art.’15
This is an inversion of the commonly held idea of progress, wherein societal development leads inexorably towards greater equality. Since the 1950s, economists have believed that in advanced economies, economic growth reduces the income disparity between rich and poor. Known as the Kuznets curve, after its Nobel Prize–winning inventor, this doctrine claims that economic inequality first increases as societies industrialise, but then decreases as mass education levels the playing field and results in wider political participation. And so it played out – at least in the West – for much of the
...more
As the capabilities of machines increase, more and more professions are under attack, with artificial intelligence augmenting the process. The internet itself helps shape this path to inequality, as network effects and the global availability of services produce a winner-takes-all marketplace, from social networks and search engines to grocery stores and taxi companies. The complaint of the Right against communism – that we’d all have to buy our goods from a single state supplier – has been supplanted by the necessity of buying everything from Amazon. And one of the keys to this augmented
...more
Reducing workers to meat algorithms, useful only for their ability to move and follow orders, makes them easier to hire, fire, and abuse. Workers who go where their wrist-mounted terminal tells them to don’t even need to understand the local language, and they don’t need an education. Both of these factors, together with the atomisation produced by technological augmentation, also prevent effective organisation.
(This hasn’t stopped Uber, for example, from requiring that its drivers listen to a set number of anti-union podcasts every week, all controlled by their app, to drive the message home.)
The result was that huge amounts of goods were effectively stored on trucks, ready to go at any time, and as close to the factories as possible. The car companies had simply passed the costs of storage and stock control back to their suppliers. In addition, whole new towns and service areas sprang up in the hinterlands of the factories to feed and water the waiting truckers, fundamentally altering the geographies of manufacturing towns. Companies are deploying these lessons, and their effects, at the level of individuals, passing costs onto their employees and demanding that they submit their
...more
Whatever one might think of the morals of executives at Uber, Amazon, and many, many companies like them, few set out to actively create such conditions for their workers. Nor is this a simple return to the robber barons and industrial tyrants of the nineteenth century. To the capitalist ideology of maximum profit has been added the possibilities of technological opacity, with which naked greed can be clothed in the inhuman logic of the machine.
Technology extends power and understanding; but when applied unevenly it also concentrates power and understanding. The history of automation and computational knowledge, from cotton mills to microprocessors, is not merely one of upskilled machines slowly taking the place of human workers. It is also a story of the concentration of power in fewer hands, and the concentration of understanding in fewer heads. The price of this wider loss of power and understanding is, ultimately, death.
In London in 2016, workers for UberEats, Uber’s food delivery service, succeeded in challenging their own employment conditions by deploying the logic of the app itself. In the face of new contracts that lowered wages and increased hours, many drivers wanted to fight back, but their hours and working practices – late nights and distributed routes – prevented them from organising effectively. A small group communicated in online forums in order to arrange a protest at the company’s office, but they knew they needed to gather more colleagues in order to get their message across. So, on the day
...more
EPA testers, Amazon employees, Uber drivers, their customers, the people on the polluted streets: they are all the have-nots of the technologically augmented market, in that they never see the market at all. But it’s increasingly apparent that nobody at all sees what’s actually going on. Something deeply weird is occurring within the massively accelerated, utterly opaque markets of contemporary capital. While high-frequency traders deploy ever-faster algorithms to skim off multibillion-point differences, the dark pools are breeding even darker surprises.
When one haywire algorithm started placing and cancelling orders that ate up 4 per cent of all traffic in US stocks in October 2012, one commentator was moved to remark wryly that ‘the motive of the algorithm is still unclear’.
At 1:07 p.m. on April 23, 2013, the official AP Twitter account sent a tweet to its 2 million followers: ‘Breaking: Two Explosions in the White House and Barack Obama is injured.’ Other AP accounts, as well as journalists, quickly flooded the site with claims that the message was false; others pointed out inconsistencies with the organisation’s house style. The message was the result of a hack, and the action was later claimed by the Syrian Electronic Army, a group of hackers affiliated with Syrian President Bashar al-Assad and responsible for many website attacks as well as celebrity Twitter hacks.
The algorithms following breaking news stories had no such discernment, however. At 1:08 p.m., the Dow Jones, victim of the first flash crash in 2010, went into a nosedive. Before most human viewers had even seen the tweet, the index had fallen 150 points in under two minutes, before bouncing back to its earlier value. In that time, it erased $136 billion in equity market value.32 While some commentators dismissed the event as ineffective or even juvenile, others pointed to the potential for new kinds of terrorism, disrupting markets through the manipulation of algorithmic processes.
In Hollywood, studios run their scripts through the neural networks of a company called Epagogix, a system trained on the unstated preferences of millions of moviegoers developed over decades in order to predict which lines will push the right – meaning the most lucrative – emotional buttons.43 Their algorithmic engines are enhanced with data from Netflix, Hulu, YouTube and others, whose access to the minute-by-minute preferences of millions of video watchers, combined with an obsessive focus on the acquisition and segmentation of data, provides them with a level of cognitive insight undreamed
...more
Game developers enter endless cycles of updates and in-app purchases directed by A/B testing interfaces and real-time monitoring of players’ behaviours until they have such a fine-grained grasp of dopamine-producing neural pathways that teenagers die of exhaustion in front of their computers, unable to tear themselves away.44 Entire cultural industries become feedback loops for an increasingly dominant narrative of fear and violence.
Or perhaps the flash crash in reality looks exactly like everything we are experiencing right now: rising economic inequality, the breakdown of the nation-state and the militarisation of borders, totalising global surveillance and the curtailment of individual freedoms, the triumph of transnational corporations and neurocognitive capitalism, the rise of far-right groups and nativist ideologies, and the utter degradation of the natural environment.
None of these are the direct result of novel technologies, but all of them are the product of a general inability to perceive the wider, networked effects of individual and corporate actions accelerated by opaque, technologically augmented complexity.
But such a position seems to ignore the fact that the complexity of contemporary technologies is itself a driver of inequality, and that the logic that drives technological deployment might be tainted at the source. It concentrates power into the hands of an ever-smaller number of people who grasp and control these technologies, while failing to acknowledge the fundamental problem with computational knowledge: its reliance on a Promethean extraction of information from the world in order to smelt the one true solution, the answer to rule them all. The result of this wholesale investment in
...more
This cautionary tale, which has been told over and over again in the academic literature on machine learning,1 is probably apocryphal, but it illustrates an important issue when dealing with artificial intelligence and machine learning: What can we know about what a machine knows? The story of the tanks encodes a fundamental realisation, and one of increasing importance: whatever artificial intelligence might come to be, it will be fundamentally different from us, and ultimately inscrutable to us. Despite increasingly sophisticated systems of both computation and visualisation, we are no closer today
...more
One of the more surprising advocates of early connectionism was Friedrich Hayek, best known today as the father of neoliberalism. Forgotten for many years, but making a recent comeback among Austrian-inclined neuroscientists, Hayek wrote The Sensory Order: An Inquiry into the Foundations of Theoretical Psychology in 1952, based on ideas he’d formulated in the 1920s. In it, he outlines his belief in a fundamental separation between the sensory world of the mind and the ‘natural’, external world. The former is unknowable, unique to each individual, and thus the task of science – and economics –
...more
It’s not hard to see a parallel between the neoliberal ordering of the world – where an impartial and dispassionate market directs the action independent of human biases – and Hayek’s commitment to a connectionist model of the brain. As later commentators have noted, in Hayek’s model of the mind, ‘knowledge is dispersed and distributed in the cerebral cortex much as it is in the marketplace among individuals’.3 Hayek’s argument for connectionism is an individualist, neoliberal one, and corresponds directly with his famous assertion in The Road to Serfdom (1944) that all forms of collectivism
...more
Today, the connectionist model of artificial intelligence reigns supreme again, and its primary proponents are those who, like Hayek, believe that there is a natural order to the world that emerges spontaneously when human bias is absent in our knowledge production. Once again, we see the same claims being made about neural networks as were made by their cheerlea...
This highlight has been truncated due to consecutive passage length restrictions.
What was left untouched in the original paper was the assumption that any such system could ever be free of encoded, embedded bias. At the outset of their study, the authors write:

Unlike a human examiner/judge, a computer vision algorithm or classifier has absolutely no subjective baggages, having no emotions, no biases whatsoever due to past experience, race, religion, political doctrine, gender, age, etc., no mental fatigue, no preconditioning of a bad sleep or meal. The automated inference on criminality eliminates the variable of meta-accuracy (the competence of the human judge/examiner)
...more
Technology does not emerge from a vacuum. Rather, it is the reification of a particular set of beliefs and desires: the congruent, if unconscious, dispositions of its creators. In any moment it is assembled from a toolkit of ideas and fantasies developed over generations, through evolution and culture, pedagogy and debate, endlessly entangled and enfolded. The very idea of criminality itself is a legacy of nineteenth-century moral philosophy, while the neural networks used to ‘infer it’ are, as we’ve seen, the product of a specific worldview – the apparent separation of the mind and the world,
...more
But the technology of the Nikon Coolpix and the HP Pavilion masks a more modern, and more insidious, racism: it’s not that their designers set out to create a racist machine, or that it was ever employed for racial profiling; rather, it seems likely that these machines reveal the systemic inequalities still present within today’s technological workforce, where those developing and testing the systems are still predominantly white. (As of 2009, Hewlett-Packard’s American workforce was 6.74 per cent black.)16 It also reveals, as never before, the historic prejudices deeply encoded in our data
...more
We will not solve the problems of the present with the tools of the past. As the artist and critical geographer Trevor Paglen has pointed out, the rise of artificial intelligence amplifies these concerns, because of its utter reliance on historical information as training data: ‘The past is a very racist place. And we only have data from the past to train Artificial Intelligence.’17
Walter Benjamin, writing in 1940, phrased the problem even more fiercely: ‘There is no document of civilisation which is not at the same time a document of barbarism.’18 To train these nascent intelligences on the remnants of prior knowledge is thus to encode such barbarism into our future.
Rather than trying to understand how languages actually worked, the system imbibed vast corpora of existing translations: parallel texts with the same content in different languages. It was the linguistic equivalent of Chris Anderson’s ‘end of theory’; pioneered by IBM in the 1990s, statistical language inference did away with domain knowledge in favour of huge quantities of raw data. Frederick Jelinek, the researcher who led IBM’s language efforts, famously stated that ‘every time I fire a linguist, the performance of the speech recogniser goes up’.21 The role of statistical inference was to
...more
The map is thus multidimensional, extending in more directions than the human mind can hold. As one Google engineer commented, when pursued by a journalist for an image of such a system, ‘I do not generally like trying to visualise thousand-dimensional vectors in three-dimensional space.’25 This is the unseeable space in which machine learning makes its meaning.
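The idea the engineer gestures at can be made concrete with a toy sketch. This is not Google’s system, and the vectors below are random stand-ins rather than learned embeddings; the point is only that meaning in such a space is compared numerically, by angle between vectors, with no need to visualise a thousand dimensions at all:

```python
# Illustrative sketch (hypothetical vectors, not a real trained model):
# word meanings as points in a 1,000-dimensional space, compared by
# cosine similarity rather than by eye.
import math
import random

random.seed(0)
DIMS = 1000  # far beyond anything we can picture directly

def random_vector(dims=DIMS):
    """Stand-in for a learned word embedding."""
    return [random.gauss(0, 1) for _ in range(dims)]

def cosine(a, b):
    """Cosine of the angle between two vectors: 1.0 = same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

cat = random_vector()
# A 'related' word: mostly the same direction, plus a little noise.
kitten = [x + random.gauss(0, 0.3) for x in cat]
# An unrelated word: an independent random direction.
tractor = random_vector()

print(cosine(cat, kitten))   # close to 1.0: similar 'meanings'
print(cosine(cat, tractor))  # close to 0.0: unrelated
```

In a space this large, two random directions are almost always nearly orthogonal, which is why similarity scores carry signal: high cosine values do not happen by accident.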
Meanwhile, when not being forced to visualise their dreams for our illumination, the machines progress further into their own imaginary space, to places we cannot enter. Walter Benjamin’s greatest wish, in The Task of the Translator, was that the process of transmission between languages would invoke a ‘pure language’ – an amalgam of all the languages in the world. It is this aggregate language that is the medium in which the translator should work, because what it reveals is not the meaning but the original’s manner of thinking.
Following the activation of Google Translate’s neural network in 2016, researchers realised that the system was capable of translating not merely between languages, but across them; that is, it could translate directly between two languages it had never seen explicitly compared. For example, a network trained on Japanese–English and English–Korean examples is capable of generating Japanese–Korean translations without ever passing through English.33 This is called ‘zero-shot’ translation, and what it implies is the existence of an ‘interlingual’ representation: an internal metalanguage composed
...more
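The logic of zero-shot translation can be caricatured in a few lines. This is a drastic simplification of the neural system described above: here the shared ‘interlingual’ representation is reduced to a plain dictionary key, and the word pairs are invented for illustration. What it preserves is the structural point: if both training directions map through one common internal representation, a Japanese–Korean translation falls out by composition, even though no Japanese–Korean example was ever seen:

```python
# Toy illustration of zero-shot composition (hypothetical data, not the
# actual learned representation): two mappings that share an internal
# 'interlingua' can be chained to translate between languages that were
# never directly paired in training.

# Learned from Japanese–English pairs: Japanese word -> internal concept.
ja_to_concept = {"猫": "CAT", "水": "WATER"}

# Learned from English–Korean pairs: internal concept -> Korean word.
concept_to_ko = {"CAT": "고양이", "WATER": "물"}

def zero_shot_ja_to_ko(word):
    """Compose the two mappings through the shared representation."""
    return concept_to_ko[ja_to_concept[word]]

print(zero_shot_ja_to_ko("猫"))  # 고양이 – no direct Japanese–Korean data used
```

In the real system the shared representation is a continuous vector space rather than a symbol table, which is precisely why its contents are so hard to inspect.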
David Levy, a Scottish chess champion who played many exhibition games against machines in the 1970s and ’80s, developed an ‘anti-computer’ style of restricted play that he described as ‘doing nothing but doing it well’. His play was so conservative that his computer opponents were unable to discern a long-term plan until Levy’s position was so strong that he was unbeatable. Likewise, Boris Alterman, an Israeli grandmaster, developed a strategy in matches against machines in the ’90s and early ’00s that became known as the ‘Alterman Wall’: he would bide his time behind a row of pawns, knowing
...more
Acknowledging the reality of nonhuman intelligence has deep implications for how we act in the world and requires clear thinking about our own behaviours, opportunities, and limitations. While machine intelligence is rapidly outstripping human performance in many disciplines, it is not the only way of thinking, and it is in many fields catastrophically destructive. Any strategy other than mindful, thoughtful cooperation is a form of disengagement: a retreat that cannot hold. We cannot reject contemporary technology any more than we can ultimately and utterly reject our neighbours in society
...more