Kindle Notes & Highlights
by Ed Finn
Read between March 12 and March 17, 2018
When enacted, symbolic logic can effect procedural alterations to reality.
Algorithms enact theoretical ideas in pragmatic instructions, always leaving a gap between the two in the details of implementation.
Tricksters and rebels, hackers performed feats of technological prowess akin to magic (and often depicted as quasi-magical in films like Hackers, Sneakers, and The Matrix, where the source code becomes what media critic Wendy Hui Kyong Chun calls “sourcery”).
We are not just getting logistically closer to that space of computation; we are now intimate with it, sharing our most personal memories and accepting algorithmic guidance on matters ranging from love to real estate.
Human languages seem to acquire different color words in a predictable sequence, for example, and the existence of the word makes possible, for cultural purposes, the reality of the color: without the word for “green,” that band of the spectrum disappears into the neighboring concept of “blue.”
The corollary to this distinguishing power of language is that some incantations cannot be unheard: they permanently alter us when we interpret and understand them.
The humanities and the methodologies of critical reading—algorithmic reading—are vital to contending effectively with the ambiguity and complexity at play in the awkward intersection of computation and culture.
Indeed the most prevalent set of metaphors seems to be that of code as structure: platforms, architectures, objects, portals, gateways. This serves both to depersonify software, diluting the notion of software agency (buildings are passive; it’s the architects, engineers, and users who act), and to reify code as an objective construct, like a building, that exists in the world.
Just as the poorly paid factory workers who produce our high-tech gadgets are obscured behind the sleek design and marketing of brushed-metal objects that seem to manifest directly from some kind of machine utopia, untouched by human hands, so do we, the eager audience of that utopia, accept the results of software algorithms unquestioningly as the magical products of computation.
The bas-relief work, statues, and inscriptions of great European cathedrals are microcosms of Christianity, recapitulating the Gospel and other key biblical narratives as well as the histories of their own creation as enduring and complete statements of faith. Contemporary computational systems perform the same role of presenting a unified vision of the world through clean interfaces and carefully curated data—everything you might want to know, now available as an app. Computation offers a pathway for consilience, or the unification of all fields of knowledge into a single tree: an ontology of …
The algorithm offers us salvation, but only after we accept its terms of service.
A seemingly complete and consistent expression of a system of knowledge offers no seams, no points of access that suggest there might be an outside or alternative to the structure.
Drawing on the historical figure of the automaton, a remarkable collection of Mechanical Turk-powered poetry titled Of the Subcontract …
By defining the unit of exchange through computational cycles, Bitcoin fundamentally shifts the faith-based community of currency from a materialist to an algorithmic value system.
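The “computational cycles” here refers to proof of work. A toy sketch of the idea in Python; the leading-zeros difficulty rule and all names are simplifications of mine, not Bitcoin’s actual target encoding:

import hashlib

# Toy proof of work (illustrative only). Value is denominated in expected
# hash computations: each extra required leading zero multiplies the
# average search effort by 16, so scarcity is backed by spent CPU cycles.
def proof_of_work(block_data, difficulty=4):
    nonce = 0
    target = "0" * difficulty
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1

nonce, digest = proof_of_work("example block")
print(nonce, digest)  # expensive to produce, cheap for anyone to verify

The asymmetry in that last comment is the algorithmic value system in miniature: the community’s faith attaches to verifiable expended computation rather than to a material substrate.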
We have come not just to use but to trust computational systems that tell us where to go, whom to date, and what to think about (to name just a few examples).
But their rhetoric is still transcendent and emancipatory, striking many of the same techno-utopian notes as the mythos of code as magic when they equate computation with transformational justice and freedom.
Computation comes to have a kind of presence in the world, becoming a “thing” that both obscures and highlights particular forms of what Wendy Hui Kyong Chun calls “programmability,” a notion we will return to in the guise of computationalism below.
he calls the algorithm a “method for solving a problem” in his widely circulated course materials.9 This is what I term the pragmatist’s definition: an engineer’s notion of algorithms geared toward defining problems and solutions. The pragmatist’s definition grounds its truth claim in utility: algorithms are fit for a purpose, illuminating pathways between problems and solutions.
The pragmatist’s definition achieves clarity by constructing an edifice (a cathedral) of tacit knowledge, much of it layered in systems of abstraction like the traveling salesman problem. At a certain level of cultural success, these systems start to create their own realities as well: various players in the system begin to alter their behavior in ways that short-circuit the system’s assumptions.
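As a gloss on the tacit knowledge being layered here: the traveling salesman problem asks for the shortest tour that visits every city exactly once and returns home. A brute-force sketch in Python, with hypothetical coordinates (exact but factorial-time, which is why practical systems rely on layered approximations):

from itertools import permutations
from math import dist

# Hypothetical city coordinates for illustration.
cities = {"A": (0, 0), "B": (1, 5), "C": (4, 3), "D": (6, 1)}

def tour_length(order):
    # Total distance of visiting cities in `order`, then returning home.
    legs = zip(order, order[1:] + order[:1])
    return sum(dist(cities[a], cities[b]) for a, b in legs)

best = min(permutations(cities), key=tour_length)
print(best, round(tour_length(best), 2))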
the “tacit negotiation” we perform to adapt ourselves to algorithmic systems: we enunciate differently when speaking to machines, use hashtags to make updates more machine-readable, and describe our work in search engine-friendly terms.
The computational turn means that many algorithms now reconstruct and efface legal, ethical, and perceived reality according to mathematical rules and implicit assumptions that are shielded from public view.
The pragmatist’s approach gestures toward, and often depends on, a deeper philosophical claim about the nature of the universe. We need to understand that claim as the grounding for the notion of “effective computability,” a transformational concept in computer science that fuels algorithmic evangelism today. In her book My Mother Was a Computer, media theorist N. Katherine Hayles labels this philosophical claim the Regime of Computation.16 This is another term for what I sometimes refer to as the age of the algorithm: the era dominated by the figure of the algorithm as an ontological …
The quest for knowledge becomes a quest for computation, a hermeneutics of modeling.
models always compress or shorthand reality.
Entscheidungsproblem: Hilbert’s “decision problem,” which asked whether some definite procedure could decide, for any statement in a formal logical system, whether that statement is provable.
Gödel proved, to general dismay, that it was impossible for a symbolic logical system to be internally consistent and provable using only statements within the system. The truth claim or validation of such a system would always depend on some external presumption or assertion of logical validity: turtles all the way down.
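For reference, a standard modern statement of the incompleteness results this highlight compresses (my formulation, not Finn’s): if $T$ is a consistent, effectively axiomatized theory that interprets basic arithmetic, then

$$\exists\, G_T : \quad T \nvdash G_T \ \text{ and } \ T \nvdash \neg G_T, \qquad \text{and moreover} \quad T \nvdash \mathrm{Con}(T)$$

That is, some sentence is undecidable within $T$ (first theorem), and $T$ cannot prove its own consistency (second theorem), which is the “external presumption” the turtles stand on.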
Turing’s simple imaginary machine is an elegant mathematical proof of universal computation, but it is also an ur-algorithm, an abstraction generator. The mathematical equivalence of Church’s and Turing’s work quickly suggested that varying proofs of effective computability (there are now over thirty) all gesture toward some fundamental universal truth.
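The model behind this claim fits in a few lines, which is part of why it reads as an abstraction generator. A minimal sketch in Python (the state names, tape convention, and sample bit-flipping program are all invented for illustration):

# A minimal Turing machine interpreter. The transition table maps
# (state, symbol) -> (new_symbol, move, new_state); "_" is the blank.
def run_turing_machine(program, tape, state="start", halt="halt", steps=10_000):
    tape = dict(enumerate(tape))  # sparse tape
    head = 0
    for _ in range(steps):
        if state == halt:
            break
        symbol = tape.get(head, "_")
        new_symbol, move, state = program[(state, symbol)]
        tape[head] = new_symbol
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape))

# Example program: flip every bit, halt at the first blank.
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run_turing_machine(flip, "10110"))  # -> 01001_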
From the beginning, then, algorithms have encoded a particular kind of abstraction, the abstraction of the desire for an answer.
The grand problems of the cosmos (the origins thereof, the relationship of time and space) and the less grand problems of culture (box office returns, intelligent web searching, natural language processing) are irreducible but also calculable: they are not complicated problems with simple answers but rather simple problems (or rule-sets) that generate complicated answers. These assumptions open the door to a mathesis universalis, a language of science that the philosophers Gottfried Wilhelm Leibniz, René Descartes, and others presaged as a way to achieve perfect understanding of the natural …
But far more unsettling, and the central thesis of the closely allied field of information theory, is the notion that probability applies to information as much as to material reality. By framing information as uncertainty, as surprise, as unpredicted new data, mathematician Claude Shannon created a quantifiable measurement of communication.
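For reference, the quantity Shannon defined (standard notation, not a quotation from the book): the surprise of an outcome $x$ is $-\log_2 p(x)$, and the information content of a source is the expected surprise,

$$H(X) = -\sum_{x} p(x) \log_2 p(x)$$

measured in bits: a fair coin toss carries one bit, a certain outcome carries zero, and the less probable a message, the more information its arrival conveys.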
As early as The Human Use of Human Beings, Wiener popularized these links between the Turing machine, neural networks, and learning in biological organisms, work that is now coming to startling life in the stream of machine learning breakthroughs announced by the Google subsidiary DeepMind over the past few years.
Wiener’s opening gambit of the turn from certainty to probability displaced but did not eliminate the old Enlightenment goals of universal, consilient knowledge. That ambition has now turned to building the best model, the finest simulation of reality’s complex probabilistic processes.
as Hayles puts it, “In the face of such a powerful dream, it can be a shock to remember that for information to exist, it must always be instantiated in a medium.”
The theoretical aspirations of cybernetics were always dependent on material implementation, a fact that has challenged generations of artificial intelligence researchers pursuing the platonic ideal of neural networks that effectively model the human mind.
With the “logic of general substitutability,” software has become a thing, Chun argues, embodying the central function of magic—the manipulation of symbols in ways that impact the world. This fundamental alchemy, the mysterious fungibility of sourcery, reinforces a reading of the Turing machine as an ur-algorithm that has been churning out effective computability abstractions in the minds of its “users” for eighty years. The “thing” that software has become is the cultural figure of the algorithm: instantiated metaphors for effective procedures.
computation is a universal solvent precisely because it is both metaphor and machine.
the central magic of computation depends on a quest for meaning, for understanding, that depends on particular forms of human cognition and recognition.
“Belief in the rationality-logicality equation has corroded the prophetic power of language itself. We can count, but we are rapidly forgetting how to say what is worth counting and why.”
what troubles Weizenbaum the most is not the vision of computers directly emulating human thought, or minds modeled in silicon, but rather the corrosive impact of computational thinking on the human self. “Now that language has become merely another tool, all concepts, ideas, images that artists and writers cannot paraphrase into computer-comprehensible language have lost their function and their potency.”
I am less interested in pursuing this notion than I am in the philosophical underpinnings that bring us here: the way that language itself, particularly written language, serves as the original “outside” to human thought, the first machine for processing culture across time and space.
As our extended mind continues to elaborate new systems, functions, applications, and zones of operation, the question of what it means to be human grows increasingly abstract, ever more imbricated in the metaphors and assumptions of code.
Algorithms have generated mathematical proofs and even new explanatory equations that defy human comprehension, making them “true” but not “understandable,” a situation that mathematician Steven Strogatz has termed the “end of insight.”
every culture machine we build to interface with the embodied world of human materiality also reconfigures that embodied space, altering cognitive and cultural practices. More important, this happens because implementation encodes a particular formulation of the desire for effective computability, a desire that we reciprocate when we engage with that system.
In any system dependent on abstraction there is a remainder, a set of discarded information—the différance, or the crucial distinction and deferral of meaning that goes on between the map and the territory. This gap emerges in implementation …
As technology critic Annalee Newitz argues, the predominance of female voices in digital assistants clearly has something to do with submission: “The sad truth is that these digital assistants are more like slaves than modern women. They are not supposed to threaten you, or become your equal—they are supposed to carry out orders without putting up a fight. The ideal slave, after all, would be like a mother to you. She would never rebel because she loves you, selflessly and forever.”
while Siri can learn from us (by aggregating billions of voice recordings for analysis, for example), we can directly teach it almost nothing. The ontologies are Siri’s secret sauce, the synapses that hold the entire operation together, and they are constructed according to the business logic, legal agreements, and licensing schemes of Apple and its partners.
If Google is trying to assemble a universal structure of knowledge, and attempting to do so as a commercial enterprise, the project has an important precedent. This was precisely the aim that Denis Diderot, Jean D’Alembert, and the other encyclopédistes were striving for in mid-eighteenth-century France.
though our algorithms don’t say much of interest, they are already reading, writing, and thinking in ways that no human can really understand.
This is what information theorist Christian Sandvig calls “corrupt personalization,” after legal scholar C. Edwin Baker and Habermas: the ways that algorithmic culture blurs the lines between our genuine interests and a set of commodities that may or may not be genuinely relevant, such as products “liked” by our friends on Facebook even if they did not knowingly endorse them.45 Both Baker and Sandvig ultimately engage here with Habermas’s conception of lifeworld, identifying ways in which the system can colonize or reformat the lifeworld, restructuring organic, undirected activities into ones …