Popular music created in the industrialized world in the decade from the late 1990s to the late 2000s doesn’t have a distinct style—that is, one that would provide an identity for the young people who grew up with it.
Whenever I’m around “Facebook generation” people and there’s music playing—probably selected by an artificial intelligence or crowd-based algorithm, as per the current fashion—I ask them a simple question: Can you tell in what decade the music that is playing right now was made?
I can’t find a decade span in the first century of recorded music that didn’t involve extreme stylistic evolution, obvious to listeners of all kinds.
So far, my theory has held: even true fans don’t seem to be able to tell if an indie rock track or a dance mix is from 1998 or 2008, for instance.
I am saying that this kind of work is more nostalgic than reaching. Since genuine human experiences are forever unique, pop music of a new era that lacks novelty raises my suspicions that it also lacks authenticity.
This is the first time since electrification that mainstream youth culture in the industrialized world has cloaked itself primarily in nostalgic styles.
The sort of “fresh, radical culture” you expect to see celebrated in the online world these days is a petty mashup of preweb culture.
This is embarrassing. The whole point of connected media technologies was that we were supposed to come up with new, amazing cultural expression.
If the internet is really destined to be no more than an ancillary medium, which I would view as a profound defeat, then it at least ought to do whatever it can not to bite the hand that feeds it—that is, it shouldn’t starve the commercial media industries.
Spore addresses an ancient conundrum about causality and deities that was far less expressible before the advent of computers. It shows that digital simulation can explore ideas in the form of direct experiences, which was impossible with previous art forms.
The business model that allows this to happen is the only one that has been proven to work so far: a closed model. You actually pay real money for Wright’s stuff.
Digital representations can be very good, but you can never foresee all the ways a representation might need to be used.
A physical object, on the other hand, will be fully rich and fully real whatever you do to it. It will respond to any experiment a scientist can conceive. What makes something fully real is that it is impossible to represent it to completion.
The original classical buildings were tarted up with garish colors and decorations, and their statues were painted to appear more lifelike. But when architects and sculptors attempted to re-create this style long after the paint and ornamentation had faded away, they invented a new cliché: courthouses and statuary made of dull stone.
Hip-hop has stayed alive during the web era, or at least has not been as stuck as the endless repetitions of the pop, rock, and folk genres.
Hip-hop is imprisoned within digital tools like the rest of us. But at least it bangs fiercely against the walls of its confinement.
Digital production usually has an overly regular beat because it comes out of a looper or a sequencer.
But hip-hop pierced through this problem in a shocking way. It turns out these same deficits can be turned around and used to express anger with incredible intensity. A sample played again and again expresses stuckness and frustration, as does the regular beat.
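The machine-regularity described here is easy to make concrete. A step sequencer quantizes every hit to an exact time grid, so each interval in the loop is numerically identical, with none of the millisecond jitter of a human performance. A minimal sketch (the pattern and tempo values are purely illustrative):

```python
# Minimal sketch of why sequenced beats feel machine-regular:
# a step sequencer places every hit on an exact time grid.
def sequence(pattern, bpm, steps_per_beat=4):
    """Return (time_in_seconds, sample_name) pairs for one looped bar."""
    step = 60.0 / bpm / steps_per_beat  # duration of one grid step
    return [(round(i * step, 6), name)
            for i, name in enumerate(pattern) if name]

# One bar of a basic pattern at 120 BPM; None marks a silent step.
bar = ["kick", None, "hat", None, "snare", None, "hat", None]
events = sequence(bar, bpm=120, steps_per_beat=2)
```

Every gap between hits comes out as exactly the same number; a drummer's gaps would drift by tens of milliseconds, which is precisely the "stuckness" a loop can be made to express.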
Why must all the new schemes that compete with traditional music licensing revere remoteness? There’s no significant technological barrier to getting musicians involved in the contextual side of expression, only an ideological one.
When you come upon a video clip or picture or stretch of writing that has been made available in the web 2.0 manner, you almost never have access to the history or the locality in which it was perceived to have meaning by the anonymous person who left it there.
DNA sequences might float around from garage experimenter to garage experimenter via the internet, following the trajectories of pirated music downloads and being recombined in endless ways.
However it happens, species boundaries will become defunct, and genes will fly about, resulting in an orgy of creativity. Untraceable multitudes of new biological organisms will appear as frequently as new videos do on YouTube today.
My guess is that a poorly encapsulated communal gloop of organisms lost out to closely guarded species on the primordial Earth for the same reason that the Linux community didn’t come up with the iPhone: encapsulation serves a purpose.
The interval of nonopenness—the time before publication—functions like the walls of a cell. It allows a complicated stream of elements to be defined well enough to be explored, tested, and then improved.
The open-source software community is simply too connected to focus its tests and maintain its criteria over an extended duration. A global process is no test at all, for the world happens only once. You need locality to have focus, evolution, or any other creative process.
You’ll generally find that for most topics, the Wikipedia entry is the first URL returned by search engines but not necessarily the best URL available.
Typical authors of Wikipedia, however, implicitly celebrate the ideal of intellectual mob rule. “Edit wars” on Wikipedia are called that for a reason. Whether they are cordial or not, Wikipedians always act out the idea that the collective is closer to the truth and the individual voice is dispensable.
The reason moviemaking has become as much a part of pop culture as movie viewing is that new gadgets appeared. Cheap, easy-to-use video cameras, editing software, and distribution methods—such as YouTube—are what made the difference.
And while it’s true that there are still only a few special geniuses of cinema, the basic competence turns out to be as easily acquired as learning to talk or drive a car.
One institution from this nearly forgotten chapter of the early web was ThinkQuest. This was a contest run by internet pioneers, especially Al Weis, in which teams of high school students competed for scholarships by designing websites that explained ideas from a wide variety of academic disciplines, including math.
Their work included simulations, interactive games, and other elements that were pretty new to the world. They weren’t just transferring material that already existed into a more regularized, anonymous form.
The web should have developed along the ThinkQuest model instead of the wiki model—and would have, were it not for hive ideology.
For lack of a better word, I call it computationalism. This term is usually used more narrowly to describe a philosophy of mind, but I’ll extend it to include something like a culture. A first pass at a summary of the underlying philosophy is that the world can be understood as a computational process, with people as subprocesses.
Dividing the world into two parts, one of which is ordinary—deterministic or mechanistic, perhaps—and one of which is mystifying, or more abstract, is particularly difficult for scientists. This is the dreaded path of dualism.
I’ll discuss three common flavors of computationalism and then describe a fourth flavor, the one that I prefer. Each flavor can be distinguished by a different idea about what would be needed to make software as we generally know it become more like a person.
One flavor is based on the idea that a sufficiently voluminous computation will take on the qualities we associate with people—such as, perhaps, consciousness.
A second flavor of computationalism holds that a computer program with specific design features—usually related to self-representation and circular references—is similar to a person. Some of the figures associated with this approach are Daniel Dennett and Douglas Hofstadter, though each has his own ideas about what the special features should be.
A third flavor of computationalism is found in web 2.0 circles. In this case, any information structure that can be perceived by some real human to also be a person is a person. This idea is essentially a revival of the Turing test.
The approach to thinking about people computationally that I prefer, on those occasions when such thinking seems appropriate to me, is what I’ll call “realism.”
I believe humans are the result of billions of years of implicit, evolutionary study in the school of hard knocks. The cybernetic structure of a person has been refined by a very large, very long, and very deep encounter with physical reality.
From this point of view, what can make bits have meaning is that their patterns have been hewn out of so many encounters with reality that they aren’t really abstractable bits anymore, but are instead a nonabstract continuation of reality.
Both colors and sounds can be described with just a few numbers; a wide spectrum of colors and tones is described by the interpolations between those numbers. The human retina need be sensitive to only a few wavelengths, or colors, in order for our brains to process all the intermediate ones.
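That interpolation is what makes color so compact to represent: three stored numbers span a whole gamut, and any in-between color can simply be computed rather than stored. A toy sketch of the idea:

```python
def lerp_color(c0, c1, t):
    """Linearly interpolate between two RGB colors; t runs from 0 to 1.

    Every intermediate color is derived from just the two endpoints,
    mirroring how a few retinal wavelength sensitivities let the brain
    fill in the whole spectrum between them.
    """
    return tuple(round(a + (b - a) * t) for a, b in zip(c0, c1))

red, yellow = (255, 0, 0), (255, 255, 0)
orange = lerp_color(red, yellow, 0.5)  # a color neither endpoint stores
```

The same trick works for pitch: a handful of reference values plus interpolation covers the continuum.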
Odors are completely different, as is the brain’s method of sensing them.
The number of distinct odors is limited only by the number of olfactory receptors capable of interacting with them.
The human nose contains about one thousand different types of olfactory neurons, each type able to detect a particular set of chemicals.
Think of it this way: colors and sounds can be measured with rulers, but odors must be looked up in a dictionary.
Keep in mind that smells are not patterns of energy, like images or sounds. To smell an apple, you physically bring hundreds or thousands of apple molecules into your body. You don’t smell the entire apple; you steal a piece of it and look that piece up in your smell dictionary to identify the larger whole.
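The ruler-versus-dictionary contrast can be sketched in code: a pitch is a point on a continuous scale, so any in-between value is meaningful, whereas an odor is identified by matching a discrete combination of activated receptor types against known patterns. The receptor names and codebook below are purely hypothetical illustrations, not real biology:

```python
# "Ruler": pitch lives on a continuous scale, so we can compute any point
# on it, including fractional notes, from a simple formula.
def midi_to_hz(note):
    return 440.0 * 2 ** ((note - 69) / 12)

# "Dictionary": an odor is recognized by looking up which combination of
# receptor types fired. Receptor IDs and patterns here are invented.
ODOR_CODEBOOK = {
    frozenset({"OR1", "OR7"}): "apple",
    frozenset({"OR2", "OR7", "OR9"}): "smoke",
}

def identify_odor(active_receptors):
    """Match the set of activated receptor types against known odors."""
    return ODOR_CODEBOOK.get(frozenset(active_receptors), "unknown")
```

A pitch halfway between two notes is still a pitch, but there is no "halfway" between apple and smoke: an unrecognized receptor combination is simply not in the dictionary.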
To solve the problem of olfaction—that is, to make the complex world of smells quickly identifiable—brains had to evolve a specific type of neural circuitry.
He often refers to the olfactory parts of the brain as the “Old Factory,” as they are remarkably similar across species, which suggests that the structure has ancient origins.
Is there a relationship between olfaction and language, that famous product of the human cerebral cortex?

