Why Greatness Cannot Be Planned: The Myth of the Objective
Kindle Notes & Highlights
69% · If the path you’re on does not resemble where you thought you’d be, you’re probably doing something right. In the long run, stepping stones lead to other stepping stones and eventually to great discoveries.
70% · When all is said and done, when even visionaries grow weary of stale visions, when the ash of unrequited expectation settles on the cloak of the impenetrable future, there is but one principle that may yet pierce the darkness: To achieve our highest goals, we must be willing to abandon them.
73% · Our actual evolutionary ancestors, such as flatworms, don’t resemble us. So evolution couldn’t have been actively searching for us—otherwise we’d never have been found!
75% · is selection really necessary in the strongest sense to create the complex creatures we see around us? Or is it possibly a restriction that limits the creativity of evolution?
76% · So “survive and reproduce” can be viewed as constraining the stepping stones that evolution explores further, and not an objective in itself.
79% · you stand as a trillion-celled tower of reproductive inefficiency. When viewed this way, a human is curiously like a complicated outgrowth of a bacteria’s reproductive process. We proliferate trillions of cells to produce our offspring where only one is needed to do the job.
80% · Seen this way, evolution is a special kind of non-objective search: a minimal criteria search. It isn’t heading anywhere in particular, but it heads everywhere that passes the minimal criteria of survival and reproduction, which was satisfied from the very start of evolution by the first reproducing cell.
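The “minimal criteria search” the authors describe can be sketched as a short program. This is an illustrative toy, not code from the book: every name here (the criterion, the mutation operator, the toy integer domain) is an assumption chosen for demonstration. The key feature is that the criterion acts as a filter, never as an objective to maximize, so survivors are kept without being ranked against each other.

```python
import random

def minimal_criteria_search(seed, meets_criteria, mutate,
                            generations=20, offspring=10, cap=25):
    """Toy minimal-criteria search (illustrative sketch, not the book's code).

    No candidate is ever scored as 'better': anything passing the
    minimal criterion survives and keeps exploring; everything else
    is discarded. The search spreads everywhere the criterion allows
    rather than climbing toward a target.
    """
    population = [seed]
    for _ in range(generations):
        children = [mutate(p) for p in population for _ in range(offspring)]
        # The criterion is a filter, not an objective.
        survivors = [c for c in children if meets_criteria(c)]
        # Cap the population at random -- deliberately not by quality.
        population = random.sample(survivors, min(len(survivors), cap)) or population
    return population

# Hypothetical toy domain: integers "survive" if they stay within a broad band.
random.seed(0)
result = minimal_criteria_search(
    seed=0,
    meets_criteria=lambda x: -100 <= x <= 100,
    mutate=lambda x: x + random.randint(-5, 5),
)
```

Note the design choice: when the population is trimmed, members are sampled at random rather than by fitness, which is exactly what distinguishes this from an objective-driven search.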
86% · While things that are “meta” may seem a little mind-bending, in this instance it isn’t as complicated as it sounds. In fact, this sort of “meta-search” happens in real life, too. Imagine if you were trying to pick which puppy to take home from a puppy breeder, and you happened to like curious puppies. You’d be searching for the puppy that most likes to search. Or imagine that your job was to hire someone at a treasure-hunting company for the position of senior treasure hunter. You’d be searching for the best treasure hunter, whose job is to search for treasure. So the whole thing is a …
87% · Perhaps it isn’t obvious that there are problems with the way the AI community conducts research at all.
88% · Who would need to argue that ideas that perform worse should receive less attention? It’s common knowledge. But it’s also “common knowledge” that objectives are a good way to guide search. We need to be careful about common knowledge.
88% · Recall that performance is exactly the heuristic against which novelty search compared favorably in Chap. 5. There’s no reason to suspect that the same heuristic, but used one level up (to guide the AI community as a whole instead of a search algorithm), will avoid the problems plaguing objective-driven search. When performance is the rule of thumb that filters which algorithms are shown to the larger community, all other kinds of stepping stones are rejected and ignored.
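The contrast the highlight draws can be made concrete with a minimal sketch of novelty search. This is my own illustration under assumed names (the 1-D “behavior” value, the mutation operator, the archive size), not the authors’ implementation: selection rewards being unlike what has already been seen, and no performance measure appears anywhere in the loop.

```python
import random

def novelty(candidate, archive, k=5):
    """Novelty = mean distance to the k nearest behaviors seen so far."""
    if not archive:
        return float("inf")
    dists = sorted(abs(candidate - a) for a in archive)
    return sum(dists[:k]) / min(k, len(dists))

def novelty_search(seed, mutate, generations=30, pop_size=20):
    """Illustrative novelty search on a toy 1-D behavior space.

    A sketch, not the book's code: children are ranked purely by how
    novel their behavior is relative to an archive of past behaviors,
    never by how well they 'perform'.
    """
    population = [seed] * pop_size
    archive = []
    for _ in range(generations):
        children = [mutate(p) for p in population for _ in range(3)]
        # Keep the most novel children, not the best-performing ones.
        children.sort(key=lambda c: novelty(c, archive), reverse=True)
        population = children[:pop_size]
        archive.extend(population[:3])  # remember a few novel behaviors
    return archive

random.seed(1)
archive = novelty_search(seed=0.0, mutate=lambda x: x + random.gauss(0, 1))
```

Swapping the sort key from `novelty(...)` to a fitness function would turn this back into an objective-driven search, which is precisely the one-line difference the book’s argument hinges on.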
90% · On the other hand, we don’t mean to suggest that no one should ever investigate whether OldReliable outperforms Weird.
90% · what’s best for the practitioner needn’t be connected to what’s best for the researcher.
92% · While higher performance or a surprising new theorem may be impressive achievements, being impressed also isn’t a reliable guide for reaching particular objectives in a search.
93% · The objectives of AI are so far off across the mist-cloaked lake that we shouldn’t worry so much about the ruler of performance.
94% · Think back to Picbreeder. There are no rules in that community. There’s no panel of experts that decide whether a user’s picture is really worth sharing. There aren’t any rigid yardsticks that objectively show which picture is “best.”
94% · “False analogy,” you might say.
94% · Let’s pretend that there’s a very unusual journal in the field of AI called the Journal of AI Discovery (or JAID for short). But unlike any other journal in the field of AI, reviewers for the journal are not allowed to refer to the results of the experiments in their reviews. The authors submitting their studies to JAID may include theoretical and experimental results as usual, but the reviewers can’t base their reviews on those results.
95% · The question is: Are the articles published in JAID worse than those published in the most distinguished journals in the field? Or are they far better? Would you read JAID if you were an AI researcher, knowing that its reviewers can’t criticize (or reward) performance measures or demand guarantees?
95% · But as John Stuart Mill knew even in 1846,