Kindle Notes & Highlights (read between March 24 – April 7, 2023)
There are even examples of non-functional design. For instance, most animals have a gene for synthesizing vitamin C, but in primates, including humans, though that gene is recognizably present, it is faulty: it does not do anything. This is very difficult to account for except as a vestigial feature that primates have inherited from non-primate ancestors.
Socrates was right to point out that the appearance of design in living things is something that needs to be explained. It cannot be the ‘product of chance’. And that is specifically because it signals the presence of knowledge. How was that knowledge created?
That pair of birds would then be guaranteed the best nesting site on the island – an advantage which, in terms of the survival of their offspring, might well outweigh all the slight disadvantages of nesting earlier. In that case, in the following generation, there will be more March-nesting birds, and, again, all […]
Thus the original situation that we imagined – with genes that were optimally adapted to maximizing the population (‘benefiting the species’) – is unstable. There will be evolutionary pressure to make the genes become less well adapted to that function.
From the point of view of both the species and all its members, the change brought about by this period of its evolution has been a disaster. But evolution does not ‘care’ about that. It favours only the genes that spread best through the population.
So what would refute the Darwinian theory of evolution? Evidence which, in the light of the best available explanation, implies that knowledge came into existence in a different way. For instance, if an organism was observed to undergo only (or mainly) favourable mutations, as predicted by Lamarckism or spontaneous generation, then Darwinism’s ‘random variation’ postulate would be refuted.
So Sciama concludes that, if we did measure one of those constants of physics, and found that it was extremely close to the optimum value for producing astrophysicists, that would statistically refute, not corroborate, the anthropic explanation for its value.
Creationism, therefore, is misleadingly named. It is not a theory explaining knowledge as being due to creation, but the opposite: it is denying that creation happened in reality, by placing the origin of the knowledge in an explanationless realm. Creationism is really creation denial – and so are all those other false explanations.
It was only by causing people to do this that the Roman-numeral system survived – that is to say, caused itself to be copied from generation to generation of Romans: they found it useful, so they passed it on to their offspring. As I have said, knowledge is information which, when it is physically embodied in a suitable environment, tends to cause itself to remain so.
First the brain was supposed to be like an immensely complicated set of gears and levers. Then it was hydraulic pipes, then steam engines, then telephone exchanges – and, now that computers are our most impressive technology, brains are said to be computers. But this is still no more than a metaphor, says Searle, and there is no more reason to expect the brain to be a computer than a steam engine.
About four billion years ago – soon after the surface of the Earth had cooled sufficiently for liquid water to condense – the oceans were being churned by volcanoes, meteor impacts, storms and much stronger tides than today’s (because the moon was closer). They were also highly active chemically, with many kinds of molecules being continually formed and transformed, some spontaneously and some by catalysts. One such catalyst happened to catalyse the formation of some of the very kinds of molecules from which it itself was formed. That catalyst was not alive, but it was the first hint of life.
The mysterious universality of DNA as a constructor may have been the first universality to exist.
This may seem like logic-chopping, but it is not. The reason for these paradoxes and parallels between blind optimism and blind pessimism is that those two approaches are very similar at the level of explanation. Both are prophetic: both purport to know unknowable things about the future of knowledge.
Malthus had quite accurately foretold the one phenomenon, but had missed the other altogether. Why? Because of the systematic pessimistic bias to which prophecy is prone. In 1798 the forthcoming increase in population was more predictable than the even larger increase in the food supply not because it was in any sense more probable, but simply because it depended less on the creation of knowledge.
They all thought they were making sober predictions based on the best knowledge available to them. In reality they were all allowing themselves to be misled by the ineluctable fact of the human condition that we do not yet know what we have not yet discovered.
Neither Malthus nor Rees intended to prophesy. They were warning that unless we solve certain problems in time, we are doomed. But that has always been true, and always will be. Problems are inevitable. As I said, many civilizations have fallen. Even before the dawn of civilization, all our sister species, such as the Neanderthals, became extinct through challenges with which they could easily have coped, had they known how.
And all those options add up to the overarching option that they failed to create, namely that of forming a scientific and technological civilization like ours. Traditions of criticism. An Enlightenment.
And I propose to replace, therefore, the question of the sources of our knowledge by the entirely different question: ‘How can we hope to detect and eliminate error?’
[…] future possibilities to ignore and which to rely on. Induction, instrumentalism and even Lamarckism all make the same mistake: they expect explanationless progress. They expect knowledge to be created by fiat with few errors, and not by a process of variation and selection that is making a continual stream of errors and correcting them.
[…] violence if they are. Just as the institutions of science are structured so as to avoid entrenching theories, but instead to expose them to criticism and testing, so political institutions should not make it hard to oppose rulers and policies, non-violently, and should embody traditions of peaceful, critical discussion of them and of the institutions themselves and everything else. Thus, systems of government are to be judged not for their prophetic ability to choose and install good leaders and policies, but for their ability to remove bad ones that are already there.
Now forget the material of which the televisions are made. Only the pictures exist. This is to stress that a universe is not a receptacle containing physical objects: it is those objects.
Similarly, it is misleading to speak of the ‘original’ object and its ‘doppelgänger’: they are simply the two instances of the object.
You will not find the concept of fungibility discussed or even mentioned in many textbooks or research papers on quantum theory, even the small minority that endorse the many-universes interpretation. Nevertheless, it is everywhere just beneath the conceptual surface, and I believe that making it explicit helps to explain quantum phenomena without fudging. As will become clear, it is an even weirder attribute than Leibniz guessed – much weirder than multiple universes for instance, which are, after all, just common sense, repeated. It allows radically new types of motion and information flow, […]
First, it is a perfect example of bad explanation: it could be used to ‘explain’ anything. Second, one way it achieves that status is by addressing only the form of the question and not the substance: it is about who said something, not what they said. That is the opposite of truth-seeking. Third, it reinterprets a request for true explanation (why should something-or-other be as it is?) as a request for justification (what entitles you to assert that it is so?), which is the justified-true-belief chimera. Fourth, it confuses the nonexistent authority for ideas with human authority (power) – a […]
The apportionment issue has been referred several times to eminent mathematicians, including twice to the National Academy of Sciences, and on each occasion these authorities have made different recommendations. Yet none of them ever accused their predecessors of making errors in mathematics. This ought to have warned everyone that this problem is not really about mathematics. And on each occasion, when the experts’ recommendations were implemented, paradoxes and disputes kept on happening.
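The kind of paradox that kept on happening can be made concrete. The following sketch is an editorial illustration, not from the book: it implements the well-known ‘largest remainder’ (Hamilton) apportionment rule with hypothetical populations, and exhibits the classic Alabama paradox, in which *enlarging* the house costs the smallest state a seat.

```python
from math import floor

def hamilton(populations, seats):
    """Largest-remainder (Hamilton) apportionment:
    give each state the floor of its exact quota, then hand the
    leftover seats to the states with the largest fractional parts."""
    total = sum(populations)
    quotas = [p * seats / total for p in populations]
    alloc = [floor(q) for q in quotas]
    leftover = seats - sum(alloc)
    # rank states by fractional remainder, largest first
    order = sorted(range(len(populations)),
                   key=lambda i: quotas[i] - alloc[i], reverse=True)
    for i in order[:leftover]:
        alloc[i] += 1
    return alloc

pops = [6, 6, 2]           # hypothetical state populations
print(hamilton(pops, 10))  # → [4, 4, 2]
print(hamilton(pops, 11))  # → [5, 5, 1]: with one MORE seat to share,
                           #   the small state ends up with one FEWER
```

No step of the rule is mathematically wrong, yet the outcome violates an intuitive fairness condition – which is exactly why none of the eminent mathematicians could accuse their predecessors of errors in mathematics.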
A devil’s advocate might now ask: if majority voting among apportionment rules is such a bad idea, why is majority voting among voters a good idea? It would be disastrous to use it in, say, science. There are more astrologers than astronomers, and believers in ‘paranormal’ phenomena often point out that purported witnesses of such phenomena outnumber the witnesses of most scientific experiments by a large factor.
For instance, if there are two options of which you mildly prefer one, you have an incentive to register your preference as ‘strong’ instead. Perhaps you are prevented from doing that by a sense of civic responsibility. But a decision-making system moderated by civic responsibility has the defect that it gives disproportionate weight to the opinions of people who lack civic responsibility and are willing to lie.
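The incentive to exaggerate can be seen in a toy example (my illustration, with made-up ballots, not from the book): under a score-voting scheme, a voter whose honest preference for option A is only mild can flip the outcome by registering it as maximal.

```python
def winner(ballots):
    """Score voting: sum each option's scores; highest total wins."""
    totals = {}
    for ballot in ballots:
        for option, score in ballot.items():
            totals[option] = totals.get(option, 0) + score
    return max(totals, key=totals.get)

# Two other voters' honest scores (0-10 scale):
others = [{"A": 0, "B": 10}, {"A": 10, "B": 4}]

honest = {"A": 6, "B": 5}        # a mild preference for A, stated honestly
exaggerated = {"A": 10, "B": 0}  # the same preference, stated as 'strong'

print(winner(others + [honest]))       # → B (totals: A 16, B 19)
print(winner(others + [exaggerated]))  # → A (totals: A 20, B 14)
```

The lie changes the result in the liar's favour, so a system moderated only by civic responsibility systematically rewards those who lack it.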
In terms of Edison’s metaphor, the model refers only to the perspiration phase, without realizing that decision-making is problem-solving, and that without the inspiration phase nothing is ever solved and there is nothing to choose between. At the heart of decision-making is the creation of new options and the abandonment or modification of existing ones.
[…] losing its explanatory power, is hard to mix with a rival explanation: something halfway between them is usually worse than either of them separately. Mixing two explanations to create a better explanation requires an additional act of creativity. That is why good explanations are discrete – separated from each other by bad explanations – and why, when choosing between explanations, we are faced with discrete options.
More generally, the most important conditions for rational decision-making – such as freedom of thought and of speech, tolerance of dissent, and the self-determination of individuals – all require ‘dictatorships’ in Arrow’s mathematical sense. It is understandable that he chose that term.
In particular, what voters are doing in elections is not synthesizing a decision of a superhuman being, ‘Society’. They are choosing which experiments are to be attempted next, and (principally) which are to be abandoned because there is no longer a good explanation for why they are best. The politicians, and their policies, are those experiments.
The conditions of ‘fairness’ as conceived in the various social-choice problems are misconceptions analogous to empiricism: they are all about the input to the decision-making process – who participates, and how their opinions are integrated to form the ‘preference of the group’. A rational analysis must concentrate instead on how the rules and institutions contribute to the removal of bad policies and rulers, and to the creation of new options.
But this is not so that all members can contribute to the answer. It is because such discrimination entrenches in the system a preference among their potential criticisms. It does not make sense to include everyone’s favoured policies, or parts of them, in the new decision; what is necessary for progress is to exclude ideas that fail to survive criticism, and to prevent their entrenchment, and to promote the creation of new ideas.
Proportional representation is often defended on the grounds that it leads to coalition governments and compromise policies. But compromises – amalgams of the policies of the contributors – have an undeservedly high reputation. Though they are certainly better than immediate violence, they are generally, as I have explained, bad policies. If a policy is no one’s idea of what will work, then why should it work? But that is not the worst of it. The key defect of compromise policies is that when one of them is implemented and fails, no one learns anything because no one ever agreed with it. Thus […]
However, under Popper’s criterion, that is all insignificant in comparison with the greater effectiveness of plurality voting at removing bad governments and policies.
Recognizing the right of women to vote, for instance, doubled the number of voters – and implicitly admitted that in every previous election half the population had been disenfranchised, and the other half over-represented compared with a just representation. In numerical terms, such injustices dwarf all the injustices of apportionment that have absorbed so much political energy over the centuries.
Popper’s criterion: Good political institutions are those that make it as easy as possible to detect whether a ruler or policy is a mistake, and to remove rulers or policies without violence when they are.
– Choice that involves creating new options rather than weighing existing ones.
– Political institutions that meet Popper’s criterion.
That omits the most important element of decision-making, namely the creation of new options. Good policies are hard to vary, and therefore conflicting policies are discrete and cannot be arbitrarily mixed. Just as rational thinking does not consist of weighing the justifications of rival theories, but of using conjecture and criticism to seek the best explanation, so coalition governments are not a desirable objective of electoral systems. They should be judged by Popper’s criterion of how easy they make it to remove bad rulers and bad policies. That designates the plurality voting system as […]
Scientific theories are hard to vary because they correspond closely with an objective truth, which is independent of our culture, our personal preferences and our biological make-up.
At one level the answer is simply that we are universal explainers and can create knowledge about anything. But still, why did we want to create aesthetic knowledge in particular?
But I guess that when beauty is better understood it will turn out that most of the differences have been in the direction of making humans objectively more beautiful than apes.
[…] universal and parochial beauty, are mixed together in our subjective appreciation of things. It will be important to discover which is which. For it is only in the objective direction that we can expect to make unlimited progress. The other directions are inherently finite. They are circumscribed by the finite knowledge inherent in our genes and our existing traditions.
For the same reason, any kind of art that consists solely of spontaneous or mechanical acts, such as throwing paint on to canvas, or of pickling sheep, lacks the means of making artistic progress, because real progress is difficult and involves many errors for every success. If I am right, then the future of art is as mind-boggling as the future of every other kind of knowledge: art of the future can create unlimited increases in beauty. I can only speculate,
(More precisely, what would it be like for a person to have the echo-location senses of a bat?) Perhaps the full answer is that in future it will be not so much the task of philosophy to discover what that is like, but the task of technological art to give us the experience itself.