Brief Answers to the Big Questions
Read between November 21 and December 1, 2024
20%
Unlike laws made by humans, the laws of nature cannot be broken—that’s why they are so powerful and, when seen from a religious standpoint, controversial too.
20%
The laws of science determine the evolution of the universe, given its state at one time. These laws may, or may not, have been decreed by God, but he cannot intervene to break the laws, or they would not be laws. That leaves God with the freedom to choose the initial state of the universe, but even here it seems there may be laws. So God would have no freedom at all.
22%
When the Big Bang produced a massive amount of positive energy, it simultaneously produced the same amount of negative energy. In this way, the positive and the negative add up to zero, always. It’s another law of nature.
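A compact way to see this balance (my gloss; the book states it only in words) is to set the positive rest-mass energy of a region of matter against its negative gravitational potential energy:

```latex
% Heuristic "zero-energy universe" balance (a gloss, not the book's equation).
% For a uniform sphere of mass M and radius R, the gravitational binding
% energy is -3GM^2/(5R), which can offset the positive energy Mc^2:
E_{\text{total}} \;=\; Mc^{2} \;-\; \frac{3GM^{2}}{5R} \;\approx\; 0
```

The negative sign of gravitational energy is what lets the books balance without violating conservation.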
24%
You can’t get to a time before the Big Bang because there was no time before the Big Bang. We have finally found something that doesn’t have a cause, because there was no time for a cause to exist in. For me this means that there is no possibility of a creator, because there is no time for a creator to have existed.
29%
If the laws of science are suspended at the beginning of the universe, might not they also fail at other times? A law is not a law if it only holds sometimes.
42%
This has meant that no one person can be the master of more than a small corner of human knowledge. People have to specialise, in narrower and narrower fields. This is likely to be a major limitation in the future. We certainly cannot continue, for long, with the exponential rate of growth of knowledge that we have had in the last 300 years.
43%
Laws will probably be passed against genetic engineering with humans. But some people won’t be able to resist the temptation to improve human characteristics, such as size of memory, resistance to disease and length of life. Once such superhumans appear, there are going to be major political problems with the unimproved humans, who won’t be able to compete. Presumably, they will die out, or become unimportant. Instead, there will be a race of self-designing beings, who are improving themselves at an ever-increasing rate.
43%
It might be possible to use genetic engineering to make DNA-based life survive indefinitely, or at least for 100,000 years. But an easier way, which is almost within our capabilities already, would be to send machines. These could be designed to last long enough for interstellar travel. When they arrived at a new star, they could land on a suitable planet and mine material to produce more machines, which could be sent on to yet more stars. These machines would be a new form of life, based on mechanical and electronic components rather than macromolecules. They could eventually replace …
44%
It is difficult to say how often such collisions occur, but a reasonable guess might be every twenty million years, on average. If this figure is correct, it would mean that intelligent life on Earth has developed only because of the lucky chance that there have been no major collisions in the last sixty-six million years. Other planets in the galaxy, on which life has developed, may not have had a long enough collision-free period to evolve intelligent beings.
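The "lucky chance" can be put into numbers. Treating major impacts as a Poisson process at the book's guessed rate of one per twenty million years (the model is my illustration, not the book's), a 66-million-year quiet stretch is genuinely improbable:

```python
import math

rate_per_myr = 1 / 20   # assumed: one major impact per 20 Myr (the book's guess)
window_myr = 66         # collision-free span since the last major impact

# Poisson model (an assumption): probability of zero impacts in the window.
p_quiet = math.exp(-rate_per_myr * window_myr)
print(f"P(no major impact in {window_myr} Myr) = {p_quiet:.3f}")   # ~ 0.037
```

On this estimate, only a few percent of 66-million-year windows are impact-free, which is the sense in which our run of luck is unusual.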
45%
Meeting a more advanced civilisation, at our present stage, might be a bit like the original inhabitants of America meeting Columbus—and I don’t think they thought they were better off for it.
46%
A scientific law is not a scientific law if it only holds when some supernatural being decides to let things run and not intervene.
48%
All the evidence points to God being an inveterate gambler, who throws the dice on every possible occasion.
50%
Michell argued that there could be stars that were much more massive than the Sun which had escape velocities greater than the speed of light. We would not be able to see them, because any light they sent out would be dragged back by gravity. Thus they would be what Michell called dark stars, what we now call black holes.
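Michell's argument needs only Newtonian mechanics: setting the escape velocity sqrt(2GM/R) equal to the speed of light gives a critical radius R = 2GM/c², which happens to coincide with the Schwarzschild radius of general relativity. A quick sketch (standard constants, not numbers from the book):

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg

def dark_star_radius(mass_kg):
    """Radius at which Newtonian escape velocity sqrt(2GM/R) reaches c."""
    return 2 * G * mass_kg / c**2

# A star of one solar mass would have to be squeezed inside ~3 km:
print(f"{dark_star_radius(M_SUN) / 1000:.1f} km")   # ~ 3.0 km
```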
51%
If one neglected pressure, a uniform spherically symmetric star would contract to a single point of infinite density. Such a point is called a singularity.
51%
All our theories of space are formulated on the assumption that space–time is smooth and nearly flat, so they break down at the singularity, where the curvature of space–time is infinite. In fact, it marks the end of space and time itself.
52%
Falling through the event horizon is a bit like going over Niagara Falls in a canoe. If you are above the Falls, you can get away if you paddle fast enough, but once you are over the edge you are lost. There’s no way back. As you get nearer the Falls, the current gets faster. This means it pulls harder on the front of the canoe than the back. There’s a danger that the canoe will be pulled apart. It is the same with black holes.
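The "pulled apart" danger can be estimated with Newtonian tidal forces: the difference in gravitational pull across a body of length L at distance r from mass M is roughly 2GML/r³. A rough sketch (my numbers, not the book's) shows why small black holes are the dangerous ones:

```python
G = 6.674e-11      # m^3 kg^-1 s^-2
c = 2.998e8        # m/s
M_SUN = 1.989e30   # kg

def horizon_radius(mass_kg):
    """Schwarzschild radius 2GM/c^2 in metres."""
    return 2 * G * mass_kg / c**2

def tidal_accel(mass_kg, r_m, length_m=2.0):
    """Newtonian tidal stretch across a ~2 m body: ~2*G*M*L / r^3 (m/s^2)."""
    return 2 * G * mass_kg * length_m / r_m**3

for label, mass in [("10 solar-mass hole", 10 * M_SUN),
                    ("4 million solar-mass hole", 4e6 * M_SUN)]:
    r = horizon_radius(mass)
    print(f"{label}: ~{tidal_accel(mass, r):.1e} m/s^2 at the horizon")
```

At a stellar-mass horizon the stretch is around 10^8 m/s², far beyond what any body survives; at a supermassive hole it is a fraction of a milli-g, so the crossing itself would be gentle.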
53%
Although you wouldn’t notice anything in particular as you fell into a black hole, someone watching you from a distance would never see you cross the event horizon. Instead, you would appear to slow down and hover just outside. Your image would get dimmer and dimmer, and redder and redder, until you were effectively lost from sight. As far as the outside world is concerned, you would be lost for ever.
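The dimming and reddening follow from gravitational time dilation. For a static emitter at radius r outside a non-rotating hole with horizon radius r_s, the received frequency is reduced by the Schwarzschild factor sqrt(1 − r_s/r), which goes to zero at the horizon (a standard textbook result, not spelled out in the book):

```python
import math

def redshift_factor(r_over_rs):
    """Schwarzschild redshift factor sqrt(1 - r_s/r) for a static emitter."""
    return math.sqrt(1.0 - 1.0 / r_over_rs)

for x in (10, 2, 1.1, 1.01, 1.001):
    print(f"r = {x:>6} r_s -> signal at {redshift_factor(x):.4f} of emitted frequency")
```

As the infaller approaches the horizon the distant observer sees the signal redden and fade toward zero, which is why the crossing is never actually seen.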
84%
I think there is no significant difference between how the brain of an earthworm works and how a computer computes. I also believe that evolution implies there can be no qualitative difference between the brain of an earthworm and that of a human. It therefore follows that computers can, in principle, emulate human intelligence, or even better it. It’s clearly possible for something to acquire higher intelligence than its ancestors: we evolved to be smarter than our ape-like ancestors, and Einstein was smarter than his parents.
85%
If computers continue to obey Moore’s Law, doubling their speed and memory capacity every eighteen months, the result is that computers are likely to overtake humans in intelligence at some point in the next hundred years.
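The arithmetic behind the claim, taken at face value (the 18-month doubling is the conditional the book itself flags with "if"):

```python
years = 100
doubling_months = 18

doublings = years * 12 / doubling_months   # ~ 66.7 doublings in a century
growth = 2 ** doublings
print(f"{doublings:.1f} doublings -> x{growth:.2e} in speed and memory")  # ~ 1.2e20
```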
86%
Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.
87%
Recent developments in the advancement of AI include a call by the European Parliament for drafting a set of regulations to govern the creation of robots and AI. Somewhat surprisingly, this includes a form of electronic personhood, to ensure the rights and responsibilities for the most capable and advanced AI.
95%
A world where only a tiny super-elite are capable of understanding advanced science and technology and its applications would be, to my mind, a dangerous and limited one.