The Scientific Method

The “Scientific Method” we scientists keep talking about has many ingredients, but this is the most important: intellectual integrity. Some call it objectivity: look at all the relevant data and go where the evidence takes you, whether you like it or not. Kepler discarded the results of years of very hard work because his theory did not quite agree with the observations. Einstein once published a paper documenting the colossal failure of a research effort that had taken him two years, “to save another fool two years should he wander down the same path”.

You might say that I am talking about plain honesty, not something special reserved for science and scientists. Yes and no. The difference is that scientists must take it seriously, because without it the whole endeavour is pointless.

No ‘white lies’ are possible in science. We cannot make an exception and suspend our objectivity, even for a second, because we are forging chains of logic and if even one link is missing or ‘weak’, the whole thing falls apart. We might fool ourselves, but we cannot fool Mother Nature, who doesn’t care whether we manage to uncover her secrets.

Honesty and integrity are only prerequisites for science. They are not the method itself. The method starts with observation. Not just looking but seeing.

If you look at the stars on a clear night, your first impression is randomness. Thousands and thousands of pinpricks of light scattered all over the sky. After a while, you begin to see structure. The bright band of the Milky Way should be the first thing you notice. Then, if you have the time and imagination, you might look for some way to remember the shape of star-groups. Maybe they remind you of something: a cart, a bear, a dragon…. Eventually, you might realize that the arrangement of stars within a group and the relation of the groups among themselves never change -- except for half a dozen ‘stars’ that seem to wander all over the place, moving forward for a while, then stopping and moving backwards, then forward again.

It takes time to observe nature and find patterns that are not immediately obvious. Once you have observed something, you have to be able to describe it to others, which means you have to be able to describe it to yourself first. You give names to what you observed to save time in both thought and communication. The fixed lights, you call stars; the wandering few you call planets (‘planetes’ means ‘wanderer’ in Greek). Inventing names and explaining what they mean is called ‘definition’.

And that is where mere curiosity turns into science. Of course scientists don’t just sit down and make up definitions before doing anything else – it doesn’t work that way. We must have some ideas about the attributes of matter and the processes that we want to study. Often, we understand something intuitively before we are able to give precise definitions. ‘Force’ and ‘mass’ in Newtonian mechanics were described before they were defined. Some concepts, like ‘space’ and ‘time’, we can never define precisely.

Science is not a linear process, leading from ‘A’ to ‘B’. It is best described as ‘iterative’: getting closer and closer with every pass we make at it. We gain depth in the process and our definitions and statements become more and more precise. A common error made by undergraduate students is the attempt to treat physics like mathematics. You can’t because it isn’t! Mathematics is a logical, self-consistent creation of the human mind. It does not need to be tested against messy, approximate reality. Physics, by necessity, is imprecise. How do we know where an object’s boundaries are when, with enough magnification, everything dissolves into whirling, colliding atoms and molecules that move in and out of macroscopic objects?

Eventually things are defined; theories are developed and tested against reality. A new theory will have to be built on top of a self-consistent network of known facts and already proven theories that are connected to basic experiments, axioms and principles. This is the existing body of science that, to the best of our current knowledge, does not contain contradictions. This knowledge-base is the result of thousands of years of curiosity, passion, determination. It contains knowledge gained in every era and location through history: the Babylonians, the Egyptians, the Greeks, the Chinese, the Arabs, the Europeans, the Russians, the Americans all added data that became ‘integrated’ along the way.

Integration is a process that constantly compares statements in the knowledge-base for consistency and agreement. Whenever a contradiction is found, every effort is made to resolve it. Contradiction in science is like poison in the human body – it has to be expelled for the body to survive. Even if all the experiments performed so far are in agreement with the theory, scientists must be prepared to discard or modify the theory at any future time, should contradictory evidence surface in further research.

Once we have a knowledge-base, we need tools to expand it through meticulous observation. The primary tool science uses for learning new things about the physical world is experimentation. With experiments we collect facts and identify exactly what we know and how we know it. Experiments must be repeatable and consistent, publicly demonstrated, and the resulting data freely available to anyone. And, as many teachers and books point out: no measurement is valid without a statement of its precision and margin of error.
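That last point can be made concrete with a short sketch. The pendulum readings below are hypothetical; the point is that a set of repeated measurements is reported as its mean together with the standard error of that mean.

```python
import math

# Hypothetical repeated measurements of a pendulum's period, in seconds.
measurements = [2.01, 1.98, 2.03, 2.00, 1.97, 2.02]

n = len(measurements)
mean = sum(measurements) / n
# Sample variance with Bessel's correction (divide by n - 1).
variance = sum((x - mean) ** 2 for x in measurements) / (n - 1)
std_dev = math.sqrt(variance)
# Standard error of the mean: how precisely the mean itself is known.
std_error = std_dev / math.sqrt(n)

print(f"period = {mean:.3f} +/- {std_error:.3f} s")
```

Quoting the result as “2.002 ± 0.009 s” rather than a bare number is exactly the discipline the paragraph above demands.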

An integral part of scientific experiments is called ‘reduction’. We try to determine which parameters play a role in a process we want to study. Once we have an idea about those attributes that affect the process, we have to set up an experimental situation wherein we can change any one parameter and see what happens to the rest.

A good example is studying the relationship between the pressure, volume and temperature of a gas. After many experiments, performed by Robert Boyle in 1661 and Joseph Louis Gay-Lussac in 1802, the “Ideal Gas Law” was discovered: the relationship between the temperature (T), pressure (P) and volume (V) of a body of gas is such that PV/T remains constant (with a value fixed by the amount and kind of gas). Which means that:

• If we keep the temperature unchanged and decrease the volume, then the pressure has to increase (P and V are inversely proportional).

• If we keep the pressure unchanged and increase the temperature, then the volume has to increase as well (V and T are directly proportional).

• If we keep the volume unchanged and increase the temperature, then the pressure has to increase as well (P and T are directly proportional).
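These three statements can be checked with a small sketch, assuming one mole of an ideal gas and illustrative values for volume and temperature:

```python
# Sketch: the Ideal Gas Law, PV/T = constant (here nR for one mole).
R = 8.314       # gas constant, J/(mol K)
n = 1.0         # amount of gas, mol

def pressure(volume, temperature):
    """P = nRT / V for an ideal gas (volume in m^3, temperature in K)."""
    return n * R * temperature / volume

# Fix T, halve V: the pressure doubles (Boyle's law).
p1 = pressure(0.024, 300.0)
p2 = pressure(0.012, 300.0)
assert abs(p2 / p1 - 2.0) < 1e-9

# Fix V, double T: the pressure doubles (Gay-Lussac's law).
p3 = pressure(0.024, 600.0)
assert abs(p3 / p1 - 2.0) < 1e-9
```

Changing one parameter while holding the others fixed, as in the two checks above, is precisely the ‘reduction’ described in the next paragraph.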

While collecting data on a given subject, scientists use both pattern-recognition techniques and imagination to find relationships and cause-and-effect links among these facts. This is usually done by trying out different models (hypotheses, theories) that could explain the experimental results and not contradict anything we know.

Then the scientist is ready to try to develop a theory that will explain the collected data, based on the existing knowledge-base of science. The theory is usually built on an assumption or hypothesis that appears reasonable, in view of all the known facts. Quite often it states this assumption as a hypothetical law or laws. Some of the best known theories in physics are:

• The Heliocentric Universe theory, suggested first by Aristarchus in ancient Greece, followed by Copernicus in 1514 and then embraced by Kepler, Galileo and Newton. Galileo got into trouble with the church because he had stated it as a proven theory instead of as a hypothesis.

• Newton’s three Laws of Mechanics, coupled with the new mathematical tools he invented, could be used to deduce results in perfect agreement with all known facts at the time.

• Maxwell’s Electromagnetic Field Theory offered a set of equations that could be used to calculate all known electromagnetic phenomena. It also predicted phenomena (like radio waves) that were confirmed only later.

• Einstein’s Special Theory of Relativity was based on a few philosophical assumptions that explained many unresolved problems and perplexing experimental results. Its predictions have been experimentally confirmed since.

• Einstein’s General Theory of Relativity stated a number of equations that explained anomalies in Mercury’s orbit, suggested a new view of gravity and correctly predicted that the sun’s gravitational field bends light.
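That last prediction can be checked with a back-of-the-envelope calculation, using the standard general-relativistic deflection formula alpha = 4GM/(c^2 R) for a ray grazing the sun’s edge (the constants below are standard reference values):

```python
import math

# Deflection of starlight grazing the Sun: alpha = 4GM / (c^2 R).
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30   # solar mass, kg
c = 2.998e8        # speed of light, m/s
R_sun = 6.957e8    # solar radius, m

alpha_rad = 4 * G * M_sun / (c**2 * R_sun)       # deflection in radians
alpha_arcsec = math.degrees(alpha_rad) * 3600    # convert to arcseconds

print(f"deflection = {alpha_arcsec:.2f} arcseconds")
```

The result, about 1.75 arcseconds, is the value Eddington’s 1919 eclipse expedition set out to measure.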

• Heisenberg’s Uncertainty Principle states that in the world of atoms, we can never measure everything to an arbitrary degree of precision. For example, the more precisely we measure the location of an electron, the less precisely we will be able to measure its momentum (mass times velocity) and vice versa.
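As a rough numerical illustration of the relation dx * dp >= hbar / 2 (the confinement distance below is a hypothetical choice, roughly one atomic diameter):

```python
# Sketch: minimum uncertainties from Heisenberg's relation dx * dp >= hbar / 2.
hbar = 1.055e-34   # reduced Planck constant, J s
m_e = 9.109e-31    # electron mass, kg

dx = 1e-10                     # confine an electron to ~1 atomic diameter, m
dp_min = hbar / (2 * dx)       # minimum momentum uncertainty, kg m/s
dv_min = dp_min / m_e          # corresponding velocity uncertainty, m/s

print(f"dp >= {dp_min:.2e} kg m/s, dv >= {dv_min:.2e} m/s")
```

The velocity uncertainty comes out to several hundred kilometres per second: pin the electron down to atomic dimensions and you lose almost all knowledge of how fast it moves.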

Actually, Albert Einstein and Niels Bohr spent over 25 years arguing about the Uncertainty Principle. Einstein would dream up thought-experiments to prove that the theory was incorrect and Bohr would prove Einstein wrong every time. Until, one day, at the 1930 Solvay conference, Einstein managed to come up with one example that completely baffled Bohr, who went to bed and spent a sleepless night trying to find a way to prove Einstein wrong, yet one more time.

Next morning they met at breakfast, each with a huge grin on his face. Einstein was convinced that he had finally defeated Bohr. Bohr, on the other hand, had found Einstein’s mistake: Albert had overlooked just one thing, the effect of his own General Theory of Relativity. There was much merriment around the table that morning!

All the experiments performed since then have come down on Bohr’s side. The Uncertainty Principle – as of this moment – is considered to be proven beyond any doubt.

Many other theories have been proposed by physicists during the last 400 years, some fundamental, others minor; some proved correct, many turned out to be wrong. To prove a theory, we need deductive logic. By applying mathematical tools, we logically deduce the consequences of the new theory and make predictions. We can test these predictions against existing experimental data, or perform new experiments to verify the results of the calculation.

Once these deductions are tested and their accuracy demonstrated, the theory is considered to be proven.

For example, the “Kinetic Theory of Gases” assumes that gases are made up of atoms or molecules in random motion inside a container. The pressure of the gas is due to the force exerted by the molecules hitting the walls of the container and the temperature is a measure of the average kinetic energy of the molecules. If we apply Newton’s Laws to this model, we can deduce the experimentally obtained “Ideal Gas Law”, so the theory is proven within its limits.
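A minimal numeric sketch of that deduction, using the kinetic-theory pressure formula P = N m &lt;v^2&gt; / (3V) together with the average kinetic energy (3/2)kT per molecule (the molecular mass and conditions are illustrative values for nitrogen at room temperature):

```python
# Sketch: kinetic theory reproduces the Ideal Gas Law, PV = NkT.
k_B = 1.381e-23    # Boltzmann constant, J/K
N = 6.022e23       # number of molecules (one mole)
m = 4.65e-26       # mass of one N2 molecule, kg
T = 300.0          # temperature, K
V = 0.024          # volume, m^3

# Mean squared speed from <(1/2) m v^2> = (3/2) k T.
v_sq = 3 * k_B * T / m

P_kinetic = N * m * v_sq / (3 * V)   # pressure from molecules hitting the walls
P_ideal = N * k_B * T / V            # pressure from the Ideal Gas Law

print(P_kinetic, P_ideal)  # the two agree, up to rounding
```

The agreement is of course built into the algebra; the point is that a mechanical model plus Newton’s Laws yields the same law that Boyle and Gay-Lussac found empirically.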

Luckily, we don’t always have to do a lot of math before realizing that a theory is wrong. A common mistake many young physicists make is taking mathematical deductions too seriously. I don’t mean that math can be sloppy and incorrect - far from it. However, as experienced physicists will tell you, it is possible to think in terms of critical and determining variables, and see how they stand up in the new theory. Much deductive work can be saved if one can spot a show-stopper at the beginning: thinking ‘physically’ before thinking ‘mathematically’.

The theories we make up will be judged by what is called “Ockham’s razor”. William of Ockham (1280-1349) laid down the rule that “entities must not needlessly be multiplied”: between two theories that fit all observed facts, we accept the one that requires the fewer or simpler assumptions. This does not mean that the simplest explanation is always correct. Remember: all observed facts must be taken into account!

The famous quote from Albert Einstein is appropriate here: “The most incomprehensible thing about the universe is that it is comprehensible”.

Physicists like this elegance and simplicity of nature. They like it so much that they have been pursuing the Holy Grail of Physics for over a century: the “Unified Field Theory”. They would like to come up with one theory that explains absolutely everything! A tall order indeed, but there is cause for optimism: as the science of physics progressed over the decades, more and more phenomena that seemed to have nothing in common were proven to be manifestations of the same thing.

For example, electricity and magnetism, once considered totally different areas of physics, were unified by James Clerk Maxwell in 1873 under the “Electromagnetic Field Theory”. Many advances on the ‘unification front’ have been made since then and physicists still hope that one day they will have one equation that explains the universe!
Published on June 13, 2015 15:40