R.P. Nettelhorst's Blog, page 117
May 2, 2013
Vasquez Rocks
One of my favorite hiking spots is Vasquez Rocks, a 905-acre county park about halfway between Lancaster and Santa Clarita, just off the Antelope Valley Freeway. Crisscrossed with trails, the park sits right on top of the San Andreas Fault. The rocks of the name are gigantic jagged plates of sedimentary rock thrusting upward at about a 45-degree angle, like the sails of old clipper ships rising from the desert floor. Eroded by years of rain and sand, the rocks are made of parallel multi-hued bands of browns, yellows and reds. People climbing them look like ants scrambling upon their anthills.
My wife and I, sometimes alone, sometimes with our children and sometimes with our friends, will spend a few hours hiking around the park. We’ll pause at the babbling creek that runs in the gully, where we’ll sit beneath a shady tree and eat our lunch. In the spring, the grass is green and flowers bloom; there are pepper trees, yucca bushes, and juniper trees everywhere. The aroma of flowers and greenery fills the air, while birds twitter and butterflies flit.
On occasion, we’ll hike up to the top of those perilous rock outcroppings. One doesn’t need to know how to climb mountains to reach their peaks. No ropes or special shoes are required. It can be a bit intimidating for novices, but the view from the top is magnificent. It isn’t a spot for toddlers; there are no guardrails or fences to keep someone from falling.
In two weeks or so, we’re planning on hiking there again.
The Vasquez Rocks were named not for a famous explorer or politician. Instead, they were named after a notorious outlaw: Tiburcio Vasquez. He used the rocks to elude capture by the police—at least for a while—back around 1874.
Born in 1835, Vasquez had by 1852 fallen under the influence of Anastacio Garcia, one of California’s most dangerous bandits. He was there when Garcia killed the Monterey constable, William Hardmount. Nor would Hardmount be the last killing Vasquez had a hand in.
Though Vasquez liked to claim he was a defender of Mexican-American rights, he was, in fact, far more bandit than revolutionary. By 1856 he had been convicted of horse rustling and spent five years behind bars at San Quentin, where he participated in four prison breaks that left twenty other convicts dead.
After his release from San Quentin, he soon returned to crime, committing numerous burglaries, cattle thefts, and highway robberies. At the same time, he was a notorious womanizer, carrying on multiple affairs with women regardless of their marital status. After he and his gang stole 2200 dollars from Snyder’s Store in San Benito County—killing three bystanders in the process—his days were numbered. Posses began seriously hunting for him after a reward of a thousand dollars was posted for his capture. He continued robbing stores on a regular basis. He and his gang sacked the town of Kingston in Fresno County and robbed all the businesses there, making off with 2500 dollars in cash and jewelry. The governor of California, Newton Booth, was authorized by the California State Legislature to spend up to 15,000 dollars to bring Vasquez to justice. In January 1874, Booth offered 3000 dollars if he was brought in alive and 2000 dollars if he was brought in dead. The rewards were increased to 8000 dollars and 6000 dollars respectively by February.
Vasquez was finally captured in late 1874. After a four-day trial in January 1875, he was sentenced to be hanged. Visitors flocked to his jail cell after his conviction, many of them women. He signed autographs and posed for photographs while he awaited his execution, which took place barely two months later, in March 1875. He was only 39 years old.
Even if you haven’t visited Southern California, you’ve seen Vasquez Rocks—assuming you’ve watched any TV or movies during the last seventy years. Some of the better-known movies in which the Vasquez Rocks have made an appearance include the film Dracula in 1931, Blazing Saddles in 1974, Star Trek IV in 1986, The Flintstones in 1994, and the Star Trek reboot of 2009. Some of the television series that have used Vasquez Rocks as a set include CSI: Crime Scene Investigation, The Twilight Zone, Mission: Impossible, Have Gun—Will Travel, and Maverick. And perhaps most frequently, the rocks have been used for alien planets in episodes from all the various versions of Star Trek: from the original series in the 1960s to Enterprise in the 2000s. In fact, the rocks have appeared in over seventy television series and over forty movies, serving as backdrops for various westerns, for the town of Bedrock in the Flintstones movies, and for Vulcan, the homeworld of Mr. Spock from Star Trek. Captain Kirk fought a lizard-like Gorn there, defeating him with a well-placed blast from a homemade cannon.
May 1, 2013
Transistor
A bit more than fifty years ago, transistor radios were the newest thing: the first of them, the Regency TR-1, went on sale in 1954, and Sony’s pocket-sized models soon made them ubiquitous. At the time, and for several years thereafter, claiming a new device had transistors was used as a marketing ploy, much as high definition is now. On the outside of the case, most new radios would boast how many transistors they had on the inside. I recall, back when I was a kid, my dad gave me such a transistor radio. I loved that thing. Besides picking up the normal AM and FM signals, it could also pick up shortwave radio broadcasts. And on the outside of the case, in rather large letters, it announced that it had “8 Transistors.”
Those of us born a more comfortable distance from the apocalypse (to borrow a phrase from the comedian Emo Philips) may remember the old glowing vacuum tubes that used to fill radios and televisions. Periodically, when the TV or radio stopped working, we’d open up the back of the machine and peer inside for burned-out tubes, which we’d then take down to the grocery store and plug into a big metal tester they had there to see whether the tubes really were bad. If we were lucky, we’d find the blown tube and then study the numbers on it so we could grab a replacement. What did those tubes do in our old TVs and radios? They were used to amplify, switch, or otherwise modify or create electrical signals by controlling the movement of electrons. More simply, they were what made our magic boxes work. At most, our old radios and televisions, which hummed and took minutes to warm up, might have a dozen tubes stuffed inside them. Computers from the era were rare, noisy, enormous and unreliable. The first computers, such as the Colossus from World War II, ran on well over a thousand vacuum tubes that were constantly burning out and needing to be replaced. Operators were lucky if a computer could actually do anything useful between breakdowns.
Even so, vacuum tubes were critical to the development of electronic technology, which drove the expansion and commercialization of radio broadcasting, television, radar, sound reproduction, large telephone networks, analog and digital computers, and industrial process control. Some of those applications pre-dated electronics, but it was the vacuum tube that made them widespread and practical.
The introduction of transistors was transformative in the realm of all things electronic. Transistors do the same job that all those old tubes did, but they are smaller, use less electricity, and cost far less to manufacture; they don’t hum, glow or need to warm up. More importantly, they don’t burn out. That’s why transistors replaced the vacuum tube in virtually all electronic devices within just a couple of decades of their invention.
The transistor was invented by several people working together at AT&T’s Bell Labs in 1947. John Bardeen, Walter Brattain, and William Shockley were instrumental in making it practical. The first silicon transistor was then produced by Texas Instruments in 1954, and commercialization of the device soon followed.
The transistor is one of the most important inventions of the twentieth century. Today, transistors are ubiquitous. No marketing department would ever think to emblazon the fact that a new device has transistors in it. We take them entirely for granted. If you were to open up an old transistor radio, you could see the transistors soldered onto the circuit board. But if you were to open up any modern gadget, you wouldn’t be able to locate the transistors unless you had a high-powered electron microscope. And you would be hard pressed, even then, to count them all.
A modern computer has a central processing unit, typically made by either Intel or AMD. Intel’s first central processing unit for microcomputers was released in 1971. It contained 2300 transistors. When Intel first released the Core i7 processors in 2008, they had 731 million transistors on them. And the processors containing those millions of transistors were only about an inch and a half on a side, and maybe an eighth of an inch thick. The next generation of Core i7 processors that followed had well over two billion transistors. And so it goes.
The number of transistors on a processor chip, packed into the same amount of space, doubles about every eighteen to twenty-four months. This is known as Moore’s Law, named after the cofounder of Intel, Gordon Moore. He first made the observation in the April 19, 1965 issue of Electronics magazine. Intel has kept up that doubling of transistors on chips for the last forty years with no sign of slowing anytime soon.
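It’s easy to see how relentless that doubling is with a little arithmetic. Here is a rough Python sketch, not a precise model—real chips have wandered around the trend—that starts from the 2300 transistors on Intel’s first microprocessor in 1971 and doubles the count every two years:

```python
# A rough sketch of Moore's Law: start from the 2,300 transistors on
# Intel's first microprocessor (1971) and double every two years.
# The base year, base count, and doubling period are the only inputs.

def transistors(year, base_year=1971, base_count=2300, doubling_years=2):
    """Project a transistor count by doubling every `doubling_years`."""
    doublings = (year - base_year) / doubling_years
    return base_count * 2 ** doublings

for year in (1971, 1991, 2011):
    print(year, round(transistors(year)))
```

Run it and the 2011 projection lands around 2.4 billion transistors—the same neighborhood as the actual chips of that era. Forty years of doubling turns 2300 into billions.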
As if that weren’t enough, the cost to the consumer for computer chips has dropped rapidly every year. In fact, if the price of automobiles followed the same pattern and percentages as the cost of transistors, cars would today cost less than the gasoline used to fill their tanks.
In 2008 about ten billion computer processor chips were manufactured. Every year, that number grows.
Every television, every cell phone, every microwave, and every automobile sold today has thousands, if not millions of transistors inside of them. In fact well over sixty million transistors are manufactured for each man, woman and child on Earth every single year. Our current civilization, as it functions today, simply could not exist without the transistor.
April 30, 2013
Titan
Titan is Saturn’s largest moon. If you go outside some evening now, you’ll be able to see Saturn, looking like a very bright yellow star. If you have a small telescope, you should have little trouble seeing its rings and its largest moon, which now, thanks to the space probe Cassini and its lander, Huygens, is just a little less mysterious than it used to be.
The sky and surface of Titan seem to be mostly colored in various shades of orange, at least based on the early photographs returned by Huygens, the lander that the space probe Cassini (still orbiting Saturn and still sending back data) dropped on Titan. By the clocks here in California, early on the morning of January 14, 2005, that Volkswagen-sized spaceship blasted through the dense atmosphere of Titan, slowed to subsonic speed, dumped its heat shield, popped a parachute, and floated gently down onto mud.
But it was not ordinary mud made of dirt and water. Instead, it was mud made of a mixture of dirt and liquid methane and ethane: something like liquid natural gas. The temperatures outside were far from Earth normal, as well. The thermometer onboard registered a chilly 290 degrees below zero Fahrenheit. That’s colder than the coldest temperature ever recorded on Earth, at Vostok in Antarctica on July 21, 1983. By comparison, Vostok was only a balmy 129 degrees below zero.
Near Huygens’ landing spot, a lake of methane gently sloshed in the chilly breeze. Apparently there are rivers of the stuff, too, washing down from nearby mountains.
And it’s not that Titan has no water. In fact, it has quite a lot. But it is all frozen solid, hard as granite. The “stones” visible in some of the early photographs are thus not made of rock at all. They’re just dusty ice cubes.
An airplane would have no trouble flying on Titan. Its air is about fifty percent thicker than the atmosphere on Earth at sea level, but it is all smog. So whereas a jet here carries fuel and then sucks oxygen in through its scoops to make the fuel burn, an airplane on Titan would have to carry oxygen and then suck in the fuel from the atmosphere!
The Huygens space probe that landed only had enough battery power to survive for about half an hour. But in that brief time, it relayed back close to 400 photographs, along with many other sensor readings of the surface. Meanwhile, its mother ship, the Cassini space probe, continued its orbit around Saturn, where it continues to take pictures of Saturn and its satellites, now more than eight years later.
The Huygens lander was named after Christiaan Huygens, the Dutch astronomer who first discovered Titan. Built by the European Space Agency, it hitched a ride on Cassini for seven years. It was a billion-mile trip. Huygens and Cassini are now so far away from Earth that it takes their radio transmissions over an hour to get here, even traveling at the speed of light. The speed of light is about 186,000 miles per second. At that speed, it would take you less time than an eyelid flutter to go from Los Angeles to New York and back. Therefore, everything that Huygens did—the entry into the atmosphere, its landing, its sending of signals back to Earth—was fully automated. Its battery had in fact already gone dead by the time Earth received its first signals.
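The signal delay is simple arithmetic. Here is a back-of-the-envelope Python check using round numbers: light at about 186,000 miles per second, and a straight-line distance to Saturn that varies from roughly 800 million to a billion miles depending on where the two planets are in their orbits (those distance figures are approximations, not ephemeris values):

```python
# Back-of-the-envelope signal delay from Saturn: divide distance by
# the speed of light. The straight-line Earth-to-Saturn distance
# varies with the planets' positions; two round figures bracket it.

SPEED_OF_LIGHT = 186_000  # miles per second, approximate

for distance in (800_000_000, 1_000_000_000):  # miles, rough range
    minutes = distance / SPEED_OF_LIGHT / 60
    print(f"{distance:,} miles -> about {minutes:.0f} minutes one way")
```

Either way the answer comes out well over an hour one way, which is why nothing about the landing could be steered from Earth.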
The mother ship, Cassini, was named after the Italian-French astronomer Giovanni Cassini, who is also known as Jean Dominique Cassini. He observed the planet so closely that he noticed a gap inside Saturn’s rings and discovered there were actually two main rings. The gap between them is still called the “Cassini Division.”
The Cassini spaceship is huge. In fact, it is one of the largest interplanetary spacecraft ever built, and the third heaviest unmanned spacecraft ever launched into space. It is about the same size as a thirty-passenger school bus and weighs close to 6 tons. It’s way bigger than the Curiosity rover currently tooling about Mars.
Cassini has twelve high-tech instruments capable of twenty-seven different science investigations. To operate them, the spacecraft has an elaborate electronic system that consists of more than seven and a half miles of cabling, some 20,000 wire connections and 1,630 interconnect circuits. It was built by the wizards at NASA’s Jet Propulsion Laboratory in Pasadena.
For constantly updated information and some truly spectacular pictures, check out the official Cassini-Huygens home page at saturn.jpl.nasa.gov, operated by NASA. And don’t put www in front of that address; it won’t work if you do.
April 29, 2013
Tipler
Back in 1994, Frank J. Tipler, a professor of mathematical physics at Tulane University in New Orleans, published a book, The Physics of Immortality, in which he argued that immortality and the resurrection of the dead were consistent with the known laws of physics. He argued that intelligent species would come to fill the universe and would, at the end of time, become what he called the Omega Point, which he identified as God.
When he wrote The Physics of Immortality, Tipler was an agnostic. But by 2007, he had converted to Christianity. Since then he has published a book entitled The Physics of Christianity, in which he argues that Christian theology is consistent with the laws of physics and that everything from the virgin birth to Jesus’ resurrection can be proven scientifically.
As I read his latest book, I wondered who exactly it would appeal to. Certainly agnostics and atheists will not like it any more than most of them liked his first book. But most Christians will be made uncomfortable by what he says, too. The book begins with an overview of modern physics, which non-physicists may find hard to understand. Nevertheless, I found the book fascinating.
Regarding his first book, The Physics of Immortality, the physicist George Ellis didn’t like it at all. When he reviewed it in the journal Nature, he wrote that it was “a masterpiece of pseudoscience … the product of a fertile and creative imagination unhampered by the normal constraints of scientific and philosophical discipline.” Another scientist, Michael Shermer, devoted a chapter of Why People Believe Weird Things to enumerating the flaws he perceived in Tipler’s thesis.
On the other hand, the Oxford physicist David Deutsch, who pioneered the field of quantum computers, finds Tipler’s arguments compelling enough that he incorporated his Omega Point concept as a central feature of his “four strands” Theory of Everything that he outlined in his 1997 book, The Fabric of Reality.
Tipler is probably best known for a generally well-received book he wrote with John D. Barrow in 1986 called The Anthropic Cosmological Principle, with a foreword by John A. Wheeler. In it, he and Barrow review the intellectual history of teleology and the large number of physical coincidences which allow sapient life to exist.
What is the anthropic principle? There are two basic forms of it, called the weak anthropic principle and the strong anthropic principle. The weak anthropic principle goes as follows (according to Barrow and Tipler): “The observed values of all physical and cosmological quantities are not equally probable but they take on values restricted by the requirement that there exist sites where carbon-based life can evolve and by the requirements that the Universe be old enough for it to have already done so.”
The strong anthropic principle argues that “the Universe must have those properties which allow life to develop within it at some stage in its history” and “there exists one possible universe ‘designed’ with the goal of generating and sustaining ‘observers.’” It implies, therefore, that the purpose of the universe is to give rise to intelligent life, with the laws of physics and the fundamental constants set so as to ensure that life as we know it will emerge.
What are the fundamental constants in question? The nuclear strong force holds together the particles in the nucleus of an atom. If the strong force were only a percent or two stronger or weaker, the universe wouldn’t have the heavier elements in it, such as the carbon and iron necessary for life. Likewise, if the nuclear weak force were slightly stronger or weaker, the heavier elements wouldn’t exist. The force of gravity is another constant that affects the interaction of particles, and again, if its strength were more or less than it is, the universe would not be conducive to life. The same can be said of electromagnetism.
These and other examples are often given as evidence of the universe being fine-tuned.
Paul Davies discussed the universe’s fine-tuning at length in his book The Goldilocks Enigma (published in 2006). He summarizes in detail the current state of the debate over how fine-tuned the universe must be and discusses the question of how this fine-tuning is to be understood. He gives several possible interpretations of what we see in nature. First, it could be that the universe is absurd: it just happens to be this way. We were lucky. Second, it could be that there is something in the laws of physics which necessitates the universe being the way it is: that simply having a universe means the strengths and ratios of the various underlying constants can be only the way they are and no other way. Third, perhaps there are many universes which have any and all possible characteristics, so that we just naturally find ourselves in one of the ones that supports life and consciousness. Or fourth, perhaps an intelligent creator designed the universe specifically to support life and the emergence of intelligence.
These four understandings of the nature of the fine-tuning of our universe are not necessarily mutually exclusive. Although Frank J. Tipler concludes that the fourth possibility is the true one, he also accepts both the second and third interpretations as valid.
Tipler’s books (and the others) are thought-provoking. If you don’t mind having your mind stretched, you might enjoy reading them, whether you fully—or even at all—accept any of the conclusions.
April 28, 2013
Time
We’re more than ten years past our fears of Y2K and more than ten percent of the twenty-first century is now history.
As long as there have been human beings, they have kept track of the time. Through most of human history, that meant paying attention to the passage of the seasons so that crops could be put into the ground at the right time. Paying attention to months and days was secondary to that, and certainly paying attention to smaller fragments of a day came rather late in human history. The concept of punctuality as a virtue arrived only after the invention of the clock in the late 1200s. It wasn’t until the late 1400s that it became common for clocks to indicate minutes and even later for them to commonly keep track of seconds. The stimulus for accurate clocks was their value in navigation. The position of a ship at sea could be determined with reasonable accuracy if a navigator could refer to a clock that lost or gained less than about ten seconds per day. Such a level of accuracy was not achieved until 1761.
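That ten-seconds-per-day figure maps directly onto distance. The Earth turns 360 degrees every 24 hours, so when a navigator compares local noon against the shipboard clock, every second of clock error becomes a fixed error in longitude. A small Python sketch of the conversion (the 60-nautical-miles-per-degree figure applies at the equator; the error shrinks toward the poles):

```python
# Convert a clock error into a longitude error at sea. The Earth
# rotates 360 degrees in 86,400 seconds, and at the equator one
# degree of longitude spans about 60 nautical miles.

SECONDS_PER_DAY = 86_400
DEG_PER_SECOND = 360 / SECONDS_PER_DAY   # Earth's rotation rate
NMI_PER_DEGREE = 60                      # longitude, at the equator

def position_error_nmi(clock_error_seconds):
    """Longitude error (nautical miles, at the equator) from clock drift."""
    return clock_error_seconds * DEG_PER_SECOND * NMI_PER_DEGREE

print(position_error_nmi(10))  # ten seconds of drift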
Only as timepieces became common—and really, only with the introduction of the industrial revolution, with factories and hourly wages and the like—did the concept of punctuality really take hold in western thought.
For most of human history, keeping track of the years was done by counting them from the time the current king had taken the throne. So a date would be given as, “In the second year of King Darius, on the first day of the sixth month.” Such a date would have meaning only within the lands ruled by that king. The dates for a neighboring kingdom would be given in terms of that monarch’s reign. For historians, trying to figure out when something happened according to the calendar we use today is not easy, and the further back in time, the harder it becomes, with some dates, even of important, well-known events, having margins of error that can be measured in decades, if not in centuries.
Our current method of keeping track of time, with a twelve-month calendar, a seven-day week, and years counted from the approximate date of Jesus’ birth, goes back to Dionysius Exiguus, who devised our present way of numbering the years. At that time, the Diocletian Era was used for calculating when Easter should be celebrated. But Diocletian had persecuted Christians, and so Dionysius Exiguus wanted to replace that system for determining the date of Easter.
The last year of the old system for determining when to celebrate Easter was Diocletian 247. The first year of Dionysius Exiguus’ new system started the next year, AD 532. AD is an abbreviation for the Latin phrase Anno Domini, short for Anno Domini Nostri Jesu Christi, “In the year of Our Lord Jesus Christ.” Dionysius Exiguus’ new system of dating was only very slowly adopted. The Anglo-Saxon historian known as the Venerable Bede used the AD system for his Ecclesiastical History of the English People, which he finished in 731.
The AD system of dating was then endorsed by the Emperor Charlemagne (reigned 768-814) and his successors, which popularized the use of the dating system, at least within the Carolingian Empire (roughly corresponding to modern France and Germany). The popes in Rome continued to date documents according to their regnal years for quite some time, though the use of AD gradually became more common in Roman Catholic countries between the eleventh and fourteenth centuries. Portugal was the last Catholic nation to switch to the AD system of dating; it did so in 1422. Eastern Orthodox countries only began to adopt AD in place of the old Byzantine calendar in 1700, when Russia switched to the AD system of dating. The old Byzantine calendar had dated things from the supposed date of the creation of the world on September 1, 5509 BC.
By the nineteenth century, most nations on earth were using the AD system, though many continue to use alternate systems in addition to it. So, for instance, all Moslem nations date things according to the standard Moslem calendar, which dates years from the Hijra, the emigration of Muhammad from Mecca to Medina. Thus, the current Islamic year is 1434 AH (After Hijra), which goes from the evening of November 14, 2012 to the evening of November 14, 2013. The Islamic calendar is a lunar calendar system, in contrast to the AD system, which is a solar calendar. Likewise, the Hebrew calendar, used today in Israel alongside the standard AD system, is a lunar calendar and, like the old Byzantine Calendar, dates from a supposed date for the creation of the world—one placed later than that of the Byzantine system. According to the Hebrew calendar, this is the year 5773, which began at sundown on the evening of September 16, 2012 and will last until sundown on September 4, 2013. The Hebrew calendar is brought back into sync with the solar year by adding an extra month seven times in every nineteen-year cycle; the Islamic calendar adds no such months, which is why its dates slowly drift through the seasons.
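The arithmetic behind those extra months is straightforward: twelve lunar months fall about eleven days short of a solar year. A small Python sketch using approximate mean values (29.53 days per lunar month, 365.24 days per solar year) shows how quickly the shortfall piles up:

```python
# How far does a twelve-month lunar year fall behind the solar year?
# Using rough mean values for the synodic month and tropical year.

LUNAR_MONTH = 29.53   # days, mean synodic month (approximate)
SOLAR_YEAR = 365.24   # days, mean tropical year (approximate)

lunar_year = 12 * LUNAR_MONTH
gap_per_year = SOLAR_YEAR - lunar_year
years_per_leap_month = LUNAR_MONTH / gap_per_year

print(f"lunar year: {lunar_year:.2f} days")
print(f"falls behind by {gap_per_year:.2f} days per year")
print(f"one extra month needed about every {years_per_leap_month:.1f} years")
```

The gap works out to roughly one extra month needed every three years, which is why a lunisolar calendar such as the Hebrew one intercalates seven leap months over each nineteen-year cycle.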
April 27, 2013
The Pace of Writing
The United States has the highest worker productivity on the planet. That is, the average American laborer will produce more widgets, or process more paperwork, or cook more burgers, or design more aircraft than their equivalents elsewhere.
How does an author measure his or her productivity? One way, of course, would be to point to the number of books written, or the number of articles generated, or the number of short stories or movie scripts that have your byline.
But more often, if writers are talking amongst themselves, or neophytes are seeking insights from their already published brethren, the question comes down to the practical: what happens on a given work day? How many hours per week does an author write? And how does that translate into written documents? That is, what is considered normal in the writing world when it comes to daily page count, or word count?
Most authors, if they are full time, have a normal work day, like anyone else who works in an office. Although one can make one’s own hours, a schedule and rhythm are very helpful if you want to be a real writer and not just a hobbyist. Real writers, as the saying goes, write. More than that, they develop a certain amount of self-discipline. If you don’t have self-discipline, your chances of ever being successful are slim and none—far slimmer than they would be if you simply worked hard and regularly.
So, for myself, I work five days a week, for about eight hours every day, usually nine to five, with time off for lunch. On occasion—like the time I had to write two books in about four months—I wind up working far longer hours and my weekends become virtually indistinguishable from the rest of the week. During those sorts of marathon sessions, I forget what color the sun is and my children see me so seldom that they wonder “who is that strange hobo I just saw in the kitchen getting another cup of coffee?”
But, during the vast majority of my year, I am writing only forty hours per week. For almost a year, I was finding it very difficult to focus and accomplish anything during those hours, as my dysthymia was getting worse. But following diagnosis and the proper medication, my ability to concentrate has returned—and thus my productivity has gone back to where I expect it to be.
Some authors merely set for themselves the goal of spending a set number of hours each day sitting in front of their computer. Others set themselves word counts. For instance, the science fiction author John Scalzi tries to write a minimum of 2000 words per day. He has an amusing posting on his blog, Whatever, where he explains how his mind works in this regard. If he writes fewer than 1000 words, he tells himself that he is a toad who doesn’t deserve to eat. If he reaches his minimum goal of 2000, then he pats himself on the back and eats a donut. At 3000 words, he’s on fire. At 4000 words, he’s in danger of blowing a brain lobe. At 5000 words he’s reduced to a babbling idiot. He posted that the most words he ever wrote in a day was 14,000 and that he had to sleep for three days afterwards. I think the most I’ve ever written in a day is around 10,000 words. Most of the time I’m producing half that.
For myself, I prefer to set page count goals. I’m currently working on three novels: a new science fiction novel with the working title Cold; an old novel, Hacker’s Apprentice, that needed heavy rewriting; and a newer novel that I’m rewriting and expanding, with the working title Bent Anvil. For the last month I’ve been maintaining a pace of at least thirty pages per day—ten pages in each book—five days per week (for the books I’m rewriting, I often cover many more pages than that). I now estimate, barring any life crises, that I’ll have those books finished, to a level where I can let someone read them, around the end of summer.
For some authors, that sort of schedule would seem nightmarish. Others would wonder how I can be such a lazy goof-off. Isaac Asimov, the late science fiction author, worked upwards of ten hours a day—and he worked seven days a week. Consequently, he authored about 500 books in his lifetime, not counting all the short stories and magazine articles that he also wrote. Romance novelist Barbara Cartland holds the Guinness World Record for the most novels written in a single year: in 1983 she produced 23 of them! I’m a complete slacker—or to use Scalzi’s word, “toad”—compared to that.
And I have no desire to try to emulate that sort of productivity. I like to write well enough—it’s a wonderful job. But it’s still a job, and frankly, it’s not the only thing I like or want to do—any more than a ditch digger wants to spend more than 40 hours a week digging.
April 26, 2013
The Coin
Recently I went through a box of loose coins from my coin collection: coins I hadn’t had time to sort or organize up until that moment. One of the coins I chanced upon was a 1913 10 pfennig piece from Germany. It’s in Very Fine condition—VF-35 or so, for those readers who might be numismatists. Not worth very much. I found several examples of the same coin, with the same date and in similar condition, going for about a dollar on eBay.
What fascinated me about the coin, however, was not its rarity or lack thereof, but rather the moment in time it represented. Germany had become a unified nation barely four decades earlier, in 1871, when Bismarck had finally succeeded in unifying, sometimes by force of arms, the majority of the German states. Within a year of this 10 pfennig coin being issued, Germany would start the First World War—a war that would destroy both the empire and the world it knew, and set the stage for the coming of the Nazis.
The world that existed when this coin appeared was radically different from the one that existed barely a year later on July 28, 1914, when World War I began. Most intellectuals and leaders in 1913 believed that war was no longer even possible, that the complex economic relationships between the nations, the rule of law, and the advancement of technology had at last rendered war obsolete. The historian John Keegan wrote that just before war broke out, “Europe in the summer of 1914 enjoyed a peaceful productivity so dependent on international exchange and co-operation that a belief in the impossibility of a general war seemed the most conventional of wisdoms. In 1910 an analysis of prevailing economic interdependence, The Great Illusion, had become a best-seller; its author Norman Angell had demonstrated, to the satisfaction of almost all informed opinion, that the disruption of international credit inevitably to be caused by war would either deter its outbreak or bring it speedily to an end.” The world of 1913 was a marvelous place: prosperous, full of wonderful new technologies. Telegraph cables stretching across the oceans meant instant communication anywhere on the planet. The radio was making ocean travel safer; soon news and entertainment would be available anywhere day or night. The telephone allowed people on opposite sides of the country to converse as easily as if they were sitting side by side. The airplane, ten years old in 1913, was a wonder whose effect on the world could only be imagined.
The future seemed bright. Everything seemed possible. The Europeans were colonizing the world, bringing the benefits of modern life to the people of Asia and Africa. It seemed that nothing stood in the way of everlasting wealth and ever better lives: everyone would live in peace and harmony.
But within a year of the coin being issued, World War I brought the deaths of more than sixteen million people, military and civilian. It represented a loss of nearly two percent of the world’s population between 1914 and 1918. Between June 1917 and December 1920, somewhere between 50 and 100 million others would die from an influenza pandemic known as the Spanish Flu—one of the deadliest natural disasters in history. At least three percent of the human race died from the illness, while over twenty-eight percent of the world’s population was infected by it.
Eugenics, the belief that “undesirables” should be bred out of the human population, went mainstream, leading to the forced sterilizations of those deemed physically or mentally imperfect in both the United States and Europe. Anti-Semitism was promoted by such luminaries as Henry Ford, and it was regularly preached on the radio in both America and Europe.
A worldwide depression hit starting in 1929, with the collapse of the American stock market. Runaway inflation had already devastated Germany in the early 1920s: within months, a postage stamp that had cost a fraction of a German mark cost more than a million marks. The Nazis and other extremist parties seemed like reasonable options to a desperate population seeking a way out of its economic disaster.
A century now has passed since that ten pfennig coin was minted in the German Empire. The people who used that coin have all long since died. The Empire that minted the coin is history. The hopes and dreams of those who carried it in their pockets have long since turned to ash. Their future was nothing at all like they imagined or wanted.
On the other hand, their future was not all bleak. The century that saw two devastating world wars also saw women gaining the right to vote in the United States. It saw the rise of the civil rights movement, the marginalization of racism, and the discrediting of eugenics. It saw technological marvels that the people of 1913 couldn’t even have imagined, from improvements in transportation and communication, to medical advances, and revolutions in agriculture that have made food so cheap and abundant that the biggest problem facing people today is obesity rather than hunger. Barely five decades passed from the manufacture of that coin before people were walking on the moon. Motion pictures, once silent and black and white, had gained color and sound—and could be watched in the comfort of our living rooms.
So how do we expect our new century to unfold? As we hold a coin from 2013, how naïve are our hopes and fears about the next century? There will be wonders we can’t imagine, and probably—given history as our guide—horrors we can only dread. As Charles Dickens wrote in his tale about the French Revolution, A Tale of Two Cities, “it was the best of times, it was the worst of times.” His words apply to every era.
April 25, 2013
Tesla Coil
When I was growing up I came upon an old 1940 Popular Science magazine (and yes, it was old even when I was growing up). Inside, I read an article that gave detailed instructions on how to build something called a Tesla Coil. It described all the incredible things it could do, such as spitting out lightning-like sparks and making fluorescent tubes glow even when you were just holding them in your hands. And the article made it look very easy to build. You just needed a cardboard tube of the sort that you might find holding a roll of aluminum foil, a cigar box, a bunch of copper wire, and a transformer. Unfortunately, though cardboard tubes were easy enough for me to get hold of, cigar boxes were impossible—no one I knew smoked—and the other bits that were required, such as the copper wiring and the transformer, were beyond the financial resources of a twelve-year-old. Worse, I had no idea at the time where to go about finding transformers and some of the other electronic bits described in the instructions. Nevertheless, I periodically pulled the old magazine off my shelves and studied it, imagining that someday I would find a way to build such a thing.
Nikola Tesla, the inventor of the coil, is responsible for giving the world much more than a fascinating object for twelve-year-olds to fantasize about. Every time you flip a switch to send electricity to your lamp, TV or radio, it’s thanks to Nikola Tesla.
You see, Tesla’s patents and theoretical work formed the basis of modern alternating current power systems. Without him, there would be no 110 volt AC power in your home.
Born an ethnic Serb in the village of Smiljan, in what is now Croatia, on July 10, 1856, Tesla moved to Paris in 1882 to work as an engineer for the Continental Edison Company. Within two years he had moved to the United States, where Thomas Edison himself hired him, offering him 50,000 dollars to redesign Edison’s inefficient motors and generators. After doing the work, however, Edison refused to pay him, creating a permanent breach in their relationship.
So in 1886 Tesla formed his own company: Tesla Electric Light & Manufacturing. In 1887 he constructed the first brushless alternating current induction motor. Then in 1888 he began working with George Westinghouse, who listened to his ideas for a polyphase system allowing for the transmission of alternating current electricity over long distances—and more importantly, bankrolled it.
At the age of 35 in 1891 Tesla became a naturalized American citizen. His first patents concerning the polyphase power system were granted the next year. At the 1893 World’s Fair in Chicago, the World’s Columbian Exposition, Tesla and George Westinghouse introduced visitors to AC power by using it to illuminate the exposition. On display were Tesla’s fluorescent lamps and single node bulbs.
Edison, meanwhile, promoted the use of direct current (DC) for electric power distribution. The problem with DC distribution however, was that it worked only over very short distances—of no more than a mile—meaning that a generating plant had to be built within a mile of any customers. That would have made Edison’s system prohibitively expensive. With an AC system, in contrast, one power generating plant could supply electricity for hundreds of square miles.
Edison didn’t want to lose his customer base, however, and did everything he could to try to prevent the adoption of Westinghouse and Tesla’s idea, including publicly electrocuting animals with AC power to try to convince people that AC was too dangerous to use. He was also instrumental in inventing the electric chair for executions.
The conflict between the two almost drove both Edison and Westinghouse into bankruptcy. Tesla ultimately released Westinghouse from his contract so that Westinghouse would not have to keep paying royalties for the use of his patents. Tesla’s—and Westinghouse’s—AC power distribution system ultimately won out, and today the outlets in your house, and in houses around the world, carry AC power.
When Tesla was 41 years old, he filed the first radio patent (beating Marconi) and a year later demonstrated a radio-controlled boat to the US military. The same year, he also devised an “electric igniter” or spark plug for internal combustion engines. It is thanks to Tesla that we don’t have to turn a hand crank in order to start our cars.
Had Tesla not torn up his contract with Westinghouse, he would have become a billionaire. Instead, he sank into poverty and lived the last ten years of his life in a two-room suite on the thirty-third floor of the Hotel New Yorker. He died with significant debts on January 7, 1943. Later that year, the US Supreme Court upheld his radio patent in a ruling that served as the basis for patented radio technology in the United States. Of course, that was too late to help Tesla financially.
As to building my own Tesla Coil, I’ve gone back and looked up the instructions. The cost is minor—less than thirty dollars—for building a small one. But although I now know where to go about getting the parts I would need, I find that I no longer really have the time to devote to building one.
Of course, the kind of Tesla I’d really like to get now isn’t a coil, it’s a car. And it costs a whole lot more than a coil: the Tesla Model S.
April 24, 2013
Telescopes
As a child I had wanted to be an astronomer. Instead, I became an author and a theologian. But I’ve never lost my love of astronomy.
My first telescope was made of plastic; it was a reflector and was supposed to be a replica of something that Isaac Newton had built. Thinking back on it, I have my doubts about that. Although I enjoyed putting it together and playing with it, it never really quite worked right. I think I managed to see the moon through it once.
Later, when I was a little older, my parents gave me a 3 inch reflector telescope with a wobbly tripod and a sort of ball-joint alt-azimuth mount. It, too, never worked very well and was terribly difficult to aim. I was lucky to see the moon through it once or twice. The same can be said of a small 2.4 inch refractor they gave me some time later. It also came with a wobbly alt-azimuth mount; like the previous reflector, it was always difficult to use, and I never saw much more than the moon with it. My experience with those three telescopes taught me a lot, however. For instance, I learned that the sort of mount a telescope has is very important. I promised myself that I would never, ever buy a telescope with an alt-azimuth mount again.
Despite all the trouble I had with relatively inexpensive telescopes over the years, I never lost my love for astronomy. So, about five years ago, I bought myself a small telescope that did not have an alt-azimuth mount. Specifically, I found a Meade 3.5 inch Maksutov-Cassegrain telescope on sale for about half its normal price. I thought it was a good deal. It came with two eyepieces, a 26mm Plossl and a 6mm. It has a yoke-type equatorial mount with a clock drive. But it came with only a tabletop tripod; a field tripod was an option that I didn’t purchase at the time. Since it was a small telescope, the idea was that a small tripod would make it easy to pack up and carry with you; it came with a nice little carrying case.

Its portability was certainly unmatched, and I took it with me whenever I traveled, but in practice I found the tabletop tripod made the telescope a pain to use. If I put it on the ground, I had to lie on my belly to peer through it, and that was not always particularly comfortable. If I didn’t want to lie on my stomach, or crouch in some awkward position, I had to find a table, and that usually proved to be a problem. Oftentimes, for instance when I was camping, there simply were no tables around. After all, portable tables, such as folding card tables, are rather bulky and defeated the purpose of having a small, easily transportable telescope. Worse, portable tables tend to be a bit wobbly, and wobble is the enemy of any telescope, since every tremor in the table is magnified just as much as the image is. It was hard to really see Mars or Jupiter when they bounced around like burning billiard balls every time I happened to brush against the table.
So, about four years ago I finally sprang for a field tripod for my little telescope. It is solid, it is well-made, and it does not wobble. And the tripod folds up nicely and turns out to be much easier to carry with me than a card table. For the first time in my life I now have a telescope that is actually fully usable.
On the first Saturday after getting the field tripod, I took my little telescope outside, carefully set it up so the equatorial mount was aligned properly, and took a look through the eyepiece. I was able to see several things besides the moon. Jupiter that evening was a late-night object, rising around 11 PM that particular night four years ago. (Tonight, if you go outside and look east before midnight, the brightest thing you’ll see in the sky—besides the moon or a passing airplane—will be Jupiter.)
What was especially interesting about Jupiter that first night with the new tripod was that its position made it remarkably easy to also see the planet Neptune. In fact, Neptune was positioned near Jupiter that night very similarly to the way it was in the winter of 1612–1613, when Galileo recorded it while observing Jupiter. Although recently some scholars have suggested that Galileo might have realized he was seeing something other than a star, that seems unlikely. Certainly nothing in his surviving documents indicates that he paid any attention to the dot he drew, along with some other stars, around Jupiter on the nights he observed that planet. Neptune therefore wasn’t officially discovered until 1846, when Johann Galle found it using calculations by Urbain Le Verrier.
So, on that Saturday, besides seeing Jupiter, I also saw Neptune for the first time in my life. The 5th-magnitude star Mu Capricorni was 1/4° north-northwest of Jupiter, and the 7.8-magnitude Neptune was 1/4° north of that. I could also see all four Galilean satellites scattered around Jupiter and a bit of banding on the planet. Neptune was just a bluish dot that looked just like a star–but unlike Galileo I knew what I was seeing.
April 23, 2013
Storage Space
My now old second generation Kindle, the electronic book reader made by Amazon.com, has about 1.5 gigabytes of space available for storing the books that I purchase for it. How many books can I fit into this device that is only about five and a half inches wide, by eight inches tall, by less than half an inch thick—and which weighs about the same as a single paperback book? It can hold the contents of about 1,500 books averaging 300 pages each.
I have an office in my home that is lined on all the walls, floor to ceiling, with bookcases. The room is about ten feet by ten feet. Those shelves hold about that many books. Moving those books would be very unpleasant: I would have to fill dozens of cardboard boxes and would work up quite a sweat hauling them around. Each fifty pound box, perhaps two feet by two feet in size, would hold at best fifty or so books. I obviously would not carry even one such box around with me when I went to the doctor’s office. But I can easily carry my Kindle, with the contents of my whole office, anywhere I go.
And if I finish all the books I currently have on my electronic book reader? Kindle allows me to find and purchase any of more than three hundred thousand books wirelessly. They range in price from free to less than twenty dollars; most are cheaper than a paperback. I can purchase a book and download it to my device any time night or day, wherever I happen to be: in my house, at the park, on the beach, riding in a car or sitting in my doctor’s office. Within sixty seconds after pushing the button that says “purchase” I can start reading.
Thanks to digital technology, we can cram an incredible amount of information into remarkably small spaces. I have in my pocket what’s sometimes called a thumb drive; it is smaller than a tube of lipstick, yet it has twice the storage capacity of my Kindle. I have a small netbook computer about the size and weight of a hardback book. Its hard drive has a 160 gigabyte capacity—more than a hundred times the capacity of my Kindle: it could store at least 120,000 books—more than the number of books you’ll find in most public libraries.
The Library of Congress is currently the largest library in the world, with about 530 miles of shelving. It holds about 130 million items, of which 29 million are books. It has been estimated that if all those books were put into digital format, they would fit on about 20 terabytes of hard drive space. A terabyte is about 1000 gigabytes. Today, many desktop computers that can be purchased for less than a thousand dollars come with a 1 terabyte hard drive, and it is now possible to find some with 2 terabyte hard drives for not much more. Thus, all the books in the Library of Congress, currently sitting on 530 miles of shelves, would fit on ten desktop computers that would easily fit in my office at home. Doubtless within the next five years or so it will be possible to buy a single desktop computer with a hard drive big enough to contain all the books in the Library of Congress, with room to spare.
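The arithmetic behind these comparisons is easy to check for yourself. Here is a rough sketch in Python; the one-megabyte-per-book figure is my own assumption, back-solved from the 1,500-books-in-1.5-gigabytes claim above, not an exact measurement:

```python
import math

MB = 1024 ** 2
GB = 1024 ** 3
TB = 1024 ** 4

# Assumed average size of one e-book, formatting included (my estimate).
bytes_per_book = 1 * MB

# A 1.5 gigabyte Kindle at that density:
kindle_books = int(1.5 * GB // bytes_per_book)
print(f"1.5 GB Kindle: ~{kindle_books:,} books")   # ~1,536 books

# The Library of Congress's 29 million books at the same density:
loc_terabytes = 29_000_000 * bytes_per_book / TB
print(f"29 million books: ~{loc_terabytes:.0f} TB")  # ~28 TB

# How many 2 terabyte desktop drives would hold the 20 TB estimate?
drives = math.ceil(20 * TB / (2 * TB))
print(f"20 TB fits on {drives} two-terabyte drives")  # 10 drives
```

At a megabyte per book, 29 million books come to roughly 28 terabytes, the same order of magnitude as the published 20 terabyte estimate, and either figure fits comfortably on ten 2 terabyte desktop drives.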
The amount of information available on the internet is literally astronomical compared to the contents of the Library of Congress, however. Just now, there are over 500 billion gigabytes of information on the internet. If all that digital content were printed and bound into books, the stack would stretch from the Earth to Pluto ten times over—that is, about thirty billion miles. In fact, the world’s digital output is growing so rapidly every day that if it were being converted into books as it was produced, the stack would grow faster than a space shuttle could fly.
Of course, not all that digital content is particularly interesting. Much of it would be email, twitter feeds, Facebook pages, cat videos, and porn. Still, the amount of information available on the internet is remarkable. Google is currently digitizing all the books it can get its hands on—thousands of them a year—and putting them on the internet where they can be easily accessed.
Thanks to high speed connections, both wired and wireless, you don’t even need to have gigabytes or terabytes of storage in your computer. On a small handheld device like a cell phone, you can access that astronomically massive amount of data wherever and whenever you happen to be. In our pockets, we now have at our disposal the knowledge of the human race: we can read any book and find an answer to any question, any time we take a notion to find out. I can stream music from the cloud and listen to practically anything that has ever been recorded. With Netflix, Amazon Prime, or Hulu, I can select from thousands upon thousands of television shows and movies whenever I want to watch them.
Though most of the time, all people do with that wonderful opportunity in their pockets is to play solitaire, text emoticons, or look at cat videos.