John G. Messerly’s Blog

February 13, 2016

Summary of Gregory Paul & Earl Cox’s, Beyond Humanity

Gregory Scott Paul (1954 – ) is a freelance researcher, author and illustrator who works in paleontology, sociology and theology. Earl D. Cox is the founder of the Metus Systems Group and an independent researcher. Their book, Beyond Humanity, is an assault on the mindset of those who oppose their view of scientific progress.


Like many of our previous authors, Paul and Cox argue that the universe, as well as all life and mind within it, has evolved over time from the bottom up. However, genes now have little to do with our evolution—science and technology drive its accelerating rate. In the course of that evolution a general pattern emerges—more change in less time. While it took nature a long time to produce a bio-brain, technology will produce a cyber-brain much faster.


Despite its promises, people are ambivalent about science and technology (SciTech). They believe it will improve their lives, yet it has contributed to the death of millions. Its success has, in some sense, backfired. To be completely accepted, SciTech must solve the problems of suffering and death, which inevitably leads to questions about human nature. Taking a good look at human nature, the authors conclude that there is good news—we have brains that produce self-aware, conscious thought, connected to wonderful auditory and visual systems. However, our bodies need sleep, demand exercise, lust for fatty foods, and have limited mobility and strength.


The bad news continues when we consider the limited memory and storage capacity of our brains. We upload information slowly; often cannot control our underdeveloped emotions; are easily conditioned into all sorts of irrationalities as children; have difficulty unlearning old falsehoods as adults; don’t know how our brains work; often cannot change unwanted behavioral patterns; and find that brain chemicals control our moods—suggesting that we are much less free than we admit. Moreover, when individual minds join together they are particularly destructive, often killing each other at astonishing rates. We are also vulnerable to brainwashing, pain, sun, insects, viruses, trauma, broken bones, disease, infection, organ failure, paralysis, minuscule DNA glitches, cancer, depression, and psychosis. We degrade and suffer pain as we age, and we die without a backup system, since evolution perpetuates our DNA, not our minds. On the whole, this is not a pretty picture.


Disease and aging can be thought of as a war that pits our brains and computers against the RNA and DNA computers of microbes and diseased cells. What is the best way to win this war? Regeneration from our DNA would only regenerate the body—the mind would still have died—so it is not a wholly promising approach. The way around this limitation is to have a nanocomputer within your brain that receives downloads from your conscious mind. If the mind-storage unit receives continuous downloads, you can always be brought back after death—you would be immortal. But why stop there? Why not just make an indestructible cyber-body and cyber-brain? Why not become immortal cyber-beings?
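
The “continuous download” idea is, at bottom, an incremental backup-and-restore protocol applied to a mind. As a loose analogy only (no such brain interface exists, and every name here is hypothetical), a minimal sketch in Python:

```python
import copy
import time

class BackupStore:
    """Hypothetical mind-storage unit that keeps the latest snapshot."""
    def __init__(self):
        self._snapshot = None
        self._timestamp = None

    def download(self, state):
        # Continuous downloads: each call overwrites the previous snapshot.
        self._snapshot = copy.deepcopy(state)
        self._timestamp = time.time()

    def restore(self):
        # After "death," the latest snapshot is all that can be revived;
        # anything experienced after the last download is lost.
        if self._snapshot is None:
            raise RuntimeError("no backup: nothing to restore")
        return copy.deepcopy(self._snapshot)

# Usage: back up regularly, then restore after a failure.
store = BackupStore()
mind = {"memories": ["learned to ski"], "mood": "curious"}
store.download(mind)
mind["memories"].append("unsaved experience")  # never downloaded
revived = store.restore()                      # lacks the last memory
print(revived["memories"])                     # ['learned to ski']
```

The gap between the last download and death is exactly the gap the authors’ “continuous” qualifier is meant to close.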


This all leads to questions about our becoming gods. The authors argue that the existence of gods is a science and engineering project—with sufficient technology we can create minds as powerful as those of our imaginary gods. Of course supernaturalism opposes this project, but SciTech will win the struggle, just as it has historically dismantled other supernatural superstitions one by one. Science will defeat supernaturalism by explaining it, by providing in reality what religions supply only in the imagination. When science conquers death and suffering, religion will die; religion’s fundamental reason for being—comforting our fear of death—will become irrelevant. As for the custodians of religion, the theologians, the authors issue a stern warning:


Theologians are like a group of Homo erectus huddling around a fire, arguing over who should mate with whom, and which clan should live in the green valley, while paying no mind to the mind-boggling implications of the first Homo sapiens … Theologians of the world … the affairs you devote so much attention to are in danger of having as much meaning as the sacrifices offered to Athena … science and technology may be about to deliver … minds [that] will no longer be weak and vulnerable to suffering, and they will never die out. The gods will soon be dead, but they will be replaced with real minds that will assume the power of gods, gods that may take over the universe and even make new universes. It will be the final and greatest triumph of science and technology over superstition. 


Summary – We should proceed beyond humanity, overcoming the religious impulses which are the last vestige of superstition.


___________________________________________________________________


Gregory Paul and Earl Cox, Beyond Humanity: CyberEvolution and Future Minds (Rockland, MA: Charles River Media, 1996), 415.


February 11, 2016

Summary of Jaron Lanier’s, “One Half A Manifesto”

Jaron Lanier (1960 – ) is a pioneer in the field of virtual reality who left Atari in 1985 to found VPL Research, Inc., the first company to sell VR goggles and gloves. In the late 1990s Lanier worked on applications for Internet2, and in the 2000s he was a visiting scholar at Silicon Graphics and various universities. More recently he has acted as an advisor to Linden Lab on their virtual world product Second Life, and as “scholar-at-large” at Microsoft Research where he has worked on the Kinect device for Xbox 360.


Lanier’s “One Half A Manifesto” opposes what he calls “cybernetic totalism,” the view of Kurzweil and others which proposes to transform the human condition more than any previous ideology. The following beliefs characterize cybernetic totalism.



1. That cybernetic patterns of information provide the ultimate and best way to understand reality.
2. That people are no more than cybernetic patterns.
3. That subjective experience either doesn’t exist or is unimportant, because it is some sort of peripheral effect.
4. That what Darwin described in biology, or something like it, is in fact also the singular, superior description of all creativity and culture.
5. That qualitative as well as quantitative aspects of information systems will be accelerated by Moore’s Law (see the sketch after this list).
6. That biology and physics will merge with computer science (becoming biotechnology and nanotechnology), resulting in life and the physical universe becoming mercurial, achieving the supposed nature of computer software. Furthermore, all of this will happen very soon. Since computers are improving so quickly, they will overwhelm all other cybernetic processes, like people, and fundamentally change the nature of what’s going on in the familiar neighborhood of Earth at some moment when a new “criticality” is achieved—maybe around the year 2020. To be human after that moment will be either impossible or something very different than we can now know.
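
To put numbers on belief 5: Moore’s-Law-style growth means capacity doubles every fixed interval. A minimal sketch in Python, assuming an illustrative 18-month doubling period (the exact period is my assumption, not Lanier’s):

```python
def projected_capacity(years_elapsed, doubling_period_years=1.5, baseline=1.0):
    """Project hardware capacity under fixed-period doubling."""
    return baseline * 2 ** (years_elapsed / doubling_period_years)

# From the essay's publication (2000) to the "criticality" year it cites (~2020):
print(f"~{projected_capacity(2020 - 2000):,.0f}x growth")  # ~10,321x
```

Lanier’s complaint, developed in response 5 below, is that software shows nothing like this curve.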

Lanier responds to each belief in detail. A summary of those responses follows:



1. Culture cannot be reduced to memes, and people cannot be reduced to cybernetic patterns.
2. Artificial intelligence is a belief system, not a technology.
3. Subjective experience exists, and it separates humans from machines.
4. For cybernetic totalists, Darwin supplies the “algorithm for creativity” that explains how computers will become smarter than humans. But the fact that nature required nothing “extra” to create people doesn’t mean that computers will evolve on their own.
5. There is little reason to think that software is getting better, and no reason at all to think it will improve at a rate like hardware’s.

The sixth belief, the heart of cybernetic totalism, terrifies Lanier. Yes, computers might kill us, preserve us in a matrix, or be used by evil humans to harm the rest of us. It is variations of this latter scenario that most frighten Lanier, for it is easy to imagine a wealthy few becoming a near-godlike species while the rest of us remain relatively the same. And Lanier expects immortality to be very expensive unless software gets much better. For example, if you were to use biotechnology to try to make your flesh into a computer, you would need excellent, glitch-free software to achieve such a thing—and that would be extraordinarily costly.


Lanier grants that there will indeed be changes in the future, but they should be brought about by humans not by machines. To do otherwise is to abdicate our responsibility. Cybernetic totalism, if left unchecked, may cause suffering like so many other eschatological visions have in the past. We ought to remain humble about implementing our visions.


Summary – Cybernetic totalism is philosophically and technologically problematic.


____________________________________________________________________


Jaron Lanier, “One Half A Manifesto,” Edge, 2000.


February 9, 2016

Summary of Michio Kaku’s, Visions: How Science Will Revolutionize the 21st Century

Michio Kaku (1947 – ) is the Henry Semat Professor of Theoretical Physics at the City College of New York of City University of New York. He is the co-founder of string field theory and a popularizer of science. He earned his PhD in physics from the University of California-Berkeley in 1972.


In his book, Visions: How Science Will Revolutionize the 21st Century, Kaku sets out an overall picture of what is happening in science today that will revolutionize our future. He begins by noting the three great themes of 20th-century science—the atom, the computer, and the gene. The revolutions associated with these themes ultimately aim at a complete understanding of matter, mind, and life. Progress toward these goals has been stunning—in just the past few years more scientific knowledge has been created than in all previous human history. We no longer need to be passive observers of nature; we can be its active directors, moving from discovering nature’s laws to mastering them.


The quantum revolution spawned the other two revolutions. Until 1925 no one understood the world of the atom; now we have an almost complete description of matter. The basic postulates of that understanding are: 1) energy is not continuous but occurs in discrete bundles called “quanta”; 2) sub-atomic particles have both wave and particle characteristics; and 3) these wave/particles obey Schrödinger’s wave equation, which determines the probability that certain events will occur. With the standard model we can predict the properties of things from quarks to supernovas. We now understand matter, and in this century we may be able to manipulate it almost at will.
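
For reference, the wave equation in postulate 3 is the time-dependent Schrödinger equation, with the Born rule supplying the probabilities. These are the standard textbook forms, not formulas quoted from Kaku:

```latex
i\hbar\,\frac{\partial}{\partial t}\Psi(\mathbf{r},t)
  = \left[-\frac{\hbar^{2}}{2m}\nabla^{2} + V(\mathbf{r},t)\right]\Psi(\mathbf{r},t),
\qquad
P(\mathbf{r},t) = |\Psi(\mathbf{r},t)|^{2}
```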


The computer revolution began in the 1940s. Computers were crude then, but the transistor and, later, the laser set off exponential growth. Today there are tens of millions of transistors in an area the size of a fingernail. As microchips become ubiquitous, life will change dramatically. We used to marvel at intelligence; in the future we may create and control it.


The bio-molecular revolution began with the unraveling of the double helix in the 1950s. We found that our genetic code is written on a molecule within our cells—DNA. The techniques of molecular biology allow us to read the code of life like a book. With the owner’s manual for human beings in hand, science and medicine will be irrevocably altered. Instead of watching life, we will be able to direct it almost at will.


Hence we are moving from the unraveling stage to the mastery stage in our understanding of nature. We are like aliens from outer space who land and watch a chess game: it takes a long time to unravel the rules, and merely knowing the rules doesn’t make one a grandmaster. We have learned the rules of matter, life, and mind, but we are not yet their masters. Soon we will be.


What really moves these revolutions is their interconnectivity, the way they propel each other. Quantum theory gave birth to the computer revolution via transistors and lasers; it gave birth to the bio-molecular revolution via x-ray crystallography and the theory of chemical bonding. While reductionism and specialization paid great dividends for these disciplines, intractable problems in each have forced them back together, calling for synergy of the three. Now computers decipher genes, while DNA research makes possible new computer architecture using organic molecules. Kaku calls this cross-fertilization—advances in one science boost the others along—and it keeps the pace of scientific advance accelerating.


In the next decade Kaku expects to see an explosion in scientific activity that will include growing organs and curing cancer. By the middle of the 21st century he expects to see progress in slowing aging, as well as huge advances in nanotechnology, interstellar travel, and nuclear fusion. By the end of the century we will create new organisms, and colonize space. Beyond that we will see the visions of Kurzweil and Moravec come to pass—we will extend life by growing new organs and bodies, manipulating genes, or by merging with computers.


Where is all this leading? One way to answer is to look at the labels astrophysicists attach to hypothetical civilizations based on how they utilize energy—Type I, II, and III civilizations. A Type I civilization controls terrestrial energy: it modifies the weather, mines the oceans, and extracts energy from its planet’s core. A Type II civilization has mastered stellar energy, using its sun to drive machines and exploring other stars. A Type III civilization manages interstellar energy, having exhausted its own star’s. Energy is available on a planet, in its star, and in its galaxy, and a civilization’s type corresponds to its power over those resources.


Based on a growth rate of about 3% a year in our ability to control resources, Kaku estimates that we might expect to become a Type I civilization in a century or two, a Type II civilization in about 800 years, and a Type III civilization in about ten thousand years. At the moment, however, we are a Type 0 civilization, powering ourselves with the remains of dead plants and animals (and dramatically changing our climate in the process). By the end of the 22nd century, Kaku predicts, we will be close to becoming a Type I civilization and taking our first steps into space. Agreeing with Kurzweil and Moravec, Kaku believes this will lead to a form of immortality when our technology replaces our brains, preserving them in robotic bodies or virtual realities. Evolution will have replaced us, just as we replaced all that died in the evolutionary struggle so that we could live. Our job is to push evolution forward.
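
Kaku’s timescales follow from simple compound-growth arithmetic. Using the commonly cited Kardashev benchmarks (roughly 10^16 W for Type I, 10^26 W for Type II, and 10^36 W for Type III) and today’s roughly 10^13 W of human power use, all ballpark figures of mine rather than Kaku’s exact numbers, a 3% growth rate gives:

```python
import math

def years_to_reach(target_watts, current_watts=1e13, growth=0.03):
    """Years of compound growth needed to raise power use to a target level."""
    return math.log(target_watts / current_watts) / math.log(1 + growth)

for name, watts in [("Type I", 1e16), ("Type II", 1e26), ("Type III", 1e36)]:
    print(f"{name}: ~{years_to_reach(watts):,.0f} years")
# Type I: ~234 years, Type II: ~1,013 years, Type III: ~1,792 years
```

The first two results line up with Kaku’s estimates; his much longer ten-thousand-year figure for Type III presumably reflects the added constraint of slower-than-light colonization, not energy growth alone.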


Summary – Knowledge of the atom, the gene, and the computer will lead to a mastery of matter, life, and mind.


_______________________________________________________________________


Michio Kaku, Visions: How Science Will Revolutionize the 21st Century (New York: Anchor, 1998).


February 7, 2016

Summary of Marshall Brain’s, “The Day You Discard Your Body”


Marshall Brain (1961 – ) is an author, public speaker, and entrepreneur. He earned an MS in computer science from North Carolina State University, where he taught for many years, and is the founder of the website HowStuffWorks, which was sold in 2007 to Discovery Communications for $250,000,000. He also maintains a website where his essays on transhumanism, robotics, and naturalism can be found. His essay “The Day You Discard Your Body” presents a compelling case that sometime in this century the technology will be available to discard our bodies. And when the time comes, most of us will do so.


Why would we want to discard our bodies? The answer is that by doing so we would achieve an unimaginable level of freedom and longevity. Consider how vulnerable your body is. If you fall off a horse or dive into a too-shallow pool of water, your body can become completely useless. If this happened to you, you would gladly discard your body. And something like this happens to all of us as we age—our bodies generally kill our brains—creating a tragic loss of knowledge and experience. Our brains die because our bodies do.


Consider also how few of us are judged to have beautiful bodies, and how the beauty we do have declines with age. If you could have a more beautiful body, you would gladly discard your body. Additionally, your body has to go to the bathroom, it smells, it becomes obese easily, it takes time for it to travel through space, it cannot fly or swim underwater for long, and it cannot perform telekinesis. As for the aging of our bodies, most would happily dispense with it, discarding their bodies if they could.


Why would the healthy discard their bodies? Consider that healthy people play video games in staggering numbers. As these games become more realistic, we can imagine people wanting to live immersed in them. Eventually you would want to connect your biological brain to your virtual body inside the virtual reality. And your virtual body could be so much better than your biological body—it could be perfect. Your girlfriend or boyfriend who made the jump to the virtual world would have a perfect body, and they would ask you to join them. All you would have to do is undergo a painless surgery to connect your brain to its new body in the virtual reality. There you could see anything in the world without having to take the plane ride (or go through security). You could visit the Rome or Greece of two thousand years ago, fight in the Battle of Stalingrad, talk to Charles Darwin, or live the life of Superman. You could be at any time and any place, overcome all limitations, have great sex! When your virtual body is better in every respect than your biological body, you will discard the latter.


Initially your natural brain may still be housed in your natural body, but eventually your brain will be disconnected from your body and housed in a safe brain storage facility. Your transfer will be complete—you will live in a perfect virtual reality without your cumbersome physical body, and the limitations it imposes.


Summary – We will be able to discard our bodies and live in a much better virtual reality relatively soon. We should do so.


______________________________________________________________


Marshall Brain, “The Day You Discard Your Body”



February 6, 2016

Summary of Charles T. Rubin’s, “Artificial Intelligence and Human Nature”


Charles T. Rubin is a professor of political science at Duquesne University. His 2003 article, “Artificial Intelligence and Human Nature,” is a systematic attack on the thinking of Ray Kurzweil and Hans Moravec, thinkers we have discussed in recent posts.


Rubin finds nearly everything about the futurism of Kurzweil and Moravec problematic. It involves metaphysical speculation about evolution, complexity, and the universe; technical speculation about what may be possible; and philosophical speculation about the nature of consciousness, personal identity, and the mind-body problem. Yet Rubin avoids attacking the futurists, whom he calls “extinctionists,” on the issue of what is possible, focusing instead on their claim that a future robotic-type state is necessary or desirable.


The argument that there is an evolutionary necessity for our extinction seems thin. Why should we expedite our own extinction? Why not destroy the machines instead? And the argument for the desirability of this vision raises another question. What is so desirable about a post-human life? The answer to this question, for Kurzweil and Moravec, is the power over human limitations that would ensue. The rationale that underlies this desire is the belief that we are but an evolutionary accident to be improved upon, transformed, and remade.


But this leads to another question: will we preserve ourselves after uploading into our technology? Rubin objects that there is a disjunction between us and the robots we want to become. Robots will bear little resemblance to us, especially after we have shed the bodies so crucial to our identities, making the preservation of a self all the more tenuous. Given this discontinuity, we can no more know whether we would want to live in this new world, or whether it would be better, than one of our primate ancestors could have imagined what a good human life would be like. Those primates would be as uncomfortable in our world as we might be in the post-human world. We really have no reason to think we can understand what a post-human life would be like, and it is not out of the question that the situation will be nightmarish.


Yet Rubin acknowledges that technology will evolve, moved by military, medical, commercial, and intellectual incentives; hence it is unrealistic to try to limit technological development. The key to stopping, or at least slowing, the trend is to educate people about the unique characteristics of being human, which surpass machine life in so many ways. Love, courage, charity, and a host of other human virtues may themselves be inseparable from our finitude. Evolution may hasten our extinction, but even if it does not, there is no need to pursue the process, because there is no reason to think the post-human world will be better than our present one. If we pursue such Promethean visions, we may end up worse off than before.


Summary – We should reject transhumanist ideals and accept our finitude.


__________________________________________________________________


Charles T. Rubin, “Artificial Intelligence and Human Nature,” The New Atlantis, no. 1 (Spring 2003).


February 5, 2016

Summary of Hans Moravec’s, Robot: Mere Machine To Transcendent Mind

Hans Moravec (1948 – ) is a faculty member at the Robotics Institute of Carnegie Mellon University and the chief scientist at Seegrid Corporation. He received his PhD in computer science from Stanford in 1980, and is known for his work on robotics and artificial intelligence, his writings on the impact of technology, and his many publications and predictions concerning transhumanism.


Moravec sets forth his futuristic ideas most clearly in his 1998 book Robot: Mere Machine to Transcendent Mind. He notes that by almost any measure society is changing faster than ever before, primarily because the products of technology keep speeding up the process. The radical future that awaits us can be understood by thinking of technology as soon reaching escape velocity. In the same way that rubbing sticks together in the proper manner will produce ignition, or powering a rocket correctly will allow it to escape the earth’s gravity, our machines will soon escape their previous boundaries. At that point the old rules will no longer apply; robots will have achieved their own escape velocity.


For many of us this is hard to imagine because we are like riders in an elevator who forget how high we are until we get an occasional glimpse of the ground—as when we meet cultures frozen in time. Then we see how different the world we live in today is compared to the one we adapted to biologically. For all of human history culture was secondary to biology, but about five thousand years ago things changed, as cultural evolution became the most important means of human evolution. It is the technology created by culture that is exponentially speeding up the process of change. Today we are reaching the escape velocity from our biology.


Not that building intelligent machines will be easy—Moravec constantly reminds us how difficult robotics is. He outlines the history of cybernetics, from its beginnings with Alan Turing and John von Neumann, to the first working artificial intelligence programs which proved many mathematical theorems. He admits that most of these programs were not very good and proved theorems no better or faster than a college freshman. So reaching escape velocity will require hard work.


One of the most difficult issues in robotics and artificial intelligence is the disparity between programs that calculate and reason and programs that interact with the world. Robots still don’t perform as well behaviorally as infants or non-human animals, yet they play chess superbly. So the order of difficulty for machines, from easier to harder, is: calculating, reasoning, perceiving, and acting. For humans the order is exactly the reverse. The explanation probably lies in the fact that perceiving and acting were beneficial for survival in a way that calculation and abstract reasoning were not. Machines are way behind in many areas yet catching up, and Moravec predicts that in less than fifty years inexpensive computers will exceed the processing power of a human brain. Can we then program them to intuit and perceive like humans? Moravec thinks there is reason to answer in the affirmative, and much of his book cites the evolution of robotics as evidence for this claim.
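
Moravec’s fifty-year figure is itself a doubling-curve extrapolation. He famously estimated the brain’s processing at roughly 100 million MIPS (about 10^14 instructions per second); taking, say, 1,000 MIPS for a cheap late-1990s PC and a doubling every two years (illustrative assumptions of mine, not exact figures from the book), the crossover falls out directly:

```python
import math

def crossover_years(target_mips=1e8, start_mips=1e3, doubling_years=2.0):
    """Years until fixed-period doubling closes the gap to a target capacity."""
    return doubling_years * math.log2(target_mips / start_mips)

print(f"~{crossover_years():.0f} years")  # ~33 years from a 1998 baseline
```

Thirty-odd years from 1998 is comfortably “less than fifty,” which is the shape of Moravec’s argument.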


He also supports his case with a clever analogy to topography. The landscape of human competence has high mountains like hand-eye coordination, locomotion, and social interaction; foothills like theorem proving and chess playing; and lowlands like arithmetic and memorization. Computers/robots are analogous to a flood that has drowned the lowlands, has just reached the foothills, and will eventually submerge the peaks.


Robots will advance through generational change as technology advances: from lizard-like robots to mouse-like, primate-like, and human-like ones. Eventually they will be smart enough to design their own successors—without help from us! So a few generations of robots will mimic the four-hundred-million-year evolution marked by the brain stem, cerebellum, mid-brain, and neo-cortex. Will our machines be conscious? Moravec says yes. Just as the distinction between the terrestrial and the celestial was once sacred, so today is the distinction between the animate and the inanimate. Of course if the animating principle is a supernatural soul, the distinction remains, but our current knowledge suggests that sufficiently complex organization provides animation. This means that our technology is doing what it took evolution billions of years to do—animating dead matter.


Moravec argues that robots will slowly come to have a conscious, internal life as they advance. Fear, shame, and joy may be emotions valuable to robots, helping them retreat from danger, reduce the probability of bad decisions, or reinforce good ones. He even thinks there would be good reasons for robots to care about their owners or get angry, but he surmises that they will generally be nicer than humans, since robots don’t have to be selfish to guarantee their survival. He recognizes that many reject the view that dead matter can give rise to consciousness. The philosopher Hubert Dreyfus has argued that computers cannot experience subjective consciousness; his colleague John Searle says, as we have already seen, that computers will never think; and the mathematician Roger Penrose argues that consciousness is achieved through certain quantum phenomena in the brain, something unavailable to robots. But Moravec points to the accumulating evidence from neuroscience to disagree. Mind is something that runs on a physical substrate, and we will eventually accept sufficiently complex robots as conscious.


Moravec sees these developments as the natural consequence of humans using one of their two channels of heredity: not the slower biological channel utilizing DNA, but the faster cultural channel utilizing books, language, databases, and machines. For most of human history there was more information in our genes than in our culture, but now libraries alone hold thousands of times more information than genes. “Given fully intelligent robots, culture becomes completely independent of biology. Intelligent machines, which will grow from us, learn our skills, and initially share our goals and values, will be the children of our minds.”
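
The genes-versus-libraries comparison is easy to sanity-check with rough figures (back-of-the-envelope assumptions of mine, not Moravec’s numbers): the human genome is about 3.2 billion base pairs at two bits each, while a major library’s text holdings run to tens of terabytes.

```python
GENOME_BASE_PAIRS = 3.2e9                 # approximate human genome size
GENOME_BYTES = GENOME_BASE_PAIRS * 2 / 8  # two bits per base -> ~0.8 GB
LIBRARY_BYTES = 20e12                     # ~20 TB: rough text holdings of a major library

print(f"genome  ~ {GENOME_BYTES / 1e9:.1f} GB")
print(f"library ~ {LIBRARY_BYTES / GENOME_BYTES:,.0f}x the genome")
# ~0.8 GB and ~25,000x -- "thousands of times more information than genes"
```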


To better understand the coming age of robots, consider our history as it relates to technology. A hundred thousand years ago our ancestors were supported by what Moravec calls a fully automated nature. With agriculture we increased production but added work and, until recently, the production of food was the chief occupation of humankind. Farmers lost their jobs to machines and moved to manufacturing; more advanced machines then displaced these workers from factories into offices, where machines have put them out of work again. Soon machines will do all the work. Tractors and combines amplify farmers; computer workstations amplify engineers; layers of management and clerical help slowly disappear; and the scribe, priest, seer, and chief are no longer repositories of wisdom—printing and mass communication ended that. Automation and robots will gradually replace labor as never before; just consider how much physical and mental labor machines have already replaced. In the short run this will cause panic and a scramble to earn a living in new ways. In the medium run it will provide the opportunity for a more leisurely lifestyle. In the long run, “it marks the end of the dominance of biological humans and the beginning of the age of robots.”


Moravec is optimistic that robotic labor will make life more pleasant for humanity, but inevitably evolution will lead beyond humans to a world of “ex-humans” or “exes.” These post-biological beings will populate a galaxy which is as benign for them as it is hostile for biological beings. “We marvel at the Earth’s biodiversity … but the diversity and range of the post-biological world will be astronomically greater. Imagination balks at the challenge of guessing what it could be like.” Still, he is willing to hazard a guess: “…Exes trapped in neutron stars may become the most powerful minds in the galaxy … But, in the fast-evolving world of superminds, nothing lasts forever …. Exes, [will] become obsolete.”


In that far future, Moravec speculates that exes will “be transformed into intelligence-boosting computing elements … physical activity will gradually transform itself into a web of increasingly pure thought, where every smallest interaction represents a meaningful computation.” Exes may learn to arrange space-time and energy into forms of computation, with the result that “the inhabited portions of the universe will be rapidly transformed into a cyberspace, where overt physical activity is imperceptible, but the world inside the computation is astronomically rich.” Beings won’t be defined by physical location but will be patterns of information in cyberspace. Minds, pure software, will interact with other minds. The wave of physical migration into space will have long given way to “a bubble of Mind expanding at near lightspeed.” Eventually, the expanding bubble of cyberspace will recreate all it encounters, “memorizing the old universe as it consumes it.”


For the moment our small minds cannot give meaning to the universe, but a future universal mind might be able to do so, when that cosmic mind is infinitely subjective, self-conscious, and powerful. At that point our descendants will be capable of traversing in and through other possible worlds. Unfortunately, those of us alive today are governed by the laws of the universe, at least until we die, when our ties to physical reality are cut. It is possible we will then be reconstituted in the minds of our superintelligent successors or in simulated realities. But for the moment this is still fantasy; all we have for now is Shakespeare’s lament:


To die, to sleep;

To sleep: perchance to dream: ay, there’s the rub;

For in that sleep of death what dreams may come

When we have shuffled off this mortal coil …


Summary – Our robotic descendants will be our mind children, and they will live in realities now unimaginable to us. For now, though, we die.


______________________________________________________________________


Hans Moravec, Robot: Mere Machine to Transcendent Mind, 167.


February 3, 2016

How Science Can Make Us Immortal

If death is inevitable, then all we can do is die and hope for the best. But perhaps we don’t have to die. Many respectable scientists now believe that humans can overcome death and achieve immortality through the use of future technologies. But how will we do this?


The first way we might achieve physical immortality is by conquering our biological limitations—we age, become diseased, and suffer trauma. Aging research, while woefully underfunded, has yielded positive results. Average life expectancies have tripled since ancient times, increasing by more than fifty percent in the industrial world in the last hundred years, and most scientists think we will continue to extend our life-spans. We know that caloric restriction extends life-span in many organisms, and we increasingly understand the role that telomeres play in the aging process. We also know that certain jellyfish and bacteria are essentially immortal, and the bristlecone pine may be as well. There is no thermodynamic necessity for senescence—aging is presumed to be a byproduct of evolution—although why mortality should be selected for remains a mystery. There are reputable scientists who believe we can conquer aging altogether—in the next few decades, with sufficient investment—most notably the Cambridge researcher Aubrey de Grey.



If we do unlock the secrets of aging, we will simultaneously defeat many other diseases as well, since so many of them are symptoms of aging. Many researchers now consider aging itself to be a progressive disease. There are a number of strategies that could render disease mostly inconsequential: nanotechnology may give us nanobot cell-repair machines and robotic blood cells; biotechnology may supply replacement tissues and organs; genetics may offer genetic medicine; and full-fledged genetic engineering could make us impervious to disease.


Trauma is a more intractable problem from the biological perspective, although it too could be defeated through some combination of cloning, regenerative medicine, and genetic engineering. We can even imagine that your body could be recreated from a bit of your DNA, that other technologies could fast-forward your regenerated body to the age of your traumatic death, and that a backup file with all your experiences and memories would then be implanted in your brain. Even the dead may be resuscitated if they have undergone cryonics—preservation at very low temperatures in a glass-like state. Ideally, the clinically dead would be brought back to life when future technology was sufficiently advanced. This may now be science fiction, but if nanotechnology fulfills its promise there is a reasonably good chance that cryonics will be successful.


In addition to biological strategies for eliminating death, there are a number of technological scenarios for immortality which utilize advanced brain scanning techniques, artificial intelligence, and robotics. The most prominent scenarios have been advanced by the renowned futurist Ray Kurzweil and the roboticist Hans Moravec. Both have argued that the exponential growth of computing power in combination with advances in other technologies will make it possible to upload the contents of one’s consciousness into a virtual reality. This could be accomplished by cybernetics, whereby hardware would be gradually installed in the brain until the entire brain was running on that hardware, or via scanning the brain and simulating or transferring its contents to a computer with sufficient artificial intelligence. Either way we would no longer be living in a physical world.



In fact, we may already be living in a computer simulation. The Oxford philosopher and futurist Nick Bostrom has argued that advanced civilizations may create computer simulations containing individuals with artificial intelligence and, if they do, we might unknowingly be in such a simulation. Bostrom concludes that at least one of the following must be the case: civilizations never acquire the technology to run such simulations; they acquire the technology but decide not to use it; or we almost certainly live in a simulation.
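
Bostrom’s trilemma rests on a simple counting argument. One common simplified rendering of the key fraction from his 2003 paper (after the average population per civilization cancels out; take the notation as my paraphrase rather than a quotation) is:

```latex
f_{\text{sim}} \;=\; \frac{f_p\,\bar{N}}{f_p\,\bar{N} + 1}
```

Here f_p is the fraction of human-level civilizations that reach a posthuman stage and N̄ is the average number of ancestor-simulations such a civilization runs. Unless the numerator is driven toward zero (the first two options), the fraction of simulated observers is close to one (the third).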


If you don’t like the idea of being immortal in a virtual reality—or the idea that you may already be in one now—you could upload your brain to a genetically engineered body if you like the feel of flesh, or to a robotic body if you prefer silicon or whatever materials comprise the robot. MIT’s Rodney Brooks envisions the merger of human flesh and machines, whereby humans slowly incorporate technology into their bodies, thus becoming more machine-like and indestructible. So a cyborg future may await us.


The rationale underlying most of these speculative scenarios is an evolutionary perspective. Once one embraces that perspective, it is not difficult to imagine that our descendants will resemble us about as much as we do the amino acids from which we sprang. Our knowledge is growing exponentially and, given eons of time for future innovation, it is easy to envisage that humans will defeat death and evolve in unimaginable ways. For the skeptics, remember that our evolution is no longer moved by the painstakingly slow process of Darwinian evolution—where bodies exchange information through genes—but by cultural evolution—where brains exchange information through memes. The most prominent feature of cultural evolution is the exponentially increasing pace of technological evolution—an evolution that may soon culminate in a technological singularity.


The technological singularity, an idea first proposed by the mathematician Vernor Vinge, refers to the hypothetical future emergence of greater-than-human intelligence. Since the capabilities of such intelligences are difficult for our minds to comprehend, the singularity is seen as an event horizon beyond which the future becomes nearly impossible to understand or predict. Nevertheless, we may surmise that this intelligence explosion will lead to increasingly powerful minds for which the problem of death will be solvable. Science may well vanquish death—quite possibly in the lifetime of some of my readers.


But why conquer death? Why is death bad? It is bad because it ends something which at its best is beautiful; bad because it puts an end to all our projects; bad because all the knowledge and wisdom of a person is lost at death; bad because of the harm it does to the living; bad because it causes people to be unconcerned about the future beyond their short lifespan; bad because it renders fully meaningful lives impossible; and bad because we know that if we had the choice, and if our lives were going well, we would choose to live on. That death is generally bad—especially for the physically, morally, and intellectually vigorous—is nearly self-evident.


Yes, there are indeed fates worse than death, and in some circumstances death may be welcomed. Nevertheless, for most of us most of the time, death is one of the worst fates that can befall us. That is why we think that suicide and murder and starvation are tragic. That is why we cry at the funerals of those we love.


Our lives are not our own if they can be taken from us without our consent. We are not truly free unless death is optional.






February 1, 2016

Daniel Dennett: In Defense of Robotic Consciousness


Daniel Dennett (1942 – ) is an American philosopher, writer and cognitive scientist whose research is in the philosophy of mind, philosophy of science and philosophy of biology, particularly as those fields relate to evolutionary biology and cognitive science. He is currently the Co-director of the Center for Cognitive Studies, the Austin B. Fletcher Professor of Philosophy, and a University Professor at Tufts University. He received his PhD from Oxford University in 1965 where he studied under the eminent philosopher Gilbert Ryle.


In his book, Darwin’s Dangerous Idea: Evolution and the Meanings of Life, Dennett presents a thought experiment that defends strong artificial intelligence (SAI)—intelligence that matches or exceeds human intelligence. Dennett asks you to suppose that you want to live in the 25th century and that the only available technology for that purpose involves putting your body in a cryonic chamber, where you will be frozen in a deep coma and later awakened. In addition, you must design some supersystem to protect and supply energy to your capsule. You would now face a choice. You could find an ideal fixed location that will supply whatever your capsule needs, but the drawback is that you would die if some harm came to that site. Better, then, to have a mobile facility to house your capsule that could move in the event harm came your way—better to place yourself inside a giant robot. Dennett claims that these two strategies correspond roughly to nature’s distinction between stationary plants and moving animals.


If you put your capsule inside a robot, then you would want the robot to choose strategies that further your interests. This does not mean the robot has free will, but that it executes branching instructions so that when options confront the program, it chooses those that best serve your interests. Given these circumstances you would design the hardware and software to preserve yourself, and equip it with the appropriate sensory systems and self-monitoring capabilities for that purpose. The supersystem must also be designed to formulate plans to respond to changing conditions and to seek out new energy sources.
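
Dennett’s “branching instructions” amount to conditional choice in the service of a fixed top-level goal. A minimal sketch of such a policy in Python (the option names and probability estimates are invented for illustration):

```python
def choose_action(options, survival_prob):
    """Pick the branch whose estimated outcome best serves the client's survival."""
    return max(options, key=lambda action: survival_prob[action])

# Hypothetical options and the supersystem's estimated survival probabilities.
options = ["stay_put", "relocate", "seek_energy"]
estimates = {"stay_put": 0.60, "relocate": 0.85, "seek_energy": 0.75}
print(choose_action(options, estimates))  # relocate
```

Dennett’s point is that once the estimates are generated by the system’s own sensors and planners, the goals it derives begin to look like the robot’s own.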


What complicates the issue further is that, while you are in cold storage, other robots and who knows what else are running around in the external world. So you would need to design your robot to determine when to cooperate, form alliances, or fight with other creatures. A simple strategy like always cooperating would likely get you killed, but never cooperating may not serve your self-interest either, and the situation may be so precarious that your robot would have to make many quick decisions. The result will be a robot capable of self-control, an autonomous agent that derives its own goals from your original goal of survival—the preference with which it was originally endowed. But you cannot be sure it will act in your self-interest. It will be out of your control, acting partly on its own desires.


Now opponents of SAI claim that this robot does not have its own desires or intentions; those are simply derivative of its designer’s desires. Dennett calls this “client centrism”: I am the original source of the meaning within my robot; it is just a machine preserving me, even though it acts in ways I could not have imagined, ways that may even be antithetical to my interests. Of course it follows, according to the client centrists, that the robot is not conscious. Dennett rejects this centrism, primarily because if you follow the argument to its logical conclusion, you have to conclude the same thing about yourself! You would have to conclude that you are a survival machine built to preserve your genes, that your goals and intentions derive from them, and that you are not really conscious. To avoid these unpalatable conclusions, why not acknowledge that sufficiently complex robots have motives, intentions, goals, and consciousness? They are like you: survival machines that have evolved into something autonomous through their encounters with the world.


Critics like Searle admit that such a robot is possible but deny that it is conscious. Dennett responds that such robots would experience meaning as real as yours; they would have transcended their programming just as you have gone beyond the programming of your selfish genes. He concludes that this view reconciles thinking of yourself as a locus of meaning with being a member of a species with a long evolutionary history. We are artifacts of evolution, but our consciousness is no less real because of that. The same would hold true of our robots.


Summary – Sufficiently complex robots would be conscious.


________________________________________________________________


Daniel Dennett, Darwin’s Dangerous Idea: Evolution and the Meanings of Life (New York: Simon & Schuster, 1995), 422-26.


January 30, 2016

John Searle’s Critique of Ray Kurzweil


John Searle (1932 – ) is currently the Slusser Professor of Philosophy at the University of California, Berkeley. He received his PhD from Oxford University. He is a prolific author and one of the most important living philosophers.


According to Searle, Kurzweil’s The Age of Spiritual Machines is an extensive reflection on the implications of Moore’s Law. The essence of its argument is that smarter-than-human computers will arrive and that we will download ourselves into this smart hardware, thereby guaranteeing our immortality. Searle attacks this fantasy by focusing on the chess-playing computer Deep Blue (DB), which defeated world chess champion Garry Kasparov in 1997.


Kurzweil thinks DB is a good example of the way computers have begun to exceed human intelligence. But DB’s brute-force method of searching through possible moves differs dramatically from how human brains play chess. To clarify, Searle offers his famous Chinese Room argument. If I am in a room with a program that lets me answer questions posed in Chinese even though I do not understand Chinese, the fact that I can output the answers in Chinese does not mean I understand the language. Similarly, DB does not understand chess, and Kasparov was really playing a team of programmers, not a machine. Thus Kurzweil is mistaken if he believes that DB was thinking.
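
“Brute force” here means exhaustive game-tree search: generate every line of play to some depth, score the end positions, and back the values up the tree, with no understanding anywhere in the loop. A toy minimax in Python over an abstract game tree (the tiny tree and its leaf scores are invented; Deep Blue’s actual engine was vastly more elaborate):

```python
def minimax(node, maximizing=True):
    """Exhaustively search a game tree; leaves are numeric position scores."""
    if isinstance(node, (int, float)):  # leaf: a scored position
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# Each inner list is a position; numbers are evaluations of final positions.
tree = [[3, 5], [2, 9], [0, 7]]
print(minimax(tree))  # 3 -- the best outcome the maximizer can guarantee
```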


According to Searle, Kurzweil confuses a computer’s seeming to be conscious with its actually being conscious—something we should worry about if we are proposing to download ourselves into it! Just as a computer simulation of digestion cannot eat pizza, a computer simulation of consciousness is not conscious. Computers manipulate symbols or simulate brains through neural nets, but this is not the same as duplicating what the brain is doing. To duplicate what the brain does, an artificial system would have to act like the brain. Thus Kurzweil confuses simulation with duplication.


Another confusion is between observer-independent (OI) features of the world and observer-dependent (OD) features of the world. The former include the features of the world studied by, for example, physics and chemistry, while the latter are things like money, property, and governments—things that exist only because there are conscious observers of them. (Paper has objective physical properties, but paper is money only because persons relate to it that way.)


Searle says that he is more intelligent than his dog and his computer in some absolute, OI sense, because he can do things his dog and computer cannot. It is only in the OD sense that you could say that computers and calculators are more intelligent than we are. You can use “intelligence” in the OD sense provided you remember that it does not mean a computer is more intelligent in the OI sense. The same goes for computation. Machines compute analogously to the way we do, but they don’t compute intrinsically at all—they know nothing of human computation.


The basic problem with Kurzweil’s book is its assumption that increased computational power leads to consciousness. Searle says that the increased computational power of machines gives us no reason to believe they are duplicating consciousness. The only way to build conscious machines would be to duplicate the way brains work, and we don’t know how they work. In sum, behaving as if one is conscious is not the same as actually being conscious.


Summary – Computers cannot be conscious.


______________________________________________________________________


John Searle, “I Married A Computer,” review of The Age of Spiritual Machines, by Ray Kurzweil, New York Review of Books, April 8, 1999.
