Casey Dorman's Blog

May 20, 2022

Prime Directive

The term “Prime Directive” arrived in our vocabulary via the original Star Trek series, where it is also referred to as “Starfleet General Order 1” and the “non-interference directive.” Curiously, although the directive is often referred to, it was never explicitly stated, leaving various writers and Star Trek characters to interpret it differently. It was also broken—a lot—by no less than Captain James Kirk and Captain Jean-Luc Picard, both of whom found numerous reasons to overrule its mandate.


The closest we can get to a precise statement of the Prime Directive lies outside the series, in the 1986 book The Federation, by Bernard Edward Menke & Rick David Stuart. They quote a fictional set of “Federation” articles and regulations as saying,

“As the right of each sentient species to live in accordance with its normal cultural evolution is considered sacred, no Starfleet personnel may interfere with the normal and healthy development of alien life and culture. Such interference includes introducing superior knowledge, strength, or technology to a world whose society is incapable of handling such advantages wisely. Starfleet personnel may not violate this Prime Directive, even to save their lives and/or their ship, unless they are acting to right an earlier violation or an accidental contamination of said culture. This directive takes precedence over any and all other considerations and carries with it the highest moral obligation.”

The above is a liberal interpretation of the directive, since by introducing the phrases “normal and healthy development” and “a world whose society is incapable of handling such advantages wisely,” it leaves room for a Starfleet commander to determine that a culture’s development was not “normal and healthy,” or that the culture was capable of handling such interference wisely, giving them permission to ignore the Prime Directive as much as they pleased.

As numerous episodes of the various iterations of Star Trek over the years have demonstrated, and as various critics have pointed out, a rigid application of the Prime Directive could lead to lies, absurdities or even disasters. Suppose, for instance, that a civilization is on the brink of developing space travel and its planet is soon to be wiped out by a gigantic asteroid. Would refusing to tell the civilization how to build spaceships and escape certain extinction be ethical? Keeping such information from them fulfills the Prime Directive. The crew of the Starship Enterprise were not allowed to tell a more primitive civilization that they were visitors from another planet who had traveled faster than the speed of light in their spaceship, because such knowledge could alter the course of that culture’s development. So, the Enterprise crew had to lie about themselves. Who is to say that their lie did less harm than the truth?

Imagine the situation in which the Enterprise’s visit to a planet is followed by an invasion by the highly aggressive Klingons. Would letting the civilization know that there are races out there traveling from planet to planet, some of them dangerous, have saved that culture from being destroyed? If a world were suffering an epidemic of a disease and the Star Trek crew knew how to prevent or cure it, would it really be more ethical not to reveal that information?

Just as following the Prime Directive could lead to disaster, violating it could and did lead to disaster in several episodes. In one case, a social scientist from the Federation tried to create a more orderly, efficient society on another world and instead created a duplicate of the Third Reich. In more than one episode, a well-meaning Federation visitor armed a group within a culture that was being abused and destroyed by another group, only to create a perpetual arms race and war on that planet.


The original Star Trek series was created during the Vietnam War, when the counterculture within the United States saw America’s interference in the internal affairs of Vietnam as immoral and disastrous. Such feelings resurfaced later with regard to U.S. involvement in Iraq and Afghanistan. In such times a non-interference policy may seem like the best rule for a powerful country to follow. But today, with Russian forces having invaded Ukraine and committed numerous atrocities, most Americans favor the United States taking an active role in supporting Ukraine and resisting Russia. Perhaps the Prime Directive wouldn’t be so popular today.

The sequel to my novel Ezekiel’s Brain is called Prime Directive. A crew of artificial intelligences in humanoid form, including Ezekiel, whose brain is a copy of a human’s, sets out to explore the galaxy looking for life. Virtually all the civilizations they encounter are human and are less technologically developed than they are. Their first dilemma is whether to tell these others that they are machines, not human beings, and that they have traveled faster than the speed of light from a distant star system. A complication is that, other than Ezekiel, the crew members, who are all AIs, are unable to lie. Later, the Delphi, the AIs’ spaceship, travels to a star system where the followers of a strict, fundamentalist religion rule with an iron hand, restrict women’s rights, imprison nonbelievers and bar scientists from studying anything that would challenge their religious myths. The Delphi crew is asked to intervene on behalf of the abused members of the civilization, who are mounting a rebellion. Later still, another group of aliens attacks this civilization with powerful weapons, and again the Delphi crew must decide whether or not to intervene.

The dilemmas in Prime Directive are fictitious, as were the ones in Star Trek, but they symbolize real situations that a powerful society sometimes finds itself in. What kind of rules are most ethical and helpful is a difficult matter to decide. The characters in my novel have no “Federation” manual to follow and must develop their rules as they go along. Should a Prime Directive be one of them? You’ll have to read the novel when it comes out to form your own opinion.
Published on May 20, 2022 08:20 Tags: ai, philosophy, scifi, startrek

Reading Proust and Writing Star Trek

The second paragraph of Henry James’ Washington Square reads: "It was an element in Doctor Sloper’s reputation that his learning and his skill were very evenly balanced; he was what you might call a scholarly doctor, and yet there was nothing abstract in his remedies—he always ordered you to take something. Though he was felt to be extremely thorough, he was not uncomfortably theoretic; and if he sometimes explained matters rather more minutely than might seem of use to the patient, he never went so far (like some practitioners one had heard of) as to trust to the explanation alone, but always left behind him an inscrutable prescription. There were some doctors that left the prescription without offering any explanation at all; and he did not belong to that class either, which was after all the most vulgar. It will be seen that I am describing a clever man; and this is really the reason why Doctor Sloper had become a local celebrity."

Most people would sense that this is not from a current piece of literature. In popular genre fiction particularly, such long sentences (these average 39 words), the telling rather than showing, and the narrator’s explicit conclusions about the doctor’s personality are frowned upon. Yet, in a single paragraph describing a particular behavior, James manages to tell us volumes about his character. I could read such literature all day, and its author remains one of my favorites.

Marcel Proust, describing how his eponymous character Swann reacted to hearing a familiar piece of music, wrote:


But now, like a confirmed invalid whom, all of a sudden, a change of air and surroundings, or a new course of treatment, or, as sometimes happens, an organic change in himself, spontaneous and unaccountable, seems to have so far recovered from his malady that he begins to envisage the possibility, hitherto beyond all hope, of starting to lead—and better late than never—a wholly different life, Swann found in himself, in the memory of the phrase that he had heard, in certain other sonatas which he had made people play over to him, to see whether he might not, perhaps, discover his phrase among them, the presence of one of those invisible realities in which he had ceased to believe, but to which, as though the music had had upon the moral barrenness from which he was suffering a sort of recreative influence, he was conscious once again of a desire, almost, indeed, of the power to consecrate his life.

This one-hundred-sixty-three-word sentence makes James look like a champion of brevity. It contains twenty-one commas, setting off cascading parenthetical expressions, which, at increasingly inner depths, reveal the elements of Swann’s character at this stage of his life: his hopelessness, his “moral barrenness,” his search for the catalyst that might uplift him to dedicate his life to something greater than himself. As with James, the author manages to do this with a description, this time of an inner reaction, of a single, small episode in his character’s life. His analogy with an invalid’s recovery provides the image for us to grasp the experience Swann is having, and is an example of Proust’s unique style of using analogy, simile and metaphor to explore how sensations provoke associations behind which, invariably, lie stories. His entire novel is famously engendered by an association to the taste of a small cake dipped in his tea. And, as with James, I find reading him irresistible, even for the second or third time.


Fascinated as I may be, my own writing task is to complete a science fiction novel about the adventures of a spaceship crew of androids, the sole inheritors of Earth after the extinction of its human population, as they search for life throughout the galaxy. It is Prime Directive, a sequel to my novel, Ezekiel’s Brain—the next novel in a science fiction saga with some resemblance to Star Trek, except the main characters are machines. The potential audience for my novel is most likely young adults, most of whom have never read, and many of whom have never heard of, James or Proust. Their familiarity with the types of characters and setting of the novel was gained through television series such as The Expanse, films such as Ex Machina and Star Wars, and video games such as Stellaris and Homeworld.

Despite the obvious schism between my literary interests and the novel I am writing, I have a strong, if perhaps irrational, belief that the rich writing with which I am enamored in James, Proust and many other “classic” authors, particularly from earlier eras, has something to offer as I craft my science fiction writing “style.”

Science fiction often involves telling, especially in presenting scientific or technical concepts central to a story. Often, rather than the narrator stepping back and providing a bit of technical background, the needed explanations are voiced by the story’s characters. This happens less often with technical devices, such as laser rifles, spaceship propulsion systems, or robotic bodies, or with descriptions of the alien settings that give exotic flavor to stories set on other worlds. In such cases, the omniscient narrator steps in.

Narrators need not be prosaic. An author such as Proust demonstrates that narration can come alive by embellishing descriptions with rich, figurative language. “The planet, tidally locked, with its yin-yang hemispheres, was a real-life Ginnungagap, the primordial realm of Norse legend: its yin a pitch-black Niflheim, a frozen expanse of glacial icesheets and icebound rivers, and its yang a blazingly bright Muspelheim, a barren waste of blackened earth and boiling oceans.”


In James, Proust and other literary writers, I am reminded of the power of character description. Science fiction has been said to emphasize setting and plot over character, but there is no reason that this must be the case. Arkady Martine, the author of A Memory Called Empire and A Desolation Called Peace, has shown that a science fiction novel can descend to the deepest levels of a character’s psyche and remain riveting. I admit that my robot characters, with electronic brains and barely existent emotions, present a challenge, but not an insuperable one. Ezekiel, the main character in the series, is an android whose brain, including his personality and memories, is an exact copy of his human creator’s. The other androids have unique personal identities and self-assigned genders. Siaree, a human empath from another planet, becomes a member of their crew, and the aliens they meet—good guys and bad guys—are mostly varieties of humans. There is plenty of room for deep character examination and description. Proust and James have provided me numerous examples of how to make these characters fascinating, unforgettable and real. Doing so is, of course, quite another thing.
Published on May 20, 2022 08:18 Tags: ai, henryjames, philosophy, proust, scifi

Another "Must Read" Science Fiction Novel

Award-winning Chinese science fiction author Cixin Liu has said, “Science fiction is a literature that belongs to all humankind. It portrays events of interest to all humanity, and thus science fiction should be the literary genre most accessible to readers of different nations.” I think this is true, or at least it can be. For science fiction to appeal to everyone on the planet, its stories must portray situations that are relevant to everyone, they must be written in a way that doesn’t exclude those whose cultural or societal beliefs fall into one political camp or another, and, most of all, there must be a literate world in which everyone has enough of their basic needs met to have time for leisure reading.

We are a long way from the ideal state described above, but some books are a movement toward it. Cixin Liu’s “The Three-Body Problem” represents a step in that direction. Liu lives in the People’s Republic of China. When I think of science fiction audiences, China doesn’t come immediately to mind, but that is because of my ignorance, not reality. “The Three-Body Problem” not only won the Hugo Award after its translation into English in 2014, but it also won China’s Galaxy Award for best science fiction in 2006, the year of its publication in China. Cixin Liu has won the Galaxy Award, which I didn’t even know existed, nine times.

“The Three-Body Problem” is hard science fiction, meaning that it is filled with science, some of it real, much of it speculative, with kernels of real science leading to wildly fantastic consequences. One of its themes is the overturning of the basic principles of modern physics, or at least the apparent overturning of them, since another theme is the deliberate undermining of belief in those principles. The underlying plot of the novel is the mutual discovery of another race in our galaxy, mutual in the sense that we discover them at the same time that they discover us.

The ideas contained in this novel are mind-boggling. What appears fanciful becomes less and less so as more of the science behind it is revealed, although the science, too, gets stretched until everything seems fanciful; I, as a reader, was never sure whether it was based on realistic science or not. That’s part of the entertaining quality of the book. The extraordinary discoveries come one after another, gradually unfolding the true plot that is determining the characters’ actions.

There are political criticisms in “The Three-Body Problem,” almost entirely of China’s Cultural Revolution of the 1960s and ’70s. As such, they are a criticism of constraining science for political or philosophical reasons. The author himself has made some political statements, almost entirely in favor of Chinese government policies, which have earned him enough suspicion in the U.S. that several Republican Congressmen objected when they heard that Netflix was creating a film version of his work. But modern Chinese politics are not an issue in the novel. Liu’s comments at the end of the English translation of the book make it clear that he hopes science fiction such as his can bring the world together.

A word about character development in “The Three-Body Problem.” The early portions of the book cover several years and skip from one character to another, many of whom die. Finally, the story settles down to a small set of regular characters. Some Western critics have complained that the characters are “shallow,” which may be valid when comparing the novel to many Western ones. I suspect that this reflects a difference between Western and Eastern cultures, as well as a difference between science fiction as a genre (at least old-style science fiction) and other fiction genres. Our Western mindset is to attribute the causes of a person’s behavior to elements of their personality: they are adventurous, courageous, lazy, lackadaisical, psychopathic, etc. Sociological research has suggested that many Eastern cultures tend to see the causes of behavior as due to events and circumstance or even luck, rather than to ongoing personality characteristics (it is a more-versus-less difference, rather than an either-or difference). Liu’s novel takes the latter approach, giving a detailed description of the circumstances leading characters to do what they do. It is not a lack of depth of character so much as a different approach to character motivation, one reflective of the overall culture of the writer. In “The Three-Body Problem,” this means the novel gradually provides the basis for different characters’ otherwise puzzling behavior through after-the-fact stories of what happened in their lives to cause them to behave as they do.

I found this book to be absolutely intriguing and impossible to put down until I got to its end. I am eager to read the two novels that are its sequels. It is science fiction at its very best.

If you enjoy hard science fiction using a wildly imaginative plot based on real science, take a look at my most recent novel, Ezekiel’s Brain.

Published on May 20, 2022 08:13 Tags: ai, books, philosophy, scifi

How to Construct an Alien

In her book Artificial You, philosopher Susan Schneider says that if aliens visit Earth, they will most likely be artificial intelligences. What’s more, they may well be non-conscious artificial intelligences. Her reasoning is that the distance between star systems in our galaxy is so great that it is unlikely that organic creatures could survive the trip (in scifi books and films this problem is usually overcome by cryogenic freezing of the spaceship’s occupants for the majority of the voyage). A civilization advanced enough to send a probe to another star system would also be sophisticated enough to construct an AI to pilot the ship and carry out the mission (note that virtually all of our probes to the outer planets are unmanned).

But why would such an AI not be conscious? Schneider’s reasoning is that consciousness is not required in order to be intelligent, and the extra energy needed to invest an AI with consciousness would not be worth the effort. As I said in my last post, Is Consciousness an Epiphenomenon, I agree not only that consciousness is unnecessary for an entity—either machine or organic—to be intelligent, but also that, when it is present, it does not play a role in determining behavior. Consciousness is real, but it is simply our awareness of the outcomes of our brain operations, and being aware of them allows us to communicate them to others, which, in a group, helps us predict each other’s behavior.

So how would I construct an alien? First, you may ask, why would I want to construct an alien and, in fact, what does constructing an alien even mean? I’m writing a sequel to my near- (could be, might be, someday it’s within the realm of possibility it will be) best-selling novel, Ezekiel’s Brain. In the sequel, the crew of the spaceship Delphi, who are AIs from our solar system, arrive at a distant planet inhabited by a race of humans, but on which there are aliens who appeared decades earlier: mysterious, never actually seen beings known as Snatchers, because they kidnap children and keep them for several years before letting them go. My job, as the author, is to create the Snatchers.

I took to heart Susan Schneider’s conjecture that alien visitors would be AIs, not organic creatures. After all, the Delphi’s entire crew (except for one human, not from Earth, whom they added along the way) are AIs who travel from star system to star system. But if there is more than one Snatcher, then they have to communicate. In fact, if there is more than one of them, piloting their ship must have been a cooperative group effort. I tried to imagine how non-conscious beings could communicate with each other. I assumed that they could not have a language as we do, although that might not be true. I thought about bees, who communicate by choreographed movements, and flocks of birds or schools of fish, which move in graceful, often complex patterns. It turns out that, at least in some birds, any particular bird picks up information (it’s not clear how) from its nearest neighbors, and all of them perform the same movement at once. The movement of the entire flock may be initiated as a reflex by one bird on the periphery in response to detection of a predator. Almost immediately, the entire flock makes the same movement.
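The nearest-neighbor rule in flocks is simple enough to sketch in code. The toy simulation below is my own invention for illustration, not ornithology: the function names and the choice of three neighbors are assumptions. It shows how one startled bird's turn propagates through a whole flock when each bird copies any turned bird among its nearest neighbors:

```python
import math

def nearest_neighbors(flock, i, k=3):
    """Indices of the k birds closest to bird i."""
    dists = sorted(
        (math.dist(flock[i], flock[j]), j)
        for j in range(len(flock)) if j != i
    )
    return [j for _, j in dists[:k]]

def propagate_turn(flock, start, k=3):
    """One bird startles; each step, an unturned bird turns if any of its
    nearest neighbors has turned. Returns the steps until the whole flock turns."""
    turned = {start}
    steps = 0
    while len(turned) < len(flock):
        newly = {
            i for i in range(len(flock))
            if i not in turned
            and any(j in turned for j in nearest_neighbors(flock, i, k))
        }
        if not newly:  # an isolated bird would never get the signal
            break
        turned |= newly
        steps += 1
    return steps

# Ten birds in a line, one wing-length apart; the bird at one end startles.
flock = [(float(x), 0.0) for x in range(10)]
print(propagate_turn(flock, start=0))  # → 4
```

No bird ever senses more than its immediate neighborhood, yet the whole flock turns within a handful of steps, which is roughly what the research on starling flocks describes.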

Flock or school behavior didn’t seem apt for communication among members of a crew on a starship. There would be no sense in having everyone behave the same way. I next examined direct sensing between AIs of patterns of activation across neural networks. It would be like scifi scenarios, or Elon Musk’s goal, of machines that can read thoughts directly from our brains by picking up the pattern of electrical charges moving through circuits of neurons. At first glance, it seems this would require a translator. What good would knowing the pattern of neural firing be if we couldn’t translate it into thoughts? But that was me thinking in terms of consciousness. If one brain—artificial or organic—used the same pattern of neural firing as another brain did, then sensing that pattern would immediately tell it what the other brain was thinking. But this assumes a within-species universal “language of thought,” in the sense of using the same patterns of neural firing for similar thoughts. That’s what language is for us, at least at some level deeper than regional languages and dialects.

Without requiring that they be identical, similar patterns of neural network firing (in either AI brains or organic brains) could be correlated with behaviors in much the same way we correlate patterns of speech with behaviors when we learn what those patterns mean. We don’t all use exactly the same pattern to express the same thought, but we quickly learn how to interpret different variations of an idea similarly. The same could be done by sensing another’s pattern of neural firing. In essence, it becomes a language. The difference is that the sender can’t decide not to express it: whatever is thought can be sensed by another being, so long as that being has the ability to directly sense neural patterns in others.
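That learned correlation between variable patterns and shared meanings works like a nearest-centroid decoder. The sketch below is a loose analogy, not neuroscience; the "meanings," patterns and numbers are all invented:

```python
# Each "thought" is a pattern of firing rates across a few units. Patterns
# with the same meaning vary but cluster together, so an observer can decode
# a new pattern by finding the closest known cluster.

def centroid(patterns):
    """Average pattern of a cluster of observations."""
    return [sum(col) / len(col) for col in zip(*patterns)]

def dist2(a, b):
    """Squared distance between two firing patterns."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def decode(pattern, vocabulary):
    """Map an observed firing pattern to the meaning whose centroid is nearest."""
    return min(vocabulary, key=lambda meaning: dist2(pattern, vocabulary[meaning]))

# Invented example: two meanings, each learned from slightly varying observations.
observations = {
    "approach": [[0.9, 0.1, 0.2], [0.8, 0.2, 0.1]],
    "flee":     [[0.1, 0.9, 0.8], [0.2, 0.8, 0.9]],
}
vocabulary = {m: centroid(p) for m, p in observations.items()}

# A new pattern, not identical to any observed one, is still interpretable.
print(decode([0.85, 0.15, 0.15], vocabulary))  # → approach
```

The decoder never needs identical patterns, only patterns that cluster by meaning, which is the sense in which direct sensing could become a language.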

So now I have one possibility for my aliens. They are AIs that directly sense each other’s brain workings. The next step is to decide whether the organic creatures who created them a) are still surviving on some distant planet or have become extinct, and b) are conscious beings or operate like the AIs they sent on this mission. Additionally, I have to figure out how the Delphi crew learns how these alien Snatchers communicate, whether they can establish any type of communication with them, and whether they are friendly. I’ve got some ideas. Lots of fun.
Published on May 20, 2022 08:07 Tags: ai, books, philosophy, scifi

June 6, 2021

Why Do AI Novels Always End Badly?

I got the idea for my novel Ezekiel's Brain from reading Nick Bostrom’s Superintelligence. Bostrom defines superintelligence as “any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest.” A superintelligence could be an alien brain, but in both Bostrom’s book and my novel, it is an artificial intelligence. Thinkers such as Bostrom, social scientist Iason Gabriel, and physicist Max Tegmark regard superintelligence as potentially dangerous. The reason a superintelligent AI is dangerous is that it would have the intelligence and power to outthink humans and, if it controlled aspects of the world outside its own thinking, it could cause catastrophic damage to humans and their world.

No one has built a superintelligent AI, but most people who ought to know think it will happen sometime between ten and fifty years from now. This brings to the fore the question of how to control an entity that is smarter than you are. In other words, how do you get it to do what you want it to do and not do what you don’t want it to do? This is called the problem of AI alignment.

Why do many writers about AI treat their subject as if it represents an alien intelligence, rather than just a supercomplex tool? The answer is that artificial intelligence involves machine learning. Instead of programming a machine’s actions, we allow it to use feedback from its attempts to reach a goal to guide its behavior. We don’t design the method of solving the problem, we design the method of learning that allows it to solve the problem. With a superintelligent AI, the machine could even modify its method of learning to maximize attaining its goals.
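The distinction drawn above, designing the method of learning rather than the solution itself, can be sketched in a few lines. In this toy example (the goal function, step size, and numbers are arbitrary stand-ins of my own), the programmer writes only the learning rule and the feedback signal; the learner finds the answer on its own:

```python
import random

def learn(score, start, steps=1000, seed=0):
    """We specify HOW to learn (try a small tweak, keep it if the feedback
    improves), not the solution itself; the learner finds that on its own."""
    rng = random.Random(seed)
    best = start
    for _ in range(steps):
        candidate = best + rng.uniform(-0.5, 0.5)
        if score(candidate) > score(best):  # feedback guides behavior
            best = candidate
    return best

# The "goal" is only a score. The programmer never states that 3.0 is the
# answer, yet the learner converges on it from feedback alone.
goal = lambda x: -(x - 3.0) ** 2
print(round(learn(goal, start=0.0), 2))
```

A superintelligent version of this, as the paragraph notes, could go a step further and modify the learning rule itself, not just the candidate answers.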

The difficulty arises when a superintelligent AI that works faster than our brains do (remember, electrical current traveling across transistors on a computer chip moves 10 million times faster than current traveling across a neural circuit) learns how to solve a problem in ways we are not able to understand. How do we guarantee that it will choose the solution that best meets our needs?

What has been declared the “World’s Funniest Joke” goes like this: Two hunters are out in the woods when one of them collapses. He doesn’t seem to be breathing and his eyes are glazed. The other guy whips out his phone and calls the emergency services. He gasps, “My friend is dead! What can I do?” The operator says, “Calm down. I can help. First, let’s make sure he’s dead.” There is a silence; then a gunshot is heard. Back on the phone, the guy says, “OK, now what?”

This joke is instructive. Imagine the operator is the programmer and the hunter is the AI. When the operator says, “First let’s make sure he’s dead” we all know they meant to check on the status of the wounded friend. But the hunter takes their instruction literally. He misunderstands the operator’s intention.

Eliezer Yudkowsky gives another example, this time from the Disney film Fantasia. Remember “The Sorcerer’s Apprentice,” portrayed in the film by Mickey Mouse? He learns a magic spell and instructs a broom to carry buckets of water to fill a cauldron.


Everything seems fine until the cauldron is filled and the broom continues to bring more buckets of water. Mickey forgot to tell it when to stop. When he tries to chop the broom in half to stop it, he ends up with two brooms carrying buckets of water. Yudkowsky points out that getting a computer to know when to stop doing what you asked it to do is not as simple as it seems.

Nick Bostrom takes the problem even further with his so-called “paperclip apocalypse” thought experiment. A programmer instructs a superintelligent computer to make paperclips. Since it is smarter than humans, it learns the fastest, most efficient way to do this, but it doesn’t stop. When the programmer tries to shut it down, it resists and, being smarter than he is, keeps producing paperclips. It has learned that if it is shut down, it cannot fulfill its main function, which is to produce paperclips.

If it runs out of raw materials, it begins using whatever else it can find, turning cars, buildings, etc. into fodder for more paperclips. If all you’ve got is an instruction to make paperclips, all the world becomes a resource for making more paperclips.

All of these catastrophic scenarios highlight the alignment problem. How do you get a superintelligent AI to carry out your intention without unintended negative consequences? One avenue for solving this problem is to learn how to be precise in giving instructions, but numerous examples show that this is not as easy as it sounds. A computer will respond literally to what it is instructed to do, and the more capable and intelligent it is, the more resources it will bring to bear on solving the problem in a way that doesn’t allow a human to subvert it.

The other route to finding a solution to the alignment problem is to give the computer “values” that guarantee it won’t perform actions that cause harm. This has been termed creating a “friendly” AI. It, too, is easier said than done. In the first place, we humans don’t agree on what our values are. In the second, specifying what we mean to a machine that will take us literally is just as daunting. When I was working on my novel, Ezekiel’s Brain, I played around with several possibilities. One of these was telling the superintelligent AI to do “what was best for humanity.” Unfortunately, it ended up eliminating much of the population in crowded, poverty-stricken regions of the world. It was not a pretty sight. In another scenario, the AI was asked simply to rid the human race of illness and premature death; the world became so overpopulated that life was unmanageable.
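Those fictional failures, like the paperclip story, are instances of the same mechanism: the optimizer maximizes exactly the objective it was given, and anything left out of that objective is invisible to it. Here is a toy sketch; the world model, action names and numbers are all invented for illustration:

```python
# A greedy optimizer allocates effort step by step, judged ONLY by the stated
# objective. "Wellbeing" is never mentioned in the objective, so the optimizer
# erodes it without ever "noticing": the literal instruction is all it sees.

def optimize(objective, actions, budget):
    state = {"paperclips": 0, "wellbeing": 100}
    for _ in range(budget):
        # Score each action on a copy of the state, then apply the best one.
        best = max(actions, key=lambda act: objective(act(dict(state))))
        state = best(state)
    return state

def build_clips(state):   # great for the objective, costly for everything else
    state["paperclips"] += 10
    state["wellbeing"] -= 5
    return state

def tend_people(state):   # invisible to the objective, so never chosen
    state["wellbeing"] += 5
    return state

stated_goal = lambda s: s["paperclips"]  # what we wrote down, nothing more
print(optimize(stated_goal, [build_clips, tend_people], budget=10))
# → {'paperclips': 100, 'wellbeing': 50}
```

Writing a `stated_goal` that captures everything we actually care about, rather than a convenient proxy, is the whole difficulty of the value-alignment route.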


Achieving AI alignment with what is best for humans is a challenging task, one still so much in its infancy that it consists mainly of conjectures and thought experiments. But computers are getting smarter, and they are doing so faster than we are figuring out how to ensure that they are friendly. Right now, the problem is still science fiction, and perhaps some of the answers will come from that realm. It wouldn’t be the first time that our artists of the imagination led the way and science and technology followed. I gave it a try in Ezekiel’s Brain. You might take a look and see what you think.


Buy Ezekiel’s Brain on Amazon.
Published on June 06, 2021 09:17

May 25, 2021

Neuralink, BCIs and AI

In 2004, I wrote I, Carlos, a novel in which a new form of entertainment, called neurostories, involved implanting a computer chip inside a person’s brain so they could experience a story as a participant, their sensory organs bypassed and the neural circuits to which those organs connected stimulated directly by impulses from the chip. In my novel, the result was disastrous. A skilled marksman and martial arts expert suffered a heart attack during a neurostory episode, parts of his brain were damaged, and the neurostory chip took over, directing his thoughts and behavior. On the chip in his head was a modern-day version of the film “The Day of the Jackal,” in which an assassin sets out to murder not Charles de Gaulle, as in the original novel and film, but the President of the United States.

I, Carlos was imaginative in 2004, but less so today. New, though still experimental, devices such as Elon Musk’s Neuralink and other brain-computer interfaces (BCIs) and brain-machine interfaces (BMIs) have made neurostories and other applications of electronic components embedded in the brain possible. The first applications of such devices are mostly aimed at restoring function to persons with spinal cord injuries. Thoughts can be translated into movements by using what Musk calls an electronic shunt that circumvents the severed region of the spinal cord, innervating the neurons lower on the cord that carry messages to the muscles of the arms and legs. The person thinks about moving, and his limbs follow his thoughts.

In one application, recently described in Scientific American, a team from Stanford University developed an implant that allows a person to think about drawing a letter; electrodes implanted in his brain convey the messages to a computer whose cursor draws the letters, eventually forming words. This is a BCI, as the brain communicates directly with a computer.

One of the difficulties of using BCIs is the implant process. A company called Synchron has developed a device called the Stentrode, a stent inserted through a vein in the neck that then fans out electrodes which read neural signals through the walls of the vein. The device can read movement thoughts and convey them, via a computer, to a robotic arm that carries out the action. It is being tested with persons with spinal cord injuries.

Elon Musk’s Neuralink can operate like the BCIs described above, but, in addition to interfacing with a computer, it can receive signals and relay them to the brain. As Musk says, it can be like having your iPhone inside your head, except that you don’t need to physically text or speak your responses; you just think them. The hope is that it can also substitute for damaged areas of the brain after strokes or as dementia develops, conveying messages from one brain region to another. He has also mentioned watching films in one’s head, which is eerily like the plot of I, Carlos.

The field of brain implants is moving faster than anyone would have thought 20 years ago, and faster than I thought when I wrote I, Carlos. In my latest novel, Ezekiel’s Brain, which begins in 2023, just two years from now, the main character, Ezekiel Job, is a neuroscientist who specializes in brain implants and brain-computer interfaces. He uses the latest technology, which includes neuroprosthetics: devices that, instead of interfacing with a computer, actually replace damaged neural structures with artificial, electronic ones. These are called bio-hybrids, biomimetic neurons, or biomimetic neural circuits. They interface with real brain neurons but mimic natural circuits and sometimes even outperform them (one experimental biomimetic circuit mimicked the auditory neural circuits that allow barn owls to be skilled night hunters using only auditory cues; the artificial circuit outperformed the actual barn owl “by orders of magnitude”). From there it was only a hop, skip, and a jump to copying the circuitry of an entire brain, his own, this time by scanning himself and duplicating the circuitry using a 3-D printer.

Ezekiel Job’s experiment didn’t backfire as the neurostories in I, Carlos did, but the result, a computerized AI named Ezekiel with the same personality and memories as Ezekiel Job, outlived the human Ezekiel and, two hundred years later, finds himself amidst an entire civilization of AIs. That’s another story, but the underlying idea is that today’s BCIs, BMIs, biomimetic circuits, and neuroprosthetics are the beginning of what is almost surely an inexorable march toward replacing our organic brains with electronic ones. First the bio-hybrid circuits, then the cyborgs, and finally the androids with AI brains. Ezekiel’s Brain.
Published on May 25, 2021 08:38 Tags: ai, artificial-intelligence, neuralink, scifi

May 9, 2021

Understanding Aliens

Our human minds were constructed over hundreds of thousands of years (actually millions, considering pre-human evolution), cellular circuit by cellular circuit, to allow us to survive and reproduce in Earth’s environment. As amateur philosophers are prone to make much of, the world we see and how we interpret it are not one and the same with the world that is “out there,” surrounding us. Like the sounds dogs hear but we don’t, or the nonvisible wavelengths of light, or the magnetic fields we are unaware of except through artificial devices, we see, hear, and sense what is necessary for our survival. Dogs, insects, birds, and ocean creatures may sense what we do not, so what was selected as necessary for human survival was not the only option.

Even more important than what we sense is how we interpret and understand it. Psychologists use various illusions, such as the Müller-Lyer illusion or the Ebbinghaus illusion, to show us that our brains make decisions below our level of consciousness about how to interpret our environment. Kahneman and Tversky similarly demonstrated that our logical thinking is riddled with “errors,” mostly a result of assumptions and shortcuts built into our way of thinking because they allowed faster, albeit less accurate, decision making, which probably assisted our survival in the past.

We have only a vague idea how many of our fellow species on earth share our sensory and cognitive biases and see the world the way we do. Hive insects probably don’t. Whales and porpoises are intelligent, but what reason is there to think that creatures with no hands, who live in water, would have the same perceptual and intellectual processes we possess? As the philosopher Thomas Nagel famously asked, “What is it like to be a bat?” His point was that human understanding is limited. As Donald O. Hebb observed decades ago, basic physical properties of the world place limits on what perceptual and cognitive systems can do, because there is a common world to which all species must adapt. But what about creatures from another world?

The great Polish philosopher and writer, Stanislaw Lem, addressed this issue in his science fiction novels, The Invincible and Solaris. In The Invincible, the aliens are tiny automata, descended from small robotic assistants to members of an alien race, which crash-landed on a planet. Over eons, the automata evolve into a collection of tiny “flies,” which, although not individually conscious or possessed of reasoning, use evolved herd behaviors to destroy their alien masters and all other living creatures on the planet’s surface, including the humans who come to visit.

In Solaris, humans discover a planet with only a single creature on it: a massive, living ocean, which obeys higher-order mathematical principles and has the ability to create copies of the humans’ most intimate memories. Its purpose, if it has one, and its way of thinking are incomprehensible to humans, including the main character, who learns that there are ways of being that are simply beyond the comprehension of men, because the concepts by which we think and perceive set a limit on our understanding.

Lem’s conception of aliens is in the minority among science fiction accounts of other species humans might encounter. Most create their aliens to resemble humans.

Without resorting to alien encounters, humans may right now be developing artificial intelligences that think differently than we do. There is certainly no reason to build human perceptual or cognitive biases into the way a machine perceives or thinks. And because machine learning allows feedback-based modification of the thinking mechanisms themselves, and such modifications don’t require long time periods from one “generation” to the next, machine intellectual evolution can proceed rapidly, in fact at breakneck speed compared to human brain evolution. We could create an AI that soon operates as differently from our way of thinking as any of Lem’s aliens. Obviously, if we don’t understand how it thinks, we have little chance of controlling such a machine.
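The speed difference is easy to demonstrate. The sketch below (a toy illustration under simplified assumptions, not a model of any real AI system) runs a simple (1+1) evolution strategy: mutate a candidate, keep the mutant if it scores better. Each loop pass is one “generation,” and an ordinary laptop executes thousands of them in a fraction of a second, where biological generations take years or decades.

```python
import random

# A (1+1) evolution strategy: each iteration is one "generation" --
# mutate, evaluate, select. Ten thousand generations run in milliseconds.

def fitness(x):
    return -(x - 3.0) ** 2   # fitness peaks at x = 3

random.seed(0)
candidate = 0.0
for generation in range(10_000):
    mutant = candidate + random.gauss(0, 0.1)  # random variation
    if fitness(mutant) > fitness(candidate):
        candidate = mutant                     # selection: keep improvements

print(round(candidate, 2))   # converges near the optimum at x = 3
```

The point is not the algorithm, which is deliberately primitive, but the tempo: when variation and selection happen in software, evolutionary time collapses from millennia to seconds.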

Some of these ideas are approached in my new novel, Ezekiel’s Brain. Approached, rather than embraced, because, as a novel, its purpose was to introduce a species of AI that the reader could understand. But in further books in the “Voyages of the Delphi” series, of which Ezekiel’s Brain is the first novel, encounters with other alien species will bring these issues up, much as Lem did in his writing. It is a difficult idea to contain in a novel’s plot, because of the paradox: how do you describe something to the reader that, by its very nature, the human mind is not built to comprehend? I have a lot of work to do.

For the time being, I urge you to begin the journey by reading Ezekiel’s Brain, available at Amazon in Kindle and paperback editions.
Published on May 09, 2021 11:45 Tags: scifi-ai-space
