Lee asked Scott Hawkins:
This question contains spoilers…
I cannot overstate how much I enjoyed Mount Char. I've been shuffling around in an existential fog for a couple of hours since finishing it... thanks for that! I'm curious to know the inspiration for the planes of reality being rooted in emotion (e.g. the plane of anguish) when most other fantastical happenings are explained rather scientifically by Carolyn?
Scott Hawkins
Hi Lee,
This is going to be a long answer. The TL;DR is "it's a hobby horse of mine going back to my undergrad days."
Back when I was in school I studied computer science with a focus on natural language processing. I spent a lot of time thinking about what it means to 'think' and what intelligence is, and other undergraduate-y questions.
At the time, much of the field was focusing on things like mechanical translation of natural language and speech recognition. I thought about these things too, and I more or less concluded that this stuff was amenable to computational solutions and that sooner or later someone would sort it out. (Hello Siri.)
I thought then and continue to think that whether or not that constitutes "artificial intelligence" depends largely on how you define the word "intelligence." That's a large, separate discussion.
Certainly there's some interesting stuff going on. The other day I had a Twitter chat with a guy in Brazil, in Portuguese, via Google Translate. Yeah, the grammar was probably a little off, but we communicated. That would have seemed like science fiction in 1993.
But I thought (and still think) that there is a much more difficult problem being overlooked. The gist of it is that a lot of human language is predicated on the assumption of common internal experiences. Pick any human language anywhere in the world and it will have some sort of word for 'pleasure' and 'pain.' The need for these words arises from internal experiences that are universal to humans. It's an implicit assumption.
However, the assumption doesn't hold true for machines, however many languages they speak. That raises the question of whether they can truly be said to "understand." (I'm simplifying a lengthy chain of thought here, but if you're interested, you might google John Searle's "Chinese room" argument.)
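(An illustrative aside, not part of the original answer: the Chinese room scenario is easy to sketch in code. The toy program below produces replies by mechanical rule-following; the rule book and phrases are invented for this example, and the point is that nothing in it could plausibly be said to understand the symbols it shuffles.)

```python
# A toy "Chinese room": canned stimulus-to-response rules, applied
# mechanically. The program can hold up its end of a (very limited)
# conversation without any internal experience of what the symbols mean.
# Illustrative only; the rule book is made up for this sketch.

RULE_BOOK = {
    "how are you?": "I'm fine, thanks. And you?",
    "what is pain?": "Pain is an unpleasant sensory experience.",
    "do you understand me?": "Of course I understand you.",
}

def room(message: str) -> str:
    """Look the incoming symbols up and emit the matching symbols.
    No step here requires, or produces, understanding."""
    return RULE_BOOK.get(message.lower().strip(), "Could you rephrase that?")

if __name__ == "__main__":
    print(room("Do you understand me?"))  # a confident "yes" the program cannot mean
```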
So, as we're seeing today, you've got machines that can demonstrate feats of mathematical intuition that are beyond all but the most gifted humans. But the inner workings of their "mind"--I think that's a fair use of the word--are such that they have no emotional state.
Is such a machine "intelligent"? Well, I don't know. Define "intelligent."
Anyway, that in turn raises the question of whether it might be possible to mechanically model the sensations of pleasure and pain. I think we're arguably getting close to a mechanical representation of something that could be described as "thought." But I remain pretty certain that no one anywhere has even the slightest clue as to how something like "pleasure" might be modeled by a computer.
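(Another illustrative aside: the nearest thing today's machine learning has to "pleasure" is the scalar reward signal in reinforcement learning, and the gap is visible right in the code. In the toy bandit learner below, a made-up example rather than anything from the answer, the "reward" is just a number folded into one line of arithmetic; nothing in the program experiences it.)

```python
# A minimal two-armed bandit learner. "Reward" biases future choices,
# which is the entire role pleasure-like signals play in systems like
# this: arithmetic, not sensation. Toy example; the setup is invented.

import random

payouts = {"lever_a": 0.3, "lever_b": 0.7}  # hidden win probabilities
values = {"lever_a": 0.0, "lever_b": 0.0}   # the agent's running estimates
ALPHA = 0.1                                  # learning rate

for step in range(1000):
    # Explore occasionally; otherwise pull the lever estimated best.
    if random.random() < 0.1:
        lever = random.choice(list(values))
    else:
        lever = max(values, key=values.get)
    reward = 1.0 if random.random() < payouts[lever] else 0.0
    # The whole "experience" of reward: one incremental update.
    values[lever] += ALPHA * (reward - values[lever])

print(values)  # lever_b's estimate drifts toward its 0.7 payout rate
```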
To my mind, the essence of the problem is that on a fundamental level no one understands what the word "pleasure" means. Ditto "pain." (Whether or not it's a good idea to make a machine that can suffer or enjoy is a separate question. I vote "no." I think all this hand-wringing over the singularity is nonsense, but if you throw something that could resemble a human motive into the mix all bets are off.)
So, to answer your question, I was kind of alluding to this chain of thought when I talked about the "plane of anguish." The idea was that Father had discovered that the human mind was basically a node of intersection between these higher dimensional planes. Sort of like a capacitor--every now and then something causes a connection, and you get a little spark. It's not a serious theory, just hand-waving, but I think the questions behind it are interesting.
HTH,
Scott
More Answered Questions
Natasha asked Scott Hawkins:
This question contains spoilers…
Thank you, sir, for providing something so original and unique. I loved it all, and I especially appreciated the refreshing details, e.g. not resurrecting characters like the cool President (a lot of authors, especially American ones, would, I believe). A question, if/when you find some time: why did Carolyn feel she had to kill all the librarians? Those that posed a threat I understand, but Peter, for example? Others?
Leslie Gay asked Scott Hawkins:
Hello hello! From your wonderful "homicidal nutjob librarians" primer, is the anecdote for David based on a Stephen King story? I am dying to know. I found your Reddit Q&A only now and enjoyed it immensely, all elements of it, but I'm so pleased I picked up the book knowing nothing about it! I have gone back and re-read portions (cough, the whole book) several times. Worth the 30 years. Your fan, Leslie