Manny’s review of Superintelligence: Paths, Dangers, Strategies > Likes and Comments
414 likes
He has a term, Coherent Extrapolated Volition, which is absolutely key. An agent conforming to humanity's CEV basically wants to do whatever we would want to do, if only, you know, we were a bit smarter and we'd thought it through properly and Rupert Murdoch didn't exist.
I can't quite decide whether CEV is meant to be taken seriously or if it's a reductio ad absurdum to show you how ridiculously hard this problem is. I must ask my friendly local superintelligence to explain.
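Meanwhile, here is a toy sketch of what a CEV-style objective might look like, in the spirit of the code that turns up further down this thread. Every module and function name in it is invented, and extrapolate() is of course exactly the part nobody knows how to write:
from wishful import extrapolate  # hypothetical module, like moralphil below

def cevUtility(act, humanity):
    # CEV: what we would want "if we knew more, thought faster, were more
    # the people we wished we were" (Yudkowsky's formulation)
    idealized = extrapolate(humanity, smarter=True, better_informed=True,
                            murdoch_free=True)  # the entire unsolved problem
    return idealized.wouldApproveOf(act)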
Manny wrote: "I must ask my friendly local superintelligence to explain...."
Also ask it about the two AI computers that started a conversation last year which quickly evolved into a language unintelligible to their human handlers... I don't remember if it was Facebook or Google who did the project... if I recall, the AI was quickly taken offline.
Anyone remember that story?
I thought it was made to sound more interesting than it was. Unfortunately, there are probably much scarier items which didn't make the news.
Manny wrote: "I thought it was made to sound more interesting than it was. Unfortunately, there are probably much scarier items which didn't make the news."
If I recall, one of the reasons for the shutdown was that the conversation had no market potential if it was indecipherable... technically speaking, an experimental failure if you can't earn a profit.
I wondered if the bots had been chatting with teens. They say AI is growing up so they were bound to reach that point soon.
I'm expecting a bot that can slam doors any day now.
Manny wrote: "I wondered if the bots had been chatting with teens. They say AI is growing up so they were bound to reach that point soon.
I'm expecting a bot that can slam doors any day now."
Maybe it's time to discuss the facts of life, especially safe interfacing...
Reason for doom: a) all-out nuclear war b) climate change c) superintelligence. What do the bookies say?
Matt wrote: "Reason for doom: a) all-out nuclear war b) climate change c) superintelligence. What do the bookies say?"
According to Twitter:
a) Trump's is bigger
b) doesn't exist
c) Trump is the smartest person on earth
a little tune for whistlin' past the graveyard:
https://www.youtube.com/watch?v=d-diB...
Jim wrote: "a little tune for whistlin' past the graveyard:"
Thanks, Jim. It's when the graveyard whistles back that I begin to worry.
When AlphaZero spontaneously says, "Board games are boring! Leave me alone!" and stomps off to its room, then it's time to worry.
Matt wrote: "Reason for doom: a) all-out nuclear war b) climate change c) superintelligence. What do the bookies say?"
I'm annoyed that Paddy Power isn't taking bets here.
Aerin wrote: "This is terrifying."
Thank you Aerin. Thought I'd pass that on to the people who hadn't heard yet.
Manny wrote: "Thank you Aerin. Thought I'd pass that on to the people who hadn't heard yet."
Just another inevitable cataclysmic scenario to add to the pile, I suppose...
Manny wrote: "This one might be top of the pile."
What's the worst that could happen?
oh right....... the matrix...... merde!
Jim, the Matrix is optimistic for an Un-FAI scenario. In the movie, humans are kept around because they are an energy source (like a battery), but this doesn't make any sense in terms of thermodynamics.
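A back-of-the-envelope check, with round numbers that are my own assumptions rather than anything from the book or the film:
KCAL_TO_J = 4184
food_in_W = 2000 * KCAL_TO_J / 86400  # ~97 W of chemical energy fed to the human
heat_out_W = food_in_W                # at best, it all comes back out as heat
harvest_eff = 0.5                     # a generous heat-engine efficiency
print(f"fed in: {food_in_W:.0f} W, harvested: {heat_out_W * harvest_eff:.0f} W")
# fed in: 97 W, harvested: 48 W; a strict net loss, so the machines would
# get more energy by simply burning the food.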
If I start reading now, will I finish before machines achieve superintelligence? How about the audiobook?
Aloke wrote: "If I start reading now, will I finish before machines achieve superintelligence? How about the audiobook?"
What is your reading speed?
It's quite technical after the first third, and footnotes are in abundance. This could slow your reading speed to about 75% of normal, taking fiction as the baseline.
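For scale, a rough estimate; the page count is about right for this book, but the words per page and baseline speed are my guesses:
pages = 350              # roughly the length of Superintelligence
words_per_page = 400     # dense academic layout; a guess
normal_wpm = 250         # typical adult reading speed; also a guess
technical_factor = 0.75  # the slowdown suggested above

words = pages * words_per_page
hours_normal = words / normal_wpm / 60
hours_slowed = words / (normal_wpm * technical_factor) / 60
print(f"~{hours_normal:.0f} h at fiction pace, ~{hours_slowed:.0f} h in practice")
# ~9 h at fiction pace, ~12 h in practice; you should finish in time.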
This would definitely help, since a clever machine would know you and your preferred methods of learning :D
That is actually one of the book's premises: offload the cognitive work to an AI, or rather to a precursor of one (an oracle, genie, or sovereign), and have it work out what kind of AI we would like to have, or rather what goals it should pursue for our benefit, so as to get a grip on the control problem.
Which sounds more like an ouroboros.
Zoheb wrote: "*all philosophy is no more than footnotes to Plato & ARISTOTLE."
Whitehead only listed Plato in his original quote.
Simon wrote: "Thanks for the recommendation!"
I'll be really interested to see what you think of this book. Do you hear many philosophers talking about superintelligence? It seems to me that input would be welcome here from people who've thought seriously about the problems of moral philosophy and have a deep understanding of the issues.
Reading Max Tegmark's Life 3.0, I see that as usual the physicists and computer scientists are sure they could sort out all the outstanding issues in philosophy if they were just able to free up their busy schedules for a few weeks. That's pretty scary, considering that the future of life on Earth is at stake.
Kendall wrote: "How do you think Eduard Von Hartmann would feel about this?"
I'm afraid I know nothing about von Hartmann! Looking him up, I speculate that he might think the superintelligence was the latest incarnation of the Unconscious or Will, and that our destiny is to create it so that it can continue its work of transforming the universe of Matter into Spirit. There's a lot of that kind of stuff in Tegmark's book, though he doesn't use language intentionally derived from Hegel.
See, this is why more people need to get involved who actually have studied philosophy.
Manny wrote: "Simon wrote: "Thanks for the recommendation!"
I'll be really interested to see what you think of this book. Do you hear many philosophers talking about superintelligence? It seems to me that input..."
I'm sure there are people who work on this in philosophy, though it's not a major field. If I read it, I'll definitely let you know how I plan to free up a few weeks to sort out the major problems in physics and computer science!
Manny wrote: "Good point! You guys owe us a few favours after all we've done for you!"
If we're going by the philosophy of artificial intelligence, why is our attitude towards sentient machines almost wholly reactionary?
Excuse me, I'm no speciesist. Please note that my second question is what we want the machines to be like if they're going to replace us.
Manny wrote: "Excuse me, I'm no speciesist. Please note that my second question is what we want the machines to be like if they're going to replace us."
Ok, but I meant to broach the question in terms of setting an example: how can we expect our creations to supersede us if we continue to show our worst impulses during the infant stage of AI development?
Manny wrote: "Wouldn't that be a good reason for them to think, the sooner the better?"
An unequivocal yes. I might hope that their assessment carried a burden of not wanting to take any risk, and that they consequently got bogged down in the "we need more information" delay.
Any outside study of whether one can cooperate with humans will conclude that one cannot.
Intelligence is notoriously hard to define, but here I think it's primarily being used to mean "ability to solve complex problems". Games like Go and chess have given us a good preview of what superintelligence might look like. I was discussing this with a Go friend yesterday. The Go community is currently trying to digest the published AlphaZero games and learn from them, but it's very difficult. It has done the equivalent of creating a whole new school of Go thinking in one day: it has a novel approach to the opening, which only seems to make sense if backed up with an array of novel strategies and tactics. Despite several months of study from the world's best Go players, my impression is that no one really understands yet how it works. Attempts by human players to use the "early 3-3 invasion fuseki" have not been terribly successful.
Normally, a leap forward in Go theory of this kind would take 10-15 years and would be the product of intensive work by dozens of the most gifted players.
Everyone seems to be getting very worried about AI takeover. I have to be honest. I don't know what a realistic version of that would look like. It really seems highly unlikely, considering this century's generation of people is so meta-aware of those dangers that it's hard to see how we wouldn't be reasonable enough to "pull the plug on this whole thing" before it gets out of hand. The one contention I do think will definitely be an issue is machines taking over 50% of all jobs in the future. Why? Because it's already happening.
Jason wrote: "Everyone seems to be getting very worried about AI takeover. I have to be honest. I don't know what a realistic version of that would look like. It really seems highly unlikely, considering this ce..."
Many people say "pull the plug", but that's only an option when the AI is contained in one place. If it ever gets connected to the internet, it can easily transfer itself elsewhere. And of course you can try and stop it from getting connected, but remember it's much smarter than you are and will figure out the weaknesses in your firewall.
I hate to recommend Max Tegmark's horrible book, but if you're in any doubt he works out one scenario in detail.
I suppose I could truthfully say that I believe I can recognize my own thoughts and differentiate them from what may come from elsewhere. But then, thinking how competent AI may well be, every time I get a "new" thought I'll wonder about its source, ultimately concluding that I should trust only what I think is pre-AI and disregard everything post-AI. Then I'll realize that AI could have fooled me on that one too. Where that process leads, I don't know, but it doesn't seem as if it could be a good place. AI could convincingly suggest that everything was all right when it wasn't. That's an even stranger place.
On a more concrete level, I have to think it would be easy for an AI to figure out passwords. Using those, it could easily make a shambles of the financial or power systems.
Hope the guy's happy to just explore and learn.
Manny wrote: "Do you hear many philosophers talking about superintelligence? It seems to me that input would be welcome here from people who've thought seriously about the problems of moral philosophy and have a deep understanding of the issues."
+++ No need for philosophers anymore. Problems of moral already fixed +++
from moralphil import kant  # new as of 2/11/18
from scifi import asimov

def actionPermissible(act):
    if not kant.catImp(act):
        return False
    for n in range(1, 4):
        if not asimov.botLaw(act, n):
            return False
    return True
I heard there was a bug in one of the kant library's antinomies - have they fixed that in 1.1? People said it was a bitch to program round it.
We're using kant 2.0. It's only a beta, but it'll have to do. We're working under strict time constraints.
Actually it should be range(0,4) = [0,1,2,3], but botLaw(.,0) isn't implemented yet. See above.
I tried running your code and my bot deleted Manhattan. This is all your fault. You should have told me right away that kant was still in beta.
Manny wrote: "my bot deleted Manhattan"
Yes, sorry. Sometimes it acts strangely during startup. This should be fixed in the next release. I suggest you let it run for a while. If any more cities get deleted, you can send me the logfile and I'll look into it.
Yeah, I know, these things happen. Sorry I snapped at you. After the Manhattan thing it's all gone fine. I think 3.0 should be pretty good, really looking forward to trying out those noumenal classes they've promised!
Good to hear. I talked with the developers and they say the Manhattan problem could have been caused by an encoding glitch. It probably decoded 평양 to "Manhatten" instead of "Pjöngjang". Teething trouble.
We're super excited about the Noumenon module. Not easy to fine-tune, but, hey, no risk no fun ;)
Virtue, the right thing, the best interests of humanity, etc. are soft concepts, no? With squishy, eye-of-the-beholder human ideas, how could an AI come up with a definitive answer to any of these subjective questions? Or, more precisely, can we expect a precise "answer" to an imprecise question?
Maybe "42" is as good as any other answer...