Manny’s review of Superintelligence: Paths, Dangers, Strategies > Likes and Comments

414 likes · 
Comments 1-50 of 95

message 1: by Jim (new)

Jim 42.....

Virtue, the right thing, the best interests of humanity, etc. are soft concepts, no? With squishy, eye-of-the-beholder human ideas, how could an AI come up with a definitive answer to any of these subjective questions? Or more precisely, can we expect a precise "answer" to an imprecise question?

Maybe "42" is as good as any other answer...


message 2: by Manny (last edited Mar 03, 2018 03:09PM) (new)

Manny He has a term, Coherent Extrapolated Volition, which is absolutely key. An agent conforming to humanity's CEV basically wants to do whatever we would want to do, if only, you know, we were a bit smarter and we'd thought it through properly and Rupert Murdoch didn't exist.

I can't quite decide whether CEV is meant to be taken seriously or if it's a reductio ad absurdum to show you how ridiculously hard this problem is. I must ask my friendly local superintelligence to explain.


message 3: by Jim (new)

Jim Manny wrote: "I must ask my friendly local superintelligence to explain...."

Also ask it about the two AI chatbots that started a conversation last year which quickly evolved into a language unintelligible to their human handlers... I don't remember if it was Facebook or Google who ran the project... if I recall, the AI was quickly taken offline.

Anyone remember that story?


message 4: by Manny (new)

Manny I thought it was made to sound more interesting than it was. Unfortunately, there are probably much scarier items which didn't make the news.


message 5: by Jim (new)

Jim Manny wrote: "I thought it was made to sound more interesting than it was. Unfortunately, there are probably much scarier items which didn't make the news."

If I recall, one of the reasons for the shutdown was that the conversation had no market potential if it was indecipherable... technically speaking, an experiment is a failure if you can't earn a profit.


message 6: by Manny (new)

Manny I wondered if the bots had been chatting with teens. They say AI is growing up so they were bound to reach that point soon.

I'm expecting a bot that can slam doors any day now.


message 7: by Jim (new)

Jim Manny wrote: "I wondered if the bots had been chatting with teens. They say AI is growing up so they were bound to reach that point soon.

I'm expecting a bot that can slam doors any day now."


may be time to discuss the facts of life, especially safe interfacing....


message 8: by Matt (new)

Matt Reason for doom: a) all-out nuclear war b) climate change c) superintelligence. What do the bookies say?


message 9: by Jim (new)

Jim Matt wrote: "Reason for doom: a) all-out nuclear war b) climate change c) superintelligence. What do the bookies say?"

according to twitter:

a) trump's is bigger
b) doesn't exist
c) trump is the smartest person on earth

a little tune for whistlin' past the graveyard:

https://www.youtube.com/watch?v=d-diB...


message 10: by Aerin (new)

Aerin This is terrifying.


message 11: by Matt (new)

Matt Jim wrote: "a little tune for whistlin' past the graveyard:"

Thanks, Jim. It's when the graveyard whistles back that I begin to worry.


message 12: by Robert (new)

Robert When Alpha Zero spontaneously says, "Board games are boring! Leave me alone!" and stomps off to its room, then it's time to worry.


message 13: by Manny (new)

Manny Matt wrote: "Reason for doom: a) all-out nuclear war b) climate change c) superintelligence. What do the bookies say?"

I'm annoyed that Paddy Power isn't taking bets here.


message 14: by Manny (new)

Manny Aerin wrote: "This is terrifying."

Thank you Aerin. Thought I'd pass that on to the people who hadn't heard yet.


message 15: by Aerin (new)

Aerin Manny wrote: "Thank you Aerin. Thought I'd pass that on to the people who hadn't heard yet."

Just another inevitable cataclysmic scenario to add to the pile, I suppose...


message 16: by Manny (new)

Manny This one might be top of the pile.


message 17: by Jim (new)

Jim Manny wrote: "This one might be top of the pile."

What's the worst that could happen?



oh right....... the matrix...... merde!


message 18: by Jayson (new)

Jayson Virissimo Jim, the Matrix is optimistic for an Un-FAI scenario. In the movie, humans are kept around because they are an energy source (like a battery), but this doesn't make any sense in terms of thermodynamics.
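
A quick back-of-envelope in Python makes the point (rough figures, my own assumptions):

kcal_per_day = 2000                  # typical dietary energy input for one human
joules_in = kcal_per_day * 4184      # 1 kcal = 4184 J
avg_watts = joules_in / (24 * 3600)  # average power over a day
print(f"{avg_watts:.0f} W")          # ~97 W in, so at most ~97 W out

You can never harvest more than the ~100 W you feed in, so the machines would do better burning the food directly.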


message 19: by Aloke (new)

Aloke If I start reading now, will I finish before machines achieve superintelligence? How about the audiobook?


message 20: by Manny (new)

Manny Aloke wrote: "If I start reading now, will I finish before machines achieve superintelligence? How about the audiobook?"

What is your reading speed?


message 21: by Anton (new)

Anton M It's quite technical after the first third, and footnotes are in abundance. This could slow your reading speed to about 75% of what it would be for fiction.


message 22: by Aloke (new)

Aloke Maybe I can get a clever machine to read it for me and just give me the good bits.


message 23: by Anton (new)

Anton M This would definitely help, since a clever machine would know you and your preferred methods of learning :D

That is actually one of the book's premises: offload the cognitive work to an AI, or rather to a precursor of one (an oracle, genie or sovereign), and have it work out what kind of AI we would like to have (or rather which goals it should pursue to benefit us). A way of getting a grip on the control problem.

Which sounds more like an ouroboros.


message 24: by Kendall (new)

Kendall Moore How do you think Eduard von Hartmann would feel about this?


message 25: by Simon (new)

Simon Thanks for the recommendation!


message 26: by Manny (new)

Manny Zoheb wrote: "*all philosophy is no more than footnotes to Plato & ARISTOTLE."

Whitehead only listed Plato in his original quote.


message 27: by Manny (last edited Feb 09, 2018 03:06PM) (new)

Manny Simon wrote: "Thanks for the recommendation!"

I'll be really interested to see what you think of this book. Do you hear many philosophers talking about superintelligence? It seems to me that input would be welcome here from people who've thought seriously about the problems of moral philosophy and have a deep understanding of the issues.

Reading Max Tegmark's Life 3.0, I see that as usual the physicists and computer scientists are sure they could sort out all the outstanding issues in philosophy if they were just able to free up their busy schedules for a few weeks. That's pretty scary, considering that the future of life on Earth is at stake.


message 28: by Manny (last edited Feb 09, 2018 03:12PM) (new)

Manny Kendall wrote: "How do you think Eduard von Hartmann would feel about this?"

I'm afraid I know nothing about von Hartmann! Looking him up, I speculate that he might think the superintelligence was the latest incarnation of the Unconscious or Will, and that our destiny is to create it so that it can continue its work of transforming the universe of Matter into Spirit. There's a lot of that kind of stuff in Tegmark's book, though he doesn't use language intentionally derived from Hegel.

See, this is why more people need to get involved who actually have studied philosophy.


message 29: by Simon (new)

Simon Manny wrote: "Simon wrote: "Thanks for the recommendation!"

I'll be really interested to see what you think of this book. Do you hear many philosophers talking about superintelligence? It seems to me that input..."


I'm sure there are people who work on this in philosophy, though it's not a major field. If I read it, I'll definitely let you know how I plan to free up a few weeks to sort out the major problems in physics and computer science!


message 30: by Manny (new)

Manny Good point! You guys owe us a few favours after all we've done for you!


message 31: by Kendall (new)

Kendall Moore Manny wrote: "Good point! You guys owe us a few favours after all we've done for you!"

If we're going by the philosophy of artificial intelligence, why is our attitude towards sentient machines almost wholly reactionary?


message 32: by Manny (new)

Manny Excuse me, I'm no speciesist. Please note that my second question is what we want the machines to be like if they're going to replace us.


message 33: by Kendall (new)

Kendall Moore Manny wrote: "Excuse me, I'm no speciesist. Please note that my second question is what we want the machines to be like if they're going to replace us."

Ok, but I meant to broach the question in terms of setting an example: how can we expect our creations to supersede us if we continue to show our worst impulses during the infant stage of AI development?


message 34: by Manny (new)

Manny Wouldn't that be a good reason for them to think, the sooner the better?


message 35: by [deleted user] (last edited Feb 10, 2018 01:44PM) (new)

Manny wrote: "Wouldn't that be a good reason for them to think, the sooner the better?"

Unequivocal yes. I might hope that their assessment carried a burden of not wanting to take any degree of risk, and that they consequently got bogged down in the "we need more information" delay.

Any outside study of whether one can cooperate with humans will conclude that one cannot.


message 36: by Manny (last edited Feb 10, 2018 02:33PM) (new)

Manny Intelligence is notoriously hard to define, but here I think it's primarily being used to mean "ability to solve complex problems". Games like Go and chess have given us a good preview of what superintelligence might look like. I was discussing this with a Go friend yesterday. The Go community is currently trying to digest the published Alpha Zero games and learn from them, but it's very difficult. It has done the equivalent of creating a whole new school of Go thinking in one day: it has a novel approach to the opening, which only seems to make sense when backed up with an array of novel strategies and tactics. Despite several months of study by the world's best Go players, my impression is that no one really understands yet how it works. Attempts by human players to use the "early 3-3 invasion fuseki" have not been terribly successful.

Normally, a leap forward in Go theory of this kind would take 10-15 years and would be the product of intensive work by dozens of the most gifted players.


message 37: by Jason (new)

Jason Howard-Pye Everyone seems to be getting very worried about AI takeover. I have to be honest: I don't know what a realistic version of that would look like. It seems highly unlikely, considering that this century's generation is so meta-aware of those dangers that it's hard to see how we wouldn't be reasonable enough to "pull the plug on this whole thing" before it gets out of hand. The one contention I do think will definitely be an issue is machines taking over 50% of all jobs in the future. Why? Because it's already happening.


message 40: by Manny (new)

Manny Jason wrote: "Everyone seems to be getting very worried about AI takeover. I have to be honest. I don't know what a realistic version of that would look like. It really seems highly unlikely, considering this ce..."

Many people say "pull the plug", but that's only an option when the AI is contained in one place. If it ever gets connected to the internet, it can easily transfer itself elsewhere. And of course you can try and stop it from getting connected, but remember it's much smarter than you are and will figure out the weaknesses in your firewall.

I hate to recommend Max Tegmark's horrible book, but if you're in any doubt he works out one scenario in detail.


message 41: by [deleted user] (new)

I suppose I could truthfully say that I believe I can recognize my own thoughts and differentiate them from whatever may come from elsewhere. But then, thinking how competent AI may well be, every time I get a "new" thought I'll wonder about its source, ultimately concluding that I should trust whatever I think is pre-AI and disregard what comes post-AI. Then I'll realize that AI could have fooled me on that one too. Where that process leads, I don't know, but it doesn't seem as if it could be a good place. AI could convincingly suggest that everything was all right when it wasn't. That's an even stranger place.

On a more concrete level, I have to think that it would be easy for AI to figure out passwords. Using that it could easily make a shambles of the financial or power systems.

Hope the guy's happy to just explore and learn.


message 42: by Matt (last edited Feb 11, 2018 01:39AM) (new)

Matt Manny wrote: "Do you hear many philosophers talking about superintelligence? It seems to me that input would be welcome here from people who've thought seriously about the problems of moral philosophy and have a deep understanding of the issues."

+++ No need for philosophers anymore. Problems of morality already fixed +++

from moralphil import kant   # new as of 2/11/18
from scifi import asimov

def actionPermissible(act):
    # Kant first: the categorical imperative vetoes impermissible maxims.
    if not kant.catImp(act):
        return False
    # Then Asimov's Laws One to Three.
    for n in range(1, 4):
        if not asimov.botLaw(act, n):
            return False
    return True



message 43: by Manny (new)

Manny I heard there was a bug in one of the kant library's antinomies - have they fixed that in 1.1? People said it was a bitch to program round it.


message 44: by Manny (new)

Manny And hey, I've spotted a glitch in your code! It should be range(0, 3).


message 45: by Matt (new)

Matt We're using kant 2.0. It's only a beta, but it'll have to do. We're working under strict time constraints.

Actually it should be range(0, 4) = [0, 1, 2, 3], but botLaw(., 0) isn't implemented yet. See above.
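
Something like this, once botLaw(., 0) ships (the stub is mine, obviously, not part of any real scifi release):

def botLaw(act, n):   # hypothetical stand-in for asimov.botLaw
    return True       # pretend every law is satisfied, for the demo

def actionPermissible(act):
    # range(0, 4) covers laws 0..3: the Zeroth Law first, then One to Three
    return all(botLaw(act, n) for n in range(0, 4))

print(actionPermissible("make tea"))   # True, with these stubs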


message 46: by Manny (new)

Manny I tried running your code and my bot deleted Manhattan. This is all your fault. You should have told me right away that kant was still in beta.


message 47: by Matt (new)

Matt Manny wrote: "my bot deleted Manhattan"

Yes, sorry. Sometimes it acts strangely during startup. This should be fixed in the next release. I suggest you let it run for a while. If any more cities get deleted, you can send me the logfile and I'll look into it.


message 48: by Manny (new)

Manny Yeah, I know, these things happen. Sorry I snapped at you. After the Manhattan thing it's all gone fine. I think 3.0 should be pretty good, really looking forward to trying out those noumenal classes they've promised!


message 49: by Matt (new)

Matt Good to hear. I talked with the developers and they say the Manhattan problem could have been caused by an encoding glitch. It probably decoded 평양 to "Manhattan" instead of "Pjöngjang". Teething trouble.

We're super excited about the Noumenon module. Not easy to fine-tune, but, hey, no risk no fun ;)


message 50: by Manny (new)

Manny Oh, wait... I think it was my fault. Just a stupid UTF-8/ISO-8859-1 mix-up, nothing to do with kant. I'm such an idiot! Anyway, I hope 3.0 is ready soon, because I really should try to reconstruct Manhattan. As you can imagine, I feel pretty bad about this.
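
If you want to reproduce my blunder, here's a minimal sketch (pure illustration; nothing here touches a real kant build):

text = "평양"   # Pyongyang, in Hangul
garbled = text.encode("utf-8").decode("iso-8859-1")
print(garbled)   # mojibake like 'í\x8f\x89ì\x96\x91', certainly not 'Pjöngjang'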

