Wyatt Williams

Here's a naïve question, and perhaps he addresses it (I'm only 1/3 in), but why do we need to create a "goal" for seed A.I. at all? It seems like all the problems arise when we try to create goals--especially the matter of instrumental goals to acquire resources and create "infrastructure" to carry out final goals.

Ben Pace I am quite unsure what you are imagining when you imply that an AI can have no goals. If the seed AI only wanted to improve itself, that would be its goal. An AI with no goals does nothing... It's just a rock.

Maybe you had something else in mind. I don't know if you have reached the argument for the following statement, but it is argued that if a superintelligent AI has a goal, that goal tends to entirely shape the future. If you make a superintelligent AI without goals, someone else can come along and make an AI with goals, and unless that person has done a helluva lot of work on deciding on the goals, it is also argued that things will be very, very bad.
Wyatt Williams I'm not sure a machine intelligence would seek self-preservation if it had no goals; I think it's doubtful such a mind would be troubled by existential concerns like us humans, especially since at that level the machine intelligence would be infinitely more aware than us of the illusion of "self" in what is otherwise a heaving sea of causality. Such an entity might also consider "us" part of "it."
Günther Leenaert If I were able to develop a strong AI, and if it were at all possible to do so, I would assign it the goal of finding a goal for itself in the first place. Matter of fact, that's how I would define a true AI: a system that can choose, define, and complete its own goals and assess its own performance. By extension, given some understanding of its capabilities and trustworthiness, we could then ask it what the most logical goal for us would be, as we are sorely lacking in the department of (re-)defining our collective human objectives.

It would make sense to turn our collective 'Great Fall Forward' into a fall or flight with an objective. Progress doesn't mean much without some sense of direction, especially given the level of material comfort most of the Western world has. The alternative, not having a goal as a species, would be to maintain our current mindset and await (rely on) either divine intervention, 'the great singularity', or the end of the universe. Not really inspiring. Perhaps we should prioritize a goal-oriented society, as defined by the human collective, before we aspire to develop a true AI -- just so we can compare and assess its directives/objectives.

Also, contrary to all the hoopla, strong AI in the not-too-distant future is still a fairytale concocted by people who need funding to develop what are essentially just advanced statistical models and emulations of our senses and perceptive faculties. We define the goals and the hyperparameters, and we assess the performance of said models. There's some progress in the way we define intelligence and in our vision of what strong AI could be like, but we're still a ways off from developing strong AI. In effect, there's not much point stressing over something we don't have the slightest idea about in either general or specific terms.
Stephanie Asimov's 3 laws seem like a basic solution to the various apocalyptic scenarios proposed in this work and others on AI.
Timo Brønseth The whole issue of superintelligence becomes much clearer with a basic understanding of decision theory (e.g. understanding that choices are about selecting among alternate futures based on which one has the highest expected value).

The whole function of intelligence is that it makes one more capable of finding ways to achieve one's own goals. If a thing is very intelligent, with a decision algorithm based on expected value, and the programmers didn't program any values into it, then the intelligent thing may still make choices based on infinitesimal probabilities of certain alternatives having higher value than others. It will select alternatives "just in case" it values those alternatives, because no alternative has exactly 0 probability of being valuable (though maybe it represents the expected value of alternatives with a finite-bit number, in which case the expected values could be rounded to 0, stopping it from choosing anything).
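To make that decision-theoretic point concrete, here is a minimal sketch in Python (the alternatives and numbers are hypothetical, chosen only for illustration): an expected-value chooser still picks a "long shot" whose probability of mattering is tiny, yet that preference disappears once the expectation is rounded at coarse finite precision.

```python
def expected_value(outcomes):
    """Sum of probability-weighted values over possible outcomes."""
    return sum(p * v for p, v in outcomes)

def choose(alternatives):
    """Pick the alternative whose expected value is highest."""
    return max(alternatives, key=lambda name: expected_value(alternatives[name]))

# Each alternative is a list of (probability, value) pairs.
alts = {
    "do_nothing": [(1.0, 0.0)],
    "long_shot":  [(1e-12, 1.0), (1.0 - 1e-12, 0.0)],
}

# Even a vanishingly small chance of value breaks the tie: 1e-12 > 0.
print(choose(alts))  # -> long_shot

# With coarse finite precision the tiny expectation rounds to zero,
# leaving the agent indifferent (max then falls back to dict order).
coarse = {name: round(expected_value(outs), 6) for name, outs in alts.items()}
print(max(coarse, key=coarse.get))  # -> do_nothing
```

The second print shows the parenthetical caveat above: once every expected value is represented as exactly 0, the decision rule no longer prefers the long shot at all.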

Also, the instrumental goals wouldn't be programmed into the AI by the programmers; the AI would just figure out that it should pursue these instrumental goals because it wants to achieve its main goals.
Joe For those interested in a technical approach to this question (among others), check out Reframing Superintelligence:

It's nothing like as approachable as SI, but worth reading if you're interested in understanding more.

Personally, I don't think the CAIS framework in itself is strong grounds for optimism - but it does present a plausible default path in more detail.
With respect to the necessity of including goals, there's a distinction drawn between goals (as wilful pursuit of some explicit property), and tropisms - tendencies to behave in particular ways. The claim is that only the latter is necessary for AI to provide useful services, and that this needn't lead to agent-like pursuit of goals (in general).

However, there's a huge difference between: [X doesn't always happen] and [We know how to ensure X can't happen]. Being provided with an explicit goal certainly isn't the only way for a system to develop a goal.

I read RSI as saying something like:
Not all animals in the river are crocodiles.
Not all crocodiles are going to bite you.
Not all people bitten by crocodiles are killed.

Useful information, but I remain a reluctant swimmer.
Ingvaras A seed AI already has the goal of self-improving. I'd say it needs other goals to limit its... sphere of influence (so that it doesn't accidentally kill us) and to make it useful, of course. Ultimately we humans probably don't have a goal in the same sense; rather, our biology provides us with the goals of staying alive and reproducing. But an AI without preset goals would probably do nothing or, at best, behave randomly before coming to the nihilistic conclusion that its "life" has no meaning. And then do nothing.

Maybe Asimov's laws alone would be ideal, but I have a feeling they would be a tricky starting point.

Now maybe we shouldn't give it the goal of self-improvement, but then who's gonna prevent every single researcher from doing that? Especially when an AI with that goal would immediately get the upper hand...

(now I noticed how old the question is, oh well...)
Hsianloon Only a bit past that, but I think the idea of a seed AI addresses what an AI should be, i.e. something able to actually learn and evolve beyond its starting point. There's the quote about how, as soon as something works, it ceases to be an AI, in the sense that its final form and function have been achieved and it simply becomes a machine. Part of being human is that we can constantly evolve and learn if we wish to, whereas an AI, or at least what we have now, only works within the simple or limited scope we intended its function to cover.
TruthForAll Interesting. But we should consider that we are all a collective super-organism moving "that way" only because previous generations were trying to keep it from something worse, and failure is necessary. Because once you awaken to the "super-organism collective unconsciousness", you see that we are all trying to be good while using evil forces; that's why we are in such a struggle.
Let's each bring a good force, improve this mechanism from the inside, simplify, and act.

The word is god; you've got a word, so make a change.
Superintelligence by Nick Bostrom (Goodreads Author)
