Geoff > Status Update

Geoff
added a status update
Since it seems as likely as not that in a week DONALD FUCKING TRUMP is going to be declared commander-in-chief of the most powerful army humanity has ever known, I ask the good people of the world, what are you stocking your bomb shelters with? Also, half of America? Fuck you. I'm not one of you and I don't like you - stay away from me and my family you scary idiots.
— Nov 02, 2016 04:39AM
252 likes
Comments Showing 4,001-4,050 of 4,673 (4673 new)

but her purpose is to praise Putin, put down the U.S."
That's weird. I haven't used the Putin-word in this discussion. I also haven't spoken about Ru..."
Am I sure I am okay, etc? Now you are just being manipulative"
No, you are. While I just prefer people not to attribute their intimate fantasies to me. If I decide to talk about Vietnam or about the NK's president or anything else, I will do it directly and by myself, without you or anyone else putting your words in my mouth. Your Putin-based paranoia looks funny and out of place. I realise you are probably in love with the guy or something, or you wouldn't have been talking about him non-stop, but please do not attribute your emotional problems to me. Ok?
And don't get me wrong, I could talk about Putin or Zimbabwe or whatever but this is not the topic for that discussion. Still, even though I haven't raised that topic, I still see you whining about it. Why do it here? This topic is not called 'Let's whine about Russia', is it?
Manny wrote: "How do you mean, one of them?"
I believe that basically they took AG, copied it twice, and then AG1 and AG2 had separate 'learning sessions' playing against each other, against people, etc... Or are there really 2 exactly identical AGs playing? With the same databases, same experience and references?
Manny wrote: "I don't see a robot physiotherapist any time soon. "
Robotic physiotherapist I see all right. I don't see a brilliant novel writer, composer, singer, painter, strategist, troubleshooter, auditor and inventor in robots. Professions where emotion, empathy, imagination, vision, drive and human interaction are important.
Still, I might be underestimating robots. For example, some neural network has already tried its hand at GoT: http://screenrant.com/game-thrones-bo... It was a fail, of course, but with some interesting prognoses nevertheless.
Terminator might turn out to be a lot more talented guy than I expect :)

I don't think Deep Mind have released the details of how the self-play games were created. I had assumed that it was two identical copies, since that was the simplest hypothesis, but your scenario is also plausible. I'm not sure in fact that it makes much difference, since AG would not be set up to be able to exploit the potential weakness of playing against an exact clone.
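For what it's worth, the 'one network in both seats' version is easy to picture. Here's a toy sketch of that hypothesis; the game and the policy are invented placeholders (pick a number, bigger total wins), nothing Deep Mind has published:

```python
import random

def self_play_game(policy, turns=10):
    """One toy game in which the SAME policy is queried for both seats,
    i.e. the 'two identical copies' hypothesis made concrete.
    The 'game' itself is a stand-in, not Go."""
    history = []
    for turn in range(turns):
        player = turn % 2  # the copies simply alternate seats
        history.append((player, policy(history, player)))
    totals = [sum(m for p, m in history if p == side) for side in (0, 1)]
    return (0 if totals[0] >= totals[1] else 1), history

def random_policy(history, player):
    # a trained network would condition on the history; this stub doesn't
    return random.randint(1, 9)

winner, history = self_play_game(random_policy)
```

The separate-learning-sessions scenario would just mean keeping two diverging copies of `policy` and training each on its own game results.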
Misericordia wrote: "Robotic physiotherapist I see all right. I don't see a brilliant novel writer, composer, singer, painter, strategist, troubleshooter, auditor and inventor in robots. Professions where emotion, empathy, imagination, vision, drive and human interaction are important.
Still, I might be underestimating robots. For example, some neural network has already tried its hand at GoT: http://screenrant.com/game-thrones-bo... It was a fail, of course, but with some interesting prognoses nevertheless.
Terminator might turn out to be a lot more talented guy than I expect :)"
I dunno. If AG were human, you'd say it had unlimited quantities of imagination, vision and drive. It would be universally hailed as not just the strongest but also the most creative player in Go history, eclipsing even Go Seigen. But I think robotics is still a long way from the point where you could create a mechanical physiotherapist who'd intuit that your shoulder needed to be massaged just there.
From the opening, I'd have to say that it seems that AG1 is likely to be putting most everything out there; and that AG2 wants more information to make that assessment. If and when AG2 does, it will counter with whatever works best against AG1's demonstrated strengths. Though that observation seems too easy unless one is playing club level.
Regarding physiotherapists; you can always scratch yourself, or like DFW fake doing it badly.
Pro level boils down to execution.

Does anyone understand anything about the EMP effect of nuclear bombs that can be delivered on ICBMs and detonated above their targets, without the missiles ever touching the ground?
I have been thinking off and on lately about the electrical grid article I read some time ago, but I am no engineer or scientist. I wonder if anyone else is thinking along these lines. If I were a crazy wannabe dictator more involved with local Republican pissing contests, who did not read much except how-to-do-business-cons books for dummies and, for relaxation, DMed compliments to my Russian kleptocratic friends, would I realize that some dictators are more seriously into proving their sexual prowess, beyond the fun of grabbing women's crotches?

I take ex..."
If I bring in Nazis and Hitler now....oops, done! Sorry.


Jessaka, I'm sure their pamphlets were nice, even if not effective. For the Gulag the pamphlets did no good, to my knowledge. And I think my data on the Gulag is good, considering that most of my maternal ancestors (great-grandparents and their brothers/sisters/parents, about 20 people overall) living at that time had been sent off to labour camps and got out of there only by volunteering for the WW2 front lines.
Jessaka wrote: "But why bring all of this into the conversation that Cecily had? That is what I meant about propaganda as presented by a Russian to put down the U.S. It has nothing to do with the issues presented by Cecily, unless, that is, you can tell me what it does have to do with it."
I'll reiterate my discussion with Cecily for you. Obviously, it was too difficult to follow.
I was not impressed with her rendering of the current situation as 'theoretical horror'. As a pragmatic person, I believe there are 2 types of horrors, imagined and real ones. So, I asked her to clarify that point, to establish whether the said horrors really are as divorced from reality as they sounded. Also, to put things in perspective, I gave her some examples of not theoretical but true-to-life horrors that the US participated in, directly or indirectly. Basically, in my book, bombing a city or putting people in camps is a horror. A budget deficit isn't.
I did not include the Gulag in this list of horror examples as I don't think the US really participated in its creation. In stimulating the October Revolution, possibly. The Gulag was a purely domestic production, if not invention, of the USSR. So it would not have made a valid comparison to the situation in the US.
My point is that at some points in the US's previous history there have been horrific situations. Every country in the world has had them. Still, the current US situation is nothing like those, not by far. The society is evolving, learning how to deal with its issues; that much looks true.
Basically, I think all this hype about horrors in the US is just that, hype. There are no horrors, just changes. The US is fine, thank goodness, no matter how much people try to make it sound otherwise. Yes, it has economic problems. Severe ones, but curable. Nevertheless, these are just that, problems, not horrors. People are just demonstrating bad tempers, like toddlers who need their opinion to be the most important of all.
PS. You do realise that it is you talking here about Russia, and I still haven't said a word about it, right?
PPS. Nation-bashing will not prove your point here. Had I been certifiably Black or Jewish, would you be pointing fingers as well? You, yourself, are more of a nationalist than that Confederacy-loving guy who drove a car into a crowd of protesters in Charlottesville. Think about it.
PPPS. You can't call me a 'Russian person'; I come from several cultural backgrounds... I do not fit into a tidy box labeled 'Russian'.
Jessaka wrote: "She comes on here and out of the blue begins bashing the U.S., not that there isn't a lot to bash, but her purpose is to praise Putin, put down the U.S. "
I still don't get how you came to that conclusion. The discussion was on a different topic altogether.

From my experience with physiotherapists, they don't massage people; they just tell you how many times you need to do what movement/exercise to get that blown vertebra/knee/whatever back to work, and also put you in weird gadgets to get treatments with magnetic fields, electric currents, etc. Based on that, I would expect it possible to automatise this, as the treatments are 99% standardised and by the book. Though maybe I just haven't met the right specialists.
The guys who do the mind-blowing healing massages, those are manual therapists. And human interaction for those would always be a must, I think. One would hate for a robot to put too much pressure on one's back, or something...
deleted user wrote: "it seems that AG1 is likely to be putting most everything out there; and that AG2 wants more information to make that assessment. "
Sounds scary. If the robots are engaging in competition this differently, what does it say about them, different personalities emerging? Or just weird quirks of programming? Or slightly differently tweaked scenarios available to them?

https://theintercept.com/2017/09/01/c...
Whatever one may think of Charlie Hebdo, one has to admire their consistency.
Misericordia wrote: "deleted user wrote: "it seems that AG1 is likely to be putting most everything out there; and that AG2 wants more information to make that assessment. "
Sounds scary. If the robots are engaging in competition this differently, what does it say about them, different personalities emerging? Or just weird quirks of programming? Or slightly differently tweaked scenarios available to them? "
Just guessing; but maybe AG1 doesn't know that it is playing a virtual duplicate; and vice versa. Both however are certain that they have a number of devastating ways of playing.
So maybe white AG1, with the initiative, opens with a strong but multi-tiered offense which keeps most players at bay all game. It seems rational for AG2 to counter with defensive moves while it is still not sure that the high level of offense AG1 has shown is the only game it can play well. AG2 is waiting for a point at which it determines that AG1 has committed itself to this style; then it attempts to take the initiative by concentrating its forces on a few areas, winning battles in each. Complicating this is AG1's ability to anticipate this a few moves back, at which point it would take alternative steps. AG2 would detect this possibility and adjust its game.
Underlying this thought process is my assumption, which could well be wrong, that AG1 and AG2 do not know each other's abilities. In the case that they did, what they would do is determine whether a perfectly played game is inherently in the favor of the one who initiates or the one who counters, and seek that position. Humans believe that the advantage is with the initiator, but they've been wrong before. It would get weird if my instinct is correct and the advantage is with the one who counters, as each would seek to lose initiative.
There have been many, many books written on how to pitch baseball, as the pitcher-hitter duel is called the most mental thing in sports. But Warren Spahn, one of the all-time greats, distilled it to this: "I only needed two pitches; the one he's looking for and the one I throw."
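Whether perfect play inherently favours the initiator or the counterer is exactly the kind of question minimax answers outright for small games. Go is astronomically too large to solve this way, but a toy take-away game shows the shape of the computation:

```python
from functools import lru_cache

# Take-away game: players alternately remove 1-3 stones from a pile;
# whoever takes the last stone wins.  Minimax decides, for each pile
# size, whether perfect play favours the player to move.

@lru_cache(maxsize=None)
def mover_wins(pile):
    if pile == 0:
        return False  # no stones left: the player to move has already lost
    # the mover wins iff some move leaves the opponent in a losing state
    return any(not mover_wins(pile - take)
               for take in (1, 2, 3) if take <= pile)

# Classic result: the initiator loses exactly when the pile is a
# multiple of 4 -- the 'advantage' flips with the starting position.
```

So at least in solved toy games the answer is "it depends on the position", which fits the suspicion that neither seat is favoured unconditionally.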

Btw, your write-up after the attack on CH (and its looong comment thread) is one of my favourite corners of GR.
https://www.goodreads.com/review/show...


Manny wrote: "it's just trying to find the best moves."
That strongly suggests that AG1's best moves have nothing to do with what AG2 does. On the one hand it fits the scenario which believes that black (the game's initiator in Go) has an inherent advantage, in that regardless of AG2's moves, all AG1 has to do is play the best moves throughout, with the 4.5-6.5 komi points added to white as the determinant.
But, it also suggests to me that if there were "best" moves, that it would play the same game all the time, and win.
I think it has to be evaluating its opponent, or at least the plays that have already been made; and it seems difficult to differentiate one from the other; maybe a bit like the old GR dictum of saying it was OK to criticise a book, but not its author.

Some reading helped me understand your #4086. Sounds as if a computer necessarily aided by deep learning is much like a human's learning process. I'm guessing here, but it also sounds as if whatever entity lays claim to having the smartest machine is somewhat dependent on what the machine accomplishes on its own.
One question on a Wiki contradiction. Are "deep learners" intuitive? It seems logical that they are, but someone at a high level says no.

Considering that our understanding of intuition is currently rather limited, and amounts to thinking it either a thing of the supernatural, or the superlogical, or some mixture of both, no wonder it might be difficult to decide whether AIs are intuitive.
deleted user wrote: "Are "deep learners" intuitive? It seems logical that they are, but someone at a high level says no. "
If one agrees that intuition is mostly just being able to access and integrate your data and to follow logical links inaccessible to a mainstream mind, then the AIs are definitely intuitive. If one believes that intuition is the mojo where you 'just know this thing', then the AIs have not gotten there yet. Then again, how does one differentiate between superb analytic skills and the ability to 'read' your opponent? Basically, both would be accessing information that was not present a moment ago: in the first case, that information would be the result of self-made superior analysis, and in the second, it would suddenly be made available from an unknown source. To a casual observer both would look the same, and there one would get AIs who are perceived as being intuitive.

You might be just right. Though from what I heard, the previous presidents were also not particularly enamoured with human rights protection. It's all pretty much Orwell-like these days. Whole countries/nations/ethnic groups get the villainisation treatment. Hate sessions much? It's 1984, all right.
It might just be in the nature of governments to have a knee-jerk reaction to designate some groups as enemies and to go on a crusade against them. Nothing unites better than common enemies: Indians, Muslims, Chinese, illegal aliens, whatever... The scarier the better.
Still, it's not like Mr Trump came and all of a sudden started some crusade. The process seems to be deeply ingrained in the establishment. And the fact that an outsider got in, as opposed to the Clinton and Bush dynasties (we can safely call them that, I think), might just give a sudden positive break. We'll live and see, I guess.


That doesn't mean there's any magic in the process, just as humans aren't acting magically when they do exactly the same thing.
What I suspect she's doing (and what we do too): every time she plays a game, she may be able to review the game, analyse it into a vast number of elements (which may be anything from the position of a single stone up to a multi-move sequence or the pattern in any sector of the board), and assign positive and negative weights to the elements based on whether the game they feature in is won or lost. Seeing any of those elements again in future in a possible situation then can either 'scare' her away from that situation or 'attract' her to it, depending how she feels about the elements on the basis of experience. This needn't involve any actual rational analysis (she doesn't have to know why certain things feel bad and others feel good - why, in other words, they seem to give better or worse results), and it also wouldn't require very complicated thinking, just massive 'brain power'.
Humans make this a little easier for ourselves by not actually 'paying attention' to most elements until we see them enough times - we continually pattern-match new experiences against old ones to discover patterns that seem worth paying attention to. The advantage of this is that it doesn't require as much brain power as assessing everything we ever see - the disadvantage is that it makes it much easier to miss recurring patterns that don't recur frequently enough, or that aren't easy enough to recognise.
Humans also like to use higher-level abstractions in learning: we categorise particular patterns as examples of general patterns. This saves brainpower by reducing the number of relevant patterns we have to learn about: rather than spotting Pattern #1,000,000,000, we spot Example of Abstract Pattern #1,000, which also enables us to have better intuitive responses to particular patterns we've never seen before.
I suspect much of the difference in play may be that with greater computing resources and more playing experience, Alpha is less reliant on high-level analogies, and hence can find loopholes in human principles. So where we may say "doing X is bad", because 9 times out of 10 it's bad, Alpha may have learnt "doing X is bad when A" and "doing X is good when B", when B is a something we haven't encountered often enough, or that looks similar enough to A, that we haven't spotted the exception yet. In particular, in something like Go, the position of every single piece on the board can theoretically affect the strategic results of any tactical battle anywhere on the board, and Alpha may be better than us at taking that into account.
[two other meanings of 'intuition' spring to mind. One would be "following analysis that the individual cannot consciously linguistically explain" - Alpha has no language capability, so in this sense all her reasoning is intuitive. Or "following analysis that occurs unbidden, without conscious will or oversight" - and again, she has no ability NOT to analyse Go positions, nor so far as we know any conscious self-awareness of that analysis, so all her reasoning is intuitive in this sense.]
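The weight-per-element learning rule described above sketches out very simply. This is only an illustration: strings stand in for board patterns, and the step size and pattern names are made up.

```python
from collections import defaultdict

weights = defaultdict(float)  # element -> accumulated 'feeling'

def learn_from_game(elements_seen, won, step=0.1):
    # nudge every element that featured in the game up on a win and
    # down on a loss -- no analysis of WHY the game was won or lost
    for e in elements_seen:
        weights[e] += step if won else -step

def appeal(elements_in_position):
    # positive = 'attractive', negative = 'scary'
    return sum(weights[e] for e in elements_in_position)

learn_from_game(["ladder", "open corner"], won=True)
learn_from_game(["ladder", "empty triangle"], won=False)
```

After these two toy games, "open corner" feels good, "empty triangle" feels bad, and "ladder" washes out to neutral: experience without reasons, exactly the flavour described above.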
Manny wrote: "That might be true when people play people, but AIs have no feelings, so AG must be playing the tenukis for objective reasons."
This seems a tendentious claim. AG must be evaluating situations in terms of preferability and acting to ensure preferable situations, and she's doing it non-rationally, non-consciously, non-linguistically, and without self-control. That sounds very much like a description of feelings. In what way, then, does AG's aversion to certain things or attraction to others not qualify as a "feeling"? It seems like you want to arbitrarily use one set of vocabulary for humans, and another for all other supercomputers!
More specifically, let's remember that if, as she must be, she's using brute learning, rather than rational analysis, then she doesn't need to know why certain moves are good moves, just that they are. A move that provokes suboptimal moves from the opponent is a good move, whether that's because it reduces the number of available good moves, because it confuses the opponent so that they don't pick the best move, or whether it provokes an emotional response. A pure AI operating only on learning is objectivity-neutral: she doesn't know which moves are 'objectively' best, only which are actually best. Playing another AI, she'll learn 'objective' best moves, but as soon as she starts playing humans, her database will be skewed by the limitations of humans, including emotions. Even if she does not create any model of the opponent, she will learn that certain moves (those that intimidate, anger, or otherwise provoke emotion in humans) are more likely to lead to good results than others. She will therefore rationally evolve to take human emotions into account. And she will even evolve to act in overtly "emotional" ways, where those are selected for by evolution. If playing in an "angry" way is more effective than playing the objectively best moves (for instance, if it intimidates), then she will evolve an irrational attraction toward angry play. For instance, certain patterns of play may be suboptimal, but may trigger aversion in the opponent (if they, say, don't like tactical battles, or don't like overly calm, stagnant strategic battles); she will then be able to learn that these patterns are 'winning', even if we know that 'objectively' they may not be: she may learn to bluff and intimidate. And that shouldn't surprise us, because that's what we've done too.
[Anger is the yearning to act in a way that is strategically suboptimal in the particular, but that, when established as a viable threat, causes others to be intimidated, improving our situation across many iterations of a situation. In concrete terms: punching someone who insults you is not good for you today, but establishing the viable threat of being punched is a good way to discourage insults tomorrow - anger pushes us to act against our short-term interests and do something that establishes a long-term advantage. Hence, it is selected for by evolution].
Interestingly, once an AI has been exposed to human foibles, and evolved her own in response, this should continue to be at least partly visible in her matches against other AIs - provided that she doesn't have the ability to model her opponent. However, since AIs and humans play differently, she should be able to learn to distinguish them eventually, even without any dedicated modelling (she'll learn that 'Package X' and 'Package Y' demand different responses, where we, the external observers, know that these packages of patterns correlate with human vs AI opponents, even though she doesn't).

Most of AG's knowledge comes from the two DNNs it trains, to guess the next move and to estimate the score. Since this knowledge comes by learning from experience and is not rule-based, I don't think it's misleading to call it intuition.
It got its original training material from human-human games, so it's possible that it learned things whose value was mostly psychological, in terms of affecting the opponent's feelings. Recently, I think it's learned mostly from games it's played against itself, so I would guess the psychological aspect (if it was ever there) has become less important.
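The division of labour between the two networks can be sketched with stubs. The uniform priors and flat value estimate below stand in for the trained DNNs; none of this is AlphaGo's actual code, just the shape of the interface:

```python
def policy_net(position, legal_moves):
    # stub: a trained network would return learned move priors
    return {m: 1.0 / len(legal_moves) for m in legal_moves}

def value_net(position):
    # stub: a trained network would estimate the win probability here
    return 0.5

def choose_move(position, legal_moves):
    # one-ply search: weight each move's resulting position by its prior
    priors = policy_net(position, legal_moves)
    scores = {m: priors[m] * value_net(position + (m,))
              for m in legal_moves}
    return max(scores, key=scores.get)

move = choose_move((), ("a", "b", "c"))
```

The real system couples these two heads with Monte Carlo tree search rather than a single ply, and the knowledge in both is learned from experience rather than rule-based, which is why "intuition" doesn't seem a bad word for it.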



...you just gave a definition of 'intuition'! By definition, if it were rule-based it couldn't be intuition - it would then be rational.
Note as well your use of the word "guessing", which again implies intuition rather than calculation.

"...That means that she is eliminating and studying possible moves on a non-rational basis, so she is acting intuitively..."
Might this equate "intuitively" with "randomly"?
Perhaps this beast is just flailing about on the fringes of theory, rightly or wrongly, but no one is in a position to catch it out in its errors yet.

Donald Trump's policies, for instance, do not appear to be the result of any process of reasoning or analysis, yet statistically they are not truly random.

Bummer! I was considering using it as a true random number generator in my crypto-applications.

But the inverse is not necessarily true. Not-reasoned does not mean non-random. Particularly in the case of a computer program, where there is no guiding principle, no guideline, no evaluation method, or where evaluation is uncertain and fuzzy, the choice is random--even if there is a weighting of options, there is randomness in the choice.
In human decision making, we can make the claim of an intuition. Whether there really is such a thing as human intuition is debatable. But if a computer chooses randomly, this is at best a pseudo-intuition.
A human might "intuit" things by coming to a decision guided by semi-conscious or unconscious processes, a set of data, guiding principles, or rationale that the conscious mind does not fully grasp at one time and cannot articulate in a clear analysis, yet there is something guiding the choice that is not purely random. I don't think we can say such a phenomenon exists in computer decision making.
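The "weighting of options with randomness in the choice" can be made concrete in a few lines; the move names and weights below are invented for illustration:

```python
import random

def weighted_pick(options, rng):
    """options: move -> evaluation weight (higher = preferred).
    The weighting guides the choice, but the pick itself is random."""
    moves = list(options)
    return rng.choices(moves, weights=[options[m] for m in moves], k=1)[0]

rng = random.Random(42)  # seeded for reproducibility
picks = [weighted_pick({"good": 8, "meh": 1, "bad": 1}, rng)
         for _ in range(1000)]
```

Over many picks the 8:1:1 weighting dominates, yet no single pick can be explained, which is about as close as a program gets to the "pseudo-intuition" suggested above.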

Well, I would say it's intuition, but some people are unhappy when that word is applied to AIs.

"...That means that she is eliminating and studying possible moves on a non-rational basis, so she is acting intuitively..."
Might this equate "intuitively" with "randomly"?
Perhaps this beast is just flailing about on the fringes of theory, rightly or wrongly, but no one is in a position to catch it out in its errors yet."
AG's moves are anything but random, as its many world-class victims will attest.

Sorry, but that doesn't follow at all, and there's no way a world-class player can attest to whether he or she has been beaten by a machine employing randomness in its decision making.
Take the simplest example: if a great chess player plays against an inferior tactician, the great player can flip a coin to choose the first move (whether choosing between e4 and d4 for white, the Sicilian vs. the French for black, or any two theoretically reasonable initial moves; it doesn't really matter). (In reality a master will have personal preferences based on which lines have been prepared and what is known of the opponent's prep.)
Viable strategies to employ randomness: when you can determine the best move for any given situation (by whatever means... calculating to a win, calculating to a quantifiable advantage, evaluating based on theory established by prior experience, principles taught by masters, whatever), choose it. When there is no clearly better move, choose at random. Improve your chances of finding the best move by optimizing the amount of time used to evaluate... but perhaps also use randomness to determine which candidate moves to give time to and when to terminate the evaluation of a particular line. Etc.
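That strategy (take the clear best, flip a coin only among exact equals) is a one-function sketch. The `ratings` table here is an invented evaluation, not engine output:

```python
import random

def pick_move(moves, evaluate, rng):
    scores = {m: evaluate(m) for m in moves}
    best = max(scores.values())
    tied = [m for m, s in scores.items() if s == best]
    return rng.choice(tied)  # randomness only where evaluation can't decide

# e4 and d4 judged equal; a3 judged clearly worse
ratings = {"e4": 1.0, "d4": 1.0, "a3": 0.2}
move = pick_move(list(ratings), ratings.get, random.Random(7))
```

The weaker move never gets picked, so the randomness is invisible in the results: which is exactly why a beaten opponent could never attest to it.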

Intuition does not equal randomness does not equal temporarily abandoned recall, though it may share some properties with the latter. Whatever word describes what an AI is deemed to have beyond brute strength, it obviously has it. If intuition is not your preference, then it is only fair to expect you to substitute another.
You see, to put it in human terms, if I tell someone that my roof is leaking, they will assign a high probability to my roof having a hole in it. They do not initially consider the other, lesser possibilities: that I am lying, that I dreamed it, or that the leak is actually emanating from the space between the door and its jamb when the rain comes from a certain direction, among others.
Let's say that AG does not have the capacity to fully explore all of these possibilities. Genius that it is, it goes for the hole in the roof, and explores that. AG has encountered some weird stories about human dreams, misconceptions, and out-and-out ineptitude. So, when it finds no roof hole, it then backtracks to the other, initially lesser possibilities. Just like a human roofer.
So, is this intuition? Pressed for an answer, I'd say yes for simplicity's sake; but intuition has a number of definitions, including one specifically relevant to linguistics.
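The roofer's reasoning above is essentially best-first search over ranked hypotheses. A sketch, where the priors and the `check` oracle are invented stand-ins:

```python
def diagnose(hypotheses, check):
    """hypotheses: list of (name, prior).  Test the likeliest first and
    fall back to the initially discarded ones only on failure."""
    for name, prior in sorted(hypotheses, key=lambda h: h[1], reverse=True):
        if check(name):
            return name
    return None  # every hypothesis ruled out

priors = [("hole in roof", 0.80),
          ("leaky door jamb", 0.15),
          ("dreamed it", 0.05)]

# the roofer finds no hole, so the search backtracks to the jamb
found = diagnose(priors, lambda name: name == "leaky door jamb")
```

Whether the prior ranking comes from explicit probabilities or from accumulated experience is, arguably, the whole intuition question in miniature.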
But, whatever; let's just face one thing. These guys are more capable than we are, and the gap will widen. The tactic suggested is to make them understand "friend" and then convince them that we are one. If they see us as antagonistic, the possible consequences are a worse risk than any rational person would choose to take. So, kiss your AI machine, and hope that it has not yet read of Joelle Van Dyne.
Back to the simplicities of intuition; I'll just bet, though I have no way of proving my bet or its aftermath, that good old Geoffy will delete this "offensive" comment. I think that there is a high probability that Geoff is not all that bright. That intuition is a real fucker.

You're obviously not meaning 'random' in the sense of 'non-deterministic' (because then computers would never be random), and you're not meaning it in the sense of 'non-rational' either, as you say. Or in the sense of 'unpredictable', because that applies equally to humans. So what do you actually mean by that word?
You may be being more complicated than I was; I don't understand either. My overall premise is merely that AI computers are now doing something other than what we humans call "brute force." What that something is termed may not be easily put into a word or phrase, and that matters much less than recognizing that the something exists. My choice of "intuitive" was made out of expedience, because it seemed something like that; but call it MS13 or whatever; no flippancy intended. At the same time, I'd rather not spend time on classification, as if that had any bearing on the ultimate resolution or understanding.
It seems to me that we don't even know the mechanics of human intuition. It may be recalling buried memories, or it may be totally off the wall and based on nothing discernible. So, to say whether or not an AI machine is doing it seems impossible until we correctly define, and thereby limit, what it is in human terms.
Much of your post is likely well ahead of me, as for one thing, I was under the impression that it was computers which were capable of generating random numbers; the human inevitably displaying some bias.
In decision making, it seems that I do something like what has been described here: when presented with a problem with many possible solutions, I initially focus on the ones to which I have, correctly or not, assigned a higher probability. If upon further investigation none of them is clearly what I'd subjectively characterize as a good answer, I'll retrieve some of those initially discarded and evaluate them. I think this is common and easy for humans. So, it is now apparent that computers can do that, or something better, as evidenced by their ability to beat humans at a defined game; the undefined games remain a matter of speculation, though there shouldn't really be much doubt regarding the abilities of the players.
I'm lost in regard to your mentions of randomness and quantum, both because I'm not well versed in the subject and because my intuition tells me that humans are not programmable beyond the insights of B.F. Skinner.
Specifically you wrote: "It's obviously not reasonable to bundle every cause of a human action up as "intuition" or the like, while not applying the same word to the thinking of other computers." I agree 100% but cannot see where anything I've written would call for such a "correction."
This could go on, but one last silliness: human actions appear unpredictable only to those unfamiliar with every possible option; and I think I just made a circle.
No joke. I'm just trying to write an entertaining book about talking dogs.

We don't need to get too deep into how a "random" decision is made or whether it's truly random or unpredictable.
If a player can say "I don't know why this is the better move. I can't prove it's the better move. I'm not sure it's the better move. But I feel it's the better move, so I'll make it," that could be an instance of intuition.
If a player says, "I can't choose between these moves, so I'll look at the clock and choose based on the millisecond at which my opponent made his or her last move, or I'll use a pseudo-random number generator, yesterday's temperature in London, a one-time pad, or I'll consult an astrologer," that's "random" enough. It's not based on anything relevant to the decision at hand, not even a "feeling" about it.
To choose a "move," as described above, need not be limited to a move played on the board. An intuitive player might, for instance, have an intuition about a candidate move deserving an extra two minutes of thought, and the subsequent analysis leads to playing it on the board. Without the intuition, the play would not have been found.
Now, it's possible that a machine incapable of intuition can still appear intuitive. Whatever a computer is capable of beyond brute force (I'm sure it does more than just that), its principal strength is in its brute force. A human can't look at 8 billion board positions before choosing a single one, and calculate a numerical value for each. A computer can. A computer can brute-force its way to finding a definitive advantage among the positions analyzed, play it, and the human reaction would be to say, "I could never have found that move except by intuition." Which, from the human perspective, is probably true. The computer could employ random processes along the way, in selecting candidates and in deciding at what point to terminate analysis (or it could just go five moves deep on everything, then select the strongest-appearing candidates and go another five moves deep on them, etc.). It can still reach that surprisingly "intuitive" move without intuition.
Now, on the other hand, if a computer's strength is not principally in its brute force, then we could simply take that brute force away and see how well it performs. For instance, take everything we have learned about artificial intelligence, take code that AI has developed for itself, and engineer something that can run on an old Apple IIe from the 1980s and have it play Go against the world's best human players. Or just take Alpha-Go, and limit it to one microsecond of processing for each move, while allowing the human one day to think and one day to rest for each move, and see what happens. (But don't let the human consult with a computer in the meantime!)
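The "go five moves deep on everything, keep the strongest candidates, go five deeper on them" scheme described above is essentially what programmers call a beam search. Here is a toy sketch of it; the move generator and the evaluation function are entirely made up, and nothing here claims to reflect how AlphaGo actually searches.

```python
# Toy beam search over an invented game tree: every position branches
# into "L" and "R" continuations, and a stand-in evaluator scores them.

def expand(position):
    """Hypothetical move generator: each position has two continuations."""
    return [position + "L", position + "R"]

def evaluate(position):
    """Stand-in evaluation: a deterministic score derived from the string."""
    return sum(ord(c) * (i + 1) for i, c in enumerate(position)) % 100

def beam_search(root, depth, beam_width):
    frontier = [root]
    for _ in range(depth):
        children = [c for p in frontier for c in expand(p)]
        # Keep only the strongest-looking candidates. Everything else is
        # pruned -- which is exactly where a "surprising" move could hide.
        frontier = sorted(children, key=evaluate, reverse=True)[:beam_width]
    return frontier[0]

best_line = beam_search("", depth=5, beam_width=3)
print(best_line)  # a 5-move line of L/R choices
```

The pruning step is the interesting part for this discussion: the machine never "intuits" anything, yet the line it finally plays can look inspired, simply because no human could have scored every candidate at every level.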

No, AG presumably couldn't run on a 1980s computer. But nor could you. You have the benefit of a supercomputer in your head capable of amazingly rapid probabilistic calculations - it's hardly fair to take that away from AG.
But it's easy enough to see the significance of brute force. Chess computers have been capable of extreme brute force calculation, with a little extra, since at least the 1990s. It wouldn't be hard to repurpose one, or build an equivalent, for Go. But while those computers were reaching Kasparov-level chess, they were still getting completely thrashed at Go. This demonstrates that the key thing that has taken AG from the amateur level to the best in the world is not brute analytical reasoning, but improvements in intuitive reasoning (or whatever you want to call it).
Does that mean that she's got better intuition than a human? Not necessarily. And probably not, since she's never needed to evolve it. Her advantage in analysis is no doubt sufficient to give her a substantial edge. But that doesn't mean that it's intuitive reasoning that's the basis of her success. Nor does it mean that her intuitions - which probably operate differently from ours, with different strengths and weaknesses - cannot teach human players something that they haven't previously been aware of.


I don't understand what this means, and I doubt that it's true, but Manny said something similar earlier, so maybe I'm not communicating?
I don't know why it seems hard to imagine a player winning while employing randomness in those cases where analysis has reached its limit. I'm not saying that a computer just looks at the board, calculates nothing, and chooses its next move eenie meenie miney moe from all possibilities. Obviously that loses. But if a computer has calculated three different initial moves deeply, looking at hundreds of thousands of potential calculations for each one, and finds that they all are functionally equal in value, then it's fair enough to choose randomly among them. And it can win too if its evaluation of these moves is more accurate than what its opponent is capable of.
Again, it would do no harm for a human chess-master to flip a coin on the first move with white to choose between e4 and d4. This player is now "playing randomly", and will still win against an opponent whose strategy or tactics or theory are inferior.
The only time randomness or intuition has use in a game like this is when there is no available theory or analysis that would point to a clearly superior answer, or where time or concentration or ability to visualize and compare alternatives is at its limit so that further analysis is impossible or fruitless.
And again, sorry to repeat, but a lemon is a fruit, yet not all fruits are lemons. So, even if intuition is irrational, not every irrational process is intuition. Thus randomness, being irrational, does not necessarily equate to intuition.
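The coin-flip point above (randomness is harmless, and only ever used, when analysis scores the alternatives as equal) fits in a few lines. This is a minimal sketch with invented move names and scores, not anything from a real engine.

```python
# Minimal sketch: pick randomly, but only among the moves that
# analysis has scored as exactly co-best. The inferior move can
# never be chosen, so the "randomness" costs nothing.
import random

def choose_move(scored_moves):
    best = max(score for _, score in scored_moves)
    candidates = [move for move, score in scored_moves if score == best]
    return random.choice(candidates)

# Hypothetical evaluations for three opening moves:
scored = [("e4", 0.54), ("d4", 0.54), ("g4", 0.31)]
assert choose_move(scored) in ("e4", "d4")  # never the inferior g4
```

Which is exactly the lemon point: the choice here is irrational in the sense of not being reasoned, but nobody would mistake `random.choice` for intuition.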
"You have the benefit of a supercomputer in your head capable of amazingly rapid probabilistic calculations"
Perhaps, but we don't know what goes on in a brain, we don't have any reason to suppose the functioning of a brain is closely analogous to the functioning of a computer, and there's no evidence to suggest any human visualizes more than a few hundred positions to choose a single move... I doubt that most masters even visualize more than a few dozen, whereas it is certain that a supercomputer both models and evaluates numerically several hundreds of millions or billions of positions. They may face the same problems and often reach the same conclusions while employing radically different methods.
"Nor does it mean that her intuitions - which probably operate differently from ours, with different strengths and weaknesses - cannot teach human players something that they haven't previously been aware of."
That's surely true. Humans can definitely learn from computers. We cannot pursue their methods (we cannot calculate as they do), but the conclusions they reach, and the paths they tread, can point to new ideas. And we may even come to new understandings from them, learning something that the computer itself could not learn from observing itself.


I think I somewhat appreciate what you're saying, yet isn't this a simplification that dodges the point:
"AG is trying to maximize the probability of finding a series of plays that lead to winning the game. That's all human players do too."
Depending on how you intend this, I could dispute the use of "all" above. But put that aside for a moment: one could just as well say that AG is trying to win a game of Go, and that's all human players do too.
But the heart of the matter is how do they pursue that goal? I maintain that the ways they approach it are radically, fundamentally different. Furthermore, the computer is a much much better calculating machine than a human, whereas a human's strength is something other than calculation.
Re: the randomness controversy. I think at least this aspect of the question is easily resolved. Answer the question whether the code includes a "random" function. If the answer were "no," I would be astounded, but maybe I'm wrong. If the answer is "yes," then it really can't be denied that computers employ randomness in their play.
The following are thoughts which merely relate to how one might play against AlphaGo.
1) AG's play has been characterized as "conservative," in that it seeks more sure small advantages, while I was surprised to read that almost all human players are actually trying to win by as large a margin as possible; thereby incurring a higher degree of risk. This was the reason attributed to the new openings AG made.
2) In sports the conservative style is more difficult to defeat. One analogy is that it is easier to get out a batter who is swinging for a home run than one taking a short swing, just looking for a single. The detriment to the singles hitter in baseball is that it takes three of them in an inning to score a run. That may not apply here. But there does seem to be a suggestion that AG will have a more difficult time recovering from a mistake - if, of course, it can be induced to make one.
3) I can't comprehend how "deep learning" actually works on a mechanical basis. It sounds visually oriented, yet I don't know how something can take meaning from that when it is "seeing" numbers.
4) At a pro or even college level, a common sports tactic is not necessarily to optimize your own performance, but to make the opponent do the things it is least comfortable with; perhaps in this case going back to Sedol's move in game 4. Since it resulted in victory it's safe to call it brilliant; but on another level, might it have been something that just flew under AG's radar, given its need to ignore some possibilities? Images aside, it does seem likely that AG is making these decisions as a result of software. Does anyone know what that is? IDK, but if an action is based on any sort of routine it can probably be thwarted by another, and if anything is acting on "intuition" it can probably be fooled by surreptitious moves; in AG's case, the possibilities it has chosen to ignore.
5) Not knowing the game, I'd say that my first approach would be to open many fronts, fully understanding that these may traditionally be construed as terribly weak positions, in hopes of making a significant area too big for AG to handle without its excluding intuition.
WTF? Can't do worse than everyone else?


And that's what makes it so pleasurable.
Wastrel wrote: "She will therefore rationally evolve to take human emotions into account"
Now that's an interesting concept. It would probably depend on whether the AIs can differentiate between different players and refer to their history of previous games. This would be a huge advantage of the AI over any human player. Basically, if the AI's learning happens on an all-inclusive basis, where it just gets a bunch of games to analyse without discriminating which players demonstrate which tendencies, it would mostly have the advantage of an oversized database and huge analytical capacity. The ability to single out a particular player for targeted analysis would be a whole next level up.
The AI could model not the game but the opponent's strategy. And for humans, who are prone to be predisposed to certain things and not the other ones, that would be a huge disadvantage as they would not have the power to create a similarly mirroring strategy, other than to try to outplay someone who targets their style.
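A crude illustration of "model the opponent, not just the game": track how often a particular player answers a situation in each way, then play toward the habit. Everything below is hypothetical (the class, the move names, the whole design); it is only meant to show how little machinery a basic frequency-based opponent model needs.

```python
# Toy opponent model: count a player's observed responses and
# predict the habitual one. Real systems would condition on board
# context; this sketch only shows the frequency-counting idea.
from collections import Counter

class OpponentModel:
    def __init__(self):
        self.responses = Counter()

    def observe(self, response):
        self.responses[response] += 1

    def predicted_response(self):
        """The opponent's most frequent habit, or None with no history."""
        if not self.responses:
            return None
        return self.responses.most_common(1)[0][0]

model = OpponentModel()
for r in ["pincer", "pincer", "attach", "pincer"]:  # made-up game history
    model.observe(r)
print(model.predicted_response())  # prints: pincer
```

And this is exactly the asymmetry noted above: the machine can keep such counters for every opponent it has ever seen, while the human can neither build nor consult an equivalent record.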
Zadignose wrote: "Might this equate "intuitively" with "randomly"?"
While these might not be equal in the true sense of the word, they could still be perceived as equal by an independent observer. Some randomisation must take place, even if it is not choosing from all possible options, just from the best viable ones.
The same could happen with humans. Occasionally, the half-conscious analysis comes to a point where its results cannot be explained by logic. Not because they were obtained without logical inputs and analytical processes, but because realising all of that would require keeping every little thing in the foreground of one's mind, which might be cumbersome. Intuition might amount to the mind using a number of shortcuts to streamline the process, letting it run in the background of the brain right until the 'eureka' moment where one gets the output(s), seemingly out of thin air. That's where intuition comes in handy as an explanation.
Wastrel wrote: "This demonstrates that the key thing that has taken AG from the amateur level to the best in the world is not brute analytical reasoning, but improvements in intuitive reasoning (or whatever you want to call it)."
Well, the extra brainpower also helped, for sure. Modern computers have definitely lots more of that than the 1990s machines.
Again this is only intended to focus on competitive aspects of AG for the game Go.
1) While the human player and AG have the same goal, it seems that their methodologies differ; so the point is to look at each, find their strengths, and find their weaknesses, under the assumption that nothing is perfect.
2) Going through a bit of machine learning, training data, artificial intuition, cognition, and even some gestalt; it is apparent that AG already has an advantage in every area.
3) However, that is not the same as saying it will win every game. Through its abundance of data, its ability to fully analyze that data, and the mechanism which allows it to assign probabilities to board situations, it's got to be the heavy favorite.
4) If it has a weakness it lies in this intuitive, probability assigning mechanism. In addition, it seems to be inferred that AG is analyzing its opponent through the moves the opponent makes; under an assumption that this opponent is of one and only one archetype; though it may adjust that later; I don't know.
So, at this level, the simplicity of the whole thing is that AG's opponent has to know exactly how this intuitive mechanism works, and then find plays to which AG will incorrectly assign low probabilities. The opponent would do well to also find out exactly what AG does in assigning an archetype, and screw that up through style changes.
I think the sports analogies are going to break down at this point; but there may be one more, as this is beginning to sound as if an AG strength is in its ability to adjust. It's a baseball pitching technique called "pitching off the last pitch." It basically just means that if the batter got a good swing at the pitch the last time you threw it, throw something else next pitch, genius.

-Must not be made up of 46% racist dumb fucks (http://www.cnn.com/2016/12/21/politic...)
-doesn't hate immigrants
I'm virtually certain that it's learned to do it. Programming it would be completely contrary to the way AG has been developed.
Another thing that still bothers me is that only one of the 2 AIs involved did this. Of course they probably have some scenario-building mechanism in them and only one of them got the tenuki-scenario. But still, it's disconcerting.
How do you mean, one of them? I think the matches were between two copies of the same AI, and both sides play a lot of tenuki in all games. E.g. if you look at the first game here, you see White starting off with a bunch of tenuki plays, but Black joins in at stone 23 and then rather startlingly (at least to me) ignores the kikashi at 39.
You think soon we are all going to be out of jobs as the bots will be smarter than us by far? "
Oh, there are bound to be some jobs left that require humans. I don't see a robot physiotherapist any time soon.