AlphaGo Beats Top Player at Go - Share Your Thoughts

  • Thread starter: Buzz Bloom
Summary
The discussion centers on Google's groundbreaking achievement with AlphaGo, an AI that defeated a top human player in the complex game of Go, marking a significant leap in AI development. Participants highlight that Go is exponentially more complex than chess due to the vast number of possible moves and strategic considerations involved. The conversation touches on the implications of this advancement for artificial intelligence, with some expressing excitement about the potential of AI while others caution against equating AI capabilities with human intelligence. The role of intuition in both human and AI decision-making is debated, with some arguing that AI lacks true intuition as defined by human standards. The thread also includes personal anecdotes about learning Go and reflections on the future of AI in various applications, emphasizing the need for careful consideration of AI's ethical implications. Overall, the discussion reflects a mix of enthusiasm and skepticism regarding the future of AI and its relationship to human cognition.
  • #31
phinds said:
I assume this was tongue in cheek but in any case it's really unfortunate that Asimov's 3 laws are a total joke in practical terms.
I don't think that is so far off from the basic concept; I've given this a lot of thought. I'm not saying it is as simple as typing three strings of characters, sticking them somewhere in memory, and then they can't hurt us. It is a hierarchy of laws to keep in mind while building robots. And again, it's not as simple as "everyone smart enough to make robots will be smart enough to make them safe for human interaction", but it is like the foundation of a religion for robot designers. I don't even agree with the first law simply stating "human", as if any other forms of life are of less importance.
 
  • #32
jerromyjon said:
I don't think that is so far off from the basic concept; I've given this a lot of thought. I'm not saying it is as simple as typing three strings of characters, sticking them somewhere in memory, and then they can't hurt us. It is a hierarchy of laws to keep in mind while building robots. And again, it's not as simple as "everyone smart enough to make robots will be smart enough to make them safe for human interaction", but it is like the foundation of a religion for robot designers. I don't even agree with the first law simply stating "human", as if any other forms of life are of less importance.
Well, we're going to have to agree to disagree on this. I seriously think they are a joke, and I've thought about them since reading Asimov in the '50s. Loved the stories despite their implausibility.
 
  • #33
Planobilly said:
How can this be defined as "intuition"?
Hi @Planobilly:

I am not sure why you ask this, but I assume you have some concept of "intuition" that is distinctly different from the one described here:
Planobilly said:
These networks don’t operate by brute force or handcrafted rules. They analyze large amounts of data in an effort to “learn” a particular task. Feed enough photos of a wombat into a neural net, and it can learn to identify a wombat. Feed it enough spoken words, and it can learn to recognize what you say. Feed it enough Go moves, and it can learn to play Go.

Since I have no intuition about your concept of "intuition", I will start with some definitions from the Internet.
From https://www.wordnik.com/words/intuition
from The American Heritage® Dictionary of the English Language, 4th Edition
  • n. The act or faculty of knowing or sensing without the use of rational processes; immediate cognition. See Synonyms at reason.
  • n. Knowledge gained by the use of this faculty; a perceptive insight.
  • n. A sense of something not evident or deducible; an impression.
These definitions seem to emphasize that intuition is a process for acquiring understanding or knowledge that is not "rational", where "rational" implies what has been metaphorically called the "left-brain" functions. That is, intuition is metaphorically a right-brain function. Another distinction might be that rationality is a step-by-step, sequential, rule-based, deductive process, while intuition is a gestalt, holistic process. The quote from your post makes that distinction. That is, intuition is a non-rational response to a given situation based on an accumulation of previous experiences which are not consciously remembered when the intuitive response occurs.

I hope this is helpful.

Regards,
Buzz
 
  • #34
Monsterboy said:
I have never played Go, but I have played a lot of chess. This is a new must-learn game for me now, and I am very interested to learn the difference in the kind of thinking involved. Any advice from someone who knows both games will be appreciated.
I'll give two additional alternatives to @Buzz Bloom's already excellent answer. One is the 80-plus-year-old book by Edward Lasker, Go and Go-Moku. It's old, it's outdated, but it's cheap and it is written by someone fluent in English. Edward Lasker was a chess grandmaster who later found go to be a superior game. (And it is.)

The other alternative is the internet. There are lots of possibilities here, but the starting point has to be (IMHO) Sensei's Library, http://senseis.xmp.net.
 
Likes Monsterboy and Buzz Bloom
  • #35
Hi Buzz,

Thanks for responding. Human intelligence and artificial intelligence are two vastly different things, and so are human intuition and computer intuition, if computer intuition can even be said to exist in the first place.

Based on The American Heritage® Dictionary of the English Language, 4th Edition definitions of the word "intuition", is the AI computer program doing any of the following?
1. Knowing or sensing without the use of rational processes?
2. Gaining knowledge by a non-rational process?
3. Sensing something not evident or deducible?

I don't have my own special definition of the meaning of intuition. I just don't think one can call this AI program intuitive based on the standard definition of the word.
We like to imbue objects, both animate and inanimate, with human characteristics, which makes for interesting cartoons. Not so much in serious discussions of technology.

The above is not stated to detract from the considerable value of the work done by Google and the advancement in AI that the work represents. The AI program stands on its own merit and has no need to be embellished by comparison to ill-defined terms or conditions.

Cheers,

Billy
 
  • #36
DiracPool said:
I've been involved in the AI field since the late '80s and have seen many promising technologies come and go. You name it, I've seen it: PDP, "fuzzy logic," simulated annealing, attractor neural networks, reinforcement learning, etc. Each one of them promised the same as the article stated above...

AlphaGo is a reinforcement learning algorithm. It's a standard one, too; it's just that a neural net is used as a function approximator, with lots of free parameters, for one of the standard functions in a standard reinforcement learning algorithm. The technology is RL (1980s) and ANNs (1970s for the backpropagation).

Apart from faster computers, the main advance for backpropagation since the 1970s is the finding that initializing the weights in a certain way allows backpropagation to reach a decent solution much more quickly.
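
To make that concrete, here is a toy sketch (entirely my own, nothing like AlphaGo's scale, architecture, or training setup) of the standard combination: a TD/Q-learning update in which a tiny neural net, trained by backpropagation, stands in for the usual lookup table. The chain environment and every name in it are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny one-hidden-layer net mapping a state vector to one Q-value per action.
N_STATES, N_ACTIONS, HIDDEN = 5, 2, 16
W1 = rng.normal(0.0, 0.5, (HIDDEN, N_STATES))  # the "careful init" point above
W2 = rng.normal(0.0, 0.5, (N_ACTIONS, HIDDEN))

def q_values(s):
    h = np.tanh(W1 @ s)          # hidden activations
    return W2 @ h, h

def one_hot(i, n):
    v = np.zeros(n)
    v[i] = 1.0
    return v

alpha, gamma, eps = 0.05, 0.9, 0.1
for episode in range(500):
    state = 2                    # start mid-chain; reward at the right end
    for _ in range(20):
        s = one_hot(state, N_STATES)
        q, h = q_values(s)
        a = int(rng.integers(N_ACTIONS)) if rng.random() < eps else int(np.argmax(q))
        state2 = min(max(state + (1 if a == 1 else -1), 0), N_STATES - 1)
        reward = 1.0 if state2 == N_STATES - 1 else 0.0
        q2, _ = q_values(one_hot(state2, N_STATES))
        target = reward if reward else gamma * np.max(q2)
        td = target - q[a]
        # Backpropagate the TD error through both layers (semi-gradient update).
        grad_W2a = td * h
        grad_W1 = np.outer(td * W2[a] * (1.0 - h**2), s)
        W2[a] += alpha * grad_W2a
        W1 += alpha * grad_W1
        state = state2
        if reward:
            break

print(np.argmax(q_values(one_hot(2, N_STATES))[0]))  # should prefer "right" (1)
```

The point of the sketch is only the division of labor: the RL rule supplies the training target, and backpropagation fits the net to it.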
 
  • #37
phinds said:
Well, we're going to have to agree to disagree on this.
The only thing I can think you mean is as a deterrent to making a Terminator.
Greg Bernhardt said:
Well there you have it, The Terminator is not far off.
Yep, the good one should be coming back to protect John soon.
 
Likes atyy
  • #38
Planobilly said:
I don't have my own special definition of the meaning of intuition. I just don't think one can call this AI program intuitive based on the standard definition of the word.
We like to imbue objects, both animate and inanimate, with human characteristics, which makes for interesting cartoons. Not so much in serious discussions of technology.
Hi @Planobilly:

It was not my intention to imbue AI with human qualities. I think if we have a disagreement, it is not about concepts or the limits of AI; it is about the use of vocabulary.

Chess AIs mostly use processes that have a descriptive similarity to human "rational" processing. In AI's early history, that was the general method of choice: sequential and rule-based. When a move choice was made, it was generally possible to describe why.

AlphaGo makes much greater use of "non-rational" processes that are similar to human pattern recognition, in that the details of how the recognition of a complex pattern occurs are not observable, and there is no specific rational explanation of why a particular complex pattern is classified as belonging to a particular category. Therefore, by analogy (or metaphor) it seems natural to describe the behavior as non-rational, or intuitive. There is no implication that the chess AI's "rationality" is the same as a human's, nor that AlphaGo's "intuition" is the same as a human's.
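
To illustrate the vocabulary I am using (with a toy of my own invention, not anything from an actual chess or Go engine): in the first style, every step of the choice can be narrated as a rule; in the second, the answer comes from an accumulation of stored experiences, and no step is a statable rule.

```python
import numpy as np

# "Rational" style: the reason for the choice can be stated.
def rule_based(position):
    # "Capture because the material gain is positive" -- every step narratable.
    return "capture" if position["material_gain"] > 0 else "defend"

# "Intuitive" style: answer by similarity to accumulated past experiences.
# (Random stand-in data; in a real system these would be remembered positions.)
rng = np.random.default_rng(0)
experiences = rng.normal(size=(1000, 8))              # past position features
outcomes = (experiences.sum(axis=1) > 0).astype(int)  # which ones worked out

def experience_based(position_vec, k=15):
    dists = np.linalg.norm(experiences - position_vec, axis=1)
    nearest = outcomes[np.argsort(dists)[:k]]
    # No step here is a statable rule; the answer is "it resembles what worked".
    return "good shape" if nearest.mean() > 0.5 else "bad shape"

print(rule_based({"material_gain": 1}))   # capture
print(experience_based(np.ones(8)))       # most likely "good shape"
```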

Regards,
Buzz
 
Last edited:
  • #39
jerromyjon said:
The only thing I can think you mean is as a deterrent to making a Terminator.
I have no idea what you are talking about. I think they are a joke. They are useless. They are not going to happen.
 
  • #40
D H said:
Edward Lasker was a chess grandmaster who later found go to be a superior game.
Hi @D H:

I would like to add that Edward Lasker invented the relatively unknown checkers variation that I have never heard called anything other than "Laskers". Laskers is much more complicated than checkers, but still less complicated than chess. I was unable to find a reference to this checkers variation on the internet. The variation involves the following changes:
1. The two sides of each checker are distinguishable. One side is called the "plain" side; the other is the "king" side.
2. A capture move captures only the top checker of a stack, and this checker is not removed from the board. It is put on the bottom of the capturing stack.
3. The color on top of a stack determines which player can move the stack.
4. A stack with a plain side on top (a "plain stack") moves like a non-king in checkers. When a plain stack reaches the last rank, the top checker is turned over, the stack becomes a king stack, and it moves like a king in checkers.
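
For concreteness, here is a toy sketch of just the stack mechanics in those rules (my own code and naming, nothing authoritative; board geometry and legal-move generation are omitted):

```python
PLAIN, KING = "plain", "king"

# A stack is a list of (color, side) pairs; index 0 is the top of the stack.

def owner(stack):
    """The color showing on top determines which player moves the stack."""
    return stack[0][0]

def capture(capturing, captured):
    """A jump takes only the top checker of the captured stack and puts it
    on the bottom of the capturing stack; nothing leaves the board."""
    capturing.append(captured.pop(0))
    return capturing, captured   # the captured stack may survive, smaller

def promote(stack):
    """On reaching the last rank, the top checker is turned over."""
    color, _ = stack[0]
    stack[0] = (color, KING)
    return stack

# Example: a single white checker jumps a black-controlled stack of two.
white = [("white", PLAIN)]
black = [("black", PLAIN), ("white", PLAIN)]   # black on top, so black moves it
white, black = capture(white, black)
print(owner(white))   # white -- now a stack of two, white still on top
print(owner(black))   # white -- the freed bottom checker changed hands
```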

Regards,
Buzz
 
  • #41
Hi Buzz,

Buzz Bloom said:
It was not my intention to imbue AI with human qualities. I think if we have a disagreement, it is not about concepts or the limits of AI; it is about the use of vocabulary.

Yes, I am 100% in agreement with that statement.

I also "intuitively" think (lol) AI will ultimately advance to the point where it has the capability too match human abilities in many areas and exceed them in other areas.
The reason for my assumption is that there are truly huge amounts of money to be made from the development of a functional AI system applied to issues like weather forecasting. How long that will take I have no idea. The fact that it has not been done yet only indicates to me that it is not so easy to do.

AI has the possibility of being the "machine" that can provide answers to truly complex issues. I for one am glad to see Google investing money in this technology.

Cheers,

Billy
 
  • #42
Planobilly said:
I also "intuitively" think (lol) AI will ultimately advance to the point where it has the capability too match human abilities in many areas and exceed them in other areas.
Hi @Planobilly:

I recognize and sometimes envy the optimism of your "intuition". I "rationally" (not lol) have pessimistic thoughts, not about AI limits, but about the likelihood that before AI can achieve these benefits, the consequences of negative aspects in our culture (global warming, pollution, overuse of critical resources, overpopulation, extreme wealth inequality, etc.) will destroy the necessary physical and social infrastructure that supports technological progress.

Regards,
Buzz
 
Likes billy_joule
  • #43
Buzz Bloom said:
Hi @Planobilly:

I recognize and sometimes envy the optimism of your "intuition". I "rationally" (not lol) have pessimistic thoughts, not about AI limits, but about the likelihood that before AI can achieve these benefits, the consequences of negative aspects in our culture (global warming, pollution, overuse of critical resources, overpopulation, extreme wealth inequality, etc.) will destroy the necessary physical and social infrastructure that supports technological progress.

Regards,
Buzz
Well, if you want things to worry about, you could add the "AI tipping point" to the list. That's when the machines become smart enough to design/build better machines. Some people believe that will happen and it will have a snowball effect on AI. Whether that's a good thing or a bad thing for humanity is very much an open question, but worriers worry about it.
 
  • #44
Hi Buzz,

Buzz Bloom said:
(global warming, pollution, overuse of critical resources, overpopulation, extreme wealth inequality, etc.)

I also am painfully aware of the above. Perhaps we humans are preprogrammed to destroy ourselves, perhaps not. There is a very wide difference in what we are involved in: in one place people are driving robotic cars around on another planet, and in another place people are lopping off heads. Strange world we live in.

Better for Google to develop AI than to involve itself in developing the next new way to destroy things!
As far as the "machines" taking over, based on my computer, I don't think we have much to worry about...lol My machine appears to be about as smart as a retarded cock roach...lol
Cheers,

Billy
 
Likes Buzz Bloom
  • #45
Planobilly said:
As far as the "machines" taking over, based on my computer, I don't think we have much to worry about...
Just think what an AI could do to your computer and to every other computer on the planet. Your "retarded cockroach" of a computer, even with you operating it, stands no chance...
 
  • #46
phinds said:
Well, if you want things to worry about, you could add the "AI tipping point" to the list. That's when the machines become smart enough to design/build better machines. Some people believe that will happen and it will have a snowball effect on AI. Whether that's a good thing or a bad thing for humanity is very much an open question, but worriers worry about it.

This is a good point, and I think for the purposes of my post here readers can reference this related thread:

https://www.physicsforums.com/threa...ence-for-human-evolution.854382/#post-5371222

I have a clear view as to where I think "biologically inspired," if you will, machine intelligence is heading. The moniker "artificial intelligence" sounds cool, but it carries the baggage of 40 years of failure, so I don't like to speak of AI, strong or not-so-strong, etc., for fear of being guilty by association.

That said, I feel I can speak to where machine intelligence is heading because I'm part of the effort to forward this advancement. I'm not currently doing this in an official capacity, but I'm confident I'll be accepted into a major program here come next fall.

So now that you're fully aware of my lack of qualifications, I will give you my fully qualified predictions for the next 100 years:

1) Humans and biological creatures will be around as long as "the robots" can viably keep them around. I don't think the robots are going to want to kill the flora and the fauna or the humans and the centipedes any more than we (most of us) want to. If we program them correctly, they will look at us like grandma and grandpa, and want to keep us around as long as possible, despite our aging and obsolete architecture.

2) Within 74 years, we (biological humans) will be sending swarms of "robo-nauts" out into the cosmos chasing the tails of the Mariner and Voyager probes. These will be "interstellar" robo-organisms which may, on transit, build a third-tier intergalactic offspring. How will they survive the long transit? Well, they have a number of options we humans don't have; First, they don't need food or any of the "soft" emotional needs that humans do. Ostensibly, they can recharge their batteries from some sort of momentum/interstellar dust kind of thing. Or maybe island hopping for natural resources on the nearest asteroid? Please don't ruin my vision with extraneous details...

Second, they don't need any of the fancy cryogenic "put the human to sleep" technology, which is a laugh. Don't get me started on the myriad complications that can arise from this on long-distance travels. Suffice it to say that this is not going to be the future of "Earthling" interstellar travel. In fact, I can (almost) guarantee you that we biological sacks of Earth chemicals will never make it past Mars, so we better grab Mars while we still have the chance.

The future is going to be robotic implementations of the biological mechanism in our brains that generates our creative human cognition. Unless we destroy ourselves first, I think this is an inevitability. And I don't think it's a bad thing at all. We want our children to do better than us and be stronger than us; this is built into our DNA (metaphorically speaking). Why would we not want to build children in our likeness, though not necessarily in our carbon-ness, that excel and exceed our capabilities? This is, of course, in the spirit of what the core of the human intellect is...
 
  • #47
DiracPool said:
2) Within 74 years, we (biological humans) will be sending swarms of "robo-nauts" out into the cosmos
74? Not 73? :biggrin:
 
  • #48
PAllen said:
74? Not 73? :biggrin:

Actually, it's 73 years 8 months (August). I just rounded up. But who's counting...:rolleyes:
 
  • #49
AlphaGo won the first game against Lee Sedol, 9p! This is enormously more of an accomplishment than the first computer victory over Kasparov.
 
Last edited:
Likes fluidistic and Greg Bernhardt
  • #50
Hi Paul:

I downloaded the score of the game as an SGF file. I found several free SGF readers available online, but I could find no information about the sites' reliability. Can you please recommend a site from which I can download a safe SGF reader?

Regards,
Buzz
 
  • #51
Buzz Bloom said:
Hi Paul:

I downloaded the score of the game as an SGF file. I found several free SGF readers available online, but I could find no information about the sites' reliability. Can you please recommend a site from which I can download a safe SGF reader?

Regards,
Buzz
No, because I don't have one. My background in Go is the following:

- read half of one beginner's book
- played about 15 games total, 5 of them online

I went through a phase of being obsessed with the different rule sets from the point of view of their impact on possibly programming Go (which I never actually did). However, there are many websites where you can play through this game (I have done so several times already, despite my minimal playing strength):

https://gogameguru.com/alphago-defeats-lee-sedol-game-1/
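
That said, an SGF file is just plain text, so if you would rather step through the moves yourself, a few lines of script will pull out the move list. A rough sketch of mine (untested against edge cases; it assumes the standard ;B[dd] / ;W[pq] coordinate nodes that 19x19 game records use, and the filename is hypothetical):

```python
import re

# Hypothetical filename -- use whatever you saved the game record as.
with open("alphago-lee-sedol-game1.sgf") as f:
    sgf = f.read()

# Moves look like ;B[pd] or ;W[dp]; an empty ;B[] is a pass.
moves = re.findall(r";([BW])\[([a-s]{2})?\]", sgf)
for n, (color, coord) in enumerate(moves, 1):
    print(n, color, coord if coord else "pass")
```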
 
Likes jerromyjon
  • #55
PAllen said:
AlphaGo won the first game against Lee Sedol, 9p! This is enormously more of an accomplishment than the first computer victory over Kasparov.

How big is AlphaGo (the one playing Lee Sedol)? Is it a desktop, or some distributed thing?
 
  • #56
atyy said:
How big is AlphaGo (the one playing Lee Sedol)? Is it a desktop, or some distributed thing?
It is distributed, with an enormous number of total cores. I don't have detailed figures, but in pure cycles and memory it dwarfs the computer that first beat Kasparov. What remains remarkable to me is that even a few years ago, AI optimists thought beating a top Go player was decades away, irrespective of compute power. It wasn't that many years ago that Janice Kim (3-dan professional) beat the top go program with a 20-stone handicap!
 
  • #57
PAllen said:
It is distributed, with an enormous number of total cores. I don't have detailed figures, but in pure cycles and memory it dwarfs the computer that first beat Kasparov. What remains remarkable to me is that even a few years ago, AI optimists thought beating a top Go player was decades away, irrespective of compute power. It wasn't that many years ago that Janice Kim (3-dan professional) beat the top go program with a 20-stone handicap!

Lee Sedol was just careless. He figured the thing out in game 4 :P
 
  • #58
Here is a nice summary of the final result (4-1 for AlphaGo): https://gogameguru.com/alphago-defeats-lee-sedol-4-1/

It seems to me that this program is broadly in the same category as chess programs in relation to top human players (perhaps more like when Kramnik still won one out of four against a program). However, the following qualitative points apply to both:

1) Expert humans have identifiable superiorities to the program.
2) The program has identifiable superiorities to expert humans.
3) Absence of lapses in concentration and errors (by the programs), combined with superior strength in common situations, makes a direct matchup lopsided.
4) A centaur (a human + computer combination) is reliably superior to the computer alone.

We cannot say that human experts have no demonstrable understanding that computers lack until a centaur is no stronger than the computer alone, and that state is reached only when the human contributes nothing (i.e., any human choice different from the machine's is likely worse).
 
Last edited:
Likes Monsterboy
  • #59
PAllen said:
Here is a nice summary of the final result (4-1 for AlphaGo): https://gogameguru.com/alphago-defeats-lee-sedol-4-1/

It seems to me that this program is broadly in the same category as chess programs in relation to top human players (perhaps more like when Kramnik still won one out of four against a program). However, the following qualitative points apply to both:

1) Expert humans have identifiable superiorities to the program.
2) The program has identifiable superiorities to expert humans.
3) Absence of lapses in concentration and errors (by the programs), combined with superior strength in common situations, makes a direct matchup lopsided.
4) A centaur (a human + computer combination) is reliably superior to the computer alone.

We cannot say that human experts have no demonstrable understanding that computers lack until a centaur is no stronger than the computer alone, and that state is reached only when the human contributes nothing (i.e., any human choice different from the machine's is likely worse).
In chess, centaurs are weaker than the program alone. See https://www.chess.com/news/stockfish-outlasts-nakamura-3634 where Nakamura, helped by Rybka, lost to (a weakened version of) Stockfish.
There's been huge progress in terms of Elo for the top programs in recent years, mostly due to fishtest (http://tests.stockfishchess.org/tests), an open source platform where anyone can test ideas in Stockfish's code. In order for a patch to be committed, the idea is tested by self-play over thousands of games to determine whether it is an improvement or not. The hardware is contributed by volunteers, just like you and me. The overall result is that this has added about 50 Elo per year over the last 4 years or so, and closed source programs like Komodo have also benefited from it (by trying out the ideas).
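
As a toy illustration of why thousands of games are needed (a simplification of mine; fishtest actually runs a sequential statistical test, not a fixed-length match), here is how a self-play score translates into Elo, with the statistical noise attached. The match result below is made up:

```python
import math

wins, losses, draws = 1250, 1150, 2600          # made-up 5000-game match
games = wins + losses + draws
score = (wins + 0.5 * draws) / games             # 0.51

def to_elo(s):
    # Standard logistic Elo model: score s <-> Elo difference.
    return -400 * math.log10(1 / s - 1)

# Crude per-game variance of the result, then a ~95% interval on the score.
var = (wins * (1 - score) ** 2 + draws * (0.5 - score) ** 2
       + losses * (0 - score) ** 2) / games
se = math.sqrt(var / games)
print(f"{to_elo(score):+.1f} Elo "
      f"(roughly {to_elo(score - 2*se):+.1f} to {to_elo(score + 2*se):+.1f})")
# A 1% score edge is only ~7 Elo, and even 5000 games leave several Elo
# of uncertainty -- hence the huge volunteer-run game counts.
```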
Programs are so superior to humans that a grandmaster intervening in the program's play only weakens it.
Sure, it's easy to cherry-pick a position where a program makes a mistake and claim that it's easy for a human to recognize it; or to find a position where the program misunderstands things, like a fortress with blocking pawns where you add queens and other good pieces for one side. The computer is generally going to give an insane evaluation despite a clear draw. But the reality is that these positions are so rare that they almost never occur in a match or in hundreds of games.
 
  • #60
fluidistic said:
In chess, centaurs are weaker than the program alone. See https://www.chess.com/news/stockfish-outlasts-nakamura-3634 where Nakamura, helped by Rybka, lost to (a weakened version of) Stockfish.
That's not a good example, because Nakamura is not an experienced centaur. The domain of postal chess, which is all centaurs (officially allowed and required now), proves on a regular basis that anyone using only today's latest program is slaughtered by players combining their intelligence with a program. Not a single such tournament has been won by someone just taking machine moves (and there are always people trying that, with the latest and greatest engines).
fluidistic said:
There's been huge progress in terms of Elo for the top programs in recent years, mostly due to fishtest (http://tests.stockfishchess.org/tests), an open source platform where anyone can test ideas in Stockfish's code. In order for a patch to be committed, the idea is tested by self-play over thousands of games to determine whether it is an improvement or not. The hardware is contributed by volunteers, just like you and me. The overall result is that this has added about 50 Elo per year over the last 4 years or so, and closed source programs like Komodo have also benefited from it (by trying out the ideas).
Programs are so superior to humans that a grandmaster intervening in the program's play only weakens it.
This is just wrong. See above.
 
