AlphaGo Beats Top Player at Go - Share Your Thoughts

  • Thread starter: Buzz Bloom
AI Thread Summary
The discussion centers on Google's groundbreaking achievement with AlphaGo, an AI that defeated a top human player in the complex game of Go, marking a significant leap in AI development. Participants highlight that Go is exponentially more complex than chess due to the vast number of possible moves and strategic considerations involved. The conversation touches on the implications of this advancement for artificial intelligence, with some expressing excitement about the potential of AI while others caution against equating AI capabilities with human intelligence. The role of intuition in both human and AI decision-making is debated, with some arguing that AI lacks true intuition as defined by human standards. The thread also includes personal anecdotes about learning Go and reflections on the future of AI in various applications, emphasizing the need for careful consideration of AI's ethical implications. Overall, the discussion reflects a mix of enthusiasm and skepticism regarding the future of AI and its relationship to human cognition.
  • #51
Buzz Bloom said:
Hi Paul:

I downloaded the score of the game as an SGF file. I found several free SGF readers available online, but I could find no information about the sites' reliability. Can you please recommend a site from which I can download a safe SGF reader?

Regards,
Buzz
No, because I don't have one. My background in Go is the following:

- read half of one beginners' book
- played about 15 games in total, mostly in person, 5 of them online

I went through a phase of being obsessed with the different rule sets and how they would affect programming Go (which I never actually did). However, there are many websites where you can play through this game (I have done so several times already, despite my minimal playing strength):

https://gogameguru.com/alphago-defeats-lee-sedol-game-1/
 
  • #55
PAllen said:
AlphaGo won the first game against Lee Sedol, 9p! This is a far greater accomplishment than the first computer victory over Kasparov.

How big is AlphaGo (the one playing Lee Sedol)? Is it a desktop, or some distributed thing?
 
  • #56
atyy said:
How big is AlphaGo (the one playing Lee Sedol)? Is it a desktop, or some distributed thing?
It is distributed, with an enormous number of cores in total. I don't have detailed figures, but in pure cycles and memory it dwarfs the computer that first beat Kasparov. What remains remarkable to me is that even a few years ago, AI optimists thought beating a top Go player was decades away, irrespective of compute power. It wasn't that many years ago that Janice Kim (3-dan professional) beat the top Go program with a 20-stone handicap!
 
  • #57
PAllen said:
It is distributed, with an enormous number of cores in total. I don't have detailed figures, but in pure cycles and memory it dwarfs the computer that first beat Kasparov. What remains remarkable to me is that even a few years ago, AI optimists thought beating a top Go player was decades away, irrespective of compute power. It wasn't that many years ago that Janice Kim (3-dan professional) beat the top Go program with a 20-stone handicap!

Lee Sedol was just careless. He figured the thing out in game 4 :P
 
  • #58
Here is a nice summary of the final result (4-1 for AlphaGo): https://gogameguru.com/alphago-defeats-lee-sedol-4-1/

It seems to me that this program is broadly in the same category as chess programs in relation to top human players (perhaps more like when Kramnik still won one game out of four against a program). However, the following qualitative points apply to both:

1) Expert humans have identifiable superiorities to the program.
2) The program has identifiable superiorities to expert humans.
3) The programs' freedom from lapses in concentration and from errors, combined with superior strength in common situations, makes a direct match-up lopsided.
4) A centaur (human + computer combination) is reliably superior to the computer alone.

We cannot say that human experts have no demonstrable understanding that computers lack until a centaur is no stronger than the computer alone, and until that state is reached only because the human contributes nothing (i.e. any human choice that differs from the machine's is likely worse).
 
  • #59
PAllen said:
Here is a nice summary of the final result (4-1 for AlphaGo): https://gogameguru.com/alphago-defeats-lee-sedol-4-1/

It seems to me that this program is broadly in the same category as chess programs in relation to top human players (perhaps more like when Kramnik still won one game out of four against a program). However, the following qualitative points apply to both:

1) Expert humans have identifiable superiorities to the program.
2) The program has identifiable superiorities to expert humans.
3) The programs' freedom from lapses in concentration and from errors, combined with superior strength in common situations, makes a direct match-up lopsided.
4) A centaur (human + computer combination) is reliably superior to the computer alone.

We cannot say that human experts have no demonstrable understanding that computers lack until a centaur is no stronger than the computer alone, and until that state is reached only because the human contributes nothing (i.e. any human choice that differs from the machine's is likely worse).
In chess, centaurs are weaker than the program alone. See https://www.chess.com/news/stockfish-outlasts-nakamura-3634 where Nakamura, helped by Rybka, lost to (a weakened version of) Stockfish.
There has been huge progress in the Elo of the top programs in recent years, mostly due to fishtest (http://tests.stockfishchess.org/tests), an open-source platform where anyone can test ideas in Stockfish's code. For a patch to be committed, the idea is tested by self-play over thousands of games to determine whether it is an improvement. The hardware comes from volunteers, just like you and me. The overall result is a gain of about 50 Elo per year over the last 4 years or so, and closed-source programs like Komodo have also benefited from it (by trying out the ideas).
Programs are so superior to humans that a grandmaster intervening in the program's play only weakens it.
Sure, it's easy to cherry-pick a position where a program makes a mistake and claim that a human would easily recognize it, or to construct a position the program misunderstands, like a fortress with blocking pawns where one side has extra queens and other strong pieces. The computer will generally give a wildly lopsided evaluation despite a clear draw. But in reality such positions are so rare that they almost never occur in a match, or even over hundreds of games.
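For a sense of scale for those Elo figures, here is a small Python sketch of the standard logistic-Elo arithmetic that turns a self-play score into a rating difference. The game counts in the example are made up for illustration, and fishtest itself uses more careful sequential statistics rather than a single-shot estimate like this.

Code:
import math

def elo_difference(wins: int, draws: int, losses: int) -> float:
    """Elo difference implied by a match result, using the logistic model:
    a score fraction s corresponds to a rating edge of -400 * log10(1/s - 1)."""
    games = wins + draws + losses
    score = (wins + 0.5 * draws) / games
    if not 0 < score < 1:
        raise ValueError("score fraction must be strictly between 0 and 1")
    return -400 * math.log10(1 / score - 1)

# Hypothetical example: a patch that scores 50.7% over 20,000 self-play games
# is worth roughly +5 Elo under this model.
print(round(elo_difference(wins=5140, draws=10000, losses=4860), 1))  # ~4.9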
 
  • #60
fluidistic said:
In chess, centaurs are weaker than the program alone. See https://www.chess.com/news/stockfish-outlasts-nakamura-3634 where Nakamura, helped by Rybka, lost to (a weakened version of) Stockfish.
That's not a good example, because Nakamura is not an experienced centaur. The domain of postal chess, which is all centaurs (officially allowed and required now), proves on a regular basis that anyone using only today's latest program is slaughtered by players who combine their own intelligence with a program. Not a single such tournament has been won by someone just playing the machine's moves (and there are always people trying that, with the latest and greatest engines).
fluidistic said:
There has been huge progress in the Elo of the top programs in recent years, mostly due to fishtest (http://tests.stockfishchess.org/tests), an open-source platform where anyone can test ideas in Stockfish's code. For a patch to be committed, the idea is tested by self-play over thousands of games to determine whether it is an improvement. The hardware comes from volunteers, just like you and me. The overall result is a gain of about 50 Elo per year over the last 4 years or so, and closed-source programs like Komodo have also benefited from it (by trying out the ideas).
Programs are so superior to humans that a grandmaster intervening in the program's play only weakens it.
This is just wrong. See above.
 
  • #61
PAllen said:
That's not a good example, because Nakamura is not an experienced centaur. The domain of postal chess, which is all centaurs (officially allowed and required now), proves on a regular basis that anyone using only today's latest program is slaughtered by players who combine their own intelligence with a program. Not a single such tournament has been won by someone just playing the machine's moves (and there are always people trying that, with the latest and greatest engines).

This is just wrong. See above.
I stand corrected about correspondence chess. Even people under 2000 Elo can indeed beat the strongest programs under such time controls, with the liberty to use any program, etc.
I do maintain my claim about the Elo progress of programs; I don't see what's wrong with it (yet, at least).

Edit: I am not sure how such weak chess players manage to beat the strongest programs. One guess I have is that they use multi-PV mode to see the best moves according to a strong engine, then investigate each of these lines with another computer and pick the best one. In fact, no chess knowledge is required to do that; a script could do it.
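To make that guess concrete, here is a minimal sketch of the idea in Python using the python-chess library; the Stockfish path, the search depths, and the choice of three candidate lines are illustrative assumptions on my part, not details anyone has confirmed.

Code:
import chess
import chess.engine

def pick_move(board: chess.Board, engine_path: str = "stockfish") -> chess.Move:
    """Ask an engine for its top candidate moves (MultiPV), then re-check each
    candidate with a deeper search and return the best-scoring one."""
    with chess.engine.SimpleEngine.popen_uci(engine_path) as engine:
        # Step 1: get the top 3 candidate moves from a moderate-depth search.
        candidates = engine.analyse(board, chess.engine.Limit(depth=18), multipv=3)

        best_move, best_score = None, None
        for info in candidates:
            move = info["pv"][0]
            # Step 2: play the candidate and analyse the resulting position more
            # deeply (the scenario above uses a second computer for this step;
            # here it is simply a deeper search by the same engine).
            board.push(move)
            deeper = engine.analyse(board, chess.engine.Limit(depth=24))
            board.pop()
            # Evaluate from the point of view of the side choosing the move.
            score = deeper["score"].pov(board.turn).score(mate_score=100000)
            if best_score is None or score > best_score:
                best_move, best_score = move, score
    return best_move

# Example: pick a move from the starting position.
# print(pick_move(chess.Board()))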
 
  • #62
fluidistic said:
I stand corrected about correspondence chess. Even people under 2000 Elo can indeed beat the strongest programs under such time controls, with the liberty to use any program, etc.
I do maintain my claim about the Elo progress of programs; I don't see what's wrong with it (yet, at least).

Edit: I am not sure how such weak chess players manage to beat the strongest programs. One guess I have is that they use multi-PV mode to see the best moves according to a strong engine, then investigate each of these lines with another computer and pick the best one. In fact, no chess knowledge is required to do that; a script could do it.
The winning correspondence players don't just do this. An example where expert (but not world-class) knowledge helps is the early endgame phase. With tablebases, computers have perfect knowledge of endings with up to 6 pieces. However, they have no knowledge of the types of rook endings with, e.g., a one- or two-pawn advantage that are drawn when more than 6 pieces (pawns and kings included) remain. Thus a computer with a pawn advantage will not know how to avoid such endings (allowing a centaur to draw), and a computer at the disadvantage may lose unnecessarily to a centaur by not seeking such a position. You need a lot less than grandmaster knowledge to push programs in the right direction in such cases.

Rather than being exotically rare, such endgames are perhaps the most common type.
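For what it's worth, the "perfect knowledge up to 6 pieces" comes from endgame tablebases, and probing one is straightforward. Below is a minimal Python sketch with the python-chess library, assuming Syzygy tablebase files have already been downloaded to a local directory; the path and the sample position are placeholders of my own, not anything from the thread.

Code:
import chess
import chess.syzygy

# A 5-piece position (white: K, R, P; black: K, R), within Syzygy coverage.
board = chess.Board("8/8/8/4k3/8/4K3/4P3/4R2r w - - 0 1")

with chess.syzygy.open_tablebase("/path/to/syzygy") as tablebase:
    # probe_wdl returns the theoretical result for the side to move:
    # 2 = win, 0 = draw, -2 = loss (1 and -1 are cursed/blessed results).
    print(tablebase.probe_wdl(board))
    # probe_dtz gives the distance to a zeroing move, useful for converting
    # a tablebase win in practice.
    print(tablebase.probe_dtz(board))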
 
  • #63
Here are some interesting failures of neural networks.

http://www.slate.com/articles/techn...ence_can_t_recognize_these_simple_images.html

http://gizmodo.com/this-neural-networks-hilariously-bad-image-descriptions-1730844528

On the linguistic front, here is today's example from Google Translate. I first tried a much longer translation cycle with the following English sentence, which failed horribly. So I decided to try what I thought would be much easier: a simple English -> Chinese -> English round trip. I could come up with a sentence that would be easy for the software to parse, but that's not the point; I'm trying to come up with a sentence that we sloppy humans might read and understand.

English:

It seems highly improbable that humans are the only intelligent life in the universe, since we must assume that the evolution of life elsewhere occurs the same way, solving the same types of problem, as it does here on our home planet.

Chinese:

人类是宇宙中唯一的智慧生命似乎是不可能的,因为我们必须假设其他地方的生命的演变是同样的方式,解决相同类型的问题,就像在我们的家庭星球上。

English:

Humanity is the only intelligent life in the universe that seems impossible because we have to assume that the evolution of life elsewhere is the same way that the same type of problem is solved, just as on our home planet.
 
  • #64
If anyone here is interested in hearing more detailed discussion on machine learning with an emphasis on future AGI (they also talk about AlphaGo in several instances, I believe), check out the conference recently hosted by Max Tegmark. Here's an article explaining more about it: Beneficial AI conference develops ‘Asilomar AI principles’ to guide future AI research. The Future of Life Institute also has a YouTube channel here where more presentations from the conference can be viewed. There were some fantastic talks by high-level contributors to the field like Yoshua Bengio, Yann LeCun, and Jürgen Schmidhuber.
 