Mind boggling machine learning results from AlphaZero


Discussion Overview

The discussion centers around the capabilities and implications of AlphaZero, a self-learning algorithm that has demonstrated remarkable performance in chess and other games. Participants explore its learning process, comparisons to human abilities, and the broader impact of AI advancements in machine learning.

Discussion Character

  • Exploratory
  • Debate/contested
  • Technical explanation

Main Points Raised

  • Some participants express skepticism about how rapidly AI is advancing, even while noting AlphaZero's impressive results compared with traditional chess programs.
  • Others highlight the extensive training required for AlphaZero, citing that it took millions of games to achieve its level of play, which contrasts with human experience.
  • A few participants discuss the potential for Moore's law to apply to AI advancements, suggesting that improvements may come from better algorithms rather than just faster hardware.
  • There is a recognition that AlphaZero's success is notable within a closed domain, raising questions about its applicability to open domains or general AI.
  • Some participants reflect on the implications of AI in competitive environments, noting that AlphaZero has discovered new strategies that have influenced human players.
  • Concerns are raised about the perception of intelligence in relation to chess and Go, with some questioning the comparison between human and machine capabilities.
  • One participant shares a personal perspective on the enjoyment of chess versus other sports, suggesting that mastery in chess requires extensive memorization.

Areas of Agreement / Disagreement

Participants express a mix of admiration for AlphaZero's achievements and skepticism about the broader implications of AI. There is no consensus on the potential for AI to rival human intelligence or the significance of its advancements in a general context.

Contextual Notes

Some limitations in the discussion include the dependence on specific definitions of intelligence and the unresolved nature of how AI advancements may translate to open-ended problem-solving.

  • #91
PAllen said:
Problem is, nobody knows how many positions humans consider, because humans cannot accurately report on both conscious and unconscious thought.
To illustrate this point, I remember a game from when I used to play weekend chess tournaments. I was rated about 1800, so a decent player. Having blown a big advantage, I was losing to a slightly weaker opponent when, to my horror, I noticed my opponent had a checkmate in one, which he obviously hadn't seen.

While he was thinking, more and more people crowded round our board. I was praying they would all go away, but my opponent never noticed. When he finally moved, not the checkmate, a roar of laughter went up and I slumped back in my chair. Only then did my opponent notice the crowd!

So, what on Earth was he thinking? What moves was he looking at and why didn't he notice the mate in one?

Sometimes I think looking at the top players doesn't help us understand human thought, because they are so exceptional. Looking at what an average player does is perhaps more interesting.
 
  • #92
Hendrik Boom said:
I keep wondering whether this technology would be applicable to solving mathematics problems, perhaps defining the game rules by some formal system. What I find hard to imagine is how to formulate the state of a partial proof in a form that can be the input of an artificial neural net.

Yeah, good question. I know mathematical proofs were one of the first things they tried to unleash computers on in the 50s, and where they met their first failures in making machines think. There's more to it than just formal logic, it seems.

Thinking about the Bridges of Königsberg problem solved by Euler: you have this question that seems to involve all this complexity, but you discard a lot of data to get down to the simplest representation, and in that context you break down the notion of travel until the negative result is obvious, is proven. And it's proven to us because in that simple form we can understand it; we don't have the cognitive power to brute-force it.

How did Euler's brain know not to think about the complete path, but rather just a single node (in his newly created graph theory), to find the solution for all complete paths? It's hard to imagine a neural net doing this without a priori knowledge that paths are composed of all the places visited; again, real physical-world knowledge.
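
As a rough illustration of that reduction: Euler's argument comes down to counting how many bridges touch each land mass, because an Eulerian walk exists only if zero or two vertices have odd degree. Here is a minimal sketch in Python, with the four land masses labelled A–D purely for illustration:

```python
# Minimal sketch of Euler's degree-counting argument for the
# Königsberg bridges. Vertices A-D are the four land masses;
# the seven edges are the seven bridges.
from collections import Counter

bridges = [
    ("A", "B"), ("A", "B"),   # two bridges between A and B
    ("A", "C"), ("A", "C"),   # two bridges between A and C
    ("A", "D"), ("B", "D"), ("C", "D"),
]

degree = Counter()
for u, v in bridges:
    degree[u] += 1
    degree[v] += 1

odd = [v for v, d in degree.items() if d % 2 == 1]
print(dict(degree))  # {'A': 5, 'B': 3, 'C': 3, 'D': 3}
print(len(odd))      # 4 -> an Eulerian walk needs 0 or 2, so none exists
```

All the apparent complexity of the walk collapses into a parity check; the hard part, as noted above, was choosing that representation in the first place.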
 
  • #93
Fooality said:
Yeah, good question. I know mathematical proofs were one of the first things they tried to unleash computers on in the 50s, and where they met their first failures in making machines think. There's more to it than just formal logic, it seems.

Thinking about the Bridges of Königsberg problem solved by Euler: you have this question that seems to involve all this complexity, but you discard a lot of data to get down to the simplest representation, and in that context you break down the notion of travel until the negative result is obvious, is proven. And it's proven to us because in that simple form we can understand it; we don't have the cognitive power to brute-force it.

How did Euler's brain know not to think about the complete path, but rather just a single node (in his newly created graph theory), to find the solution for all complete paths? It's hard to imagine a neural net doing this without a priori knowledge that paths are composed of all the places visited; again, real physical-world knowledge.
I'm hoping initially to be able to somewhat automate the choice of proof tactics in a proof assistant, not to have AI-generated insight.
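
As a very rough sketch of what automating that choice might look like at its simplest, consider scoring a fixed list of candidate tactics from crude features of the current goal. The tactic names and features below are purely illustrative and not tied to any particular proof assistant; a learned policy, trained the way AlphaZero trains its move-selection network, would replace the hand-written scores:

```python
# Hypothetical sketch: rank candidate tactics by hand-written scores
# computed from crude features of the goal text. Names are illustrative.
TACTICS = ["intro", "apply", "rewrite", "induction", "simp"]

def goal_features(goal: str) -> dict:
    """Very crude features of a goal statement."""
    return {
        "has_forall": "forall" in goal,
        "has_eq": "=" in goal,
        "has_nat": "nat" in goal,
    }

def score(tactic: str, feats: dict) -> float:
    """Hand-written prior; a trained policy would replace this."""
    s = 0.0
    if tactic == "intro" and feats["has_forall"]:
        s += 1.0
    if tactic in ("rewrite", "simp") and feats["has_eq"]:
        s += 0.5
    if tactic == "induction" and feats["has_nat"]:
        s += 0.8
    return s

def suggest(goal: str) -> str:
    feats = goal_features(goal)
    return max(TACTICS, key=lambda t: score(t, feats))

print(suggest("forall n : nat, n + 0 = n"))  # -> "intro"
```

The open question remains the one raised earlier in the thread: how to encode the proof state richly enough for a network to do better than such crude features.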
 
  • #94
Hendrik Boom said:
I'm hoping initially to be able to somewhat automate the choice of proof tactics in a proof assistant, not to have AI-generated insight.

Oh, you're actually doing it? Cool, good luck. If you can get the training data, I don't see why not.
 
  • #95
Fooality said:
Oh, you're actually doing it? Cool, good luck. If you can get the training data, I don't see why not.
Sorry. I don't actually have the resources to do this. So I spend my time wondering how it might be done instead.

Training data? I had thought of finding some analogue of having the machine play itself. But now that you point it out, I suppose one could use human interaction with a proof assistant as training data.
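
A minimal sketch of what such training data could look like, assuming each logged interaction records the goal the user saw and the tactic they chose (the field names and examples are hypothetical):

```python
# Hypothetical supervised examples harvested from proof-assistant logs:
# each record pairs a goal (state) with the tactic the user applied (action).
training_examples = [
    {"goal": "forall n : nat, n + 0 = n", "tactic": "induction n"},
    {"goal": "0 + 0 = 0",                 "tactic": "simp"},
]

# A policy model would be trained to predict `tactic` from `goal`,
# analogous to AlphaZero's policy head predicting a move from a position.
for ex in training_examples:
    print(f"state: {ex['goal']!r} -> action: {ex['tactic']!r}")
```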
 
