Can AI Evolve to Master Super Mario? A Look at NEAT and Lua Programming

  • Thread starter: Drakkith
  • Tags: AI
AI Thread Summary
A video showcasing an evolving AI playing Super Mario, built on the NEAT (NeuroEvolution of Augmenting Topologies) algorithm and programmed in Lua, highlights advances in artificial learning algorithms. The discussion touches on the thread starter's earlier simulation of evolving cells, which adapted over time without any learning, in contrast with NEAT's learning capabilities. The conversation also references recent Scientific American articles on AI and neural networks and their significance in gaming, particularly the Go championship: an algorithm from Google famously defeated top Go player Lee Sedol, a milestone in AI development. The NEAT algorithm is noted as promising for scenarios where traditional training sets are impractical, indicating broader applications in AI research and development.
Drakkith
Here's a neat video I found of an evolving AI that plays the video game Super Mario. The AI is based on something called NEAT, or NeuroEvolution of Augmenting Topologies (paper here), and was programmed in Lua (source code here). I thought it was pretty cool, so I figured I'd share.
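If anyone wants a feel for what makes NEAT different from a fixed-topology network, here's a minimal Lua sketch (my own simplification, not the code from the video) of its two structural mutations: genomes start with almost no structure, and evolution can add new connections or split existing ones into new nodes, so the network topology itself grows over generations.

```lua
-- Simplified sketch of NEAT-style structural mutation (not the MarI/O source).
-- A genome is a table { genes = {...}, maxNode = n }; each connection gene
-- records its endpoints, weight, enabled flag, and a global innovation number
-- so matching genes can be aligned during crossover.

local innovation = 0
local function nextInnovation()
  innovation = innovation + 1
  return innovation
end

local function newGene(from, to, weight)
  return { from = from, to = to, weight = weight,
           enabled = true, innovation = nextInnovation() }
end

local function newGenome(numInputs, numOutputs)
  -- Start minimal: only input and output nodes, no hidden structure yet.
  return { genes = {}, maxNode = numInputs + numOutputs }
end

-- Structural mutation 1: connect two existing nodes with a new weighted link.
local function mutateAddConnection(genome)
  local from = math.random(genome.maxNode)
  local to = math.random(genome.maxNode)
  table.insert(genome.genes, newGene(from, to, math.random() * 4 - 2))
end

-- Structural mutation 2: split an existing link by inserting a new node.
-- The old gene is disabled and replaced by two genes routed through the
-- new node, so complexity is added gradually.
local function mutateAddNode(genome)
  if #genome.genes == 0 then return end
  local gene = genome.genes[math.random(#genome.genes)]
  gene.enabled = false
  genome.maxNode = genome.maxNode + 1
  table.insert(genome.genes, newGene(gene.from, genome.maxNode, 1.0))
  table.insert(genome.genes, newGene(genome.maxNode, gene.to, gene.weight))
end
```

The full algorithm also perturbs weights, crosses over genomes by lining up innovation numbers, and groups genomes into species so new structures get a chance to improve before competing with the whole population.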

 
It's fascinating how artificial learning algorithms are advancing.
 
Borg said:
It's fascinating how artificial learning algorithms are advancing.

Indeed. I actually created a really simple simulation of some evolving cells about a year ago, but it was nothing like this. There was no learning involved; the cells just adapted over time without any "intelligence".
 
This month's Scientific American happens to have several articles on AI. They also cover neural networks similar to those mentioned in the video.
 
I've heard that numerous games are played successfully this way. It was a key step on the way to the Go championship.
 
Hornbein said:
I've heard that numerous games are played successfully this way. It was a key step on the way to the Go championship.

The "Go" championship?
 
Drakkith said:
The "Go" championship?

Earlier this year, an algorithm designed by Google DeepMind defeated one of the top-rated Go players in the world. Go is an ancient Chinese board game with simple rules, but it is enormously complex because of the number of possible moves; there are vastly more possible games of Go than of chess, as the rough arithmetic after the links below suggests.
In the final game of their historic match, Google’s artificially intelligent Go-playing computer system has defeated Korean grandmaster Lee Sedol, finishing the best-of-five series with four wins and one loss.
http://www.wired.com/2016/03/googles-ai-wins-fifth-final-game-go-genius-lee-sedol/

Here's a link to the academic paper on the algorithm: http://www.nature.com/nature/journal/v529/n7587/full/nature16961.html
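To put very rough numbers on "vastly more possible games": using commonly quoted ballpark figures of about 35 legal moves per turn over roughly 80 plies for chess, versus about 250 moves per turn over roughly 150 moves for Go, a quick back-of-the-envelope estimate looks like this (order of magnitude only; exact figures vary by source):

```lua
-- Back-of-the-envelope game-tree size: (moves per turn) ^ (moves per game).
-- Branching factors and game lengths are rough, commonly quoted averages.
local function log10(x) return math.log(x) / math.log(10) end

local function gameTreeExponent(branching, length)
  -- log10(b^n) = n * log10(b); working in logs avoids overflow.
  return length * log10(branching)
end

local chess = gameTreeExponent(35, 80)    -- roughly 10^123 possible games
local go    = gameTreeExponent(250, 150)  -- roughly 10^360 possible games

print(string.format("chess ~ 10^%.0f games, Go ~ 10^%.0f games", chess, go))
```

That only counts move sequences; counting legal board positions tells the same story, with Go's roughly 2 × 10^170 legal positions far beyond anything a brute-force search could enumerate.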
 
Several times I've created artificial neural networks (ANNs) for fun and kicks, in various languages (Matlab, C++, and C#). Thus far they've only been conventional feed-forward networks trained with back-propagation on well-defined training sets.

I think this NEAT algorithm might be quite useful for applications where well-defined training sets are not practical.
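For contrast with back-propagation on a labeled training set, the outer loop of a neuroevolution run looks roughly like this: each candidate network is simply run in the environment and scored by a fitness function (for the Mario AI, essentially how far right Mario gets), and the best scorers seed the next generation. The helper functions in this sketch (randomNetwork, evaluateInEnvironment, crossover, mutate) are placeholders for illustration, not real API calls.

```lua
-- Sketch of a generic neuroevolution loop: no labeled training set, just a
-- fitness score from running each network in the environment. All helper
-- functions passed in are placeholders for illustration.

local POPULATION = 50
local GENERATIONS = 100

local function evolve(randomNetwork, evaluateInEnvironment, crossover, mutate)
  -- Start from a population of random networks.
  local population = {}
  for i = 1, POPULATION do
    population[i] = { net = randomNetwork(), fitness = 0 }
  end

  for gen = 1, GENERATIONS do
    -- Score every individual by letting it play; no target outputs needed.
    for _, individual in ipairs(population) do
      individual.fitness = evaluateInEnvironment(individual.net)
    end

    -- Keep the best half as parents.
    table.sort(population, function(a, b) return a.fitness > b.fitness end)
    print(string.format("generation %d, best fitness %.1f",
                        gen, population[1].fitness))

    -- Refill the population with mutated offspring of the survivors.
    for i = POPULATION / 2 + 1, POPULATION do
      local parentA = population[math.random(POPULATION / 2)]
      local parentB = population[math.random(POPULATION / 2)]
      population[i] = { net = mutate(crossover(parentA.net, parentB.net)),
                        fitness = 0 }
    end
  end

  return population[1].net
end
```

NEAT layers speciation and the topology-growing mutations on top of a loop like this, but the key contrast with back-prop is already visible: the only feedback signal is the single fitness number.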
 
I did not think computers would master Go in my lifetime. It was an epoch-making event, IMO.
 