I've been involved in the AI field since the late '80s and have seen many promising technologies come and go. You name it, I've seen it: PDP ("parallel distributed processing"), fuzzy logic, simulated annealing, attractor neural networks, reinforcement learning, and so on. Each of them promised the same things the article above promises...
AlphaGo is a reinforcement learning algorithm, and a fairly standard one at that: a neural net with lots of free parameters is used as a function approximator for one of the standard functions (e.g., the value function) in a standard RL algorithm. The underlying technology is RL (1980s) and artificial neural networks (1970s, for backpropagation).
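To make the "neural net as function approximator inside a standard RL algorithm" pattern concrete, here is a minimal sketch of semi-gradient Q-learning with a tiny two-layer net. The 5-state chain environment and all hyperparameters are invented for illustration; AlphaGo itself is far more elaborate (policy and value networks plus tree search), this just shows the generic combination the comment describes.

```python
import numpy as np

rng = np.random.default_rng(0)

N_STATES, N_ACTIONS = 5, 2      # toy chain; actions: 0 = left, 1 = right
GAMMA, ALPHA, EPS = 0.9, 0.05, 0.1
HIDDEN = 16

# Two-layer net: one-hot state -> tanh hidden layer -> Q-value per action.
W1 = rng.normal(0, 0.1, (HIDDEN, N_STATES))
W2 = rng.normal(0, 0.1, (N_ACTIONS, HIDDEN))

def forward(s):
    x = np.eye(N_STATES)[s]
    h = np.tanh(W1 @ x)
    return W2 @ h, h, x

def step(s, a):
    # Deterministic chain: reward 1 only on reaching the right end.
    s2 = min(s + 1, N_STATES - 1) if a == 1 else max(s - 1, 0)
    r = 1.0 if s2 == N_STATES - 1 else 0.0
    return s2, r, s2 == N_STATES - 1

for episode in range(500):
    s, done = 0, False
    while not done:
        q, h, x = forward(s)
        a = rng.integers(N_ACTIONS) if rng.random() < EPS else int(np.argmax(q))
        s2, r, done = step(s, a)
        target = r if done else r + GAMMA * np.max(forward(s2)[0])
        td_error = target - q[a]
        # Backprop the TD error through the net (semi-gradient update).
        grad_h = W2[a].copy()
        W2[a] += ALPHA * td_error * h
        W1 += ALPHA * td_error * np.outer(grad_h * (1 - h**2), x)
        s = s2

# After training, the greedy policy should head right toward the reward.
policy = [int(np.argmax(forward(s)[0])) for s in range(N_STATES - 1)]
print(policy)
```

The structure is exactly the 1980s recipe: a TD target from the environment, and gradient descent on a parametric approximator to chase it.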
Apart from faster computers, the main advance in backpropagation since the 1970s is better weight initialization: it was found that setting the initial weights at the right scale lets backpropagation reach a decent solution much more quickly.
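A quick numpy demo of why initial weight scale matters (the comment doesn't name a scheme; Xavier/Glorot-style variance scaling is one well-known version of this idea). Pushing a signal through a stack of tanh layers, weights that are too small make activations vanish, weights that are too large saturate the units, and variance-scaled weights keep the signal in a usable range, which is what lets gradient descent make progress early on.

```python
import numpy as np

rng = np.random.default_rng(0)
depth, width, batch = 10, 256, 100
x0 = rng.normal(size=(batch, width))

def forward_std(init_std):
    # Push a batch through `depth` tanh layers initialized at a given
    # weight scale and report the spread of the final activations.
    x = x0
    for _ in range(depth):
        W = rng.normal(0, init_std, (width, width))
        x = np.tanh(x @ W)
    return x.std()

naive_small = forward_std(0.01)             # activations shrink toward zero
naive_large = forward_std(1.0)              # tanh saturates near +/-1
scaled = forward_std(np.sqrt(1.0 / width))  # variance-scaled: signal survives
print(naive_small, scaled, naive_large)
```

With saturated or vanished activations, the backpropagated gradients are near zero too, so the network barely learns; the scaled case avoids both failure modes.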