I think some distinctions in the field called AI are worth making:
1) There is a long track record of success in neural network training, where people provide training data and guide the training (to varying extents). AlphaGo, which beat Lee Sedol and (with further refinement and training, as AlphaGo Master) Ke Jie (generally considered the strongest living Go player), was a result in this category. It was remarkable, but only in the sense that people had tried this with Go without any comparable success, and the team itself expected the achievement to take much longer (perhaps ten years, according to some team members). These techniques have been used for closed, complete-information problems as well as for a number of incomplete-information or partly open problems.
2) Machine (self) learning is what is explored in this new project, which has minimal precedent (that I am familiar with). This means having a neural network train itself with no human-provided data or guidance. The technology developed by the AlphaZero team is at present fundamentally limited to problems with a finite set of possible actions, finite rules for generating them, and a score that can be represented as a real number whose expectation can be maximized. (Note: for chess and shogi, the score values were -1, 0, 1; for Go they were 0, 1; but the framework was explicitly designed to allow scores like 2.567, if there were a problem with such characteristics.) It also seems required that the sequence of actions before a score can be assigned not be too long (for practical reasons of computation limits, even given the large computational power available during self-training). There are no other limitations or specializations.

This necessitated an artificial rule being added to chess for the self-training phase (beyond the 50-move rule and threefold repetition, which in principle terminate any game in finite time). It is still possible (especially with machines) to have 1000-move games that never terminate under any of the official rules (49 moves, capture or pawn move, 49 moves, capture or pawn move, etc.). The group was concerned that these rat holes would eat up too much processing time, so they added a rule that games over some threshold length were scored as draws (the paper does not specify what value they chose for this cutoff). This strikes me as a risky but presumably necessary step due to system limitations. They specifically did NOT want to address this by adding adjudicating rules, because those would have to involve chess knowledge.

Particularly intriguing to me, looking at black vs. white results, is that AlphaZero seems to have evolved a meta-rule on its own: play it safe with black, and take more risks with white. This is the practice of the majority of top grandmasters.
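The constraints described above (a finite action set, a terminal score that is a single real number, and an artificial move-limit rule that scores over-long games as draws) can be sketched in a few lines. This is a minimal toy illustration, not the AlphaZero implementation: all function names are hypothetical, the random move choice stands in for the real MCTS-guided policy, and the cutoff value is a placeholder, since the paper does not disclose the one actually used.

```python
import random

# Placeholder cutoff; the paper does not specify the real threshold.
MAX_MOVES = 512

def play_self_play_game(legal_moves, apply_move, is_terminal, score, state):
    """Play one self-play game and return its terminal score.

    The score is a real number (e.g. -1 loss, 0 draw, +1 win for chess);
    the framework only requires that its expectation can be maximized.
    """
    for _ in range(MAX_MOVES):
        if is_terminal(state):
            return score(state)
        # Stand-in for the MCTS-guided move selection used in training.
        move = random.choice(legal_moves(state))
        state = apply_move(state, move)
    # Artificial rule: games exceeding the move limit are scored as draws,
    # avoiding "rat hole" games that never terminate under official rules.
    return 0.0

# Toy "game": count up to 5, then terminate with score +1.
result = play_self_play_game(
    legal_moves=lambda s: [1],
    apply_move=lambda s, m: s + m,
    is_terminal=lambda s: s >= 5,
    score=lambda s: 1.0,
    state=0,
)
```

The key design point the paragraph makes is visible in the last branch: rather than adjudicating long games with chess knowledge, the loop simply declares a draw once the move budget is exhausted.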
3) It seems that, except possibly for the core neural network itself, huge changes and breakthroughs would be needed to apply their self-learning system (with no training data) to incomplete-information, open-type problem areas. Further, there is no sense in which it is an AI. This is not pejorative: the whole AI field is named after a hypothetical future goal that no existing project is really directly working on (because no one knows how, effectively). It is silly to judge AlphaZero against this goal, because that is not remotely what it was trying to achieve.