Penrose's Chess Problem

  1. Apr 16, 2017 #1
    [Diagram: the chess position from the article, attached as 9855c5.png]
    Source: http://www.telegraph.co.uk/science/2017/03/14/can-solve-chess-problem-holds-key-human-consciousness/
     
  3. Apr 16, 2017 #2

    PeroK

    User Avatar
    Science Advisor
    Homework Helper
    Gold Member

    It would be interesting to see how a computer would play white!

    I can't see how a computer could possibly lose with black.

    PS unless the computer actually resigned, I suspect it would play all the right moves. It would draw as white.

    It's the assessment of the position that is key.
     
    Last edited: Apr 16, 2017
  4. Apr 16, 2017 #3

    jedishrfu

    Staff: Mentor

  5. Apr 16, 2017 #4
    "A chess computer always assumes black will win"

    How so?

    I'm not a good chess player and I haven't played much since high school. But this seems trivial to me. Let's see if I'm right. If I'm wrong, it just proves what a bad chess player I am. :)

    Black could lose, as the problem says, if it blunders. A chess program that made this blunder would be pathetically weak.

    It's white to move. Let's set up the possible black blunder with a white blunder.

    If white moves the pawn from c6 to c7, black must capture it with the bishop on e5. Otherwise, on the next move, white would push the pawn to c8 and promote it (to a queen, although even a bishop would be enough), checkmating black.

    So let's ignore this blunder situation, and go for a draw. This is easy to figure out for a human. I think a chess program with good heuristics would also figure it out. The key is to program the right heuristic for a case like this. Perhaps it has already been done.

    The way to force a draw is for white to simply move his king around on safe squares. He could just move from e2 to f1 and back.

    In that case the only legal replies black will ever have are more pointless bishop moves. Black can't move any other pieces as long as white keeps them hemmed in with his pawns, and none of the black bishops can capture a white pawn because the pawns sit on squares of the opposite color.

    This would go on until the 50 move rule is invoked.

    So, assuming I have analyzed this correctly, using my weak chess knowledge, what exactly is the point of this problem?

    (EDIT -- I do see what the article claims is the point. I just don't agree that this problem would be such a challenge for a correctly programmed chess computer to solve.)
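    For what it's worth, the shuffling plan is easy to check mechanically. Below is a rough sketch, assuming the python-chess package; the FEN of the diagrammed position is deliberately left as a placeholder, since only the picture is given above.

```python
# Sketch: mechanically verify the "shuffle the white king" drawing plan.
# Requires the python-chess package. PENROSE_FEN is a placeholder for the FEN
# of the diagrammed position (white to move); it is not filled in here.
import chess

def king_shuffle_draw(fen: str, max_plies: int = 250) -> bool:
    """Shuttle the white king between e2 and f1, let black make any legal
    reply (only bishop moves, if white leaves the pawns alone), and report
    whether the fifty-move rule becomes claimable."""
    board = chess.Board(fen)
    shuttle = [chess.F1, chess.E2]            # squares the king alternates between
    white_ply = 0
    for _ in range(max_plies):
        if board.turn == chess.WHITE:
            move = chess.Move(board.king(chess.WHITE), shuttle[white_ply % 2])
            if move not in board.legal_moves:  # fall back to any legal king move
                move = next(m for m in board.legal_moves
                            if board.piece_at(m.from_square).piece_type == chess.KING)
            white_ply += 1
        else:
            move = next(iter(board.legal_moves))  # any bishop shuffle will do
        board.push(move)
        if board.can_claim_fifty_moves():
            return True
    return False

# PENROSE_FEN = "..."  # fill in from the diagram
# print(king_shuffle_draw(PENROSE_FEN))
```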
     
    Last edited: Apr 16, 2017
  6. Apr 16, 2017 #5
    P.S.

    If black were so weak as to move all of its bishops off the diagonal where they defend c7, then the white pawn could move there and deliver checkmate on the following move, as already explained.
     
  7. Apr 16, 2017 #6

    TeethWhitener

    User Avatar
    Science Advisor
    Gold Member

    I call BS.

    1) Just plugged this into an online server running GNUChess and it drew (threefold repetition). I suspect that the fact that only the bishops can legally move makes it significantly easier to evaluate than even the most mundane middlegame position. Especially since most decent chess programs have positional weighting schemes in addition to the straightforward tactical piece-value calculations.

    2) As has been pointed out, this is pretty obviously a draw. More interesting would be to give a chess engine some classic endgame study without the benefit of tablebases. I guarantee you someone (likely a chess player and not a GR researcher) has already done this.

    Bingo: https://en.wikipedia.org/wiki/Endgame_study#Studies_and_chess_engines

    The Telegraph article implies that this is more of Penrose's quantum consciousness gobbledygook. It is interesting that there exist positions that computers struggle with, but I would contend that there are likely few if any positions that computers struggle with which are easy for humans.
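    To be concrete about the "positional weighting schemes in addition to the straightforward tactical piece-value calculations": here is a toy sketch of the usual material-plus-piece-square-table evaluation (my own illustration, not GNUChess's actual code, and the numbers are invented). Note that a pure material count of the diagrammed position comes out heavily in black's favour, which is exactly why the assessment, rather than the move generation, is the hard part.

```python
# Toy sketch of a typical static evaluation: material values plus a positional
# ("piece-square") bonus. Illustration only; requires the python-chess package.
import chess

PIECE_VALUES = {chess.PAWN: 100, chess.KNIGHT: 320, chess.BISHOP: 330,
                chess.ROOK: 500, chess.QUEEN: 900, chess.KING: 0}

# Tiny pawn piece-square table (squares a1..h8): a small bonus for advanced pawns.
PAWN_TABLE = ([0] * 8 + [5] * 8 + [5] * 8 + [10] * 8 +
              [10] * 8 + [15] * 8 + [20] * 8 + [0] * 8)

def evaluate(board: chess.Board) -> int:
    """Static evaluation in centipawns; positive means better for white."""
    score = 0
    for square, piece in board.piece_map().items():
        value = PIECE_VALUES[piece.piece_type]
        if piece.piece_type == chess.PAWN:
            # mirror the table for black so both sides are scored symmetrically
            index = square if piece.color == chess.WHITE else chess.square_mirror(square)
            value += PAWN_TABLE[index]
        score += value if piece.color == chess.WHITE else -value
    return score

print(evaluate(chess.Board()))  # 0 for the symmetric starting position
```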
     
  8. Apr 16, 2017 #7

    TeethWhitener

    User Avatar
    Science Advisor
    Gold Member

  9. Apr 16, 2017 #8

    EnumaElish

    User Avatar
    Science Advisor
    Homework Helper

    For a draw in chess, any one of the following conditions must be satisfied:

    1. stalemate
    2. threefold repetition of a position (with the same player to move)
    3. if no capture has been made and no pawn has been moved in the last fifty moves
    4. if checkmate is impossible
    or
    5. if the players agree to a draw.

    My guess is it's 3. It may also be 4, but I haven't verified it.
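    As an aside, everything on that list except draw by agreement is a mechanical check on the board (and its move history). A short sketch with the python-chess package, run here on the ordinary starting position so every check comes back False:

```python
# Sketch: how the draw conditions above map onto python-chess calls.
# Shown on the standard starting position, so each check returns False here.
import chess

board = chess.Board()  # substitute the position (and move history) of interest

print("1. stalemate:            ", board.is_stalemate())
print("2. threefold repetition: ", board.can_claim_threefold_repetition())
print("3. fifty-move rule:      ", board.can_claim_fifty_moves())
print("4. checkmate impossible: ", board.is_insufficient_material())
# 5. draw by agreement is between the players, not a property of the board
```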
     
  10. Apr 17, 2017 #9
    It is possible, perhaps even likely, that Penrose is referring solely to the purer old-school chess programmes, which can only brute-force their way through by calculation; the reason for this restriction is also pretty clear from a psychological standpoint. For similar reasons, adding extra insightful play, in whatever form, into the code would, I believe, somewhat confound the purpose of the entire study, since it is the human programmer or some other human who came up with these insights, not the program itself.

    As for tablebases, these are based on retrograde analysis and should therefore be excluded as well. Using deep neural nets/GANs/etc instead of regular chess programs would be an interesting study in itself, one that likely is already being or already has been done.
    I wouldn't be too keen on dismissing Penrose as merely 'a GR researcher': first, he is first and foremost a pure mathematician specialising in algebraic geometry, and second, as the record has it, Penrose grew up in a highly competitive chess environment (NB: his father was both an avid player and an endgame composer who made all the kids compete, leading Roger's older brother to become Oxford champion and his younger brother to win the British championship ten times, beat a reigning world champion and go on to become a Grandmaster).

    More importantly, the particular game here is a selection criterion specifically for humans, not the end-all scenario for every possible chess program whatsoever, although it probably is for brute-force programs, which again seem to be the only relevant programs in this case. It is highly likely that having human subjects evaluate other endgame scenarios that are difficult for (brute-force) chess programs is also part of the actual study.
    Actually, what they are attempting to study here using functional imaging is a well-known empirical phenomenon from experimental psychology about human reasoning, known as dual process theory. This field has a rich experimental history since the early 70s.

    The kind of study they are attempting here has both practical and scientific merit; I have actually done related research myself and can confidently say that many type 1 reasoning strategies can be extremely difficult, often even practically if not de facto impossible, to reduce to mere computation or pure deduction. Calculation on the other hand, and pretty much anything that is directly reducible to computation, are archetypically forms of type 2 reasoning.

    Using brain imaging studies to better map the associated brain areas for specific kinds of problems like chess (which is harder than elementary arithmetics, but simpler than, say, physics) has immense implications for all possible forms of type 1 reasoning, and possibly even practical utility for learning and teaching, most obviously the learning and teaching of mathematics at all levels.

    To dismiss this study as part of "Penrose's quantum consciousness gobbledygook" is a highly biased and unscientific standpoint. Whatever relation this all might have to Penrose's ideas on consciousness is at best indirect, and I would even say completely secondary; the fact that his ideas have inspired many, myself included, to study such phenomena more carefully and more rigorously, even giving us clean 'natural experiments' from areas of mathematics and logic which many would likely otherwise never have been exposed to, is nothing short of praiseworthy.
     
    Last edited: Apr 17, 2017
  11. Apr 17, 2017 #10
    'Correctly programmed' likely involves a change of the actual hypothesis under investigation. The hypothesis seems to be: 'actual chess playing by competent human chess players does not resemble, and is not reducible to, (brute-force) computation, but instead requires insight, pattern recognition, heuristics and so on.' Incompetent human chess playing is exactly brute-force trial-and-error, i.e. an exercise in computation.

    An analogous hypothesis about another form of human reasoning, namely mathematics, is that many human mathematicians employ intuition to solve most problems, and that many if not most reported solutions in the form of perfectly logical deductive schemes are purely post hoc constructions which do not remotely resemble the actual prior reasoning process, and which are made for aesthetic, conventional and sociological reasons, i.e. in order to present the findings in a clean, simple and clearly communicable fashion.
     
  12. Apr 17, 2017 #11

    TeethWhitener

    User Avatar
    Science Advisor
    Gold Member

    But the human programmer comes up with the entire algorithm, including the brute force calculations. Why do we get to include the brute force and exclude the insights? (Do we exclude, e.g., the idea of point values for various pieces? That's an insight in itself, not intrinsic to the rules of chess.) Is the hypothesis that some problems are more difficult to solve with brute force than with some special insight really that controversial?
    I'm sure it does, but maybe not for the reasons the linked article alluded to. If they want to study the "flash of insight" by fMRI, why do they need a task which is hard for computers? How is that at all relevant? A particularly thorny tactical problem in a standard middlegame might induce a flash of insight for a person but be relatively trivial for a strong chess engine to solve.
    The bias is grounded in decades of people taking Penrose's predictions, actually experimentally testing them, and falsifying them. I'd say that's pretty scientific.
    Wait, who's biased?
     
  13. Apr 19, 2017 #12
    Disclaimer: this thread is not about (Orch) OR. I will try to definitively address all relevant points about OR w.r.t. the studies in the OP in this post, but I will not discuss it any further.
    See my post above about dual process theory.
    Whether it is controversial or not is irrelevant; the question is what has been mapped out already and what has not. Chess seems to be a wonderful natural experiment for investigating such matters: it is mathematically well understood and can easily be played mentally by showing the subject only pictures and asking them to verbalise their thoughts, which gives a very clean way of empirically distinguishing reasoning processes in vivo and is also generalizable to other mental activities.

    The hypothesis paraphrased:
    P1) All (directly explicit) algorithmic action, calculation and computation are tasks of purely type 2 reasoning.
    P2) Understanding (or comprehension) is a form of type 1 reasoning.
    P3) Human reasoning does not solely consist of type 2 reasoning, we are also capable of type 1 reasoning.
    P4) Computers, Turing machines, and clearly non-conscious adding machines like abaci and calculators strictly perform tasks that belong to type 2 reasoning.
    C) Therefore such machines cannot fully simulate human understanding.

    It has already been demonstrated that brute-force reasoning falls squarely under type 2 reasoning tasks, and it is therefore scientifically, i.e. from the point of view of contemporary experimental psychology, completely uninteresting to study it further in humans. It should also be abundantly clear that competent chess players do not play chess purely by utilizing brute force, but also by using type 1 reasoning; to insist on investigating such matters w.r.t. the current hypothesis is to attempt an empirically sterile, artificial in vitro experiment.

    What you are asking isn't about the hypothesis at hand. Moreover, there actually seem to be two very subtle points at play here:
    1) insight has both an operational definition (from psychological theory) and an informal definition; to conflate the two definitions is to construct and attack a strawman argument
    2) you also seem to be confusing the hypothesis, which is a de re statement (some human reasoning is such that it is necessarily non-computational), with the de dicto statement (necessarily, some human reasoning is such that it is non-computational). These are two distinct hypotheses.
    That is extremely relevant; in fact, that is the entire argument: they want to study such flashes of insight in real time and so experimentally map their neural characteristics. This has already been done for forms of type 2 reasoning.
    Which specific predictions? Many of the takedowns I've seen (Feferman, Churchland, Grush, Dennett, Tegmark, etc.) are strawman arguments, indirect arguments or failures to comprehend the argument altogether. Moreover, I believe there might be an actual direct takedown from the point of view of logic which has not been given any large degree of coverage. Almost all of the experimental 'takedowns' have been addressed in the 2014 review of the theory.

    More importantly, Penrose's hypothesis that human understanding is seemingly a non-computational activity is fully consistent with contemporary findings in experimental psychology; he only happens to use somewhat different terms, not being intimately familiar with the field's specific jargon or its modern empirical theories such as dual process theory. What this means is that he has incidentally rediscovered a hypothesis which happens to have already been investigated and which has so far survived attempts at falsification.

    Perhaps most importantly, all of the above has nothing, in principle, to do with gravitational objective reduction (OR) theory, Orch OR or twistor theory. These are all experiments which stand or fall on the basis of their own merits. These results say nothing about his further hypotheses that I) understanding requires awareness, II) awareness requires consciousness, III) the mechanism of proto-consciousness is mass-dependent gravitationally induced OR of the wavefunction, IV) human consciousness is neuronal microtubules undergoing gravitational OR in an orchestrated fashion.

    Scientifically, with regard to physics, OR is a falsifiable scheme and there are multiple experiments underway to test it. No experiments have falsified OR yet; the latest estimates by experimentalists place us years if not decades away from being able to carry out the required experiments. All that can be said at the moment is that the experiments will either demonstrate that superposition from orthodox QM persists at all mass scales up to macroscopic masses, or show a regime where QM fails and some form of OR will point in the direction of the new theory. That is really all that is relevant at the moment.

    Only after the experimental parameter space at each scientific level has been mapped out carefully enough may some deeper physical theoretical explanation for psychology, such as Orch OR or some competing theory, eventually be required and thus singled out. This thread was not meant to discuss (Orch) OR, so I hope this post has addressed all relevant scientific issues about it with respect to the research in the OP.
    I'll immediately admit my bias :P
    The man is one of the major reasons I went into physics in the first place. He is without reserve or question a genius, one with a remarkable breadth and depth of knowledge, a dispassionate independent mind with a healthy philosophical curiosity and an equally healthy dose of humility - all good qualities for a scientist, and for a mathematician very reminiscent of the universalists of old. Saying any of this does not jeopardise my scientific integrity; I and many others have said far crazier things about the likes of Newton and Einstein.

    To quote Feynman:
    Lastly, there have actually been several extremely great mathematicians who have focused on the issues of human psychology, philosophy of mind and their relationship to mathematics. Two prominent mathematicians who have written extensively about these issues, and whom I have read, are Poincaré (The Foundations of Science) and Hadamard (An Essay on the Psychology of Invention in the Mathematical Field). I highly recommend these two books to any mathematically inclined person. It is a veritable shame that this type of research has fallen out of repute with the shift of the intellectual world capital out of Europe during the darker years of the 20th century and the simultaneous over-specialisation/balkanisation of science and its subsequent professionalisation.

    Of course, there have also been many mathematicians who have written a lot of plain mystical nonsense regarding these topics, most painfully major figures like von Neumann and Wigner. In either case, I believe a historical reading of Penrose's work, especially his clarified position in Shadows of the Mind and his further errata, safely places him in the former intellectual group rather than the latter, more mystical one, and his works on these issues can be seen as a natural evolution of the earlier debate on many of these questions brought to prominence by Poincaré, Hadamard et al. The argument that he is a mystic because he has associated with known charlatans like Chopra is as empty as saying Feynman was one because he frequently associated not only with new age types but also with hippies, stoners, strippers and gamblers.
     
  14. Apr 19, 2017 #13

    Demystifier

    User Avatar
    Science Advisor

    To solve the puzzle, a human does not think about the exact moves he will play. Instead, a human only has a vague long-term strategy: wander around with the white king among the black bishops, hoping that black might make a wrong move with its bishops. Computer algorithms are not good at such vague strategies.

    But what if we program the computer differently, not by giving it a precise chess algorithm, but by training it with a deep-learning neural network (which is how programmers recently trained computers to beat the best human Go players)? I have a feeling that this kind of computer program could play such easy-for-humans situations much better.
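    To make that a bit more concrete, here is a minimal sketch (my own illustration, assuming PyTorch; nothing like an actual trained engine) of the kind of network one might train to evaluate positions, with the board encoded as twelve 8x8 piece planes:

```python
# Minimal sketch of a deep-learning position evaluator (illustration only,
# not a trained engine). Assumes PyTorch. The board is encoded as 12 binary
# 8x8 planes, one per piece type and colour; the output is a single score.
import torch
import torch.nn as nn

class PositionEvaluator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(12, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, 256), nn.ReLU(),
            nn.Linear(256, 1), nn.Tanh(),  # -1 = winning for black, +1 = winning for white
        )

    def forward(self, planes: torch.Tensor) -> torch.Tensor:
        # planes: (batch, 12, 8, 8)
        return self.net(planes)

model = PositionEvaluator()
dummy = torch.zeros(1, 12, 8, 8)  # an all-empty-board placeholder input
print(model(dummy).shape)         # torch.Size([1, 1])
```

    Whether such a network would assess the fortress correctly presumably depends on whether positions like it ever appear in its training data or self-play.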
     
    Last edited: Apr 19, 2017
  15. Apr 19, 2017 #14
    As I said before, it wouldn't surprise me if this has already been researched, given the recent explosive popularity of deep learning methods. In any case, I also think such a neural net (NN) could quite possibly overtake humans w.r.t. such problems, at least provided that such problems feature adequately in the initial learning data set given to the NN.

    It would be even more interesting if such problems weren't included in the initial data set and the NN still managed to reliably outperform humans on them; in such cases, studying the strategy the NN uses might even provide invaluable novel mathematical information about the problems to researchers and human chess players.
     
  16. Apr 19, 2017 #15

    mfb

    User Avatar
    2016 Award

    Staff: Mentor

    Neural networks are used to evaluate the strength of positions. Unless one side makes a stupid mistake, all future positions have the same strength - the king and the bishops will be in different places, but that doesn't change the situation. A good neural net will tell you that. And then? It will draw. Which is not bad, but that is something even much simpler computer programs can do.
     
  17. Apr 19, 2017 #16

    TeethWhitener

    User Avatar
    Science Advisor
    Gold Member

    I'm really interested in this thread, and @Auto-Didact seems to know a lot about this area, but in order to respond constructively, I need to give your post much more attention than I can currently give it. This is really unfortunate, and for that I apologize. I also have to apologize for misinterpreting the chess problem as being much easier to solve than it actually is. At least according to the Telegraph article, the task is specifically to force a stalemate (that is, not a checkmate, but no legal moves for white). This is significantly more difficult than forcing a draw. Maybe I'll have my own flash of insight at some point.
     
  18. Apr 19, 2017 #17

    TeethWhitener

    User Avatar
    Science Advisor
    Gold Member

    Ok, I've sufficiently nerd-sniped myself. I think I might have gotten it:
    First, walk the white king up to c8. The black bishops do whatever while still controlling the h2-b8 diagonal to prevent the white pawn on c6 from marching up and checkmating.
    Then, once the white king is on c8, push the pawn to
    1. c7.
    Black can respond in one of 2 ways.
    1 ...ignore the pawn at c7. Then,
    2. Kb8 Bxc7+ (necessary to prevent c8=Q#)
    3. Ka8 stalemate
    or
    1 ...Bxc7
    2. cxb5+ Qxb5 stalemate

    The final wrinkle is if ignoring the pawn at c7 involves stepping the last bishop completely off the h2-b8 diagonal. In that case, we get
    1. Kb8 B (steps back onto the h2-b8 diagonal)
    This pins the pawn to the king and keeps it from promoting and checkmating. But in this case, we simply have
    2. Ka8 Bxc7 stalemate.

    EDIT: problems with this solution:
    First branch: If there's only one black bishop on the h2-b8 diagonal after Ka8, nothing prevents it from stepping off, forcing the king to move out of his safe square.
    Second branch: Nothing stops the black king from getting himself out of check by capturing. This leaves b7 and d7 open to the white king.
    EDIT: I found a whole bunch of problems with this solution. Disregard.
     
    Last edited: Apr 19, 2017
  19. Apr 19, 2017 #18

    stevendaryl

    User Avatar
    Staff Emeritus
    Science Advisor

    I greatly admire Penrose as a brilliant thinker, but I actually don't think that his thoughts about consciousness, or human versus machine, are particularly novel or insightful. My feeling is that human thinking is actually not even Turing-complete. Because our memories are fuzzy and error-prone, I think that there are only finitely many different situations that we can hold in our heads. It's an enormous number, but I think it's still finite. Any kind of "insight" about finitely many situations is computable. Every function on a finite domain is computable; to go beyond what's computable, the domain must be infinite, and I just don't think that humans can really figure out problems with an arbitrarily large number of parameters.
     
  20. Apr 20, 2017 #19

    kith

    User Avatar
    Science Advisor

    This problem seems odd to me.

    If it is not about forcing a stalemate, it is easy to go for the 50 move rule. Here is an article from chessbase.com where someone confirms that (old?) chess algorithms strongly predict that black will win. But why should they put up a prize and sample people by sending them an email if the solution is so easy?

    If on the other hand the problem is how to force a stalemate, it seems to be really hard to me. I have thought about it a bit myself and searched online for people who have a solution and didn't find anything. So it doesn't just seem to take an "average chess-playing human".
     
  21. Apr 20, 2017 #20
    There's no forced stalemate. I've used a computer to find that playing b3xa4, c4xb5 or c6-c7 always loses, wherever the white king is, if black simply plays the bishop from h2 to g3 and back. Whenever white plays one of these moves, there is mate in at most 13 moves.
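    (For anyone who wants to repeat the check: a rough sketch of the kind of script I mean, using the python-chess package to drive a UCI engine. The engine binary path is an assumption and the FEN of the diagrammed position has to be filled in by hand.)

```python
# Sketch: ask a UCI engine (e.g. Stockfish) how it scores the three pawn moves
# mentioned above. Requires python-chess plus an engine binary on your PATH.
# PENROSE_FEN is a placeholder for the diagrammed position (white to move).
import chess
import chess.engine

PENROSE_FEN = "<fill in the FEN of the puzzle position>"
CANDIDATES = ["bxa4", "cxb5", "c7"]   # b3xa4, c4xb5, c6-c7 in the notation above

def score_pawn_moves(fen: str, engine_path: str = "stockfish") -> None:
    board = chess.Board(fen)
    with chess.engine.SimpleEngine.popen_uci(engine_path) as engine:
        for san in CANDIDATES:
            child = board.copy()
            child.push_san(san)
            info = engine.analyse(child, chess.engine.Limit(depth=30))
            # A forced mate for black (or a hugely negative score from white's
            # point of view) confirms that the pawn move loses.
            print(san, info["score"].white())

# score_pawn_moves(PENROSE_FEN)
```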
     