Can you solve Penrose's chess problem and win a bonus prize?

  • Thread starter: Auto-Didact
  • Tags: Ai, Chess, Penrose

Summary:
Penrose's chess problem challenges players to find a way for white to win or force a stalemate against a seemingly unbeatable black setup, designed to confound AI. The Penrose Institute is studying how humans achieve insights in problem-solving, inviting participants to share their solutions and reasoning. While computers may struggle with this position due to its complexity, many human players can recognize the potential for a draw or even a win through strategic moves. The problem highlights the differences in human intuition and AI computation, suggesting that human reasoning may involve more than just brute force calculations. Participants are encouraged to engage with the puzzle and share their experiences for a chance to win a prize.
  • #91
Buzz Bloom said:
Hi Paul:

That might work reliably for computers, but I think for many reasonably competent human players, recognizing when a position has occurred three times might be difficult since even on the restricted fortress layout, the number of possible positions is rather large.

Regards,
Buzz
My point is simply that the position is theoretically a forced draw without the 50 move rule. For computer chess, you often want to remove this rule because there are computer chess positions with mate in 500 or so.
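For concreteness, here is one way the two draw rules could be tracked in code. This is a minimal Python sketch, not any engine's actual interface; real engines typically key positions by a Zobrist hash rather than the illustrative keys used here:

```python
from collections import Counter

class DrawTracker:
    """Toy tracker for threefold repetition and the 50-move rule.

    A position is any hashable key encoding the board, side to move,
    castling rights, and en passant rights (illustrative; engines use
    Zobrist hashes for this).
    """
    def __init__(self):
        self.position_counts = Counter()
        self.halfmove_clock = 0  # half-moves since last capture or pawn move

    def record_move(self, position_key, was_capture=False, was_pawn_move=False):
        self.position_counts[position_key] += 1
        if was_capture or was_pawn_move:
            self.halfmove_clock = 0  # 50-move counter resets
        else:
            self.halfmove_clock += 1

    def is_draw(self):
        # Draw if some position has occurred three times, or if 100
        # half-moves (50 full moves) passed without capture or pawn move.
        return (max(self.position_counts.values(), default=0) >= 3
                or self.halfmove_clock >= 100)
```

Removing the 50-move rule for computer chess, as suggested above, amounts to deleting the `halfmove_clock` check in `is_draw()` while keeping the repetition check.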
 
  • #92
Buzz Bloom said:
Hi Paul:

This is sort of interesting, but how does that result relate to the purpose that Penrose had when he posed the problem?

Regards,
Buzz
Penrose proposed that this problem shows a fundamental limitation of computer chess. My response is:

1) The evaluation function is a means to an end for chess programs, not an end in itself. I worked for a while on a query optimizer, for example, and sometimes used cost functions known to be wrong in principle but that led, in practice, to good choices for a given query in the real world; trying to achieve the same with a correct evaluation would have made the optimizer too slow.

2) By the criterion of actual play, Penrose's problem fails to expose any issue with computer play.

3) Had Penrose discussed the matter with computer chess experts, he would know that the issue is well known, and that there are also long-known positions that do expose computer chess weaknesses by the criterion of actual play.

4) But even this is not fundamental, because the whole area of weakness could be removed in general. And I think chess is a fundamentally poor arena for Penrose to pursue his argument: not only is chess fundamentally computable, but there is nothing fundamentally noncomputational about how humans play.
 
  • #93
I can't see a white win. Say white uses the furthest-advanced pawn to take the rook... check...
Black can only block with the queen, since there's another pawn covering if the king is used; so the queen takes the pawn, then white takes the queen with a pawn, and then the black king takes pawn no. 2.
Now pawn no. 3 (the second closest to the back) takes black's second rook, leaving black with pawns and three bishops. Now... I am a bit unsure, but white could try taking out pawns if black doesn't trap him in doing so. Otherwise white could try trapping itself, forcing a stalemate in the next few moves with its 2 remaining pawns, but that looks highly unlikely.
 
  • #94
supersub said:
I can't see a white win. Say white uses the furthest-advanced pawn to take the rook... check...
Black can only block with the queen, since there's another pawn covering if the king is used; so the queen takes the pawn, then white takes the queen with a pawn, and then the black king takes pawn no. 2.
Now pawn no. 3 (the second closest to the back) takes black's second rook, leaving black with pawns and three bishops. Now... I am a bit unsure, but white could try taking out pawns if black doesn't trap him in doing so. Otherwise white could try trapping itself, forcing a stalemate in the next few moves with its 2 remaining pawns, but that looks highly unlikely.

White can win in a very unlikely and cooperative way. The white king somehow makes its way to a8 (the top-left corner) and black removes its bishops from the b8-h2 diagonal (the diagonal collinear with the three bishops in the original diagram). White can then advance the pawn, and no matter what black plays, white then delivers checkmate by promoting to a queen.

Also, in chess pawns can't move backwards, so a pawn can't take the queen back.
 
  • #95
Mastermind01 said:
The white king somehow makes its way to a8
d7 is fine too.
Mastermind01 said:
White can win in a very unlikely and cooperative way.
Well, white can win if black tries to win. That probability is unknown to me.
 
  • #96
Aufbauwerk 2045 said:
But I think we need to be careful about saying they "got some of this "creativity", "imagination", "insight" or whatever" because at the end of day it's still just a machine which runs through its program step by step.
Strictly speaking the computer never has creativity, imagination, or insight. This is true whether it is running a simple program to perform some arithmetic, or a complex program that uses AI techniques such as recursive search with backtracking or pattern recognition.
Even a neural network program is just another program. I can implement a neural network in C. It can "learn" to recognize letters of the alphabet. Does that mean my little program is "intelligent?" Obviously not.
Hi Aufbauwerk:
I get the impression that the issue you are raising is a vocabulary definition issue rather than a computer science issue.

There are two basically different uses of language at work here.
1. The words "creativity", "imagination", "insight" relate to mental behavior that humans exhibit. When you say AI fails to exhibit this behavior, I think you mean that the human behavior is different with respect to certain qualities, such as for example, versatility, and therefore doesn't qualify to have these words apply to the AI's behavior.
2. The words "creativity", "imagination", "insight" are used as metaphors because the AI's behavior exhibits some similar aspects to the human behavior. Since metaphors never completely match all aspects of the normal, non-metaphorical usage, it is technically (and punctiliously) accurate to say the usage is "incorrect". However, that criticism is applicable to all metaphors, including those used to describe in a natural language what quantum mechanics math tells us about reality.

Some uses of AI methods are described less "accurately" than others with respect to the "creativity", "imagination", "insight" vocabulary. Methods that do not include adaptive behavior seem less accurately described by this vocabulary than those that do. This seems appropriate because humans use their creativity, imagination, and insight in a way that allows them to improve. Neural nets are one AI method that demonstrates adaptability, and the newer technologies involving "big data" appear to have potential for even more impressive adaptability.

Regards,
Buzz
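As an aside on the toy network mentioned in the quoted post: a program that "learns" letters really is just a short loop of arithmetic. A minimal sketch follows (Python rather than C purely for brevity; the 5x5 bitmaps, labels, and training loop are illustrative assumptions, not anyone's actual program):

```python
# Toy perceptron that "learns" to tell a 5x5 bitmap of 'L' from 'T'.
# The point of the surrounding discussion: this is still just arithmetic.

L_BITMAP = [1,0,0,0,0,
            1,0,0,0,0,
            1,0,0,0,0,
            1,0,0,0,0,
            1,1,1,1,1]
T_BITMAP = [1,1,1,1,1,
            0,0,1,0,0,
            0,0,1,0,0,
            0,0,1,0,0,
            0,0,1,0,0]

def train(samples, epochs=20, lr=0.1):
    """Classic perceptron rule: nudge weights toward the correct label."""
    weights = [0.0] * 25
    bias = 0.0
    for _ in range(epochs):
        for pixels, label in samples:          # label: +1 for 'L', -1 for 'T'
            activation = bias + sum(w * p for w, p in zip(weights, pixels))
            predicted = 1 if activation >= 0 else -1
            if predicted != label:              # update only on mistakes
                for i, p in enumerate(pixels):
                    weights[i] += lr * label * p
                bias += lr * label
    return weights, bias

def classify(weights, bias, pixels):
    return 'L' if bias + sum(w * p for w, p in zip(weights, pixels)) >= 0 else 'T'

weights, bias = train([(L_BITMAP, 1), (T_BITMAP, -1)])
```

Whether one calls the resulting weight adjustments "learning" in the human sense is exactly the vocabulary question discussed above.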
 
  • #97
Buzz Bloom said:
Hi Paul:

That might work reliably for computers, but I think for many reasonably competent human players, recognizing when a position has occurred three times might be difficult since even on the restricted fortress layout, the number of possible positions is rather large.

Regards,
Buzz

The most famous example being:

https://en.wikipedia.org/wiki/Threefold_repetition#Fischer_versus_Spassky

In this case, the two players would agree a draw. A chess game wouldn't continue unless one player is trying to win. If one player insisted on playing on, then eventually the 50-move rule would end the game.

Both the 50-move rule and three-fold repetition are rare. Drawn games are usually agreed by the players. Stalemate is also rare.
 
  • #98
PAllen said:
White is never forced to move a pawn. That would be idiotic loss. By not doing so, white forces draw. You may be confused by GO, where repeating position (in superko rule variants) is prohibited. In chess, repetition is not prohibited, and leads to draw, whenever both sides would face adverse consequence of avoiding repetition. That is precisely the case here. Thus, this position is an absolute draw without idiotic blunder which even weak programs will not make.

You are right, I was confused here.
 
  • #99
mfb said:
Biology is irrelevant at this point. If all parts of the brain follow physical laws, and we can find the physical laws, then a computer can in principle simulate a brain.
Hi mfb:

I hesitate to disagree with you because I know you are much better educated in these topics than I am, but I think your argument has a couple of flaws.

1. The physical laws that you suggest might be used to simulate brain function presumably include QM. I do not understand how, in principle, QM laws can be used for such a simulation. As far as I know there has never been any observational confirmation that brain behavior depends on the randomness of the uncertainty principle. If I am correct about this, then simulating the probabilistic possibilities of quantum interactions within the brain would not be sufficient to capture the behavior of the brain. On the other hand, a neurological model might be able to do it, but that would not involve simulating any physics laws.

2. Your argument ignored emergent phenomena. Because of my limitations, the following is just an over simplification of how brain function is an emergent phenomenon much removed from the underlying physics.
a. Brain chemistry is only partially described in terms of the underlying physics, because the relevant physics is not readily predictive of the chemistry's complexity.
b. The biology of brain-cell structure and function is only partially described in terms of the relevant chemistry, for the same reason.
c. The neurology of inter-cell structures and interconnectivity is only partially described in terms of cell-level structure and function, because cell-level behavior is not readily predictive of the complexity of the interconnectivity.
d. The psychology of the brain's neurological behavior is only partially described in terms of those inter-cell structures, because the structures are not readily predictive of the complexity of the psychological behavior.

Regards,
Buzz
 
  • #100
Demystifier said:
One should distinguish what a human can do, from what a human can experience. The former can be described and explained by physics. The latter is a "magic" that cannot be described or explained by known physical laws.
Hi Demystifier:

Although I mostly agree with your conclusion, I think you may be overlooking that what a human can do is modified by what a human can experience.

Regards,
Buzz
 
  • #101
Haven't read the entire thread, but which computer thinks black will win here? Today's computers are rated around 3400 Elo. That's insane, and there is no way you're going to get me to believe a computer can't figure this out, and rather easily. Even a primitive brute-force computer should be able to check that all of black's pieces are trapped, and that its bishops aren't on the right squares to do anything useful.

The only way I can see this fooling a computer is if the computer is truly brute force and nothing else. Chess computers seem bad at long-term strategy, but this position should be one of the easiest ones for a computer to recognize.
 
  • #102
stevendaryl said:
I greatly admire Penrose as a brilliant thinker, but I actually don't think that his thoughts about consciousness, or human versus machine, are particularly novel or insightful.

I have read several books by Penrose, including the brilliant Road to Reality, and yes, he is both bizarrely brilliant and bizarrely simplistic in his understanding of certain matters. In fact, his attempts to inject theology into unrelated topics often serve as a good reminder that brilliant people are brilliant at one thing, not everything.
 
  • #103
Buzz Bloom said:
Although I mostly agree with your conclusion, I think you may be overlooking that what a human can do is modified by what a human can experience.
Scientifically speaking, our experiences are often strongly correlated with our actions that happen closely after the experiences, but it does not necessarily imply that these actions are caused by the experiences. Philosophically speaking, there is no proof that philosophical zombies are impossible.
 
  • #104
Auto-Didact said:
I am arguing that brain and consciousness are part of physics as well, merely that our present-day understanding of physics is insufficient to describe consciousness. This is precisely what Penrose has argued for years

...

It seems to me that physicists who accept functionalism do not realize that they are essentially saying physics itself is completely secondary to computer science w.r.t. the description of Nature.

I don't see how that follows. To me, that's like saying that if I believe that the "A" I just typed is the same letter as the "A" I printed on a piece of paper, then I must believe in something beyond physics. Functionalism defines psychological objects in terms of their role in behavior, not in terms of their physical composition, in the same way that the alphabet is not defined in terms of the physical composition of letters.
 
  • #105
@Buzz Bloom: How do you simulate an ant hill?
You simulate the behavior of every ant. You need absolutely no knowledge of the concept of ant streets or other emergent phenomena. They naturally occur in the simulation even if you don't have a concept of them.

How do you simulate a brain? You simulate the behavior of every component - every nucleus and electron if you don't find a better way. You do not need to know about neurons or any other large-scale structures. They naturally occur in your simulation. You just need to know the initial state, and that is possible to get.
For a brain to differ from this simulation, we need atoms that behave differently. Atoms and their interactions with other atoms have been studied extremely well. Atoms do not have a concept of "I am in a brain, so I should behave differently now".
Buzz Bloom said:
I do not understand how in principle QM laws can be used for such an a simulation.
I do not understand the problem. Randomness certainly applies to every interaction. It is currently unknown if that is relevant for large-scale effects or if the random effects average out without larger influence. A full simulation of a neuron would settle this question, and the question is not relevant for a simulation that can take quantum mechanics into account.
Auto-Didact said:
Mere mathematical demonstration is not sufficient, only the experimental demonstration matters; this is why physics can be considered scientific at all.
Classical mechanics has been tested on large scales, I thought that part was obvious. Apart from that: A mathematical proof is the best you can get. You can experimentally get 3+4=7 by putting 3 apples on a table and then 4 apples more and count 7, but showing it mathematically ensures it will work for everything, not just for apples.
Auto-Didact said:
You are basically saying 'given enough computer power and the SM, the correct dynamical/mathematical theory of literally any currently known or unknown phenomenon whatsoever will automatically roll out as well'. This is patently false if the initial conditions aren't taken into consideration as well, not even to mention the limitations due to chaos. The SM alone will certainly not uniquely simulate our planet, nor the historical accidents leading to the formation of life and humanity.
You can get the initial conditions.
Chaos (and randomness in QM) is not an issue. The computer can simulate one path, or a few if we want to study the impact of chaotic behavior. No one asked that the computer can simulate every possible future of a human (or Earth, or whatever you simulate). We just want a realistic option.
Auto-Didact said:
I am not arguing for some 'human specialness' in opposition to the Copernican principle. I am merely saying that human reasoning is not completely reducible to the same kind of (computational) logic which computers use
That is exactly arguing for 'human specialness'.

Buzz Bloom said:
Although I mostly agree with your conclusion, I think you may be overlooking that what a human can do is modified by what a human can experience.
The actions of humans can be predicted from brain scans before the humans think consciously about the actions.
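The ant-hill point can be made concrete with a deliberately crude agent-based sketch (Python; every rule and parameter here is invented purely for illustration). Each ant follows only local rules, yet the pheromone field develops concentrated "trails" that no individual ant has any concept of:

```python
import random

def simulate_ants(steps=200, size=20, n_ants=10, seed=42):
    """Each ant moves to the neighboring cell with the most pheromone
    (ties broken randomly), then deposits pheromone there. No ant knows
    about trails; trails emerge in the pheromone field."""
    rng = random.Random(seed)
    pheromone = [[0.0] * size for _ in range(size)]
    ants = [(rng.randrange(size), rng.randrange(size)) for _ in range(n_ants)]

    for _ in range(steps):
        new_positions = []
        for x, y in ants:
            neighbors = [((x + dx) % size, (y + dy) % size)
                         for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]
            best = max(pheromone[nx][ny] for nx, ny in neighbors)
            choices = [p for p in neighbors if pheromone[p[0]][p[1]] == best]
            nx, ny = rng.choice(choices)
            pheromone[nx][ny] += 1.0       # local deposit, local decision
            new_positions.append((nx, ny))
        ants = new_positions
        # evaporation: also a purely per-cell, local rule
        pheromone = [[0.95 * v for v in row] for row in pheromone]
    return pheromone
```

After a run, the pheromone is clumped onto a few heavily used cells rather than spread uniformly, even though the word "trail" appears nowhere in the simulation rules.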
 
  • #106
mfb said:
The actions of humans can be predicted from brain scans before the humans think consciously about the actions.
Hi mfb:

I think this is an overstatement of the valid conclusions of the research. If you would cite a particular paper about this research, and if I can get access to it, I will try to explain what I see as the difference between your statement and the actual results of the experiment.

I did take a look at a popularized description of this research which I was able to find quickly, but it may not be a particularly reliable source.

Regards,
Buzz

mfb said:
You need absolutely no knowledge of the concept of ant streets or other emergent phenomena. They naturally occur in the simulation even if you don't have a concept of them.
Hi mfb:

I don't think I am qualified to explain why the simulation of the physics will fail to capture the behavior of emergent phenomena. I suggest you might want to look at the book described at
https://mitpress.mit.edu/emerging
Do you think that a simulation of the physics taking place on the Earth since its formation would result in the emergence of homo sapiens?

Regards,
Buzz

Demystifier said:
Scientifically speaking, our experiences are often strongly correlated with our actions that happen closely after the experiences, but it does not necessarily imply that these actions are caused by the experiences.
Hi Demystifier:

I think we are discussing this based on different contexts. I was referring to the fact that experiences cause learning, adaptation, and change, which in turn change what behaviors are possible for the changed individual. An infant cannot do what an adult can.

Regards,
Buzz
 
  • #107
Buzz Bloom said:
Hi Demystifier:

I think we are discussing this based on different contexts. I was referring to the fact that experiences cause learning, adaptation, and change, which in turn change what behaviors are possible for the changed individual. An infant cannot do what an adult can.

Regards,
Buzz
We probably have different notions of "experiences" in mind. By that term, I mean subjective conscious experiences, known also as qualia.
 
  • #108
Buzz Bloom said:
Hi mfb:

I don't think I am qualified to explain why the simulation of the physics will fail to capture the behavior of emergent phenomena. I suggest you might want to look at the book described at
https://mitpress.mit.edu/emerging
Do you think that a simulation of the physics taking place on the Earth since its formation would result in the emergence of homo sapiens?

Regards,
Buzz
The probability of homo-sapiens emerging from such a simulation starting from initial conditions at some early moment on Earth is a function of how generic the result is against the backdrop of uncertainty. Assuming at least that high order life is generic for the conditions on earth, you would certainly expect such to emerge from the (in principle) simulation.

[edit: looking at that link, I would say it is perfectly consistent with MFB's claim.]
 
  • #109
stevendaryl said:
I don't see how that follows. To me, that's like saying that if I believe that the "A" I just typed is the same letter as the "A" I printed on a piece of paper, then I must believe in something beyond physics. Functionalism defines psychological objects in terms of their role in behavior, not in terms of their physical composition, in the same way that the alphabet is not defined in terms of the physical composition of letters.
This is a false analogy, seeing as the alphabet is not a natural phenomenon like consciousness is, and therefore cannot be claimed to exist in the same sense. Ontological commitment to functionalism is incompatible with the physicalist thesis that everything that exists is physical, or has some key physical aspect, and can therefore be described by just describing the physics. Functionalist states cannot be adequately described this way, not even in principle.

Unless one would want to claim two different levels of actual existence (not merely fictional existence, like the 'existence' of Superman, or subjective interpretative matters, like whether beauty or morality objectively exist out there in the world), I don't see how one could unambiguously reconcile the two: functional things would be part of reality, yet their workings would be completely independent of physics in every possible way. Of course, not all physicists or natural scientists necessarily subscribe to physicalism, but that's not really relevant.
mfb said:
Classical mechanics has been tested on large scales, I thought that part was obvious. Apart from that: A mathematical proof is the best you can get.
The point was that QM hasn't yet been fully tested for mesoscopic masses; this is of course the reason people continue to do interference experiments for larger (i.e. more massive) objects. Whether you agree or not, this is a legitimate point of scientific dispute, even more so given the existence of other competing theories.

Seeing as you are also a physicist, I can safely assume you understand that pure mathematical arguments, while necessary, can only get you so far in physics without, you know, actually involving some physics. If you don't believe that, you might as well use SU(5) instead of the SM.
You can experimentally get 3+4=7 by putting 3 apples on a table and then 4 apples more and count 7, but showing it mathematically ensures it will work for everything, not just for apples.

You can get the initial conditions.
Now you're just being facetious, I can also be facetious: initial conditions aren't part of the SM.

More seriously, the correspondence of QM to CM is of a completely different character than say Newtonian mechanics approximating SR for low velocities. This difference is due to the statistical character of the correspondence, i.e. QM averages to CM, something which is clear from e.g. Ehrenfest's theorem and which remains so even if you Wigner transform away from Hilbert space. This statistical character makes the correspondence non-unique in such a way that there are other theories which can achieve the same; these theories are then not automatically considered valid beyond experimental limits without further experimental verification, like QM often is.
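For reference, the statistical character of that correspondence is visible directly in Ehrenfest's theorem, under which expectation values obey Newton-like equations only on average:

```latex
\frac{d}{dt}\langle \hat{x} \rangle = \frac{\langle \hat{p} \rangle}{m},
\qquad
\frac{d}{dt}\langle \hat{p} \rangle = -\left\langle \frac{\partial V}{\partial x} \right\rangle
```

Recovering classical trajectories additionally requires the identification $\langle \partial V/\partial x \rangle \approx \partial V(\langle x \rangle)/\partial x$, which is exact only for potentials at most quadratic in $x$; this is one precise sense in which QM "averages to" CM rather than reducing to it term by term.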
Chaos (and randomness in QM) is not an issue. The computer can simulate one path, or a few if we want to study the impact of chaotic behavior. No one asked that the computer can simulate every possible future of a human (or Earth, or whatever you simulate). We just want a realistic option.
Why isn't chaos an issue? And I did ask exactly that: how else would you get a proper exact model, in which you don't need to put in effective parameters by hand but have the SM predict from first principles?
That is exactly arguing for 'human specialness'.
It actually isn't, given that different forms of logic exist. Just because we can use classical logic, for instance, does not at all imply that our brain has literally been wired by natural selection to use specifically this form of logic for reasoning.
 
  • #110
mfb said:
How do you simulate a brain? You simulate the behavior of every component - every nucleus and electron if you don't find a better way. You do not need to know about neurons or any other large-scale structures. They naturally occur in your simulation. You just need to know the initial state, and that is possible to get.
For a brain to differ from this simulation, we need atoms that behave differently. Atoms and their interactions with other atoms have been studied extremely well. Atoms do not have a concept of "I am in a brain, so I should behave differently now".

I do not understand the problem. Randomness certainly applies to every interaction. It is currently unknown if that is relevant for large-scale effects or if the random effects average out without larger influence.

I respectfully disagree with you, if 'disagree' is the right word. My point is that talking about this stuff brings a lot of philosophy of mind into the equation. This is one topic where we don't yet have any knowledge about the emergence of human behavior, mind, and their correlation. Considering that, all your posts sound like you're absolutely convinced of everything you say, and that it's indisputable that everything mentioned in this thread can be completely reduced to the behavior of the individual particles/cells that make us up.

For instance, we cannot dismiss some kind of dualism yet, and therefore we cannot dismiss mental causation, or an interaction of the mental and physical that is beyond the realms of physics. This isn't speculative; this is an established fact in philosophical circles. So we cannot treat entities with a mind the same as supposedly mindless entities, which also applies to their behaviors and actions. Even if this is not true, there isn't any hint yet that the roles of the brain can be reduced to the collective actions of neurons; physically speaking, maybe the behavior of the brain, and the behavior it produces, emerges from collective actions which simply cannot be reproduced by pairing artificial neurons or some other 'simulated' entities. So maybe atoms in the brain do in fact behave differently than in your 'wannabe-science-fiction' scenario. Read Zuboff's "The Story of a Brain" for a better insight.

On the other hand, I noticed that you are a particle physicist, and I have a strong feeling that because of that you try to explain everything by the "it's just a bunch of atoms" method. Before typing posts which sound like there's no doubt that you're right, try to look at things from another (philosophical, neurobiological, etc.) standpoint, because this is a tricky and controversial subject without any consensus, and I doubt that you will bring anything new to the table by insisting on some form of neuroreductionism.
 
  • #111
It seems obvious that White cannot unlock the zugzwang holding Black's pieces. Black's 3 bishops are useless except to guard the diagonal they're on. So White just moves the king around on white squares until repetition of position, or 50 moves with no pawn move or capture, or Black relinquishes the diagonal. Not sure why an AI would have difficulty, except that the solution is pretty ambiguous?
 
  • #112
Demystifier said:
Scientifically speaking, our experiences are often strongly correlated with our actions that happen closely after the experiences, but it does not necessarily imply that these actions are caused by the experiences. Philosophically speaking, there is no proof that philosophical zombies are impossible.

Sorry, it's quite off topic but the definition of a philosophical zombie is self-refuting, given proper premises.

Premises:
A. All of reality is "natural" and things are what they are.
B. Anything supernatural does not exist

If the absolute totality of the reality of a specific person (let's take this to be you) is replicated (let's just say hypothetically) to make a second person, then the totality of the reality of that second person is in every way identical to the totality of the reality of you including the fact of reality that it, like you, is such that by its nature has experiences.

We know it has experiences because you know you have experiences, and because of the fact that the second person is exactly the totality of the reality of what you are.

In order for a philosophical zombie to exist either
A. You must be more than the totality of the reality that you are, i.e. you are, or possess, some supernatural aspect... which would only point to a defect in the definition of an exact copy, which should then be the totality of the reality and the supernatural that you are. Thus this is easily remedied, and in the end the exact copy must be exactly the same. Besides, we have no need to rely on supernaturalism: you are what you are, whatever that is, whether or not we understand it fully
or
B. Experience itself is arbitrarily manifested in reality (or reality plus super-reality); i.e., existence in connection with you and existence in connection with the second person is arbitrary. But if there is literally no difference between the two of you, there could be no difference to reality, no difference which remains constant over time, so the arbitrariness must be continuous: in the very next moment you could arbitrarily be a zombie, and would never know it... and then arbitrarily not a zombie. In such a case you could be zombie-ish... but then, if you and the second person are exactly identical (in reality, super-reality, super-super-reality, ad nauseam), you and the second person are equally zombie-like, arbitrarily becoming and unbecoming a zombie. This conclusion is false because you know you have experiences, and in any case it makes the idea of a zombie all but irrelevant, your being exactly as zombie-ish as your exact copy.

The idea of a true philosophical zombie is simply self-refuting in concept and in importance.
 
  • #113
PAllen said:
The probability of homo-sapiens emerging from such a simulation starting from initial conditions at some early moment on Earth is a function of how generic the result is against the backdrop of uncertainty. Assuming at least that high order life is generic for the conditions on earth, you would certainly expect such to emerge from the (in principle) simulation.
Hi Paul:

I am unsure whether we disagree or not. Even if the conditions inherent for Earth to evolve intelligent creatures made the odds very high, the probability that our species in particular would evolve is infinitesimal. A great many random accidents led to the existence of homo sapiens. My guess would be that the odds are very small that an intelligent species that evolved would happen to be a primate. Apparently it was the accident of a very large asteroid hitting the Earth that killed off the dinosaurs and gave the small primates of that era a chance to evolve into a large, diverse order. And that is only one accident among many that determined which of many species would survive to become a large taxon.

Regards,
Buzz
 
  • #114
Demystifier said:
We probably have different notions of "experiences" in mind. By that term, I mean subjective conscious experiences, known also as qualia.
Hi Demystifier:

I am not sure whether your concept of experiences as you express above in philosophical terminology is the same as mine or not. Do you agree or disagree that in a conscious being experience can (or must) cause learning and adaptation, and thereby change the range of possible behaviors?

Regards,
Buzz
 
  • #115
PAllen said:
[edit: looking at that link, I would say it is perfectly consistent with MFB's claim.]
Hi Paul:

Perhaps I am misinterpreting MFB's claim. I read it as saying that a simulation of QM applied to the atoms and electrons and such which comprise a brain can lead to the intelligent and conscious behavior which the brain exhibits.

Here is a quote from the summary of Downing's book.
Downing focuses on neural networks, both natural and artificial, and how their adaptability in three time frames—phylogenetic (evolutionary), ontogenetic (developmental), and epigenetic (lifetime learning)—underlie the emergence of cognition.
I don't see how the simulation of brain behavior with artificial neural networks is the same as simulation of its atomic activity. I see that as an enormous difference.
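To make the difference in abstraction concrete, here is a minimal sketch (my own illustration, not anything from Downing's book) of what a single artificial neuron actually computes: a weighted sum of its inputs passed through a nonlinearity. Nothing in it refers to atoms, ion channels, membrane potentials, or quantum states.

```python
import math

def artificial_neuron(inputs, weights, bias):
    """One unit of an artificial neural network: a weighted sum of
    the inputs, plus a bias, passed through a sigmoid nonlinearity.
    This is the entire level of description an ANN works at."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# A simulation of a biological neuron "from physics" would instead
# track membrane potentials, ion channels, and ultimately molecular
# dynamics; this single scalar function stands in for all of that.
activation = artificial_neuron([1.0, 0.0], [2.0, -1.0], -1.0)
```

The point of the sketch is only to show how coarse the ANN description is compared with a physics-level simulation of the same tissue.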

Regards,
Buzz
 
  • Like
Likes durant35
  • #116
Buzz Bloom said:
I don't see how the simulation of brain behavior with artificial neural networks is the same as simulation of its atomic activity. I see that as an enormous difference.

Regards,
Buzz

Exactly. Even the underlying physics is, in my opinion, different. Saying that we know how atoms bond, etc., is not sufficient to justify the massive extrapolation that we can simulate billions of neurons in organisms, even in principle. It's not about technological limitations.
 
  • Like
Likes Buzz Bloom
  • #117
Auto-Didact said:
This is a false analogy, seeing as the alphabet is not a natural phenomenon like consciousness is, and therefore cannot be claimed to exist in the same sense.

I don't see how that's relevant. Things can be defined by their functional role, even if those things are natural. For example, "sex organ".

Ontological commitment to functionalism is incompatible with the physicalist thesis

That seems like a bizarre claim to me. Calling something a "sex organ" does not imply that there is anything about it that fails to obey the laws of physics.
 
  • #118
Demystifier said:
We probably have different notions of "experiences" in mind. By that term, I mean subjective conscious experiences, known also as qualia.

I think it's very difficult (or impossible) to say that qualia exist independently of our response to them.
 
  • #119
Auto-Didact said:
Unless one would want to claim two different levels of actual existence (not merely fictional existence like the 'existence' of Superman or the existence of subjective interpretative matters like whether things like beauty or morality objectively exist out there in the world) I don't see how one could unambiguously reconcile the two; functional things would be part of reality yet their workings would be completely independent of physics in every possible way. Of course, not all physicists or natural scientists necessarily subscribe to physicalism but that's not really relevant.

It seems to me that in order to do modern science, you have to talk about things that are not physical: functions, sets, vectors, tensors, etc. If you want to say that such abstractions don't really exist, since they're not physical, I guess that's okay, although I can't see any point in saying that. But in any case, that's just a terminological matter, it seems to me---you're using "really exist" in a particular way that excludes such nonphysical entities. But you can't conclude anything about consciousness based on a terminological decision. If you want to say that entities that are defined functionally don't really exist in the same sense that electrons do, then what's your basis for saying that consciousness really exists? Certainly, brains exist, and certainly those brains connected to nerves and muscles and fingers and mouths are responsible for making noises and other signals about consciousness, but how does that show that consciousness exists, as a physical entity, or as a physical property?
 
  • Like
Likes Buzz Bloom
  • #120
stevendaryl said:
I don't see how that's relevant. Things can be defined by their functional role, even if those things are natural. For example, "sex organ".
My point is not that things can't be defined functionally (they can); my point is that one cannot claim that a functional description can serve as a legitimate replacement for a physical description when speaking about natural phenomena.

Natural phenomena can always, in principle, be described using physics. Choosing to completely forgo physics, even going so far as to claim that one particular natural thing cannot be described using physics at all, and instead opting for a functional description as a de facto replacement, is giving up on the primacy of physics.
That seems like a bizarre claim to me. Calling something a "sex organ" does not imply that there is anything about it that fails to obey the laws of physics.
Sex organs are natural phenomena, and therefore there is in principle never a problem in describing them using physics. I cannot think of any natural phenomenon, apart from consciousness, for which a physical description is claimed to be impossible. Just describing something by its function is not at all the same as saying that no physical description is possible; yet this is exactly what ontological functionalists claim, i.e., that consciousness can only be described functionally and that a physical description is in principle impossible.
 
