Can you solve Penrose's chess problem and win a bonus prize?
AI Thread Summary
Penrose's chess problem challenges players to find a way for white to win or force a stalemate against a seemingly unbeatable black setup, designed to confound AI. The Penrose Institute is studying how humans achieve insights in problem-solving, inviting participants to share their solutions and reasoning. While computers may struggle with this position due to its complexity, many human players can recognize the potential for a draw or even a win through strategic moves. The problem highlights the differences in human intuition and AI computation, suggesting that human reasoning may involve more than just brute force calculations. Participants are encouraged to engage with the puzzle and share their experiences for a chance to win a prize.
  • #101
Haven't read the entire thread, but which computer thinks black will win here? Today's engines are rated around 3400 Elo. That's insane, and there is no way you're going to get me to believe a computer can't figure this out, and rather easily. Even a primitive brute-force computer should be able to check that all of black's pieces are trapped and that his bishops aren't on the right squares to do anything useful.

The only way I can see this fooling a computer is if the computer is truly brute force and nothing else. Chess computers seem bad at long-term strategy, but this position should be one of the easiest ones for a computer to recognize.
 
  • #102
stevendaryl said:
I greatly admire Penrose as a brilliant thinker, but I actually don't think that his thoughts about consciousness, or human versus machine, are particularly novel or insightful.

I have read several books by Penrose, including the brilliant Road to Reality, but yes, he is both bizarrely brilliant and bizarrely simplistic in his understanding of certain matters. In fact, his attempts to inject theology into unrelated topics often serve as a good reminder that brilliant people are brilliant at one thing, not everything.
 
  • #103
Buzz Bloom said:
Although I mostly agree with your conclusion, I think you may be overlooking that what a human can do is modified by what a human can experience.
Scientifically speaking, our experiences are often strongly correlated with our actions that happen closely after the experiences, but it does not necessarily imply that these actions are caused by the experiences. Philosophically speaking, there is no proof that philosophical zombies are impossible.
 
  • #104
Auto-Didact said:
I am arguing that brain and consciousness are part of physics as well, merely that our present-day understanding of physics is insufficient to describe consciousness. This is precisely what Penrose has argued for years.

...

It seems to me that physicists who accept functionalism do not realize that they are essentially saying physics itself is completely secondary to computer science w.r.t. the description of Nature.

I don't see how that follows. To me, that's like saying that if I believe that the "A" I just typed is the same letter as the "A" I printed on a piece of paper, then I must believe in something beyond physics. Functionalism defines psychological objects in terms of their role in behavior, not in terms of their physical composition, in the same way that the alphabet is not defined in terms of the physical composition of letters.
 
  • #105
@Buzz Bloom: How do you simulate an ant hill?
You simulate the behavior of every ant. You need absolutely no knowledge of the concept of ant streets or other emergent phenomena. They naturally occur in the simulation even if you don't have a concept of them.
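The ant-street point can be made concrete with a toy model. The rules below are a deliberately crude sketch of my own (the grid size, weights, and ant counts are arbitrary illustrative choices, not taken from any real ant model): each ant follows only a local pheromone-biased step rule, yet the deposits concentrate into a reinforced path that no rule ever names.

```python
import random

# Hypothetical toy model: ants random-walk on a 1-D strip, deposit
# pheromone wherever they step, and prefer stepping toward higher
# pheromone. Nothing in the rules mentions "trails"; the large-scale
# concentration emerges from the per-ant rules alone.

random.seed(0)
N = 20                      # strip length (arbitrary)
pheromone = [0.0] * N

def step(pos):
    """Advance one ant by a pheromone-biased random step (reflecting edges)."""
    left = max(pos - 1, 0)
    right = min(pos + 1, N - 1)
    w_left = 1.0 + pheromone[left]    # simple attraction rule
    w_right = 1.0 + pheromone[right]
    pos = left if random.random() < w_left / (w_left + w_right) else right
    pheromone[pos] += 1.0             # each visit deposits one unit
    return pos

ants = [0] * 50             # all ants start at the "nest" end
for _ in range(200):        # 200 time steps
    ants = [step(a) for a in ants]

mean = sum(pheromone) / N
print(max(pheromone), mean)
```

With the fixed seed the run is deterministic; the point is only that the final deposits are far from uniform, i.e. a reinforced path exists even though no concept of one was programmed in.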

How do you simulate a brain? You simulate the behavior of every component - every nucleus and electron if you don't find a better way. You do not need to know about neurons or any other large-scale structures. They naturally occur in your simulation. You just need to know the initial state, and that is possible to get.
For a brain to differ from this simulation, we need atoms that behave differently. Atoms and their interactions with other atoms have been studied extremely well. Atoms do not have a concept of "I am in a brain, so I should behave differently now".
Buzz Bloom said:
I do not understand how in principle QM laws can be used for such a simulation.
I do not understand the problem. Randomness certainly applies to every interaction. It is currently unknown if that is relevant for large-scale effects or if the random effects average out without larger influence. A full simulation of a neuron would settle this question, and the question is not relevant for a simulation that can take quantum mechanics into account.
Auto-Didact said:
Mere mathematical demonstration is not sufficient, only the experimental demonstration matters; this is why physics can be considered scientific at all.
Classical mechanics has been tested on large scales, I thought that part was obvious. Apart from that: A mathematical proof is the best you can get. You can experimentally get 3+4=7 by putting 3 apples on a table and then 4 apples more and count 7, but showing it mathematically ensures it will work for everything, not just for apples.
Auto-Didact said:
You are basically saying 'given enough computer power and the SM, the correct dynamical/mathematical theory of literally any currently known or unknown phenomenon whatsoever will automatically roll out as well'. This is patently false if the initial conditions aren't taken into consideration as well, not even to mention the limitations due to chaos. The SM alone will certainly not uniquely simulate our planet nor the historical accidents leading to the formation of life and humanity.
You can get the initial conditions.
Chaos (and randomness in QM) is not an issue. The computer can simulate one path, or a few if we want to study the impact of chaotic behavior. No one asked that the computer can simulate every possible future of a human (or Earth, or whatever you simulate). We just want a realistic option.
Auto-Didact said:
I am not arguing for some 'human specialness' in opposition to the Copernican principle. I am merely saying that human reasoning is not completely reducible to the same kind of (computational) logic which computers use
That is exactly arguing for 'human specialness'.

Buzz Bloom said:
Although I mostly agree with your conclusion, I think you may be overlooking that what a human can do is modified by what a human can experience.
The actions of humans can be predicted from brain scans before the humans think consciously about the actions.
 
  • #106
mfb said:
The actions of humans can be predicted from brain scans before the humans think consciously about the actions.
Hi mfb:

I think this is an overstatement of the valid conclusions of the research. If you would cite a particular paper about this research, and if I can get access to it, I will try to explain what I see as the difference between your statement and the actual results of the experiment.

I did take a look at a popularized description of this research which I was able to find quickly, but it may not be a particularly reliable source.

Regards,
Buzz

mfb said:
You need absolutely no knowledge of the concept of ant streets or other emergent phenomena. They naturally occur in the simulation even if you don't have a concept of them.
Hi mfb:

I don't think I am qualified to explain why the simulation of the physics will fail to capture the behavior of emergent phenomena. I suggest you might want to look at the book described at
https://mitpress.mit.edu/emerging
Do you think that a simulation of the physics taking place on the Earth since its formation would result in the emergence of homo sapiens?

Regards,
Buzz

Demystifier said:
Scientifically speaking, our experiences are often strongly correlated with our actions that happen closely after the experiences, but it does not necessarily imply that these actions are caused by the experiences.
Hi Demystifier:

I think we are discussing this based on different contexts. I was referring to the fact that experience causes learning, adaptation, and change, which in turn changes what behaviors are possible for the changed individual. An infant cannot do what an adult can.

Regards,
Buzz
 
  • #107
Buzz Bloom said:
Hi Demystifier:

I think we are discussing this based on different contexts. I was referring to the fact that experience causes learning, adaptation, and change, which in turn changes what behaviors are possible for the changed individual. An infant cannot do what an adult can.

Regards,
Buzz
We probably have different notions of "experiences" in mind. By that term, I mean subjective conscious experiences, known also as qualia.
 
  • #108
Buzz Bloom said:
Hi mfb:

I don't think I am qualified to explain why the simulation of the physics will fail to capture the behavior of emergent phenomena. I suggest you might want to look at the book described at
https://mitpress.mit.edu/emerging
Do you think that a simulation of the physics taking place on the Earth since its formation would result in the emergence of homo sapiens?

Regards,
Buzz
The probability of homo sapiens emerging from such a simulation, starting from initial conditions at some early moment on Earth, is a function of how generic the result is against the backdrop of uncertainty. Assuming at least that higher-order life is generic for the conditions on Earth, you would certainly expect such to emerge from the (in-principle) simulation.

[edit: looking at that link, I would say it is perfectly consistent with MFB's claim.]
 
  • #109
stevendaryl said:
I don't see how that follows. To me, that's like saying that if I believe that the "A" I just typed is the same letter as the "A" I printed on a piece of paper, then I must believe in something beyond physics. Functionalism defines psychological objects in terms of their role in behavior, not in terms of their physical composition, in the same way that the alphabet is not defined in terms of the physical composition of letters.
This is a false analogy, seeing as the alphabet is not a natural phenomenon like consciousness is, and therefore cannot be claimed to exist in the same sense. Ontological commitment to functionalism is incompatible with the physicalist thesis that everything that exists is physical/has some key physical aspect and can therefore be described by just describing the physics. Functionalist states cannot be adequately described in this way, not even in principle.

Unless one would want to claim two different levels of actual existence (not merely fictional existence like the 'existence' of Superman or the existence of subjective interpretative matters like whether things like beauty or morality objectively exist out there in the world) I don't see how one could unambiguously reconcile the two; functional things would be part of reality yet their workings would be completely independent of physics in every possible way. Of course, not all physicists or natural scientists necessarily subscribe to physicalism but that's not really relevant.
mfb said:
Classical mechanics has been tested on large scales, I thought that part was obvious. Apart from that: A mathematical proof is the best you can get.
The point was that QM hasn't yet been fully tested for mesoscopic masses; this is of course the reason people continue to do interference experiments for larger (i.e. more massive) objects. Whether you agree or not, this is a legitimate point of scientific dispute, even more so given the existence of other competing theories.

Seeing as you are also a physicist, I can safely assume you understand that pure mathematical arguments, while necessary, can only get you so far in physics without, you know, actually involving some physics. If you don't believe that, you might as well use SU(5) instead of the SM.
mfb said:
You can experimentally get 3+4=7 by putting 3 apples on a table and then 4 apples more and count 7, but showing it mathematically ensures it will work for everything, not just for apples. You can get the initial conditions.
Now you're just being facetious, I can also be facetious: initial conditions aren't part of the SM.

More seriously, the correspondence of QM to CM is of a completely different character than say Newtonian mechanics approximating SR for low velocities. This difference is due to the statistical character of the correspondence, i.e. QM averages to CM, something which is clear from e.g. Ehrenfest's theorem and which remains so even if you Wigner transform away from Hilbert space. This statistical character makes the correspondence non-unique in such a way that there are other theories which can achieve the same; these theories are then not automatically considered valid beyond experimental limits without further experimental verification, like QM often is.
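For reference, the Ehrenfest relations invoked above can be written in their standard one-dimensional form as:

```latex
\frac{d}{dt}\langle \hat{x} \rangle = \frac{\langle \hat{p} \rangle}{m},
\qquad
\frac{d}{dt}\langle \hat{p} \rangle
  = -\left\langle \frac{\partial V}{\partial x} \right\rangle
```

The right-hand side involves $\langle \partial V/\partial x \rangle$ rather than $\partial V(\langle x \rangle)/\partial x$; the two coincide only for potentials that are at most quadratic, which is one precise sense in which the quantum-to-classical correspondence holds only on average rather than trajectory by trajectory.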
mfb said:
Chaos (and randomness in QM) is not an issue. The computer can simulate one path, or a few if we want to study the impact of chaotic behavior. No one asked that the computer can simulate every possible future of a human (or Earth, or whatever you simulate). We just want a realistic option.
Why isn't chaos an issue? And I did ask exactly that; how else would you get a proper exact model in which you don't need to put in effective parameters by hand but have the SM predict from first principles?
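As an aside, the sensitivity being argued about here is easy to exhibit. This is a minimal sketch of my own, using the logistic map as a stand-in for any chaotic system (it is not, of course, a model of a brain or of the SM): two trajectories starting a hair apart become completely decorrelated, which is the core of the "you can simulate a path, not the path" dispute.

```python
# The logistic map at r = 4 is fully chaotic: two states that start
# 1e-10 apart diverge until their separation saturates at order one.

def logistic(x, r=4.0):
    return r * x * (1.0 - x)

x, y = 0.2, 0.2 + 1e-10   # two nearly identical initial conditions
max_sep = 0.0
for n in range(60):
    x, y = logistic(x), logistic(y)
    max_sep = max(max_sep, abs(x - y))

# Separation roughly doubles each step before saturating, so 60 steps
# are far more than enough to erase the initial agreement.
print(max_sep)
```

The design choice of a one-line map rather than, say, a three-body integrator is deliberate: it shows that exponential divergence needs nothing exotic, only a nonlinear rule iterated a few dozen times.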
mfb said:
That is exactly arguing for 'human specialness'.
It actually isn't, given that different forms of logic exist. Just because we can use classical logic for instance does not imply at all that our brain has literally also fully been wired by natural selection in order to use specifically this form of logic for reasoning.
 
  • #110
mfb said:
How do you simulate a brain? You simulate the behavior of every component - every nucleus and electron if you don't find a better way. You do not need to know about neurons or any other large-scale structures. They naturally occur in your simulation. You just need to know the initial state, and that is possible to get.
For a brain to differ from this simulation, we need atoms that behave differently. Atoms and their interactions with other atoms have been studied extremely well. Atoms do not have a concept of "I am in a brain, so I should behave differently now". I do not understand the problem. Randomness certainly applies to every interaction. It is currently unknown if that is relevant for large-scale effects or if the random effects average out without larger influence.

I respectfully disagree with you, if 'disagree' is the right word. My point is that talking about this stuff invokes a lot of philosophy of mind into the equation. This is one topic where we don't yet have any knowledge about the emergence of human behavior, mind, and their correlation. Considering that, all your posts sound like you're absolutely convinced of everything you say and that it's indisputable that everything mentioned in this thread can be completely reduced to the behavior of the individual particles/cells that make us up.

For instance, we cannot dismiss some kind of dualism yet, and therefore we cannot dismiss mental causation or an interaction of the mental and physical that is beyond the realms of physics. This isn't speculative; this is an established fact in philosophical circles. So we cannot treat entities with a mind similarly to supposedly mindless entities, which also applies to their behaviors and actions. Even if this is not true, there isn't any hint yet that the roles of the brain can be reduced to the collective actions of neurons; physically speaking, maybe the brain and the behavior it produces emerge from collective actions which simply cannot be reproduced by pairing artificial neurons or some other 'simulated' entities. So maybe atoms in the brain do in fact behave differently than in your 'wannabe-science-fiction' scenario. Read Zuboff's "The Story of a Brain" for a better insight.

On the other hand, I noticed that you are a particle physicist, and I have a strong feeling that because of that you try to explain everything by the "it's just a bunch of atoms" method. Before typing posts which sound like there's no doubt that you're right, try to look at things from another (philosophical, neurobiological, etc.) standpoint, because this is a tricky and controversial subject without any consensus, and I doubt that you will bring anything new to the table by insisting on some form of neuroreductionism.
 
  • #111
Seems obvious W cannot unlock the zugzwang holding B's pieces. B's three bishops are useless except to guard the diagonal they're on. So W just moves the king around on white squares until a draw by repetition of moves, or by the fifty-move rule (50 moves with no pawn move or capture), or B relinquishes the diagonal. Not sure why an AI would have difficulty, except that the solution is pretty ambiguous?
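The draw conditions mentioned here are purely mechanical to check. As a sketch (the position keys below are placeholder strings standing in for real board encodings, which I have not reproduced), this is roughly the bookkeeping any engine keeps for threefold repetition and the fifty-move rule:

```python
from collections import Counter

# Draw bookkeeping sketch: count occurrences of each position for
# threefold repetition, and run a halfmove clock for the fifty-move
# rule (reset on any pawn move or capture).

seen = Counter()
halfmove_clock = 0

def record(position_key, pawn_move_or_capture=False):
    """Update draw bookkeeping after a move; return which draws are claimable."""
    global halfmove_clock
    halfmove_clock = 0 if pawn_move_or_capture else halfmove_clock + 1
    seen[position_key] += 1
    return {
        "threefold": seen[position_key] >= 3,
        "fifty_move": halfmove_clock >= 100,  # 50 full moves = 100 half-moves
    }

# White shuffles the king between two squares while black cannot usefully
# move: the same two (hypothetical) positions recur over and over.
claims = [record(key) for key in ["Ke2-pos", "Ke1-pos"] * 3]
print(claims[-1])
```

After the third occurrence of either position the repetition draw becomes claimable, while the fifty-move counter is nowhere near triggering; this is why "moves the king around until repetition" is a mechanically verifiable plan, not an intuition.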
 
  • #112
Demystifier said:
Scientifically speaking, our experiences are often strongly correlated with our actions that happen closely after the experiences, but it does not necessarily imply that these actions are caused by the experiences. Philosophically speaking, there is no proof that philosophical zombies are impossible.

Sorry, it's quite off topic but the definition of a philosophical zombie is self-refuting, given proper premises.

Premises:
A. All of reality is "natural" and things are what they are.
B. Anything supernatural does not exist

If the absolute totality of the reality of a specific person (let's take this to be you) is replicated (let's just say hypothetically) to make a second person, then the totality of the reality of that second person is in every way identical to the totality of the reality of you including the fact of reality that it, like you, is such that by its nature has experiences.

We know it has experiences because you know you have experiences, and because of the fact that the second person is exactly the totality of the reality of what you are.

In order for a philosophical zombie to exist either
A. You must be more than the totality of the reality that you are, i.e. you are or possess some supernatural aspect... which would only point to a defect in the definition of an exact copy, which should be the totality of the reality and the supernatural that you are... thus this is easily remedied, and in the end the exact copy must be exactly the same. And besides, we have no need to rely on supernaturalism: you are what you are, whatever that is and whether or not we understand it fully
or
B. Experience itself is arbitrarily manifested in reality (or reality plus super-reality)... i.e. existence in connection with you and existence in connection with the second person is arbitrary, but if there literally is no difference between the two of you, that would mean there could be no difference to reality, no difference which remains constant over time, so that arbitrariness must be continuous, in the very next moment you could arbitrarily be a zombie, and would never know it... and then arbitrarily not a zombie. In such a case you could be zombie-ish... but then if you and the second person are exactly identical (in reality, super-reality, super-super-reality, ad nauseum) that would mean you and the second person are equally zombie-like, arbitrarily becoming and unbecoming a zombie. This conclusion is false because you know you have experiences... and in any case makes the idea of a zombie tantamount to irrelevant, your being exactly as zombie-ish as your exact copy.

The idea of a true philosophical zombie is simply self-refuting in concept and in importance.
 
  • #113
PAllen said:
The probability of homo sapiens emerging from such a simulation, starting from initial conditions at some early moment on Earth, is a function of how generic the result is against the backdrop of uncertainty. Assuming at least that higher-order life is generic for the conditions on Earth, you would certainly expect such to emerge from the (in-principle) simulation.
Hi Paul:

I am unsure if we disagree or not. Even if the conditions inherent for Earth to evolve intelligent creatures made the odds very high, the probability that our species in particular would evolve is infinitesimal. A great many random accidents led to the existence of homo sapiens. My guess would be that the odds are very small that an intelligent species evolving would happen to be a primate. Apparently it was the accident of a very large asteroid hitting the Earth that killed off the dinosaurs and gave the small primates of that era a chance to evolve into a large, diverse order. And that is only one accident among many that determined which of many species would survive to become a large taxon.

Regards,
Buzz
 
  • #114
Demystifier said:
We probably have different notions of "experiences" in mind. By that term, I mean subjective conscious experiences, known also as qualia.
Hi Demystifier:

I am not sure whether your concept of experiences as you express above in philosophical terminology is the same as mine or not. Do you agree or disagree that in a conscious being experience can (or must) cause learning and adaptation, and thereby change the range of possible behaviors?

Regards,
Buzz
 
  • #115
PAllen said:
[edit: looking at that link, I would say it is perfectly consistent with MFB's claim.]
Hi Paul:

Perhaps I am misinterpreting MFB's claim. I read it as saying that a simulation of QM applied to the atoms and electrons and such which comprise a brain can lead to the intelligent and conscious behavior which the brain exhibits.

Here is a quote from the summary of Downing's book.
Downing focuses on neural networks, both natural and artificial, and how their adaptability in three time frames—phylogenetic (evolutionary), ontogenetic (developmental), and epigenetic (lifetime learning)—underlie the emergence of cognition.
I don't see how the simulation of brain behavior with artificial neural networks is the same as simulation of its atomic activity. I see that as an enormous difference.

Regards,
Buzz
 
  • #116
Buzz Bloom said:
I don't see how the simulation of brain behavior with artificial neural networks is the same as simulation of its atomic activity. I see that as an enormous difference.

Regards,
Buzz

Exactly. Even the underlying physics is, in my opinion, different. Saying that we know how atoms bond etc. is not sufficient for a massive extrapolation such as claiming that we can simulate billions of neurons in organisms. Even in principle. It's not about technological limitations.
 
  • #117
Auto-Didact said:
This is a false analogy seeing the alphabet is not a natural phenomenon like consciousness is and therefore cannot be claimed to exist in the same sense.

I don't see how that's relevant. Things can be defined by their functional role, even if those things are natural. For example, "sex organ".

Ontological commitment to functionalism is incompatible with the physicalist thesis

That seems like a bizarre claim to me. Calling something a "sex organ" does not imply that there is anything about it that fails to obey the laws of physics.
 
  • #118
Demystifier said:
We probably have different notions of "experiences" in mind. By that term, I mean subjective conscious experiences, known also as qualia.

I think it's very difficult (or impossible) to say that qualia exist independently of our response to them.
 
  • #119
Auto-Didact said:
Unless one would want to claim two different levels of actual existence (not merely fictional existence like the 'existence' of Superman or the existence of subjective interpretative matters like whether things like beauty or morality objectively exist out there in the world) I don't see how one could unambiguously reconcile the two; functional things would be part of reality yet their workings would be completely independent of physics in every possible way. Of course, not all physicists or natural scientists necessarily subscribe to physicalism but that's not really relevant.

It seems to me that in order to do modern science, you have to talk about things that are not physical: functions, sets, vectors, tensors, etc. If you want to say that such abstractions don't really exist, since they're not physical, I guess that's okay, although I can't see any point in saying that. But in any case, that's just a terminological matter, it seems to me---you're using "really exist" in a particular way that excludes such nonphysical entities. But you can't conclude anything about consciousness based on a terminological decision. If you want to say that entities that are defined functionally don't really exist in the same sense that electrons do, then what's your basis for saying that consciousness really exists? Certainly, brains exist, and certainly those brains connected to nerves and muscles and fingers and mouths are responsible for making noises and other signals about consciousness, but how does that show that consciousness exists, as a physical entity, or as a physical property?
 
  • #120
stevendaryl said:
I don't see how that's relevant. Things can be defined by their functional role, even if those things are natural. For example, "sex organ".
My point is not that things can't be defined functionally (they can); my point is that one cannot claim that a functional description can serve as a legitimate replacement for a physical description when speaking about natural phenomena.

Natural phenomena can always in principle be described using physics; choosing to completely forego physics, even going so far as to state for one particular natural thing that it is impossible to describe using physics, and instead opting for a functional description as a de facto replacement, is giving up on the primacy of physics.
stevendaryl said:
That seems like a bizarre claim to me. Calling something a "sex organ" does not imply that there is anything about it that fails to obey the laws of physics.
Sex organs are natural phenomena and therefore there is in principle never a problem with describing them using physics. I cannot think of any natural phenomenon, apart from consciousness, for which this is ever disputed. Just describing something by its function is not at all the same as saying that no physical description is possible; this is, however, exactly what ontological functionalists claim, i.e. that consciousness can only be described functionally and that a physical description is in principle impossible.
 
  • #121
Auto-Didact said:
My point is not that things can't not be defined functionally (they can), my point is that one cannot claim that a functional description can serve as a legitimate replacement for physical description when speaking about natural phenomena.

I'm not sure who is saying that functional definitions are replacements for physical descriptions. I don't think anybody is. Functionalism as I understand it is a matter of taking equivalence classes of physical descriptions. We're saying that one physical description is functionally equivalent to another (for certain purposes). That doesn't mean that we're ignoring physical descriptions.

For example, if I want to make a calculator, or make a chair, or make a car, there are many, many different physical descriptions that could be considered a calculator, or a chair, or a car. That these things are defined by their functional role means that physically inequivalent objects can both be calculators.

Similarly, functionalism for consciousness would mean that you could "implement" consciousness in different ways, in the same way that you can implement a calculator or a chair in different ways. There's no denial of physics involved here.
 
  • #122
Auto-Didact said:
Sex organs are natural phenomena and therefore there is in principle never a problem to describe them using physics. I cannot think of any other natural phenomenon, apart from consciousness, wherein this is always trivially possible. Just describing something by its function is not at all the same as saying that no physical description is possible; this is however exactly what ontological functionalists claims, i.e. that consciousness can only be described functionally and that a physical description is in principle impossible.

I think you misunderstand what functionalists are claiming. Going back to the example of a calculator. There is certainly a physical description of how any particular calculator works, in terms of solid state physics and electronics, etc. But there can be things that are considered calculators that work by different physical principles (Babbage's Difference Engine, for example). What makes something a calculator is not a particular configuration of electronics components, but the fact that it is possible to do calculations with it.
 
  • #123
stevendaryl said:
It seems to me that in order to do modern science, you have to talk about things that are not physical: functions, sets, vectors, tensors, etc. If you want to say that such abstractions don't really exist, since they're not physical, I guess that's okay, although I can't see any point in saying that. But in any case, that's just a terminological matter, it seems to me---you're using "really exist" in a particular way that excludes such nonphysical entities. But you can't conclude anything about consciousness based on a terminological decision. If you want to say that entities that are defined functionally don't really exist in the same sense that electrons do, then what's your basis for saying that consciousness really exists? Certainly, brains exist, and certainly those brains connected to nerves and muscles and fingers and mouths are responsible for making noises and other signals about consciousness, but how does that show that consciousness exists, as a physical entity, or as a physical property?
We are very much straying into the philosophical distinction between what is physics and what is mathematics. Luckily I can say this somewhat briefly. Physical laws (i.e. differential equations) belong to a special class of physical things, including all their properties, which tend to be describable even further using other forms of mathematics.

It is extremely hard to tell what the rest of mathematics is and whether it has some physically actualized form in the natural world or in some other world and I won't try to do this either, I'll just say that I do not subscribe to mathematical Platonism.
I'd actually like to end by quoting what Weyl and Poincaré had to say in regard to these matters but I can't find the relevant quotations.
 
  • #124
stevendaryl said:
I'm not sure who is saying that functional definitions are replacements for physical descriptions. I don't think anybody is. Functionalism as I understand it is a matter of taking equivalence classes of physical descriptions. We're saying that one physical description is functionally equivalent to another (for certain purposes). That doesn't mean that we're ignoring physical descriptions.
There are legions of cognitive scientists, psychologists, philosophers, theologians, etc. who specifically argue for ontological functionalism of consciousness instead of some physical theory, and thus for a full refutation of physicalism. If you haven't ever met any, you simply aren't trying hard enough.
stevendaryl said:
For example, if I want to make a calculator, or make a chair, or make a car, there are many, many different physical descriptions that could be considered a calculator, or a chair, or a car. That these things are defined by their functional role means that physically inequivalent objects can both be calculators.

Similarly, functionalism for consciousness would mean that you could "implement" consciousness in different ways, in the same way that you can implement a calculator or a chair in different ways. There's no denial of physics involved here.

I think you misunderstand what functionalists are claiming. Going back to the example of a calculator. There is certainly a physical description of how any particular calculator works, in terms of solid state physics and electronics, etc. But there can be things that are considered calculators that work by different physical principles (Babbage's Difference Engine, for example). What makes something a calculator is not a particular configuration of electronics components, but the fact that it is possible to do calculations with it.
I understand your point. What you are describing simply isn't ontological functionalism, but some other, weaker form of functionalism in which the functional theory of consciousness is just a placeholder, eventually to be replaced by a physical theory - similar to how many definitions in classical chemistry, itself a temporary functional theory, were eventually fully replaced by 20th-century physics.

Instead of replying in detail, I'll just refer you to a paper: Glymour 1987, Psychology as Physics
 
  • #125
Buzz Bloom said:
Hi Paul:

Perhaps I am misinterpreting mfb's claim. I read it as saying that a simulation of QM applied to the atoms, electrons, and such that comprise a brain can lead to the intelligent and conscious behavior which the brain exhibits.

Here is a quote from the summary of Downing's book.
Downing focuses on neural networks, both natural and artificial, and how their adaptability in three time frames—phylogenetic (evolutionary), ontogenetic (developmental), and epigenetic (lifetime learning)—underlie the emergence of cognition.​
I don't see how the simulation of brain behavior with artificial neural networks is the same as simulation of its atomic activity. I see that as an enormous difference.

Regards,
Buzz
The point is that this work is reductionist from brain to neurons. Unless you further believe that the behavior of individual neurons is fundamentally not reducible to the physics of atoms, this reference is consistent with mfb's position. Most arguments I've seen against reductionism of the brain are that it is more than the sum of individual functional neuron behavior, rather than that local neuron behavior is not reducible to the physics of atoms.
 
Last edited:
  • #126
Auto-Didact said:
There are legions of cognitive scientists, psychologists, philosophers, theologians, etc who specifically argue for ontological functionalism of consciousness instead of some physical theory

I think you're misunderstanding what they are saying. Let's take the example of a calculator: to be a calculator means a particular functional relationship between inputs and outputs. So you can develop a theory of calculators independently of any particular choice of how it's implemented. But if you're going to build a calculator, of course you need physics in order to get a thingy that implements that functional relationship. Turing invented a theory of computers before there were any actual computers. An actual computer implements the abstraction (only partially, in fact, because Turing's machines had unlimited memory).

So the people developing a functional theory of mind are trying to understand what abstraction the physical mind is an instance of. Does that count as a refutation of physicalism? Only if someone is wanting to be provocative.
 
  • Like
Likes MrRobotoToo
  • #127
Auto-Didact said:
I understand your point. What you are describing simply isn't ontological functionalism, but some other, weaker form of functionalism in which the functional theory of consciousness is just a placeholder, eventually to be replaced by a physical theory - similar to how many definitions in classical chemistry, itself a temporary functional theory, were eventually fully replaced by 20th-century physics.

No, that's not what I meant. It's not a placeholder at all. Take the example of a computer: Turing developed a theory of computers that was independent of any specific implementation of a computer. It is not correct to say that Turing's theory was a "placeholder" for a more physical theory of computers that was only possible after the development of solid-state physics. The abstract theory of computation is neither a placeholder for a solid-state-physics description of computers, nor is it a replacement for such a description. They are two different, but related, lines of research: the theory of computation, and the engineering of building computers.

Correspondingly, there could be a functionalist theory of mind which relates to a physical theory of the brain in the same way that the abstract theory of computation relates to electronic computers.
 
  • Like
Likes MrRobotoToo
  • #128
PAllen said:
Most arguments I've seen against reductionism of the brain are that it is more than the sum of individual functional neuron behavior, rather than local neuron behavior is not reducible to physics of atoms.
Hi Paul:

Why can't it be both?
1. The functionality of the brain is more than the sum of individual functional neuron behavior.
AND
2. The functionality of individual neuron behavior is more than the sum of the physics of their atoms, electrons, etc.

My problem with reductionism in general is that reductionists seem to ignore the implications of emergent phenomena. What makes emergent phenomena emergent is that what emerges is not dependent on the details of the constituents. The details of how the constituents function do not influence the emergent behavior; only the behavior of the constituents affects it. Reductionism is OK with respect to emergent phenomena if the reduction decomposition is only functional, not physical.

Regards,
Buzz

Auto-Didact said:
Just describing something by its function is not at all the same as saying that no physical description is possible; this is, however, exactly what ontological functionalists claim, i.e. that consciousness can only be described functionally and that a physical description is in principle impossible.
Hi Auto-Didact:

I don't think I have ever met anyone matching your description of "ontological functionalists". The individuals whom I have met who consider themselves "functionalists", like myself, do not believe the physical description is impossible, but rather just irrelevant. The emergent behavior of emergent phenomena, like consciousness, does not depend on the physical description of the constituents, only on the functionality of the constituents.

Regards,
Buzz
 
Last edited:
  • #129
I merged two posts into one above at the suggestion of a mentor.
 
  • #130
Buzz Bloom said:
Hi Paul:

Why can't it be both?
1. The functionality of the brain is more than the sum of individual functional neuron behavior.
AND
2. The functionality of individual neuron behavior is more than the sum of the physics of their atoms, electrons, etc.

My problem with reductionism in general is that reductionists seem to ignore the implications of emergent phenomena. What makes emergent phenomena emergent is that what emerges is not dependent on the details of the constituents. The details of how the constituents function do not influence the emergent behavior; only the behavior of the constituents affects it. Reductionism is OK with respect to emergent phenomena if the reduction decomposition is only functional, not physical.

Regards,
Buzz
The point was why I thought this reference was consistent with mfb's position. This book is reductionist at the neuronal level. To me, that makes it at least consistent with mfb's position.

As to emergence, I see no conflict between emergence and reductionism. Specifically, simulating a large system of elements whose individual behavior is known will end up displaying the emergent behavior without explicitly putting it into the simulation. The emergence will happen in the simulation as readily as it would with the physical system.
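The claim that emergence "happens in the simulation as readily as in the physical system" can be made concrete with a small toy model. The sketch below (my illustration, not from the thread) runs Conway's Game of Life, whose update rule is purely local, and checks that a "glider" - a traveling pattern nowhere mentioned in the rules - propagates diagonally across the grid:

```python
# Emergence in a simulation: Conway's Game of Life has purely local
# rules, yet a "glider" pattern travels across the grid - behavior
# nowhere written into the rules themselves.

from collections import Counter

def life_step(live):
    """One step of Life on a set of live (x, y) cells."""
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next step if it has 3 neighbors, or 2 and is alive.
    return {cell for cell, n in neighbor_counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):                  # one full glider period
    state = life_step(state)

shifted = {(x + 1, y + 1) for (x, y) in glider}
print(state == shifted)             # True: the pattern moved diagonally
```

Nothing in `life_step` knows what a glider is; the motion is purely emergent, which is PAllen's point in miniature.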
 
  • Like
Likes Buzz Bloom
  • #131
I think the discussion is going in circles. I'll continue contributing once we have a functional human brain simulation, as this seems to be the only thing that can convince some here of the possibility of such a simulation. I'm highly confident that simulating neurons with an effective model in classical physics is sufficient to get an accurate response. This is the approach all such simulations have taken so far.

Simulating Caenorhabditis elegans (a worm) moving around, based on its actual cell structure
Simulated Drosophila melanogaster fly
http://www.nsi.edu/~nomad/darwinvii.html - learning how to interpret video camera inputs to recognize objects without guidance.
With increasing computing power and improved neuron-mapping techniques, these simulations will grow. We currently don't have the computing power to simulate a human brain, and we don't have the tools to map a brain cell by cell, but that is probably just a matter of time.
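For a sense of what an "effective model in classical physics" for a neuron looks like, here is a minimal leaky integrate-and-fire sketch (my illustration, not taken from the cited projects). The parameters - membrane time constant, threshold, reset voltage - are illustrative placeholders, not fitted to any real cell:

```python
# Minimal leaky integrate-and-fire neuron: the simplest kind of
# effective classical model used in large-scale neural simulations.
# All parameters below are illustrative, not fitted to a real neuron.

def simulate_lif(input_current, dt=0.1, tau=10.0, v_rest=-65.0,
                 v_threshold=-50.0, v_reset=-65.0, resistance=10.0):
    """Integrate dV/dt = (v_rest - V + R*I) / tau; spike and reset at threshold.

    Returns the list of step indices at which the neuron spiked.
    """
    v = v_rest
    spikes = []
    for step, i_in in enumerate(input_current):
        v += (v_rest - v + resistance * i_in) * (dt / tau)
        if v >= v_threshold:
            spikes.append(step)
            v = v_reset
    return spikes

# Zero drive: no spikes.  A constant suprathreshold drive: regular firing.
print(simulate_lif([0.0] * 1000))        # []
print(len(simulate_lif([2.0] * 1000)))
```

Whole-brain simulation projects stack enormous numbers of such effective units; the debate in this thread is whether that level of description suffices, not whether the individual model is hard to write down.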
 
  • #132
PAllen said:
As to emergence, I see no conflict between emergence and reductionism. Specifically, simulating a large system of elements whose individual behavior is known will end up displaying the emergent behavior without explicitly putting it into the simulation. The emergence will happen in the simulation as readily as it would with the physical system.
Hi Paul:

In general I agree with this, but there is an exception.

If the behavior of the constituents includes adaptability, then the emergent behavior may depend on what may well be an accident (or a combination of several) to which the organism and its components adapt. If that is the case, then the simulation of the components (whether physical or functional) may never experience the accident, and therefore that simulation will fail to produce the emergence.

The following is intended to be interpreted as metaphorical.

When emergent behavior is passed from one generation to another, it would not be by genetics, but rather by parent to offspring training. Therefore, the permanence of such emergent behavior requires that the organism that acquires it be a member of a species which has the behavior of training offspring. In such a case, the new behavior is not just a characteristic of the individual, but it becomes a cultural or group characteristic, and a new level of reduction emerges. The former individuals now become components of the group as a new type of individual.

Regards,
Buzz
 
  • Like
Likes durant35
  • #133
Buzz Bloom said:
Hi Paul:

In general I agree with this, but there is an exception.

If the behavior of the constituents includes adaptability, then the emergent behavior may depend on what may well be an accident (or a combination of several) to which the organism and its components adapt. If that is the case, then the simulation of the components (whether physical or functional) may never experience the accident, and therefore that simulation will fail to produce the emergence.

The following is intended to be interpreted as metaphorical.

When emergent behavior is passed from one generation to another, it would not be by genetics, but rather by parent to offspring training. Therefore, the permanence of such emergent behavior requires that the organism that acquires it be a member of a species which has the behavior of training offspring. In such a case, the new behavior is not just a characteristic of the individual, but it becomes a cultural or group characteristic, and a new level of reduction emerges. The former individuals now become components of the group as a new type of individual.

Regards,
Buzz
Normally, emergent phenomena are known (or presumed) to be generic, not accidental. However, running a simulation can investigate this very question (in principle). Accidents in the sense of chaotic phenomena would be handled by perturbing initial conditions and running multiple simulations. Accidents in the sense of quantum randomness are handled by simulating them using a source of true randomness. In principle, you could then find out whether e.g. complex life is generic.

In simulating a human brain, one is not trying to end up with e.g. my brain, in particular, but an instance of a generic human like brain.
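The "perturbing initial conditions and running multiple simulations" strategy for chaotic accidents can be illustrated with the logistic map, a standard toy chaotic system (my example, not from the post). Two runs starting 10^-10 apart agree at first and then diverge completely, which is why a single run of a chaotic simulation says little about generic behavior:

```python
# Sensitive dependence on initial conditions in the logistic map
# x -> r*x*(1-x) at r = 4 (fully chaotic regime).

def logistic_trajectory(x0, steps, r=4.0):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.2, 60)
b = logistic_trajectory(0.2 + 1e-10, 60)   # perturbed initial condition

# Early on the two runs agree to many digits; a few dozen steps later
# they are effectively unrelated.
print(abs(a[5] - b[5]))    # still tiny
print(abs(a[60] - b[60]))
```

An ensemble of such perturbed runs reveals which features are generic (the attractor's statistics) and which are accidents of one particular trajectory.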
 
  • #134
PAllen said:
In simulating a human brain, one is not trying to end up with e.g. my brain, in particular, but an instance of a generic human like brain.
Hi Paul:

I much appreciate this discussion with you. I feel I am improving my understanding of my own personal confusions.

I did have in mind what you described as the object of the simulation. I guess my choice of an explanatory example failed to communicate what I wanted to convey.

I am now thinking of the generic human mind being simulated in its state during the very long period before agriculture. The change from hunter-gatherer behavior to farmer behavior depended on extreme environmental changes which made the hunter-gatherer life-style change from completely sufficient to no longer adequate to sustain the population. This is an example of an "accident".

If you were an alien anthropologist intending to simulate the brains of a group of human subjects from that era, how would you anticipate such an accidental change and incorporate the necessary elements into your simulation, so that the simulation would change from hunter-gatherer behavior to farmer behavior? What I am suggesting is that it may not be possible to accurately capture, and then simulate, the essential adaptive elements of an adaptive species - and their limits - which one would need in order to correctly simulate behavior changes due to unexpected external survival requirements.

Regards,
Buzz
 
  • #135
One specific point about computation (which I am writing down in very specific context).

Computation can be thought of as a clerical process for enumerating "all" elements of an r.e. set. What it does not tell us about at all (nor is it supposed to) is cognitively realisable processes that involve "incomplete enumeration" of sets (more complex than r.e. ones, for example) while merely guaranteeing that certain conditions are satisfied.

Obviously, with our rather extremely limited lifetimes and with practical concerns (potentially also the concern of doing something more "useful", perhaps), we don't or can't normally think about it much.

As an exaggerated example, just to highlight a point (deliberately incomplete, as I don't feel like writing a very long post), of how much the difference can be:
p = some exceedingly difficult statement of number theory

Program-1
"for all inputs"
if( p is true )
output 1
else
loop forever

Program-2
"for all inputs"
output 1
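A literal rendering of the two programs in Python may make the point sharper. Here `p_holds` is a stand-in for the truth value of the hard statement p; for the snippet to terminate we must plug in a decided value, which is precisely what we cannot do for a genuinely open p:

```python
# A rendering of Program-1 and Program-2 above.  `p_holds` is an
# assumption of this sketch: it stands in for the truth of the hard
# number-theoretic statement p, which no one can actually supply.

def make_program_1(p_holds):
    def program_1(_any_input):
        if p_holds:            # "if p is true, output 1"
            return 1
        while True:            # "else loop forever"
            pass
    return program_1

def program_2(_any_input):     # "output 1" unconditionally
    return 1

# If p is true, the two programs compute the same total function:
prog = make_program_1(p_holds=True)
print(all(prog(n) == program_2(n) for n in range(100)))  # True
```

Deciding whether Program-1 and Program-2 are extensionally equal is exactly as hard as deciding p itself, and no finite amount of testing distinguishes "will eventually answer" from "loops forever".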
 
Last edited:
  • #136
mfb said:
I think the discussion is going in circles. I'll continue contributing once we have a functional human brain simulation, as this seems to be the only thing that can convince some of the possibility of such a simulation. I'm highly confident simulating neurons with an effective model in classical physics is sufficient to get an accurate response. This is the approach all simulations take so far.

Simulating Caenorhabditis elegans (a worm) moving around, based on its actual cell structure
Simulated Drosophila melanogaster fly
http://www.nsi.edu/~nomad/darwinvii.html - learning how to interpret video camera inputs to recognize objects without guidance.
With increasing computing power and improved neuron mapping technique, these simulations will grow. We currently don't have the computing power to simulate a human brain, and we don't have the tools to map a brain cell by cell, but that is probably just a matter of time.

Nothing is circular; the discussion is doing great, and everybody has had a chance to share an opinion and admit, at least indirectly, that this is a controversial subject and that the opinions of others can benefit their own perspective.

The only thing that's repetitive and circular is your ignorance of the underlying issues, for which you compensate with examples like the quoted ones - amazing in their own right, but hardly a basis for extrapolation, because of the philosophical problems that I've typed about before and which, for some reason, you have avoided even mentioning.

It would be a shame if you stopped contributing, because you have made some good examples and references for the sake of the discussion, but you should really stop 'contributing' as if everything you type were indisputable, without even wondering or seeking clarification about the underlying problems. Don't take this personally, but that is the locus of the circularity in this thread.
 
  • #137
Buzz Bloom said:
Do you agree or disagree that in a conscious being experience can (or must) cause learning and adaptation, and thereby change the range of possible behaviors?
I disagree.
 
  • #138
ObjectivelyRational said:
Sorry, it's quite off topic but the definition of a philosophical zombie is self-refuting, given proper premises.
I agree that it's off topic, but disagree with the rest.
 
  • Like
Likes Auto-Didact
  • #139
mfb said:
How do you simulate a brain? You simulate the behavior of every component - every nucleus and electron if you don't find a better way. You do not need to know about neurons or any other large-scale structures. They naturally occur in your simulation. You just need to know the initial state, and that is possible to get.
I agree with this entirely.

But there is one unexpected piece of neural functionality in the human brain that we will need to replicate: the ability to hold a relatively large amount of information (at least dozens of bits) in a single state. We know that such functionality exists because we are able to be "conscious" of complex concepts, such as the image of a tree, and such objects cannot be summarized in just a few bits - the number that can normally be encoded into a single dynamic state.

But, of course, we do know of devices that can do this - and devices which have the potential to make good (Darwinian) use of information in this form.

On the other hand, I do not see brain functionality that could not be replicated by conventional AND/OR/NAND/NOR gates - even to the point of having the replication report that it is conscious. But what would be the purpose of reporting that you are conscious if you are not? Where would the concept of "consciousness" even come from if it didn't exist within social beings? The fact is, we really do have conscious experiences - we aren't just making it up. And, as evidenced by the fact that we can talk about it, that consciousness has the potential to influence our actions.

I will add the argument for how many-bit consciousness compels a many-bit state, though it has fallen on skeptical ears before. Perhaps I can do better this time.

If you are describing something that requires 50 bits, having only 25 of those bits doesn't describe that something. You need all the bits. So you need some way of associating those bits - a way to define which 50 bits stored in this universe are to be the symbolic description of that something. Let's say you use a bunch of logic gates (NAND, NOR, AND, OR) or the presumed neural equivalent. So you have 50 bits of input wired into these gates. But nowhere in that circuit are all 50 bits together; nowhere in that circuit is the full 50 bits' worth of information associated, so that the physics can know what there is to be conscious of. For example, you can compute whether the number of 1 bits is odd or even. This will give you a single bit, and therefore a single state, that is dependent on the 50 bits, but obviously it does not describe the original object.
So how do you associate 50 bits without losing their value? There is only one physical process for doing this - and being on the Physics Forum should mean that I don't have to say what that is.
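The parity example can be made concrete (my sketch, not from the post): the derived bit depends on every one of the 50 input bits, yet almost nothing about the original state can be recovered from it - flipping any two bits yields a different state with the same parity:

```python
# 50 bits, one derived parity bit.  The parity depends on every input
# bit, yet many different 50-bit states collapse to the same value.

import random

random.seed(0)                       # fixed seed so the run is reproducible
bits = [random.randint(0, 1) for _ in range(50)]

parity = 0
for b in bits:
    parity ^= b                      # 1 iff the number of 1-bits is odd

# Flip any two bits: a different 50-bit state, identical parity bit.
other = list(bits)
other[0] ^= 1
other[1] ^= 1
other_parity = 0
for b in other:
    other_parity ^= b

print(parity == other_parity, bits != other)  # True True
```

In fact 2^49 distinct 50-bit states share any given parity value, which is the sense in which a single classical gate output cannot "hold" the full description.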

Now I am leaving out a piece of this. Associating the bits simply provides one essential element of consciousness; it doesn't "explain consciousness". Fully explaining consciousness has its limits, but if you have followed this so far, there is further to go. Our conscious awareness is very centered around being human, but the basic process required to generate it (superpositioning) is a ubiquitous physical process. It is reasonable to presume that there is a fundamental "consciousness", and that this is implemented in the human brain for a Darwinian "purpose", with the result being "human consciousness". One more step, made by Penrose, though not in these words: in theory, there is a limited amount of information in the universe - or, at the least, everything that we know about the universe is consistent with there being a finite (though very large) amount of information. Let's create a side universe for ourselves, one with enough flash memory to store a complete description of our universe, and make a backup copy of our universe in that flash memory. The question then becomes: how is that backup different from our real universe? That copy will include the full information about humans, but there will be no consciousness. Taken more broadly, all the information about our universe does not make our universe. There is a "reality" element, which is the actual physics.

Obviously, not a full explanation. I don't have that.
 
Last edited:
  • #140
Demystifier said:
I disagree.
Hi Demystifier:

I appreciate your post, but its succinctness is a bit disappointing. Although we disagree, I respect your knowledge, and I think I would benefit from understanding your reasons for disagreeing. It may well be that our disagreement is only about the use of terminology.

Regards,
Buzz
 
  • #141
Buzz Bloom said:
Hi Demystifier:

I appreciate your post, but its succinctness is a bit disappointing. Although we disagree, I respect your knowledge, and I think I would benefit from understanding your reasons for disagreeing. It may well be that our disagreement is only about the use of terminology.

Regards,
Buzz
To avoid going too far off topic, for more details see my paper http://philsci-archive.pitt.edu/12325/1/hard_consc.pdf
 
  • #142
Demystifier said:
I agree that it's off topic, but disagree with the rest.
Which part?
 
  • #143
ObjectivelyRational said:
Which part?
That the philosophical zombie is self-refuting. You can also take a look at my paper, linked in the post above.
 
  • #144
Demystifier said:
That the philosophical zombie is self-refuting. You can also take a look at my paper, linked in the post above.

The paper looks reasonable, but I see no mention of a philosophical zombie... We must be defining it differently, because we simply cannot disagree with conclusions without disagreeing with either the premises or the argument.

Where does it define the philosophical zombie?
 
  • #145
ObjectivelyRational said:
The paper looks reasonable, but I see no mention of a philosophical zombie... We must be defining it differently, because we simply cannot disagree with conclusions without disagreeing with either the premises or the argument.

Where does it define the philosophical zombie?
The paper does not talk about p-zombies explicitly. However, it defends the same basic ideas as Chalmers's book. To see how p-zombies are logically possible, one can then consult that book.
 
  • #146
ObjectivelyRational said:
The paper looks reasonable, but I see no mention of a philosophical zombie... We must be defining it differently, because we simply cannot disagree with conclusions without disagreeing with either the premises or the argument.

Where does it define the philosophical zombie?

I noted something in your paper which we likely disagree on:

You state:

1. Physical laws are entirely syntactical.
2. Brains are entirely based on physical laws.
3. Anything entirely based on syntactical laws is entirely syntactical itself.
Therefore brains are entirely syntactical.

1 may be true but 2 is false.

Physical laws are our attempt to describe reality and they have a certain form, but they are abstractions.
Reality is and acts as it is and does, reality does not follow nor is it based on our physical laws.

You are conflating two distinct things here. One is science, i.e. the study of reality and the abstractions and formulations in math and language we use in order to try to understand it. The other is reality itself, which has a nature and behaves according to its nature. We try to understand reality, but our understanding is not something reality follows or is based on.

This kind of error helps me understand why we disagree.

Best of luck!
 
  • #147
ObjectivelyRational said:
I noted something in your paper which we likely disagree on:

You state:

1. Physical laws are entirely syntactical.
2. Brains are entirely based on physical laws.
3. Anything entirely based on syntactical laws is entirely syntactical itself.
Therefore brains are entirely syntactical.

1 may be true but 2 is false.

Physical laws are our attempt to describe reality and they have a certain form, but they are abstractions.
Reality is and acts as it is and does, reality does not follow nor is it based on our physical laws.

You are conflating two distinct things here. One is science, i.e. the study of reality and the abstractions and formulations in math and language we use in order to try to understand it. The other is reality itself, which has a nature and behaves according to its nature. We try to understand reality, but our understanding is not something reality follows or is based on.

This kind of error helps me understand why we disagree.

Best of luck!
You are right: if my axiom 2 is wrong, then so are my conclusions. In that sense I can conditionally agree with you about the p-zombies. Note also that the last paragraph of that section in my paper is a conditional statement, i.e. it contains an "if".
 
Last edited:
  • #148
There is one point that I forgot to mention in my previous post. Suppose you say that "functionally"** a computer program is exactly the same as a sentient human being.

Now suppose you accepted LEM for basically any non-recursive set (assume the halt set, to be specific). Then by that "very acceptance" you are saying that the sentient human being has the "potentiality" to go beyond a computer program. That is, even though the sentient human being can't prove all the statements in a set past a given threshold (of his training, that is), it is "possible" to "help" him (wouldn't this be the very point of taking LEM to be true?).

I am not necessarily taking any point of view here. I just have genuine difficulty seeing how someone would take both of the following viewpoints simultaneously:
-a- "equating" computer programs and sentient human beings for "all" functional purposes*** (in the sense of potentiality****)
-b- accepting LEM for halt set

If you reject (b), then I can see why someone could take view (a) above (as there is at least no apparent internal inconsistency).

But I personally feel quite strongly that all of this discussion is eclipsed by my previous posts, so while it is good to mention (for the sake of completeness), it is of a less fundamental nature (in my view).

** I keep emphasizing this distinction on the following basis:
Suppose you made an automaton out of "pure circuitry" and "nothing else", but which by all appearances wouldn't appear or act as such (let's assume so, for all practical purposes). But so what? Should I say it is "really" conscious? It could even deceive someone who didn't know it was "pure circuitry". But even then, what difference does it make?

*** Notice that I don't just mean "pragmatic functional purposes" or "practical functional purposes", but certainly in a deeper sense than that.

**** I am personally completely convinced that the equivalence doesn't even hold in the sense of "past a certain threshold" (of training), let alone in the sense of "potentiality".

Edit:
Perhaps some clarification will make things clearer. Perhaps this is too much for a point that isn't all that important (at least in my opinion), but since I already made the post, an explanation is better than leaving ambiguity.

When we talk about a statement such as:
"This program loops forever on this input"
We can only talk about such a statement being "absolutely unprovable" when it is true, because proving it false (if it really is false) is trivial: just run the program until it halts.
Let the positions of these supposedly "absolutely unprovable" statements be denoted by some set S.

S can't be r.e. That's because, if it were, every statement could be decided in a sound way on the following basis:
(1) Start with number 0.
(2) Call for "help". If the statement belongs to S, "help" will never come, but eventually the statement will just be "enumerated" (because S is r.e.) and we can return "true". If the statement belongs to the complement of S, then "help" will come at some point. So it is just a matter of waiting long enough.
(3) Move to next number.

"help" means pressing a button on a controller that sends a signal to some "genius mathematicians" in a faraway galaxy. When the help button is pressed, they start working on the problem, "eventually" resolving it (if it is resolvable at all).
Also note that, roughly, the idea here is that if the "genius mathematicians" start resorting to guesses, they may, to be sure, get the result right (that is, return "true") for a finite number of initial values of S, but that comes at the cost of making eventual mistakes (potentially at any statement number).
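The "eventually it would just be enumerated" step is the classic dovetailing construction. Here is a toy sketch (my illustration; Python generators stand in for programs, and "halting" means the generator is exhausted):

```python
# Dovetailing: how an r.e. set gets "eventually enumerated" even though
# membership can't be decided up front.  Toy "programs" are generators.

def runs_forever():
    while True:
        yield

def halts_after(n):
    def prog():
        for _ in range(n):
            yield
    return prog

programs = [runs_forever, halts_after(3), runs_forever, halts_after(7)]

def enumerate_halting(programs, max_bound):
    """Yield the index of each program that halts, by running every
    program for 1 step, then 2 steps, and so on (iterative deepening)."""
    found = set()
    for bound in range(1, max_bound + 1):
        for i, prog in enumerate(programs):
            if i in found:
                continue
            gen, halted = prog(), False
            for _ in range(bound):
                try:
                    next(gen)
                except StopIteration:   # generator exhausted: it "halted"
                    halted = True
                    break
            if halted:
                found.add(i)
                yield i

print(list(enumerate_halting(programs, 20)))  # [1, 3]
```

The non-halting programs (indices 0 and 2) simply never show up in the output; no finite amount of waiting certifies that they never will, which is exactly the asymmetry the "help" construction above trades on.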

Now the possibility of (a) being true and (b) being false could "presumably" occur when there exists a recursive and sound reasoning system that halts on all values that belong to the set S' (complement of S).
Is there something obviously wrong with it or not? I can't say to be honest.

Now, by a recursive and sound reasoning system I mean a partial recursive function f:N→N such that:
-- it can't return "false" when the statement for a given number is true
-- it can't return "true" when the statement for a given number is false
-- it can't return "false" when the statement number belongs to the set S
-- it may run forever for any given input

P.S. I have tried to remove any "major" mistakes in the "Edit" part, but some might still remain, as I hadn't written any of this down thoroughly before (though I had given some thought to these issues).
 
Last edited:
  • #149
Demystifier said:
To avoid too much offtopic, for more details see my paper http://philsci-archive.pitt.edu/12325/1/hard_consc.pdf
Hi Demystifier:
Thanks for the link.

From the abstract it seems we mostly agree. I plan to finish reading the paper soon.

Regards,
Buzz
 
  • #150
Demystifier said:
You are right: if my axiom 2 is wrong, then so are my conclusions. In that sense I can conditionally agree with you about the p-zombies. Note also that the last paragraph of that section in my paper is a conditional statement, i.e. it contains an "if".

Then in a sense we are likely in agreement.

Simulation of a system - using physics, science, computation - is not the same as replication of that system. That is not to say that simulation of a system cannot replicate certain aspects of it, but if "what matters" about a real system cannot be successfully simulated, then certainly replication of "what matters" about that system cannot be achieved through simulation.

This does not mean that consciousness cannot actually be replicated; it only means that it cannot be replicated through simulation. A model of a wave on water will never actually be a wave. If a wave is "what matters" phenomenologically, we can of course set one up with another liquid... hence replicating the waves exhibited by water with something else. Of course, we had to know enough about waves to know we could replicate the waves we see on water with waves in another liquid.

If and when the hard problems of consciousness are solved, replication would entail ensuring that what matters about a natural system - i.e., what it is about our brains that makes consciousness possible and causes it to be - is present in the system which is to exhibit it. In this case, of course, replication would not be simulation, but actual exhibited phenomena of consciousness, which would emerge because the conditions which create it are present.
 
  • Like
Likes Demystifier