Can you solve Penrose's chess problem and win a bonus prize?

  • Thread starter: Auto-Didact
  • Tags: Ai, Chess, Penrose
Summary
Penrose's chess problem challenges players to find a way for white to win or force a stalemate against a seemingly unbeatable black setup, designed to confound AI. The Penrose Institute is studying how humans achieve insights in problem-solving, inviting participants to share their solutions and reasoning. While computers may struggle with this position due to its complexity, many human players can recognize the potential for a draw or even a win through strategic moves. The problem highlights the differences in human intuition and AI computation, suggesting that human reasoning may involve more than just brute force calculations. Participants are encouraged to engage with the puzzle and share their experiences for a chance to win a prize.
  • #121
Auto-Didact said:
My point is not that things can't be defined functionally (they can); my point is that one cannot claim that a functional description can serve as a legitimate replacement for a physical description when speaking about natural phenomena.

I'm not sure who is saying that functional definitions are replacements for physical descriptions. I don't think anybody is. Functionalism as I understand it is a matter of taking equivalence classes of physical descriptions. We're saying that one physical description is functionally equivalent to another (for certain purposes). That doesn't mean that we're ignoring physical descriptions.

For example, if I want to make a calculator, or make a chair, or make a car, there are many, many different physical descriptions that could be considered a calculator, or a chair, or a car. That these things are defined by their functional role means that physically inequivalent objects can both be calculators.

Similarly, functionalism for consciousness would mean that you could "implement" consciousness in different ways, in the same way that you can implement a calculator or a chair in different ways. There's no denial of physics involved here.
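To make the equivalence-class idea concrete, here is a minimal sketch in Python (the function names are mine, purely illustrative): two structurally different mechanisms that are the same calculator in the functional sense, because they realize the same input-output relation.

def add_via_arithmetic(a, b):
    # "Implementation 1": the language's built-in integer addition.
    return a + b

def add_via_bitwise(a, b):
    # "Implementation 2": ripple-carry-style addition using only bit
    # operations (valid for non-negative integers).
    while b != 0:
        carry = (a & b) << 1   # positions where a carry is generated
        a = a ^ b              # sum ignoring carries
        b = carry
    return a

# Same functional role, entirely different internal "physics":
assert all(add_via_arithmetic(a, b) == add_via_bitwise(a, b)
           for a in range(50) for b in range(50))

The two functions fall in the same functional equivalence class even though no line of their internals matches.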
 
  • #122
Auto-Didact said:
Sex organs are natural phenomena and therefore there is in principle never a problem with describing them using physics. Apart from consciousness, I cannot think of any natural phenomenon for which this is not trivially possible. Just describing something by its function is not at all the same as saying that no physical description is possible; this is however exactly what ontological functionalists claim, i.e. that consciousness can only be described functionally and that a physical description is in principle impossible.

I think you misunderstand what functionalists are claiming. Going back to the example of a calculator. There is certainly a physical description of how any particular calculator works, in terms of solid state physics and electronics, etc. But there can be things that are considered calculators that work by different physical principles (Babbage's Difference Engine, for example). What makes something a calculator is not a particular configuration of electronics components, but the fact that it is possible to do calculations with it.
 
  • Likes: durant35
  • #123
stevendaryl said:
It seems to me that in order to do modern science, you have to talk about things that are not physical: functions, sets, vectors, tensors, etc. If you want to say that such abstractions don't really exist, since they're not physical, I guess that's okay, although I can't see any point in saying that. But in any case, that's just a terminological matter, it seems to me---you're using "really exist" in a particular way that excludes such nonphysical entities. But you can't conclude anything about consciousness based on a terminological decision. If you want to say that entities that are defined functionally don't really exist in the same sense that electrons do, then what's your basis for saying that consciousness really exists? Certainly, brains exist, and certainly those brains connected to nerves and muscles and fingers and mouths are responsible for making noises and other signals about consciousness, but how does that show that consciousness exists, as a physical entity, or as a physical property?
We are very much straying into the philosophical distinctions between what is physics and what is mathematics. Luckily I can say this somewhat briefly. Physical laws (i.e. differential equations) belong to a special class of physical things, and their properties tend to be describable even further using other forms of mathematics.

It is extremely hard to tell what the rest of mathematics is and whether it has some physically actualized form in the natural world or in some other world, and I won't try to settle that here; I'll just say that I do not subscribe to mathematical Platonism.
I'd actually like to end by quoting what Weyl and Poincaré had to say in regard to these matters but I can't find the relevant quotations.
 
  • #124
stevendaryl said:
I'm not sure who is saying that functional definitions are replacements for physical descriptions. I don't think anybody is. Functionalism as I understand it is a matter of taking equivalence classes of physical descriptions. We're saying that one physical description is functionally equivalent to another (for certain purposes). That doesn't mean that we're ignoring physical descriptions.
There are legions of cognitive scientists, psychologists, philosophers, theologians, etc., who specifically argue for ontological functionalism about consciousness instead of a physical theory, and thus for a full refutation of physicalism. If you haven't ever met any, you simply aren't trying hard enough.
For example, if I want to make a calculator, or make a chair, or make a car, there are many, many different physical descriptions that could be considered a calculator, or a chair, or a car. That these things are defined by their functional role means that physically inequivalent objects can both be calculators.

Similarly, functionalism for consciousness would mean that you could "implement" consciousness in different ways, in the same way that you can implement a calculator or a chair in different ways. There's no denial of physics involved here.

I think you misunderstand what functionalists are claiming. Going back to the example of a calculator. There is certainly a physical description of how any particular calculator works, in terms of solid state physics and electronics, etc. But there can be things that are considered calculators that work by different physical principles (Babbage's Difference Engine, for example). What makes something a calculator is not a particular configuration of electronics components, but the fact that it is possible to do calculations with it.
I understand your point. What you are describing simply isn't ontological functionalism, but a weaker form of functionalism wherein the functional theory of consciousness is just a placeholder theory, eventually to be replaced by a physical theory, similar to how many definitions in classical chemistry - itself a temporary functional placeholder theory - were eventually fully replaced by 20th century physics.

Instead of replying in detail, I'll just refer you to a paper: Glymour 1987, Psychology as Physics
 
  • #125
Buzz Bloom said:
Hi Paul:

Perhaps I am misinterpreting mfb's claim. I read it as saying that a QM simulation applied to the atoms, electrons, and such that comprise a brain can lead to the intelligent and conscious behavior which the brain exhibits.

Here is a quote from the summary of Downing's book.
Downing focuses on neural networks, both natural and artificial, and how their adaptability in three time frames—phylogenetic (evolutionary), ontogenetic (developmental), and epigenetic (lifetime learning)—underlie the emergence of cognition.​
I don't see how the simulation of brain behavior with artificial neural networks is the same as simulation of its atomic activity. I see that as an enormous difference.

Regards,
Buzz
The point is that this work is reductionist from brain to neurons. Unless you further believe that the behavior of individual neurons is fundamentally not reducible to the physics of atoms, this reference is consistent with mfb's position. Most arguments I've seen against reductionism of the brain are that it is more than the sum of individual functional neuron behavior, rather than that local neuron behavior is not reducible to the physics of atoms.
 
Last edited:
  • #126
Auto-Didact said:
There are legions of cognitive scientists, psychologists, philosophers, theologians, etc., who specifically argue for ontological functionalism about consciousness instead of a physical theory

I think you're misunderstanding what they are saying. Let's take the example of a calculator: to be a calculator means realizing a particular functional relationship between inputs and outputs. So you can develop a theory of calculators independently of any particular choice of how it's implemented. But if you're going to build a calculator, of course you need physics in order to get a thingy that implements that functional relationship. Turing invented a theory of computers before there were any actual computers. An actual computer implements the abstraction (actually, only partially, because Turing's computers had unlimited memories).

So the people developing a functional theory of mind are trying to understand what abstraction the physical mind is an instance of. Does that count as a refutation of physicalism? Only if someone is wanting to be provocative.
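As a toy illustration of how the abstraction stands apart from any implementation, here is a minimal Turing-machine simulator in Python; the example machine (a unary incrementer) and all names are mine, purely illustrative. Nothing in it cares whether the "tape" is relays, transistors, or a dictionary.

def run_turing_machine(rules, tape, state="start", head=0, max_steps=10_000):
    # rules maps (state, symbol) -> (new_symbol, move in {-1, +1}, new_state)
    tape = dict(enumerate(tape))            # sparse tape; blank cells read "_"
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, "_")
        new_symbol, move, state = rules[(state, symbol)]
        tape[head] = new_symbol
        head += move
    return "".join(tape[i] for i in sorted(tape))

# Example machine: append one '1' to a unary number.
rules = {
    ("start", "1"): ("1", +1, "start"),   # scan right over the 1s
    ("start", "_"): ("1", +1, "halt"),    # write a 1 at the first blank, halt
}
print(run_turing_machine(rules, "111"))   # -> 1111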
 
  • Likes: MrRobotoToo
  • #127
Auto-Didact said:
I understand your point. What you are describing simply isn't ontological functionalism, but a weaker form of functionalism wherein the functional theory of consciousness is just a placeholder theory, eventually to be replaced by a physical theory, similar to how many definitions in classical chemistry - itself a temporary functional placeholder theory - were eventually fully replaced by 20th century physics.

No, that's not what I meant. It's not a placeholder at all. Take the example of a computer: Turing developed a theory of computers that was independent of any specific implementation of a computer. It is not correct to say that Turing's theory was a "placeholder" for a more physical theory of computers that was only possible after the development of solid state physics. The abstract theory of computation is neither a placeholder for a solid state physics description of computers, nor is it a replacement for such a description. They are two different, but related, lines of research: the theory of computation, and the engineering of building computers.

Correspondingly, there could be a functionalist theory of mind which relates to a physical theory of the brain in the same way that the abstract theory of computation relates to electronic computers.
 
  • Likes: MrRobotoToo
  • #128
PAllen said:
Most arguments I've seen against reductionism of the brain are that it is more than the sum of individual functional neuron behavior, rather than that local neuron behavior is not reducible to the physics of atoms.
Hi Paul:

Why can't it be both?
1. The functionality of the brain is more than the sum of individual functional neuron behavior.
AND
2. The functionality of individual neuron behavior is more than the sum of the physics of their atoms, electrons, etc.

My problem with reductionism in general is that reductionists seem to ignore the implications of emergent phenomena. What makes emergent phenomena emergent is that what emerges does not depend on the details of its constituents. The details of how the constituents implement their function do not influence the emergent behavior; only the behavior of the constituents affects it. Reductionism is OK with respect to emergent phenomena if the reductive decomposition is only functional, not physical.

Regards,
Buzz

Auto-Didact said:
Just describing something by its function is not at all the same as saying that no physical description is possible; this is however exactly what ontological functionalists claim, i.e. that consciousness can only be described functionally and that a physical description is in principle impossible.
Hi Auto-Didact:

I don't think I have ever met anyone like those you describe as "ontological functionalists". The individuals whom I have met who consider themselves to be "functionalists", like myself, do not believe the physical description is impossible, but rather just irrelevant. The emergent behavior of emergent phenomena, like consciousness, does not depend on the physical description of the constituents, only on their functionality.

Regards,
Buzz
 
Last edited:
  • #129
I merged two posts into one above at the suggestion of a moderator.
 
  • #130
Buzz Bloom said:
Hi Paul:

Why can't it be both?
1. The functionality of the brain is more than the sum of individual functional neuron behavior.
AND
2. The functionality of individual neuron behavior is more than the sum of the physics of their atoms, electrons, etc.

My problem with reductionism in general is that reductionists seem to ignore the implications of emergent phenomena. What makes emergent phenomena emergent is that what emerges does not depend on the details of its constituents. The details of how the constituents implement their function do not influence the emergent behavior; only the behavior of the constituents affects it. Reductionism is OK with respect to emergent phenomena if the reductive decomposition is only functional, not physical.

Regards,
Buzz
The point was why I thought this reference was consistent with mfb's position. The book is reductionist at the neuronal level; to me, that makes it at least consistent with that position.

As to emergence, I see no conflict between emergence and reductionism. Specifically, simulating a large system of elements whose individual behavior is known will end up displaying the emergent behavior without explicitly putting it into the simulation. The emergence will happen in the simulation as readily as it would with the physical system.
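To make this concrete, here is a minimal sketch in Python, with Conway's Game of Life standing in as a toy system (my choice of example): only the local update rule is programmed, yet a "glider" - an emergent, self-propagating pattern - appears without ever being put into the simulation.

from collections import Counter

def life_step(cells):
    # cells is a set of (x, y) live coordinates; only the local rule is encoded
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in cells
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in cells)}

# A glider: after 4 steps the same shape reappears, shifted by (1, 1).
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
cells = glider
for _ in range(4):
    cells = life_step(cells)
assert cells == {(x + 1, y + 1) for (x, y) in glider}

Nothing in life_step mentions gliders; the pattern's motion is purely emergent, which is the sense in which a simulation "displays the emergent behavior without explicitly putting it in".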
 
  • Likes: Buzz Bloom
  • #131
I think the discussion is going in circles. I'll continue contributing once we have a functional human brain simulation, as this seems to be the only thing that can convince some people of the possibility of such a simulation. I'm highly confident that simulating neurons with an effective model in classical physics is sufficient to get an accurate response. This is the approach all simulations have taken so far.

Simulating Caenorhabditis elegans (a worm) moving around, based on its actual cell structure
Simulated Drosophila melanogaster fly
http://www.nsi.edu/~nomad/darwinvii.html - learning how to interpret video camera inputs to recognize objects without guidance.
With increasing computing power and improved neuron-mapping techniques, these simulations will grow. We currently don't have the computing power to simulate a human brain, and we don't have the tools to map a brain cell by cell, but that is probably just a matter of time.
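For a sense of what an "effective model in classical physics" can look like, here is a minimal leaky integrate-and-fire neuron in Python (all parameter values are illustrative, not fitted to data): one ordinary differential equation integrated step by step, with a threshold-and-reset rule for spikes, and no quantum mechanics anywhere.

def simulate_lif(input_current, dt=0.1, tau=10.0, v_rest=-65.0,
                 v_threshold=-50.0, v_reset=-70.0):
    # Leaky integrate-and-fire: the membrane potential relaxes toward rest
    # and is driven by the input current; crossing threshold emits a spike.
    v = v_rest
    spike_times = []
    for step, i_ext in enumerate(input_current):
        v += dt * (-(v - v_rest) + i_ext) / tau
        if v >= v_threshold:
            spike_times.append(step * dt)   # record the spike time
            v = v_reset                     # instantaneous reset
    return spike_times

# A constant super-threshold drive produces a regular spike train.
print(simulate_lif([20.0] * 2000))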
 
  • #132
PAllen said:
As to emergence, I see no conflict between emergence and reductionism. Specifically, simulating a large system of elements whose individual behavior is known will end up displaying the emergent behavior without explicitly putting it into the simulation. The emergence will happen in the simulation as readily as it would with the physical system.
Hi Paul:

In general I agree with this, but there is an exception.

If the behavior of the constituents includes adaptability, then the emergent behavior may depend on what may well be an accident (or a combination of several) to which the organism and its components adapt. If that is the case, then the simulation of the components (whether physical or functional) may never experience the accident, and therefore that simulation will fail to produce the emergence.

The following is intended to be interpreted as metaphorical.

When emergent behavior is passed from one generation to another, it would not be by genetics, but rather by parent-to-offspring training. Therefore, the permanence of such emergent behavior requires that the organism that acquires it be a member of a species which has the behavior of training offspring. In such a case, the new behavior is not just a characteristic of the individual; it becomes a cultural or group characteristic, and a new level of reduction emerges. The former individuals now become components of the group as a new type of individual.

Regards,
Buzz
 
  • Likes: durant35
  • #133
Buzz Bloom said:
Hi Paul:

In general I agree with this, but there is an exception.

If the behavior of the constituents includes adaptability, then the emergent behavior may depend on what may well be an accident (or a combination of several) to which the organism and its components adapt. If that is the case, then the simulation of the components (whether physical or functional) may never experience the accident, and therefore that simulation will fail to produce the emergence.

The following is intended to be interpreted as metaphorical.

When emergent behavior is passed from one generation to another, it would not be by genetics, but rather by parent-to-offspring training. Therefore, the permanence of such emergent behavior requires that the organism that acquires it be a member of a species which has the behavior of training offspring. In such a case, the new behavior is not just a characteristic of the individual; it becomes a cultural or group characteristic, and a new level of reduction emerges. The former individuals now become components of the group as a new type of individual.

Regards,
Buzz
Normally, emergent phenomena are known (or presumed) to be generic, not accidental. However, running a simulation can investigate this very question (in principle). Accidents in the sense of chaotic phenomena would be handled by perturbing initial conditions and running multiple simulations. Accidents in the sense of quantum randomness would be handled by simulating them using a source of true randomness. In principle, you could then find out whether e.g. complex life is generic.
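As a sketch of the "perturb initial conditions and run multiple simulations" methodology, here is a Python toy ensemble using the logistic map as a stand-in chaotic system (the choice of system is mine, purely illustrative):

import random

def trajectory(x0, steps=50, r=4.0):
    # Iterate the chaotic logistic map from initial condition x0.
    x = x0
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

# An ensemble of runs from slightly perturbed initial conditions:
# individual endpoints scatter wildly, but the ensemble's statistics
# tell you which outcomes are generic and which are accidents.
base = 0.3
ensemble = [trajectory(base + random.uniform(-1e-9, 1e-9)) for _ in range(1000)]
print(min(ensemble), max(ensemble))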

In simulating a human brain, one is not trying to end up with, e.g., my brain in particular, but with an instance of a generic human-like brain.
 
  • #134
PAllen said:
In simulating a human brain, one is not trying to end up with, e.g., my brain in particular, but with an instance of a generic human-like brain.
Hi Paul:

I much appreciate this discussion with you. I feel I am improving my understanding of my own personal confusions.

I did have in mind what you described as the object of the simulation. I guess my choice of an explanatory example failed to communicate what I wanted to convey.

I am now thinking of the generic human mind being simulated in its state during the very long period before agriculture. The change from hunter-gatherer behavior to farmer behavior depended on extreme environmental changes which made the hunter-gatherer lifestyle go from completely sufficient to no longer adequate to sustain the population. This is an example of an "accident".

If you were an alien anthropologist intending to simulate the brains of a group of human subjects from that era, how would you anticipate such an accidental change and thereby incorporate the necessary elements in your simulation, so that the simulated behavior would change from hunter-gatherer to farmer? What I am suggesting is that it may not be possible to accurately capture knowledge of, and then simulate, the essential adaptive elements of an adaptive species, and their limits, which one would need in order to correctly simulate behavior changes driven by unexpected external survival requirements.

Regards,
Buzz
 
  • #135
One specific point about computation (which I am writing down in a very specific context).

Computation can be thought of as a clerical process for enumerating "all" elements of an r.e. set. What it does not tell us about at all (and isn't supposed to) is cognitively realisable processes that involve "incomplete enumeration" of sets (sets more complex than r.e. ones, for example) while merely guaranteeing that certain conditions are satisfied.
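To make the "clerical enumeration" picture concrete, here is a dovetailed enumeration in Python. The target set - numbers whose Collatz trajectory reaches 1 - is my choice of example; the point is that no step bound is known in advance, so the individual searches must be interleaved rather than run to completion one at a time.

def enumerate_collatz_reachers(limit_rounds=100):
    # Dovetail: each round starts one new search and advances every
    # pending search by one step, so no single search can block the rest.
    states = {}                          # n -> current value of its trajectory
    n = 1
    for _ in range(limit_rounds):
        states[n] = n                    # start a new search
        n += 1
        for k in list(states):
            x = states[k]
            if x == 1:
                yield k                  # k is certified to be in the set
                del states[k]
            else:
                states[k] = x // 2 if x % 2 == 0 else 3 * x + 1

print(list(enumerate_collatz_reachers()))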

Obviously, with a rather extremely limited lifetime and with practical concerns (potentially also the concern of doing something more "useful", perhaps), we don't or can't normally think about it much. As an exaggerated example, just to highlight how large the difference can be (deliberately incomplete, as I don't feel like writing a very long post):
p = some exceedingly difficult statement of number theory

Program-1 ("for all inputs"):
    if p is true:
        output 1
    else:
        loop forever

Program-2 ("for all inputs"):
    output 1

Note that Program-1 and Program-2 compute the same (total) function precisely when p is true, so deciding whether they are equivalent is at least as hard as settling p.
 
Last edited:
  • #136
mfb said:
I think the discussion is going in circles. I'll continue contributing once we have a functional human brain simulation, as this seems to be the only thing that can convince some people of the possibility of such a simulation. I'm highly confident that simulating neurons with an effective model in classical physics is sufficient to get an accurate response. This is the approach all simulations have taken so far.

Simulating Caenorhabditis elegans (a worm) moving around, based on its actual cell structure
Simulated Drosophila melanogaster fly
http://www.nsi.edu/~nomad/darwinvii.html - learning how to interpret video camera inputs to recognize objects without guidance.
With increasing computing power and improved neuron-mapping techniques, these simulations will grow. We currently don't have the computing power to simulate a human brain, and we don't have the tools to map a brain cell by cell, but that is probably just a matter of time.

Nothing is circular; the discussion is doing great, and everybody has had a chance to share an opinion and to admit, at least indirectly, that this is a controversial subject and that the opinions of others can benefit their own perspective.

The only thing that's repetitive and circular is your ignorance of the underlying issues, for which you compensate with examples like the quoted ones - amazing in their own right, but hardly a basis for extrapolation because of the philosophical problems that I've written about before and which, for some reason, you avoided even mentioning.

It would be a shame if you stopped contributing, because you have given some good examples and references for the sake of the discussion, but you should really stop 'contributing' as if everything you type were indisputable, without even wondering or seeking clarification about the underlying problems. Don't take this personally, but that is the locus of the circularity in this thread.
 
  • #137
Buzz Bloom said:
Do you agree or disagree that in a conscious being experience can (or must) cause learning and adaptation, and thereby change the range of possible behaviors?
I disagree.
 
  • #138
ObjectivelyRational said:
Sorry, it's quite off topic but the definition of a philosophical zombie is self-refuting, given proper premises.
I agree that it's off topic, but disagree with the rest.
 
  • Likes: Auto-Didact
  • #139
mfb said:
How do you simulate a brain? You simulate the behavior of every component - every nucleus and electron if you don't find a better way. You do not need to know about neurons or any other large-scale structures. They naturally occur in your simulation. You just need to know the initial state, and that is possible to get.
I agree with this entirely.

But there is one unexpected piece of neural functionality in the human brain that we will need to replicate: the ability to hold a relatively large amount of information (at least dozens of bits) in a single state. We know that such functionality exists because we are able to be "conscious" of complex concepts, such as the image of a tree, and such objects cannot be summarized in just a few bits - the number that can normally be encoded into a single dynamic state.

But, of course, we do know of devices that can do this - and devices which have the potential to make good (Darwinian) use of information in this form.

On the other hand, I do not see brain functionality that could not be replicated by conventional AND/OR/NAND/NOR gates - even to the point of having the replication report that it is conscious. But what would be the purpose of reporting that you are conscious if you are not? Where would the concept of "consciousness" even come from if it didn't exist within social beings? The fact is, we really do have conscious experiences - we aren't just making it up. And, as evidenced by the fact that we can talk about it, that consciousness has the potential to influence our actions.

I will add the argument for how many-bit consciousness compels a many-bit state, though it has fallen on skeptical ears before. Perhaps I can do better this time.

If you are describing something that requires 50 bits, having only 25 of those bits doesn't describe that something. You need all the bits. So you need some way of associating those bits - a way to define which 50 bits stored in this universe are to be the symbolic description of that something. Let's say you use a bunch of logic gates (NAND, NOR, AND, OR) or the presumed neural equivalent, with 50 bits of input wired into these gates. But nowhere in that circuit are all 50 bits brought together; nowhere in that circuit is the full 50 bits' worth of information associated so that the physics can "know" what there is to be conscious of. For example, you can compute whether the number of 1 bits is odd or even. This will give you a single bit, and therefore a single state, that is dependent on all 50 bits, but obviously it does not describe the original object.
So how do you associate 50 bits without losing their value? There is only one physical process for doing this - and being on the Physics Forum should mean that I don't have to say what that is.
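A small Python sketch of the parity point (illustrative only): the gate-style reduction of a 50-bit state to one bit depends on every input bit, yet it identifies 2^49 distinct states with each output value, so the original state cannot be recovered.

import random

def parity(bits):
    # What a tree of XOR gates would compute from the 50 inputs.
    p = 0
    for b in bits:
        p ^= b
    return p

state = [random.randint(0, 1) for _ in range(50)]
other = list(state)
other[3] ^= 1     # flip two bits: a genuinely different 50-bit state...
other[17] ^= 1
print(state != other)                    # True
print(parity(state) == parity(other))    # True: the one-bit summary can't tell them apart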

Now I am leaving out a piece of this. Associating the bits simply provides one essential element of consciousness; it doesn't "explain consciousness". Fully explaining consciousness has its limits, but if you have followed this so far, there is further to go. Our conscious awareness is very centered around being human, but the basic process required to generate it (superpositioning) is a ubiquitous physical process. It is reasonable to presume that there is a fundamental "consciousness", and that this is implemented in the human brain for a Darwinian "purpose", with the result being "human consciousness".

One more step, made by Penrose, though not in these words: in theory, there is a limited amount of information in the universe - or, at the least, everything that we know about the universe is consistent with there being a finite (though very large) amount of information. Let's create a side universe for ourselves, one with enough flash memory to store a complete description of our universe, and make a backup copy of our universe in that flash memory. The question then becomes: how is that backup different from our real universe? The copy will include the full information about humans, but there will be no consciousness. Taken more broadly, all the information about our universe does not make our universe. There is a "reality" element, which is the actual physics.

Obviously, not a full explanation. I don't have that.
 
Last edited:
  • #140
Demystifier said:
I disagree.
Hi Demystifier:

I appreciate your post, but its succinctness is a bit disappointing. Although we disagree, I respect your knowledge, and I think I would benefit from understanding your reasons for disagreeing. It may well be that our disagreement is only about the use of terminology.

Regards,
Buzz
 
  • #141
Buzz Bloom said:
Hi Demystifier:

I appreciate your post, but its succinctness is a bit disappointing. Although we disagree, I respect your knowledge, and I think I would benefit from understanding your reasons for disagreeing. It may well be that our disagreement is only about the use of terminology.

Regards,
Buzz
To avoid going too far off topic, for more details see my paper http://philsci-archive.pitt.edu/12325/1/hard_consc.pdf
 
  • #142
Demystifier said:
I agree that it's off topic, but disagree with the rest.
Which part?
 
  • #143
ObjectivelyRational said:
Which part?
That the philosophical zombie is self-refuting. You can also take a look at the paper I linked in the post above.
 
  • #144
Demystifier said:
That the philosophical zombie is self-refuting. You can also take a look at the paper I linked in the post above.

The paper looks reasonable, but I see no mention of a philosophical zombie... We must be defining it differently, because we simply cannot disagree about the conclusions without disagreeing with either the premises or the argument.

Where does it define the philosophical zombie?
 
  • #145
ObjectivelyRational said:
The paper looks reasonable, but I see no mention of a philosophical zombie... We must be defining it differently, because we simply cannot disagree about the conclusions without disagreeing with either the premises or the argument.

Where does it define the philosophical zombie?
The paper does not talk about p-zombies explicitly. However, it defends the same basic ideas as Chalmers's book does. To see how p-zombies are logically possible, one can then consult that book.
 
  • #146
ObjectivelyRational said:
The paper looks reasonable, but I see no mention of a philosophical zombie... We must be defining it differently, because we simply cannot disagree about the conclusions without disagreeing with either the premises or the argument.

Where does it define the philosophical zombie?

I noted something in your paper which we likely disagree on:

You state:

1. Physical laws are entirely syntactical.
2. Brains are entirely based on physical laws.
3. Anything entirely based on syntactical laws is entirely syntactical itself.
Therefore brains are entirely syntactical.

1 may be true but 2 is false.

Physical laws are our attempt to describe reality; they have a certain form, but they are abstractions.
Reality is and acts as it is and does; it does not follow, nor is it based on, our physical laws.

You are conflating two distinct things here: one is science, i.e. the study of reality and the abstractions and formulations in math and language we use in order to try to understand it; the other is reality itself, which has a nature and behaves according to its nature. We try to understand reality, but our understanding is not something reality follows or is based on.

This kind of error helps me understand why we disagree.

Best of luck!
 
  • #147
ObjectivelyRational said:
I noted something in your paper which we likely disagree on:

You state:

1. Physical laws are entirely syntactical.
2. Brains are entirely based on physical laws.
3. Anything entirely based on syntactical laws is entirely syntactical itself.
Therefore brains are entirely syntactical.

1 may be true but 2 is false.

Physical laws are our attempt to describe reality; they have a certain form, but they are abstractions.
Reality is and acts as it is and does; it does not follow, nor is it based on, our physical laws.

You are conflating two distinct things here: one is science, i.e. the study of reality and the abstractions and formulations in math and language we use in order to try to understand it; the other is reality itself, which has a nature and behaves according to its nature. We try to understand reality, but our understanding is not something reality follows or is based on.

This kind of error helps me understand why we disagree.

Best of luck!
You are right: if my axiom 2 is wrong, then so are my conclusions. In that sense I can conditionally agree with you about the p-zombies. Note also that the last paragraph of that section in my paper is a conditional statement, i.e. it contains an "if".
 
Last edited:
  • #148
There is one point that I forgot to mention in my previous post. Suppose you say that "functionally"** a computer program is exactly the same as a sentient human being.

Now suppose you accepted LEM for basically any non-recursive set (take the halting set, to be specific). Then by that very acceptance you are saying that the sentient human being has the "potentiality" to go beyond a computer program. That is, even though the sentient human being can't prove all the statements in a set past a given threshold (of his training, that is), it is "possible" to "help" him (wouldn't this be the very point of taking LEM to be true?).

I am not necessarily taking any point of view here. I just have genuine difficulty seeing how someone could hold both of the following viewpoints simultaneously:
-a- "equating" computer programs and sentient human beings for "all" functional purposes*** (in the sense of potentiality****)
-b- accepting LEM for halt set

If you reject (b), then I can see why someone might take view (a) above (as there is at least no apparent internal inconsistency).

But I personally feel quite strongly that all of this discussion is eclipsed by my previous posts, so perhaps while it is good for a mention (for the sake of completeness), it is of less fundamental nature (in my view).

** I keep emphasizing this distinction on the following basis:
Suppose you made an automaton out of "pure circuitry" and "nothing else", but one which, by all appearances, would appear and act conscious (let's assume so ... for all practical purposes). But so what? Should I say it is "really" conscious? It could even deceive someone who didn't know it was "pure circuitry". But even then, what difference does it make?

*** Notice that I don't just mean "pragmatic functional purposes" or "practical functional purposes", but certainly in a deeper sense than that.

**** I am personally completely convinced that equivalence doesn't even hold in sense of "past of a certain threshold" (of training) let alone the sense of "potentiality".

Edit:
Perhaps some clarification will make things clearer. This may be too much for a point that isn't all that important (at least in my opinion), but since I already made the post, I guess an explanation is better, to avoid ambiguity.

When we talk about a statement such as:
"This program loops forever on this input"
We can only talk about "absolutely unprovable" here because proving such a statement false (if it really is false) is trivial: just run the program on that input until it halts.
Denote the positions of these supposedly "absolutely unprovable" statements by some set S.

S can't be r.e. That's because if it were, every statement could be decided in a sound way on the following basis:
(1) Start with number 0.
(2) Call for "help". If the statement belongs to S, "help" will never come, but the statement will eventually be enumerated (because S is assumed to be r.e.), and we can just return "true". If the statement belongs to the complement of S, then "help" will come at some point. So it is just a matter of waiting long enough.
(3) Move to next number.

"help" means pressing a button on a controller that sends the signal to some "genius mathematicians" in a far away galaxy. With the help button, they start working on the problem "eventually" resolving it (if it is resolvable at all).
Also note that roughly the idea here is that if the "genius mathematicians" start retorting to guesses, to be sure they may get the result right (that is returning "true") for a finite number of initial values of S, but that comes at the cost of making eventual mistakes (potentially at any statement number).
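Here is the argument above as a Python sketch, with both ingredients as hypothetical stubs - neither is actually computable, which is exactly the point:

def decide(n, enum_S, help_answer):
    # enum_S(t): the elements of S enumerated within t steps
    #            (would exist IF S were r.e. - the assumption under attack).
    # help_answer(n, t): the mathematicians' verdict on statement n if it
    #            has arrived within t steps, else None
    #            (a verdict arrives eventually iff n is NOT in S).
    t = 0
    while True:
        if n in enum_S(t):
            return True        # n is in S: such statements are true but unprovable
        verdict = help_answer(n, t)
        if verdict is not None:
            return verdict     # n is resolvable: help eventually arrived
        t += 1

# Toy demo with decidable stand-ins (S = multiples of 3, arbitrary verdicts),
# just to exercise the control flow; the real S would admit no such stubs.
demo_enum_S = lambda t: {k for k in range(t) if k % 3 == 0}
demo_help = lambda n, t: ((n % 2 == 0) if (n % 3 != 0 and t > n) else None)
print([decide(n, demo_enum_S, demo_help) for n in range(6)])

For every n, one of the two events must occur, so decide always halts; that would make every such statement soundly decidable, which is the contradiction showing that S cannot be r.e.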

Now the possibility of (a) being true and (b) being false could "presumably" occur when there exists a recursive and sound reasoning system that halts on all values that belong to the set S' (the complement of S).
Is there something obviously wrong with that, or not? I can't say, to be honest.

Now by a recursive and sound reasoning system I mean a partial recursive function f:N→N such that:
-- it can't return "false" when the statement for the given number is true
-- it can't return "true" when the statement for the given number is false
-- it can't return "false" when the statement number belongs to the set S
-- it may run forever on any given input

P.S. I have tried to remove any "major" mistakes from the "Edit" part, but some might still remain, as I hadn't written any of this out thoroughly before (though I had given some thought to these issues).
 
Last edited:
  • #149
Demystifier said:
To avoid too much offtopic, for more details see my paper http://philsci-archive.pitt.edu/12325/1/hard_consc.pdf
Hi Demystifier:
Thanks for the link.

From the abstract it seems we mostly agree. I plan to finish reading the paper soon.

Regards,
Buzz
 
  • #150
Demystifier said:
You are right: if my axiom 2 is wrong, then so are my conclusions. In that sense I can conditionally agree with you about the p-zombies. Note also that the last paragraph of that section in my paper is a conditional statement, i.e. it contains an "if".

Then in a sense we are likely in agreement.

Simulation of a system, using physics, science, and computation, is not the same as replication of that system. That is not to say that a simulation cannot replicate certain aspects of the system, but if "what matters" about a real system cannot be successfully simulated, then certainly replication of "what matters" about that system cannot be achieved through simulation.

This does not mean that consciousness cannot actually be replicated; it only means that it cannot be replicated through simulation. A model of a wave on water will never actually be a wave. If a wave is "what matters" phenomenologically, we can of course set one up with another liquid, hence replicating the waves exhibited by water with something else... though of course we had to know enough about waves to know we could replicate the waves we see on water with waves on another liquid.

If and when the hard problems of consciousness are solved, replication would entail ensuring that what matters about the natural system, i.e. whatever it is about our brains that makes consciousness possible and causes it to be, is present in the system which is to exhibit it. In that case, of course, replication would not be simulation, but the actual exhibited phenomenon of consciousness, which would emerge because the conditions which create it are present.
 
  • Likes: Demystifier
