Hi MF.
The phrase was "undeducible truths", not "undecidable". If I substitute the correct term into your post, it seems the point is still being missed. We discussed this before here:

MF said: I think the term "undecidable truths" is misleading. I believe you introduced this term into the discussion? I think a better and more accurate term would be "meaningless questions". Why? Because "undecidable truths" implies …
Q_Goest said: thus there are facts and properties which are not deducible ("truths" which are not deducible) from a third person perspective. Is that correct MF?

MF responded: Correct.
Undeducible truths are facts about properties which we have no way of deducing. We might suggest that we can't deduce, simply by examining the computer, whether or not it is experiencing the color red, for example. We might say the computer is having a subjective experience, and we may describe this experience as a phenomenon had by the computer, yet believe that the facts about this phenomenon can't be deduced by examining the computer itself. Whether this is true or not isn't what needs debating. What needs to be debated is this: given the assumption that the computer's experience can't be deduced by observing or measuring the device, what is our conclusion?
I'd suggest there are potential conclusions one can reach by making specific assumptions about computationalism. For example, one can assume the phenomenon supervenes on the actions of a computer's parts. One can also assume that only classical physical laws are necessary to understand those actions.
One might also assume, however, that what we call subjective experience is unlike other phenomena. Classical phenomena have properties, and there are facts about those properties which we can measure and calculate. Subjective experience, on this assumption, is unlike classical phenomena and may not be something we can measure in any way. Most classical phenomena, such as weather patterns, the orbits of planets, or the function of a car engine, can be described and understood by examining the interactions of the parts of the system that creates the phenomenon. When it comes to consciousness or subjective experience, however, we might find ourselves doubting that these phenomena can be described or understood (i.e., deduced) from the facts about the actions of the computational machine which allegedly experiences them.
There are other views if we make other assumptions:
1. A computationalist who accepts strong emergence per Chalmers might believe that we can deduce these facts about subjective experience from new physical laws which are essentially organizational principles.
2. For someone who believes in quantum consciousness, these facts and properties might become deducible as we discover new physical dimensions or levels, or new quantum interactions.
More on this momentarily.
MF said: It does not follow, however, that the physical world is completely describable using mathematics.

I think we might benefit by focusing on the fact that computationalism is predicated only on classical physics. It does not depend on the HUP or anything else. Computationalism could have been theorized hundreds or even thousands of years ago; yet, as I'm sure you know, it's a new concept which has arisen only in the past half century. Computationalism doesn't require microchips or anything electronic, though. It doesn't need the HUP, electricity, Gödel's theorem, or anything like that. It simply says that the mind is a mathematical algorithm of some sort, which is completely deterministic since it can in fact be described using math. There is no reason in principle that the Egyptians couldn't have come up with the concept of computationalism and created a cognizant computer made from water buckets, ropes, and wooden scaffolding thousands of years ago.
MF said: All I am saying is that questions of the type "what is it like to be a bat?" (which entail projecting one conscious perspective inside another) are also meaningless.

Yes, we may say that as a human it is impossible to have the same subjective experiences as a bat, but that's irrelevant. What's important is whether or not we can tell if a bat is actually experiencing anything at all. Does a bat have subjective experiences? What is it that can have subjective experiences? Can a computer have them? Does the laptop computer I have right now experience anything? Does my son's video game, in which ogres from pod land come to destroy Earth, experience anything? Do the ogres experience dying as my son drills them with an assault rifle of unimaginable power? I love it when those ogres burst like popcorn! lol And if a computer can have subjective experience only if it's performing the correct algorithms, then can any equivalent system have them, such as a computer made out of water buckets and valves? (Multiple realizability says it must!)
I'll grant you that humans can't experience what a bat does, but that isn't important. The important question is, "Can we know what system is having an experience?" More on this in a moment.
MF said: Conscious perspective is unitary; there is simply no way that agent A can get exactly the same perspective on agent B's conscious perceptions that agent B gets on those same conscious perceptions.

So far I don't disagree. All this says is that I can't become someone else and experience exactly what they do, just like the bat idea above.
MF said: NOT because they represent an "undecidable truth", but because the question is meaningless.

The question we need to ask is not whether we can experience someone else's subjective experiences, but whether or not we can know if a system is having an experience at all.
MF said: Now, it also seems to be your premise (as well as Tournesol's) that we can fully describe the physical world using mathematics. But that's a premise. Can you describe your first person perspective of consciousness using mathematics? If yes, how? If no, doesn't that suggest your premise may be false?

Would you agree that computationalism assumes that the proper logical algorithm will create the phenomenon of subjective experience? One can call this a logical algorithm or the execution of a series of mathematical steps. All states of a computer can be mathematically represented; in fact, all states of any computational device of any type can be mathematically represented. That is the point of weak emergence that I believe is so easily missed. Weak emergence indicates that any computational machine, running any program, can be mathematically represented, because it is all mathematical by definition. It is a deterministic computation. So everything a computer does is not only measurable, but the measurement of any state of any computer provides us with sufficient information to know exactly everything it is going to do in the future, provided we also know all input as a function of time.
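The claim that a deterministic machine's measurable state, plus its future inputs, fixes everything it will ever do can be sketched with a toy example (the machine and its transition table here are invented for illustration, not any particular computer):

```python
# A deterministic finite-state machine: its entire future trajectory is
# fixed by its current state plus its input stream.

def step(state, symbol):
    # Transition table: a 2-state machine that toggles on '1', holds on '0'.
    table = {("even", "0"): "even", ("even", "1"): "odd",
             ("odd", "0"): "odd", ("odd", "1"): "even"}
    return table[(state, symbol)]

def run(state, inputs):
    """Replay the machine: same state + same inputs => same trajectory."""
    trajectory = [state]
    for sym in inputs:
        state = step(state, sym)
        trajectory.append(state)
    return trajectory

# Two independent runs (two substrates, two observers) agree exactly:
a = run("even", "1101")
b = run("even", "1101")
assert a == b
print(a)  # ['even', 'odd', 'even', 'even', 'odd']
```

Nothing about the machine's future is left open once its state and inputs are known, which is all the weak-emergence point above requires.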
MF said: Argumentum ad Verecundiam (argument from authority, i.e. "that cannot be correct, because the experts say so") does not amount to a "hill of beans" in philosophical debate. Simply quoting names such as Searle, Putnam, Chalmers etc. does not an argument make (though I have noticed that many people often feel strangely comforted if they think they can somehow link their personal philosophical position to popular names).

<argh> You found me out. I'm just blowing smoke, right? <sigh>
MF said: Even if the HUP did not exist as a limitation to epistemology, there is no way any agent within a finite universe could know all the details of that universe. Let's say the universe comprises N particles. To record the positions and momentum of each of those particles in 3 dimensions at one moment in time would require 6 real numbers, that's 6*N real numbers for a system of N particles. Leaving aside the problem that a real number might not be fully specifiable with a finite number of digits, where/how do you store those 6*N real numbers if you only have N particles in your universe? It cannot be done.

Q said: I think this statement is in error but I'll disregard as I certainly understand what you're trying to get at.

MF said: Imho there is little point in claiming my statement is in error if you are not prepared to explain, and defend, why you think this.

Perhaps selfAdjoint can clarify for us, but I believe that to calculate N particles using a classical computer, recording the position and momentum of each, the cost scales as 2 raised to the power N. I'll admit I'm unsure of that, and honestly it doesn't matter for this discussion, but just out of curiosity I'd like to find out.
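For what it's worth, MF's counting point and the 2^N question seem to be two different calculations; a back-of-envelope sketch (particle count and bit width are arbitrary illustrative numbers):

```python
# MF's point: one classical snapshot of N particles needs 6 real numbers
# each (3 position + 3 momentum components), i.e. 6*N reals.

def classical_record_size(n_particles, bits_per_real=64):
    """Bits needed to store one truncated classical snapshot."""
    return 6 * n_particles * bits_per_real

N = 10**3
print(classical_record_size(N))  # 384000 bits for a mere 1000 particles

# The 2**N figure belongs to a different question: a system of N two-state
# elements has 2**N distinct joint configurations, so *enumerating* the
# state space (rather than recording one state) grows exponentially.
print(2**10)  # 1024 configurations for just 10 two-state elements
```

So recording one state grows linearly in N, while enumerating all possible states grows exponentially; the two claims aren't in conflict.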
MF said: Thus, with respect, the fact that you cannot accept the notion that some questions are meaningless is your problem, not mine.

<argh, again> I seem to have many problems, don't I? <sigh twice>
I wouldn't be discussing these things with you, MF, if I didn't feel you had something valuable to add to the discussion. I appreciate your posts since they come from an intelligent perspective, and that's what's needed to challenge and progress my understanding of consciousness. Personally, I also enjoy reading the folks who do this for a living, so when I post references to papers, or the papers themselves, it is in the hope that the arguments they have provided might already be read and understood. I'd like to better understand the papers, as well as have others understand them, because we shouldn't be sitting here debating between ourselves with only our own ideas. That would be ignoring the shoulders on which we can find a foothold. On the other hand, it could be as you say, because I enjoy blowing smoke. <haha>
~
Getting back to the OP at hand regarding emergence, I'd like to enumerate what I see as our options given various assumptions. If our minds are mathematical processes, as computationalism suggests, then we have one of three choices to make depending on what assumptions we accept. There may be more choices which depend on other assumptions, but the roads I see we can go down include the following:
1a. One set of assumptions leads to panpsychism (all matter has consciousness).
1b. We might also conclude panpsychism plus any other phenomena we wish to add, such as religious ones.
We get to this conclusion by making the assumptions I've described earlier: first, that a computer can become conscious; second, that there are measurable properties or facts that a computer has. This is exactly what Bedau talks about as he describes the Game of Life (GoL). These measurable properties, or facts as I'll call them, include the color of a given pixel. Obviously, the color is measurable, and its location is also measurable. The game also creates pictures with these pixels, one of which is termed a "glider". This is simply the recognition that the shape is made up of various colored pixels in a specific arrangement. We can deduce the rules, we can create a simulation of the GoL, we can do many things.
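As a sketch of how little machinery the GoL needs (a generic implementation, not Bedau's own code), the rules fit in a few lines, and the "glider" fact is fully deducible from them:

```python
# Conway's Game of Life on a set of live cells: every higher-level fact
# (the glider's shape, its drift) is deducible from these pixel-level rules.

def neighbors(cell):
    x, y = cell
    return {(x + dx, y + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)}

def life_step(live):
    """One GoL generation: birth on 3 neighbors, survival on 2 or 3."""
    counts = {}
    for cell in live:
        for n in neighbors(cell):
            counts[n] = counts.get(n, 0) + 1
    return {c for c, k in counts.items()
            if k == 3 or (k == 2 and c in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = life_step(state)

# After 4 generations the glider reappears shifted one cell diagonally:
assert state == {(x + 1, y + 1) for (x, y) in glider}
```

The final assertion is the kind of "measurable fact" at issue: the glider's period and displacement follow from the low-level rules alone.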
The GoL is not considered a conscious phenomenon. It's a very simple game; my computer right now is doing things many orders of magnitude more complex, so it boggles my mind to hear someone suggest the GoL has some kind of 'experience'. This observation is a good lead-in, however, to what could then lead to panpsychism, the concept that all matter is cognizant to some degree.
If we observe any computer, we might suggest that we can know everything about all its measurable properties and facts. We can know every state of every switch, and we can in fact even become that computer ourselves simply by performing the same mathematical functions that the computer does. This isn't very practical, of course, because we'd need to live to be billions of years old to do what a computer does in just a few seconds. However, there is no reason in principle that we couldn't do everything a computer does. We'd need a way to represent the state of the computer, but pen and paper would do just fine. We can't say pen and paper are insufficient to create consciousness, because that would proclaim there is something special and unique about the mechanism or substrate on which a computation is performed. Such a proclamation would run counter to the fundamental premise of computationalism, which states that it doesn't matter whether we use neurons, microchips, or buckets filled with water to do the computation: if they all do the same computation, they all have the same experience.
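The substrate-independence premise can be illustrated with a trivial computation realized in two different ways (both representations are invented for illustration):

```python
# Multiple realizability in miniature: the "same computation" carried out
# on two different representations of number.

def add_binary(a, b):
    # Substrate 1: ordinary machine integers.
    return a + b

def add_unary(a, b):
    # Substrate 2: tally marks on "paper" -- strings of '|'.
    return len("|" * a + "|" * b)

# Computationalism's premise: only the computation matters, so both
# substrates must agree on every input.
for a in range(10):
    for b in range(10):
        assert add_binary(a, b) == add_unary(a, b)
```

If the premise holds for minds as it does for addition, pen-and-paper tallies are as good a substrate as microchips, which is exactly the bullet the argument above says computationalism must bite.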
Moving on: if we observe the computer, we can know all the measurable facts and properties because, by definition, they are measurable. We don't need anything more to explain what the computer is doing. What the computer is doing is performing an algorithm, a series of mathematical manipulations, or a series of symbolic manipulations - take your pick. All of that is knowable by definition.
What we don't know is what the computer might be experiencing! If we don't know what the computer is experiencing, if there is no way whatsoever to deduce this information, then these facts about the computer are undeducible. If this is true, and we can't know whether the computer is experiencing anything at all, then we must be prepared to accept that phenomena such as subjective experience may be occurring that we aren't aware of, up to and including panpsychism.
Searle and Putnam have put forth alternate arguments which lead us to the same conclusion. Searle writes:
On the standard definition […] of computationalism it is hard to see how to avoid the following results: 1. For any object there is some description of that object such that under that description the object is a digital computer. 2. For any program and for any sufficiently complex object, there is some description of the object under which it is implementing the program. Thus for example the wall behind my back is right now implementing the Wordstar program, because there is some pattern of molecule movements that is isomorphic with the formal structure of Wordstar. But if the wall is implementing Wordstar then if it is a big enough wall it is implementing any program, including any program implemented in the brain.
Putnam wrote something very similar. Chalmers is discussing him here in '96:
In an appendix to his book Representation and Reality (Putnam 1988, pp. 120-125), Hilary Putnam argues for a conclusion that would destroy [computationalism]. Specifically, he claims that every ordinary open system realizes every abstract finite automaton. He puts this forward as a theorem, and offers a detailed proof. If this is right, a simple system such as a rock implements any automaton one might imagine. Together with the thesis of computational sufficiency, this would imply that a rock has a mind, and possesses many properties characteristic of human mentality. If Putnam's result is correct, then, we must either embrace an extreme form of panpsychism or reject the principle on which the hopes of artificial intelligence rest.
About these arguments, Scheutz writes:
There are, however, strong arguments against this endeavor of explaining mind in terms of computation: some disagree with established notions of computation and argue that these notions will be of no help for CCM because they even fail to capture essential aspects of computation (e.g., intentionality, see Smith, 1996). Others, accepting them, show that some of these notions, such as Turing-computability, are too "course-grained" to be suitable for cognitive explanations (e.g., Putnam, 1998, who proves that every ordinary open system "implements" every finite state machine without input and output, or Searle, 1992, who polemicizes that even ordinary walls can be interpreted as "implementing" the Wordstar program). The former line of attack concentrates on our misunderstanding of what everyday computation is all about, whereas the latter debunks our understanding of how an abstract computation can be realized in the physical - how computation is "implemented".
I believe both Putnam's and Searle's arguments rest on the assumptions outlined above, specifically that there is nothing one can measure that will tell us whether a rock or a wall, for example, is cognizant. More importantly, their arguments rest on the fact that we haven't properly defined computationalism. If we can't detect consciousness, if we can't deduce whether something is conscious from what we can measure, then we have a serious problem, and such things as panpsychism, as well as any other phenomenon such as supernatural deities, are possible outcomes.
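The Putnam/Searle worry can be caricatured in a few lines (my own toy construction, not Putnam's actual proof): if "implementation" only requires some mapping from a system's successive physical states onto an automaton's run, then any system that never repeats a state "implements" any finite run:

```python
# Toy rendering of the triviality argument: map the i-th physical state of
# an arbitrary system (a "rock") onto the i-th state of any automaton run.

def implementation_map(physical_states, automaton_states):
    """Pair the i-th physical state with the i-th automaton state."""
    assert len(set(physical_states)) == len(physical_states), \
        "construction needs distinct physical states"
    return dict(zip(physical_states, automaton_states))

# The "rock": an arbitrary sequence of distinct microstates over time.
rock = ["r0", "r1", "r2", "r3", "r4"]

# Any automaton run we like -- here, five steps of some invented machine.
run = ["start", "A", "B", "A", "halt"]

mapping = implementation_map(rock, run)

# Under this mapping the rock's state history "is" the automaton's run:
assert [mapping[s] for s in rock] == run
```

The mapping exists no matter which run we pick, which is why a definition of implementation this weak makes the wall "run Wordstar"; any fix has to constrain what counts as a legitimate mapping.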
2. Another set of assumptions leads to the mind being strongly emergent at the classical level, requiring new physical laws which are 'organizational', so to speak. This is Chalmers's position. I won't elaborate on this one; I've provided a link to the paper so we can discuss specifics from it. In the end, I think this reasoning fails for two primary reasons. The first is that strong emergence hasn't been seriously considered by science at any level higher than the mesoscopic. The second is that the arguments put forth by Searle and Putnam should still hold unless there is a better definition of computationalism which differentiates computational structures.
3. Another set of assumptions leads to computationalism being false. There can be various reasons for this, one of which is that strong emergence has only been seriously entertained by science at the molecular level, and computers don't operate at that level.
I'd elaborate on these last two, but it seems this post is already longer than I'd expected.