Will any physical system that reproduces the functions of a human brain be conscious?

  • Thread starter Meatbot
  • #51
Shooting Star
Homework Helper
1,977
4
If snakes and cockroaches are conscious then IMO that drastically dilutes the definition of the word. We'd now have a hard time defining a difference between "life" and "consciousness", which makes it kind of useless.

A counter-question: What lifeforms are not conscious?

I have not so far asserted that cockroaches are conscious. Actually, we have not yet united in our efforts to arrive at some workable "definition" or understanding of the word consciousness. (Each of us has given views touching on the subject, some of them quite insightful.) Let that be the priority now. Maybe it will not be resolved, but all these postings won't have gone in vain. Also, discussing too many things at once is wasteful, which is why both snakes and human babies can be analyzed later.

I won't answer your counter-question, because it is actually the point which I said would inexorably be raised at some point in the discussion (in my last posts).

But let us not deviate into life and freewill just now (until absolutely unavoidable); otherwise there would be no end to this discussion.

We have more or less agreed that there are degrees of consciousness. Let’s start thinking about whether it can be quantified or at least categorised, so that we can draw a distinction between the more and the less conscious, or different types of consciousness.
 
  • #52
DaveC426913
Gold Member
20,070
3,360
I won't answer your counter-question, because it is actually the point which I said would inexorably be raised at some point in the discussion (in my last posts).
It helps us "bracket" the definition. If we all agree cockroaches aren't conscious then we have dramatically narrowed down our grey zone (some people - even some on this board - believe that atoms are conscious!). Now we know it is - as you say - simply a matter of degree.
 
  • #53
Shooting Star
Homework Helper
1,977
4
Fine. You lead with a sort of definition of consciousness. (It will be pretty subjective, but so will everybody else's.) You have already given one, but make it a bit more formal this time, and a bit harder to refute. Let the others add to or subtract from it. If the going gets too bad, we'll sadly have to abandon the efforts here.

In Physics we are doing quite well without really knowing what matter is. :uhh:
 
  • #54
DaveC426913
Gold Member
20,070
3,360
I don't know if it can be tested as easily as it can be defined so...

My definition of consciousness is the ability of a creature to "know" that it is, itself an individual, distinct from others of its kind.
 
  • #55
Shooting Star
Homework Helper
1,977
4
But suppose it is just one of a kind?
 
  • #57
Evo
Mentor
23,538
3,170
When you say "reproduces the function of a human brain",
we are not able to do that because we are not even aware of how the human brain functions. This is so far out there. A computer is limited to what has been programmed into it and is therefore limited. So I can't see a computer ever completely reproducing the entire functions of a human brain, when we don't have the understanding to program that into it.
 
  • #58
139
0
Equivalent to defining consciousness is defining how an organism can prove to you that it is conscious.

In other words, how can I prove to you that I am conscious?? I claim that I am conscious, but the next poster (who may actually be a forum-bot-machine) claims that no, I am NOT conscious, but instead HE is. How can we prove which one is the conscious organism and which one is the non-conscious machine?
 
  • #59
Shooting Star
Homework Helper
1,977
4
When you say "reproduces the function of a human brain",
we are not able to do that because we are not even aware of how the human brain functions. This is so far out there. A computer is limited to what has been programmed into it and is therefore limited. So I can't see a computer ever completely reproducing the entire functions of a human brain, when we don't have the understanding to program that into it.

Our understanding, or lack of it, of how a human brain works need not prevent us from recognizing another fellow conscious being. If you interact with that being long enough, and the impression you get is that you are interacting with another "human" or "humanlike being", isn't that what matters? We do not know how we survive, yet we do, as I've said somewhere else. (It works the other way too. I know that there are certain entities with whom I interact who are nothing but morons, but society forces me to acknowledge otherwise.) And a theory or technology being "so far out there" is not a rationale for rejecting it.

Also, you tacitly admit that as soon as we have a complete understanding of the human brain, and if we can incorporate the same functions into the computer, then by definition it will become conscious. I see no reason to object, even if you had meant something else.

Equivalent to defining consciousness is defining how an organism can prove to you that it is conscious.

In other words, how can I prove to you that I am conscious?? I claim that I am conscious, but the next poster (who may actually be a forum-bot-machine) claims that no, I am NOT conscious, but instead HE is. How can we prove which one is the conscious organism and which one is the non-conscious machine?

Since exactly when have you burdened the members of the society you live in with proving to you individually that each of them is conscious? It's you who has granted them that status, by extrapolating your own experiences and responses to theirs through objective observation.

If the forum-bot is able to fool you for the rest of your life, then for all practical purposes he'd be conscious to you. That's more or less the Turing test. I am not asserting that this is the last word, but we must recognize that there are degrees and categories of consciousness, as there are of humans and life forms. Consciousness is not a one-dimensional parameter.

Another appeal to at least arrive at a "loose" and workable definition of consciousness without too much dissension.
 
  • #60
1,685
1
Equivalent to defining consciousness is defining how an organism can prove to you that it is conscious.

In other words, how can I prove to you that I am conscious?? I claim that I am conscious, but the next poster (who may actually be a forum-bot-machine) claims that no, I am NOT conscious, but instead HE is. How can we prove which one is the conscious organism and which one is the non-conscious machine?
That's quite a challenge. I have been thinking on the subject for a considerable time - there is (as far as I know) no infallible "test" of consciousness that would allow us to prove whether another entity was indeed conscious. The most we can do is to say that the entity exhibits signs of consciousness (such as the self-recognition example above), but there is always the possibility that such signs are a false-positive.

The implication this has in a wider context is: we have no way of knowing for sure whether any other species (or indeed another member of our own species) is indeed conscious.
 
  • #61
1,685
1
My definition of consciousness is the ability of a creature to "know" that it is, itself an individual, distinct from others of its kind.
For a concise definition, I think this is good.

It fits with the fact that we cannot know for sure whether another entity is conscious - since the only way we have of knowing whether it knows that it is an individual is from the reports it provides to us - but such reports could of course be either false, fabricated or simply mistaken.
 
  • #62
140
1
My definition of consciousness is the ability of a creature to "know" that it is, itself an individual, distinct from others of its kind.

But oddly enough, I think that "knowing" you are not that other thing is usually an UNCONSCIOUS phenomenon. Do you ever think to yourself "I am not part of that table" or "that guy isn't me"? No. You just behave as though it's the case. If someone asks you, you'll then think about it and say "of course not" but until then it's simply assumed without any conscious thought. It seems there's a difference between perceiving and "intentional thinking". You perceive the other guy, but you don't have to think about what's going on consciously. If you can only perceive and not "think intentionally" are you still conscious?

If I say to you "How do you know that stapler is not part of you?", you might think about it and say that when you move your body it stays where it is, plus it doesn't look like what you remember yourself looking like. But the thing is that this analysis is running constantly in the background without you consciously being aware of it.

It seems like consciousness is just a realization of what your brain is already doing subconsciously - a window into your brain activity. It's like you are just along for the ride.

Plus, let's assume that everything WAS part of you, that you are the universe and there are no others. Wouldn't you have to change your definition?
 
Last edited:
  • #63
167
0
Please forgive me, I think this is an interesting discussion but, coming in late, I could only read the first two pages of this thread.. I hope I'm not repeating anything that was already said..

There are similar attacks on computationalism by Harnad who points out that computationalism is symbol manipulation and he comes up with an argument called “the Symbol Grounding problem”. Searle also attacks computationalism by noting that computations are not intrinsic to physics.

Personally, I have to agree with the anti-computationalists. The biggest problem right now with any computational view is simply defining “computation”. What is a computation? The most brilliant minds in philosophy have thus far failed to produce an adequate definition that shows how a computation can be intrinsic to anything physical in nature.

I used to have the same conceptual difficulty with Math, because a "number" is not intrinsic to anything physical, and a "proof" of a system that satisfies the field axioms of algebra is done (as far as I know) only symbolically with sets, starting with the infinity axiom to construct N (the counting numbers) with zero = the empty set. That is to say, there is no "international collection of golden beans" locked up in a scientific vault in France...

So for that matter, a "set" is not intrinsic to anything physical. But when you discuss the entropy or temperature of a physical system, the "physical system" is the undefined term of discourse and you are assuming numbers (symbols) or else there would be no physics to discuss.

Another thing I saw discussed was the notion of a "continuum" of consciousness.. But in terms of atoms it seems our human consciousness must be discrete.. but approximately continuous in the way we see a spectrum of colors between white and black.

My viewpoint on this issue ("the existence of numbers") has evolved a lot but I guess I sort of view "logic" and "consistency" as just an anthropological/engineered device like either a spoon or a "chinese abacus", but there's not any non-biblical evidence to suggest any deep truth - which would be a physical entity.

Well -- that is unless after going through the thought process I say to myself, wait a minute, "the world is real" - that seems to be the evidence.. So in that way it does seem like there is a physical source for all useful symbols including the mysterious [itex]\infty[/itex].

Oh well...
 
Last edited:
  • #64
baywax
Gold Member
2,156
1
Please forgive me, I think this is an interesting discussion but, coming in late, I could only read the first two pages of this thread.. I hope I'm not repeating anything that was already said..



I used to have the same conceptual difficulty with Math, because a "number" is not intrinsic to anything physical, and a "proof" of a system that satisfies the field axioms of algebra is done (as far as I know) only symbolically with sets, starting with the infinity axiom to construct N (the counting numbers) with zero = the empty set. That is to say, there is no "international collection of golden beans" locked up in a scientific vault in France...

So for that matter, a "set" is not intrinsic to anything physical. But when you discuss the entropy or temperature of a physical system, the "physical system" is the undefined term of discourse and you are assuming numbers (symbols) or else there would be no physics to discuss.

Another thing I saw discussed was the notion of a "continuum" of consciousness.. But in terms of atoms it seems our human consciousness must be discrete.. but approximately continuous in the way we see a spectrum of colors between white and black.

My viewpoint on this issue ("the existence of numbers") has evolved a lot but I guess I sort of view "logic" and "consistency" as just an anthropological/engineered device like either a spoon or a "chinese abacus", but there's not any non-biblical evidence to suggest any deep truth - which would be a physical entity.

Well -- that is unless after going through the thought process I say to myself, wait a minute, "the world is real" - that seems to be the evidence.. So in that way it does seem like there is a physical source for all useful symbols including the mysterious [itex]\infty[/itex].

Oh well...

The ideas about symbolism you brought up are good. Symbols are representative of actual phenomena we experience. They are the convenient packages we can carry anywhere to explain what we've done with the phenomena in question. I can't carry 6,000,000,000 bushels of corn to Armenia to show them what I want to trade for their oil... but I can represent those bushels of corn with symbols. Symbols are communication. The sound a voice makes is a symbol of a thought in the brain. The sound a computer makes is the symbol of a directive made by the computer's operator. To say that a computer has no bearing in physics is probably wrong. It would be like saying math, stop photography, VU meters and wind tunnels are not part of physics. Computers provide the means to carry symbolic communications between people. This makes them as integral to physics... and everything else... as the physics lab and the physics professor.
 
  • #65
Q_Goest
Science Advisor
Homework Helper
Gold Member
3,001
39
Hi rudinreader,
I used to have the same conceptual difficulty with Math, because a "number" is not intrinsic to anything physical, and a "proof" of a system that satisfies the field axioms of algebra is done (as far as I know) only symbolically with sets, starting with the infinity axiom to construct N (the counting numbers) with zero = the empty set. That is to say, there is no "international collection of golden beans" locked up in a scientific vault in France...
The issue isn’t specifically that computations are symbol manipulation. The problem is (at least) twofold.
1. Anything can be interpretable as a symbol, so any system can be seen to be manipulating symbols.
2. If anything is interpretable, the interpretation is dependent on what meaning we assign to that manipulation. The symbol manipulation can’t have an intrinsic meaning by itself.

So what makes a computer a computer? Any physical system can be found to be manipulating symbols. The mail system is manipulating letters and putting envelopes and other bits of paper into boxes. The entire mail system is an obvious example of a computer. Then there are some not so obvious examples of a computer, such as weather systems, airplane wings and rocks.

Turn it around and consider it this way: What is it that you can calculate about some physical system?

The answer is just about anything. I can calculate the velocity of water in a pail when I stir it. Or the stress in an aircraft structure as it’s flying. If there is something you can calculate about a physical system, then that physical system can be interpreted as being a computer which is also calculating those things.

Putnam points out that “Every ordinary open system is a realization of every abstract finite automaton.”

Meaning that, depending on how you interpret (perform the calculation on) the ordinary open system, it can be interpreted such that it is a realization of any finite automaton. If any given physical system is interpretable as any finite automaton, then either 1) everything is conscious in a very nasty way (everything is having every possible experience), or 2) Putnam’s conclusion holds: “In short, “functionalism” if it were correct, would imply behaviorism! If it is true that to possess given mental states is simply to possess a certain “functional organization,” then it is also true that to possess given mental states is simply to possess certain behavior dispositions!”
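
To make that trivial, trace-only reading concrete, here's a toy Python sketch of my own (not Putnam's actual formalism): given any recorded sequence of distinct states for an "open system" and any run of a target FSA, a mapping that "realizes" the FSA can always be constructed after the fact, because the labels carry no intrinsic meaning.

[code]
# Toy illustration (my own, hypothetical) of a Putnam-style post-hoc mapping.
def putnam_mapping(physical_trace, fsa_trace):
    """Map each observed physical state to the FSA state occupying the same
    position in time. Succeeds whenever the physical states never repeat,
    regardless of what the FSA is supposedly computing."""
    assert len(physical_trace) == len(fsa_trace)
    mapping = {}
    for phys, fsa_state in zip(physical_trace, fsa_trace):
        if phys in mapping and mapping[phys] != fsa_state:
            return None  # the same physical state would need two different labels
        mapping[phys] = fsa_state
    return mapping

# An "open system" wandering through arbitrary, never-repeating states...
rock_states = ["s1", "s2", "s3", "s4"]
# ...and the run of an FSA we claim it realizes (here, a 2-bit counter).
counter_run = ["00", "01", "10", "11"]
print(putnam_mapping(rock_states, counter_run))
# {'s1': '00', 's2': '01', 's3': '10', 's4': '11'}
[/code]

The same four rock states could just as well have been mapped onto the run of any other four-state automaton, which is the nasty part.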

Personally, I agree with Putnam, as I don't want to think that everything has every possible experience. I feel only one experience, not an infinite number. The conclusion is that functionalism is not a basis for computationalism, and therefore symbol manipulation is insufficient to instantiate thought.
 
  • #66
baywax
Gold Member
2,156
1
Hi rudinreader,

The issue isn’t specifically that computations are symbol manipulation. The problem is (at least) twofold.
1. Anything can be interpretable as a symbol, so any system can be seen to be manipulating symbols.
2. If anything is interpretable, the interpretation is dependent on what meaning we assign to that manipulation. The symbol manipulation can’t have an intrinsic meaning by itself.

So what makes a computer a computer? Any physical system can be found to be manipulating symbols. The mail system is manipulating letters and putting envelopes and other bits of paper into boxes. The entire mail system is an obvious example of a computer. Then there are some not so obvious examples of a computer, such as weather systems, airplane wings and rocks.

Turn it around and consider it this way: What is it that you can calculate about some physical system?

The answer is just about anything. I can calculate the velocity of water in a pail when I stir it. Or the stress in an aircraft structure as it’s flying. If there is something you can calculate about a physical system, then that physical system can be interpreted as being a computer which is also calculating those things.

Putnam points out that “Every ordinary open system is a realization of every abstract finite automaton.”

Meaning that, depending on how you interpret (perform the calculation on) the ordinary open system, it can be interpreted such that it is a realization of any finite automaton. If any given physical system is interpretable as any finite automaton, then either 1) everything is conscious in a very nasty way (everything is having every possible experience), or 2) Putnam’s conclusion holds: “In short, “functionalism” if it were correct, would imply behaviorism! If it is true that to possess given mental states is simply to possess a certain “functional organization,” then it is also true that to possess given mental states is simply to possess certain behavior dispositions!”

Personally, I agree with Putnam, as I don't want to think that everything has every possible experience. I feel only one experience, not an infinite number. The conclusion is that functionalism is not a basis for computationalism, and therefore symbol manipulation is insufficient to instantiate thought.

We can certainly say that everything provides the potential for experience. Whether everything experiences "experience" is probably another semantic discussion worth about 10 pages. I know that it is often said that "the rock experienced traumatic weathering" or "it became obvious that the stainless steel experienced stress". But this language is purely anthropocentric in nature. So, as far as we know, unless we hear otherwise from a rock or some stainless steel, only consciously-aware organisms are able to "experience" phenomena because "experience" is a specific description of how a set of neurons reacts to a specific stimulus.
 
  • #67
Q_Goest
Science Advisor
Homework Helper
Gold Member
3,001
39
Hi Baywax,
The arguments put forth by Searle, Putnam, Bishop and others have nothing to do with "the rock experienced traumatic weathering" or "it became obvious that the stainless steel experienced stress". Nobody is saying such ridiculous things. The arguments prove that physical states may be mapped (ie: Putnam mapping). A physical state in any physical system may be mapped to a physical state in any allegedly conscious computer.

If you're not familiar with any of this, please ask rather than post uninformed replies.
 
  • #68
baywax
Gold Member
2,156
1
Hi Baywax,
The arguments put forth by Searle, Putnam, Bishop and others have nothing to do with "the rock experienced traumatic weathering" or "it became obvious that the stainless steel experienced stress". Nobody is saying such ridiculous things. The arguments prove that physical states may be mapped (ie: Putnam mapping). A physical state in any physical system may be mapped to a physical state in any allegedly conscious computer.

If you're not familiar with any of this, please ask rather than post uninformed replies.

Ah, sorry Q_Goest.

I am only familiar with Searle's Chinese Room and discussions surrounding "understanding".

If a computer contains the information concerning the mapping of a physical state does that mean it is able to "experience" the physical state?

I'd say no because that's like saying a mirror is able to "experience" the reflections that take place on its surface.
 
  • #69
Q_Goest
Science Advisor
Homework Helper
Gold Member
3,001
39
Hi baywax.
This may be a difficult concept to grasp. I have no doubt it is.

Putnam mapping relies on the supervenience thesis. See Stanford Encyclopedia of Philosophy.
http://plato.stanford.edu/entries/supervenience/

I think for the purposes of this thread, however, I’ll simply quote Maudlin because he hits the nail on the head without undue rhetoric:
… two physical systems engaged in precisely the same physical activity through a time will support the same modes of consciousness (if any) through that time.
Ref: Maudlin, “Computation and Consciousness”

So if we have two systems, each of which can be shown to have precisely the same physical activity through some period of time, then those two systems will support the same conscious experience. In fact, they’ll share everything including memory. IFF the two systems mirror each other perfectly, then they should share everything, including experience.

Now the problem with symbols encroaches.

~

What is it to say that physical system A is in some specific state? How do we determine that some physical system is in some physical state A?

Functionalism is a concept that Putnam came up with. He states that essentially, if two physical things provide the same function, then those things can be called equivalent. My apologies to others reading for the brevity of this statement, but I hope it captures the essence.

How do we know if two things are equivalent or not? If I asked if 1+2 = 3 or if A + B = C, you might respond that the first is true, but A+B=C doesn’t make sense because they aren’t numbers.

So if all I needed to do to prove to you that A+B=C is to say that A=1, B=2 and C=3, then obviously you’d have to agree with me.

I obviously wouldn’t have to use A, B and C. I could use any symbol whatsoever. I could use x, y, and z. I could use the temperature at that point. Or the … wait… (Make up your own symbols in your head here. Any will do.)

~

Ok, now we agree that mathematics doesn’t rely on the symbols used. Nor does any computation. We could use the stress at some point in my hypothetical aircraft wing to represent a number. Or we could use the temperature at that point.
or the specific heat of the material
or the emissivity
or the strain
or the thermal conductivity
or the … you fill in the value.

Once you’ve filled in the value, change it, because it doesn’t matter. All that matters is
- that you used a reference for your symbol.
- And you decided what that symbol meant. (meaning is in your head)
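
To put that in concrete terms, here's a tiny sketch (the symbols and numbers below are picked arbitrarily by me): whether A + B = C is "computed correctly" depends entirely on the interpretation we supply, not on the marks themselves.

[code]
# Tiny illustration: A + B = C is true or false only relative to an
# interpretation we choose (the assignments below are arbitrary).
def check_sum(interpretation):
    """Does A + B = C hold under this symbol-to-number assignment?"""
    return interpretation["A"] + interpretation["B"] == interpretation["C"]

print(check_sum({"A": 1, "B": 2, "C": 3}))       # True: the 'intended' reading
print(check_sum({"A": 300, "B": 0.7, "C": 42}))  # False: same marks, different meaning
# Nothing in the marks "A", "B", "C" themselves fixes which reading is right.
[/code]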

~

Do you realize how much trouble you’re in yet? Yes… we’re in deep doo doo….. <oh poop.>

~

Searle is still active. Putnam however has retired. We’ll miss him. He’s no slouch. Putnam gave us functionalism and just before retirement, he wrote a book that took it away. Bishop is following on.

I talked to Bishop two months ago to better understand all this. Here are my notes:

From phone interview with Bishop, November 19, ’07:
Putnam and Bishop point out that one can find any given thing (an open physical system) in any given state such that the state varies over time from state 1 to 2 to 3, etc… It changes over time because of modal influences on any open system, those modal influences being the influence of the environment on the system. Similarly, however, a closed system can be seen to go through some set of states (e.g. a counting device which goes through 1, 2, 3, etc…). Therefore, the open system can perhaps be better represented by a simple counting device which goes through some arbitrary states 1, 2, 3, etc…, with no loss of generality. Note that each state, 1, 2, 3, is a combination of all microstates of the machine.

Secondly, we can look at an FSA as going through or being in various states A, B, C, etc… over some time period t0 to tn. Note here that the states A, B, C are also a combination of all microstates of the machine.

Now one needs only to ‘map’ (Putnam mapping) or compare the open system with the FSA by saying that states 1, 2, 3, etc… correspond to FSA states A, B, C, etc… This mapping is possible because the states 1, 2, 3 and corresponding states A, B, C are symbolic. There is no intrinsic quality to any specific state. (ie: Symbol Grounding Problem per Harnad)

We are now left with the conclusion that:
1 = A
2 = B
3 = C
Etc…

Since this is true, we are left to conclude that any phenomenon, such as consciousness, possessed by the FSA must be similarly possessed by the open (or closed) physical system.

There are (at least) 2 counters to this:
1. Counterfactual argument
2. CSA argument

The counterfactual argument holds that this mapping can only be accomplished after we already ‘know’ the states the FSA possesses. However, the FSA also has the ability to transition into different states depending on input, and if it did, the mapping would either be invalid or have to be changed to match the new states of the machine.

The CSA argument per Chalmers points out that if we consider the states of the individual switches, instead of lumping all these individual states into a single state as with an FSA, then the number of additional states quickly becomes extremely large and one cannot map a one-to-one correspondence, because of the need to define each of these individual states.
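
As a rough, back-of-the-envelope illustration of the scale the CSA argument trades on (the numbers here are mine, purely illustrative): with n independent binary substates there are 2^n global states, and a Putnam-style mapping would have to account for all of them and their transitions, not just relabel one short observed trace.

[code]
# Rough illustration (illustrative numbers only): tracking component substates
# individually makes the number of global states a mapping must cover blow up.
def csa_global_states(n_components, values_per_component=2):
    """Distinct global states of a combinatorial-state automaton with
    n components, each taking the given number of values."""
    return values_per_component ** n_components

for n in (4, 16, 64, 256):
    print(f"{n:>3} binary components -> {csa_global_states(n):.3e} global states")
# A short observed trace of a rock pins down only a handful of these.
[/code]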

I believe the CSA argument (see Chalmers, “Does a Rock Implement Every Finite-State Automaton?”) dismisses the Church-Turing thesis, since Chalmers is claiming that the CSA is somehow functionally different from the FSA and can therefore support phenomena not had by the FSA. Consider here the FSA = Universal Turing Machine.

~

1. The counterfactual argument just plain sucks. er… sorry for the Vanesh <French>

I also emailed Chalmers about this. Chalmers supports the counterfactual argument:
the idea is roughly that just duplicating I/O from the parts isn't enough to preserve mentality, etc -- you have to duplicate all *potential* I/O too.

That’s a good summary. As good as Maudlin’s summary of the supervenience thesis. Do you think it’s reasonable? Do you think that a machine has to instantiate all possible non-used physical states?
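
Here's a small, hypothetical sketch of what the counterfactual requirement amounts to (my own example, not Chalmers'): two machines can agree on every state they actually pass through for the inputs actually received, while disagreeing on inputs that were never presented. The counterfactualist says only the one with the right unused transitions realizes the computation.

[code]
# Hypothetical illustration: two finite-state machines agree on every input
# actually received, but differ on inputs that never arrived.
def run(transitions, start, inputs):
    """Return the sequence of states visited for the given input string."""
    state, trace = start, [start]
    for symbol in inputs:
        state = transitions[(state, symbol)]
        trace.append(state)
    return trace

# Machine A: a genuine parity checker over the alphabet {'0', '1'}.
A = {("even", "0"): "even", ("even", "1"): "odd",
     ("odd", "0"): "odd", ("odd", "1"): "even"}
# Machine B: agrees with A on '1' but handles the never-seen '0' differently.
B = {("even", "0"): "odd", ("even", "1"): "odd",
     ("odd", "0"): "even", ("odd", "1"): "even"}

actual_input = "111"  # the only input either machine ever received
print(run(A, "even", actual_input) == run(B, "even", actual_input))  # True
print(run(A, "even", "10") == run(B, "even", "10"))                  # False
[/code]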

The counter is that counterfactuals don’t count.
See Bishop.
“Counterfactuals cannot Count”
“Dancing with Pixies”
“Mechanical bodies; mythical minds”

See also Maudlin, “Computation and Consciousness”.

~

All this isn’t to say the argument is one-sided. Christley, for example, uses an argument very similar to your argument about the mirror. Christley states:

Furthermore, consider an animated display of a Turing Machine on a computer screen. Since, ex hypothesi, there is a one-to-one correspondence between the states of the display screen and the states of some Turing Machine, Searle and Putnam would apparently claim that the screen realizes the Turing Machine, if anything does. But it seems clear that we would say that the screen depicts a Turing Machine, but is not itself one. One reason why we would deny it computational status is because the state of the screen that corresponds, in the putative interpretation function, to a computational state A does not produce, as a causal effect, the screen state that corresponds to the successor computational state B, even though the Turing Machine depicted does make a transition from state A to state B. Computational states must be able to cause other computational states to come about.
Ref: Christley, ‘Why Everything Doesn’t Realize Every Computation’,

Christley doesn’t deny that the screen instantiates the physical state. He’s saying, in effect, that the screen can’t support counterfactuals. However, I have to disagree that counterfactuals are necessary for any consciousness. This requires spooky, nonlocal causal actions.

~

It took me about a year to understand this and come to some agreement with any of it. I think we need to try and understand what’s being said before we cast judgment. I can’t, for the life of me, see anyone grasping all the nuances here reading this over once. It’s difficult to grasp, as it’s a very abstract argument that needs some basis in cold hard physical law to become clear. So before saying that x doesn’t mean y, or creating any argument based on what you read here, I only ask this: whatever you don’t understand… ask. No one is going to understand the arguments provided by people like Chalmers, Christley, Putnam, Bishop and others the first time through.
 
  • #70
baywax
Gold Member
2,156
1
No one is going to understand the arguments provided by people like Chalmers, Christley, Putnam, Bishop and others the first time through.

Now I've read your incredible download of information, yet I've only displayed a small portion that agrees with my sentiment, for now. Are your computations and information still contained within my post, even though I've erased most of it? With my presence here, at this moment the information is somewhat present... in a jumbled state! Once I leave and you only have what I've posted here... these pixels are all the information that will remain, in this post.

This has really turned me on my ear and I have to read your references now.

Thank you Q_Goest for your work on this!
 
  • #71
From a pure computation perspective, expectation states may be pre-computed for the experience/encounter, so without the whole map, subsets of activity cannot approach that level of awareness, especially when additional effects are missed!

--

To a degree I base my interpretation of consciousness on the being-not-blind-to-it paradigm. To the degree some automaton may become not-blind-to what its functions are, or how it self-manages, that automaton can reputably climb up the little-known ladder of consciousness.
 
Last edited:
  • #72
From my last post I wanted to elucidate this not-blind-to phenomenon, which is essentially a knowledge-based awareness. Remember, knowledge-based action is essentially the ability to demonstrate that your information is substantially correct, minus the Gettier defect.

Knowledge, some philosophers argue, is a purely conscious human activity, leading them to conclude that unconscious activity cannot be knowledge-based. Their argument is that one cannot know if one is unaware. However, I believe that with some Test(x) = X, and with an f(x) and g(x) such that F(f(x), g(x)) = y and Test(y) = X, f(x) could be known to be correct along with g(x). An automated function which shows f(x) to be correct in its domain, F(f(), j), and can act on that information, should be called knowledge-in-itself.

As the great Chalmers has passed on his feedback mechanisms, from simple human-made thermostats to an evolution as knowledge-in-itself, of course with the appropriate computations, we find ourselves with a mechanism underlying consciousness which implies reliability and surety. In other words, consciousness cannot be compounded without a basis for reliability.

As such consciousness is based on known information flow.
 
  • #73
Q_Goest
Science Advisor
Homework Helper
Gold Member
3,001
39
Hi basePARTICLE,
From my last post I wanted to elucidate this not-blind-to phenomenon, which is essentially a knowledge-based awareness. Remember, knowledge-based action is essentially the ability to demonstrate that your information is substantially correct, minus the Gettier defect.

Knowledge, some philosophers argue, is a purely conscious human activity, leading them to conclude that unconscious activity cannot be knowledge-based. Their argument is that one cannot know if one is unaware. However, I believe that with some Test(x) = X, and with an f(x) and g(x) such that F(f(x), g(x)) = y and Test(y) = X, f(x) could be known to be correct along with g(x). An automated function which shows f(x) to be correct in its domain, F(f(), j), and can act on that information, should be called knowledge-in-itself.

As the great Chalmers has passed on his feedback mechanisms, from simple human-made thermostats to an evolution as knowledge-in-itself, of course with the appropriate computations, we find ourselves with a mechanism underlying consciousness which implies reliability and surety. In other words, consciousness cannot be compounded without a basis for reliability.

As such consciousness is based on known information flow.

I don't know what you're getting at. If I Google http://www.google.com/search?hl=en&q="knowledge+based+awareness"++chalmers", I get 4 hits that have nothing to do with this topic.

What I do understand is:
… with the appropriate computations, we find ourselves with a mechanism underlying consciousness which implies reliability and surety. In other words, consciousness cannot be compounded without a basis for reliability.

If by that you mean Chalmers supports the viewpoint that a conscious machine must reliably support counterfactuals, then I’d agree. He goes through his viewpoint in detail in his paper, “Does a Rock Implement Every Finite State Automaton?”.

It’s this viewpoint I find lacking and indefensible in view of those papers by Maudlin, Putnam and Bishop. Like Christley, Endicott and Copeland, Chalmers is essentially attempting to define “computation” in a way which excludes systems that can’t support counterfactuals.

The problem all these philosophers come up against seems fairly simple to me, however, and that is that nature doesn’t give a damn about our definitions. You can define all day and you won’t find anything intrinsic in your definitions that correlates a system to your definition. Searle points this out in considerable detail also, as do Harnad and others. Reliability in the sense of a computational mechanism being able to support counterfactuals will be a long-dismissed notion 5 or 10 years from now.
 
Last edited by a moderator:
