Solving the Other Minds Problem

  • Thread starter lax1113
In summary, the argument holds that the only accounts of the mind with any chance of solving the other minds problem are those that do not take its subjective, first-person nature seriously, while the accounts that do take it seriously cannot solve the problem. Taking the first-person nature of the mind seriously means acknowledging that there is something irreducible about the experience of consciousness that cannot be fully captured by third-person description. This makes attributing mental states to others a genuine challenge, since each individual's experience is private and cannot be directly inspected.
  • #1
lax1113
Hey guys,
So for my philosophy class we have a writing that is related to the quote --
"The only accounts of the mind that have any chance of solving the other minds problem don't take the subjective, 'first person' nature of the mind seriously, and the accounts that do take it seriously can't solve the other minds problem"
I have to argue for or against this claim, with examples. At the moment I am having a bit of trouble actually explaining this concept. I understand the idea that it is logical to think, for example, that if I hit my thumb with a hammer, I wince in pain; if someone else hits their thumb with a hammer, they also wince in pain, so it is logical to believe that they too are conscious (have mental states, etc.).
I feel like I know what this is saying but I just don't understand completely what it means by take the first person nature of the mind seriously. Can anyone shed a little light on this?
 
  • #2
lax1113 said:
if I hit my thumb with a hammer, I wince in pain; if someone else hits their thumb with a hammer they also wince in pain, so it is logical to believe that they too are conscious (have mental states etc...)

Why?
 
  • #3
I'm with joedawg on this one. Why?
 
  • #4
Well, I believe that they too have conscious mind states because I believe that my being in pain is a mind state of my own; so if my wincing and pain constitute a mind state, wouldn't it be logical for me to believe that another person who is wincing and in pain shares a similar mind state?

I'm not sure I understand what is meant by your "why's". I understand that your question was intended to make me think about my statement more deeply, but I am concerned that I will end up thinking about it in a way that will not help me.

Do you mean: why do I think it is logical to attribute mind states to someone else when they show similar behavior?
 
  • #5
lax1113 said:
I feel like I know what this is saying but I just don't understand completely what it means by take the first person nature of the mind seriously. Can anyone shed a little light on this?

It is saying that a) there is "something that it is like" to be you, right now, being aware of the world and your thoughts. And b) there is no hope of objectively modelling this state with a heap of scientific detail, no matter how complicated we get.

Consciousness is ineffable, etc. It has to be experienced, and the experience cannot be described in third person terms.

I would say this is true in the sense that third person descriptions of anything are not designed to tell you what it is like to be that thing. Can you describe a glass of water or Jupiter in terms that would allow me to put myself in the shoes of those objects?

Instead, modelling - our method of explaining "objectively" - is based on generalising the "mechanisms" or causality of systems. Arriving at broad principles rather than making specific descriptions.

So in this sense, third person descriptions don't take first person descriptions "seriously" as third person descriptions are developed by the discarding of such specific detail.

First person descriptions by contrast would want to capture all the fine detail of "what it is like".

As is so often said, the map is not the territory. The map is your generalised model of the terrain, which gets all the more useful the more unnecessary detail can be left out. The territory is the thing itself, which has the detail. To attempt to recreate the territory would be a simulation rather than a model.
 
  • #6
So does the person you see in the mirror's reflection also feel pain?
 
  • #7
apeiron said:
It is saying that a) there is "something that it is like" to be you, right now, being aware of the world and your thoughts. And b) there is no hope of objectively modelling this state with a heap of scientific detail, no matter how complicated we get.

[...]

Apeiron,
Thank you very much for this explanation of what was meant by first/third person. While I still have to rethink the arguments I will be presenting, I understand much better now what is meant by taking it "seriously". It would appear to me that if I explain my mind states and my behavior through my own eyes, there is no possible way I can know for sure that another person has any of the same thoughts, feelings, hopes, desires... because my mind states are so experiential.
 
  • #8
You could always do a Vulcan mind meld, couldn't you?
 
  • #9
magpies said:
So does the person you see in the mirror's reflection also feel pain?

magpies,
I apologize in advance if some of the things I say seem a bit foolish or obvious. I am not a philosophy major; I just took this class because it seemed extraordinarily interesting to me (I am an engineering major, so more open-ended classes are very hard to come by).

With that being said, I would obviously answer this question with no: the person I see in the mirror's reflection is not actually a person, just an object reflecting or emitting light. I cannot logically say that the person I "see" in the mirror has feelings, because it is not a person. I guess the point you are making here is that the other people around me could just as easily be as empty as the reflection in the mirror?
 
  • #10
magpies said:
You could always do a Vulcan mind meld, couldn't you?

haha, that would be a solution wouldn't it. Unfortunately last time I was at the local department store they were not in stock.
 
  • #11
No. No point, I just say stuff at random most of the time. It's kind of an art you learn after doing enough philosophical debate.
 
  • #12
Very well, magpies. Whether it was intentional or unintentional is irrelevant, I suppose. It made me think about it much more, and I think I have come a bit closer to understanding. I think the question is saying that an account of the mind such as type physicalism, which doesn't worry so much about the experiential parts of mind states, can easily answer the other minds problem. It can say that a certain mind state, X, is a physical entity; therefore if someone else has that physical entity, they also have the mind state X. Type physicalism seems to me to be something that doesn't take the first person very seriously, because it talks about mind states through science and physics rather than literally saying that the mind state of being in pain is a "hurty" sensation.
 
  • #13
I believe there are other minds because I am able to be surprised/amused/disappointed by what other people say and do.

That is, the phrase "another mind" is just another term for the existence of "interesting ideas and actions."
 
  • #14
What about the one universal mind? What is his opinion on this issue?
 
  • #15
glengarry said:
I believe there are other minds because I am able to be surprised/amused/disappointed by what other people say and do.

That is, the phrase "another mind" is just another term for the existence of "interesting ideas and actions."

I think this idea could be refuted by the observation that you can just as easily be surprised, amused, or disappointed by things that YOU do. I can be surprised by something I did myself, so what difference does it make that we can be surprised by what other people do?

I am not trying to say that you are wrong, I am just trying to go a little deeper in the argument.
 
  • #16
Here is the main crux of the mind problem... While you may come to the conclusion that someone else does not have a mind, you will eventually have to tell them this fact. After having told the other person, who possibly has a shotgun at home, that they have no mind, you'll have to find a way to smooth things over. So it's basically a lose-lose to accuse someone of not having a mind; however, the less likely they are to have a shotgun, the better your odds of getting away with it. If swamp slugs had shotguns and could use them, we would most likely welcome them as members of the human race.
 
  • #17
lax1113 said:
It would appear to me that if I explain my mind states and my behavior through my own eyes, there is no possible way that I can know for sure that another person has any of the same thoughts, feelings, hopes, desires... because my mind states are so experiential.

...indeed, because descriptions would have to be in terms of something else "more objective" (like electromagnetic wavelength).

So if I see this colour as red, how do I know you see red in the same experiential way? You can find lots of arguments along these lines - http://en.wikipedia.org/wiki/Mary's_room
 
  • #18
What is this "red" you speak of?
 
  • #19
Thank you Apeiron. I feel like I am surely on the right track now. I just really had no clue what the question was asking me (this is like the 50th time this has happened this year haha). I am not very used to philosophy questions so they have been catching me off guard. I find it extremely interesting though.

Once again, thank you very much Apeiron; I am sure I will see you posting in the future on my philosophy inquiries. (And also, the inverted qualia problem that you brought up about the colors is one of my favorite parts of my class right now. In 3rd grade I argued with my teacher that she couldn't know for sure that she saw the same color as me; finally, 10 years later, I figured out there is actually a word for that!)
 
  • #20
It is a scientific question too. Here is a column I wrote some years back on tetrachromats.

“Which is your favourite colour, Dad?” cry the kids, waving a pack of felt tips in my face. I point to the turquoise one. “Oh, green,” they chorus. “No, blue,” I correct. “Naaahh! That’s green!”, they shout, giggling at Dad’s dismal eyesight.

Well it sure looked blue to me. Turquoise is one of those hues that exist on a perceptual cusp. A touch more blue or green in the mix and you are driven towards quite different subjective judgements. And the puzzle is not only whether there can be slight differences in the tuning of an individual’s colour pathways, but whether our brains even construct the same general experiences. Is my blue the same as your blue? Or would I call what’s in your head “orange”, or even feel it was an utterly alien hue?

Colour is unlike other sensations because it bears so little direct relationship to the physical stimulus. Tastes and smells are tied to the binding properties of molecules. Sounds vary smoothly with their frequency. Even other visual properties like shape, depth, motion and luminance, seem directly mappable – the cortex patterns have at least some topographical resemblance to the raw stimulus patterns. But redness and blueness are utterly arbitrary mental constructs. If colours looked like the wavelengths they represent, then really they ought to appear as some kind of surface vibration or texture. A patch of red would have a gentle long-wave fuzziness about it, while blue should give a more intense visual buzz. But no. Instead, the brain somehow turns the gray world Day-Glo.

Well, a murky philosophical issue just got murkier. What would happen if your eyes had four cone pigments instead of three?

Single cone vision – monochromacy – gives us 200 shades of gray. We can distinguish that many luminance levels. Dichromacy – employing a long wave and short wave cone – gives us a blue-yellow spectrum that swells our visual experience geometrically to about 10,000 distinguishable shades. Trichromacy, which adds a third red-green opponent channel, multiplies the total number of shades to several million.

Recently vision researchers have proved that many women – perhaps 1-in-100 – are in fact tetrachromats. Their retinas have four different cone pigments. Do the maths and you will see they should experience hundreds of millions of hues. Their colour experience ought not be just a little richer, but fantastically richer.
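The "do the maths" step above can be sketched numerically. Treating each added opponent channel as multiplying roughly 200 distinguishable levels is only a crude assumption for illustration; the column's own figures (e.g. ~10,000 for dichromacy) are empirical estimates that don't follow one exact formula.

```python
# A crude numerical sketch of the geometric growth described above.
# Assumption: each independent visual channel multiplies the number of
# distinguishable shades by ~200 (the luminance figure quoted for
# monochromacy). This is an upper-bound illustration, not a vision model.

LEVELS_PER_CHANNEL = 200

def max_shades(n_channels: int) -> int:
    """Rough upper bound on distinguishable shades for n cone channels."""
    return LEVELS_PER_CHANNEL ** n_channels

for n, name in enumerate(
    ["monochromacy", "dichromacy", "trichromacy", "tetrachromacy"], start=1
):
    print(f"{name}: up to {max_shades(n):,} shades")
```

On this crude multiplier, trichromacy lands at several million and tetrachromacy in the billions, which is the order-of-magnitude jump the column is pointing at.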

Genetic studies have shown that because our red and green photopigments are carried on the X chromosome, and because these pigments are also highly variable, a woman can often inherit two versions of one of them. She might have two red pigments tuned to wavelengths as much as 10 nanometres apart.

The crunch question is of course whether the developing brain is plastic enough to wire itself up to make use of this potential extra dimension of colour. The flabbergasting answer may be yes.

Vision researchers used to believe that laying down retinal circuitry was a very careful affair. A ganglion cell had to form its “on/off” receptive field by comparing excitatory input from one type of cone cell with the inhibitory input from a surround of opponent cells. So a red cone would be wired to a surround of green cones. This would leave nowhere for an extra pigment to go. It would fit no existing opponent channel set-up.

However that model has been turned on its head by a rash of recent findings. It now seems that during development, ganglions connect rather randomly to a surround of cones and so end up being driven by a mix of cone types. A crisp response emerges only by neural learning. The gain is adjusted on the many rivalous inputs to sort out a stable response. And if opponent channels are indeed self-organising in this way, then there now appears much more scope for an extra pigment to form its own opponent channel.

This possibility is now being tested by injecting extra gene pigments into a monkey’s eye to see if there is a take-up – work that could lead to a cure for colour-blindness in humans, or even offer the sci-fi prospect of turning ordinary folk into four colour super-perceivers. Work is also going on to check if women known to have four distinct cone pigments show signs of perceiving many more colours. Colour-matching experiments have already shown that some subjects see the same “orange” created by different green-red wavelength mixes as actually being different. One of these ladies, keen on tapestry, complains she can never find a full range of coloured yarns as manufacturers leave great gaps in the spectrum.

Of course, because any extra pigment will be squeezed into the already tight 30 nanometre space between existing red and green photopigments, tetrachromacy might not actually be as colourful as calculations suggest. For that, we might have to inject humans with the extra ultra-violet range pigment enjoyed by fish and birds. But still, what wouldn’t you give for a glimpse through another’s eyes to discover what you might be missing?
 
  • #21
lax1113 said:
the quote --
"The only accounts of the mind that have any chance of solving the other minds problem don't take the subjective, 'first person' nature of the mind seriously, and the accounts that do take it seriously can't solve the other minds problem"

The thing is, long before we develop a "first person" self-awareness, already during the first months after we're born, our brains are programmed to tune into how other people feel and what's going on with them. The fact that other people are self-conscious beings like ourselves is not something we deduce from experience. It's something we already take for granted as soon as we begin to be self-conscious ourselves.

As we start to grow up, we get better and better at interpreting facial expressions, body language and eventually talking... we learn to express ourselves as we learn to understand others, in the same mostly unconscious process of "communicating". But the "problem" philosophers like to discuss is already "solved" at the beginning: the openness to there being other people in our world is built in, biologically. And these biological mechanisms can fail, to varying degrees, as in autism.

I would guess that this specific kind of openness that's captured in the word "you" is at the heart of what makes us human. It's what somehow began to evolve a million or so years ago, gradually making our species so profoundly different from others, by making possible the evolution of language. It seems to me, language can only grow in our minds because of this built-in, unquestioned assumption that there are people there to talk to.

Unfortunately, when philosophers think about the self-aware mind, the mind they're aware of is highly educated, detached, rational, abstract, and sitting alone by itself, thinking. And there's a long history of philosophy taking itself as the model for what's most basic and important about the mind and how it thinks. So philosophical literature is full of reflection on 1st-person experience and 3rd-person reality, and the difficulty of making a reliable connection between them. But I know of only one book -- Buber's religious-philosophical I and Thou -- that takes "you" seriously, as more basic to the nature of the human mind than "I" or "it".
 
  • #22
I should add that the limitations of human language, that lead to an inability to convey sensory experience through it, need not necessarily apply to all conceivable languages. It is conceivable that a more advanced race, maybe even modified humans, might one day actually be able to elicit rich and even novel sensory-like experiences in the minds of the receivers of their messages.
 
  • #23
ConradDJ said:
But the "problem" philosophers like to discuss is already "solved" at the beginning
You are confusing epistemology with ontology.

The difference between them is important if you want to understand philosophy, especially the problem of other minds.

Everything we know about biology comes from experience, and knowledge exists in an individual mind. We don't, however, experience other minds. So your 'assumption' is not justified.

And biological science is based on induction, not deduction.
 
  • #24
lax1113 said:
I think that this idea could be refuted by the idea that you can very easily be surprised amused or disappointed just as easily by things that YOU do. I can be surprised in something that I did myself, so what difference does that make that we can be surprised in what other people do?

I am not trying to say that you are wrong, I am just trying to go a little deeper in the argument.

You need to be "mindless" in order to finally solve the "other minds problem." Digging deeper will only get you to the other side, where it is perfectly shallow again. But before you get there, you will go insane, I promise you...

Go East, young human!

(Beyond Europe, beyond the Central Asian steppes...)
 
  • #25
JoeDawg said:
You are confusing epistemology with ontology.

Not to quibble, but you are confusing ontology -- which concerns the nature of being or what it means to be -- with science. Scientific theories have ontological presuppositions, but the sciences rarely deal with them explicitly.
JoeDawg said:
Everything we know about biology comes from experience, and knowledge exists in an individual mind. We don't, however, experience other minds.


Exactly. There is no way that logic applied to experience can prove that there are other worlds of experience out there in other people's minds. However, outside of a class in so-called philosophy, this is not something it would ever occur to anyone to want to prove, because it's so obvious.

We don't experience other minds, but we believe in them, innately... long before we're aware of having any beliefs or having a mind at all, we have an emotional structure that's geared to developing communicative relationships with others. So you're right, the "assumption" that there are other self-aware beings out there is entirely "unjustified" -- and because we have this unjustifiable assumption built in from the beginning, we gradually learn to imagine what other people are feeling and thinking, in relation to what we think and feel.

If someone were born entirely without this innate need for emotional connection with other people, I imagine they would never learn to talk. (Very severe autism gives us a little bit of an indication of what that would be like.) But since you and I were born with it and did learn to talk, we have an overwhelming abundance of "evidence" that there are in fact other people with minds of their own. It would be literally insane for us to behave as if this assumption were not true, since essentially everything we know about our world depends on it.

Yet, this "evidence" is indeed entirely imaginary. We don't experience other people as having minds of their own -- we imagine it. But how we imagine the world is immensely important to the way we live. The notion that one can be human, or even do science, without believing in unprovable things, is just delusional.
 
  • #26
ConradDJ said:
Scientific theories have ontological presuppositions, but the sciences rarely deal with them explicitly.
This is irrelevant. The issue is one of epistemology NOT ontology.
ConradDJ said:
outside of a class in so-called philosophy, this is not something it would ever occur to anyone to want to prove, because it's so obvious.

This just shows how little you know about philosophy. Philosophy affects how we live life day to day. What you call obvious is simply your unexamined assumptions.

ConradDJ said:
We don't experience other minds, but we believe in them, innately

This is your claim. You have yet to support it.

ConradDJ said:
the "assumption" that there are other self-aware beings out there is entirely "unjustified"

Exactly.

ConradDJ said:
If someone were born entirely without this innate need...

You claim it is innate, but this is where your argument fails.

Claims that something is innate rely on a biological understanding of mind. Biology is based on empirical epistemology. If you are going to talk about 'other minds', you must first talk about what knowledge is, and how it is gained.

Since you don't seem to understand the difference between deductive and inductive reasoning, this is obviously going to be problematic for you.

You dismiss the deep foundational arguments because you can't get past your own assumptions of the obvious. Getting past these assumptions is what basic philosophy classes are most often designed to do.

ConradDJ said:
It would be literally insane for us to behave as if this assumption were not true, since essentially everything we know about our world depends on it.

Again, you confuse ontology with epistemology.

ConradDJ said:
The notion that one can be human, or even do science, without believing in unprovable things, is just delusional.

Again, science has nothing to do with the essential nature of the problem.
 
  • #27
But isn't this essentially just a narrow scope of solipsism? It's very similar to saying that nothing exists, except that you're choosing a certain subset of reality to question rather than all of reality. If that were the case, we might as well close down philosophy forums and physics forums in general, because none of these discussions would be serving any purpose.

My personal favorite justification for the assumption of mindfulness in others is empirical evidence. We can test several inputs to a system for their outputs. The inputs and outputs are not linearly related, of course, but there are stable and unstable nodes. We don't need to know the equations, in the same way we don't need to know the equations to shoot hoops or play golf; our brain does a lot of the work for us, and we begin to understand systems through experience. Success in a broad spectrum of social engagements requires a "theory of mind".

There is also a rationalist element. It serves our intuition well (as do most symmetries) that something that looks and behaves more like us (than anything else in the universe) is more likely to share our experiences.

Constructivism requires that other people have minds (well, at the least scientists) in order to construct their theories in the first place.
 
  • #28
Pain gives you a different perspective - it clears away irrelevant issues. I am not sure anyone has adequately addressed the issue raised by lax1113.
 
  • #29
Ok, so I got this really weird idea... what would it be like to see out of two bodies? You know, like you're inside your body but you're also inside this second body, and can see/feel/etc. out of both of them?
 
  • #30
JoeDawg said:
This just shows how little you know about philosophy. Philosophy affects how we live life day to day. What you call obvious is simply your unexamined assumptions.

Err, wasn't Conrad referencing a particular philosophic alternative - Buber's I~Thou relational approach? As well as making a worthwhile scientific point about the genetic predisposition we have to believe in other minds?

So his reply had content. You as usual just want to bang on about maintaining a distinction between epistemology vs ontology. Which is beginner stuff we all know.

The argument should run, 1) we believe in the existence of other minds, 2) but for the usual reasons we cannot know of the existence of other minds, and then 3)...well what next practically speaking?

And the discussion might be about how strong the a priori belief is (so therefore why the epistemological issue feels more jarring than usual - but perhaps falsely jarring).

Or the Buber point: that the standard philosophical framing of the issue is too object-centric, and that a relation-centric or process view might change the nature of the response in significant ways.

If you have content-ful replies on these matters, bring them to the table. Otherwise it just seems that you are trolling.
 
  • #31
JoeDawg said:
Since you don't seem to understand the difference between deductive and inductive reasoning, this is obviously going to be problematic for you.

But do you in turn not see how it is classic abductive reasoning to begin with an evolutionary "best guess" at the answer?

Before you can arrive at the crisply dichotomised interplay of deduction and induction (from global to local, and local to global), you must start with an abductive jump out of vagueness.

http://en.wikipedia.org/wiki/Abductive_reasoning
 
  • #32
Pythagorean said:
But isn't this essentially just a narrow scope of solipsism?
I am not making an argument 'for solipsism'; solipsism is just the place, with the fewest assumptions, that we start from.

As such, solipsism is very basic to any discussion of epistemology. The 'other minds problem' revolves around 'what we can know', so it is primarily an epistemological problem.

Solipsism is rather trivial to refute, in terms of any sort of ontological discussion.
Ontological assumptions can lead us to error, so there is some value in questioning them, but using an ontological assumption to 'refute' an epistemological problem is an error, since what we know depends on our ability to know. It is the cart before the horse.
Pythagorean said:
If that were the case, we might as well close down philosophy forums and physics forums in general, because none of these discussions would be serving any purpose.

That would be nihilism.
Pythagorean said:
My personal favorite justification for the assumption of mindfulness in others is empirical evidence.

But you've already stacked the deck. You're not really questioning whether you can know that other minds exist; you're just saying you observe 'people exist, and they appear similar to me, so if I have a mind, then they probably do too'. The reasoning is:

Does entity x, have attribute y?
I have attribute y, entity x is like me, therefore entity x has attribute y.
Induction.

When you move from the 'known' to the 'unknown', you are using induction. So moving from my mind to 'other minds' is reasoning inductively. (You run into the problem of induction... but that's a different problem.) This is one way to argue that other minds exist.
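The inductive schema above can be sketched, very loosely, as a toy program. Everything here - the Jaccard similarity measure, the attribute sets, the 0.5 threshold - is a hypothetical choice for illustration, not anything from the thread itself:

```python
# Toy sketch of the inductive schema above: "I have attribute y; entity x
# is like me; therefore entity x (probably) has attribute y."
# The similarity measure (Jaccard), the threshold, and the attribute names
# are all hypothetical illustrations.

def infer_attribute(my_attrs: set, observed: set, attribute: str,
                    threshold: float = 0.5) -> bool:
    """Inductively attribute `attribute` to another entity when its
    observable attributes overlap enough with mine."""
    if attribute not in my_attrs:
        return False  # I can't project what I don't have myself
    overlap = len(my_attrs & observed) / len(my_attrs | observed)
    return overlap >= threshold

me = {"winces_at_pain", "talks", "laughs", "has_mind"}
other = {"winces_at_pain", "talks", "laughs"}  # behaviour we can observe

print(infer_attribute(me, other, "has_mind"))  # prints True
```

The point of the sketch is just that the conclusion is probabilistic: the behavioural overlap can be high while "has_mind" itself remains unobserved, which is exactly the gap the thread is about.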

The question about other minds can be as simple as that, but there is also MORE to it than that. The deeper question is: Can we know other minds exist? And how can we know?

If you look at it from a scientific point of view - you assume inductive reasoning is valid, and that the world indicated by your mind is at least roughly similar to what 'exists' - then it is a trivial ontological question. And if that was all Plato, Aristotle, Descartes, Hume, etc. were talking and writing about... then they were morons, not worth anyone's time.

The scientific point of view assumes induction is valid and that external objects exist. These assumptions may help scientists get their work done, but it's essentially ignoring the problem, not addressing it.

This scientific point of view, however, was the minority view, for most of history. When the ancient greeks talked about knowing, they didn't mean observation, they meant deductive logic. So do we know other minds exist because of observation? Or because we can deduce it? These are two opposing views on knowledge, and both have their problems.

Deductive logic gives you a valid conclusion, if the premises are valid.
Inductive logic has observed (valid) premises, but the conclusion may not be valid.

Can we know other minds exist? And how can we know?
Pythagorean said:
Success in a broad spectrum of social engagements requires a "theory of mind".
Maybe: http://en.wikipedia.org/wiki/Behaviorism

Pythagorean said:
Constructivism requires that other people have minds (well, at the least scientists) in order to construct their theories in the first place.

Unless everyone but you is a p-zombie: http://en.wikipedia.org/wiki/P_zombie
 
  • #33
Chronos said:
Pain gives you a different perspective - it clears irrelevant issues.
How is pain different from 'red'?
 
  • #34
apeiron said:
http://en.wikipedia.org/wiki/Abductive_reasoning

From the article:

"Unlike deduction and in some sense induction, abduction can produce results that are incorrect within its formal system. Hence the conclusions of abduction can only be made valid by separately checking them with a different method, either by deduction or exhaustive induction."

Abduction can be useful, but I don't think I'm overstating it when I say it is generally considered inferior, and in some cases it's even considered a fallacy.

Inference to the best explanation, depends to a very large extent on the person making the inference.
 
  • #35
JoeDawg said:
Abduction can be useful, but I don't think I'm overstating it when I say it is generally considered inferior, and in some cases it's even considered a fallacy.

No, Peirce's point was that it is the essential starting point. It is how you get started in the game.

You seem to be taking the position that deduction is superior because it is a top-down method that produces necessary truth. And yet the premises have to come from somewhere. They begin as guesses and are built up via induction, normally. So they are just as much "the unknowable", just as much a belief, in the end.

Deduction is where you start with global general beliefs and then derive local particular instances. Induction is where you start with local particular instances and derive global general beliefs. It is a mutual process where each refines the other, if done properly.

In the context of this discussion, it is clear that this is a naturally inductive issue as we start at the local particular instance end of things - a confidence that we are at least one example of a conscious mind. We then generalise this instance to a global rule - things that behave and respond to us in the way we would to them are also likely conscious.

That is the easy way to solve the OP - as much as it can be definitely solved.

If we were coming from the other direction, we would have to be able to frame some crisp general theory of consciousness which says, for example, that any system arranged with this essential set of functions will be conscious. From such a theory, we could derive conclusions about whether rocks, or plankton, are conscious.

This would be an even better way of solving the OP - it would mean we had a working theory of consciousness.

So sure, deduction of particulars from generals is great. But what do you think it would actually look like in this case? Apart from a general theory of mind? Developed via abduction and inference.

The essence of the OP is about the impossibility of hopping across from my own particular local instance (my mind, of which I can be as certain as anything) to check the existence of someone else's mind - a second local instance. Whatever you would call that process, it really has nothing to do with deductive reasoning.

It is to do with abduction though. We start out (on partial information) with the reasonable seeming belief we exist in a world of minds. If challenged by a philosopher how we can know this, well, induction can strengthen our belief. And if we have studied enough neurology and systems science to talk about theories of mind, then we could also make deductive arguments from general principles.

A philosopher - of the kind that makes a living from creating doubt as a refuge from pragmatics - can always say "you never really know anything - except that you yourself exist". And so zombies start to get thrown around as ways to keep the doubt going.

A little bit of this is a good antidote against scientific over-confidence. But in the end, zombie arguments get you nowhere interesting. Even their proponents don't live their lives as if they were dealing with a world of zombies. And they don't come up with useful theories of consciousness.
 
