Is Consciousness Simple or Complex?

In summary: consciousness is an ongoing process that can vary in its level of complexity. Furthermore, consciousness is a product of the brain and cannot exist without it, which undercuts the idea of a "simple consciousness" that would exist independently of a brain. "Complex consciousness" is the more accurate picture, since consciousness results from the complex processes within the brain. The idea of multiple identical copies of a brain having the same consciousness does, however, raise questions about identity and the relationship between the physical brain and consciousness. Overall, consciousness is a complex, ongoing process that cannot be reduced to just two possibilities.
  • #36
The brain takes no part in decisions on:
how to grow bones so that they will support the structure
how to align muscles correctly
how to regulate and respond locally to environmental signals

Neither does the foot. The foot as an "organ" (er...body part) performs little to no information processing beyond the vanilla homeostatic regulation and Hox-gene-mediated structural differentiation performed by almost every other body part.

I can build you an artificial foot that behaves, functionally, almost exactly like a real foot, but we lack anything approaching the computational power to simulate the information processing that goes into making a functional brain.
 
  • #37
apeiron said:
Let's get it clear. Are you claiming that legs and brains are of equal complexity?
No, I'm not claiming this.
apeiron said:
If so, how are you defining complexity?
Well, I'm not, but my definition of complexity here would be the number of components and the interactions between those components. I assume that you are using a different definition?
apeiron said:
And then what sources are there that show by your chosen measure that the complexity is in fact equivalent.
A limb has a greater number of components that are more diverse, with correspondingly more interactions. So when I think of complexity I'm thinking of the different types of cells, the various differences in gene expression they have, how this affects their metabolism, what effect this has on intercellular signalling, etc. If you want a source, start with something like Gray's and look at the diversity of organs and tissue types in a limb versus the brain.
 
  • #38
Number Nine said:
Neither does the foot. The foot as an "organ" (er...body part) performs little to no information processing beyond the vanilla homeostatic regulation and Hox-gene-mediated structural differentiation performed by almost every other body part.

I can build you an artificial foot that behaves, functionally, almost exactly like a real foot, but we lack anything approaching the computational power to simulate the information processing that goes into making a functional brain.

You're setting up a blatant strawman and using your own personal scope of "function" to suit your argument.
 
  • #39
As strictly an organ, a leg would be more complex physically, which is what I took Ryan to be referring to. What functions they perform is a completely different issue, not one that Ryan, IMO, was addressing in that particular post.

Ryan_m_b said:
I wouldn't go that far. Every part of the body is incredibly complex with multiple interacting tissue types.
Bolding mine. I think he is very clear.

Apeiron, you were referring to brain functions. The two of you were simply referring to two different things. You are each correct, IMO; I'm no expert. I think it's time to drop this and move on.
 
  • #40
Ryan_m_b said:
No, I'm not claiming this.

OK, so nothing to argue about then.

Ryan_m_b said:
Well, I'm not, but my definition of complexity here would be the number of components and the interactions between those components. I assume that you are using a different definition?

Yes, my definition of complexity would be far more complex. :smile:

Taking a systems science/semiotic/complex adaptive systems perspective, I would argue that you are talking about complication, not complexification.

The point there is that it is not enough to talk about just information; biological complexity is meaningful information, the kind that genes and neurons code for but muscle fibres don't.

Ryan_m_b said:
If you want a source, start with something like Gray's and look at the diversity of organs and tissue types in a limb versus the brain.

Again, complication is not complexification.
 
  • #41
Perhaps this whole complexity discussion deserves its own thread. Anyway, this thesis introduction by a graduate student at Brandeis outlines the main issues with defining complexity:

I still think a limb has all the complexity you could ask for, as defined quantitatively by Kolmogorov analysis.

One of the problems with studying the mechanisms and history of complex systems is the lack of a working definition of complexity. We have intuitive notions that often lead to contradictions. There have been numerous attempts to define the complexity of a given system or phenomenon, usually by means of a complexity measure -- a numerical scale to compare the complexity of different problems, but all of them fall short of expectations.

The notion of Algorithmic Information Content (AIC) is a keystone in the problem. The AIC or Kolmogorov complexity of a binary string is defined as the length of the shortest program for a Universal Turing Machine (UTM) whose output is the given string [127,79,19].

Intuitively, the simplest strings can be generated with a few instructions, e.g. ``a string of 100 zeros''; whereas the highly complex ones require a program slightly longer than the string itself, e.g. ``the string 0010111100101110000010100001000111100110''. However, the minimal program depends on the encoding or ``programming language'' chosen; the difference between two different encodings being bound by a constant. Moreover, AIC is uncomputable. Shannon's entropy [121] is a closely related measure (it is an upper bound to AIC [80,136]).

Further research on the matter of complexity measures stems from the notion that the most difficult, the most interesting systems are not necessarily those most complex according to algorithmic complexity and related measures. Just as there is no organization in a universe with infinite entropy, there is little to be understood, or compressed, on maximally complex strings in the Kolmogorov sense. The quest for mathematical definitions of complexity whose maximums lie somewhere between zero and maximal AIC [10,82,58,53] has yet to produce satisfactory results. Bruce Edmonds' recent PhD thesis on the measurement of complexity [33] concludes that none of the measures that have been proposed so far manages to capture the problem, but points out several important elements:

1.
Complexity depends on the observer.
The complexity of natural phenomena per se can not be defined in a useful manner, because natural phenomena have infinite detail. Thus one cannot define the absolute or inherent complexity of ``earth'' for example. Only when observations are made, as produced by an acquisition model, is when the question of complexity becomes relevant: after the observer's model is incorporated.

2.
``Emergent'' levels of complexity
Often the interactions at a lower level of organization (e.g. subatomic particles) result in higher levels with aggregate rules of their own (e.g. formation of molecules). A defining characteristic of complexity is a hierarchy of description levels, where the characteristics of a superior level emerge from those below it. The condition of emergence is relative to the observer; emergent properties are those that come from unexpected, aggregate interactions between components of the system.

A mathematical system is a good example. The set of axioms determines the whole system, but demonstrable statements receive different names like ``lemma'', ``property'', ``corollary'' or ``theorem'' depending on their relative role within the corpus. ``Theorem'' is reserved for those that are difficult to prove and constitute foundations for new branches of the theory -- they are ``emergent'' properties.

A theorem simplifies a group of phenomena and creates a higher-level language. This type of re-definition of languages is typical of the way we do science. As Toulmin puts it, ``The heart of all major discoveries in the physical sciences is the discovery of novel methods of representation, and so of fresh techniques by which inferences can be drawn'' [135, p. 34].

3.
Modularization with Interdependencies
Complex systems are partially decomposable, their modules dependent on each other. In this sense, Edmonds concludes that among the most satisfactory measures of complexity is the cyclomatic number [33, p. 107][129], which is the number of independent closed loops on a minimal graph.
The cyclomatic number measures the complexity of an expression, represented as a tree. Expressions with either all identical nodes or with all different nodes are the extremes in an ``entropy'' scale, for they are either trivial or impossible to compress. The more complex ones in the cyclomatic sense are those whose branches are different, yet some subtrees are reused across branches. Such a graph can be reduced (fig. 1.1) so that reused subexpressions appear only once. Doing so reveals a network of entangled cross-references. The count of loops in the reduced graph is the cyclomatic number of the expression.
http://pages.cs.brandeis.edu/~pablo/thesis/html/node9.html
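A computable stand-in for the AIC idea quoted above, for anyone who wants to play with it: true Kolmogorov complexity is uncomputable, but an off-the-shelf compressor gives a crude proxy. The minimal Python sketch below is an illustration only (not from the thesis); it shows that a trivially ordered string compresses to almost nothing while a random one barely compresses at all.

```python
import os
import zlib

def compressed_size(data: bytes) -> int:
    """Length of the zlib-compressed data: a crude, computable proxy
    for algorithmic information content (true AIC is uncomputable)."""
    return len(zlib.compress(data, 9))

ordered = b"0" * 10_000          # "a string of 10000 zeros": a short program generates it
noisy = os.urandom(10_000)       # random bytes: no description much shorter than the data itself

print("ordered:", len(ordered), "->", compressed_size(ordered))  # compresses to a few dozen bytes
print("random :", len(noisy), "->", compressed_size(noisy))      # compresses hardly at all
```

Note that the random bytes score highest on this measure, which is exactly the objection raised a few posts later: maximal algorithmic complexity is randomness, not organization.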
 
  • #42
I agree with the above post; can we please get back to the main discussion? Or make a new thread about the definitions of complexity for that discussion?

I'm either uninterested in or too dumb to understand this current debate, so let's move on, please.
 
  • #43
Pythagorean said:
Perhaps this whole complexity discussion deserves its own thread. Anyway, this thesis introduction by a graduate student at Brandeis outlines the main issues with defining complexity:

I still think a limb has all the complexity you could ask for, as defined quantitatively by Kolmogorov analysis.

But as the grad student says, high algorithmic complexity in fact implies randomness, not order.

The quest for mathematical definitions of complexity whose maximums lie somewhere between zero and maximal AIC [10,82,58,53] has yet to produce satisfactory results.

So if, say, limbs scored higher on AIC than brains, what would you conclude?
 
  • #44
apeiron said:
But as the grad student says, high algorithmic complexity in fact implies randomness, not order.



So if, say, limbs scored higher on AIC than brains, what would you conclude?
You just can't let go and get back on topic, can you?
 
  • #45
Number Nine said:
That doesn't seem to me to be a defensible position. Almost any cellular or genetic process that occurs in the foot occurs in the head, so the differences we're looking for are large scale, emergent properties of tissues and cellular networks. Most of these (e.g. blood flow) occur in both the brain and the foot, so we're left with the fact that the brain performs large scale parallel, distributed information processing, whereas the foot does...what? Incredibly fine and detailed muscle contractions, yes, that are programmed by the brain. The statistical procedures performed by the brain alone would fill many a textbook, each with fairly sophisticated mathematical prerequisites. Did you know that an ensemble of neurons, each experiencing Hebbian synaptic changes, can perform principal component analysis? Your brain is literally deriving optimal basis vectors to best express the information fed to it by the outside world.
I don't see how those differences are more than quantitative. Take "parallel processing": if two muscle fibres in a leg contract at the same time, that would fit the definition of parallel processing. It's an arbitrary definition in the first place, and only indicative of chronological order, the moment at which the processing happens. If you look at each of the terms you mention, like "distributed", "information", etc., you will find they boil down to very ordinary physical processes. Not just legs, but even rocks do those things to some degree.
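As an aside on the Hebbian-PCA claim quoted above: it refers to results along the lines of Oja's rule, where a single linear neuron with a Hebbian weight update plus a normalizing decay term converges (up to sign) to the first principal component of its inputs. A minimal numpy sketch, purely illustrative; the toy data and learning rate are arbitrary choices, not anything from the posts above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-D input with most variance along the (1, 1) direction.
cov = np.array([[3.0, 2.0],
                [2.0, 3.0]])
X = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=5000)

w = rng.normal(size=2)          # synaptic weight vector, random start
eta = 0.01                      # learning rate

for x in X:
    y = w @ x                   # neuron's linear output
    w += eta * y * (x - y * w)  # Oja's rule: Hebbian term plus normalizing decay

# Compare with the leading eigenvector of the sample covariance matrix.
eigvals, eigvecs = np.linalg.eigh(np.cov(X.T))
pc1 = eigvecs[:, np.argmax(eigvals)]
print("learned w :", w / np.linalg.norm(w))
print("true PC1  :", pc1)
```

Running this, the learned weight vector lines up (up to sign) with the covariance matrix's leading eigenvector, i.e. the direction of maximum variance in the input.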
 
  • #46
Pythagorean said:
Perhaps this whole complexity discussion deserves its own thread. Anyway, this thesis introduction by a graduate student at Brandeis outlines the main issues with defining complexity:

I still think a limb has all the complexity you could ask for, as defined quantitatively by Kolmogorov analysis.

http://pages.cs.brandeis.edu/~pablo/thesis/html/node9.html
The important part I see in the paper is that complexity or emergence is observer-dependent. That means consciousness cannot be the result of it.
 
  • #47
Let's focus my quote-post a little more:

Often the interactions at a lower level of organization (e.g. subatomic particles) result in higher levels with aggregate rules of their own (e.g. formation of molecules). A defining characteristic of complexity is a hierarchy of description levels, where the characteristics of a superior level emerge from those below it. The condition of emergence is relative to the observer; emergent properties are those that come from unexpected, aggregate interactions between components of the system.

And this is exactly what cellular differentiation entails, from the molecular genetic networks to the cellular networks to the organ/tissue and environment interaction. This surely exists in a limb. That's not to say it doesn't exist in the brain, or even that the brain isn't more complex; it's just that the question is not as trivial as it's being made out to be.

As far as the OP is concerned, I think we all agree that consciousness is complex; the current discussion/disagreement is more about how complex it is compared to other living tissues. I contend that complexity isn't unique to the brain/consciousness in this regard.

pftest said:
The important part I see in the paper is that complexity or emergence is observer-dependent. That means consciousness cannot be the result of it.

That's a bit of a misinterpretation. It's not that emergence and complexity don't exist without humans; it's that humans draw their boundaries between classifications in an arbitrary manner. The decoder and the code together can be objectified, but the code cannot. We often have the problem of being "trapped" in our decoder scheme. That's not to say that the things we're encoding aren't real, just that our codes/symbols/definitions fail to completely describe them.
 
  • #48
Pythagorean said:
That's a bit of a misinterpretation. It's not that emergence and complexity don't exist without humans; it's that humans draw their boundaries between classifications in an arbitrary manner. The decoder and the code together can be objectified, but the code cannot. We often have the problem of being "trapped" in our decoder scheme. That's not to say that the things we're encoding aren't real, just that our codes/symbols/definitions fail to completely describe them.
That's actually what I mean: we set up arbitrary boundaries, and then we say consciousness happens on this side of the boundary but not the other. If the boundary is arbitrary, then it is not possible for consciousness to exist only on one side.
 
  • #49
GladScientist said:
Consciousness is still a big mystery, and I've always wondered what causes it.
Me too. Phenomenal properties/qualia/subjective awareness/the mental do seem to be inexplicable in terms of "physical" properties (as presently understood). I think I've become persuaded by the "ignorance hypothesis":

Perhaps, given a full/complete science, an ideal subject would in the future be able to discover/understand how phenomenal properties are explicable in terms of physical properties, but our knowledge of the physical world is constrained by the scientific concepts we currently possess. So it's quite possible that there exist physical properties of which we currently have no conception (perhaps even beyond our cognitive grasp, whatever the future advances in our "physical" understanding), and that these are required to bridge the gap. Stoljar and McGinn have presented such arguments.
 
  • #50
Pythagorean said:
Let's focus my quote-post a little more:

... A defining characteristic of complexity is a hierarchy of description levels, where the characteristics of a superior level emerge from those below it. The condition of emergence is relative to the observer; emergent properties are those that come from unexpected, aggregate interactions between components of the system.​


I think a discussion of complexity needs to focus on the difference – which occurs at every level – between the things involved and the relationships that can happen between those things. For example, between atoms and the relationships atoms have with each other, which are the basis for forming molecules.

That is, instead of thinking in terms of simpler things (systems) “aggregating” to form larger things, we could imagine the hierarchy in terms of things having relationships, which allow for the formation of more complex kinds of things, that can have more complex relationships, etc.

“Thing” in this context means a structure that lasts over time and continues to be what it is, though it may also change over time. Things typically have both constant and variable properties.

A “relationship” is something that happens between things, made of specific interactions taking place at certain times. A relationship between two things can last over time, but only to the extent there is a repetitive pattern in their interactions. The characteristics of relationships are not properties they possess in themselves, but have to do with the effect these patterned interactions have on the things involved.

When it comes to consciousness, for example – if we look at the brain as a thing, an organ consisting of an aggregate of cells, its complexity is perhaps comparable to other organs. What makes the brain different is the kind of relationships that happen through synaptic connections between neurons. These relationships support a kind of real-time information processing that’s specific to the neural system, which operates at a much higher level of complexity than anything else in the body, or anything else that we know of.

The new kind of “things” that these neural relationships make possible are animals. Whether or not we think of animals as “conscious” is purely a matter of how we like to use that word. But we are not yet at the level of human consciousness.

The kind of simple unity that we indicate with the words “I” and “You” – that we experience and think about as “subjective awareness” – doesn’t emerge out of any characteristic of the neural system, per se, though our brains have obviously evolved to support it. But it can develop only in a certain kind of relationship that can happen between two animals.

So far as we know, this kind of relationship is highly evolved only between humans, though many other animals have relationships with some of the same characteristics. But to the extent we humans learn to talk to each other and think about each other, we also learn to talk to ourselves and think about ourselves. This kind of "internal self-awareness" is not comparable with whatever internal processing may happen in other animals. We can call both "consciousness" if we want to, but we're talking about two different things.

And of course, it's out of this kind of communicative I-You relationship that there arises a whole new hierarchy of “things” like words and ideas and corporations.

I guess my point is that a word like “complexity” doesn’t capture very well the profound differences that can emerge in the hierarchy of systems. At each level, the types of complexity that pertain to things is quite different from the kinds of complexity that can happen in their relationships.
 
  • #51
ConradDJ said:
I think a discussion of complexity needs to focus on the difference – which occurs at every level – between the things involved and the relationships that can happen between those things. For example, between atoms and the relationships atoms have with each other, which are the basis for forming molecules.

That is, instead of thinking in terms of simpler things (systems) “aggregating” to form larger things, we could imagine the hierarchy in terms of things having relationships, which allow for the formation of more complex kinds of things, that can have more complex relationships, etc.

“Thing” in this context means a structure that lasts over time and continues to be what it is, though it may also change over time. Things typically have both constant and variable properties.

A “relationship” is something that happens between things, made of specific interactions taking place at certain times. A relationship between two things can last over time, but only to the extent there is a repetitive pattern in their interactions. The characteristics of relationships are not properties they possess in themselves, but have to do with the effect these patterned interactions have on the things involved.

When it comes to consciousness, for example – if we look at the brain as a thing, an organ consisting of an aggregate of cells, its complexity is perhaps comparable to other organs. What makes the brain different is the kind of relationships that happen through synaptic connections between neurons. These relationships support a kind of real-time information processing that’s specific to the neural system, which operates at a much higher level of complexity than anything else in the body, or anything else that we know of.

The new kind of “things” that these neural relationships make possible are animals. Whether or not we think of animals as “conscious” is purely a matter of how we like to use that word. But we are not yet at the level of human consciousness.

The kind of simple unity that we indicate with the words “I” and “You” – that we experience and think about as “subjective awareness” – doesn’t emerge out of any characteristic of the neural system, per se, though our brains have obviously evolved to support it. But it can develop only in a certain kind of relationship that can happen between two animals.

So far as we know, this kind of relationship is highly evolved only between humans, though many other animals have relationships with some of the same characteristics. But to the extent we humans learn to talk to each other and think about each other, we also learn to talk to ourselves and think about ourselves. This kind of "internal self-awareness" is not comparable with whatever internal processing may happen in other animals. We can call both "consciousness" if we want to, but we're talking about two different things.

And of course, it's out of this kind of communicative I-You relationship that there arises a whole new hierarchy of “things” like words and ideas and corporations.

I guess my point is that a word like “complexity” doesn’t capture very well the profound differences that can emerge in the hierarchy of systems. At each level, the types of complexity that pertain to things is quite different from the kinds of complexity that can happen in their relationships.
When you talk about the relationships, presumably you are referring to the actual physical forces between particles, right? So while we can create an elaborate hierarchy of descriptions, physically speaking the system remains a flat collection of particles and forces interacting. I think it's a good idea to separate our human interpretations and descriptions of an object from the actual physical properties it has. A good example is that of a computer. We talk about it as if it calculates, has memory, does information processing, etc. Physically it does none of those things. Unless we want to say that all particles calculate and have memory.
 
  • #52
I agree, pftest, and we do often refer to systems of molecules having memory if the ensemble can consistently return to a prior state when it's not being acted on (having assumed an influenced state under action).

This is where the word "emergent" is useful. Because the ensemble of particles alone is not anything special; it's the properties that arise from particular ensembles that aren't properties of individual members of the ensemble.
 
  • #53
To op

If consciousness is simple, is awareness also simple?
Is awareness a property of electromagnetism, an arrangement of patterns generalised so as to give cohesion to the myriad of sensory info? Something akin to what's presented on the computer monitor, rather than any particular piece or set of information occurring in the computer itself [processor etc.].

If consciousness is an inherent property of the universe, but is not awareness, or awareness requires faculties of the brain; what is the inherent property?

edited for repetition.

I’d add that information is an important part of how we see consciousness. As far as I know, all physical information is collocative and patterns built from such [like binary in a computer or DNA].

Is that what all information is? Even what you are thinking right now? How about art and image-based info? Or conceptual info, which I feel occurs prior to linguistic thought, or underlies it?
 
  • #54
Assuming consciousness arises out of physical processes in the brain, I don't understand its significance from the Darwinian view.
Okay, eyes, skin, ears and all those sensory organs give data about the external environment to the brain, and the different modules in the brain respond to it. For example, a module A could exist that takes signals from the eye and initiates whatever actions the received information requires (a lion is chasing you; module A, a reflex action, triggers the leg muscles to start running).

Similarly, other modules might exist that receive and interpret the signals from other sense organs and trigger the necessary actions, if any. What is the role of consciousness here? Why should that info be CCed to the consciousness guy? Sharing the signals with consciousness, and nourishing whatever process sustains consciousness, seems to be an unnecessary burden for life.

If there were two branches of life, one with consciousness and one without, I would tend to think that the latter should have a better chance at survival.

An interesting example I can think of is hackers with auto-aim plugins in the online first-person shooter Counter-Strike. The auto-aim-enabled hacker is like the latter version of life. The program 'sees' an enemy approaching, just aims at his head and shoots. Mission accomplished! The hacker is a loser with no life, but that's a different discussion.

The humans, on the other hand, have loads of unnecessary things going on in their coconut shells. Millions of neurons firing, carrying absolutely trivial stuff!
"That shot would get me to a 50 frags!"
"I swear b3A$t was a playing like a noob last week. must be a hack!"
"Mom! Yes.. coming down for lunch in a minute!"
"When am i ever going to start studying for the math test?"

Seeing the huge score difference even between an experienced player and a hacker, I wonder how life with consciousness could have survived had there been a consciousness-free version.
Imagine a battle between Terrorists full of hackers and Counter-Terrorists with a mix of brilliant and dull non-hackers. Puppies vs. velociraptors!
 
  • #55
Art imitates life. Modern computers reflect the function and architecture of the human brain; they have been created "in our own image". Accordingly, consciousness exists in the brain's cyberspace. It is one aspect of the phenomenology of mind. - CW
 
  • #56
chasw said:
Art imitates life. Modern computers reflect the function and architecture of the human brain; they have been created "in our own image". Accordingly, consciousness exists in the brain's cyberspace. It is one aspect of the phenomenology of mind. - CW

I disagree: the human brain's architecture is vastly more complex than a computer's. A computer has been designed with a goal from the start. Brains evolved over a long period of time and in some cases lack the efficiency of a computer, but the computer lacks the ingenuity of a brain.

In fact, many projects are currently under way to reproduce brain architecture in computers; IBM has gotten a lot of press for their attempts, and Eugene Izhikevich runs Brain Corp., which does many similar things. To the computer world, brain architecture is rather novel.
 
  • #57
"In fact, many projects are currently under way to reproduce brain architecture in computers; snip. To the computer world, the brain architecture is rather novel."

Thanks, P. My point exactly: animal brains are very old, while computers are relatively new human artifacts patterned after our understanding of how the brain works. The brain's function and architecture evolve slowly, but our artificial analog to the brain is still undergoing rapid improvements.

In the initial stages of information technology, most computers have been based on the concept of a unified machine with built-in data storage, volatile memory and a central processor; think of iRobot. In the next stage, humans have begun to separate these functions with specialized machines for data storage and data processing. For example, a modern Beowulf-style high-performance compute cluster behaves like a collective mind, with many little artificial brains working in concert; think of the Borg. Our efforts to build a thinking machine have only just begun. - CW
 
  • #58
My history of study is brain science, so I have something to say about consciousness. First of all, consciousness is best understood from a non-reductionist perspective à la David Chalmers. Does that mean it's an "emergent" phenomenon? Well, yes, but... The best way to understand consciousness is to realize that it appeared in human evolution in relation to the emergence of self-awareness. The awareness of self put boundary conditions upon the brain that demanded that the brain produce what Piaget referred to as "action schemes" in order to negotiate simple AND complex scenarios in its environment.

The self can be viewed as the central character in this scheme construction, and the scheme construction only becomes manifest as qualitative experience or "sentience" insofar as the brain is able to distinguish the varied properties of that constructed universe. Thus, consciousness equates to the ability of the central character manifested within the brain's dynamic physiology to distinguish stimuli. That is what consciousness is: it is the property of a derived central character within the brain in the act of distinguishing stimuli. Those stimuli can be sights, sounds, tastes, smells, etc., and they can also be emotions, thoughts, or other non-modality-specific signals.
 
  • #59
In fact, many projects are currently under way to reproduce brain architecture in computers

Yes there are, and that is a good thing, regardless. However, none of these efforts has much chance of success barring some self-organized assembly of information evolution emerging from the infrastructure, which would manifest unintentionally from the original design perspective. I have studied this. Current efforts to model brain function according to this paradigm use a "brute force" approach, taking the individual neuron, complete with the Hodgkin-Huxley equations, as the kernel of functionality in every brute-force petaflop iteration.

But, as I stated in my previous post, the scale at which to analyze consciousness is not the small scale (i.e., the neuronal or quantum level); it is the level of the cytoarchitectonically defined cortical region, such as V1 or S1. At this level, brain processes are modeled as a set of coupled oscillators, the mathematics of which can be captured by coupled sets of nonlinear ordinary differential equations. Modelling the brain and consciousness this way is an active and current direction of basic research in the USA, and is currently being funded by DARPA research grants.
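For concreteness, one common toy version of "a set of coupled oscillators governed by coupled nonlinear ODEs" is the Kuramoto model; a minimal scipy sketch follows. The number of regions, the frequencies and the coupling strength are arbitrary illustrative values, not parameters from any of the work mentioned above.

```python
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(1)

N = 8                               # number of "cortical regions" (oscillators)
omega = rng.normal(1.0, 0.1, N)     # natural frequencies (rad/s), arbitrary toy values
K = 0.8                             # coupling strength

def kuramoto(t, theta):
    """Coupled nonlinear ODEs: dtheta_i/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i)."""
    diff = theta[None, :] - theta[:, None]
    return omega + (K / N) * np.sin(diff).sum(axis=1)

theta0 = rng.uniform(0, 2 * np.pi, N)
sol = solve_ivp(kuramoto, (0.0, 50.0), theta0)

# Order parameter r in [0, 1]: r -> 1 as the oscillators phase-lock.
r_start = np.abs(np.exp(1j * theta0).mean())
r_end = np.abs(np.exp(1j * sol.y[:, -1]).mean())
print(f"synchrony at t=0: {r_start:.2f}, at t=50: {r_end:.2f}")
```

With coupling well above the synchronization threshold, the order parameter climbs toward 1, i.e. the "regions" phase-lock; weaken the coupling and they drift independently.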
 
  • #60
Of course, the brain is important; it must be, right?
However, there is an increasing body of empirical evidence that contradicts the "brain as the ultimate seat of consciousness" hypothesis:

- There are many known cases like the one reported here: http://www.newscientist.com/article/dn12301-man-with-tiny-brain-shocks-doctors.html in which a normally functioning human (education, job, family, social skills) turns out to have just a fraction of the brain tissue of other normally functioning humans.
- A patient has recently been observed who has severe brain damage in all the brain areas that neuroscience has identified as important for consciousness and cognition who is still able to pass the famous "mirror test":
http://www.plosone.org/article/info:doi/10.1371/journal.pone.0038413#pone-0038413-g001
- It is known that in cases of severe epilepsy in children (younger than about 8), removing one entire hemisphere will yield (after rehabilitation) no noticeable impairments later on in life.

So the brain alone may not be enough to explain the complexity of consciousness, but is a leg important? There have been reports of patients who had limbs amputated (if I recall correctly, especially where the hands were concerned) who "forgot" certain specialized actions they were able to perform very proficiently before the amputation (e.g. professional motor skills like those of a musician or locksmith), in the sense that they could neither remember nor imagine performing the action.

Disciplines of the behavioural sciences that are concerned with the role of the body, nervous system and environment in the emergence of complex adaptive behaviour are ecological psychology and embodied embedded cognitive science. If anyone is interested I can elaborate on what they would have to say about the subject.
 
  • #61
FredoX said:
Of course, the brain is important; it must be, right?
However, there is an increasing body of empirical evidence that contradicts the "brain as the ultimate seat of consciousness" hypothesis:

- There are many known cases like the one reported here: http://www.newscientist.com/article/dn12301-man-with-tiny-brain-shocks-doctors.html in which a normally functioning human (education, job, family, social skills) turns out to have just a fraction of the brain tissue of other normally functioning humans.
- A patient has recently been observed who has severe brain damage in all the brain areas that neuroscience has identified as important for consciousness and cognition who is still able to pass the famous "mirror test":
http://www.plosone.org/article/info:doi/10.1371/journal.pone.0038413#pone-0038413-g001
- It is known that in cases of severe epilepsy in children (younger than about 8), removing one entire hemisphere will yield (after rehabilitation) no noticeable impairments later on in life.
Welcome to the forums. Cases like this are examples of how adaptive the brain is; over time, different areas can take on functions previously lost.
FredoX said:
So the brain alone may not be enough to explain the complexity of consciousness, but is a leg important? There have been reports of patients who had limbs amputated (if I recall correctly, especially where the hands were concerned) who "forgot" certain specialized actions they were able to perform very proficiently before the amputation (e.g. professional motor skills like those of a musician or locksmith), in the sense that they could neither remember nor imagine performing the action.

Disciplines of the behavioural sciences that are concerned with the role of the body, nervous system and environment in the emergence of complex adaptive behaviour are ecological psychology and embodied embedded cognitive science. If anyone is interested I can elaborate on what they would have to say about the subject.
Those sound like pseudo-scientific anecdotes. If there is any change after amputation it is likely due to the psychological stress of the event, not because memories have been kept in the limb.
 
  • #62
Ryan_m_b said:
Welcome to the forums. Cases like this are examples of how adaptive the brain is; over time, different areas can take on functions previously lost.

Thanks, these are very interesting forums indeed.

The examples I used sure are examples of brain plasticity! But to me they raise the question of what you can learn about consciousness by studying the cytoarchitecture of area V1 when people can be conscious and cogent if it is removed altogether. In your answer also lies a question... If a function can be lost and then taken over by other areas, where does this function reside? It cannot be located exclusively in the piece of brain that was lost; how else could you recover it?

Also take into account that in the patients whose heads contain more liquid than brain tissue, the loss of brain tissue is so extreme that there is no "other", healthy area that can take over. Moreover, there is no dramatic loss of function that prompts the discovery of these cases; they are often accidental discoveries. So these examples are not exactly the same as recovery of function after acute acquired brain damage.

Ryan_m_b said:
Those sound like pseudo-scientific anecdotes. If there is any change after amputation it is likely due to the psychological stress of the event, not because memories have been kept in the limb.

If by anecdotal pseudo-science you mean that no experiments can be found in the literature in which people have been deliberately amputated to answer this question, you are right :)
I must admit I cannot find the paper I was referring to right now, but here is a much more recent study showing that motor imagery, imagining a movement or action sequence, is impaired in amputees (and, for instance, not in immobilized patients): http://nnr.sagepub.com/content/23/5/449.short

[edit just to be sure: I am not suggesting there are memories stored in the limb, just that memories / function are not stored in the brain as if it were an information database]
 
  • #63
Why not think of the brain in a quantum sense? This still makes it a physical object but allows for many more possibilities: entanglement, qubits and such. The quantum landscape emerges into the classical existence we experience with our classical senses. Yet the definition of consciousness eludes us; interpretations through history have always been in the classical-physics sense. Perhaps humans have been using the wrong set of tools in an attempt to explain this great question of wonder?
 

Similar threads

  • General Discussion
Replies
2
Views
1K
  • Set Theory, Logic, Probability, Statistics
Replies
3
Views
417
Replies
6
Views
1K
Replies
23
Views
2K
  • General Discussion
Replies
5
Views
2K
  • Quantum Interpretations and Foundations
Replies
14
Views
979
Replies
4
Views
865
  • Other Physics Topics
Replies
7
Views
1K
  • General Discussion
Replies
21
Views
5K
Replies
2
Views
1K
Back
Top