Recognitions:
Gold Member

## Is Consciousness Simple or Complex?

 Quote by Ryan_m_b I agree with this analysis.
Let's get it clear. Are you claiming that legs and brains are of equal complexity?

If so, how are you defining complexity?

And then what sources are there to show, by your chosen measure, that the complexity is in fact equivalent?

 The brain takes no part in decisions on: how to grow bones so that they'll support the structure; how to align muscles; how to regulate and respond locally to environmental signals
Neither does the foot. The foot as an "organ" (er...body part) performs little to no information processing beyond the vanilla homeostatic regulation and Hox-gene-mediated structural differentiation performed by almost every other body part.

I can build you an artificial foot that behaves, functionally, almost exactly like a real foot, but we lack anything approaching the computational power to simulate the information processing that goes into making a functional brain.

Mentor
Blog Entries: 1
 Quote by apeiron Let's get it clear. Are you claiming that legs and brains are of equal complexity?
No, I'm not claiming this.
 Quote by apeiron If so, how are you defining complexity?
Well, I'm not, but my definition of complexity here would be the number of components and the interactions between said components. I assume that you are using a different definition?
 Quote by apeiron And then what sources are there that show by your chosen measure that the complexity is in fact equivalent.
A limb has a greater number of components that are more diverse, with correspondingly more interactions. So when I think of complexity I'm thinking of the different types of cells, the various differences in gene expression they have, how this affects their metabolism, what effect this has on intercellular signalling, etc. If you want a source, start with something like Gray's Anatomy and look at the diversity of organs and tissue types in a limb versus the brain.
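As a side note, the "components plus interactions" definition above can be made concrete with a toy count. This is purely my own illustrative sketch, not anything from the thread: even counting only possible pairwise interactions, their number grows quadratically with the number of component types, which is why interaction-based complexity outruns a simple parts list.

```python
# Toy illustration: if complexity is components plus the interactions
# between them, the (maximum) pairwise-interaction count dominates.

def pairwise_interactions(n):
    """Maximum number of distinct pairwise interactions among n components."""
    return n * (n - 1) // 2

for n in (4, 10, 40):
    print(n, "components ->", pairwise_interactions(n), "possible pairwise interactions")
```

A tissue with ten times as many cell types therefore has roughly a hundred times as many potential pairwise interactions to account for.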

Recognitions:
Gold Member
 Quote by Number Nine Neither does the foot. The foot as an "organ" (er...body part) performs little to no information processing beyond the vanilla homeostatic regulation and Hox-gene-mediated structural differentiation performed by almost every other body part. I can build you an artificial foot that behaves, functionally, almost exactly like a real foot, but we lack anything approaching the computational power to simulate the information processing that goes into making a functional brain.
You're setting up a blatant strawman and using your own personal scope of "function" to suit your argument.

Mentor
Blog Entries: 4
As strictly an organ, a leg would be more complex physically, which is what I took Ryan to be referring to. What functions they perform is a completely different issue, not one that Ryan, IMO, was addressing in that particular post.

 Quote by Ryan_m_b I wouldn't go that far. Every part of the body is incredibly complex with multiple interacting tissue types.
Bolding mine. I think he is very clear.

Apeiron, you were referring to brain functions. The two of you were simply referring to two different things. You are each correct, IMO, I'm no expert. I think it's time to drop this and move on.

Recognitions:
Gold Member
 Quote by Ryan_m_b No, I'm not claiming this.
OK, so nothing to argue about then.

 Quote by Ryan_m_b Well I'm not but my definition here of complexity would be the number of components and interactions between said components. I assume that you are using a different definition?
Yes, my definition of complexity would be far more complex.

Taking a systems science/semiotic/complex adaptive system perspective, I would argue that you are talking about complication not complexification.

The point there is that it is not enough to talk about just information; biological complexity is meaningful information, the kind that genes and neurons code for but muscle fibres don't.

 Quote by Ryan_m_b If you want a source start with something like Grays and look at the diversity of organs and tissue types in a limb verses the brain.
Again, complication is not complexification.

Recognitions:
Gold Member
Perhaps this whole complexity discussion deserves its own thread. Anyway, this thesis introduction by a graduate student at Brandeis outlines the main issues with defining complexity:

I still think a limb has all the complexity you could ask for, as defined quantitatively by Kolmogorov analysis.

 One of the problems with studying the mechanisms and history of complex systems is the lack of a working definition of complexity. We have intuitive notions that often lead to contradictions. There have been numerous attempts to define the complexity of a given system or phenomenon, usually by means of a complexity measure -- a numerical scale to compare the complexity of different problems, but all of them fall short of expectations. The notion of Algorithmic Information Content (AIC) is a keystone in the problem. The AIC or Kolmogorov complexity of a binary string is defined as the length of the shortest program for a Universal Turing Machine (UTM) whose output is the given string [127,79,19]. Intuitively, the simplest strings can be generated with a few instructions, e.g. "a string of 100 zeros"; whereas the highly complex ones require a program slightly longer than the string itself, e.g. the string "0010111100101110000010100001000111100110". However, the minimal program depends on the encoding or "programming language" chosen; the difference between two different encodings being bound by a constant. Moreover, AIC is uncomputable. Shannon's entropy [121] is a closely related measure (it is an upper bound to AIC [80,136]). Further research on the matter of complexity measures stems from the notion that the most difficult, the most interesting systems are not necessarily those most complex according to algorithmic complexity and related measures. Just as there is no organization in a universe with infinite entropy, there is little to be understood, or compressed, on maximally complex strings in the Kolmogorov sense. The quest for mathematical definitions of complexity whose maximums lie somewhere between zero and maximal AIC [10,82,58,53] has yet to produce satisfactory results. 
Bruce Edmonds' recent PhD thesis on the measurement of complexity [33] concludes that none of the measures that have been proposed so far manages to capture the problem, but points out several important elements:

1. Complexity depends on the observer. The complexity of natural phenomena per se cannot be defined in a useful manner, because natural phenomena have infinite detail. Thus one cannot define the absolute or inherent complexity of "earth", for example. Only when observations are made, as produced by an acquisition model, does the question of complexity become relevant: after the observer's model is incorporated.

2. "Emergent" levels of complexity. Often the interactions at a lower level of organization (e.g. subatomic particles) result in higher levels with aggregate rules of their own (e.g. formation of molecules). A defining characteristic of complexity is a hierarchy of description levels, where the characteristics of a superior level emerge from those below it. The condition of emergence is relative to the observer; emergent properties are those that come from unexpected, aggregate interactions between components of the system. A mathematical system is a good example. The set of axioms determines the whole system, but demonstrable statements receive different names like "lemma", "property", "corollary" or "theorem" depending on their relative role within the corpus. "Theorem" is reserved for those that are difficult to prove and constitute foundations for new branches of the theory -- they are "emergent" properties. A theorem simplifies a group of phenomena and creates a higher level language. This type of re-definition of languages is typical of the way we do science. As Toulmin puts it, "The heart of all major discoveries in the physical sciences is the discovery of novel methods of representation, and so of fresh techniques by which inferences can be drawn" [135, p. 34].

3. Modularization with interdependencies. Complex systems are partially decomposable, their modules dependent on each other. In this sense, Edmonds concludes that among the most satisfactory measures of complexity is the cyclomatic number [33, p. 107][129], which is the number of independent closed loops on a minimal graph. The cyclomatic number measures the complexity of an expression, represented as a tree. Expressions with either all identical nodes or with all different nodes are the extremes on an "entropy" scale, for they are either trivial or impossible to compress. The more complex ones in the cyclomatic sense are those whose branches are different, yet some subtrees are reused across branches. Such a graph can be reduced (fig. 1.1) so that reused subexpressions appear only once. Doing so reveals a network of entangled cross-references. The count of loops in the reduced graph is the cyclomatic number of the expression.

http://pages.cs.brandeis.edu/~pablo/...tml/node9.html
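One cheap way to get intuition for the AIC passage above: AIC itself is uncomputable, but the output length of any lossless compressor is a computable upper bound on it. A minimal sketch (the particular strings are my own examples, not from the thesis):

```python
import random
import zlib

# Compressed length as a computable stand-in for Algorithmic Information
# Content: a highly regular string compresses to almost nothing, while a
# (pseudo-)random string barely compresses at all.
simple = b"0" * 100                        # "a string of 100 zeros"
messy = random.Random(0).randbytes(100)    # seeded pseudo-random bytes

c_simple = len(zlib.compress(simple))
c_messy = len(zlib.compress(messy))

print(c_simple, c_messy)  # the all-zeros string compresses far better
```

This also illustrates the thesis's caveat that maximal algorithmic complexity means randomness: the incompressible string is the "most complex" by this measure, yet it is the least organized.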
 I agree with the above post. Can we please get back to the main discussion, or make a new thread about the definitions of complexity? I'm either uninterested in or too dumb to understand the current debate, so let's move on please.

Recognitions:
Gold Member
 Quote by Pythagorean Perhaps this whole complexity discussion deserves its own thread. Anyway, this thesis introduction by a graduate student at Brandeis outlines the main issues with defining complexity: I still think a limb has all the complexity you could ask for, as defined quantitatively by Kolmogorov analysis.
But as the grad student says, high algorithmic complexity in fact implies randomness, not order.

 The quest for mathematical definitions of complexity whose maximums lie somewhere between zero and maximal AIC [10,82,58,53] has yet to produce satisfactory results.
So if, say, limbs scored higher on AIC than brains, what would you conclude?

Mentor
Blog Entries: 4
 Quote by apeiron But as the grad student says, high algorithmic complexity in fact implies randomness, not order. So if say limbs scored higher on AIC than brains, what would you conclude?
You just can't let go and get back on topic, can you?

 Quote by Number Nine That doesn't seem to me to be a defensible position. Almost any cellular or genetic process that occurs in the foot occurs in the head, so the differences we're looking for are large scale, emergent properties of tissues and cellular networks. Most of these (e.g. blood flow) occur in both the brain and the foot, so we're left with the fact that the brain performs large scale parallel, distributed information processing, whereas the foot does...what? Incredibly fine and detailed muscle contractions, yes, that are programmed by the brain. The statistical procedures performed by the brain alone would fill many a textbook, each with fairly sophisticated mathematical prerequisites. Did you know that an ensemble of neurons, each experiencing Hebbian synaptic changes, can perform principal component analysis? Your brain is literally deriving optimal basis vectors to best express the information fed to it by the outside world.
I don't see how those differences are more than quantitative. Take "parallel processing": if two muscle fibers in a leg contract at the same time, that fits the definition of parallel processing, which is an arbitrary definition in the first place and only indicative of chronological order, the moment at which the processing happens. If you look at each of the terms you mention, like "distributed", "information", etc., you will find they boil down to very ordinary physical processes. Not just legs, but even rocks do those things to some degree.
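For what it's worth, the Hebbian/PCA claim quoted above can be checked numerically. Oja's rule (a Hebbian update with a decay term that normalises the weights) drives a single linear neuron's weight vector toward the first principal component of its input. The data and parameters below are my own toy setup, not anything from the thread:

```python
import math
import random

rng = random.Random(0)

# Correlated 2-D inputs: dominant variance (sigma = 3) along the
# 45-degree direction, small variance (sigma = 0.5) orthogonal to it.
c, s = math.cos(math.pi / 4), math.sin(math.pi / 4)
data = []
for _ in range(5000):
    a, b = rng.gauss(0, 3.0), rng.gauss(0, 0.5)   # coords on the principal axes
    data.append((a * c - b * s, a * s + b * c))   # rotate into input space

# Oja's rule: Hebbian term y*x minus a decay y^2*w that keeps |w| bounded.
w = [rng.gauss(0, 1), rng.gauss(0, 1)]
eta = 0.002
for _ in range(10):                # several passes over the data
    for x0, x1 in data:
        y = w[0] * x0 + w[1] * x1  # neuron's linear response
        w[0] += eta * y * (x0 - y * w[0])
        w[1] += eta * y * (x1 - y * w[1])

norm = math.hypot(*w)
# The true first principal component is the 45-degree unit vector (c, s).
alignment = abs(w[0] * c + w[1] * s) / norm
print(round(norm, 3), round(alignment, 3))  # norm near 1, alignment near 1
```

The weight vector ends up (up to sign) as the unit vector along the direction of greatest variance, i.e. the first principal component, with no explicit eigendecomposition anywhere in the learning rule.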

 Quote by Pythagorean Perhaps this whole complexity discussion deserves its own thread. Anyway, this thesis introduction by a graduate student at Brandeis outlines the main issues with defining complexity: I still think a limb has all the complexity you could ask for, as defined quantitatively by Kolmogorov analysis. http://pages.cs.brandeis.edu/~pablo/...tml/node9.html
The important part I see in the paper is that complexity, or emergence, is observer-dependent. That means consciousness cannot be the result of it.

Recognitions:
Gold Member
Let's focus my quote-post a little more:

 Often the interactions at a lower level of organization (e.g. subatomic particles) result in higher levels with aggregate rules of their own (e.g. formation of molecules). A defining characteristic of complexity is a hierarchy of description levels, where the characteristics of a superior level emerge from those below it. The condition of emergence is relative to the observer; emergent properties are those that come from unexpected, aggregate interactions between components of the system.
And this is exactly what cellular differentiation entails, from the molecular genetic networks to the cellular networks to the organ/tissue and environment interaction. This surely exists in a limb. That's not to say it doesn't exist in a brain, or that the brain might even be more complex; it's just that the question is not as trivial as it's being made out to be.

As far as the OP is concerned, I think we all agree that consciousness is complex; the current discussion/disagreement is more about how complex it is compared to other living tissues. I contend that complexity isn't unique to consciousness/the brain in this regard.

 Quote by pftest The important part i see in the paper is that complexity or emergence is observer dependent. That means consciousness cannot be the result of it.
That's a bit of a misinterpretation. It's not that emergence and complexity don't exist without humans; it's that humans draw their boundaries between classifications in an arbitrary manner. The decoder and the code together can be objectified, but the code alone cannot. We often have the problem of being "trapped" in our decoder scheme. That's not to say that the things we're encoding aren't real, just that our codes/symbols/definitions fail to completely describe them.

 Quote by Pythagorean That's a bit of a misinterpretation. It's not that emergence and complexity don't exist without humans; it's that humans draw their boundaries between classifications in an arbitrary manner. The decoder and the code together can be objectified, but the code alone cannot. We often have the problem of being "trapped" in our decoder scheme. That's not to say that the things we're encoding aren't real, just that our codes/symbols/definitions fail to completely describe them.
That's actually what I mean: we set up arbitrary boundaries, and then we say consciousness happens on this side of the boundary but not the other. If the boundary is arbitrary, then it is not possible for consciousness to exist only on one side.

Recognitions:
Gold Member
 Quote by GladScientist Consciousness is still a big mystery, and I've always wondered what causes it.
Me too. Phenomenal properties/qualia/subjective awareness/the mental do seem to be inexplicable in terms of "physical" properties (as presently understood). I think I've become persuaded by the "ignorance hypothesis":

Perhaps, with a full/complete science, an ideal subject would in the future be able to discover/understand how phenomenal properties are explicable in terms of physical properties; but our knowledge of the physical world is constrained by the scientific concepts we currently possess. So it's quite possible that there exist physical properties of which we currently have no conception (perhaps even beyond our cognitive grasp, despite future advances in our "physical" understanding) and that these are required to bridge the gap. Stoljar and McGinn have presented such arguments.

Recognitions:
Gold Member
 Quote by Pythagorean Let's focus my quote-post a little more: ... A defining characteristic of complexity is a hierarchy of description levels, where the characteristics of a superior level emerge from those below it. The condition of emergence is relative to the observer; emergent properties are those that come from unexpected, aggregate interactions between components of the system.

I think a discussion of complexity needs to focus on the difference – which occurs at every level – between the things involved and the relationships that can happen between those things. For example, between atoms and the relationships atoms have with each other, which are the basis for forming molecules.

That is, instead of thinking in terms of simpler things (systems) “aggregating” to form larger things, we could imagine the hierarchy in terms of things having relationships, which allow for the formation of more complex kinds of things, that can have more complex relationships, etc.

“Thing” in this context means a structure that lasts over time and continues to be what it is, though it may also change, over time. Things typically have both constant and variable properties.

A “relationship” is something that happens between things, made of specific interactions taking place at certain times. A relationship between two things can last over time, but only to the extent there is a repetitive pattern in their interactions. The characteristics of relationships are not properties they possess in themselves, but have to do with the effect these patterned interactions have on the things involved.

When it comes to consciousness, for example – if we look at the brain as a thing, an organ consisting of an aggregate of cells, its complexity is perhaps comparable to other organs. What makes the brain different is the kind of relationships that happen through synaptic connections between neurons. These relationships support a kind of real-time information processing that’s specific to the neural system, which operates at a much higher level of complexity than anything else in the body, or anything else that we know of.

The new kind of “things” that these neural relationships make possible are animals. Whether or not we think of animals as “conscious” is purely a matter of how we like to use that word. But we are not yet at the level of human consciousness.

The kind of simple unity that we indicate with the words “I” and “You” – that we experience and think about as “subjective awareness” – doesn’t emerge out of any characteristic of the neural system, per se, though our brains have obviously evolved to support it. But it can develop only in a certain kind of relationship that can happen between two animals.

So far as we know, this kind of relationship is highly evolved only between humans, though many other animals have relationships with some of the same characteristics. But to the extent we humans learn to talk to each other and think about each other, we also learn to talk to ourselves and think about ourselves. This kind of "internal self-awareness" is not comparable with whatever internal processing may happen in other animals. We can call both "consciousness" if we want to, but we're talking about two different things.

And of course, it's out of this kind of communicative I-You relationship that there arises a whole new hierarchy of “things” like words and ideas and corporations.

I guess my point is that a word like “complexity” doesn’t capture very well the profound differences that can emerge in the hierarchy of systems. At each level, the types of complexity that pertain to things is quite different from the kinds of complexity that can happen in their relationships.

 Quote by ConradDJ I think a discussion of complexity needs to focus on the difference – which occurs at every level – between the things involved and the relationships that can happen between those things. For example, between atoms and the relationships atoms have with each other, which are the basis for forming molecules. That is, instead of thinking in terms of simpler things (systems) “aggregating” to form larger things, we could imagine the hierarchy in terms of things having relationships, which allow for the formation of more complex kinds of things, that can have more complex relationships, etc. “Thing” in this context means a structure that lasts over time and continues to be what it is, though it may also change, over time. Things typically have both constant and variable properties. A “relationship” is something that happens between things, made of specific interactions taking place at a certain times. A relationship between two things can last over time, but only to the extent there is a repetitive pattern in their interactions. The characteristics of relationships are not properties they possess in themselves, but have to do with the effect these patterned interactions have on the things involved. When it comes to consciousness, for example – if we look at the brain as a thing, an organ consisting of an aggregate of cells, its complexity is perhaps comparable to other organs. What makes the brain different is the kind of relationships that happen through synaptic connections between neurons. These relationships support a kind of real-time information processing that’s specific to the neural system, which operates at a much higher level of complexity than anything else in the body, or anything else that we know of. The new kind of “things” that these neural relationships make possible are animals. Whether or not we think of animals as “conscious” is purely a matter of how we like to use that word. But we are not yet at the level of human consciousness. 
The kind of simple unity that we indicate with the words “I” and “You” – that we experience and think about as “subjective awareness” – doesn’t emerge out of any characteristic of the neural system, per se, though our brains have obviously evolved to support it. But it can develop only in a certain kind of relationship that can happen between two animals. So far as we know, this kind of relationship is highly evolved only between humans, though many other animals have relationships with some of the same characteristics. But to the extent we humans learn to talk to each other and think about each other, we also learn to talk to ourselves and think about ourselves. This kind of "internal self-awareness" is not comparable with whatever internal processing may happen in other animals. We can call both "consciousness" if we want to, but we're talking about two different things. And of course, it's out of this kind of communicative I-You relationship that there arises a whole new hierarchy of “things” like words and ideas and corporations. I guess my point is that a word like “complexity” doesn’t capture very well the profound differences that can emerge in the hierarchy of systems. At each level, the types of complexity that pertain to things is quite different from the kinds of complexity that can happen in their relationships.
When you talk about the relationships, presumably you are referring to the actual physical forces between particles, right? So while we can create an elaborate hierarchy of descriptions, physically speaking the system remains a flat collection of particles and forces interacting. I think it's a good idea to separate our human interpretations and descriptions of an object from the actual physical properties it has. A good example is that of a computer. We talk about it as if it calculates, has memory, does information processing, etc. Physically it does none of those things, unless we want to say that all particles calculate and have memory.