Mentat
Wow, this forum's changed a lot since the last time I logged on...
Anyway, since I may not be able to get back on the Forums again, I wanted to make sure that I explained something which I only sort of glossed over in previous threads, and which probably would have been useful in those threads, had it been further developed.
You see, while I spent so much time showing that the supposed emergent property of "subjective experience" doesn't exist (indeed, its only definition is blatantly circular, so any argument built on it will inevitably be a straw man), I forgot to emphasize that there is an emergent property related to consciousness, one which might (just maybe) explain why consciousness does not appear to be perfectly reducible. That emergent property, btw, is nothing more than a master algorithm.
An algorithm describes the relationship of many parts in terms of one master "plan" of sorts that the parts are following (though, usually, the parts are not so much obeying a "plan" as the "plan" is being deduced from their behavior). Anyway, there are billions (perhaps trillions) of individual processes occurring between the individual units of thought in the neocortex, so it would be impossible to comprehend each individual action. Instead, the process is viewed as a whole, and the emergent algorithm is treated as its own entity (going by such names as "subjective experience" or simply "consciousness").
Because the algorithm describes the behavior of all (or many) of the parts, in a pattern that they form together, it is not a property of any individual part, which is what gives the sense of irreducibility.
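A toy illustration of that point (my own example, not from the earlier threads): Conway's Game of Life. Every cell obeys one trivial local rule, yet a "glider", a five-cell pattern that travels diagonally across the grid, emerges. The glider is not a property of any single cell; it exists only at the level of the algorithm the cells enact together, exactly the sense in which I mean the "master algorithm" is emergent but still fully reducible to its parts.

```python
from collections import Counter

def step(live):
    """Advance one generation. `live` is a set of (x, y) live-cell coordinates."""
    # Count how many live neighbours each cell (live or dead) has.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1)
                     for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # The entire local rule: a cell is alive next tick iff it has exactly
    # 3 live neighbours, or 2 live neighbours and is already alive.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# The standard glider:
#   .O.
#   ..O
#   OOO
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}

cells = glider
for _ in range(4):
    cells = step(cells)

# After 4 generations the same shape reappears, shifted one cell down-right.
# No cell "knows" this; the motion is a pattern in their collective behavior.
assert cells == {(x + 1, y + 1) for (x, y) in glider}
```

Note that nothing in `step` mentions gliders at all; "glider" is a name we give to a regularity we deduce from the cells' behavior, just as "consciousness" names a regularity deduced from neural behavior.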
Anyway, I figured I should state plainly what I had already implied sporadically. You see, this emergent (and seemingly irreducible) property has indeed been recognized by the scientists I so often quote (Edelman, Tononi, Calvin, Dennett, etc.). Dennett was referring to it when he formulated the intentional stance. Calvin and Edelman (along with the other "Selectionist" scientists) are reducing the algorithm to more fundamental patterns by relating it to a Darwinian process.
The recognition that algorithmic structures are often treated as emergent, separate properties is also helpful in other areas of philosophy, btw. For example: life. A thing is alive if it performs the functions of a living thing, but none of those functions is (alone) "living"; only the collection thereof, and the resulting algorithm describing their behavior, is.