Noesis (post #36)
DaveC426913 said: Your viewpoint ignores the idea that new additions can be imbued with identity once added.
i.e. if half the planks on Theseus' ship are replaced with new planks, it does not follow that only half the ship is Theseus' ship; it is more reasonable that the new planks are inaugurated into the "Theseus' Ship Club".
You're right, I was unclear. I was considering the question raised by Archosaur yet addressing it in reference to the Theseus Paradox, which is different. I had the 'subtractive concept' in mind.
I don't think this will work. I would not have to remove many pieces from a computer program for it to stop working. In fact, the removal of a single character - virtually any single character I might care to choose - is quite likely to be fatal. I would then erroneously conclude that that single character is the most important component in the program.
Likewise, I would not have to remove many components of a human for it to stop working, either. I thus cannot conclude that the particular component I last removed marks the difference between life and death.
Same would likely apply to consciousness.
Such is the nature of complex, interdependent systems. I think these are excellent examples of 'the whole is greater than the sum of its parts'.
The conclusion would be unsubstantiated, since all one can conclude is that the single character is an integral part of the program, certainly not the most important one. Removal of any character would yield the same conclusion. But your point is duly noted: this instrument is not sharp enough to isolate relative importance or causation; it can only identify what is necessary for proper function in a particular instance.
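The single-character example above can actually be run. The sketch below is my own illustration (the toy `source` program is invented for the purpose): it deletes each character in turn and checks whether the mutant still compiles and executes, showing that many removals are fatal while some (e.g. stray whitespace) are harmless.

```python
import contextlib
import io

# Toy program to ablate, one character at a time (invented for illustration).
source = "total = 0\nfor n in range(5):\n    total += n\nprint(total)\n"

def survives(code: str) -> bool:
    """Return True if the mutated code still compiles and runs without error."""
    try:
        with contextlib.redirect_stdout(io.StringIO()):  # silence mutant output
            exec(compile(code, "<mutant>", "exec"), {})
        return True
    except Exception:  # SyntaxError, NameError, TypeError, ... all count as fatal
        return False

# Count how many single-character deletions break the program.
fatal = sum(1 for i in range(len(source))
            if not survives(source[:i] + source[i + 1:]))

print(f"{fatal} of {len(source)} single-character removals are fatal")
```

Each fatal deletion tells us only that the character was necessary in context, which is exactly the limitation noted above: the test cannot rank characters by importance.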
I don't see why we couldn't remove 'pieces' of the brain until we were able to identify the minimum material necessary in order to have consciousness. If consciousness still exists with something missing, then whatever is missing wasn't necessary for consciousness. An implicit assumption here is that consciousness is binary, or that we have some mechanism for 'measuring consciousness.' But again, the same problem occurs, since the system would likely collapse before we were able to glean any significant data. And perhaps even more problematic, the data would all be correlative: I might conclude arms are necessary for consciousness if I lop one off, but it's really blood perfusion that I should be considering, etc. However, the technique can have merit: brain ablations and, more recently, transgenic mice have proven very insightful for understanding various systems.
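The "remove pieces until only the necessary minimum remains" procedure can be sketched as a greedy subtraction loop. This is a toy of my own construction: the component names and the `works` predicate are hypothetical stand-ins, loosely analogous to ablation studies or delta-debugging-style minimization.

```python
# Greedy "subtractive" minimization: try removing each component in turn;
# if the system still works without it, leave it out for good.
# The system here is a toy stand-in that "works" only while a designated
# core subset of components remains present (hypothetical, for illustration).

CORE = {"cortex", "brainstem", "blood_supply"}          # assumed necessary parts
ALL = CORE | {"left_arm", "right_arm", "appendix"}      # plus removable extras

def works(components: set) -> bool:
    """Toy test: the system functions iff every core component remains."""
    return CORE <= components

def minimize(components: set) -> set:
    remaining = set(components)
    for part in sorted(components):      # try ablating each part in turn
        trial = remaining - {part}
        if works(trial):                 # still works? the part wasn't necessary
            remaining = trial
    return remaining

print(minimize(ALL))
```

Note the caveat raised above: a single greedy pass only shows that the surviving parts were necessary under this particular removal order; interacting components could make a different order yield a different "minimal" set, and the result is correlative, not causal.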
I think the main hindrance to the consciousness problem is our limited resolution and the vast number of possible permutations of removals. The inverse problem would be to build consciousness up from the molecular level, and it comes with similar problems.
A reductionist perspective fails to understand systems where 'the whole is greater than the sum of its parts' only when it reduces too far. This certainly limits the efficacy of reductionist technique, but it can at least serve to identify which portions of a system exhibit emergent properties due to interacting components.
To be clear, I think the idea is almost embarrassingly simple: remove things until it breaks, and then infer that, in some form, whatever was just removed is necessary. It lacks discriminatory power, e.g., the program example, but it can be used to solve problems. And finally, it is related philosophically to what the existence of things means, and it forces us to be a bit more careful about the definitions we use, something that seems to be happening with our definition of life.