[QUOTE=ryan]Perhaps you could propose your own uses of the word "consciousness".[/QUOTE]
As I said: "subjective experience".
That's it.
All the other philosophical and emotional baggage can go out the window. We can define detection, memory, and learning in a mechanical way that any set of materials is capable of if arranged correctly.
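To make that concrete, here is a toy sketch of my own (not from any of the papers I've linked; the class name, numbers, and update rule are just illustrative) of what I mean by defining detection, memory, and learning purely mechanically:

[CODE]
# Toy illustration: "detection", "memory", and "learning" as purely
# mechanical operations on state. Nothing here requires neurons,
# let alone subjective experience.

class AssociativeUnit:
    def __init__(self, threshold=0.5, learning_rate=0.1):
        self.threshold = threshold       # detection: a fixed level to compare against
        self.learning_rate = learning_rate
        self.weight = 0.0                # memory: a persistent piece of state

    def detect(self, signal):
        # Detection: the signal either crosses the threshold or it doesn't.
        return signal >= self.threshold

    def learn(self, cue, outcome):
        # Learning: adjust the stored association from prediction error
        # (a Rescorla-Wagner-style update).
        prediction = self.weight * cue
        self.weight += self.learning_rate * cue * (outcome - prediction)

    def respond(self, cue):
        # Behavior: respond to the cue according to the stored association.
        return self.detect(self.weight * cue)


unit = AssociativeUnit()
for _ in range(50):               # repeatedly pair the cue with the outcome
    unit.learn(cue=1.0, outcome=1.0)
print(unit.respond(1.0))          # True: the cue alone now triggers a response
[/CODE]

Nothing in that description appeals to experience; it's just state plus update rules, and any arrangement of matter that implements those rules counts.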
Once you start talking about "free will" or a "soul" or a special, separate "you", you are avoiding a mechanistic, physical description. Our measurable behavior is a complex electrochemical interaction based on a stimulus, just like a plant's. Molecular networks in primitive life forms can display associative learning and decision-making, too:
https://www.physicsforums.com/showpost.php?p=3226519&postcount=15
Again, though, you have to separate this from subjective experience. That's why I prefer to drop the word consciousness: everyone has emotional and philosophical baggage associated with it that they can never lay on the table.
When it comes to subjective experience, we have no clue how it arises. It's all the other things that you associate with consciousness (memory, learning) that, mechanistically, can equally describe a computer or another life form. You just call your memory/learning consciousness because you can ascertain that your subjective experience is attached to it.
[QUOTE=ryan_m_b]I'm not being passive or avoiding the argument but I find it highly off putting when, on a forum meant to be for constructive discussion, people start throwing up straw mans and putting words in my mouth. You really seemed keen there to suggest that I was trying to bring religion into a discussion when it is only your misinterpretation (or if you didn't understand what I meant you could have asked) and eagerness to battle down such a notion.[/QUOTE]
I never brought up religion. To me, you implied a nonphysical agent must be at work in humans when you said "just chemicals" for plants. Nonphysical explanations of consciousness are not religious; they are dualist. But all I accused you of was an implication, which is a process that involves my judgment, so it's not a "strawman" argument. In fact, it's an opportunity for you to clarify things. Are you operating as a physicalist, a dualist, or from another basis of assumptions? I'm a physicalist, so you know the assumptions I work from.
I suspect you're a physicalist too, since you keep saying emergence. I am actually quite familiar with the kinds of papers that come out of Physical Review E and AIP: Chaos. What you quickly learn if you want to publish in such journals is that "oh, it emerges" is not enough. Emergence is arbitrary and degenerate. We have next to nothing in the way of general rules of emergence. It's a fascinating, nascent discipline. It's the frontier, in my eyes (speaking of the "nonlinear sciences"; see AIP: Chaos's "about" page):
http://chaos.aip.org/about/about_the_journal
It is very much in an exploratory phase.
So if you want this conversation to be constructive, then ditch this attitude:
[QUOTE]You're also really labouring this idea that I should be able to define everything about consciousness and if I can't then what I am saying is wrong.[/QUOTE]
You're exaggerating, of course. It only takes one fundamental test, not "everything about consciousness". We know plenty "about" consciousness, but that's a very ambiguous, general statement.
Apeiron shaped the request better than I did:
[QUOTE=apeiron]To get anywhere, you have to instead focus on some core definitional action that is then properly generalisable.[/QUOTE]
This is the kind of answer I'm looking for. I don't think "predictive modeling" would satisfy you, though, since most lifeforms can be viewed as predictive modelers judging by the only measure you can make: behavior (p. 6, paragraph 2 of the plant apices paper I posted earlier).
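To illustrate (a toy example of my own, not anything from that paper), a system as trivial as this already counts as a "predictive modeler" if behavior is the only measure you have:

[CODE]
# Toy "predictive modeler": keeps a running estimate of the next stimulus
# and lets prediction error drive its behavior. Judged by behavior alone,
# plenty of lifeforms look like some version of this.

class PredictiveModeler:
    def __init__(self, learning_rate=0.2):
        self.expectation = 0.0
        self.learning_rate = learning_rate

    def step(self, stimulus):
        error = stimulus - self.expectation              # surprise
        self.expectation += self.learning_rate * error   # update the internal model
        return error                                     # behavior driven by surprise


modeler = PredictiveModeler()
for s in [1.0, 1.0, 1.0, 0.0, 1.0]:
    print(round(modeler.step(s), 3))   # errors shrink as the stimulus becomes predictable
[/CODE]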
All I'm saying is that it sounds like you have information that I don't, which allows you to confidently make judgments that I cannot.
It's the fundamental question: "how can matter have subjective experience?"
Saying it emerges doesn't answer the question; it creates thousands more questions (see Eve Marder's paper, http://www.nature.com/neuro/journal/v14/n2/full/nn.2735.html, for an example of biological degeneracy, though I can point out many more in proteomic models as well).
Yes, it emerges, I agree, but that's already the approach I take to the question. What's next? How does it emerge? What is the underlying informational structure of a system that is conscious versus one that is not? For me, it starts with understanding information flow and structure in physiological neural networks. And the more I work on my thesis (on that same topic), and the more I recognize how important the chemical signaling networks in the human body are to brain function and global regulation (via transcription factors, which themselves respond to the stimulus), the more I question the simplicity of the question.
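Just to give a flavor of what I mean by "information flow" (a crude toy calculation of my own, not a result from my thesis; real analyses use far more careful estimators and directed measures like transfer entropy):

[CODE]
# Crude sketch: estimate the mutual information (in bits) between two
# binarized signals, e.g. activity traces from two nodes of a network.
# The underlying question: how much does knowing one signal reduce
# our uncertainty about the other?

import math
from collections import Counter

def mutual_information(x, y):
    n = len(x)
    px, py, pxy = Counter(x), Counter(y), Counter(zip(x, y))
    mi = 0.0
    for (a, b), count in pxy.items():
        p_ab = count / n
        mi += p_ab * math.log2(p_ab / ((px[a] / n) * (py[b] / n)))
    return mi

x = [0, 1, 1, 0, 1, 0, 1, 1, 0, 0]
y = [0, 1, 1, 0, 1, 0, 1, 0, 0, 0]   # mostly follows x
print(round(mutual_information(x, y), 3))
[/CODE]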