
Can mental events cause?

  1. Dec 21, 2009 #1

    Q_Goest

    User Avatar
    Science Advisor
    Homework Helper
    Gold Member

    The primary problems with mental causation are nicely summed up by Yablo (“Mental Causation”, The Philosophical Review, Vol. 101, No. 2, April 1992).
    http://www.jstor.org/pss/2185535
    Yablo defines dualism, roughly, as the thesis that mental and physical phenomena are distinct existents.
    In other words, the physical description of the color red as it appears in the mind will be different from the mental description of the color red (distinct), but both should be taken as phenomena that actually occur (existents). The physical description would discuss which neurons and regions of the brain are active when the phenomenon of ‘red’ occurs, while the mental description would focus on explaining the qualia.

    Take, for example, an allegedly conscious computer. For the sake of clarity, let’s model a computer as a large collection of switches, which is basically all a computer is. At the heart of every modern computer is the transistor, which is nothing more than a switch.

    We can examine a computer that reports that it sees the color red when looking at, for example, a fire truck. This computer has a camera for eyes and a speaker for a mouth, so when the camera is turned toward a fire truck, the speaker reports ‘red’. But did it report red because it is actually experiencing red, or because its circuit is designed such that red is reported? None of the transistors in the computer are influenced by any ‘experience’ of redness. Each transistor only changes state because an electrical current is either applied or removed. And per computationalism, the experience of the color red is a phenomenon produced by the interactions of the transistors; it is not a phenomenon produced by any given transistor.
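    To make the thought experiment concrete, here is a toy sketch (my own illustration; the threshold numbers are arbitrary assumptions, not anything from the literature) of such a ‘red reporter’. Every step from input to output is a fixed physical mapping, and no ‘experience’ appears anywhere in the causal chain:

    ```python
    # Toy 'red reporter' (illustrative only; thresholds are arbitrary).
    # The output is a fixed function of the input pixel - nothing in the
    # causal chain refers to an experience of redness.

    def report_color(pixel_rgb):
        """Report 'red' iff the red channel dominates: a pure input-output mapping."""
        r, g, b = pixel_rgb
        return "red" if r > 150 and r > 2 * max(g, b) else "not red"

    print(report_color((220, 30, 25)))  # fire-truck pixel -> 'red'
    print(report_color((30, 220, 25)))  # grass pixel      -> 'not red'
    ```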

    For the computer, we have physical events (changes in transistor states) which have physical causes (the application or removal of electric current). Mental events, therefore, are not causally relevant; they are epiphenomenal.

    Appeals to mental causation via quantum phenomena may also be problematic. Very briefly: if a quantum physical event, such as protein folding, were somehow influenced by a mental event, then the probability of the physical event would have been influenced by the mental event. Suppose a quantum physical event has a 50/50 chance of occurring, and a mental event influences it such that it no longer occurs with 50/50 odds. This might violate entropy, since a system could then become more ordered because of mental events.
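    To see the entropy worry in numbers (my own sketch, using Shannon entropy in bits as a stand-in for thermodynamic entropy): biasing a 50/50 event away from even odds lowers the entropy of its outcome distribution, i.e. the outcomes become more ordered:

    ```python
    import math

    def binary_entropy(p):
        """Shannon entropy (bits) of a two-outcome event with probability p."""
        if p in (0.0, 1.0):
            return 0.0
        return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

    print(binary_entropy(0.5))  # 1.000 bit  - the unbiased 50/50 event
    print(binary_entropy(0.7))  # ~0.881 bit - after a hypothetical 'mental' bias
    ```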

    What’s your view? How can mental events be reconciled with physical events? Please provide references to the literature if at all possible. We don’t want personal theories.
     
    Last edited by a moderator: Apr 24, 2017
  3. Dec 21, 2009 #2
    My objections to his argument:

    (2) I don't think this is an established fact. If we knew every cause and effect, then everything would be known and science would have nothing left to investigate. But we don't know everything, so this is not the case. We don't know if and how strings work, whether there are other universes, what happens in black holes, what started the universe, how gravity works, how QM and GR can fit together, where the laws of physics came from, how atoms work, etc. (For an example of an unexpected atomic interaction, see http://www.sciencedaily.com/releases/2008/07/080702132209.htm, or consider the reasons they built the Large Hadron Collider.) So there is room for unknown causal powers (either physical or mental) in our universe.

    (3) is avoided by adopting a monism such as materialism, panpsychism, idealism, or something else. With materialism: if mind = matter, then mind has the same causal powers as matter. With panpsychism: we can talk about "the physical" as if it is unconscious, but we don't really know. A physical body might operate according to a known mechanism, yet be conscious. There is no logic that states that mechanistically/deterministically behaving objects cannot be conscious, or that consciousness cannot cause mechanistic/deterministic behaviour.

    The issue with entropy seems to have more to do with free will than with mind in general. And if a mind with free will made a system more ordered in one location, yet this caused a decrease in order in another location, would it violate entropy? If not, then a free-willed mind has much room to operate in concordance with entropy.
     
    Last edited: Dec 21, 2009
  4. Dec 21, 2009 #3

    Q_Goest

    User Avatar
    Science Advisor
    Homework Helper
    Gold Member

    Hi pftest,
    I’d rather not go through everyone’s own arguments and objections. Please review some of the literature on the topic.

    Regarding atomic interactions, that’s a non-starter. Those are physical processes that can be objectively measured, and if they are random in nature then, as I pointed out before, the statistical chances of those processes occurring can be quantified. Radioactive decay, for example, has a well-defined statistical rate of occurrence. No one has ever suggested that the likelihood of a physical process occurring depends on someone’s mood, for example, which is what we would need to find if mental causation were true.
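    For instance, a toy simulation (mine, purely illustrative) of a decay-like process shows what I mean: the observed frequency is fixed by a single physical constant, and mental causation would require that constant to shift with someone's mental state - something that has never been observed:

    ```python
    import random

    # Toy decay-like process (illustrative only): each trial 'decays' with a
    # fixed physical probability, and the empirical rate tracks that constant
    # and nothing else.
    random.seed(0)
    p_decay, trials = 0.01, 100_000
    decays = sum(random.random() < p_decay for _ in range(trials))
    print(decays / trials)  # ~0.01, determined entirely by p_decay
    ```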

    This leaves open the explanatory gap. Why should there be any feeling or phenomenal experience at all? How can we know what a computer experiences since everything a computer does can be FULLY explained in physical terms? This argument can be extended to the mind given the present computational paradigm. What we say and how we act can be explained by referencing the governing physical interactions between neurons and other physical interactions within the body. It also leaves open the issue of reliable reporting that we discussed in your other thread.

    I’d suggest doing a web search for:
    http://www.google.com/search?hl=en&q=mental+causation&aq=f&oq=&aqi=g2
    http://www.google.com/search?hl=en&source=hp&q=explanatory+gap+&aq=f&oq=&aqi=g1
     
    Last edited by a moderator: Apr 24, 2017
  5. Dec 21, 2009 #4
    I'm sorry I don't have any references, but I shall look for some later. I will have a go at it anyway; I believe it is not against the forum rules to do so.

    I brought the atomic interactions up because they show that even in atoms there is room for unknown causal powers. That room itself is enough to dismiss the idea that there is no room for mental causation.

    You are right; I don't think materialism solves this, and I should not have mentioned it.

    What I was trying to say is that "the physical" need not be in conflict with "the mental". When we have an equation that predicts that an object will move in a straight line, it doesn't follow that the object has no mind.

    The statements "the object will move in a straight line" and "the object has a mind" are not in conflict with each other. Similarly, I'm saying that physical interactions need not be in conflict with mental ones. The end result is then panpsychism or neutral monism (but not materialism, as I mistakenly said).
     
    Last edited: Dec 21, 2009
  6. Dec 21, 2009 #5

    apeiron

    User Avatar
    Gold Member

    It may be important here to distinguish between a computer - a Turing machine - and machines more generally. What you are describing is a rather hybrid system that, I believe, confuses the essential issues.

    So a Turing machine is the familiar tape and gate system. If it is really "computing" that your machine is doing, then all those functions and activities can be reduced to the making and erasing of marks on an infinite tape. You can then ask the relevant questions of this most minimal model and see how they stack up.

    Do this and you can see for example that you now have no clear place to insert your camera input and your speaker output. You can write these actions into the program as data on the tape. But the point is that YOU have to. The computer is not in a dynamic relationship with the world as a basic fact of its nature.

    Now you can begin to think about how you would build up some actual modelling relation with a world - how you would build something more like a neural network that could learn from experience. What is it exactly that you are adding that was missing?

    To cut a long story short, the whole epiphenomenal/dualistic debate arises because we insist on taking a strictly bottom-up, built-from-smallest-components approach to thinking about complex adaptive systems. Complexity involves both its forms - its global organisation - and its substances, the local material out of which things get made (which includes the notion of information, or transistor bits, or marks on an infinite tape).

    With neural networks, we are beginning to see signs of a global ongoing state that acts as a living context to the system's moment-to-moment reactions - the ideas or long-term memories that frame the impressions or short-term memories (see Grossberg, Rao, Hinton, MacKay, Attneave, or anyone working on generative neural nets, forward models, Helmholtz machines, deictic coding, etc).

    A Turing machine has no global organisation, no hierarchy of operational and temporal scale. So there is nothing like a top-down causation guiding and constraining its actions. There is no internal meaning or semiosis. It is only we programmers who found the marks on the tape meaningful when we first wrote them and when we looked again at how they were rearranged.

    All this is perfectly obvious from a systems perspective and so these kinds of philosophical traumas have no real content. There is a problem of coming up with an adequate model of top-down causality as it applies to conscious human brains - it is a hard ask - but not an issue of actual causal principle.

    To add a little reality to your hybrid machine, what if you allowed that it was sufficiently complex to be an anticipatory device?

    This would mean that before a fire truck hove into sight, it would be in a state of prevailing expectation of not seeing red in that particular part of the visual field. It would be expecting to see whatever colour of whatever was already in that place. The red fire truck would then be a surprise - although hearing its sirens would prime it for the sight of the truck coming around the corner (and if the truck were painted green, that would be an even bigger surprise).

    And so on. The point being that the "mind" is always there as a global state of anticipation and prepared habits. New information can be taken in, but there is always a prevailing context framing it. This is what a computer simulation would have to replicate in all its gloriously complex detail. And such a simulation would have even less to do with the canonical Turing machine than a neural net does. It would in fact have to have the real-life dynamism of a human brain embedded in a human body.
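    In miniature (a hedged sketch of my own, not a model from the literature): treat the prevailing expectation as a prior over what will occupy a region of the visual field, and the "surprise" of an observation as its information content under that prior. Hearing the siren first would shift the prior toward red, lowering the surprise:

    ```python
    import math

    # Toy anticipation sketch (the probabilities are made-up assumptions).
    # The prior encodes the system's prevailing expectation for one region
    # of the visual field; surprise is the information content of what arrives.

    prior = {"grey": 0.7, "green": 0.2, "red": 0.1}

    def surprise_bits(observation, prior):
        return -math.log2(prior[observation])

    print(surprise_bits("grey", prior))  # ~0.51 bits: the expected street scene
    print(surprise_bits("red", prior))   # ~3.32 bits: the fire truck is a surprise
    ```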

    So the standard trick of philosophical dualists is to say we can't imagine a Turing machine being conscious. Well neither can a systems theorist. And what is completely lacking in the one, and completely necessary in the other, is this hierarchy of scale, this interaction between bottom-up constructing causality and top-down contextualising, or constraining, causality.
     
  7. Dec 21, 2009 #6

    Q_Goest

    User Avatar
    Science Advisor
    Homework Helper
    Gold Member

    Hi apeiron,
    Thanks for the response. I realize some folks feel a systems approach and some form of downward causation are instructive. The paper "Physicalism, Emergence and Downward Causation" by Campbell and Bickhard, for example, is right up your alley. They discuss mental causation and reference Kim. To me, it's all mere handwaving.

    I'm on the other side of the fence. Craver and Bechtel2, I think, do a nice job of getting in between the two camps and provide an argument that you might find interesting. They suggest a way of thinking about "top-down causation" without resorting to downward causation: they suggest that interlevel relationships are only constitutive. To those taking a system-level approach they say, "...those who invoke the notion of top-down causation ... owe us an account of just what is involved." I see very few individuals attempt to provide that account, and those that do have not been able to prove any kind of downward causation. Bedau1 discusses weak and strong emergence as well as downward causation. Bedau suggests that "weak emergence is all we are entitled to" and does a very good job pointing out that "emergent macro-causal powers would compete with micro-causal powers for causal influence over micro events, and that the more fundamental micro-causal powers would always win this competition." I see no evidence to challenge that.

    Regardless of which camp you’re in, the systems approach doesn't do anything to change the conclusion. Every single transistor, switch, or other classical element of any neural net only ever changes state because of local causal actions. In the case of a transistor, it's the current applied to the transistor. It really is that simple!

    1. Bedau; http://people.reed.edu/~mab/publications/papers/principia.pdf
    2. Craver and Bechtel; http://philosophyfaculty.ucsd.edu/faculty/pschurchland/classes/cs200/topdown.pdf
     
    Last edited by a moderator: May 4, 2017
  8. Dec 21, 2009 #7

    apeiron

    User Avatar
    Gold Member

    Most philosophers don't take it seriously and yet most mathematical biologists do. Interesting that o:).

    Here is my own set of refs from an earlier thread on this....
    (https://www.physicsforums.com/showthread.php?p=2469005&highlight=emmeche#post2469005)

    http://www.ctnsstars.org/conferences...0causation.pdf

    http://www.buildfreedom.com/tl/tl20d.shtml

    http://people.reed.edu/~mab/papers/principia.pdf

    http://www.nbi.dk/~emmeche/coPubl/2000d.le3DC.v4b.html

    http://www.nbi.dk/~emmeche/coPubl/97e.EKS/emerg.html

    http://pespmc1.vub.ac.be/CSTHINK.html

    http://www.calresco.org/

    http://books.google.co.nz/books?id=N...ollege&f=false

    http://www.nbi.dk/~emmeche/pr/DC.html

    http://www.isss.org/hierarchy.htm

    https://webspace.utexas.edu/deverj/p...bingmatter.pdf

    Not so fast, cowboy. If you are dealing with hierarchically organised systems, you can't just blithely label all the causality as "local". The point is that there are in fact prevailing long-term states of activity across the network that act as contextual constraints.

    The on-ness or off-ness of a particular transistor gate is the result of events that happened in the past, and is predicted (with a certain weight) into the future.

    The on-ness or off-ness of a particular transistor gate has both some level of "now" meaning, relating to some current spatiotemporal pattern of activation, and also some level of more general meaning as part of long-term memory patterns.

    If you look at the transistor gate from an outside point of view - and make the choice to measure its isolated state at some isolated instant in its history - then it may indeed seem you are only seeing bottom-up local causality. But then you are precisely missing the real deal: the internal systems perspective by which every local action has meaning because it occurs within a running global context.

    If philosophers studied biology and systems theory, this would not be such a mystery.

    There are honorable exceptions of course like Evan Thompson.

    http://individual.utoronto.ca/evant/MBBProblem.pdf
     
    Last edited by a moderator: May 4, 2017
  9. Dec 21, 2009 #8

    Q_Goest

    User Avatar
    Science Advisor
    Homework Helper
    Gold Member

    Hi apeiron,
    This is okay. Why "certain weight" though? Are you suggesting computers are not deterministic?
    Would you agree that none of this changes the fact that transistors only ever change state because a current is applied? And if that's true, then mental states, per the standard computational paradigm, still don't influence individual transistors any more than a brain state influences individual neurons. Macro-states at the classical scale simply don't influence micro-states, except that macro-states provide boundary conditions that put limits on the potential micro-states. I only see bottom-up causality (in classical mechanics) because that's all there is. That's why engineers, scientists, meteorologists, etc. use finite element analysis for all kinds of structural, fluid, heat transfer, and electromagnetic problems - all classical phenomena. They can all be dealt with using local, bottom-up causation. That's the "real deal".
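    As a sketch of what I mean (my own toy code, a finite-difference cousin of the finite element methods engineers actually use): 1D heat conduction, where each node's next state depends only on its immediate neighbours. The macro-scale gradient emerges, but every update is strictly local:

    ```python
    # Toy 1D heat conduction (illustrative only): every node updates from its
    # immediate neighbours alone - local, bottom-up causation all the way down.

    T = [0.0] * 10
    T[0] = 100.0    # boundary condition: one end held hot
    alpha = 0.25    # diffusion number (must be <= 0.5 for stability)

    for _ in range(200):
        T = [T[0]] + [
            T[i] + alpha * (T[i - 1] - 2 * T[i] + T[i + 1])
            for i in range(1, len(T) - 1)
        ] + [T[-1]]

    print([round(t) for t in T])  # smooth gradient from 100 down toward 0
    ```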

    Regarding chemistry, biology, and condensed matter physics, there are many instances of new and unexpected things happening. There are some articles in the literature that make valid cases for there being non-separable physical states at or below the level where classical mechanics gives way to quantum mechanics. We might find some common ground there, but I doubt we'll ever see eye to eye on everything.
     
  10. Dec 21, 2009 #9

    apeiron

    User Avatar
    Gold Member

    Computers are certainly designed to be as deterministic as possible - that is part of their engineering spec. And of course we know how difficult this is becoming as chip gates get down to the nano-scale.

    But no. The nodes of neural nets are weighted in the sense that they do not switch indiscriminately but on the basis of their learning history, just like the real neurons they are meant to vaguely simulate.

    I was saying that if you insist on only measuring systems in simple ways, you will of course only extract simple measures of what is going on.

    Your question is "why did this transistor switch?" You say it is only because of some set of inputs arriving at that moment. I say it is because of some history of past learning, some set of expectations about what was likely to happen, some current context in which its switching state makes co-operative and cohesive sense.

    You then say, well, I'm looking at the transistor and I can't see these things. I reply that that is because all the fancy stuff that is actually making things happen has been hidden away from your gaze at the level of the software.

    In a realistic neural simulation, for example, there would have to be some equivalent of neural priming, neural binding, population voting, and evolving responses. Some thousands of transistors (a computational sub-system) would be needed to even begin representing this necessary global complexity in hardware.
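    To illustrate the contrast in miniature (my own toy sketch, not from any paper): a plain gate switches purely on its present inputs, while a trained node switches on the same inputs as weighted by its learning history, so its "decision" carries a trace of the past:

    ```python
    # Toy contrast (illustrative only): fixed gate vs node with a learning history.

    def and_gate(a, b):
        return int(a and b)  # switches on present inputs alone; no history

    class LearnedNode:
        def __init__(self):
            self.w, self.bias = [0.0, 0.0], 0.0

        def output(self, a, b):
            return int(self.w[0] * a + self.w[1] * b + self.bias > 0)

        def train(self, examples, lr=0.1, epochs=20):
            # Perceptron rule: the weights accumulate the node's history.
            for _ in range(epochs):
                for (a, b), target in examples:
                    err = target - self.output(a, b)
                    self.w[0] += lr * err * a
                    self.w[1] += lr * err * b
                    self.bias += lr * err

    node = LearnedNode()
    node.train([((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)])  # taught OR
    print(node.output(1, 0))  # 1 - the same wiring, had it been taught AND, would say 0
    ```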

    So again, you are pursuing an illegitimate route to an argument.

    The honest approach is to strip your thought experiment down to the bare essentials of a Turing machine and see if your idea still holds. And the standard outcome of such approaches is agreement that you have now clearly put all meaning outside the physical implementation. The writing and the interpreting of the programs are external to its running. And all you have done is break apart the bottom-up crunching from the top-down contextualisation - not proved that the top-down part is actually unnecessary to the deal.

    This is not a classical vs QM issue either. It applies to all systems (and thus all reality - reality being best understood as a system, except when you find it more useful to model it as a machine).
     
  11. Dec 21, 2009 #10

    Pythagorean

    User Avatar
    Gold Member

    But the applied current itself is based on inputs to the system; ultimately, from a user, who may very well be responding to an output from the system. I don't know how easily we can separate a computer from the user, or from the engineers that designed it.

    Why shouldn't there be a feeling or phenomenal experience? I'm not sure we can know what a computer experiences until we close the explanatory gap. I think we'd have to find the physical basis for our experience and start mapping it to get an idea of what physical process is associated with what kinds and parts of consciousness.

    Speaking in magnitudes of centuries, I don't think we're that far off from being able to bring the mind into the physical arena.
     
  12. Dec 22, 2009 #11

    baywax

    User Avatar
    Gold Member

    Hi Q_Goest. It was discussed some time ago how experience is the action of several neurons and chemical processes recording an incident or event. If your computer records the "experience" of red and then uses that earlier response in a more recent one, then it is experiencing red. Similarly with the brain performing relatively the same action or event.

    What's different here is that your computer is not set up to experience red with genetic or "innate" responses. Every cell in our entire bodies responds to the colour red, as do plants and other animals. This is either genetic or a chemo-photosensitive trait that has been selected through our evolution.

    I don't think the genetic or chemo-photosensitive reactions we have to the colour red are dependent on mental activity. The cells respond autonomically. And personally, I'd tell you that any brain with its visual centre, eyes etc... in working order is primarily autonomic as well. I would point out the obvious here and say that when our brain detects red there is an intellectual signal to stop the car... and that would indicate a mental cause of an action/event. (?)
     
  13. Dec 22, 2009 #12

    Q_Goest

    User Avatar
    Science Advisor
    Homework Helper
    Gold Member

    Hi pftest,
    Sorry if my last post sounded a bit abrupt. I actually agree that there may be room for mental causation at a quantum level, but I don’t yet see how and I’ve not read enough of the literature to locate a good argument in this regard. One issue is the explanatory gap – why should any physical process be accompanied by a mental one? This is equally applicable to quantum interactions. Another issue is that there could be violations of ‘entropy’ in the sense that I provided in the OP. I think what quantum models have in their favor is that they provide a physical substrate which is intrinsically inseparable. Phenomena that can be described in classical terms however are separable (depending on how you define separable).


    Hi apeiron,
    Let’s clarify one issue. In your first post you mentioned “top-down causation”, and when I read that in context it seemed to me you meant “downward causation” - hence the focus on the transistor. Downward causation may or may not be what you have in mind, but I’m assuming it is. Top-down causation has been defined in different ways in the literature, so perhaps you’d like to clarify:
    1) Top-down causation can mean “downward causation”. I’ll define “downward causation” below as defined by Bedau.
    or
    2) It can mean that the boundary conditions of a physical system restrict the potential micro-states of that system. For example, a point on a wheel rolling down a hill has its motion restricted by the boundary conditions on the wheel (see the sketch below). See Bedau’s paper for more on this. This kind of top-down causation is not problematic, but it also doesn’t lend any help to mental causation.
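    A toy illustration of sense (2) (my own sketch): the rim point of a rolling wheel cannot move arbitrarily; the rolling constraint forces it onto a cycloid, yet the constraint adds no new causal power:

    ```python
    import math

    # Point on the rim of a wheel rolling without slipping (radius R): the
    # boundary condition restricts its path to a cycloid.
    R = 1.0
    for theta in [0.0, math.pi / 2, math.pi, 2 * math.pi]:
        x = R * (theta - math.sin(theta))  # horizontal position
        y = R * (1 - math.cos(theta))      # height above the surface
        print(f"theta={theta:.2f}  x={x:.2f}  y={y:.2f}")
    ```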

    You may have another meaning for top down causation in mind, so feel free to clarify.


    Hi Pythagorean,
    I’d asked for people to reference the literature, not so much because I like people to keep referencing things, but because the philosophy forum has a reputation for ignoring the literature as if it doesn’t exist. The issues regarding cognition have already been considered in depth by others, so relying on our own intuitions about philosophy typically gets us in trouble.

    Before getting into this, I want to define “downward causation” as given by Bedau - roughly, the claim that emergent macro-level states would exert causal influence over micro events, competing with the more fundamental micro-causal powers (see the passage quoted in my earlier post above). Hopefully you get the idea.

    If you agree that transistors only change state because of a current applied to the base, then you can successfully rule out downward causation (and, very likely, mental causation) for such a system of switches. I believe you must agree with that, so hopefully the above discussion of Bedau helps provide an understanding of what is meant by downward causation. I’d strongly recommend reading Bedau’s paper (link above).

    There are others who accept this but still try to defend mental causation in some fashion. I’ve seen various methods of attack; I’d categorize them as largely appealing to the complexity of such a system while glossing over the simple facts. Once you rule out downward causation, mental causation (under the standard computational paradigm) becomes not only indefensible, it creates a very nasty paradox.

    If we rule out mental causation, we have a very serious paradox that is almost (but not quite) ignored in the literature. The problem is that if mental causation is false, then any behavior we express, or any report of mental states, cannot be shown to correlate reliably with the actual mental states. In fact, in the worst case, we may even be forced to accept the worst form of panpsychism: that all matter experiences every possible phenomenal experience at the same time!* The standard line of defense on this issue is to say that mental states ARE physical states. However, this doesn’t help with the paradox one bit, IMHO. If the mental states really are epiphenomenal on the physical states, then there is nothing we can do to determine what those mental states are. We can’t discover them by observing behavior, and we can’t find out by asking people about them.

    Ultimately, the behavior and the reports of those mental states in a computer are completely governed by the physical states, so there is no chance of the mental states being reliably reported. For example, consider a computer animation on a screen of a man in pain saying he’d like you to stop pressing the down arrow because each time you do he feels a stabbing pain. If the computer really feels this pain, how can we know? Did the computer say so because of the change in physical states of the switches? Or did the computer experience something and tell you what it was feeling?

    We can take the machine apart, and we’ll find a series of transistors that change state just like dominoes falling over. There is a physical reason for the behavior (and the reporting) that the animated character provides. The animation MUST act and say those things because there is a physical reason for the changes in state of the computer. However, the figure could equally be experiencing anything, or nothing at all. There is no way for the animated figure to do anything but act and talk as if it were experiencing pain, because that’s what the physical changes of state resulted in. Those physical changes of state can’t report mental states in any way, shape, or form, so even behavior does not reliably correspond to mental states if mental causation is ruled out.

    Per the paradox above, I think we’re forced to conclude that mental causation is a fact of nature. But the computational paradigm rules this out since it insists that classical scale physical processes govern the actions of the brain, and those processes are both separable and locally causal such that the overall macro-state of the brain has no causal influence on any individual neuron any more than the macro-state of a computer has a causal influence on any individual transistor.

    *This was brought out by Mark Bishop, "Dancing with Pixies"
     
  14. Dec 22, 2009 #13

    apeiron

    User Avatar
    Gold Member

    I would happily use the terms interchangeably. And I don't actually think either of them are the best way to put it.

    A first issue is that this is "top-down" and "downwards" in spatiotemporal scale. So it is better to speak of global causality. The action is from global moments to local ones - from a larger size, but also a longer time. Thus it is as much from before and after as it is from "above" in spatial scale, which is why there is such a stress on history, goals and anticipation - the global temporal aspects.

    A second point is that I want to stress the primacy of constraint as the form of causality we are talking about. I am dividing causality not just by direction or scale but also by kind.

    Local bottom-up causality has the nature of "construction" - additive action. Global top-down causality has the nature of "constraint" - a suppression of local degrees of freedom (free additive constructive action).

    Note this is different from versions of cybernetics or complexity theory, for example, where the top-down action is thought of as "control". Another different kind of thing. Although autonomous systems (like us humans) can appear to act in controlling causal fashion on the world.

    As you suggest, a lot of people see control as indeed the definition of what consciousness is all about if consciousness is a something that does anything. But this is a wrong idea on closer analysis.
     
  15. Dec 22, 2009 #14

    Pythagorean

    User Avatar
    Gold Member

    Background

    Ok, first some of my background: I have no formal education in any mind science. I have an undergraduate degree in physics, so I'm very causally minded. I am currently designing a master's degree in theoretical neuroscience and have been investigating the literature on my own (I start the relevant classes next semester).

    I spent a little time looking at the top-down approach, but for the most part I've been looking at bottom-up approaches lately. I'm familiar with Koch (here's Christof Koch's laboratory home page: http://www.klab.caltech.edu/ ) and Daniel Dennett (a philosopher who has lots of talks available online).

    Preconceived Notion

    Here are some experiments that seem to suggest that top-down causation doesn't exist:

    [YouTube clips of the experiments were embedded here in the original post; they are discussed below.]

    Personally, I don't think there's such a thing as top-down causation. I tend to agree with Dennett that nobody's really running the wheelhouse (the problem of the Cartesian Theatre, as he calls it: http://en.wikipedia.org/wiki/Cartesian_theater ). If somebody were running the wheelhouse, then we still wouldn't have answered the question of the mind; we'd have just specified its location (in the wheelhouse!).

    I take the materialist view that our system of biological neural networks is handling inputs and transforming them into outputs. In this view, for instance, the interneural computations between sensory (input) neurons and motor (output) neurons might be responsible for higher-level consciousness, as well as the illusion of self-control, will-power, and other abstract ideas.

    Paradox

    If we define consciousness as a 1 and non-consciousness as a 0, then this paradox is sure to bother people; but Koch, for example, claims that there are many kinds of consciousness (though Koch also refrains from a pin-point definition of consciousness).

    If we assume that the many kinds of consciousness can be normalized and assigned a value between 1 and 0 instead of strictly 1 or 0, then it may be more palatable to say something like "The computer has a Class C consciousness rating of 0.3".

    Like I said before though, I believe we will have to wait for people like Koch and other bottom-up theoretical neuroscientists to pin down the physical system of events associated with consciousness before we can judge whether other systems experience some degree of consciousness.
     
    Last edited by a moderator: Sep 25, 2014
  16. Dec 23, 2009 #15
    Suppose you are right and consciousness is the computation of neurons (or maybe I misunderstood what you meant by 'responsible'). Since computation has causal powers (it does something in the physical world), this would grant causal powers to consciousness. If consciousness does cause things, why is the sense of control still an illusion? It may be that this causal power matches the subjective sense of self-control.

    Btw, the free will issue can be disconnected from the mental causation issue. The Libet experiments, for example, may show that the decision feeling ("hey, I just made a decision") comes after the decision has been physically made. But even prior to the decision feeling the subject was already conscious, and those conscious states may have influenced the physical processes anyway. So there may be mental causation, regardless of whether it felt like a decision or not. A simple example is just watching TV: you can have all kinds of experiences and your neurons will do all kinds of stuff, yet there is no feeling of "I just made a decision".

    I like the idea of a spectrum, since that is how all of nature seems to work. But.... if there is some minimum degree of consciousness (0.000001), then at the very least everything is conscious to some degree.
     
    Last edited: Dec 23, 2009
  17. Dec 23, 2009 #16
    I just have some questions. Perhaps I missed it, but I haven't seen a definition of "consciousness". The Turing Test was designed to test for something called "intelligence". If a machine is "intelligent", is it therefore "conscious"? If an entity is "conscious" does it therefore have some level of "intelligence"?
     
  18. Dec 23, 2009 #17

    apeiron

    User Avatar
    Gold Member

    Oh how my heart sinks at the mention of these names, at the whole tenor of what you already believe.

    I spent 15 years in this area. And all I can say is that you are heading down the hugest blind alley.

    If you want a flavour of the neuroscience debate over top-down causality, see for example this...
    http://www.dichotomistic.com/mind_readings_molecular_turnover.html

    Those youtube clips are all about habit vs attention. It would be correct to think of habits as bottom-up in a sense. But habits would originally have been learnt in the eye of (top-down) attention, and then unfold within the context of some prevailing attentive state.

    So if I know I am required to flex my finger, then that is the top-down anticipatory preparation. A whole lot of global brain set-up is taking place - that would look quite different from when I want to be very sure I'm not about to make some unnecessary twitch. The actual flexing of a finger is a routinised habit and so has to arise within the prepared context via activity from the relevant sub-cortical paths - striatum, cerebellum, etc.

    But hey, if you are going to be studying neuroscience, you will learn these things anyway.
     
  19. Dec 23, 2009 #18

    Pythagorean

    User Avatar
    Gold Member

    But in the second clip I provided, the subject has a button in each hand and decides at random to press the left or the right button. By the time the person has perceived his choice and pressed his button, the testers, with their technology, have already (six seconds beforehand) predicted which side he was going to press.

    Was the choice really conscious? It seems from this experiment, that the conscious decision came six seconds after the brain had already made its choice, leading me to believe the "conscious decision" wasn't really a decision at all, but a sensation resulting from a decision made by the neural network.
     
  20. Dec 23, 2009 #19
    Is there a simple example of top-down causality? One not involving minds, but just a simple physical system (the simpler the better).
     
  21. Dec 23, 2009 #20

    baywax

    User Avatar
    Gold Member

    Conscious awareness: "The conscious aspect of the mind involving our awareness of the world and self in relation to it"

    http://wps.pearsoned.co.uk/wps/media/objects/2784/2851009/glossary/glossary.html#C
     