Layman's Question on Quantum Mechanics

  • #51
Out of respect for the learned people who have contributed to this thread, and to help clarify my question, I'd like to thank you all for entertaining it, and me, despite my ineptitude with quantum physics, mechanics, and physics in general.

Since there has been such an open attitude toward disclosing everyone's level of understanding of physics, I'll include my own background with the topic. As a medical illustrator for many years at a well-established cancer treatment centre, I was fortunate enough to be exposed to varied amounts of knowledge about every discipline that goes into the treatment of cancer. My favorite group to spend well over 4620 coffee breaks and lunch breaks with was the medical physicists.

One man in particular became a sort of mentor for me: he originally immigrated from China, worked as a dishwasher to support himself through university, and became a PhD medical physicist. That is the extent of my informal training in physics... with the exception of this incredible forum, the Physics Forum.

Before I completely blow this thread out of quantum physics into philosophy, or get my knuckles rapped by a mentor, I'll just say that I have thoroughly enjoyed the responses I've received to my question.
I realize now that physicists really can't progress in their studies if they are continually questioned, or in doubt, about niggling little philosophical questions concerning the nature of their work. It would be like stopping Wayne Gretzky every five minutes during a hockey game and asking him if he really exists! Totally inappropriate... and absurd!

I am actually satisfied with the "spiel" from Bohr suggesting that "physics is not about what nature is, but more about what physicists can say (with regard to their observations) about nature." Thank you very much, Dr. Bohr, and everyone else here who has done and is doing the math!
 
Last edited:
  • #52
ZapperZ said:
I'm not sure what you mean by "ab initio" here. It certainly doesn't HAVE to mean "start from elementary interactions". The ab initio starting point in condensed matter physics has always been the many-body Hamiltonian.

Well, start with "real" electrons (in the framework of non-relativistic quantum mechanics, sufficient for most of condensed matter physics), and real nucleae. Consider the crystal structure as a given (if you want to discuss this, go to the Born-Oppenheimer approximation of the many body problem which allow you to treat the electron system and the nucleae system - in most cases - separately) and consider hence the many-electron system in a periodic potential. Then realize that stationary solutions for the many-body system without interactions can be classified using a Fock space approach, and call the bookkeeping state countings "quasi particles". We have now a "free quantum field theory" of quasiparticles. Next, add in the interactions of the individual real particles, but modeled as interactions of quasi-particles. It is usually here where we totally leave the ab-initio approach and start guessing reasonable effective interactions of quasi particles ; but in order to be right, these should be the matrix elements of the up hereto neglected interaction hamiltonian in the real many-body problem, but between the Fock basis states, and not between the real-single-particle states. It is only to be hoped that the guessed-at effective interaction term corresponds to this matrix element, but in the case it is, we are still "in parallel" with an approximation to an ab initio calculation.
The point I'm trying to make is that if I were some god-mathematician with a brain a billion times the size of the visible universe, I WOULD be able to do all these calculations ab initio. Because some of us are lesser mortals :smile: we have to resort to intuition and guesswork.
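For what it's worth, here is a minimal numerical sketch of that bookkeeping step in the non-interacting limit: diagonalize a toy single-particle Hamiltonian and label the many-body eigenstates purely by which levels are occupied. The 1D tight-binding ring, its hopping amplitude and the electron count are my own illustrative assumptions, not anything from this thread.

Code:
import numpy as np
from itertools import combinations

n_sites, t, n_electrons = 8, 1.0, 3   # assumed toy parameters

# Single-particle Hamiltonian: nearest-neighbour hopping on a ring (a periodic potential).
H1 = np.zeros((n_sites, n_sites))
for i in range(n_sites):
    H1[i, (i + 1) % n_sites] = H1[(i + 1) % n_sites, i] = -t

eps = np.linalg.eigvalsh(H1)          # single-particle ("quasiparticle") energy levels

# Without interactions, every many-body eigenstate is labelled by an occupation pattern
# of these levels (at most one spinless fermion per level) - the Fock-space bookkeeping.
energies = sorted(sum(eps[list(occ)])
                  for occ in combinations(range(n_sites), n_electrons))

print("quasiparticle levels  :", np.round(eps, 3))
print("many-body ground state:", round(energies[0], 3))
print("first excitation gap  :", round(energies[1] - energies[0], 3))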

The confusion comes in when condensed matter uses a phrase such as "single-particle" excitation. This does NOT mean it is starting with an elementary particle and adding the interactions one at a time. A single-particle spectral function ALREADY comes with a baggage load of many-body interactions absorbed within the self-energy term inside the many-body Green's function. This is what allows us to treat this as if it were a one-body problem.

I understand that, but I'm claiming that the idea behind it is still ab initio - not from calculations, but from intuition and guesswork, because of the limited size of our brains. It does not mean that the ab initio approach is going to FAIL. It is simply intractable for us lesser mortals. And maybe the smarter we get, the more cases we will find where we CAN, within some approximation, do the true ab initio calculation.

Now, while you can "turn off" the many-body interactions and regain the "bare particle", the problem is not tractable if you instead decides to change the problem from a many one-body problem (which is what Landau did) to a single, many-body problem. You can't solve this. So by doing what appears to be only a one-body problem, you are already commited to an emergent entity - the quasiparticle that arises out of the many-body interactions.
This is why I said that the starting point is the many-body ground state. The "electrons" that you measure in a conductor are already many-body renormalized quasiparticles, not bare electrons. And if Anderson and Laughlin are correct, ALL of our elementary particles are emergent particles arising out of some field excitations, including the quarks.
Zz.

I understand that, but I don't find that an argument against reductionism. The quasiparticles EMERGE from the underlying microphysics - or at least, we conceptually think of it that way (and in some toy models, we can even do it explicitly). I'm not arguing against "emergent properties" at all. I'm arguing against the idea that we can have microphysical laws which are "correct" and apply to the microphysics, and that we have INDEPENDENT and co-existing macrophysical laws which have nothing to do, even in principle, with these microphysical laws. I'm claiming that, as a matter of principle, these microphysical laws make a definite statement concerning the macrolaws; and if they are in agreement, all the better, and if they are in contradiction, the microlaws are simply FALSIFIED. But I don't buy that they can INDEPENDENTLY exist next to each other.

However, I totally agree that the above statement is only one of principle (available to said god-mathematician with said brain), that we have to realize our mortal limitations, and as such, that the said approach is a total waste of time in a great many cases, and that you are indeed BETTER off acting as if you are discovering (almost) independent laws: you'll get to a useful theory faster.

So you can tell me: well, if you know that it is the only practical method, what the hell are you blathering about? I think that there IS a fundamental difference. We get smarter as time goes on. We learn new mathematical techniques, and we get smarter machines. We get more and more experience with how these many-body systems behave. This means that there is some hope that we can do more and more ab-initio-like calculations, or at least derive general properties of the kinds of solutions we can hope to obtain ab initio. And in the reductionist approach, these derivations and properties make sense; while in the true holistic approach, this is a total waste of time.

As I said, I think that reductionism together with the hypothesis of the existence of an objective world are the two founding principles of physics. I'm not willing to give them up so easily.
 
  • #53
Fascinating stuff

You're really walking a fine line here between explaining your method and philosophy; I'll step over that line.

All good stuff. But accepting that there is an objective reality and accepting that there is no objective reality, only a subjective one, are different sides of the same coin. Neither has any proof, and neither is more credible than the other. This may be a philosophical point, but it does at least raise one important issue: what if what we see is not objectively real? What if we are merely seeing it that way because that is how we evolved to see it - evolutionarily speaking, the reality of photons' properties is less beneficial to survival than seeing them the way we do now? Yeah, I know you can dismiss such questions as speculation and philosophy.

But suppose I had a machine that did 1000 experiments which all got the same results, and you did the experiment and got 1000 slightly different results. You'd check the machine and see that those results were caused by an error, because of reason A. But what if the machine is right and your perception is wrong? We cannot simply dismiss the idea that what we see experimentally is the absolute truth any more than we can dismiss the idea that it is all delusion. With that in mind, we have to be really careful about what we say is true from our perspective. Getting a million people to corroborate results is science's way of mitigating the subjectivity, but barring meeting an alien that tells us QM is all tosh because of reason B, or that QM is all right because of reason A, we will never truly know what objectivity is. With this in mind, all we can do is experiment and hope that everything we say exists does exist, and that QM isn't just some bizarre fantasy, because the alternative doesn't bear thinking about :wink:

Yep, total philosophy, I agree, but I still think it's a point... Like I said before, if we're wrong then we'll never know it 'till someone teaches us how to perceive reality properly :smile:

I must say, though, that this thread is one of the best I've read in my short time on here. You'd be hard pressed to come up with a better summing up of scientific ideals and methodology; bravo, gentlemen :smile:

I think I agree with vanesch and ZZ; I'm just hoping I'm not wrong :wink:
 
  • #54
vanesch said:
Well, start with "real" electrons (in the framework of non-relativistic quantum mechanics, sufficient for most of condensed matter physics), and real nucleae. Consider the crystal structure as a given (if you want to discuss this, go to the Born-Oppenheimer approximation of the many body problem which allow you to treat the electron system and the nucleae system - in most cases - separately) and consider hence the many-electron system in a periodic potential. Then realize that stationary solutions for the many-body system without interactions can be classified using a Fock space approach, and call the bookkeeping state countings "quasi particles". We have now a "free quantum field theory" of quasiparticles.

But these aren't "quasiparticles". They ARE particles. Their non-interactions make them to be "ideal gasses", and statistically, we can deal with that.

Next, add in the interactions of the individual real particles, but modeled as interactions of quasiparticles. It is usually here that we totally leave the ab initio approach and start guessing reasonable effective interactions of quasiparticles; but in order to be right, these should be the matrix elements of the hitherto-neglected interaction Hamiltonian in the real many-body problem, taken between the Fock basis states and not between the real single-particle states. One can only hope that the guessed-at effective interaction term corresponds to this matrix element, but in case it does, we are still "in parallel" with an approximation to an ab initio calculation.

Actually, this is not true all the time. In Landau's Fermi liquid theory, it is ONLY the weak-coupling limit (i.e. the interaction is weak enough that you can use the mean-field approximation) that produces a 1-to-1 correspondence to the "bare", many one-body scenario. Crank up the coupling, and you lose everything! That is why strongly correlated systems are such a fundamental and hot topic in condensed matter physics. We LOSE quasiparticles here! We have no well-defined entity to call a "quasiparticle" (see the normal state of an underdoped high-Tc superconductor). So no, there's nothing "parallel" here. At some point, your well-defined entity goes away.

And not only that: if you do this in one dimension, even the smallest correlation can produce a fractionalization of the spin and charge degrees of freedom - they go their separate ways with their own dispersions. There's nothing to match with the "non-interacting" case here - bluntly put, a "quasiparticle" is no longer "a good quantum number".

The point I'm trying to make is that if I were some god-mathematician with a brain a billion times the size of the visible universe, I WOULD be able to do all these calculations ab initio. Because some of us are lesser mortals :smile: we have to resort to intuition and guesswork.

But as I've said, if this were OBVIOUS, we wouldn't be having this discussion. It is NOT obvious, at least not to me, and certainly not to Anderson and Laughlin. You could look all you want at the individual dots on a piece of paper, but you will never be able to deduce from the local arrangement of those dots that they form a picture. The long-range pattern is just not there for you to see. That is why such interactions at the microscopic scale tell us nothing about the larger ensemble. Only when you step back and consider the whole thing as a clump do you see the emergent behavior.

Zz.
 
  • #55
vanesch said:
The true holistic (anti-reductionist) approach is that EVEN IF YOU WERE TO USE THE EXACT MICROSCOPIC LAWS OF NATURE without any approximation, you would not be able to derive certain macroscopically observed phenomena. I think that that claim is self-contradictory...
This is not a correct understanding of holism, at least as understood by the science of cybernetics. If the state conditions of two black boxes (A,B) are given, and each is studied in isolation until its "canonical representation" is established, and if they are then coupled in a known pattern by known linkages, then it must follow logically that the behavior of the whole [A-B] is determinate and can be predicted. Thus such a "holistic" system will show no emergent properties, since there is a 1:1 canonical transformation (U) linking A --> B. Likewise, there must be a 1:1 inverse of U (V), which may also be U^-1, for B --> A (= unitary transformation of a QM system).
Now the concept of "reducibility" and its relationship to "holism" in general system theory is such that a holistic system [(A)-(B) from above example] has immediate effects on each other as thus shown [ (A) <-----> (B) ] or perhaps one way only [ (A) -----> (B). The "reductionist" representation is only obtained when the two parts are functionally independent, thus [ (A) (B) ].
Now, in cases where the "parts" are at a range of size greatly different than the "whole", then the fundamental properties of the whole can be very different indeed from the parts, and what is "true" at one end (the reduced state) may not be true at the other (the holistic state). For example, consider the concept "taste" as relates to the atoms carbon (C), hydrogen (H), oxygen (O). If we treat these as black boxes each has a fundamental taste property (= no taste). However, when we couple to form a large molecule H-C-O (a sugar) we find a new property of taste has "emerged" (= sweet) that could not be predicted from the parts via any formalism of QM.
And I take it that this is what ZapperZ is articulating when he states:
[ZapperZ: "QM has many features that merge into the classical properties, especially at high quantum number, high temperatures, or large interactions (decoherence). But this doesn't mean that using QM description for classical, macroscopic system is any more valid than using classical physics for QM systems. There are a bunch of things we still don't quite know at the mesoscopic scale where these two extremes clash their heads. All we know right now is that one should not simply adopt QM's "world view" on classical systems. It will produce absurd conclusions."]
 
  • #56
ZapperZ said:
You could look all you want at the individual dots on a piece of paper, but you will never be able to deduce from the local arrangement of those dots that they form a picture. The long-range pattern is just not there for you to see. That is why such interactions at the microscopic scale tell us nothing about the larger ensemble. Only when you step back and consider the whole thing as a clump do you see the emergent behavior.
Zz.

Sorry to bother you again. This sounds like a fundamental requirement for observing matter: you have to exist at a similar scale to the scale of other emergent phenomena... or "step back" to see the emergent behaviours of energy.

It boggles my mind how physicists do the opposite and reduce their observations to the scale of energy fields or EM energy.

Do quantum physicists consider themselves one of the instruments in an experiment? Or one of the conditions?

By the way, "how salty is spring tension (Zapper z)?!" :smile: (ref:"How big is a photon..." thread, PF)
 
Last edited:
  • #57
quantumcarl said:
Sorry to bother you again. This sounds like a fundamental requirement for observing matter: you have to exist at a similar scale to the scale of other emergent phenomena... or "step back" to see the emergent behaviours of energy.

Please note that when I said "observe", it has NOTHING to do with an "I have to exist" requirement. This makes the rest of your point in that post moot.

You are more than welcome to start a new thread in the Philosophy section to discuss what is obviously the main thing you are so interested in. I just don't have the patience to deal with such issues.

Zz.
 
  • #58
Rade said:
This is not a correct understanding of holism, at least as understood by the science of cybernetics. If the state conditions of two black boxes (A,B) are given, and each is studied in isolation until its "canonical representation" is established, and if they are then coupled in a known pattern by known linkages, then it must follow logically that the behavior of the whole [A-B] is determinate and can be predicted. Thus such a "holistic" system will show no emergent properties, since there is a 1:1 canonical transformation (U) linking A --> B. Likewise, there must be a 1:1 inverse of U (V), which may also be U^-1, for B --> A (= unitary transformation of a QM system).

Ah, that's a much more restricted definition of "reductionism", which doesn't even make sense in the frame of QM, in that the QM state space of the system [A-B] is BIGGER than the state space of [A] and the state space of [B] (entangled states).

I understood "reductionism" as: all macroscopic behaviour is *in principle* determined by the laws (in casu QM) gouverning the microsystems (and holism as the negation of reductionism). This means that the laws governing the microsystem have mathematically fixed (even if you don't know how to *derive* it in practice) the emergent properties.



Now the concept of "reducibility" and its relationship to "holism" in general system theory is such that a holistic system [(A)-(B) from above example] has immediate effects on each other as thus shown [ (A) <-----> (B) ] or perhaps one way only [ (A) -----> (B). The "reductionist" representation is only obtained when the two parts are functionally independent, thus [ (A) (B) ].

Well, in physics, that's simply called: 'interacting systems'. Of course we allow the constituents to interact...

Now, in cases where the "parts" are at a range of size greatly different than the "whole", then the fundamental properties of the whole can be very different indeed from the parts, and what is "true" at one end (the reduced state) may not be true at the other (the holistic state). For example, consider the concept "taste" as relates to the atoms carbon (C), hydrogen (H), oxygen (O). If we treat these as black boxes each has a fundamental taste property (= no taste). However, when we couple to form a large molecule H-C-O (a sugar) we find a new property of taste has "emerged" (= sweet) that could not be predicted from the parts via any formalism of QM.

That's because "taste" is not a well-defined measurement observable (it's in fact a qualium). As I tried to point out, one needs to describe an *experiment* that measures a "holistic" quantity. The microscopic description of the experiment, in the reductionist version, should then also give the correct outcome for the experimental measurement. That's what I tried to do with the description of the measurement of the index of refraction of water, that evaporates. The phase-transition introduces a change in refractive index, and that gives rise to a change in the configuration of the EM field, something that "makes microscopic sense". So considering the entire physical microdescription of all the particles and fields involved, if reductionism holds (meaning: if the microdescription determines mathematically the macroproperties) then this microdescription determines also this change in the configuration of the EM field.


And I take it that this is what ZapperZ is articulating when he states:
[ZapperZ: "QM has many features that merge into the classical properties, especially at high quantum number, high temperatures, or large interactions (decoherence). But this doesn't mean that using QM description for classical, macroscopic system is any more valid than using classical physics for QM systems. There are a bunch of things we still don't quite know at the mesoscopic scale where these two extremes clash their heads. All we know right now is that one should not simply adopt QM's "world view" on classical systems. It will produce absurd conclusions."]

I agree with ZapperZ's statement as such, but we each interpret it differently. I see it as a statement that QM might need a modification, while he can - apparently - happily accept this situation and doesn't see any contradiction in it.
In a reductionist view, there can only be ONE theory of the universe. Now, it could be that classical and QM theories we have now are only approximations, in certain limits, of this one theory.
Nevertheless, I think there IS a view of QM as a world view which does NOT produce absurd (although weird, agreed) results as such, and that is a "many world" view. So it is not the absurdity argument which is, for me, sufficient to conclude that QM does not work at macroscopic scales. The more difficult aspect is its relationship to GR. This, to me, is the only compelling reason to think there might be something wrong with QM - apart, of course, from a possible clear experimental deviation from its predictions.
 
  • #59
ZapperZ said:
Please note that when I said "observe", it has NOTHING to do with an "I have to exist" requirement. This makes the rest of your point in that post moot.
You are more than welcome to start a new thread in the Philosophy section to discuss what is obviously the main thing you are so interested in. I just don't have the patience to deal with such issues.
Zz.

Right on. I'll consider your recommendation and if I follow it I can only hope to see a contribution there from the objectivistic quarter of the equation.

All the best in your endeavors.
 
  • #60
quantumcarl said:
Right on. I'll consider your recommendation and if I follow it I can only hope to see a contribution there from the objectivistic quarter of the equation.
All the best in your endeavors.

Sorry. I avoid physics discussions in the philosophy forum like the plague.

Zz.
 
  • #61
vanesch said:
I understood "reductionism" as: all macroscopic behaviour is *in principle* determined by the laws (in casu QM) gouverning the microsystems (and holism as the negation of reductionism). This means that the laws governing the microsystem have mathematically fixed (even if you don't know how to *derive* it in practice) the emergent properties.
The sort of reductionism that eg. the standard (particle theory) model represents is an ontological analysis into smaller and smaller constituent parts. Whether the rules defining the experimental production of the particles are actually abstractions of general rules governing the behavior of phenomena on any and all scales is questionable, and apparently, wrt meso and macro scale observations, not the case. This sort of reductionism seems to be at odds with a holistic approach.

Then there is the sort of reductionism that refers solely to the abstraction of the truly general principles governing the behavior of phenomena on any and all scales. This is an epistemological rather than an ontological reductionism, and it seems to me to be a holistic approach, in any sense in which I can now think of using the term "holistic".

vanesch said:
In a reductionist view, there can only be ONE theory of the universe. Now, it could be that classical and QM theories we have now are only approximations, in certain limits, of this one theory.
It seems reasonable to assume that the currently standard theories will undergo modifications with advances in technology. It also seems at least somewhat reasonable to me to assume that the sort of reductionist approach underlying the currently standard particle model is just the wrong approach if the goal is to get at the truly general principles which apply to any and all scales of behavior.

vanesch said:
I think there IS a view of QM as a world view which does NOT produce absurd (although weird, agreed) results as such, and that is a "many world" view.
The adoption of the Many Worlds Interpretation of QM requires taking at least some parts of the QM algorithm and mathematical models employed as more or less accurate representations of what is happening in real 3D space independent of observation. But I think it's reasonable to view this (the MWI prerequisite that the QM formalism is, in some/any sense, a description of the physical reality between emitters and detectors) as a perversion of the meaning and application of QM.

Or am I just wrong about what I'm supposing to be prerequisite to the MWI view?

There is a view of QM which doesn't produce absurd continuations. It just requires accepting that the theory is not intended as, and doesn't function as, a description of an underlying quantum world.
 
  • #62
Sherlock said:
The sort of reductionism that eg. the standard (particle theory) model represents is an ontological analysis into smaller and smaller constituent parts.
This is the way it's presented in popularizations, but the quarks and gluons are actually quanta of the gauge field, which is a large-scale phenomenon. The experimental spaces at the colliders, at which the SM is applied, are actually quite large on a human scale. Rather than "smaller", perhaps "prior" would be a better word.
Whether the rules defining the experimental production of the particles are actually abstractions of general rules governing the behavior of phenomena on any and all scales is questionable, and apparently, wrt meso and macro scale observations, not the case.
I don't know what you mean by this. Ab initio calculation of, say, chemical reactions is difficult because of computational complexity, but much progress has been made and there is no bar in principle to deriving all chemistry, biology, and indeed the whole experienced world, from particle physics.
This sort of reductionism seems to be at odds with a holistic approach.
Well, weak emergence (as gas pressure emerges from the conservation of particle momentum) is envisioned. But holism as it is usually presented simply seems to be almost content-free to me. What does it predict?
 
  • #63
selfAdjoint said:
This is the way it's presented in popularizations, but the quarks and gluons are actually quanta of the gauge field, which is a large-scale phenomenon. The experimental spaces at the colliders, at which the SM is applied, are actually quite large on a human scale. Rather than "smaller", perhaps "prior" would be a better word.
The experimental spaces are large because of the energies and instrumental complexity required to produce the particles. But the particles that are experimentally produced (or theoretically hypothesized) are neither large nor complex wrt the human scale.

I'm only barely at the level of semi-sophisticated popularizations of the standard particle model. Maybe my characterization of the enterprise as "an ontological analysis into smaller and smaller constituent parts" was wrong -- or at least a possibly misleading way to describe it. Maybe it's meaningless to rank the particles that high energy physics has been able to produce (or which it has hypothesized) on a scale of 'size'. But if it isn't, then the upper limit on the size of quarks and gluons (100 attometers?) is certainly smaller than the particles which confine them. However, I was thinking (most heuristically, for which I might apologize if it weren't for this feeling that if the fundamental laws of nature apply to all scales of behavior, then there is evidence of them at the level of our ordinary sensory perception) more along the lines of scales of complexity (a hierarchy of 'media'?) rather than size, per se.

I'm not sure what you mean in suggesting that, eg. quarks, are 'prior' (some sort of temporal hierarchy?). According to what parameter scale might the particles of the standard theory be ranked?

Recent experimental data and the size of the quark in the constituent quark model
http://www.edpsciences.org/articles/epjc/pdf/2002/11/100520277.pdf

Quarks, diquarks, and pentaquarks
http://physicsweb.org/articles/world/17/6/7/1

selfAdjoint said:
Ab initio calculation of, say, chemical reactions is difficult because of computational complexity, but much progress has been made and there is no bar in principle to deriving all chemistry, biology, and indeed the whole experienced world, from particle physics.
It isn't known if there's "no bar in principle to deriving ... the whole experienced world from particle physics". It is known that it can't be done from the current standard model. The current standard model isn't fundamental in the sense that fundamental refers to behavioral laws which apply to any and all scales of interaction. While the standard model is certainly an abstraction, it apparently isn't an abstraction of what's truly fundamental wrt nature.


The Theory of Everything
http://www.pnas.org/cgi/content/full/97/1/28

selfAdjoint said:
... holism as it is usually presented simply seems to be almost content-free to me. What does it predict?
Holism, like reductionism, is an approach to (ostensibly eventually) understanding what's fundamental wrt the behavior of anything and everything in our universe. Since modern physics has taken an analytical reductionist approach rather than a holistic reductionist approach, the latter hasn't produced much theoretical content.

The weak and strong forces, electromagnetism, and gravity (and who knows what else the analytical reductionist approach might tack on) --- maybe these aren't truly fundamental. Even with its success in quantitatively accounting for many phenomena, and considering the problems facing the standard model, it seems at least worth considering that maybe it's just the wrong way to go about getting at what's fundamental wrt all phenomena.

What's fundamental must be operating (evident) at the level of our sensory perception and wrt more or less everyday phenomena, not just wrt quantum experimental phenomena -- so there's no reason why we shouldn't, eventually, be able to express the fundamental principles of our universe in familiar terms. Maybe it just hasn't been looked at carefully enough, or sorted out according to the right paradigm. Of course, the standard model (or any model for that matter) isn't wrong wrt what it's able to predict. It just isn't telling us what's fundamental in the sense of universal physical principles.

Anyway, my main reason for entering the holism-reductionism discussion in the first place was to get vanesch to expound in more detail on his reasons for becoming an MWI-er (or -ist). But thanks for your comments, selfAdjoint, and I welcome any criticism of my comments.
 
Last edited by a moderator:
  • #64
Sherlock said:
There is a view of QM which doesn't produce absurd continuations. It just requires accepting that the theory is not intended as, and doesn't function as, a description of an underlying quantum world.

Yes, I know, the "shut up and calculate" view. I call that a non-view :biggrin:. It is maybe the most accurate interpretation of QM :smile: . But it leaves you with a problem: how do you identify the mathematical objects you're manipulating with things in the lab ? Which you need to do in order to give some sense to the numbers you're calculating in the first place. This means no matter, that you DO implicitly identify certain objects in the lab with certain mathematical structures. Now why can you do that for those things that suit you, and why do you obstinately refuse to do so with the rest of the mathematical formalism - just because it bothers your intuition ?

However, I'm much less hostile to this view than you might think. It is a bit like the Mendelian theory of genetics before the recognition of DNA: the theory worked and used certain concepts, but didn't really correspond to a representation of reality. Once DNA and several cellular mechanisms were discovered, Mendelian genetics could be EXPLAINED on the basis of these biological mechanisms.

Maybe QM does the same, and is some statistical mechanism that has no correct representation of reality in it, but nevertheless arrives at correct predictions. The analogy can be continued: one should then look for an underlying representation of reality that explains the success of the formalism. But there are serious sea monsters in that ocean!
My point of view is that one should row with the tools one has, and inasmuch as the formalism of QM DOES allow for a representation of reality, take the opportunity to use it (if only to develop an intuition for the formalism!) - even if you want to keep in mind that it MIGHT not be the correct representation of reality. And then again, it might be.
 
  • #65
vanesch said:
Yes, I know, the "shut up and calculate" view. I call that a non-view :biggrin:. It is maybe the most accurate interpretation of QM :smile: . But it leaves you with a problem: how do you identify the mathematical objects you're manipulating with things in the lab ?

You don't! You adopt a "working" view to deal with the formalism which you know isn't necessary kosher but works. By the time you actually make a measurement, these are then classical concepts. You measure energy, position, momentum, etc. So there's no problem with interpretation of what you observe in the lab.


Zz.
 
  • #66
ZapperZ said:
You don't! You adopt a "working" view to deal with the formalism, one which you know isn't necessarily kosher but works.

You mean, let's take a modest attitude: we haven't got *a clue* how nature "really" works, but we've noticed that in certain domains certain formalisms give good results, and that's all we can say?
 
  • #67
vanesch said:
You mean, let's take a modest attitude: we haven't got *a clue* how nature "really" works, but we've noticed that in certain domains certain formalisms give good results, and that's all we can say?

On the contrary, we DO have a clue - in fact, more than a clue. The clue here is that our classical concepts take on loads of weird properties when applied where they shouldn't be applied. But this doesn't mean we're clueless, or else we would have no useful information.

Zz.
 
  • #68
ZapperZ said:
On the contrary, we DO have a clue - in fact, more than a clue. The clue here is that our classical concepts take on loads of weird properties when applied where they shouldn't be applied. But this doesn't mean we're clueless, or else we would have no useful information.
Zz.

It has always seemed to me that science progresses mostly by showing things that aren't true, rather than things that are. The Earth isn't flat; it isn't at the center of the solar system, ... and nature isn't classical!
 
  • #69
Sherlock said:
Anyway, my main reason for entering the holism-reductionism discussion in the first place was to get vanesch to expound in more detail on his reasons for becoming an MWI-er (or -ist). But thanks for your comments, selfAdjoint, and I welcome any criticism of my comments.

I've done that already a few times, but the time it would take to track down the original posts is probably just as long as typing it here again.

My reason for taking on the MWI viewpoint is this:
1) QM is seen as a "universal" theory (is supposed to describe what happens in the universe). This can be correct or wrong, but it is what QM claims. The axioms of QM do not include a domain of applicability.
2) QM contains precise rules for how composite systems, built up from smaller systems, are supposed to work (namely, the tensor product of Hilbert spaces). There is no postulated limit to this, and as such, I can construct, if I want, the Hilbert space of all the particles in my body, and in the lab's instruments, and ...
3) The superposition principle is a basic postulate of QM
4) Unitary time evolution is a basic postulate of QM (its time derivative is given by the Hamiltonian).
5) We know how to represent all relevant physical interactions (except gravity, acknowledged) within this unitary time evolution.

As such, there is, from the postulates,
1) a natural description of the measurement apparatus, including the body of the observer, as a vector in Hilbert space (this follows from the build-up as tensor products of the Hilbert spaces of the constituents)
2) the unitarity of the time evolution operator acting upon this state

and from these two points, invariably, the body state of the observer ends up entangled with the different possible outcomes of measurements WITHOUT CHOOSING ONE of them (as the projection postulate would have it). As such, it almost naturally follows that, if we are going to require that we only observe ONE outcome (and not all of them in parallel, as do our bodies), we can only be aware of one of these terms, with a certain probability. Once we accept that we only observe ONE of our body states, a classical awareness can emerge.

This is the essence of MWI. There are variations on the theme. It follows from taking the postulates of QM seriously; there's no escaping from this view if you accept the axioms of QM and apply them universally. The objection can be that we are using the quantum formalism way outside of where it was somehow *intended* to work, but in the absence of a theory that tells us WHY this formalism doesn't work the way it does (in other words, an underlying theory explaining QM), this is what the current formalism of QM says, by itself. And although very weird, it is not self-contradictory (and, as I often tried to show here, it even gives a natural frame in which to view certain "bizarre" results, such as EPR situations, quantum erasers, and other such things). The real problem doesn't reside in the weirdness; the real problem resides with gravity. I think that given this difficulty, all bets are still open.

Nevertheless, I still advocate the MWI view for practical reasons (which can sound bizarre): it helps elucidate EPR paradoxes and other weird quantum phenomena. It is a great TOOL for understanding the QM formalism.
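As a toy illustration of points 1-5 and the two consequences drawn from them (my own sketch; the amplitudes and the CNOT-style pointer coupling are assumptions, not anything vanesch posted): model the "measurement" as a unitary on system (x) apparatus. Unitarity alone never selects an outcome; it just leaves a sum of branches whose squared amplitudes are the Born weights.

Code:
import numpy as np

# System starts in a superposition a|0> + b|1>; apparatus pointer starts in |ready>.
a, b = 0.6, 0.8                       # assumed real amplitudes, a^2 + b^2 = 1
system = np.array([a, b])
ready  = np.array([1.0, 0.0])         # apparatus basis: |ready>, |flipped>

# Measurement interaction: a CNOT-like unitary that flips the pointer iff the system is |1>.
U = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)

initial = np.kron(system, ready)      # |psi> (x) |ready>
final   = U @ initial                 # still one pure state: a|0>|ready> + b|1>|flipped>

p_ready   = final[0]**2 + final[1]**2   # weight of the |ready> branch   = a^2
p_flipped = final[2]**2 + final[3]**2   # weight of the |flipped> branch = b^2

print("post-measurement state:", final)
print("branch (Born) weights :", p_ready, p_flipped)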
 
  • #70
selfAdjoint said:
It has always seemed to me that science progresses mostly by showing things that aren't true, rather than things that are. The Earth isn't flat; it isn't at the center of the solar system, ... and nature isn't classical!

But is there a difference between showing something is "true" and showing something is "valid"? Newton's laws are valid for constructing a building. Even if a more fundamental description of the universe is found, Newton's laws will STILL be valid for building a house.

At some point, the extreme Popperian view of science is no longer useful.

Zz.
 
  • #71
vanesch said:
I've done that already a few times, but the time it would take to track down the original posts is probably just as long as typing it here again.
My reason for taking on the MWI viewpoint is this:
1) QM is seen as a "universal" theory (is supposed to describe what happens in the universe). This can be correct or wrong, but it is what QM claims. The axioms of QM do not include a domain of applicability.
2) QM contains precise rules for how composite systems, built up from smaller systems, are supposed to work (namely, the tensor product of Hilbert spaces). There is no postulated limit to this, and as such, I can construct, if I want, the Hilbert space of all the particles in my body, and in the lab's instruments, and ...
3) The superposition principle is a basic postulate of QM
4) Unitary time evolution is a basic postulate of QM (its time derivative is given by the Hamiltonian).
5) We know how to represent all relevant physical interactions (except gravity, acknowledged) within this unitary time evolution.
As such, there is, from the postulates,
1) a natural description of the measurement apparatus, including the body of the observer, as a vector in Hilbert space (this follows from the build-up as tensor products of the Hilbert spaces of the constituents)
2) the unitarity of the time evolution operator acting upon this state
and from these two points, invariably, the body state of the observer ends up entangled with the different possible outcomes of measurements WITHOUT CHOOSING ONE of them (as the projection postulate would have it). As such, it almost naturally follows that, if we are going to require that we only observe ONE outcome (and not all of them in parallel, as do our bodies), we can only be aware of one of these terms, with a certain probability. Once we accept that we only observe ONE of our body states, a classical awareness can emerge.
This is the essence of MWI. There are variations on the theme. It follows from taking the postulates of QM seriously; there's no escaping from this view if you accept the axioms of QM and apply them universally. The objection can be that we are using the quantum formalism way outside of where it was somehow *intended* to work, but in the absence of a theory that tells us WHY this formalism doesn't work the way it does (in other words, an underlying theory explaining QM), this is what the current formalism of QM says, by itself. And although very weird, it is not self-contradictory (and, as I often tried to show here, it even gives a natural frame in which to view certain "bizarre" results, such as EPR situations, quantum erasers, and other such things). The real problem doesn't reside in the weirdness; the real problem resides with gravity. I think that given this difficulty, all bets are still open.
Nevertheless, I still advocate the MWI view for practical reasons (which can sound bizarre): it helps elucidate EPR paradoxes and other weird quantum phenomena. It is a great TOOL for understanding the QM formalism.
Thanks ... I'm still thinking about this. Just read an article by A.J. Leggett, Reflections on the quantum measurement paradox, in Quantum Implications, Essays in Honor of David Bohm.

I have a question: what does the term, macroscopic, mean when used to refer to quantum states? I've always thought of macroscopic as meaning that something could be seen with the naked eye ... an unamplified (by instruments) visual phenomenon. So, what exactly is being seen in macroscopic quantum superpositions?
 
  • #72
Sherlock said:
Thanks ... I'm still thinking about this. Just read an article by A.J. Leggett, Reflections on the quantum measurement paradox, in Quantum Implications, Essays in Honor of David Bohm.
I have a question: what does the term, macroscopic, mean when used to refer to quantum states? I've always thought of macroscopic as meaning that something could be seen with the naked eye ... an unamplified (by instruments) visual phenomenon. So, what exactly is being seen in macroscopic quantum superpositions?

Don't mind me, I'm just maintaining an awareness of this thread. I found this to do with classical/macroscopic and quantum/microscopic scales.

I will use the example of a droplet of water. If I’m investigating the properties of the whole droplet, I obviously will consider the droplet as the physical entity. However, if I conduct an investigation in the atomic realm, I must consider the protons, neutrons, electrons, etc. as physical entities because in this case the properties of these minute entities are of interest to me. For differentiation purposes, I will call entities such as the droplet macroscopic entities or classical entities. I will call protons, neutrons, electrons, photons etc. microscopic entities or quantum entities. There is some problem in equating “classical” with “macroscopic” and “quantum” with “microscopic” because the distinction between classical and quantum is process dependent; and there are cases where the quantum scales could be very large. But since “macroscopic” and “microscopic” are popular terms and their meanings in most cases do correspond to “classical” and “quantum”, I will respect tradition and try to hang on to them.

From: http://www.thinhtran.com/heisenberg.html

And if this helps you, that would be nice... but, for me... :rolleyes:


Quantum superposition is the application of the superposition principle to quantum mechanics. The superposition principle is the addition of the amplitudes of waves from interference. In quantum mechanics it is the amplitudes of wavefunctions, or state vectors, that add. It occurs when an object simultaneously "possesses" two or more values for an observable quantity (e.g. the position or energy of a particle).

More specifically, in quantum mechanics, any observable quantity corresponds to an eigenstate of a Hermitian linear operator. The linear combination of two or more eigenstates results in quantum superposition of two or more values of the quantity. If the quantity is measured, the projection postulate states that the state will be randomly collapsed onto one of the values in the superposition (with a probability proportional to the square of the amplitude of that eigenstate in the linear combination).

The question naturally arose as to why "real" (macroscopic, Newtonian) objects and events do not seem to display quantum mechanical features such as superposition. In 1935, Erwin Schrödinger devised a well-known thought experiment, now known as Schrödinger's cat, which highlighted the dissonance between quantum mechanics and Newtonian physics.

In fact, quantum superposition does result in many directly observable effects, such as interference peaks from an electron wave in a double-slit experiment.

If two observables correspond to noncommutative operators, they obey an uncertainty principle and a distinct state of one observable corresponds to a superposition of many states for the other observable.

From: http://en.wikipedia.org/wiki/Quantum_superposition
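To put some numbers on the quoted rule, here is a small sketch (the observable and the state are arbitrary assumptions of mine): eigendecompose a Hermitian operator, expand a state in its eigenbasis, read the probabilities off the squared amplitudes, and simulate one "collapse".

Code:
import numpy as np

rng = np.random.default_rng(0)

A = np.array([[1.0, 0.5],
              [0.5, -1.0]])            # an assumed Hermitian observable
vals, vecs = np.linalg.eigh(A)         # its eigenvalues and orthonormal eigenstates

psi   = np.array([0.8, 0.6])           # an assumed normalized state
amps  = vecs.T @ psi                   # amplitudes <eigenstate_i | psi>
probs = amps**2                        # Born rule: outcome probabilities

outcome   = rng.choice(vals, p=probs)               # one simulated measurement
collapsed = vecs[:, np.argmax(vals == outcome)]     # projection postulate: keep that eigenstate

print("possible outcomes:", np.round(vals, 3))
print("probabilities    :", np.round(probs, 3))
print("this run         :", round(float(outcome), 3), "-> collapsed state", np.round(collapsed, 3))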

I think the crux of my question here in this thread is this: "is energy presented in the macroscopic form of matter even when it is not being observed?"

The idea of the question places a great deal of importance on the power of observation, which I instinctively don't agree with but cannot prove either way, because all experiments involve an observation at some point along the way.

The information that is transmitted over distance from remote instruments suggests that hard, macroscopic matter does not depend upon being observed to be in the form of matter. And when the information is viewed some time after being gathered... that makes matter seem all the more like a separate and independent state. I guess you'd call it a classical state.

I wonder if there is any proof that shows the nature of the quantum microcosm to be absolutely fundamental to the classical macrocosm. Would this not constitute a unifier?
 
  • #73
Sherlock said:
Thanks ... I'm still thinking about this. Just read an article by A.J. Leggett, Reflections on the quantum measurement paradox, in Quantum Implications, Essays in Honor of David Bohm.
I have a question: what does the term, macroscopic, mean when used to refer to quantum states? I've always thought of macroscopic as meaning that something could be seen with the naked eye ... an unamplified (by instruments) visual phenomenon. So, what exactly is being seen in macroscopic quantum superpositions?

Well (damn, we're getting metaphysical again... guess ZapperZ is going to get nervous at me :-p), let us first ask what it means, in a classical context, when "something is seen with the naked eye". Clearly, at the end of the day, what counts is a certain state of the particles and fields in your brain. So what corresponds to the (subjective) notion of "I see a red ball on the left side of the table" is a certain physical configuration of your brain. Which can, physiologically, be traced back to certain fibers of the optic nerve firing, which can be traced back to a certain image impinging on your retina, which can be traced back to EM radiation being emitted from a thermal light bulb, scattering off the red ball. But at the end of the day, it is the state of your brain which makes you have your subjective experience "I see a red ball on the left side of the table". We're so much used to doing this tracing back that we do not even realize it, and we associate immediately with the subjective experience "I see a red ball on the left side of the table" the notion that there *IS* a red ball on the left side of the table. This is what one could call the very reasonable hypothesis of an objective world which corresponds to our subjective experiences. But it is good to keep in mind that this is nothing else but a good working hypothesis, stimulated by that tracing back as described above, and which works in a lot of everyday situations. And in classical physics, indeed, the relationship is so evidently 1-1, that it is - by a lot of physicists - considered a total waste of time to even think about it.

Well, quantum mechanically, it is not so different, but the 1-1 relation is gone. Of course, in all quantum interpretations where you switch to classical physics at some point, after the switch, you're back in the above scenario and we can stop the discussion. But if we insist on an MWI viewpoint, so that we DO HAVE a quantum state in Hilbert space of the ball and all that, including our brains, the above "useless" relationship between subjective experience and the hypothesis of an objective world saves the day, so to speak. Indeed, we take it that it is the STATE OF THE BRAIN which determines subjective experiences. But now we have an apparent (?) difficulty:

Imagine that we have the state:
|red ball left on table>|brainstate1> + |green ball right on table> |brainstate 2>

If you analyse these brainstates carefully, you'd do exactly as in the above explanation: you'd find that brainstate1 corresponds to the classical state in the case we would have had a classical red ball on a classical table, and brainstate 2 corresponds to a classical green ball on the table etc...
So if you take this literally, you would have to say that you should be aware of BOTH situations. This clearly isn't the case, so an EXTRA RULE is needed: you can only be aware of ONE classical brainstate. Your subjective experiences can only be related to ONE TERM in the above expansion, and this will be assigned RANDOMLY (through the Born rule).
Maybe another "subjective you" will experience the complementary term, I don't know. What matters, is your subjective experience, to you, only.

So we could say that we have to work "in the basis of classical brain states which correspond to subjective experiences". There are good indications that this need not be forced, and that decoherence naturally leads to a coarse-grained Schmidt decomposition which has these states as your brain states. There are people who claim that they can derive the Born rule and do not need to postulate it (I don't believe them, but that's something else). So there are differences in the details: what some (like me) think you have to postulate, others (Deutsch, Zurek...) think they can derive somewhat naturally. But this is just a matter of aesthetics of the axiomatic structure of the theory. In practice, it simply comes down to the fact that you end up having your subjective experiences LINKED TO ONE classically-looking brainstate in the overall quantum state, and that the probability for such a state to be picked is given by the Born rule.

And now, decoherence is nice, in that whatever you do afterwards, you will only be effectively interacting with the states of the systems around you in the same term. You will not notice the existence of the other terms, because the inner product <other terms | your term> will always be effectively 0, due to the complexity of the environment states, which will always be "classically different" (at least one particle of the environment in the other term will be at a different position, say, than in "your" term). One says that these terms have "decohered". As such, it seems to you (in all your successive subjective experiences) that the world really looks much like a classical term. For instance, if you did end up having your subjective experiences derived from "brainstate1" (and not from 2), things really look to you as if the world were classical and reduced quantum-mechanically to: |red ball left on table>|brainstate1>

That doesn't mean that the other term "disappeared magically", but it will be almost impossible for you ever to detect anything of the second term, because this inner product is effectively zero from the moment that ONE particle (or one field mode) is in a sufficiently different position. As this is essentially uncontrollable once there has been contact with a thermal bath or so, this second term will remain undetectable forever.
You can just as well consider that the state of the world REDUCED to:

|red ball left on table>|brainstate1>

and from that you will be able to derive all your future subjective experiences. Hence you can apply the projection postulate in practice. But that means that you can apply it from the moment that "irreversible entanglement" with the environment took place, which explains why we can *usually* apply the projection postulate already early in the "chain" (usually after a physical interaction we call "measurement") and treat the rest classically. It doesn't alter the final classical situation which corresponds to the quantum state which will explain our subjective experiences, EVEN THOUGH the true state of the world remains:

|red ball left on table>|brainstate1> + |green ball right on table> |brainstate 2>

So this slightly alters our 1-1 concept of the link between "subjective experiences" and the "hypothesis of an objective world".
If we take the above stuff for real, the "truly objective state" of the world remains:

|red ball left on table>|brainstate1> + |green ball right on table> |brainstate 2>

your subjective experience "now" derives from |brainstate1>,

and the term |red ball left on table>|brainstate1> is sufficient to derive every possible future subjective experience, so in a way you could say that, AS FAR AS YOU ARE CONCERNED, the semi-objective state of the world is simply:

|red ball left on table>|brainstate1>

which justifies the use of the projection postulate for all practical purposes.
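For a feel for why that inner product between branches is "effectively zero", here is a crude toy estimate of my own (the per-particle overlap is an arbitrary assumption): if each of N environment particles ends up in a slightly different state in the two terms, the branch overlap is the product of the single-particle overlaps and dies off exponentially in N.

Code:
def branch_overlap(n_particles, single_overlap=0.99):
    """Overlap <other term | your term> when each of n environment particles
    contributes an (assumed) single-particle overlap of `single_overlap`."""
    return single_overlap ** n_particles

for n in (1, 10, 1_000, 10_000):
    print(f"N = {n:>6} environment particles -> branch overlap ~ {branch_overlap(n):.3e}")

# A thermal bath has something like 10^23 particles, so the overlap is zero for
# every practical purpose: the terms have decohered and the projection postulate
# becomes a safe shortcut.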

I said: we can *usually* apply the projection postulate early, from the moment we have a "measurement". But there are a few exceptions to this rule. One such exception is when we perform an EPR experiment. I have posted a symbolic treatment of such an experiment in the MWI view several times, and there, no "spooky action at a distance" is needed at all: the spooky action is only required when we have ERRONEOUSLY applied the projection postulate too early. In the MWI view, the projection postulate is a SHORTCUT (not something that "really happens") to be used when we are SURE that all quantum-mechanical interference with other terms is definitely excluded. And this is what goes wrong in EPR situations: an interference term appears when Alice and Bob come together to compare their notes. If the projection postulate, applied too early, excluded this interference term, you end up with some puzzling aspects.
 
  • #74
vanesch said:
Well (damn, we're getting metaphysical again... guess ZapperZ is going to get nervous at me :-p), let us first ask what it means, in a classical context, when "something is seen with the naked eye". Clearly, at the end of the day, what counts is a certain state of the particles and fields in your brain. So what corresponds to the (subjective) notion of "I see a red ball on the left side of the table" is a certain physical configuration of your brain. Which can, physiologically, be traced back to certain fibers of the optic nerve firing, which can be traced back to a certain image impinging on your retina, which can be traced back to EM radiation being emitted from a thermal light bulb, scattering off the red ball. But at the end of the day, it is the state of your brain which makes you have your subjective experience "I see a red ball on the left side of the table".
We're so much used to doing this tracing back that we do not even realize it, and we associate immediately with the subjective experience "I see a red ball on the left side of the table" the notion that there *IS* a red ball on the left side of the table.
I think that "clearly, at the end of the day" what counts is what we have seen, or felt, or heard, or smelled, or tasted. We're not aware of most of the components of the correlational/causal chain, this tracing back, as we navigate our way through the macroscopic world of our sensory perceptions. That's why it is what we perceive that is the primary basis for evaluating hypotheses about the physical world. That is, what is seen by us to happen in the macroscopic world is the final arbiter of statements about it --- and that's what my question was about: what is it that's actually seen in the macroscopic world that verifies the existence of so-called macroscopic quantum superpositions? We can see macroscopic superpositions, as they happen, in water and other media. But of course these aren't macroscopic quantum superpositions. Is it maybe that the term, macroscopic, has a technical and different meaning wrt quantum experiments than it does in ordinary language?
vanesch said:
This is what one could call the very reasonable hypothesis of an objective world which corresponds to our subjective experiences. But it is good to keep in mind that this is nothing else but a good working hypothesis, stimulated by that tracing back as described above, and which works in a lot of everyday situations. And in classical physics, indeed, the relationship is so evidently 1-1, that it is - by a lot of physicists - considered a total waste of time to even think about it.
I don't think it's a waste of time to think about it, but the idea that there is an objective world that is revealed to us via (as opposed to being created by) our senses, and that it exists whether we happen to be sensing it or not, is the only way of thinking about it that makes any ... sense. It's the basis of physical science. I also keep in mind that everything is rendered in macroscopic form, it's just that wrt quantum experimental phenomena the results (macroscopic though they be) can't be accounted for using the mathematical constructions of classical physics.
vanesch said:
Well, quantum mechanically, it is not so different, but the 1-1 relation is gone. Of course, in all quantum interpretations where you switch to classical physics at some point, after the switch, you're back in the above scenario and we can stop the discussion. But if we insist on an MWI viewpoint, so that we DO HAVE a quantum state in hilbert space of the ball and all that, including our brains, the above "useless" relationship between subjective experience and hypothesis of objective world saves the day, so to say. Indeed, we take it that it is the STATE OF THE BRAIN which determines subjective experiences. But now we have an apparent (?) difficulty:
Imagine that we have the state:
|red ball left on table>|brainstate1> + |green ball right on table> |brainstate 2>
If you analyse these brainstates carefully, you'd do exactly as in the above explanation: you'd find that brainstate1 corresponds to the classical state in the case we would have had a classical red ball on a classical table, and brainstate 2 corresponds to a classical green ball on the table etc...
So if you take this literally, you should have to say that you should be aware of BOTH situations.
You can take the QM formalism literally as being in 1-1 correspondence with an underlying quantum world, but how would you know? So, for good reason, what is taught is that, as far as anybody knows, the QM formalism is not in 1-1 correspondence with an underlying quantum world.
You can take the QM formalism literally as being in 1-1 correspondence with the macroscopic world, but the difference here is that we actually experience the macroscopic world. So, for good reason, what is taught is that when the formalism says that a pointer is in a superposition of pointing at 1 and pointing at 2 for some specific preparation, then we interpret this to mean that it will either be pointing at 1 or pointing at 2 at the end of each trial, and that the probability assigned to each unique result refers to the relative frequency of that result over a large number of trials. The various pointer positions are the possible, mutually exclusive, results of individual measurements ... that's what the formalism means -- not that these things are happening simultaneously in each trial.
Anyway, the superposition principle is employed because quantum theory is essentially a theory of wave mechanics. Waves of what ... where? The frequency distributions produced by the detectors. But what relationship do these waves have with the underlying quantum reality? Nobody knows exactly. But waves in any medium are still waves ... frequency distributions ... and, apparently, whatever is happening in the medium or media of the underlying quantum world is somewhat similar to the data waves produced by the detectors. That is, it seems that a wave is a wave is a wave --- no matter what medium it happens to be propagating in.
vanesch said:
This clearly isn't the case, so an EXTRA RULE is needed: you can only be aware of ONE classical brainstate.
We're only aware of one classical brainstate at a time in the first place. Well it isn't actually the brainstate that we're aware of, is it ? We're aware that eg. an instrument pointer is pointing in some specific direction, not all possible directions, at a certain time. The EXTRA RULE is the interpretational one that MWI adds which says that the expansion refers to simultaneously existing, mutually exclusive macroscopic states. Since this "clearly isn't the case" we don't take QM literally as a hi-fidelity representation of physical reality, and thus dismiss the MWI approach. It's what is seen that is the primary evaluational basis, not what is metaphysically postulated or mathematically formulated.
vanesch said:
Your subjective experiences can only be related to ONE TERM in the above expansion, and this will be assigned RANDOMLY (through the Born rule).
Maybe another "subjective you" will experience the complementary term, I don't know. What matters, is your subjective experience, to you, only.
My subjective experiences, and afaik everybody else's (unless they're all lying), ARE only related to ONE TERM in the above expansion for each trial.
Another subjective me experiencing the complementary term ? I see what you mean by your statement at the beginning of the post --- yes, I agree, this MWI approach is truly metaphysical.
vanesch said:
So we could say that we have to work "in the basis of classical brain states which correspond to subjective experiences". There are good indications that this need not be forced, and that decoherence naturally leads to a coarse-grained Schmidt decomposition which has these states for your brain states. There are people who claim that they can derive the Born rule and do not need to postulate it (I don't believe them, but that's something else). So there are differences in the details: what some think you have to postulate (like I do), others think they can derive somehow naturally (Deutsch, Zurek...). But this is just a matter of esthetics of the axiomatic structure of the theory. It simply, in practice, comes down to this: you end up having your subjective experiences LINKED TO ONE classically-looking brainstate in the overall quantum state, and the probability for such a state to be picked is given by the Born rule.
And now, decoherence is nice, in that whatever you will do afterwards, you will only be effectively interacting with the states of the systems around you in the same term. You will not notice the existence of the other terms, because the inner product <other terms | your term> will always be effectively 0 because of the complexity of the environment states, which will always be "classically different" (at least one particle of the environment in the other term will be at a different position, say, than in "your" term). One says that these terms "decohered". As such, it seems to you (in all your successive subjective experiences) that the world really looks much like a classical term. For instance, if you did end up having your subjective experiences derived from "brainstate1" (and not from 2), things really look to you as if the world were classical and reduced quantum-mechanically to: |red ball left on table>|brainstate1>
That doesn't mean that the other term "disappeared magically", but it will be practically impossible for you ever to detect anything of the second term, because this inner product is effectively zero from the moment that ONE particle (or one field mode) is in a sufficiently different position. As this is essentially uncontrollable once there has been contact with a thermal bath or so, this second term will remain undetectable forever.
You can just as well consider that the state of the world has REDUCED to:
|red ball left on table>|brainstate1>
and from that you will be able to derive all your future subjective experiences. Hence you can apply the projection postulate in practice. But that means that you can apply it from the moment that "irreversible entanglement" with the environment took place, which explains why we can *usually* apply the projection postulate already early in the "chain" (usually after a physical interaction we call "measurement") and treat the rest classically. It doesn't alter the final classical situation which corresponds to the quantum state which will explain our subjective experiences, EVEN THOUGH the true state of the world remains:
|red ball left on table>|brainstate1> + |green ball right on table> |brainstate 2>
I think that physical science takes the "true state of the world" to be what is seen, and what is seen is that either there is a red ball left on table or there is a green ball right on table at a certain time. A certain way of interpreting the QM formalism will require you to 'explain' why you only see one or the other at a certain time, but that way of interpreting the formalism is not required and in light of everything else that is known that pertains to this consideration it is, prima facie, a bad way of interpreting it. Just my current opinion.
vanesch said:
So this slightly alters our 1-1 concept of the link between "subjective experiences" and the "hypothesis of an objective world".
It doesn't quite do it for me. I'm looking at my keyboard now. Pondering which key to hit. Is my brainstate in a superposition of all the possible continuations -- and does that mean that when I hit a key then I've actually hit all of the keys but because of the branching I'm only subjectively aware of hitting one key ?? ... naaahh
vanesch said:
If we take the above stuff for real, the "truly objective state" of the world remains:
|red ball left on table>|brainstate1> + |green ball right on table> |brainstate 2>
your subjective experience "now" derives from |brainstate1>,
and the term |red ball left on table>|brainstate1> is sufficient to derive every possible future subjective experience, so in a way you could say that, AS FAR AS YOU ARE CONCERNED, the semi-objective state of the world is simply:
|red ball left on table>|brainstate1>
which justifies the use of the projection postulate for all practical purposes.
I don't know that that's a good way to approach understanding the adoption and retention of the projection postulate. It came, afaik, from considerations of how formulas and models from classical wave optics might be applied in the quantum theory, and it's retained because it works.
vanesch said:
I said: we can *usually* apply the projection postulate early, from the moment we have a "measurement". But there are a few exceptions to this rule. One such exception is when we perform an EPR experiment. I posted several times a symbolic treatment of such an experiment in MWI view, and then, no "spooky action at a distance" is needed at all: this spooky action is only required when we ERRONEOUSLY have applied the projection postulate too early. In the MWI view, the projection postulate is a SHORTCUT (not something that "really happens") when we are SURE that all quantum-mechanical interference with other terms is definitely excluded. And this is what goes wrong in EPR situations: an interference term appears when Alice and Bob come together to compare their notes. If the projection postulate, applied too early, excluded this interference term, you end up with some puzzling aspects.
The probabilities calculated using QM are only physically meaningful wrt statistical ensembles. The probabilistic interpretation of QM, which is the standard one because it's the one that makes the most sense, doesn't have any problem with "spooky action at a distance" --- which, iirc, you have written a nice exposition of.
I'm going back through Heisenberg's The Physical Principles of the Quantum Theory now. No doubt he's rolling over in his grave from this MWI stuff.
Thanks for the lengthy reply. I'm not convinced yet that the MWI approach makes good sense. And, I'm still wondering what it is that is actually seen superposed in the experiments that purport to produce macroscopic quantum superpositions. I just haven't had time to find and read any of the experimental papers yet. There's probably some technical definition for macroscopic that I haven't learned yet.
I would like to see a thread where the foundations of the MWI are discussed (by you and the other people who have written or are writing papers on it) at length. I could well be missing some important subtleties in my current bewilderment regarding why such a metaphysical approach is being entertained by so many obviously smart people. It seems to me that there is a much less problematic way to interpret quantum theory, and if adopting the MWI approach is, ultimately, really just a matter of taste ... then, I wonder ... why pursue it ?
 
  • #75
quantumcarl said:
From: http://www.thinhtran.com/heisenberg.html
And if this helps you that would be nice.. but, for me...
Thanks quantumcarl ... I only just briefly glanced at the article, but saved it and will read it. I didn't know there was a problem with the uncertainty relations ... but it's always a good thing to consider other perspectives.
quantumcarl said:
This doesn't really answer my question either.
Now, to your question ...
quantumcarl said:
I think the crux of my question here in this thread is this: "is energy presented in the macroscopic form of matter even when it is not being observed?"
The "macroscopic form of matter" is the objects and events that we see with our visual perception. The kinetic energy of an object has to do with its mass (for practical purposes, its weight) and its velocity. The answer to your original question about whether, according to QM, the moon exists when we're not looking at it was yes.
So yes, "energy is presented in the macroscopic form of matter even when it is not being observed."
quantumcarl said:
The idea behind the question places a great deal of importance on the power of observation, which I instinctively don't agree with but cannot prove either way, because all experiments involve an observation at some point along the way.
Trust your instincts on this one. Even though there's no way to 'prove' that, say the moon, exists when we're not observing it --- assuming that it does is the only way of thinking about it that makes any sense, as far as I can tell. This assumption is an integral part of our ordinary language and is a basic assumption of the physical sciences, including quantum theory.
quantumcarl said:
The information that is transmitted over distance from remote instruments suggests that hard, macroscopic matter does not depend upon being observed to be in the form of matter. And when the information is viewed some time after being gathered... that makes matter seem all that much more of a separate and independent state. I guess you'd call it a Classical state.
Well, ultimately, things have to be amplified (rendered big enough or complex enough for us to see) to the macroscopic level in order to be able to say anything meaningful about them.
quantumcarl said:
I wonder if there is any proof that shows the nature of the quantum microcosm to be absolutely fundamental to the classical macrocosm. Would this not constitute a unifier?
From sub-submicroscopic stuff to super-duper macroscopic stuff, from the very simple to the very complex, it's all going along for the same ride. What's fundamental is the behavioral laws that objects, events, evolutions on any scale in any medium must obey.
I guess that doesn't answer your question ... ok, I don't know.
But I think your question(s) about the relationship between consciousness (awareness) and matter (other than the material brain states which define consciousness) have been answered. Our sensory perception of the world isn't an act of creation. On the other hand, the experimental production of quantum phenomena is an act of creation. Before the moon was formed it didn't exist. Once it has been brought into existence, our observations of it don't appreciably alter its existence, and it doesn't pop into and out of existence each time we look at it and look away. The same with photons and electrons. Once the data has been produced by the instruments it exists, whether we're looking at it or not, and whether we like it or not.
 
  • #76
Sherlock said:
what is it that's actually seen in the macroscopic world that verifies the existence of so-called macroscopic quantum superpositions?

No, I think you have it backwards about the reasons to consider macroscopic quantum superpositions. Macroscopic quantum superpositions are simply part of the mathematical formalism of quantum theory. Maybe they don't exist, which then only means that the postulates of quantum theory are of limited use. As I tried to point out, macroscopic quantum superpositions appear naturally in the formalism of quantum theory. You have to do something to the formalism of quantum theory if you DON'T want to obtain them, and that something usually is quite "ugly", like introducing non-locality, and an unphysical interaction based upon an ill-defined concept, "measurement".
Macroscopic superpositions are unavoidable (in the formalism) if you take the axioms of quantum theory seriously to be valid universally. There's really nothing more to it !

It is when you take your (quite understandable) "naah ! Too crazy to be true" viewpoint that you must conclude that macroscopic superpositions are, after all, not possible, and AS A CONSEQUENCE, that the postulates of quantum theory are not universally applicable. THIS IS VERY WELL POSSIBLE, I'm not denying that. But then we're left with a riddle: if the axioms of quantum theory are NOT universally valid, then what is ? Each time we do that (by introducing some "objective" mechanism of collapse, or by introducing other things, like in Bohm's theory) we seem to have a serious clash with locality, another cherished principle.
This is why I take the viewpoint: can we not "save the day" for the axioms of quantum theory, and really take them as universally valid ? And then, what should we require ? We know that the consequence of taking the axioms of quantum theory as universally valid is the existence of macroscopic superpositions *as theoretical constructions*. This is unavoidable in the formalism. And this clashes with your "naah, too crazy to be true". But cannot this macroscopic superposition NEVERTHELESS EXPLAIN OUR SUBJECTIVE EXPERIENCES ? The point is that if there's a way to view these superpositions as nevertheless explaining our subjective experiences, at the end of the day that's good enough for a physical theory. If that theory describes an objective world state, from which one can *derive* subjective experiences that *correspond to what we actually experience*, then this theory is good enough as a physical theory. Of course, a theory in which the subjective experiences correspond directly to the objective world state (as in classical physics) is probably intuitively more acceptable. But imagine the following situation: you're in the lab of a brain surgeon, who implants a few electrodes in your brain, and by sending certain signals through the electrodes, he can make you see pink elephants. What is now the most appropriate theory ? That there *are* pink elephants, or that the true state of the world is a manipulated brain from which we can derive that you should *experience the presence* of pink elephants ? In this case, it is probably evident that the last theory is the most useful. Which is a naive attempt, on my part, at trying to make you see that a theory that can explain all your subjective experiences is all you can require, finally, of a physical theory.

And that is what MWI does. It saves the universality of the QM postulates, it needs to postulate a non-trivial relationship between the world state and your subjective experience, but if you do so, it *explains* perfectly well those subjective experiences, and why you *have the impression* that the world is classical, and why you do not manifestly subjectively observe macroscopic objects in superposition.

The gain is the following: there is no ambiguity as to the difference between a physical phenomenon and a "measurement" (as is necessary in all collapse views) ; there is no need for non-locality and no clash with SR, and we can take the quantum state (the vector in hilbert space) as the true, objective description of the state of the world.


I don't think it's a waste of time to think about it, but the idea that there is an objective world that is revealed to us via (as opposed to being created by) our senses, and that it exists whether we happen to be sensing it or not, is the only way of thinking about it that makes any ... sense. It's the basis of physical science.

Yes, I agree with that. It is because I agree with that, that I think that one should take the "state description" of quantum theory seriously. And if you take the wavefunction as something "really out there", it cannot "disappear" at a certain macroscopic scale. Hence the necessity of the reality of macroscopic superpositions. Unless quantum theory is really fundamentally misguided (as I said, that's an option, but, in the current state of affairs, not a very helpful one).

You can take the QM formalism literally as being in 1-1 correspondence with an underlying quantum world, but how would you know?

Because it is the only coherent formalism that we have ! As I said, it is the starting point for an MWI view. I think that if you want to interpret a formalism of a physical theory, the least you can do is to take the formalism seriously.

So, for good reason, what is taught is that, as far as anybody knows, the QM formalism is not in 1-1 correspondence with an underlying quantum world.
You can take the QM formalism literally as being in 1-1 correspondence with the macroscopic world, but the difference here is that we actually experience the macroscopic world. So, for good reason, what is taught is that when the formalism says that a pointer is in a superposition of pointing at 1 and pointing at 2 for some specific preparation, then we interpret this to mean that it will either be pointing at 1 or pointing at 2 at the end of each trial, and that the probability assigned to each unique result refers to the relative frequency of that result over a large number of trials. The various pointer positions are the possible, mutually exclusive, results of individual measurements ... that's what the formalism means -- not that these things are happening simultaneously in each trial.

I'm sorry but that is NOT what the formalism says. It is what a certain interpretation of the formalism says (the Bohr view), and as such it denies any descriptive value to quantum theory. This comes about because people said "naah, too crazy!".

So what's best ? A theory that DOES give you a precise description of what physically happens, and how what physically happens relates to what you subjectively experience as a consequence of it (MWI), or a theory that DOESN'T SAY ANYTHING OF WHAT'S HAPPENING, but allows you to calculate some probabilities of outcomes without explaining any underlying mechanism ?

Anyway, the superposition principle is employed because quantum theory is essentially a theory of wave mechanics. Waves of what ... where? The frequency distributions produced by the detectors. But what relationship do these waves have with the underlying quantum reality? Nobody knows exactly. But waves in any medium are still waves ... frequency distributions ... and, apparently, whatever is happening in the medium or media of the underlying quantum world is somewhat similar to the data waves produced by the detectors. That is, it seems that a wave is a wave is a wave --- no matter what medium it happens to be propagating in.

This is not true. Quantum theory started off as a kind of wave mechanics, but we're now far beyond that POV. The so-called waves are now to be seen as superpositions of position states: as a point particle being at different places at the same time. Spin superpositions cannot be seen as "waves".
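A trivial sketch of what I mean (the amplitudes are made up): a spin-1/2 superposition is just a two-component complex vector; nothing in it is a spatial wave, and its only "wave-like" content is the Born-rule weights of the two outcomes.

Code:
import numpy as np

# A spin-1/2 superposition: just a 2-component complex vector,
# |psi> = a|up> + b|down>. Nothing is "waving" in space here.
a, b = 1 / np.sqrt(2), 1j / np.sqrt(2)        # illustrative amplitudes
psi = np.array([a, b])

print("norm           :", np.vdot(psi, psi).real)    # 1.0
print("P(up), P(down) :", abs(a)**2, abs(b)**2)       # Born-rule weights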

Well it isn't actually the brainstate that we're aware of, is it ? We're aware that eg. an instrument pointer is pointing in some specific direction, not all possible directions, at a certain time.

Just as you were aware of the pink elephants when you were in the brain surgeon's room...

The EXTRA RULE is the interpretational one that MWI adds which says that the expansion refers to simultaneously existing, mutually exclusive macroscopic states. Since this "clearly isn't the case" we don't take QM literally as a hi-fidelity representation of physical reality, and thus dismiss the MWI approach.

Many people do this, but it is clearly based only upon "gut feeling". I would agree with you that a theory that *introduces* parallel macroscopic worlds just for the sake of it would be overkill. However, when the mathematical formalism shows them, then the overkill, to me, is to artificially remove them in certain, very ill-defined cases, and as such violate basic principles of the axiomatic structure (such as locality, unitarity, the superposition principle).

It's what is seen that is the primary evaluational basis, not what is metaphysically postulated or mathematically formulated.

Oops, then the world is still a flat disk ! I think that if you want to interpret physically a mathematical formalism, you should give priority to the mathematical structure. If it allows you to deduce what you *should see* (although that is not what "is out there"), then that's good enough for me.

My subjective experiences, and afaik everybody else's (unless they're all lying), ARE only related to ONE TERM in the above expansion for each trial.

Eh, yes, that's exactly what is postulated that you SHOULD see.

I think that physical science takes the "true state of the world" to be what is seen, and what is seen is that either there is a red ball left on table or there is a green ball right on table at a certain time.

It is difficult to let that notion go, I know. The pink elephant is really there. In fact, the notion of "what is seen is what is there" is indeed the pillar of classical physics. But I'm trying to point out that this is maybe too strong a requirement for a physical theory: if the theory tells you what you are supposed to see, and that's what you see, is that not good enough ? If the price of clinging to the requirement that what you see is the true state of the world is a lot of FORMAL difficulties and apparent paradoxes, isn't it preferable to go for the lesser requirement of only having the formalism explain what you ought to see, even if something else is out there ?

I'm looking at my keyboard now. Pondering which key to hit. Is my brainstate in a superposition of all the possible continuations -- and does that mean that when I hit a key then I've actually hit all of the keys but because of the branching I'm only subjectively aware of hitting one key ?? ... naaahh

Apart from the intuitive strangeness, what's wrong with it ? If the same crazy theory also allows you to find out that everything will now appear to you *as if* you only hit one key ?

I don't know that that's a good way to approach understanding the adoption and retention of the projection postulate. It came, afaik, from considerations of how formulas and models from classical wave optics might be applied in the quantum theory, and it's retained because it works.

No, that's not the point. The EPR experiments relate only to optical experiments because of experimental possibilities. But the EPR situation can be set up for just any quantum system: atoms, electrons... take your pick.

The probabilities calculated using QM are only physically meaningful wrt statistical ensembles.

This is a positivist viewpoint to which I don't subscribe, because it *completely gives up on any attempt to describe physical reality at all*. In fact, such a viewpoint is even more "metaphysical" than the MWI viewpoint, because at least, in the MWI viewpoint, there IS a description of physical reality (the wavefunction), even though it doesn't correspond 1-1 to your subjective experience and an extra rule is required to deduce the subjective experience. But in the positivist viewpoint, there IS NO REALITY AT ALL apart from your subjective experience!

The probabilistic interpretation of QM, which is the standard one because it's the one that makes the most sense, doesn't have any problem with "spooky action at a distance" --- which, iirc, you have written a nice exposition of.

It cannot have any problem with spooky action at a distance because there IS no description of physical reality in that viewpoint! The only thing that "exists" are your observations. So I find it strange that one can adhere to a viewpoint that there IS NO REALITY AT ALL, and make objections to a theory that gives you a certain description of reality, and a deduction of what are your subjective experiences from it.

Unless, of course, you take the viewpoint that there IS a reality, but that QM is simply not describing it correctly. As I said, that's a possibility. But is it fruitful to adhere to such a viewpoint if we don't have a correct description that can replace it ?

And, I'm still wondering what it is that is actually seen superposed in the experiments that purport to produce macroscopic quantum superpositions.

If you take the viewpoint that quantum theory does not describe reality (but only in some way is capable of producing right statistical descriptions), then *nothing* corresponds to "superposition". It is just part of the calculational algorithm of probabilities of your observations. This is the problem (in my opinion) of the positivist viewpoint.
Nothing in the formalism corresponds to anything "out there", it is just an algorithm. It is very difficult (in my view) to develop a "physical intuition" for such a formalism - while the MWI viewpoint allows you exactly that: to develop a physical intuition of the formalism (a strange intuition all right, but the funny thing is that you get used to it!). To me, the positivist viewpoint reduces quantum theory to a kind of black box that spits out numbers that are probabilities. As such, it is a "non-interpretation" of the theory. It doesn't interpret the elements of the formalism as physical things.

It seems to me that there is a much less problematic way to interpret quantum theory, and if adopting the MWI approach is, ultimately, really just a matter of taste ...

Up to a point, of course it is a matter of taste and intellectual entertainment. But I think one has to make the choice between the following fundamental viewpoints:

A) there exists an objective reality that can be described by a physical theory

B) there doesn't exist such an objective reality.

Ok, there is a bizarre possibility,

C) there exists an objective reality, but it is not compatible with any mathematical theory (most religions somehow fit into this class)

Now, if we split this up further:

A)
1) quantum theory describes this objective reality

2) quantum theory does not describe this objective reality

B)
Quantum theory (as any theory) does not describe the non-existing objective reality

C)
As objective reality is not describable by a mathematical theory, the quantum formalism will not describe it either.

Your point of view is compatible with A2) and with B) and with C).
MWI is (I think) the only viewpoint that is compatible with A1). I'm open to A2), but I don't know of any formulation that doesn't have big problems in one or another way. I'm not open to B: I think that before giving up all together on an objective reality, we should think again. I don't even understand very well C because it means that we are living in an a-logical universe, maybe dictated by the will of the gods or whatever.

I'm only pointing out that A1) is perfectly possible - something which seems to be dismissed out of hand because of the outlandishness of the concept (and not because of the contradiction with observation). Given that it is *possible* to assume that quantum theory describes physical reality, I'd argue that the best interpretation of the formalism is exactly that. If it leaves you with the gut feeling "naaah!", then I find that not a sufficient argument.

People have tried hard to think of A2, but I don't find any approach as yet promising ; the local realists deny EPR situations, and Bell's theorem dictates that if you accept the quantum predictions also in EPR situations, you will have big difficulties with locality and the principle of relativity.

Of course, from the day that a nice theory explains to us why quantum theory seemed to work the way we thought, we'll know more, and we could then possibly dismiss this "subjective experience" viewpoint. But I'm pointing out that we don't HAVE such a formalism yet, so it is a bit difficult to take it as an interpretation of a theory we DO HAVE. Because one should be open to the possibility that the formalism we have is ultimately right too, and that this desired-for theory does not exist.

Nevertheless, I stick with my claim that the MWI view gives you the best possible *intuition* about the formalism (whether it ultimately will prove right or wrong doesn't matter). I tried several times to show how naturally one can interpret "weird experimental results" using this view. That by itself, I find already sufficient reason to consider it.

It is a bit as if discussions about free will would interfere with the (evident) interpretation of classical deterministic physics. I think that assuming determinism is part of the interpretational scheme of classical physics, and pointing out the "evident" problem of free will with such a scheme doesn't really help you develop a feeling for the formalism of classical physics. In the same way, I view the MWI viewpoint as the most natural one sticking to the formalism of quantum theory ; and pointing out evident problems of "naah! Too crazy" doesn't help you at all in understanding the formalism and getting a feeling for it.
 
  • #77
Ok vanesch, thanks. I just very quickly scanned your last post and it seems very clearly laid out. More so than anything I've read anywhere else so far. I must take some more time to think about your points ...
 
  • #78
vanesch said:
Well, quantum mechanically, it is not so different, but the 1-1 relation is gone. Of course, in all quantum interpretations where you switch to classical physics at some point, after the switch, you're back in the above scenario and we can stop the discussion. But if we insist on an MWI viewpoint, so that we DO HAVE a quantum state in hilbert space of the ball and all that, including our brains, the above "useless" relationship between subjective experience and hypothesis of objective world saves the day, so to say. Indeed, we take it that it is the STATE OF THE BRAIN which determines subjective experiences. But now we have an apparent (?) difficulty:
Imagine that we have the state:
|red ball left on table>|brainstate1> + |green ball right on table> |brainstate 2>
If you analyse these brainstates carefully, you'd do exactly as in the above explanation: you'd find that brainstate1 corresponds to the classical state in the case we would have had a classical red ball on a classical table, and brainstate 2 corresponds to a classical green ball on the table etc...
So if you take this literally, you should have to say that you should be aware of BOTH situations. This clearly isn't the case, so an EXTRA RULE is needed: you can only be aware of ONE classical brainstate. Your subjective experiences can only be related to ONE TERM in the above expansion, and this will be assigned RANDOMLY (through the Born rule)
Hi Patrick... fascinating stuff.
But why would the brain play such a special role? It seems to me that already before the light reaches your eyes, the "collapse" to the red ball should have occurred. Why should such a special status be assigned to the brain? (other than making EPR type experiments easier to swallow:smile: ).
(I also enjoyed the discussion on holism and reductionism with ZapperZ...except that it felt to me like Zapper was going back and forth from "practical holism" to "fundamental holism" and not really caring as much about the distinction between the two, which must have caused you some frustration. The way I saw it, you were arguing against fundamental holism while saying that practical holism was, of course, a useful approach (indeed, there is no way around it for most calculations!). On the other hand, Zapper would be defending the use of *practical* holism while at the same time dropping hints of believing in *fundamental* holism, but then going back to practical holism and so on...It must have felt like trying to hit a moving target. I do think that when ZapperZ said that most practicing physicists believe in holism (he said it differently, but it was the gist of it), he meant in *practical* holism. That is not surprising. Even a die-hard reductionist would say that it would be foolhardy to study superfluidity, say, ab initio. In that sense, *nobody* would disagree that practical holism is a useful (and necessary) scientific tool. The real discussion is whether *fundamental* holism can even be envisioned and debated. *That* is the more exciting discussion. ZapperZ made some comments and brought some arguments but would go back to saying that it is unanswerable and then switching back to defending practical holism.)
Pat
 
  • #79
nrqed said:
(I also enjoyed the discussion on holism and reductionism with ZapperZ...except that it felt to me like Zapper was going back and forth from "practical holism" to "fundamental holism" and not really caring as much about the distinction between the two, which must have caused you some frustration. The way I saw it, you were arguing against fundamental holism while saying that practical holism was, of course, a useful approach (indeed, there is no way around it for most calculations!). On the other hand, Zapper would be defending the use of *practical* holism while at the same time dropping hints of believing in *fundamental* holism, but then going back to practical holism and so on...It must have felt like trying to hit a moving target. I do think that when ZapperZ said that most practicing physicists believe in holism (he said it differently, but it was the gist of it), he meant in *practical* holism. That is not surprising. Even a die-hard reductionist would say that it would be foolhardy to study superfluidity, say, ab initio. In that sense, *nobody* would disagree that practical holism is a useful (and necessary) scientific tool. The real discussion is whether *fundamental* holism can even be envisioned and debated. *That* is the more exciting discussion. ZapperZ made some comments and brought some arguments but would go back to saying that it is unanswerable and then switching back to defending practical holism.)
Pat

Welp, having had my name resurrected again under this topic, I will have to address this, especially what is perceived to be my waffling back and forth on this so-called "fundamental holism".

We all know my view on "practical holism", so let's leave that there. vanesch asked me whether, if we had infinite computing power and could really do a computation for a gazillion particles exactly, we shouldn't, IN PRINCIPLE, be able to derive ALL of the emergent phenomena.

I replied that, at best, I don't know, but my hunch is: no, you can't. I will explain why I believe so.

When you look at the liquid state (water, let's say), and you write down ALL the possible interactions of every single water molecule, do you think that by looking at the dynamics of the system you could predict a phase transition at certain temperatures, even in principle? No you can't. Why? The transition between liquid and solid, for example, involves a broken symmetry. This is a symmetry that does NOT exist in one phase or the other. So it isn't JUST a matter of being able to write down ALL of the interactions of a system, because at SOME point, there is a symmetry principle that one needs to introduce into the system, maybe to get an ordered state that wasn't initially there.

We can take this even further by looking at a quantum phase transition. You do NOT get a quantum critical point out of a system just because you can compute all of the interactions. There isn't an a priori way to model it so that it drops into your lap naturally.

All of the condensed matter phenomena that I have listed involve such transitions. The emergence of superconductivity out of a sea of charge carriers involves a broken symmetry (many even claim it is a 2nd-order phase transition). One has no way of extrapolating such dynamics simply by looking at how things look in one state.

So, not only is reductionism not useful in giving anything meaningful about emergent phenomena, but in my opinion, based on what we already know, it is entirely possible that it can't, even in principle, derive those phenomena.

Now, was I still waffling about this issue in this post?

Zz.
 
  • #80
ZapperZ said:
Welp, having had my name resurrected again under this topic, I will have to address this, especially what is perceived to be my waffling back and forth on this so-called "fundamental holism".

My sincerest apologies if my comments may have appeared offensive. They were not meant to be. My main point was that my impression was that Vanesch wanted to focus exclusively on discussing arguments (for or against) for "fundamental holism", whereas I had the impression that this was a much less relevant issue, that the relevant issue is that *practical* holism was necessary (which, I think, can't be denied).
My apologies again. I will shut up and go back to lurking mode :smile:


ZapperZ said:
Now, was I still waffling about this issue in this post?
Zz.

No, you attacked it up front. I see a lot of interesting directions this argument could go and I will be watching with interest in the lurking mode.


Thank you and my apologies, once more.

Pat
 
  • #81
nrqed said:
My sincerest apologies if my comments may have appeared offensive. They were not meant to be. My main point was that my impression was that Vanesch wanted to focus exclusively on discussing arguments (for or against) for "fundamental holism", whereas I had the impression that this was a much less relevant issue, that the relevant issue is that *practical* holism was necessary (which, I think, can't be denied).
My apologies again. I will shut up and go back to lurking mode :smile:
No, you attacked it up front. I see a lot of interesting directions this argument could go and I will be watching with interest in the lurking mode.
Thank you and my apologies, once more.
Pat

Oh no, you may have read it wrong. I wasn't the least bit offended at all. I was just a bit amused that it is being resurrected all over again.

Zz.
 
  • #82
Hi Pat !

Nice to see you here again!

nrqed said:
Hi Patrick... fascinating stuff.
But why would the brain play such a special role? It seems to me that already before the light reaches your eyes, the "collapse" to the red ball should have occurred. Why should such a special status be assigned to the brain? (other than making EPR type experiments easier to swallow:smile: ).

You're implicitly assuming that there *is* actual collapse, but this time of "brain states" (von Neumann had this argument in fact in his book ; I typed it out here once, and will try to find it again). But the point is that there is NO "objective" collapse, not even of brain states.
The thing that is supposed to happen is that the RELATIONSHIP BETWEEN BRAINSTATES AND SUBJECTIVE EXPERIENCE is the one that "does the picking of a term" (without changing the physical state, which remains in superposition).
In classical physics, it is assumed that the entire brain state (the positions and momenta of all particles in your brain, and the amplitudes and phases of all relevant field modes in your brain) "generates" a subjective experience which is what "you are aware of". In MWI, it is not so different ; only it is not the ENTIRE brainstate (which is entangled with several states of the environment), but just ONE, NONENTANGLED state which generates a subjective experience. If it is postulated (or derived, as some claim, though I think they're wrong, because of circularity or hidden assumptions) that your subjective experience can only derive from ONE brain state in the sum of states, then for your subjective experience it will appear as if this term is the only one that exists. Of course, as long as interference effects with the "neighbouring" terms are non-zero, this would give very strange-looking results (and IMO, this is exactly what happens in EPR setups). But once the different terms are entirely decohered, we'll "never hear again of these other terms" because our subjective experiences are only derived from the brain state in ONE of these terms.
So, as far as your subjective experiences are concerned, collapse has occurred.
Nevertheless, the other brainstates, in other terms, are still there. You are simply not subjectively aware of them.

I think that this is the essence of any MWI view.
(and as I said, those parallel worlds are not *introduced* because I have been smoking some crack, they *appear naturally* when you apply the axiomatic structure of quantum theory)
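If it helps, here is a deliberately naive sketch of that "extra rule", with invented amplitudes; it is bookkeeping for the postulate, not a mechanism: the global state keeps all the terms, and a subjective experience is simply associated with one of them with Born-rule weight.

Code:
import numpy as np

# Naive sketch of the "extra rule": the global state keeps ALL branches,
# but a subjective experience is associated with ONE branch, picked with
# Born-rule weight |amplitude|^2. Nothing here collapses the state.
branches = {
    "|red ball left>|brainstate1>":    0.8,   # illustrative amplitudes
    "|green ball right>|brainstate2>": 0.6,
}
labels = list(branches)
weights = np.array([abs(a) ** 2 for a in branches.values()])
weights = weights / weights.sum()             # 0.64 and 0.36 here

rng = np.random.default_rng(0)
experienced = rng.choice(labels, p=weights)
print("subjectively experienced branch:", experienced)
# The other term is still "there" in the global state; you are simply not
# aware of it, and after decoherence you never will be.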

(I also enjoyed the discussion on holism and reductionism with ZapperZ...except that it felt to me like Zapper was going back and forth from "practical holism" to "fundamental holism" and not really caring as much about the distinction between the two, which must have caused you some frustration.

I indeed could not make out what the claim was!
 
  • #83
ZapperZ said:
vanesch asked me whether, if we had infinite computing power and could really do a computation for a gazillion particles exactly, we shouldn't, IN PRINCIPLE, be able to derive ALL of the emergent phenomena.
I replied that, at best, I don't know, but my hunch is: no, you can't. I will explain why I believe so.

So I take it that you claim that, if we had that computing power, we would predict DIFFERENT outcomes for the observables that are supposed to measure those phase transitions than what is really observed. Take the refractive index (or a suitable observable that corresponds to it, such as the position of the spot of a light beam refracted by the sample): this is nothing else but a hermitean operator that acts upon the (very big) Hilbert space of the particles and relevant field modes of all the constituents of the sample and the measurement environment. Evaluate it over an ensemble of initial states corresponding, say, to the microcanonical ensemble or whatever (a reasonably suitable initial condition for the real experiment at temperature T), and clearly I WILL find a probability distribution for the position of the light spot on the screen, which gives me a probability distribution for the observed refractive index. I expect this distribution to be rather peaked around one value, btw. Now, you claim that this spot will NOT be in the right position as a function of T ?

I find this strange, because this IS what is done in toy models like the Ising model, where observables DO show phase transitions.
For instance:
http://www.physics.cornell.edu/sethna/teaching/sss/ising/intro.htm

http://scienceworld.wolfram.com/physics/IsingModel.html

Of course, this is a toy model, but it shows you how phase transitions can occur from microdynamics "ab initio". The complexity of real-world systems is of course far too great for us to actually carry out such a calculation, but I don't buy the argument that phase transitions cannot, a priori, be calculated ab initio if we know the microdynamics.
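For what it's worth, here is a rough Metropolis sketch along those lines (lattice size, sweep count and temperatures are invented by me purely for illustration): take J = 1 as the "fundamental" coupling of the toy universe, and the magnetization collapses near the known critical temperature Tc ≈ 2.27, i.e. the phase transition drops out of the microdynamics alone.

Code:
import numpy as np

# Rough Metropolis sketch of the 2D Ising model (toy parameters, illustration
# only). With J = 1 taken as the "fundamental" coupling of this toy universe,
# the magnetization stays large below the exact Tc ~ 2.269 and collapses
# above it: the phase transition comes out of the microdynamics alone.
def magnetization(T, L=16, sweeps=600, seed=1):
    rng = np.random.default_rng(seed)
    spins = np.ones((L, L), dtype=int)            # ordered start
    samples = []
    for s in range(sweeps):
        for _ in range(L * L):
            i, j = rng.integers(L, size=2)
            # energy cost of flipping spin (i, j), periodic boundaries
            nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j] +
                  spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
            dE = 2 * spins[i, j] * nb             # J = 1, k_B = 1
            if dE <= 0 or rng.random() < np.exp(-dE / T):
                spins[i, j] *= -1
        if s >= sweeps // 2:                      # measure after "equilibration"
            samples.append(abs(spins.mean()))
    return float(np.mean(samples))

for T in [1.5, 2.0, 2.27, 2.6, 3.5]:
    print(f"T = {T:4.2f}   |m| ~ {magnetization(T):.2f}")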
 
  • #84
vanesch said:
So I take it that you claim that, if we had that computing power, we would predict DIFFERENT outcomes for the observables that are supposed to measure those phase transitions than what is really observed. Take the refractive index (or a suitable observable that corresponds to it, such as the position of the spot of a light beam refracted by the sample): this is nothing else but a hermitean operator that acts upon the (very big) Hilbert space of the particles and relevant field modes of all the constituents of the sample and the measurement environment. Evaluate it over an ensemble of initial states corresponding, say, to the microcanonical ensemble or whatever (a reasonably suitable initial condition for the real experiment at temperature T), and clearly I WILL find a probability distribution for the position of the light spot on the screen, which gives me a probability distribution for the observed refractive index. I expect this distribution to be rather peaked around one value, btw. Now, you claim that this spot will NOT be in the right position as a function of T ?
I find this strange, because this IS what is done in toy models like the Ising model, where observables DO show phase transitions.
For instance:
http://www.physics.cornell.edu/sethna/teaching/sss/ising/intro.htm
http://scienceworld.wolfram.com/physics/IsingModel.html
Of course, this is a toy model, but it shows you how phase transitions can occur from microdynamics "ab initio". The complexity of real-world systems is of course far too great for us to actually carry out such a calculation, but I don't buy the argument that phase transitions cannot, a priori, be calculated ab initio if we know the microdynamics.

Ah, but look at the Ising model closely. You have to put in, BY HAND, not ab initio, the interaction strength.

It is because of this that many people still claim that we don't quite fully understand the cause of magnetism in matter. The Heisenberg coupling that can determine if something is going to be a ferromagnet or an antiferromagnet, for example, has to be put in by hand.

So no. Such a model is not fully ab initio. It requires you to make an a priori input to make it mimic whatever system you are trying to get.

Zz.
 
  • #85
ZapperZ said:
Ah, but look at the Ising model closely. You have to put in, BY HAND, not ab initio, the interaction strength.

Naah, that's not what I mean. That's when you use the Ising model to model another (real-world) system, like ferromagnetism or so. Then it becomes indeed a *phenomenological model*.

But if you would consider a universe where the Ising hamiltonian is exact and the interaction strength is a fundamental constant of that universe, then the Ising model is "ab initio" in that universe. And you have phase transitions ab initio in that universe, which proves that it is possible to obtain phase transitions, ab initio, from microscopic physics, at least in the quantum physics of a toy universe. Which goes against the claim that phase transitions cannot be derived ab initio. It can, at least in some toy universes.

Of course, when we want to apply this Ising model to a real-world situation, it is not ab initio; it is one of those typical phenomenological models which are so common in condensed matter physics, which indeed illustrate only *some aspects* of the physics, and in which we have to make some (educated or not) guesses about the interaction model and parameters.
 
  • #86
vanesch said:
But if you would consider a universe where the Ising hamiltonian is exact and the interaction strength is a fundamental constant of that universe, then the Ising model is "ab initio" in that universe. And you have phase transitions ab initio in that universe, which proves that it is possible to obtain phase transitions, ab initio, from microscopic physics, at least in the quantum physics of a toy universe. Which goes against the claim that phase transitions cannot be derived ab initio. It can, at least in some toy universes.

But this is what I claim to be something input by hand. I've done such a modeling, although not very extensively. I had to know the interaction strength, and how many neighbors to take into account. I play around with those "free parameters" till I could get the ordered state, or whatever state I was looking for. Was the Hamiltonian exact? Sure! Did I let everything run by itself? Sure! Did it require an input from me? DEFINITELY! So *I* was the "source" for the phase transition or any ordered state.

ZZ.
 
  • #87
ZapperZ said:
But this is what I claim to be something input by hand. I've done such a modeling, although not very extensively. I had to know the interaction strength, and how many neighbors to take into account. I play around with those "free parameters" till I could get the ordered state, or whatever state I was looking for. Was the Hamiltonian exact? Sure! Did I let everything run by itself? Sure! Did it require an input from me? DEFINITELY! So *I* was the "source" for the phase transition or any ordered state.

If what you put in "by hand" is now declared "fundamental microphysics" (of the toy universe), then we are both right :approve:

Did Maxwell put the terms into his equations by hand ? Sure. But that cannot be what you mean, right ?

Now, I think I know what you mean: in the Ising case, it's all made up *in order to* generate things like a phase transition. But I read this differently: I read it that the Ising Hamiltonian could, in a specific toy universe, be derived from some microscopic law, obtained by "scattering" individual spins (in other words, study few-constituent systems). The "particle physicists" of the toy universe could have studied several few-spin systems, and have derived their "fundamental laws of microphysics" of the toy universe. Condensed-matter physicists in that toy universe could then apply those fundamental laws of microphysics to much larger systems, and DERIVE AB INITIO the phase transition ; as such, the phase transition would NOT have to be phenomenologically modeled, but is really derived from the supposed exact microphysical laws they learned from their peers ; and this time they COULD do the math. So there is all right an "emergent phenomenon" of phase transition in large systems of spin, but it is entirely contained and derivable from the microphysics of the toy universe.

Imagine now that the parameters of the microphysical laws in the toy universe are determined by the particle physicists to have certain values, and imagine now that the condensed-matter physicists use these laws to derive a phase transition ; and imagine that they observe that the phase transition DOES NOT occur as predicted by the microphysical laws. What would now be the conclusion ? I think that the conclusion would simply be that they would go and see their particle physicist peers, and tell them that they must have messed up!
But is it really thinkable that the particle physicists tell them that, well, the laws they have are exactly true for the microphysics, but that, when there are a lot of spins, there can be emergent phenomena, and that they better not do their derivation from microphysics ? But that this is all right ? Nothing to worry about ?

How can the individual spins be supposed to follow exactly the laws of microphysics, while the derivable consequence for the behaviour of a lot of them is not true ? It is just "emergent" ?

Honestly, the only conclusion I could draw from such a situation is that
1) the microlaws are not exact (although maybe such a good approximation that one cannot notice experimental deviations from them in few-spin situations); as such, the condensed-matter experiment has a higher sensitivity to the deviations than the few-component interaction experiments; or:
2) in the derivation, an extra assumption has been made without noticing, which invalidates the deduction.

I cannot logically conceive of the possibility that the individual constituents (spins) follow ALL the exact laws, but that the deduced conclusion about the behaviour of a lot of them is not correct if there is no error in the deduction.

The only difference with real-world situations (instead of situations in this toy universe) is that we cannot do the derivation in practice, today.
 
  • #88
vanesch said:
If what you put in "by hand" is now declared "fundamental microphysics" (of the toy universe), then we are both right :approve:
Did Maxwell put the terms into his equations by hand? Sure. But that cannot be what you mean, right?

Depends on what you mean by "by hand" in your case. In fact, I claim that Maxwell's Equations ARE phenomenological. One does not, for example, derive Coulomb's Law from First Principles.

Now, I think I know what you mean: in the Ising case, it's all made up *in order to* generate things like a phase transition. But I read this differently: I read it that the Ising Hamiltonian could, in a specific toy universe, be derived from some microscopic law, obtained by "scattering" individual spins (in other words, by studying few-constituent systems). The "particle physicists" of the toy universe could have studied several few-spin systems and derived their "fundamental laws of microphysics" of the toy universe. Condensed-matter physicists in that toy universe could then apply those fundamental laws of microphysics to much larger systems and DERIVE AB INITIO the phase transition; as such, the phase transition would NOT have to be phenomenologically modeled, but would really be derived from the supposed exact microphysical laws they learned from their peers; and this time they COULD do the math. So there is indeed an "emergent phenomenon" of phase transition in large systems of spins, but it is entirely contained in, and derivable from, the microphysics of the toy universe.

But if this were true, we would have solved magnetism already. Instead, the Ising model for such a system STILL has to make use of the same approximations that we have to use in the many-body problem; the mean-field potential is a very popular one.

The problem here is that the "toy model" requires you to know what to put in. You arrive at an ordered phase because... well, you KNOW this is what you want and you tweak the interactions. How many nearest neighbors, next-nearest neighbors, and next-next-nearest neighbors you consider is NOT derived ab initio. What coupling strength do you put in for each of those interactions? To say that you can "derive" a phase transition from such manipulation isn't entirely kosher. You can show your toy system undergoes a phase transition, but you have no idea why it does that with your parameters - meaning that you DON'T have a "Theory of Everything" in principle.
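
A rough sketch of the kind of hand-picked inputs in question (a generic textbook mean-field illustration; the coordination number z, coupling J and the temperatures are made-up numbers, with units chosen so that k_B = 1): the mean-field magnetization solves m = tanh(z J m / T), so the predicted ordering temperature Tc = z J follows directly from whatever z and J you decided to put in.

```python
import numpy as np

def mean_field_magnetization(z, J, T, iterations=200):
    """Iterate the mean-field self-consistency equation m = tanh(z*J*m/T)."""
    m = 0.5                               # arbitrary nonzero initial guess
    for _ in range(iterations):
        m = np.tanh(z * J * m / T)
    return m

J = 1.0
for z in (4, 6, 8):                       # square, simple-cubic, bcc-like coordination
    ms = [round(mean_field_magnetization(z, J, T), 3) for T in (2.0, 5.0, 9.0)]
    print(f"z = {z}: m at T = 2, 5, 9 -> {ms}")
# The magnetization is nonzero only below Tc = z*J, so changing the
# hand-chosen parameters moves (or removes) the "derived" phase transition.
```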

Imagine now that the parameters of the microphysical laws in the toy universe are determined by the particle physicists to have certain values, and imagine now that the condensed-matter physicists use these laws to derive a phase transition; and imagine that they observe that the phase transition DOES NOT occur as predicted by the microphysical laws. What would the conclusion be? I think that the conclusion would simply be that they would go and see their particle-physicist peers and tell them that they must have messed up!

I have two ways to address that. First, we have already established that we can't do First Principles calculations of a gazillion interacting particles. So we have already agreed that, in the practical sense, microscopic interactions are of no use in predicting and describing emergent phenomena. This is what Laughlin described in his Nobel Lecture as the bad trick he played on his graduate students. So offhand, there is no way to verify what you suggested above, because if it doesn't work, we don't know whether it's because of our computational shortcomings or because we're missing something fundamental. Secondly, even if it works, we don't know if it is the SAME mechanism, because we're extrapolating.

But is it really thinkable that the particle physicists would tell them that, well, the laws they have are exactly true for the microphysics, but that, when there are a lot of spins, there can be emergent phenomena, and that they had better not do their derivation from microphysics? But that this is all right? Nothing to worry about?
How can the individual spins be supposed to follow exactly the laws of microphysics, while the derivable consequence for the behaviour of a lot of them is not true? Is it just "emergent"?
Honestly, the only conclusion I could draw from such a situation is that
1) the microlaws are not exact (although maybe such a good approximation that one cannot notice experimental deviation from them in few-spin situations); as such, the condensed-matter experiment has a higher sensitivity to the deviations than the few-component interaction experiments; or:
2) in the derivation, an extra assumption has been made without noticing, which invalidates the deduction.
I cannot logically conceive of the possibility that the individual constituents (spins) follow ALL the exact laws, but that the deduced conclusion about the behaviour of a lot of them is not correct if there is no error in the deduction.
The only difference with real-world situations (instead of situations in this toy universe) is that we cannot do the derivation in practice, today.

You are missing the 3rd option: that the so-called "fundamental" microlaws are THEMSELVES emergent! If one can envision that the property we consider as "mass" is nothing more than an excitation out of the Higgs field, then I can certainly speculate that ALL of our fundamental particles are emergent "quasiparticles" out of some fields, and that the fundamental "force" carriers are no different from collective particles like phonons, spinons, chargons, polarons, etc. In that case, your "microlaws" are not "fundamental"; they are themselves many-body excitations and certainly subject to fluctuations.

I know I'm extrapolating, but that is what you are doing with your toy model. So it's fair game.

Zz.
 
  • #89
ZapperZ said:
You can show your toy system undergoes a phase transition, but you have no idea why it does that with your parameters - meaning that you DON'T have a "Theory of Everything" in principle.

Well, if the derivation from the premise is "understood", then you do have an idea!

I have two ways to address that. First, we have already established that we can't do First Principles calculations of a gazillion interacting particles. So we have already agreed that, in the practical sense, microscopic interactions are of no use in predicting and describing emergent phenomena. This is what Laughlin described in his Nobel Lecture as the bad trick he played on his graduate students. So offhand, there is no way to verify what you suggested above, because if it doesn't work, we don't know whether it's because of our computational shortcomings or because we're missing something fundamental.

Honestly, this sounds like claiming that we don't know whether natural numbers with more than 10^500 digits can be written as a unique product of prime factors or not, no? Because there's no way to verify whether it is because of a computational shortcoming or because we're missing something fundamental.

Before taking the option of "something fundamental" (and as such throwing overboard the whole of number theory, and even mathematical logic), I'd say that the obvious point is that we're having computational shortcomings. The funny thing is that, probably, two centuries ago one would have said the same about numbers with a few hundred digits. Well, we now know that it was a matter of computational shortcoming: numbers with a few hundred digits can be written as a unique product of prime factors.
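
A small, concrete illustration of "computational shortcoming" (assuming the SymPy library is available): the Mersenne number 2^67 - 1 resisted factorization until F. N. Cole produced its two prime factors by hand in 1903, after years of Sunday afternoons; a modern routine recovers the same unique factorization essentially instantly.

```python
from sympy import factorint

n = 2**67 - 1                      # 147573952589676412927
factors = factorint(n)             # {193707721: 1, 761838257287: 1}
print(factors)

# Multiply the prime factors back together to check the factorization.
product = 1
for p, e in factors.items():
    product *= p**e
assert product == n
```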

So, we might make progress with what can be computed!

You are missing the 3rd option: that the so-called "fundamental" microlaws are THEMSELVES emergent! If one can envision that the property we consider as "mass" is nothing more than an excitation out of the Higgs field, then I can certainly speculate that ALL of our fundamental particles are emergent "quasiparticles" out of some fields, and that the fundamental "force" carriers are no different from collective particles like phonons, spinons, chargons, polarons, etc.

Oh, but that's almost surely the case! But there's no problem here! The problem I have is with the claim that, when we know the behaviour of the constituents exactly (even if this behaviour is itself "emergent" from an underlying theory), or at least with sufficient accuracy, and we build systems composed of many of these constituents, we should suddenly NOT be able, even in principle, to deduce the behaviour of the compound system, even though we are supposed to know what each individual constituent is going to do. THAT is difficult for me to accept conceptually.
 
  • #90
vanesch said:
Honestly, this sounds like claiming that we don't know whether natural numbers with more than 10^500 digits can be written as a unique product of prime factors or not, no? Because there's no way to verify whether it is because of a computational shortcoming or because we're missing something fundamental.
Before taking the option of "something fundamental" (and as such throwing overboard the whole of number theory, and even mathematical logic), I'd say that the obvious point is that we're having computational shortcomings. The funny thing is that, probably, two centuries ago one would have said the same about numbers with a few hundred digits. Well, we now know that it was a matter of computational shortcoming: numbers with a few hundred digits can be written as a unique product of prime factors.

But all I did was describe what you called "practical holism" or something of that nature, did I not? I thought we agreed on this already?

So, we might make progress with what can be computed!
Oh, but that's almost surely the case! But there's no problem here! The problem I have is with the claim that, when we know the behaviour of the constituents exactly (even if this behaviour is itself "emergent" from an underlying theory), or at least with sufficient accuracy, and we build systems composed of many of these constituents, we should suddenly NOT be able, even in principle, to deduce the behaviour of the compound system, even though we are supposed to know what each individual constituent is going to do. THAT is difficult for me to accept conceptually.

It isn't for me, mainly because I am not convinced that a toy model that I manipulated by hand, and that happened to mimic large-scale phenomena, is accurate. I haven't seen one that is able to mimic every aspect of an antiferromagnetic phase, for example, all the way up to producing a spin-density wave. All you get is a resemblance to one part of the picture. To get a resemblance to another part, you construct ANOTHER toy model, because the previous one just can't do it.

So what we have here are examples where (i) you claim you can show a toy model system resembling a phase transition seen in a larger system, and (ii) I show you other examples where such toy models don't work - in fact, I claim that there are many more examples in this second category than in the first. If this is true, then it is just a matter of taste whether this is convincing evidence one way or the other. My taste says it is not convincing.

Now, with that in mind, my point on the 3rd option that you missed would not matter either way. I brought it up simply in relation to the claim of the possibility of a TOE. Disregarding the fact that knowledge of all the "fundamental interactions" is useless in predicting and describing emergent phenomena, it means that even when one obtains complete knowledge of all of our current fundamental interactions, one can STILL end up with nothing more than a set of emergent phenomena. We would have known more, but we certainly would not know everything.

Zz.
 
  • #91
ZapperZ said:
It isn't for me, mainly because I am not convinced that a toy model that I manipulated by hand, and that happened to mimic large-scale phenomena, is accurate.

That was not the point, of course. The point was that toy models can show emergent phenomena, such as phase transitions, occurring in toy universes (where the toy interactions are supposed to be "fundamental").
As such, the argument that emergent phenomena *prove* that the reductionist approach is bound to fail in principle is shown to be false as a general argument, because we have a counterexample (in a toy universe).

This leaves you with the hope that the same can in principle be done in the real universe: that it is CONCEIVABLE that phase transitions and other fancy emergent stuff MIGHT BE derivable from the microphysics, if only we had enough brains.

I haven't seen one that is able to mimic every aspect of an antiferromagnetic phase, for example, all the way up to producing a spin-density wave. All you get is a resemblance to one part of the picture. To get a resemblance to another part, you construct ANOTHER toy model, because the previous one just can't do it.
So what we have here are examples where (i) you claim you can show a toy model system resembling a phase transition seen in a larger system, and (ii) I show you other examples where such toy models don't work - in fact, I claim that there are many more examples in this second category than in the first. If this is true, then it is just a matter of taste whether this is convincing evidence one way or the other. My taste says it is not convincing.

Well, I find it convincing, from the moment that there is ONE example in bin (i), because that shows that there is no fundamental reason why all the things in bin (ii) could not eventually be moved to bin (i). Bin (i) is not empty. That leaves us with some hope: the hope that the universe is running on a mathematical model. A single one.

it means that even when one obtains complete knowledge of all of our current fundamental interactions, one can STILL end up with nothing more than a set of emergent phenomena. We would have known more, but we certainly would not know everything.

Sure. The issue is whether this "turtling down" is infinite, or will stop. If we take it that the universe is running on a certain mathematical model, then it should stop the day we find that mathematical model, no?
And if it is NOT running on a mathematical model, then anything goes, right?

That said, we will of course never KNOW if we have it or not (because we cannot do every conceivable experiment). Maybe that's your point.
 
  • #92
ZapperZ said:
Oh no, you may have read it wrong. I wasn't the least bit offended at all. I was just a bit amused that it is being resurrected all over again.
Zz.

Ah, ok. Thanks a lot for posting this; I genuinely felt bad. I did not mean to sound rude, but in rereading my post I realized that I did come across as rude and tactless. I am sincerely glad you were not offended, because you would have had good reason to be.

Thanks again!

Pat
 
  • #93
I was wondering (as a layperson to the study of quantum physics) whether the phenomenon of non-locality (and all those related events) is a result of the observer, or the instruments of the observer, being unable to discern what actually takes place at a microscopic level.

Perhaps whatever is non-local moves so fast that it appears to us as though it's in two different places at the same time. Could it be oscillating between two locations at a rate which is undetectable to an instrument or observer at our scale and position in the classical environment?

For instance, we are at a scale of x times that of the quantum. The mechanisms that support us as observers, and our instruments, are far removed from the mechanisms and the scale and the speed at which events unfold... microscopically.

Can the physicist be sure that his or her perception, and the readings their instruments give, are "up to speed" with the microscopic scale of a quantum field?
 
  • #94
quantumcarl said:
I was wondering (as a layperson to the study of quantum physics) whether the phenomenon of non-locality (and all those related events) is a result of the observer, or the instruments of the observer, being unable to discern what actually takes place at a microscopic level.

Perhaps whatever is non-local moves so fast that it appears to us as though it's in two different places at the same time. Could it be oscillating between two locations at a rate which is undetectable to an instrument or observer at our scale and position in the classical environment?

For instance, we are at a scale of x times that of the quantum. The mechanisms that support us as observers, and our instruments, are far removed from the mechanisms and the scale and the speed at which events unfold... microscopically.

Can the physicist be sure that his or her perception, and the readings their instruments give, are "up to speed" with the microscopic scale of a quantum field?

See, the problem here has more to do with your understanding of the more general principle of superposition. Being in "two locations" is an example of that principle. Now couple that with what QM defines as non-commuting operators for observables, and we have a situation where one really cannot learn QM only in bits and pieces.

The principle of superposition is well-verified. This is because, while one cannot see a superposition of locations, for example, one can detect the CONSEQUENCES of it by doing an indirect measurement. One can measure an observable that doesn't commute with the position operator. Such an act does not cause a complete collapse of the position superposition, so the value that one obtains reflects the superposition.
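
A minimal numerical sketch of that idea (Python/NumPy, an illustrative two-state example rather than any of the experiments cited below): a spin prepared in the superposition (|up> + |down>)/sqrt(2) gives exactly the same sigma_z statistics as a 50/50 classical mixture, but measuring the non-commuting observable sigma_x tells the two apart.

```python
import numpy as np

sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)
sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)

up = np.array([1, 0], dtype=complex)
down = np.array([0, 1], dtype=complex)

plus = (up + down) / np.sqrt(2)                 # coherent superposition of up and down
rho_super = np.outer(plus, plus.conj())         # density matrix of the superposition
rho_mix = 0.5 * (np.outer(up, up.conj()) + np.outer(down, down.conj()))  # 50/50 mixture

for name, rho in (("superposition", rho_super), ("mixture", rho_mix)):
    mean_z = np.trace(rho @ sigma_z).real
    mean_x = np.trace(rho @ sigma_x).real
    print(f"{name}: <sigma_z> = {mean_z:+.2f}, <sigma_x> = {mean_x:+.2f}")
# superposition: <sigma_z> = +0.00, <sigma_x> = +1.00
# mixture:       <sigma_z> = +0.00, <sigma_x> = +0.00
```

Very roughly, the SQUID experiments cited below look for the macroscopic analogue of that nonzero <sigma_x>-type signature.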

This has been done and observed many times; such states are often known as Schrodinger Cat-type states. The existence of bonding and antibonding states in the H2 molecule is a prime example. The energy gap measured in the SQUID experiments at Delft and Stony Brook is another. Rather than just repeat everything that has been said many times on here, I'll just copy an entry from my Journal.

ZapperZ's Journal said:
These are the papers that clearly show the Schrodinger Cat-type states (alive+dead, and not alive or dead). All the relevant details are there and anyone interested should read them. Also included are the references to a couple of review articles which are easier to read, and the references to two of Leggett's papers; he was responsible for suggesting this type of experiment using SQUIDs in the first place. Again, the papers have a wealth of citations and references.

The two experiments from Delft and Stony Brook using SQUIDs are:

C.H. van der Wal et al., Science v.290, p.773 (2000).
J.R. Friedman et al., Nature v.406, p.43 (2000).

Don't miss out the two review articles on these:

G. Blatter, Nature v.406, p.25 (2000).
J. Clarke, Science v.299, p.1850 (2003).

However, what I think is more relevant are the papers by Leggett (who, by the way, started it all by proposing the SQUID experiments in the first place):

A.J. Leggett "Testing the limits of quantum mechanics: motivation, state of play, prospects", J. Phys. Condens. Matt., v.14, p.415 (2002).

A.J. Leggett "The Quantum Measurement Problem", Science v.307, p.871 (2005).

These papers clearly outline the so-called "measurement problem" with regard to the Schrodinger Cat-type measurements.

Zz.
 
  • #95
ZapperZ said:
we have a situation where one really cannot learn QM only in bits and pieces.

Yeah, you're right to point that out. I can't ask a coherent question about qm because I haven't studied it from the bottom up... I don't imagine "suffering fools" is part of the qm curriculum.:rolleyes: Do I still get some mouse ears for effort? :smile:
 
  • #96
quantumcarl said:
Yeah, you're right to point that out. I can't ask a coherent question about qm because I haven't studied it from the bottom up... I don't imagine "suffering fools" is part of the qm curriculum.:rolleyes: Do I still get some mouse ears for effort? :smile:

In many cases, the problem here isn't with you. It's more with ME. I see a lot of these questions, and I wish I had more patience to write a many-page answer on why there's a huge part of QM that one is missing. This is especially true on questions of quantum entanglement. I see people asking about this and that, and I notice that they haven't actually understood the physics that is the central issue in this phenomenon. You will never realize why it is so "strange" if you don't understand (i) quantum superposition and (ii) the commutation relations of observables. Undergraduate physics majors could spend a whole semester doing nothing but understanding and applying these two principles. In fact, the commutation relations of observables are sometimes called the First Quantization. It is THAT important.
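
To make point (ii) concrete, a tiny sketch in Python/NumPy showing that two standard spin observables simply do not commute, which is the mathematical root of the "strangeness":

```python
import numpy as np

# Pauli spin observables for a spin-1/2 particle (in units of hbar/2).
sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
sigma_y = np.array([[0, -1j], [1j, 0]], dtype=complex)
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)

commutator = sigma_x @ sigma_z - sigma_z @ sigma_x
print(commutator)                                   # nonzero: [[0, -2], [2, 0]]
print(np.allclose(commutator, -2j * sigma_y))       # True: [sigma_x, sigma_z] = -2i*sigma_y
```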

Physics is a difficult subject because one has to have a mastery of many different, sometimes apparently unrelated, areas. And one certainly cannot fully comprehend a particular area by focusing on just one single aspect of it, because that's like looking at the hoof of an animal and trying to deduce what the animal looks like. Realizing the interconnectedness of a particular area is one of the first steps of learning it.

Zz.
 
  • #97
ZapperZ said:
In many cases, the problem here isn't with you. It's more with ME. I see a lot of these questions, and I wish I had more patience to write a many-page answer on why there's a huge part of QM that one is missing. This is especially true on questions of quantum entanglement. I see people asking about this and that, and I notice that they haven't actually understood the physics that is the central issue in this phenomenon. You will never realize why it is so "strange" if you don't understand (i) quantum superposition and (ii) the commutation relations of observables. Undergraduate physics majors could spend a whole semester doing nothing but understanding and applying these two principles. In fact, the commutation relations of observables are sometimes called the First Quantization. It is THAT important.

Physics is a difficult subject because one has to have a mastery of many different, sometimes apparently unrelated, areas. And one certainly cannot fully comprehend a particular area by focusing on just one single aspect of it, because that's like looking at the hoof of an animal and trying to deduce what the animal looks like. Realizing the interconnectedness of a particular area is one of the first steps of learning it.

Thank you, ZapperZ.

To begin with, I have learned more about QM from you and the other contributors to this thread than from anywhere else.

What I have learned has helped me understand that applying QM to philosophy will never be as easy as the people who compiled "What the Bleep..." make it look. In fact... they have only made themselves look rather foolish where they try to analogize QM properties with what you have taught me to regard as "emergent properties".

There is definitely a potential for a unification of classical and quantum physics and the laws governing them... but... there is also a potential for the sculpture of George Washington at Mt. Rushmore to start talking. (That would be an earful!)

When I am able to figure out what SQUID and/or KALAMARI have to do with QM, I'll be in a better position to contribute to this section of the illustrious PhysicsForum!:rolleyes:

If this means I have to change my name, I'm not doing it. I tried that already and it's a big mess. Greg or his administrators send out the automated SWAT team w/dogs when you do.
 
Last edited:
