rolnor said:
Thanks, I learn a lot. You write something about the photon being asymptotic; does that mean that it has no real boundary? You cannot really say where its "volume" starts or ends? I understand that, when I with my limited knowledge react strongly against this popular-science idea that a consciousness is necessary to collapse the wavefunction, you guys must really puke on what is all over the web regarding a lot of quantum mechanics theory...
"Asymptotically free" means you look in regions far away from things the photon interacts with, i.e., where you can describe the em. field as not experiencing any interactions with charges. Only in this "asymptically free" limit there is a clear interpretation in terms of photons, and that's what we usually observe in high-energy physics, i.e., scattering processes, where we consider that initially we have two particles, which are far away from each other and thus can be considered as not interacting, i.e., we prepare the two particles in "asymptotically free" "in-states". Then they come closer to each other and the interaction between them becomes non-negligible. We don't bother with interpreting this state in any way in terms of particles but we look after this interaction when all the particles going out from this collisions are again far distant from each other and thus can be considered as "asymptotically free" again ("out-states"). What we calculate is the transition probability from the asymptotically free in-state to the asymptotically free out-state, which is encoded in the "scattering matrix" (S-matrix). That's what's usually done using perturbation theory and nicely encoded in Feynman diagrams, and that's what's measured in the high-energy-particle experiments (e.g., at the LHC at CERN). That's the physics, i.e., that's what we can say on a scientific basis, what's done in physics research: You have a theory ("Standard Model of elementary particles"), with which you can predict the outcome of what can be really measured (scattering cross sections) and can be objectively compared to the theory.
Now, unfortunately QT has been plagued from the very beginning by a strong influence of "philosophy" or "metaphysics". That's quite understandable, because QT really is a revolution in our most fundamental understanding of the physical world around us. That's because QT says that Nature behaves inevitably in an indeterministic way, i.e., "observables" (things that can be measured) do not necessarily take well-defined ("determined") values; they do so only if the system under investigation is prepared in a "state" (something that describes the specific properties of the system before measurement) for which the observable under investigation takes determined values. The outcome of a precise measurement of an observable, when the system is prepared in a state where this observable doesn't take a determined value, is inevitably random, and this randomness is not due to a lack of our knowledge about the state of the system (as in classical statistical mechanics) but due to the fact that it's a property of Nature that the observable doesn't take a determined value if the system is prepared in such a state.
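A standard illustration (my addition, a textbook example): prepare a spin-1/2 particle in the eigenstate of ##\hat{S}_x## with eigenvalue ##+\hbar/2##, which in the ##\hat{S}_z## eigenbasis reads
$$|\psi\rangle = \frac{1}{\sqrt{2}}\left(|{\uparrow}_z\rangle + |{\downarrow}_z\rangle\right).$$
In this state ##S_z## has no determined value; the Born rule gives ##P(S_z = \pm\hbar/2) = |\langle \pm_z|\psi\rangle|^2 = 1/2##, and the randomness of each individual outcome is irreducible, not a matter of missing knowledge.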
That's why there was a big debate from the very beginning among physicists about whether "quantum mechanics can be considered complete", i.e., whether there may be a more comprehensive theory which is in accordance with the observations but still deterministic, i.e., a theory in which all observables always take determined values, which we merely don't know for some reason and thus describe probabilistically, as in classical statistical physics. This culminated in 1935 in the famous (I would say infamous) paper by Einstein, Podolsky, and Rosen, and the even more cryptic answer to it by Bohr. The problem with both papers, in my opinion, is that the EPR paper just states a philosophical prejudice about how "Nature should be", namely deterministic, and on that basis claims that quantum mechanics is incomplete. Bohr answered in his usual nebulous style, without making a scientific statement either about how the problem could be decided by scientific means.
This only happened about 30 years later, when Bell turned the vague, philosophical EPR definition of "reality" into a solid mathematical description of a "realistic" (i.e., deterministic) and "local" theory (i.e., one strictly excluding any influence of a measurement on far-distant places via faster-than-light signals). He figured out that any such theory makes probabilistic predictions about the outcomes of measurements and about the far-distant correlations described by "entanglement" (which is the only thing that can be called "non-local", though that's an unfortunate choice of terminology, as I'll explain below), and that these predictions differ from the predictions of QT (the famous Bell inequalities). Since then the question has been attacked by experimentalists, and that has led to the clear conclusion that the world does not behave according to any "local realistic theory" but according to the predictions of QT.
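For concreteness (my addition, the standard CHSH form of Bell's inequality, which is the one usually tested): for measurement settings ##a, a'## on one side and ##b, b'## on the other, any local realistic theory obeys
$$|E(a,b) - E(a,b') + E(a',b) + E(a',b')| \leq 2,$$
where ##E## denotes the correlation of the two outcomes, while QT predicts values up to ##2\sqrt{2} \approx 2.83## for entangled spin pairs at suitably chosen angles, and it is this QT value that the experiments confirm.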
It's unfortunate to call the correlations between measurement results at far-distant places that are described by entanglement "non-local", because ironically it's relativistic QFT which is strictly local by construction, i.e., the demand of locality, meaning that there cannot be any causal influences between space-like separated events, is built into the theory from the very beginning. This is why we describe relativistic QT exclusively in terms of such a quantum field theory: only in terms of a QFT can we impose this "microcausality constraint". The consequence is that particle numbers are not conserved when scattering particles at "relativistic energies", and indeed that's what's observed: in such collisions it can well be that the incoming particles are destroyed completely and completely different particles come out (and not only two as in usual scattering experiments but a whole "spray" of new particles), i.e., we can "annihilate and create" particles (subject to conservation laws like charge conservation, which are also built into the theory from the very beginning, based on the corresponding empirical knowledge). So the description of relativistic QT in terms of a local (i.e., micro-causal) QFT is well founded both in theoretical demands (causality in relativistic spacetime models) and in observations (all predictions of the Standard Model still hold true, despite the vigorous search for "physics beyond the Standard Model" for decades now!). So there is no "non-locality" but only correlations between properties of far-distant parts of quantum systems, described by entanglement.
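Stated as a formula (my addition, the standard microcausality condition): for any two local observables ##\hat{O}_1(x)## and ##\hat{O}_2(y)## built from the field operators,
$$\left[\hat{O}_1(x), \hat{O}_2(y)\right] = 0 \quad \text{whenever } (x-y)^2 < 0 \ \text{(space-like separation, mostly-minus metric convention)},$$
so no measurement at ##x## can causally influence the statistics of a measurement at ##y##; the entanglement correlations are thus correlations, not causal influences.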
For sure, there's no need for consciousness anywhere in the theory. Nowadays the measurement results are just stored on some electronic storage device and only analyzed in detail by humans long after the experiment is done. There's no need for any human interaction with the investigated quantum systems to get well-defined measurement results. It's all due to the interaction of the particles with the detectors and the corresponding storage of the information about these interactions.