
#91
Aug 14, 2009, 09:14 AM

P: 871

If there is no viable lower-order explanation, then by definition you aren't dealing with a higher-order explanation. Higher-order explanations, as such, are not necessary, and unless they are reducible to first-order explanations, they cannot be sufficient either. Basically, they aren't true causes (or explanations) at all.



#92
Aug 14, 2009, 04:01 PM

P: 1,414

The reason that Bell inequalities are violated has to do with the formal requirements imposed by the assumption of locality. This formal requirement also entails statistical independence of the accumulated data sets at A and B. But entanglement experiments are designed and executed to produce statistical dependence via the pairing process. There's no way around this unless you devise a model that can actually predict individual detections. Or you could reason your way around the difficulty by noticing that the hidden variable (i.e., the specific quality of the emission that might cause enough of it to be transmitted by the polarizer to register a detection) is irrelevant wrt the rate of joint detection (the only thing that matters wrt joint detection is the relationship, presumably produced via simultaneous emission, between the two opposite-moving disturbances). That preserves the idea that the correlations are due to local interactions/transmissions, while at the same time modelling the joint state in a nonseparable form. Of course, then you wouldn't have an explicitly local, explicitly hidden variable model, but rather something along the lines of standard qm.
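For concreteness, the violation being discussed can be checked numerically with the CHSH form of Bell's inequality. A minimal sketch, assuming the ideal QM correlation E(a,b) = cos 2(a - b) for polarization-entangled photons and the usual choice of test angles (these specific angles are an illustration, not something from the posts above):

```python
import math

def E(a, b):
    # Ideal QM correlation for polarization-entangled photons (angles in radians)
    return math.cos(2 * (a - b))

# A common choice of CHSH test angles
a1, a2 = 0.0, math.pi / 4
b1, b2 = math.pi / 8, 3 * math.pi / 8

S = abs(E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2))
print(S)  # 2*sqrt(2) ~ 2.83, above the local-realist bound of 2
```

Any local hidden variable model with statistically independent settings is bound by S <= 2, which is exactly the formal requirement the post refers to.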



#93
Aug 14, 2009, 05:38 PM

P: 2,490

My reductio ad absurdum argument was based on thermodynamics, which at the theoretical level is based on probabilities. If a system can only exist in one possible state and can only transit into one other possible state, there is no Markov process. All states (past, present, and future) exist with p=1 or p=0. Under D, probabilities can only reflect our uncertainty. If you plug probabilities of 0 or 1 into the Gibbs entropy formula, every term vanishes and the entropy is zero. Any values in between (under D) are merely reflections of our uncertainty. Yet we can actually measure finite nonzero values of entropy in experiments (defined as Q/T, or heat/temperature). Such results cannot be only reflections of our uncertainty. Remember, there is no statistical independence under D. None of this either proves or disproves D. I don't think it can be done. It seems to be essentially a metaphysical issue. However, it seems to me (I'm not a physicist) like you have to give up a lot to assume D at all scales.
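The arithmetic behind the degenerate-probability point can be checked directly. A minimal sketch using the dimensionless Gibbs/Shannon form S = -sum(p * ln p), with the standard 0 * ln(0) -> 0 convention (the four-state distributions are illustrative assumptions):

```python
import math

def gibbs_entropy(probs):
    # S = -sum(p * ln p), using the 0 * ln(0) -> 0 convention
    s = 0.0
    for p in probs:
        if p > 0:
            s -= p * math.log(p)
    return s

deterministic = [1.0, 0.0, 0.0, 0.0]  # one state certain, as under strict D
uniform = [0.25] * 4                  # maximal uncertainty over four states

print(gibbs_entropy(deterministic))   # 0.0
print(gibbs_entropy(uniform))         # ln(4) ~ 1.386
```

Certainty gives zero entropy; only intermediate probabilities give the finite nonzero values we actually measure.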



#94
Aug 14, 2009, 09:13 PM

P: 1,414





#95
Aug 14, 2009, 10:11 PM

P: 1,414

There isn't any compelling reason to believe that there aren't any fundamental deterministic dynamics governing the evolution of our universe, or that the dynamics of waves in media are essentially different wrt any scale of behavior. In fact, quantum theory incorporates lots of classical concepts and analogs. Anyway, maybe you should start a new thread here in the philosophy forum on induction and/or determinism. I wouldn't mind discussing it further, but I don't think we're helping ueit wrt the thread topic.



#96
Aug 15, 2009, 04:33 AM

P: 2,490

http://en.wikipedia.org/wiki/Fundame...rst_principles

The assumption behind this derivation is that the position and momentum of each of N particles (atoms or molecules) in a gas are uncorrelated (statistically independent). However, D doesn't allow for statistical independence. Under D the position and momentum of each particle at any point in time is predetermined. Therefore there is full correlation of the momenta of all the particles.

What is actually happening when the experimenter heats the gas and observes a change in the Q/T relation (entropy increases)? Under D the whole experiment is a predetermined scenario with the actions of the experimenter included. The experimenter didn't decide to heat the gas or even set up the experiment. The experimenter had no choice. She or he is an actor following the deterministic script. Everything is correlated with everything else with measure one. There really is no cause and effect. There is only the predetermined succession of states.

Therefore you're going to have to give up the usual (strong) form of causality, where we can perform experimental interventions to test causality (if you want D). Causality is not defined in mathematics or logic. It's usually defined operationally: given that A is the necessary, sufficient, and sole cause of B, if you remove A, then B cannot occur. Well, under D we cannot remove A unless it was predetermined that p(B)=0. At best, we can have a weak causality where we observe a succession of states that are inevitable.
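The independence assumption matters quantitatively. A toy illustration (hypothetical two-state "particles", Shannon entropy in nats): for N independent particles the joint entropy is N times the single-particle entropy, while full correlation collapses it back to the single-particle value:

```python
import math

def entropy(probs):
    # Shannon/Gibbs entropy in nats
    return -sum(p * math.log(p) for p in probs if p > 0)

N = 10
# Independent particles: all 2**N joint configurations equally likely
independent = [1.0 / 2**N] * 2**N
# Fully correlated particles: all N share one state, so only 2 joint configs occur
correlated = [0.5, 0.5]

print(entropy(independent))  # N * ln(2) ~ 6.93
print(entropy(correlated))   # ln(2) ~ 0.693
```

This is why the textbook derivation leans on uncorrelated momenta: extensivity of the entropy comes from the independence assumption.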



#97
Aug 15, 2009, 07:45 AM

P: 379





#98
Aug 15, 2009, 08:25 AM

P: 379

1. At the source location, the field is a function of the detectors' state. Because the model is local, this information is "old". If the detectors are 1 ly away, then the source "knows" the detectors' state as it was 1 year in the past.
2. From this available information and the deterministic evolution law, the source "computes" the future state of the detectors at the moment the particles arrive there.
3. The actual spin of the particles is set at the moment of emission and does not change in flight.
4. The correlations are a direct result of the way the source "chooses" the spins of the entangled particles. It so happens that this "choice" follows Malus's law.

In conclusion, changing the detectors before detection has no bearing on the experimental results, because these changes are taken into account when the source "decides" the particles' spin. Bell's inequality is based on the assumption that the hidden variable that determines the particle spin is not related to the way the detectors are positioned. The above model denies this. Both the position of the detector and the spin of the particle are a direct result of the past field configuration.
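A toy numerical sketch of the mechanism described above (this is an illustration under my own simplifying assumptions, not ueit's actual model): since the source is handed both detector angles in advance, it can draw pair outcomes directly from the Malus-law joint distribution, and the resulting correlation tracks the quantum prediction:

```python
import math
import random

random.seed(0)

def emit_pair(angle_a, angle_b):
    # The source already "knows" both (predetermined) detector angles,
    # so it can choose spins from the Malus-law joint distribution.
    p_same = math.cos(angle_a - angle_b) ** 2  # probability of matching outcomes
    a = random.choice([+1, -1])
    b = a if random.random() < p_same else -a
    return a, b

def correlation(angle_a, angle_b, n=200_000):
    return sum(a * b for a, b in (emit_pair(angle_a, angle_b) for _ in range(n))) / n

# Expect E = 2*cos^2(delta) - 1 = cos(2*delta); for delta = pi/8 that is ~0.707
print(correlation(0.0, math.pi / 8))
```

Of course, this only "works" because the emission function is handed both settings up front; it is exactly the denial of setting-independence that the post describes.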



#99
Aug 15, 2009, 08:33 AM

P: 379





#100
Aug 15, 2009, 02:54 PM

P: 2,490

Under D, probabilities only reflect our uncertainty. They have nothing to do with nature (as distinct from ourselves). Statistical independence is an assumption based on our uncertainty. Ten fair coin tosses are assumed to be statistically independent based on our uncertainty of the outcome. We imagine there are 1024 possible outcomes; under D there is only one possible outcome, and if we had perfect information we could know that outcome. Under D, not only is the past invariant, but the future is also invariant. If we had perfect information, the future would be as predictable as the past is "predictable". It's widely accepted that completed events have no information value (i.e., p=1) and that information only exists under conditions of our uncertainty. I agree that with pseudorandomness the thermodynamic laws work, but only because of our uncertainty, given that we lack the perfect information which could be available (in principle) under D.

EDIT: When the correlation ([tex]R^{2}[/tex]) is unity, the situation is no longer probabilistic, in that no particle moves independently of any other. Under D all particle positions and momenta are predetermined. If a full description of particle/field states is in principle knowable in the past, it is knowable in the future under D.



#101
Aug 20, 2009, 02:32 PM

P: 1,414

I wouldn't separate causality into strong and weak types. We observe invariant relationships, or predictable event chains, or, as you say, "a succession of states that are inevitable". Cause and effect are evident at the macroscopic scale. Determinism is the assumption that there are fundamental dynamical rules governing the evolution of any physical state or spatial configuration. We already agreed that it can't be disproven. The distinguishing characteristic of ueit's proposal isn't that it's deterministic. What sets it apart is that it involves an infinite field of nondiminishing strength, centered on polarizers or other filtration/detection devices and/or device combinations, which propagates info at c to emission devices and thereby determines the time and type of emission, etc., etc. So far, it doesn't make much sense to me. We already have a way of looking at these experiments which allows for an implicit, if not explicit, local causal view. Anyway, his main question about arguments against the assumption of determinism has been answered, and I thought we agreed on this: there aren't any good ones.



#102
Aug 20, 2009, 03:04 PM

P: 2,490

(Note: I'm using "effective determinism" in terms of what we actually observe within the limits of measurement, and "strict determinism" as a philosophical paradigm.) 



#103
Aug 20, 2009, 08:31 PM

P: 1,414

Whether local hidden variable mathematical formulations of certain experimental preparations are possible is another thing altogether. This was addressed by Bell. Ueit is interested in lhv models. Bell says we're not going to have them for quantum entangled states, and so far nobody has found a way around his argument. 



#104
Aug 21, 2009, 04:46 PM

P: 649

That's right, but how is it different from the idea that we are living in the Matrix? And if no good arguments can be put forward against it, is that a model of the universe that you think should even be considered by science?



#105
Aug 21, 2009, 08:26 PM

P: 1,414

We might be in some sort of Matrix. But there's no particular reason to think that we are. The question is, does our shared, objective reality seem more deterministic the more we learn about it? 



#106
Aug 21, 2009, 08:34 PM

P: 871

Since you mentioned it, it looks more and more like the world could be discrete, suggesting a structure with limits on its basic numerical accuracy, which is very Matrix-like. I don't think science can tell us anything about the issue, though. Hume covered that pretty well, in my opinion. I do think the assumption of determinism is a rational extension of logic that needs to be made for the world to be intelligible to us.



#107
Aug 21, 2009, 09:04 PM

P: 1,414

But maybe I don't understand what you're getting at here. 



#108
Aug 21, 2009, 09:47 PM

P: 1,414

If they've set things up correctly, then when the data sets at A and B are properly paired and correlated with the appropriate angular differences, you'll see something closely approximating a cos^2(theta) dependence (Malus's law). But the filters'/detectors' state couldn't have had anything to do with the emission values, because the filters/detectors didn't even exist until all of the emissions were already more than 3/4 of the way to the filters/detectors. Yet the joint results would approximate a Malus's-law dependence between angular difference and rate of coincidental detection.
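For reference, the joint probabilities behind that cos^2 dependence can be written out. A sketch assuming perfect detectors and the standard ideal QM predictions for a polarization-entangled pair (the specific angle is illustrative):

```python
import math

def joint_probs(delta):
    # Ideal QM predictions for analyzers whose angles differ by delta
    same = math.cos(delta) ** 2 / 2
    diff = math.sin(delta) ** 2 / 2
    return {"++": same, "--": same, "+-": diff, "-+": diff}

p = joint_probs(math.pi / 6)
print(p["++"] + p["--"])  # rate of matching outcomes: cos^2(pi/6) ~ 0.75
```

At delta = 0 the matching rate is 1, which is why properly paired data sets show the full Malus-law curve regardless of when the analyzers were put in place.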

