malreux said:
[3] Well, this is a bad interpretation, so far as it goes, because in this case the hidden variables are 'idle wheels'.
That has always been my objection as well, but I would stop short of calling it "bad", because it is very hard to use absolute terms when dealing with interpretations. Some very bright people, including de Broglie himself, thought it a good interpretation, and for many of the same reasons that we think it is bad. So it is clear we don't agree on the requirements for a good interpretation-- that's an important thing to recognize about interpretations, perhaps even the most important thing. Even though I prefer some to others, and can give reasons why, I recognize them all as valid in their own way, and I'm glad to know them-- there's not one I wish I hadn't met!
[4] As distasteful as I ultimately find their antirealism, it's worth checking out http://plato.stanford.edu/entries/constructive-empiricism/ if you're interested in how this distinction plays out for modern empiricists.
I haven't penetrated to the heart of the controversy yet, because at first glance, constructive empiricism appears to make the claim from this quote:
"Science aims to give us theories which are empirically adequate; and acceptance of a theory involves as belief only that it is empirically adequate."
Naive realism, on the other hand, appears to make the claim from this quote:
"Science aims to give us, in its theories, a literally true story of what the world is like; and acceptance of a scientific theory involves the belief that it is true."
Now, looking at those two statements, it seems to me that the first is scientifically demonstrable as basically correct (basic scientific history suffices), though it unnecessarily and inaccurately stresses the word "only" (the clear fact is that this is one of science's most closely held goals, but it is not the only goal of science; the other involves a sense of unification and understanding that goes quite a bit beyond empirical adequacy). But all that is obvious. The second quote, on the other hand, is clearly naive and rather absurd, and again even a rudimentary knowledge of scientific history suffices to demonstrate that. I can't even imagine how anyone holding that opinion could even begin to define the phrase "literally true" in a way that is remotely scientific, without ending up sounding like the first statement.
[5] I'm not sure if this is entirely historically accurate; regardless, the crucial point you're making is that at various junctures in the history of science, a 'crucial experiment' has often been the ultimate arbiter between competing theories.
Yes, the role of "crucial experiments" cannot be overstated; they are what volcanoes are to island chains and what wars are to nations. The main theories of physics do not tiptoe in the back door, they erupt with great pomp and circumstance, and always with some experimental result that no one had any reason to expect in the absence of the theory. Usually the result precedes the theory, but the successful theory also predicts additional things we would have no reason to expect without it, and that's how we verify the theory is not a pure rationalization of something already known.
[6] I'm thinking here of the quiet revolution re POVMs, and also the viability of the modern medium decoherence programme.
Yes, POVMs are an interesting new direction to call attention to, and I can't see why I would have any "disappointment" associated with this. The intent of the program is, as usual in science, to be able to predict experiments, here those involving decoherence, such that the state of the system can be continuously tracked-- not as an evolution from a pure state to a mixed state (which regular quantum mechanics does in concert with the Born rule or standard decoherence), but as evolution from a mixed state to a pure state. That's what is missing from quantum mechanics, and predictions along that path would be a new theory that would arrive with great fanfare and experimental confirmation.
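For anyone following along who hasn't met POVMs, here is a minimal numerical sketch (my own illustration, not something from this exchange) of what the formalism requires: the elements E_i are positive operators that sum to the identity, and the Born rule p_i = Tr(rho E_i) still supplies the outcome probabilities. The three-outcome "trine" measurement on a qubit below is a standard toy example.

```python
import numpy as np

# A qubit state rho: the pure state |0><0|, for simplicity.
rho = np.array([[1.0, 0.0],
                [0.0, 0.0]])

def trine_element(theta):
    """One element of a three-outcome 'trine' POVM: (2/3)|v><v| for a
    real unit vector v at angle theta. Each element is positive, and
    the three together sum to the identity -- all a POVM requires."""
    v = np.array([np.cos(theta), np.sin(theta)])
    return (2.0 / 3.0) * np.outer(v, v)

povm = [trine_element(t) for t in (0.0, 2 * np.pi / 3, 4 * np.pi / 3)]

# Completeness check: the elements sum to the identity.
assert np.allclose(sum(povm), np.eye(2))

# Outcome probabilities via the Born rule, p_i = Tr(rho E_i).
probs = [np.trace(rho @ E) for E in povm]
assert np.isclose(sum(probs), 1.0)
print(probs)  # 2/3 for the outcome aligned with |0>, 1/6 for the other two
```

Note that, unlike a projective measurement, a qubit POVM can have more than two outcomes; that generality is part of why the formalism is useful for describing decoherence and indirect measurements.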
On the other hand, we're rapidly approaching regimes where no viable experimentation will be likely to occur - the most obvious case: quantum gravity.
But do you think that is something new? The history of physics is peppered with periods where we were far from viable experimentation-- and it invariably led to a period of stagnation in physics.
As to the current state of affairs, we can certainly be optimistic if we are predisposed to be, but there is a danger that optimism gives way to self-deception and rationalization. The simple truth is, we have no reason to expect quantum gravity to provide us with a great new theory of physics that does not simply either repackage what we already know, or make predictions that we have no way of knowing would hold true if we could test them. Regardless of how aesthetically pleasing we might find notions of quantum gravity, that is just a sorry state of affairs for science. The only hope is that there really will be some verifiable predictions that we could not anticipate without that quantum gravity theory.
Further, some of the 'problems' of interpretation just aren't physical problems -
Certainly. I would hold that no problems of interpretation are physical problems, they are all philosophical. They will only be physical problems when interpretations spawn new theories that actually make testable predictions.
An example from Deutsch might clarify: why was the ancient Greek theory about the seasons a bad explanation?
But Deutsch is missing the deeper undercurrent here-- for even if seasons were the same in the southern hemisphere, the Greek model would still be of no value! That's because the model predicts nothing; it is a perfect example of a pure rationalization. It makes no difference if the rationalization works: there is no way to verify that it is saying something they didn't already know unless it makes a prediction they would not otherwise expect-- no matter what is happening in the southern hemisphere.
The theory whereby the Earth spins on an axis tilted with respect to its own planar rotation can not only accommodate these observations, it predicts them.
That's exactly my point: a theory must do more than rationalize what is already known, else there is no verification step. But this has nothing to do with realism or any other philosophical attachments; it is purely an issue of empirical evidence.
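As an aside, the predictive content of the tilted-axis model is easy to exhibit numerically. The sketch below is my own illustration, using the standard ~23.44-degree obliquity and a crude sinusoidal model of solar declination: the model puts the sun north of the equator in June and south of it in December, so opposite seasons in the two hemispheres fall out as a prediction rather than being fitted in afterward.

```python
import math

OBLIQUITY_DEG = 23.44  # Earth's axial tilt relative to its orbital plane

def solar_declination_deg(day_of_year):
    """Crude solar declination: the latitude at which the sun is overhead.
    Day ~80 is taken as the March equinox in this simple sinusoidal model."""
    return OBLIQUITY_DEG * math.sin(2 * math.pi * (day_of_year - 80) / 365.25)

june = solar_declination_deg(172)      # near the June solstice
december = solar_declination_deg(355)  # near the December solstice

# The tilt model *predicts* opposite seasons: sun well north of the
# equator in June, well south of it in December.
assert june > 20 and december < -20
print(june, december)
```

The Greek rationalization could be tuned to agree with any single hemisphere's seasons; the tilt model, by contrast, is forced by its own geometry to say something definite about the other hemisphere, and that is where it risks being wrong-- and earns its keep.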
[9] Please note I didn't want to talk about "what must the world be like for this theory to work well" but "what must the world be like if our (best) theories are approximately true."
I don't see any distinction there, they both sound equally impossible to establish scientifically, and equally against the weight of scientific history. The world doesn't have to be "like" anything, it can just be what it is, and the theories can just work as well as they do, or do not. What more can be supported with evidence?
Your [9] needs to clarify some things - what exactly is the 'ontology' of the Hamiltonian, for example? This is not as easy a question as it (may!) seem.
I agree, but look how much more difficult that question becomes if we must bury the Hamiltonian under the weight of being something that "the world must be really like." That approach forces an ontology onto the Hamiltonian, it can no longer be what it demonstrably is-- a mathematical concept, pure and simple, with no need to say anything more. We are playing the game of math, and we are doing it in a way that mimics or apes the presence of some Platonic "Hamiltonian", but the tension between the game and the ontology need not make any contact with a "true game that math really is", or a "true Hamiltonian that the world is really like." Those concepts are completely superfluous-- all we need is the interplay between the syntax of the game and the semantics of the ontology, without taking either one seriously as a destination of its own.
Sometimes we're discussing the status of things like ordinary tables and chairs with regards to fundamental physical ontology (emergence?), other times we're discussing the ontology of particular physical theories (is the wavefunction real?). We could focus this discussion a lot more on a new thread if you think it's worth it.
By all means, a thread exploring the purposes of the whole idea of having an ontology to prop up our thought processes would be quite interesting. It's relevant here as well though-- it's the reason that people like to imagine that math is Platonic, to have that prop.
A note on structural realism as the worst of both worlds: I tend to think of it (unsurprisingly!) as the best of both worlds, conversely.
And I would agree that both views have their value-- the truth is in the tension between them. Structural realism has value because it invokes a tension between being a vacant solution and an effective solution, and that tension opens up a discussion about what kinds of solutions we are looking for and why. We expect this as soon as we see that solutions are contextual and provisional, so the job of philosophy is not just to find the solutions, but also to clarify their limits.
This works by combining two arguments - the (1) 'no miracles argument' and (2) the 'pessimistic meta-induction' (sounds grandiose, right?). (1) states that it would basically be a miracle if our scientific theories weren't even remotely true, because we predict phenomena, safely use technology (sometimes!), etc.
But this invokes a false dichotomy. Now we must choose between our theories being either "not remotely true", or being "like the world". What happened to the most likely case of all, neither one? Why can't the theories just work pretty darn well for what they are supposed to work for, and yet not be anything "like" the actual truth of the world (if it even makes sense to talk about a truth of the world, which I argue it doesn't-- truths are contextual and provisional too)? My assertion requires zero assumptions not in evidence; the standard naive realism requires a leap of faith that is contradicted every time the ontologies of our theories take another inevitable radical shift.
(2) is usually presented as an induction, though it also has a deductive variant (that isn't sound, so ignore it): scientists had good reason to believe past scientific theories (evidence, predictions), those theories have all turned out to be false, scientists have good reason to believe current theories, they will overwhelmingly likely turn out to be false, so we shouldn't believe our best theories.
Actually, no induction whatever is required there. All that is required is the bedrock of science-- basic skepticism: the requirement that a proposition be backed with evidence that is not constantly contradicted.
Consider the revolution of special and general relativity - they both contain Newtonian physics as a limit case.
That's a basic consequence of the simple fact that it is known that Newtonian mechanics works well for some things, and relativistic mechanics works well for others. Same for quantum and classical. It is not saying anything surprising that a theory that worked for something will continue to work for that same something, so it is a given that all superior theories will "contain" the inferior versions. Something that must be true is not evidence for something that does not have to be true. We can take it as given that our theories work well; there is no other claim that can be made on nature without leaving the realm of what we can support with evidence.
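To make the "limit case" concrete, here is the standard textbook expansion (not something from this thread): the relativistic energy reduces, for v much smaller than c, to the rest energy plus exactly the Newtonian kinetic energy.

```latex
E \;=\; \frac{m c^2}{\sqrt{1 - v^2/c^2}}
  \;=\; m c^2 \left( 1 + \frac{1}{2}\frac{v^2}{c^2}
        + \frac{3}{8}\frac{v^4}{c^4} + \cdots \right)
  \;\approx\; m c^2 + \frac{1}{2} m v^2
  \qquad (v \ll c)
```

The correction terms of order $v^4/c^4$ quantify exactly how far, and in which regime, the old theory was "working well" all along-- which is the only sense in which the new theory "contains" it.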