Decoherence and the randomness of collapse

bilkusg
I'm trying to understand, at least, what decoherence can and cannot explain about how quantum mechanics works. The more I read, the less clear I am about what is known and what is merely speculative.

So I finally decided the only way to get any further was to try and clarify what I think is known, and give others an opportunity to confirm or correct as the case may be.
  1. The fundamental mystery of quantum mechanics is that the basic equations tell us that any isolated quantum system evolves deterministically in accordance with a unitary equation, but in practice, the transition from microscopic to macroscopic environments appears to engender a 'collapse', turning waves into localised particles, and doing so in a probabilistic way.
  2. The second mystery of quantum mechanics is that macroscopic superpositions of very different states, such as Schrodinger's Cat, can easily be described within the mathematical formalism, but appear not to exist in the real world.
  3. As I understand it, decoherence provides what might be described as a mathematically suggestive explanation of the second point. Essentially, as soon as a system gets big enough, the complex interaction of a macroscopic number of things causes macroscopic superpositions to be extraordinarily unlikely and unstable configurations, in much the same way as modern interpretations of the second law of thermodynamics describe entropy-lowering events as staggeringly uncommon, rather than theoretically impossible.
  4. This particular aspect of decoherence appears to be fairly well accepted by many people, and has some support from entanglement experiments, the behaviour of quantum computers etc.
  5. Although decoherence arguments make it plausible that we never see macroscopic superpositions, they appear at first sight to offer no explanation of the first question. If the apparent collapse of the wave function is simply an inevitable consequence of interaction with the rest of the universe, or even a fairly small but macroscopic part of it, then why isn't that a deterministic process, i.e. where does the quantum randomness come from?
  6. What appears to be randomness could in fact just be extreme sensitivity to the initial conditions. In other words, when an electron goes through two slits at once it's behaving as a wave. When it goes through only one slit and gets measured, it's still behaving as a wave, one which decoherence has concentrated in a small area through interaction with the other particles in the apparatus. But exactly where that concentration will occur, although deterministically calculable in theory, is in practice so sensitive to initial conditions, and to unknowable ones at that (the complete starting states of everything in the universe which could influence the result), that an element of randomness appears.
  7. But in this case, quantum randomness is just like classical randomness, albeit computationally even worse by some humongous factor. And so we appear to have an explanation of all quantum weirdness. The entire universe is deterministic, but the emergent behaviour of small parts of it can only be analysed statistically. Einstein was right: God does not play dice with the universe (just with almost all parts of it :)

Have I gone too far? Am I imputing to decoherence more than there is evidence, or even an analysis, for? Can anyone point me to an analysis of a gedanken experiment in which decoherence demonstrates chaotic-like behaviour, or, even better, some indication that this kind of localisation caused by entanglement is inevitable in practice, rather than simply plausible?

And if my understanding above is in fact what decoherence tells us, then where has the mystery gone, and why do people still advocate alternative, almost philosophical approaches?
 
Bilkusg, you might find this lecture by Steven Weinstein helpful:



I found this web page to be useful as well:

http://www.ipod.org.uk/reality/reality_decoherence.asp

I am not a physicist, but my understanding is that although decoherence gives us good tools for describing the transition between pure states and mixed states, it can't explain how the mixed states emerge in the first place.

There is debate on this point, but those who've looked at it the closest, from what I've seen, seem to conclude that decoherence can't explain how the first appearance of a particle could come from a pure state.

Decoherence needs there to be an environment first. There needs to be a separate system to interact with. Then decoherence shows how the information of the pure system does not completely transfer to the environment. Only certain specific aspects. When phases cancel, the electron goes only through one slit and the interference pattern disappears.
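The phase-cancellation point can be illustrated numerically. This is a toy sketch of my own, not something from the thread: two slit amplitudes with a definite relative phase produce an interference term, while averaging over random environment-scrambled phases washes that cross term out, leaving probabilities that simply add.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two single-slit amplitudes at one screen position, equal magnitude.
a1 = 1.0 / np.sqrt(2)
a2 = 1.0 / np.sqrt(2)

# Coherent case: a definite relative phase gives an interference term.
phi = 0.0
p_coherent = abs(a1 + a2 * np.exp(1j * phi)) ** 2  # |a1 + a2|^2 = 2 here

# Decohered case: the environment scrambles the relative phase, so we
# average the probability over many random phases.
phis = rng.uniform(0, 2 * np.pi, 100_000)
p_decohered = np.mean(np.abs(a1 + a2 * np.exp(1j * phis)) ** 2)

print(p_coherent)   # ~2: fully constructive interference
print(p_decohered)  # ~1: the cross term averages away, |a1|^2 + |a2|^2
```

The surviving value is just the sum of the two one-slit probabilities, which is the sense in which "the interference pattern disappears."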

The other thing decoherence doesn't explain, as you suggest, is why the specific states emerge, as they do. Statistically, yes, we know how it will turn out, but not on an individual basis.

Randomness is not an explanation. It is another way of saying that we don't know why.

Also, your idea that there is some faint predetermining factor that causes the specific state to emerge doesn't seem to work. What you are suggesting is similar to the idea of hidden variables, which was something Einstein first considered but eventually ruled out, and Bell's Theorem mostly destroys (although Bohm does suggest universal hidden variables might work).

I think it makes more sense to treat the emergence of a specific state for a specific particle as acausal. There is no causation creating that specific state. Causation is a principle that applies only to mixed states that already exist. It describes the classical world, not the quantum world.

I hope this helps. I enjoyed your questions.
 
I wrote a massive reply and PF ate it. :(
I'll try again.

Good questions, bilkusg! I spend far too much time thinking about the measurement problem. It's an interesting one. How much of the formalism of QM do you know? Decoherence is much less mysterious if you see the maths. I'm happy to go over it if you haven't seen it before.

bilkusg said:
The fundamental mystery of quantum mechanics is that the basic equations tell us that any isolated quantum system evolves deterministically in accordance with a unitary equation, but in practice, the transition from microscopic to macroscopic environments appears to engender a 'collapse', turning waves into localised particles, and doing so in a probabilistic way.
The second mystery of quantum mechanics is that macroscopic superpositions of very different states, such as Schrodinger's Cat, can easily be described within the mathematical formalism, but appear not to exist in the real world.
Your first point is right on. As to your second point, not quite. We do see macroscopic superpositions of states! Most recently, a group entangled macroscopic diamonds at room temperature! (http://www.sciencemag.org/content/334/6060/1253) We can also see interference patterns in two colliding lead nuclei (2*208 particles is certainly huge!); indeed, considering superpositions is essential for modelling heavy-ion nuclear reactions. In addition, we see interference fringes in large (~10^6 atom) samples of BECs, and we've even done two-slit experiments with viruses! The question then becomes: when does classical behaviour emerge? We make measurements of bigger and bigger systems, and we've yet to see objects that are always classical. Maybe collapse never happens? (I have to admit to some Everettian MWI bias here.)

bilkusg said:
As I understand it, decoherence provides what might be described as a mathematically suggestive explanation of the second point. Essentially, as soon as a system gets big enough, the complex interaction of a macroscopic number of things causes macroscopic superpositions to be extraordinarily unlikely and unstable configurations, in much the same way as modern interpretations of the second law of thermodynamics describe entropy-lowering events as staggeringly uncommon, rather than theoretically impossible.
Decoherence inevitably appears when you couple a quantum system to the environment. What happens is that the coherence terms in the density matrix (the off-diagonal elements; do you know of these? I can explain if you don't) decay.
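That decay can be sketched in a few lines. This is a toy pure-dephasing model of my own (the exponential rate `gamma` is a stand-in for whatever the environment coupling actually produces): the off-diagonal coherences of a qubit density matrix shrink as exp(-gamma*t), while the diagonal populations, the classical probabilities, are untouched.

```python
import numpy as np

# Qubit in an equal superposition: maximal off-diagonal coherences.
rho0 = np.array([[0.5, 0.5],
                 [0.5, 0.5]], dtype=complex)

def dephase(rho, gamma, t):
    """Pure dephasing: multiply the off-diagonal (coherence) terms
    by exp(-gamma * t); the diagonal populations are untouched."""
    rho_t = rho.copy()
    decay = np.exp(-gamma * t)
    rho_t[0, 1] *= decay
    rho_t[1, 0] *= decay
    return rho_t

gamma = 1.0
for t in (0.0, 1.0, 10.0):
    r = dephase(rho0, gamma, t)
    print(t, r[0, 0].real, abs(r[0, 1]))
# The populations stay at 0.5 while the coherence falls from 0.5
# toward 0, leaving a diagonal, classical-looking mixture.
```

After a few multiples of 1/gamma the state is indistinguishable from a classical coin flip between the two basis states.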
bilkusg said:
This particular aspect of decoherence appears to be fairly well accepted by many people, and has some support from entanglement experiments, the behaviour of quantum computers etc.
Yep.

bilkusg said:
Although decoherence arguments make it plausible that we never see macroscopic superpositions, they appear at first sight to offer no explanation of the first question. If the apparent collapse of the wave function is simply an inevitable consequence of interaction with the rest of the universe, or even a fairly small but macroscopic part of it, then why isn't that a deterministic process, i.e. where does the quantum randomness come from?
Yes, measurement is still distinct from decoherence. The reason it is invoked to explain measurement in systems coupled to the environment is that it looks (in the formalism) the same as measurement. Decoherence doesn't predict what the measured state is. The states are still a probabilistic distribution. It doesn't explain randomness, but then again it's not supposed to.

bilkusg said:
What appears to be randomness could in fact just be extreme sensitivity to the initial conditions. In other words, when an electron goes through two slits at once it's behaving as a wave. When it goes through only one slit and gets measured, it's still behaving as a wave, one which decoherence has concentrated in a small area through interaction with the other particles in the apparatus. But exactly where that concentration will occur, although deterministically calculable in theory, is in practice so sensitive to initial conditions, and to unknowable ones at that (the complete starting states of everything in the universe which could influence the result), that an element of randomness appears.
Not quite. Bell's theorem gets rid of hidden-variable theories like the one you suggest.

bilkusg said:
But in this case, quantum randomness is just like classical randomness, albeit computationally even worse by some humongous factor. And so we appear to have an explanation of all quantum weirdness. The entire universe is deterministic, but the emergent behaviour of small parts of it can only be analysed statistically. Einstein was right: God does not play dice with the universe (just with almost all parts of it :)
See above: no hidden-variable theories. Decoherence is still very much quantum, and we still see quantum effects - don't forget things like two-slit experiments!

bilkusg said:
And if my understanding above is in fact what decoherence tells us, then where has the mystery gone, and why do people still advocate alternate, almost philosophical approaches?

The mystery is still there, in that decoherence doesn't actually provide an explanation of measurement. This is where the interpretations come into play, from things like "shut up and calculate", to interpretations where collapse of fundamental particles is an inevitable process in nature (like nuclear decay), and the collapse of one results in the collapse of the system, to interpretations where measurement never happens, and classical behaviour is an illusion.
 
Please forgive me, for I may be deeply embarrassing myself, but I would like to ask a question. If a perfectly determined system (such as a computer in the universe) measures a quantum probabilistic event and makes it deterministic again (Schrodinger's cat is alive now), does the fact that the computer was always determined to measure that probabilistic quantum event make the result of the measurement determined since the Big Bang (just unknown)? In which case, is the Copenhagen interpretation dependent on dualism?
So, either..
A. I'm talking about a "hidden variable theory", which was largely disproven by Bell's theorem.
or
B. I have no clue what I am talking about and should learn more before asking questions.

Anyhow, I'm glad this thread was posted. I was curious about the exact same thing and wanted to post something similar, but I may need to learn a lot more before asking questions.
 
e.bar.goum said:
and we've even done two slit experiments with viruses
Have they?

Gunner B said:
Please forgive me, for I may be deeply embarrassing myself, but I would like to ask a question. If a perfectly determined system (such as a computer in the universe) measures a quantum probabilistic event and makes it deterministic again (Schrodinger's cat is alive now), does the fact that the computer was always determined to measure that probabilistic quantum event make the result of the measurement determined since the Big Bang (just unknown)? In which case, is the Copenhagen interpretation dependent on dualism?

I do often wonder about this myself. It would seem you are describing the computer as a classical object (hence it would be deterministic), and in principle you could determine when measurement occurs/what result will be shown. It just doesn't fit when you use the computer to probe the quantum world. It seems there would have to be no classical/quantum separation, but quantum all the way, for randomness to hold (as Brian Cox says in his latest book). Of course, I'm not taking into account Bohm's theory, but my guess is there would still be no classical/quantum divide. The hidden variables determine the state of the quantum system, which leads to a determination of the computer state (reflecting the pre-determined result). Classical physics, as far as I'm aware, doesn't contain these hidden variables. To say classical physics determines the state of the computer, well, the quantum laws would need to align the result of the system with the classical equation giving us the state of the computer.
 
StevieTNZ - I can't seem to find a paper; I must have been mistaken. I was talking to a quantum experimentalist over lunch yesterday and the measurement problem came up - he seemed to believe the experiment had been done, and I'd heard about it as well, so I didn't bother looking for the article when I posted. Sorry! The diamond example is still good, though.

Gunner B said:
Please forgive me, for I may be deeply embarrassing myself, but I would like to ask a question. If a perfectly determined system (such as a computer in the universe) measures a quantum probabilistic event and makes it deterministic again (Schrodinger's cat is alive now), does the fact that the computer was always determined to measure that probabilistic quantum event make the result of the measurement determined since the Big Bang (just unknown)? In which case, is the Copenhagen interpretation dependent on dualism?

Gunner, don't be embarrassed! It's interesting to get these questions, because whilst it takes less than a line to show with maths, it's very difficult to explain things in plain English. Do tell me if I'm not clear, or have been too technical.

The thing is, as soon as you're coupling to a quantum system, the computer is no longer deterministic! That is, when you have used the computer to measure the state of the cat, the result of the measurement is no longer determined. Using the cat example: the cat is in a superposition of |alive> and |dead>, i.e., the state of the cat is |cat> = |dead> + |alive> (we're missing a normalisation factor here, but it's not important). If you have a computer that can measure the state, it can either measure dead or alive, i.e.,

|computer> = |computer measures alive> + |computer measures dead>.

Assuming that the probability that the computer measures cat alive when the cat is dead (and vice-versa) is zero, we now have

|cat> ⊗ |computer> = |alive>|computer measures alive> + |dead>|computer measures dead>

With some normalisation factors.
Which is a quantum system.

See? As soon as a computer is measuring a quantum system, it is no longer allowed to be deterministic. Does that answer your and StevieTNZ's concerns?
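The cat-computer state above can be written out numerically. This is a toy sketch of mine (the basis labels are my own choice): building the entangled state and tracing out the cat shows the computer's own reduced state is a 50/50 mixture with no coherences, so nothing in the computer's description by itself determines its record.

```python
import numpy as np

# Basis: |alive> = (1,0), |dead> = (0,1); same labels for the
# computer's two possible records.
alive, dead = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# Entangled post-measurement state (normalised):
# (|alive>|measures alive> + |dead>|measures dead>) / sqrt(2)
psi = (np.kron(alive, alive) + np.kron(dead, dead)) / np.sqrt(2)

rho = np.outer(psi, psi.conj())  # 4x4 joint density matrix

# Partial trace over the cat: reduced state of the computer.
rho = rho.reshape(2, 2, 2, 2)    # axes: cat, computer, cat', computer'
rho_computer = np.trace(rho, axis1=0, axis2=2)

print(rho_computer)
# [[0.5 0. ]
#  [0.  0.5]]  -- maximally mixed: the computer's record is not
# determined by anything in its own reduced description.
```

The joint state is a definite vector, but neither subsystem alone has one; that is exactly the sense in which coupling to a quantum system stops the computer being deterministic.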
 
I would like to point out that both randomness and determinism are simply attributes of theories that we develop. This is quite demonstrably true; it's obvious, in fact. Both have been shown to be useful to varying degrees in understanding reality, and neither has ever been shown to be what reality is actually doing, nor is there any reason to imagine that reality is beholden to be either one or the other. We must resist the error of imagining that reality must be the way we think about it.
 
e.bar.goum said:
See? As soon as a computer is measuring a quantum system, it is no longer allowed to be deterministic. Does that answer your and StevieTNZ's concerns?

Oh, I know that already. From what I've gathered, decoherence doesn't collapse the wave function. Superpositions still exist, they're just complex and hard to verify experimentally.
 
StevieTNZ said:
Oh, I know that already. From what I've gathered, decoherence doesn't collapse the wave function. Superpositions still exist, they're just complex and hard to verify experimentally.

Yes, decoherence doesn't collapse the state; but it does irreversibly convert quantum behaviour (additive probability amplitudes) into classical behaviour (additive probabilities), so the superpositions go away. In terms of density matrices, decoherence corresponds to the diagonalisation of the density matrix.
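That diagonalisation can be made explicit in a minimal model (a toy example of my own, not a claim about any particular experiment): a qubit whose two branches drag the environment into states |E0> and |E1>. Tracing out the environment multiplies the qubit's coherences by the overlap <E0|E1>, which vanishes as the environment states become distinguishable.

```python
import numpy as np

def reduced_qubit(overlap):
    """Qubit starts in (|0> + |1>)/sqrt(2); branch |i> drags the
    environment into |E_i>. After tracing out the environment, the
    qubit's off-diagonal term is multiplied by <E0|E1> = overlap."""
    # Environment states with the requested (real) overlap; 2-dim suffices.
    E0 = np.array([1.0, 0.0])
    E1 = np.array([overlap, np.sqrt(1 - overlap ** 2)])
    q0, q1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])

    psi = (np.kron(q0, E0) + np.kron(q1, E1)) / np.sqrt(2)
    rho = np.outer(psi, psi).reshape(2, 2, 2, 2)  # qubit, env, qubit', env'
    return np.trace(rho, axis1=1, axis2=3)        # trace out the environment

print(np.round(reduced_qubit(1.0), 3))  # overlap 1: coherence 0.5 survives
print(np.round(reduced_qubit(0.0), 3))  # orthogonal env: diagonal matrix
```

Nothing has collapsed — the joint qubit+environment state is still a single superposition — but the qubit's reduced density matrix has become diagonal, which is the "improper mixture" being discussed.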
 
There are some issues which are not resolved with decoherence.

Assume we have a classical measuring device with some "pointer states" S = {s1, s2, s3, ...}; these can e.g. be the positions of a real pointer; in case of Schrödinger's cat there would be two positions S = { s1="live", s2="dead"}; the pointer states correspond to classical behaviour and are typically localized in position space.

What decoherence explains quite well is how entanglement with environment results in emergence of some classical pointer states S.

1) What decoherence does not explain is why these pointer states are localized in position space. It could very well be that there exists a second set T = {t1, t2, t3, ...} which has sharply localized states in (e.g.) momentum space. So the emergence of a specific set S of pointer states cannot be derived generically but must have something to do with specific interactions.

1') In terms of density matrices this is rather simple: decoherence tells us that in some preferred basis the density matrix becomes nearly diagonal; but it does not tell us which specific basis we should use. This is the so-called "preferred basis" problem. I haven't seen a paper explaining this issue.

2) What decoherence doesn't explain either is which specific element s_i will be observed in one specific experiment; assume that issue 1) is solved; now look at the Schrödinger cat experiment which is stopped exactly after the half-life of the decaying particle, i.e. with a probability of 1/2 for "dead" and 1/2 for "alive"; so even if we know that there will be a classical pointer state (due to decoherence) and if we know that it is localized in position space, we do not know the result of the experiment.
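The preferred-basis point 1') can be seen directly in a toy example (mine, not tom.stoer's): a density matrix that is diagonal in the pointer basis S is generally not diagonal in a rotated basis T, so "the density matrix is diagonal" does not by itself single out which basis is the pointer basis — that has to come from the specific system-environment interaction.

```python
import numpy as np

# A mixture that is diagonal in the pointer basis S = {s1, s2}.
rho_S = np.diag([0.7, 0.3])

# A rotated basis T (here, the Hadamard rotation of S).
H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)

rho_T = H @ rho_S @ H.T  # the same state, written in basis T

print(rho_T)
# [[0.5 0.2]
#  [0.2 0.5]]  -- off-diagonal terms reappear: "diagonal" is
# basis-dependent, so diagonality alone can't pick the pointer basis.
```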
 
e.bar.goum said:
Yes, decoherence doesn't collapse the state, but no, decoherence irreversibly converts quantum behaviour (additive probability amplitudes) to classical behaviour (additive probabilities) - so, the superpositions go away - in terms of density matrices, the decoherence corresponds to the diagonalisation of the density matrix.

So there would still be no definite state. No collapse has occurred. A lot of physicists have told me system+environment is just a complex superposition. Density matrices only involve partial information about the system+environment.
 
e.bar.goum said:
See? As soon as a computer is measuring a quantum system, it is no longer allowed to be deterministic. Does that answer your and StevieTNZ's concerns?
But doesn't the computer (observer) that's measuring make the wave function collapse? Then it must measure only dead OR alive. If it was determined (by the evolution of the universe) to measure either, and it measures one, then that measurement was always going to happen, but we never could have determined (known about) it. Thus it was a statistic for humans, but it was always a real result with respect to time. Although, this doesn't really explain why you get dead sometimes and alive other times. I'm just trying to critique indeterminism, but I probably don't know what I am talking about.

It seems like the Copenhagen interpretation goes something like this:
If not observed: everything that can happen does happen at all times.
If observed: everything that can happen does happen but only sometimes.

So a universe that has a possibility of creating conscious life has to create conscious life because that possibility would make the wave function collapse to that state? Or is this situation purely classical?
 
Thanks to all the replies so far, I'm beginning to get a better idea of where we stand.
One thing though: why is the idea I originally had a hidden-variables theory which falls foul of Bell's theorem (which I think I understand)? In my original post, the universe is completely deterministic, and there is no information which is not in the quantum states of all its components (including fields). There's nothing 'hidden', any more than the positions of the molecules in a classical gas are hidden, and what I was kind of hoping is that there's a mathematical analysis which can demonstrate that if you evolve a macroscopic system containing correlated photons, decoherence will do Bell-like things to the two measuring apparatuses, which are themselves correlated because they've been in the same universe for long enough.
And if the two apparatuses weren't already linked in this sense, you'd have no way of knowing if they were aligned at the angle required to demonstrate a Bell correlation.

I can see various other potential objections to this, the most serious probably being the implication that the information in the entire universe now was already present in the big bang. But that's surely just a consequence of any theory which is entirely unitary, and to my mind it provides the biggest reason to suspect that the laws of nature will turn out to involve something else.
 
tom.stoer said:
1') in terms of density matrices this is rather simple: decoherence tells us that in some preferred basis the density matrix becomes nearly diagonal due; but it does not tell us which specific basis we should use. This is the so-called "preferred basis" problem. I haven't seen a paper explaining this issue.
In my opinion, this one is fairly easy to resolve at one relatively unsatisfying level, but it requires noticing the role of the physicist in the physics. I agree we don't know, microscopically, why a position measurement accomplishes the decoherence around a position basis, or why a momentum measurement does that around a momentum basis, but the reason we consider them to be position and momentum measurements is that they have these properties. So we simply try lots of different types of macroscopic interactions, and by pure trial and error, we discover the decohering properties of each, and then simply define them to be measurements of the appropriate type. In short, quantum mechanics doesn't tell us what a position measurement is, it merely tells us how to manipulate one mathematically-- it is only we who can say what a position measurement is, and we did that long before quantum mechanics.
2) What decoherence doesn't explain either is which specific element si will be observed in one specific experiment; assume that issue 1) is solved; now look at the Schrödinger cat experiment which is stopped exactly after half-life of the decaying particle, i.e. with a probability of 1/2 for "dead" and 1/2 for "alive"; so even if we know that there will be a classical pointer state (due to decoherence) and if we know that it is localized in position space, we do not know the result of the experiment.
I believe you have made the key point about what decoherence doesn't resolve-- it tells us neither what will happen, nor even why we will perceive a single outcome rather than a superposition. I believe the answer once again has to do with the physicist-- something in how we think/perceive requires that we encounter only a single internally coherent subsystem. Whether or not the larger decohered "many worlds" actually exists or not is a very difficult question for science, and is the entry point into all the different interpretations. In the absence of better observations and a deeper theory that explains them, all these different interpretations are effectively equivalent, and all rely on decoherence (despite a rather widespread misconception that various different interpretations are disfavored by decoherence).
 
Gunner B said:
But, doesn't the computer (observer) that's measuring make the wave function collapse?

According to Euan Squires - no. The computer is just another quantum system.
 
I thought this comment on decoherence and ontology by Leifer was an interesting one:

The second caveat, is that some people, including Max Schlosshauer in his review, would argue for plausible denial of the need to answer the ontology question at all. So long as we can account for our subjective experience in a compelling manner then why should we demand any more of our theories? The idea is then that decoherence can solve the emergence problem, and then we are done because the ontology problem need not be solved at all. One could argue for this position, but to do so is thoroughly wrongheaded in my opinion, and this is so independently of my conviction that physics is about trying to describe a reality that goes beyond subjective experience. The simple point is that someone who takes this view seriously really has no need for decoherence theory at all. Firstly, given that we are not assigning ontological status to anything, let alone the state-vector, then you are free to collapse it, uncollapse it, evolve it, swing it around your head or do anything else you like with it. After all, if it is not supposed to represent anything existing in reality then there need not be any physical consequences for reality of any mathematical manipulation, such as a projection, that you might care to do. The second point is that if we are prepared to give a privileged status to observers in our physical theories, by saying that physics needs to describe their experience and nothing more, then we can simply say that the collapse is a subjective property of the observer’s experience and leave it at that. We already have privileged systems in our theory on this view, so what extra harm could that do? Of course, I don’t subscribe to this viewpoint myself, but on both views described so far, decoherence theory either needs to be supplemented with an ontology, or is not needed at all for addressing foundational issues.

What can decoherence do for us?
http://mattleifer.info/2007/01/24/what-can-decoherence-do-for-us/
 
bohm2 said:
I thought this comment on decoherence and ontology by Leifer was an interesting one:

"...One could argue for this position, but to do so is thoroughly wrongheaded in my opinion, and this is so independently of my conviction that physics is about trying to describe a reality that goes beyond subjective experience. The simple point is that someone who takes this view seriously really has no need for decoherence theory at all. Firstly, given that we are not assigning ontological status to anything, let alone the state-vector, then you are free to collapse it, uncollapse it, evolve it, swing it around your head or do anything else you like with it. After all, if it is not supposed to represent anything existing in reality then there need not be any physical consequences for reality of any mathematical manipulation, such as a projection, that you might care to do."...

Well I’m not sure about this, unless I'm missing the point.

I mean, could we not say such a thing for many aspects of physics? For example I don’t consider there really are point like, massless “objects” we call photons “travelling” from a source to a sink, so I don’t attach any ontological significance to that label other than a picture that represents the “event” between measurements performed at the source and sink. The mathematical predictive model hinges only around measurement, not in terms of what really exists in an ontological sense between the measurements. But I’m not going to throw out the predictive model because I don't consider there is an ontology associated with the photon, the predictive model is entirely valid with or without the ontological baggage of the photon – it doesn’t need the ontology in order to be physics. (At least that’s how it seems to me).

Decoherence theory is weakly objective in principle; it is a theory that refers to us, in terms of there being proper and improper mixtures. The proper mixtures are beyond our capabilities to measure, so we only get access to the improper mixtures; thus the theory cannot provide the realism that Leifer seems to crave, but in terms of a mathematical account of the transition from micro to macro it seems to be a perfectly valid physics model with no pretence of escaping the subjective element.

I don’t actually think physics is about trying to describe a reality that goes beyond subjective experience; I think it is describing our reality with an apparent separation of subject and object. That separation breaks down at the quantum level giving us a weak objectivity, many would like to think of decoherence as re-establishing strong objectivity, but it doesn’t because of what I said above, namely that decoherence theory is weakly objective because the formalism specifically refers to our abilities (or lack of them). So decoherence cannot answer the foundational issues that Leifer wants in terms of an ontology that is independent of us, but I don’t see that we need to discard decoherence theory because of that. If we adopt that view then surely we would end up discarding most of physics wouldn’t we?

The issue of realism and decoherence in terms of proper and improper mixtures is explored by Bernard d’Espagnat in “Veiled Reality” and “On Physics and Philosophy”.
 
Len M said:
For example I don’t consider there really are point like, massless “objects” we call photons “travelling” from a source to a sink, so I don’t attach any ontological significance to that label other than a picture that represents the “event” between measurements performed at the source and sink. The mathematical predictive model hinges only around measurement, not in terms of what really exists in an ontological sense between the measurements.

But I think this is the criticism that Bell tried to highlight: measurement of what? Or information about what? And again, no one is arguing against a "Veiled Reality". I don't believe that taking a scientific-realist perspective leads into "naive" realism. But I do think that taking the alternative perspective does seem to turn physics into the "science of meter reading", as Bell points out in these quotes:

The concepts 'system', 'apparatus', 'environment', immediately imply an artificial division of the world, and an intention to neglect, or take only schematic account of, the interaction across the split. The notions of 'microscopic' and 'macroscopic' defy precise definition. So also do the notions of 'reversible' and 'irreversible'. Einstein said that it is theory which decides what is 'observable'. I think he was right - 'observation' is a complicated and theory-laden business. Then that notion should not appear in the formulation of fundamental theory. Information? Whose information? Information about what?

If the theory is to apply to anything but highly idealised laboratory operations, are we not obliged to admit that more or less 'measurement-like' processes are going on more or less all the time, more or less everywhere? Do we not have jumping then all the time?

The first charge against 'measurement', in the fundamental axioms of quantum mechanics, is that it anchors there the shifty split of the world into 'system' and 'apparatus'. A second charge is that the word comes loaded with meaning from everyday life, meaning which is entirely inappropriate in the quantum context. When it is said that something is 'measured' it is difficult not to think of the result as referring to some pre-existing property of the object in question. This is to disregard Bohr's insistence that in quantum phenomena the apparatus as well as the system is essentially involved. If it were not so, how could we understand, for example, that 'measurement' of a component of 'angular momentum' — in an arbitrarily chosen direction — yields one of a discrete set of values?

Against 'Measurement'
http://duende.uoregon.edu/~hsu/blogfiles/bell.pdf
 
  • #19
I think we are not actually that far apart-- none of us here seem to advocate a science of pure measurement (we hold that our measurements are telling us something about the world). So we are all some flavor of scientific realist-- but we also recognize that we have to measure to do empirical science, and we all recognize that measurement is a kind of filter. We see what passes the filter, because that's what we can do science on, and when we accomplish our goals, we declare that "science works on the real world." But we can notice that science works without needing to believe that science reveals the true world as it actually is-- that is what I would call naive realism, not scientific realism (or "structural realism" or whatever you want to call it). The key distinction is that we can hold there is a real world, and we can hold that it is useful to associate properties with the real world (but only as defined in elements of our theories, because the properties are in the theories not in the real world), and we can have great success, and none of that adds up to the properties themselves being real. Worse, certainly none of it adds up to the idea that properties that we know are simply a subset of a "true properties" that "actually determine" what really happens. That is way beyond scientific realism, and represents a type of blind faith in our own mental faculties that borders on idealism.
 
  • #20
Here's an interesting paper by Susskind and Bousso:
http://arxiv.org/abs/1105.3796

Until I get to college and take some quantum mechanics, I'm sticking with Lenny's idea (based on the String-Theory Landscape?).

Another very interesting paper - on String-Theory Landscape:
http://arxiv.org/abs/astro-ph/0001197

From what I know, if String-Theory/String-Theory Landscape/Anthropic Landscape turns out to be false, it will be the most breathtakingly elegant fable in the history of mankind that explained the history of mankind. That's a personal opinion obviously.
 
  • #21
I don't know, to me the concept of an "exact observable" is pretty close to a scientific oxymoron. Also, it's very unclear that "anthropic landscapes" explain anything at all-- it is certainly true that everything we perceive must be consistent with such a principle, but so must everything we see be consistent with being visible-- does that explain our seeing it?
 
  • #22
Ken G said:
I don't know, to me the concept of an "exact observable" is pretty close to a scientific oxymoron.
I don't understand what you're saying here.

Also, it's very unclear that "anthropic landscapes" explain anything at all-- it is certainly true that everything we perceive must be consistent with such a principle, but so must everything we see be consistent with being visible-- does that explain our seeing it?

IF string theory is correct, it predicts, from the number of possible alterations of the Calabi-Yau manifold, that there are 10^500 universes, and only a very few are capable of life; incidentally, they're the only universes where intelligent life can exist to ask the question, or to "see", as you said. That's how it explains it, if I've interpreted it correctly.

"The string theory landscape or anthropic landscape refers to the large number of possible false vacua in string theory. The "landscape" includes so many possible configurations that some physicists think that the known laws of physics, the standard model and general relativity with a positive cosmological constant, occur in at least one of them. The anthropic landscape refers to the collection of those portions of the landscape that are suitable for supporting human life, an application of the anthropic principle that selects a subset of the theoretically possible configurations.

In string theory the number of false vacua is commonly quoted as 10^500. The large number of possibilities arises from different choices of Calabi-Yau manifolds and different values of generalized magnetic fluxes over different homology cycles. If one assumes that there is no structure in the space of vacua, the problem of finding one with a sufficiently small cosmological constant is NP complete, being a version of the subset sum problem."
(http://en.m.wikipedia.org/wiki/String_theory_landscape)
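The quoted NP-completeness claim is quite concrete: choosing which flux contributions to "turn on" so that the total vacuum energy lands near zero is exactly the subset-sum problem. Here is a minimal brute-force sketch (the numbers are made-up toy values, not actual flux contributions), whose exponential 2^n cost is precisely the point of the quote:

```python
from itertools import combinations

def find_small_cc(contributions, target=0.0, tol=1e-3):
    """Search for a non-empty subset of vacuum-energy contributions
    whose sum lands within `tol` of `target`.

    This is the subset-sum problem: with n contributions there are 2^n
    subsets, so exhaustive search scales exponentially -- which is why
    the landscape search is hard in principle, not just large.
    """
    n = len(contributions)
    # Start at r = 1: the empty subset trivially sums to zero.
    for r in range(1, n + 1):
        for subset in combinations(range(n), r):
            total = sum(contributions[i] for i in subset)
            if abs(total - target) <= tol:
                return subset, total
    return None

# Toy example: a handful of positive and negative energy contributions.
energies = [0.731, -0.425, 0.212, -0.518, 0.304, -0.307]
result = find_small_cc(energies, tol=1e-3)
```

With six terms this finishes instantly, but each added contribution doubles the search space; the landscape's quoted 10^500 vacua arise from multiplying the choices available on each homology cycle, and NP-completeness means no general shortcut is known.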
 
  • #23
Gunner B said:
I don't understand what you're saying here.
I'm saying that in physics, an "observation" is always an interaction that involves certain idealizations and approximations. Hence, all observations come with the concept of "measurement error." This is fundamental to science, it's not some minor detail we can pretend does not exist and invoke a concept of "exact observation." We like to minimize this error, but we do so using a concept of an "accuracy target", and if there was not some finite accuracy target in the observations, then no theory could ever be viewed as satisfactory to explain those "exact" observations. So if we start invoking the concept of an "exact observation", we have left the building of what an observation means, and so we can no longer use scientific language or discuss scientific goals in a coherent way. In short, physics never has, nor has ever been supposed to, deal with an exact reality, it is always supposed to replace the exact reality with an approximated one, both in terms of the observations and the idealized theories.
IF string theory is correct, it predicts, from the number of possible alterations of the Calabi-Yau manifold, that there are 10^500 universes, and only a very few are capable of life; incidentally, they're the only universes where intelligent life can exist to ask the question, or to "see", as you said. That's how it explains it, if I've interpreted it correctly.
Yes, that's the standard mantra, but there is a great difference between a landscape in biology, and this Calabi-Yau "landscape." In biology, one can go to the various different places in the landscape and study the properties there, and say "yes, I understand why there is no life here." Science is possible throughout a real landscape. This is not the case in the string theoretical "landscape", it is purely an imaginary concept. Thus, it raises the issue, "what is an explanation-- is it just something that gives us a warm fuzzy feeling of understanding?" We must be careful on that path-- countless creation myths afforded their users with a similar warm fuzzy feeling of understanding, and a similar absence of testability.
In string theory the number of false vacua is commonly quoted as 10^500.
A number that is conveniently completely untestable. What experiment comes out A if there are 10^500, and B if there are only 10^400? All we can say is that the view gives us a way to understand, but not a way to test that understanding. We should be quite suspicious of that state of affairs-- it is not something new.
 
  • #24
Ken G said:
I'm saying that in physics, an "observation" is always an interaction that involves certain idealizations and approximations. Hence, all observations come with the concept of "measurement error." This is fundamental to science, it's not some minor detail we can pretend does not exist and invoke a concept of "exact observation." We like to minimize this error, but we do so using a concept of an "accuracy target", and if there was not some finite accuracy target in the observations, then no theory could ever be viewed as satisfactory to explain those "exact" observations. So if we start invoking the concept of an "exact observation", we have left the building of what an observation means, and so we can no longer use scientific language or discuss scientific goals in a coherent way. In short, physics never has, nor has ever been supposed to, deal with an exact reality, it is always supposed to replace the exact reality with an approximated one, both in terms of the observations and the idealized theories.

Hence the phrase "in principle". I'd much rather have a theory based on reality - the way things actually work - and then hope that later on we have the technology to test it. For example, it seems impossible to trace every event back to the beginning of the universe to see why it turned out the way it did, but we can study parts of it and find that we could do it in principle, which would suggest, based on the parts we did study, that the universe was determined at the instant of its beginning - because that's actually what happened. Likewise, you can't prove that gravity exists everywhere in the universe, or that the dark side of the moon is not made of green cheese - even though doubting either would be idiotic. So we can see that, in principle, the moon should not be made of green cheese and gravity should exist everywhere in the universe.
I don't know if this is correct or real, but it seems that is what Lenny is proposing, and it also seems like it could be a better approximation than the other quantum mechanical interpretations that you're defending?

Yes, that's the standard mantra, but there is a great difference between a landscape in biology, and this Calabi-Yau "landscape." In biology, one can go to the various different places in the landscape and study the properties there, and say "yes, I understand why there is no life here." Science is possible throughout a real landscape. This is not the case in the string theoretical "landscape", it is purely an imaginary concept. Thus, it raises the issue, "what is an explanation-- is it just something that gives us a warm fuzzy feeling of understanding?" We must be careful on that path-- countless creation myths afforded their users with a similar warm fuzzy feeling of understanding, and a similar absence of testability.

That's precisely why I said "From what I know, if String-Theory/String-Theory Landscape/Anthropic Landscape turns out to be false, it will be the most breathtakingly elegant fable in the history of mankind that explained the history of mankind. That's a personal opinion obviously."
Furthermore, I think now, in our day and age, the capability of testing our "warm fuzzy feeling" understanding is becoming a reality. I've read many times and watched many documentaries of physicists explaining different ways that we can or will be able to test, experimentally, such ideas. I'll leave you to look into those ideas yourself.
So, I argue that a huge part, if not the foundation (metaphysics), of science IS finding out how we are here. It wasn't really until Newton that we could apply mathematics to philosophy and then test it. Although, I agree there is no valid question why in the context of a purpose.

A number that is conveniently completely untestable. What experiment comes out A if there are 10^500, and B if there are only 10^400? All we can say is that the view gives us a way to understand, but not a way to test that understanding. We should be quite suspicious of that state of affairs-- it is not something new.

Actually, that number didn't come out of someone's *** (as you make it seem); it came from solving the equations of our (arguably) best or most comprehensive theory of reality that unites gravity with the standard model: string theory. Again, I discount the complete validity of string theory, as it doesn't yet have any way of being fully tested. This is why some people don't call it science. I disagree with that, though, because many of our best theories of the way the world works were created before they could be tested, and were only proven to be true descriptions of reality through later experiment. Furthermore, I just don't get the feeling that some of the greatest minds of our time spent their entire lives working on a theory that they thought wasn't a plausibly accurate description of reality, or a theory that wasn't suggested by experimental evidence. But I am as much a realist as you are and would like to know the truth, or the closest thing to it, if there is one. String theory might be close.
 
  • #25
Gunner B said:
Hence the phrase "in principle".
But what does "in principle" really mean? I can't give any meaning to it; to me it just means "actually wrong." It's kind of a nice way of saying "this is wrong, but it's right enough to be useful, so I'll say it's right in principle." Why not just say it is approximately right, or right enough for our present purposes? That's the actual truth.
So, we can see that in principle, the moon should not be made out of green cheese and gravity should exist everywhere in the universe.
Why not just say we will enter into those assumptions because we find it useful to do so and have no good reason to do otherwise? Again, this is the actual truth.
I don't know if this is correct or real but it seems that is what Lenny is proposing and it also seems like it could be a better approximation than other quantum mechanical interpretations that you defending?
Lenny is what I would describe as a Platonic rationalist. He regards our purest concepts about reality as what reality actually is, before we get into the confusing details that make every situation unique. To me, what makes every situation unique is precisely what we would call "the reality", so what we are doing when we rationalize is intentionally replacing the reality with something that makes a lot more sense to us, and we find it quite fruitful to do so. But you are quite right-- this issue is very much the crux of many of the different interpretations of quantum mechanics. When you go to actually learn how to do quantum mechanics, you may be surprised to find just how unnecessary it is to invoke any particular interpretation in order to get the answer right, but we want more than just the answer, we want to understand the answer.
That's precisely why I said "From what I know, if String-Theory/String-Theory Landscape/Anthropic Landscape turns out to be false, it will be the most breathtakingly elegant fable in the history of mankind that explained the history of mankind. That's a personal opinion obviously."
I guess what I'm saying is, it will be that even if it is never found to be false. Indeed, the reason it is probably already a fable is precisely because I cannot imagine how it could ever be found to be false, just as I cannot imagine how saying that some deity waved their hand and created the universe yesterday in exactly the form we find it could ever be found to be false. The real purpose of a scientific theory is to suggest experiments that can refute that theory, and the more such experiments that could have refuted it that do not refute it, the more we start to find advantage in that theory.
Furthermore, I think now, in our day and age, the capability of testing our "warm fuzzy feeling" understanding is becoming a reality. I've read many times and watched many documentaries of physicists explaining different ways that we can or will be able to test, experimentally, such ideas.
I've seen a few, but never any that I found likely or convincing. I'm afraid the existing evidence is quite weak that the "landscape" idea is actually testable. Maybe I just need to wait until some of these experiments actually get done, but you know, I'm just not holding my breath there. It isn't like the predictions of relativity, which suggested many tests that took only a few decades to carry out.
Actually, that number didn't come out of someones *** (as you make it seem), it came from solving equations from our (arguably) best or most comprehensive theory of reality that unites gravity with the standard model, string theory.
We don't have any such unifying theory. Many people like to pretend that we do, or that it is just around the corner, but there is no actual evidence of either. Who cares if 10^500 comes from some calculation, if the outcome does not translate into an experiment that comes out differently if it isn't that number but some other number? All of physics has to be framed in the form "experiment A comes out X if theory Y is correct, and something else if it isn't correct", or it just isn't physics.
But I am as much a realist as you are and would like to know the truth, or the closest thing to it, if there is one. String theory might be close.
I have no issue with string theory as a potential path to a new theory, my issue is with the common claims that it is already a theory, or that the landscape already provides an explanation of anything. One can choose to picture a landscape, and doing so answers certain questions, but that is always true of any philosophy (and indeed any religion). What makes something science, rather than philosophy or religion, is experimental testability.
 
  • #26
Ken G said:
I have no issue with string theory as a potential path to a new theory, my issue is with the common claims that it is already a theory, or that the landscape already provides an explanation of anything. One can choose to picture a landscape, and doing so answers certain questions, but that is always true of any philosophy (and indeed any religion). What makes something science, rather than philosophy or religion, is experimental testability.

Everything I have said thus far agrees with this. Although, I think comparing String Theory to religion is a bit abusive. Again, these physicists are not basing their theory on wishful thinking and bad reasoning like religion and bad philosophy.
Here's the perfect short video for you, that I watched a few days ago, of one of the most popular experimental physicists of our time - Brian Cox (Works at CERN LHC, Wonders of the Solar System & Universe) talking to one of the most popular theoretical physicists of our time - Leonard Susskind, about String Theory.
http://www.youtube.com/watch?v=Fkwt2jlvHqM&context=C3709023ADOEgsToPDskI7ABtA32PUSxffW8ysQRg1

He says that he/they got the idea straight from experiments and that they had no choice but to try to track down what was happening. He admits that it may not be a true description of reality and that it will be hard to test, if it is ever possible, but I don't think he would like someone calling his theory a religion, for the reasons he has stated. I also think he would argue that his theory is a part of the scientific method, just as the standard model was and still is, as well as general relativity.

Here's another interesting quote about experiments from a founder of String Theory, Michio Kaku:
"Almost all advanced science is done indirectly, not directly. We know that the sun is made mainly of hydrogen, not because we have visited the sun or scooped up some sun material, but because we have analyzed sunlight using prisms from a distance. Similarly, we look for indirect clues to test theories beyond the Standard Model, such as finding evidence of "sparticles" (higher vibrations of the string) which may explain the presence of dark matter, or creating gravity-wave detectors in deep space which may detect evidence of radiation from before the Big Bang, or finding experimental deviations from Newton's inverse square law which may prove the existence of parallel universes.

...it is astounding that all known physical laws, from expanding universes to colliding sub-atomic particles, can be summarized on a single sheet of paper, containing the mathematical equations of general relativity and the Standard Model.

In fact, the sum total of all physical knowledge obtained in the last 2,000 years can be described in the language of unification. This is powerful testament to the power of unification. It would seem bizarre if nature did not take the last step, and unite these two sets of equations."

I think extraordinary claims require extraordinary evidence but whether that evidence comes from a direct experiment/observation or is instead backed up by suggestive reasoning, indirect experiments or other past direct experimental evidence will ultimately affect its validity. I believe there's more hope, or faith if you will, of proving or disproving the existence of String Theory than there is for proving or disproving the existence of a Judeo-Christian God.

Plus, there's always all the other interpretations of quantum mechanics and GUT's/TOE'S that exist and have yet to exist. Only time will tell.
 
  • #28
Gunner B said:
Everything I have said thus far agrees with this. Although, I think comparing String Theory to religion is a bit abusive.
That's why I never made any such blanket comparison. There are aspects of string theory that are more like physics, and aspects that are more like religion. Each individual can sort the various claims they see as they wish, all I said was how to distinguish the two.

For example, let's look at a few of Kaku's claims more closely:

"...it is astounding that all known physical laws, from expanding universes to colliding sub-atomic particles, can be summarized on a single sheet of paper, containing the mathematical equations of general relativity and the Standard Model."

It is a striking accomplishment, but should we view it as astounding that things worked out this way? Is it not true that during most of our various eras of scientific understanding of nature, the same statement could have been made? There are at least two very distinct possibilities here-- either we have learned something about our universe, that it is highly unified, or we have learned something about how we do physics, that it is highly unifying. Most people seem only to see the former possibility, but the latter seems much more plausible to me. And...

"It would seem bizarre if nature did not take the last step, and unite these two sets of equations."

What would "seem bizarre" to Michio Kaku is not of much importance, frankly, because we very often encounter things that "seem bizarre" in physics. Indeed, a fair characterization of the history of physics is a process of encountering one seemingly bizarre surprise right after another (didn't Bohr once say that some new theory was crazy enough to be possible, or words to that effect?). So how Kaku takes this fact, and interprets a principle that says we should not expect physics to be bizarre, I just don't get. There is only one reason to seek the unification of the forces-- science is about understanding, and understanding is about unification. That tells us the arrow of science-- but where it leads is usually very bizarre, and rarely anticipated.
 
  • #29
Ken G said:
So how Kaku takes this fact, and interprets a principle that says we should not expect physics to be bizarre, I just don't get. There is only one reason to seek the unification of the forces-- science is about understanding, and understanding is about unification. That tells us the arrow of science-- but where it leads is usually very bizarre, and rarely anticipated.

I think you misinterpreted what he meant. Kaku, just being a physicist, I'm sure has experienced more bizarre things in his life than both of us put together. All he was saying is that it would be bizarre if the universe had different, contradictory laws at different scales. We should expect the universe to be consistent in some way for any number of reasons, including the fact that if it weren't, we would never be able to explain black holes and the big bang. The process, or the way the universe is built to manifest this unbizarre, expected result, is probably the most bizarre thing we have come up with (string theory, loop quantum gravity, etc.). What you're saying is that it would be bizarre if the universe were consistent and we could explain black holes and the big bang, the reason why we are here - or that his argument is non-unique. The only reason why we want to find a TOE is because we think it is consistent or can be explained that way, and like you said, for a better understanding and approximation of this fact.
I agree with Kaku that it would be quite bizarre if we could never explain something so vital to our understanding of how the world works and why it is here, and quite astounding if we could.
 
  • #30
Gunner B said:
I think you misinterpreted what he meant. Kaku, just being a physicist, I'm sure has experienced more bizarre things in his life than both of us put together.
But he clearly intimated that he expected unification to go a certain way, because it would be bizarre otherwise. So apparently he has chosen to overlook all those bizarre experiences, which is just what I find puzzling. It is all well and good that he should look for unification, but he should not expect to succeed on the grounds that it would be bizarre not to.
All he was saying is that it would be bizarre if the universe had different, contradictory laws at different scales. We should expect the universe to be consistent in some way for any number of reasons, including the fact that if it weren't, we would never be able to explain black holes and the big bang.
The universe doesn't have laws, physics does, and it is perfectly normal for the laws of physics to be contradictory, we have had to navigate those waters for thousands of years. There have been periods when we did not recognize that the laws of physics were contradictory, but the next breakthrough always had to wait for us to recognize that they were. And we should certainly not predicate our ability to explain black holes and the Big Bang on consistencies in the scale of our various theories, because we currently have fairly good explanations of both black holes and the big bang even though the laws we use to do that are indeed contradictory on those scales. That's basically what "cosmic censorship" is all about, that the inconsistencies in our laws must have some reason not to bite us or we would not have arrived at those laws in the first place. This is perfectly normal for physics, what we should regard as bizarre is the idea that at some future point something that has always been true about physics will suddenly not be true about it. That is what I meant by wishful thinking.

The only reason why we want to find a TOE is because we think it is consistent or can be explained that way, and like you said, for a better understanding and approximation of this fact.
Yes, that is the reason we want to find a TOE-- not at all because it would be bizarre if the universe did not conform to the kind of TOE we have in mind for it. That kind of bizarreness is the bread and butter of physics.

I agree with Kaku that it would be quite bizarre if we could never explain something so vital to our understanding of how the world works and why it is here, and quite astounding if we could.
I am fine with these uses of "bizarre" and "astounding"; what I took issue with was the clear implication by Kaku that we should expect the kind of unification that string theory suggests to end up being correct, on the grounds that it would be bizarre if it were not. That basic philosophy pervades the current string theory literature, yet it falls into a trap that physicists have fallen into so many times throughout history that we really ought to know better by now. Sometimes it works (as when Einstein intuited how relativity must work, or when Dirac intuited how positrons must work), but we should never expect the universe to behave the way we imagine it should. Einstein's EPR paradox was anchored on the idea that the universe could not behave in a certain way that we now know it does behave - so even Einstein was hit or miss when it came to anticipating the universe.
 
  • #31
Ken G said:
But he clearly intimated that he expected unification to go a certain way, because it would be bizarre otherwise. So apparently he has chosen to overlook all those bizarre experiences, which is just what I find puzzling. It is all well and good that he should look for unification, but he should not expect to succeed on the grounds that it would be bizarre not to.

Do you mean we should expect unification without General relativity? I do believe there are some, but what makes them better than string theory?
What are you advocating here, what alternatives? That we shouldn't expect anything?

And we should certainly not predicate our ability to explain black holes and the Big Bang on consistencies in the scale of our various theories, because we currently have fairly good explanations of both black holes and the big bang even though the laws we use to do that are indeed contradictory on those scales.

I disagree with this. What about quantum gravity?
What are the "fairly good explanations of both black holes and the big bang"? Also, since you demand such strong experimental evidence for "good explanations", what makes them good enough for you to call them that? I'm not trying to be rude, I honestly want to learn something here.

That's basically what "cosmic censorship" is all about, that the inconsistencies in our laws must have some reason not to bite us or we would not have arrived at those laws in the first place. This is perfectly normal for physics, what we should regard as bizarre is the idea that at some future point something that has always been true about physics will suddenly not be true about it. That is what I meant by wishful thinking.

So far, Galileo, Newton and Einstein have all been wrong about gravity - but theirs are good approximations. We still don't have a valid theory of quantum gravity, which we need to explain black holes and the big bang. So if you want to call the life's work of the greatest minds "wishful thinking", that's your business. As for comparisons of astrology, spontaneous generation and other quack-science ideas to unification theories - to me, they are abusive. I think unification theories are pointing in the right direction. Yet I admit that I am not credible or experienced enough to say that with complete confidence, but you haven't given me any reason to believe otherwise or to go against what I've learned thus far.

we should never expect the universe to behave the way we imagine it should.

I agree that, ultimately, we should only believe, and thereby understand, what the universe tells us about how it works, but we would never get anywhere if we didn't make connections between different truths (or near-true approximations, to be specific) that we already knew about the universe in order to deduce how it might actually work - especially when we don't have the proper tools to test it. Yet we wouldn't know what tools to make if we didn't try to anticipate how something might work. In fact, you wouldn't have a brain (arguably the best tool in the universe) with exceptional reasoning if organisms didn't benefit from anticipating how the world works.
Einstein, one of the most successful physicists, once said that imagination is the most important thing. We wouldn't have the amenities of the modern world if it weren't for his crazy ideas, which are still being tested today. General relativity predicts gravitational waves, yet we still haven't detected any. His theory still doesn't account for gravity at the scale of black holes or the big bang.
 
  • #32
Gunner B said:
Do you mean we should expect unification without General relativity? I do believe there are some, but what makes them better than string theory?
What are you advocating here, what alternatives? That we shouldn't expect anything?
Certainly, not expecting anything is likely the most prudent stance. But I could also offer other possibilities-- we could expect unification of gravity with QM but not in the way envisaged in string theory, or we could expect unification of QM with gravity by finding a form of QM that is more consistent with GR (say the way Penrose tries), or my favorite, we could expect that GR is already as unified with QM as it's ever going to get, because the concepts invoked by GR (that of gravity determining the inertial trajectory) have very different goals from the concepts invoked by QM (that of determining how potential energy functions alter the state of a system). In other words, the goal of GR should simply be to determine the behavior of a free test-particle wavefunction against a classically evolving stress-energy background, something it accomplishes quite admirably.

Note this is totally different from quantizing gravity, which we really have no demonstrated need to do, and poor prospects for testing if we succeed. What true unification requires is a way to determine the gravitational effects of quantum mechanical systems, but there is no guarantee that quantizing the gravity of classical systems will give us the gravitational environment of quantum systems that are not in the classical limit, and in point of fact, there are no quantum systems that have important gravitational consequences, so we have essentially no prospects for ever testing a truly unified theory. Let's ask ourselves these questions: how close are we to observationally probing the gravitational effect of a proton on an electron? How is string theory going to change that?

What are the "fairly good explanations of both black holes and the big bang"?
The ones you can find in any current astrophysics textbook, involving general relativity. For the big bang, they also invoke the cosmological principle, which arises from no fundamental physical theory at all, and I know of no prospects that it would arise from string theory either. It is nothing but a default assumption, of the kind physics is quite used to making without apology. So much for a "theory of everything"!
Also, since you demand such strong experimental evidence for "good explanations", what makes them good enough for you to call them that? I'm not trying to be rude, I honestly want to learn something here.
I don't think it's rude at all to ask pointed questions. But my answer is just that we can call them good explanations because there is nothing that is observed that leaves us scratching our heads-- all the observations are currently predicted well. Yes, we cannot predict them all using the same theory, but that is nothing new for physics. If that means we don't have good explanations, then physics has never delivered a good explanation in its history, and if that were true, it would be no kind of justification to require that it deliver a good explanation now.
So far, Galileo, Newton and Einstein have all been wrong about gravity - but they're good approximations. We still don't have a 'valid' theory of quantum gravity, which we need in order to explain black holes and the big bang.
But that's just it-- we don't need quantum gravity to explain what Newton explained, nor do we need it to explain what Einstein explained, nor do we need it to explain black holes or the big bang, nor do we need it to explain any observation that we have ever done. Our current various theories of gravity succeed at all that. The only thing we need quantum gravity for is to delude ourselves that we have figured nature out, and even that will only last until such a time that we actually do observe something fundamentally new that involves gravity. A theory of quantum gravity that makes gravity seem like quantum mechanics would have the usual pedagogical value of any unification enterprise, so it would be of value, but it would inevitably be totally oversold and we'd end up with the same egg on our faces that physicists have worn a dozen times already and really should have learned by now!
So if you want to call the life's work of the greatest minds "wishful thinking" then that's your business.
I guess I would have to call that a blanket oversimplification of anything I actually said. What I actually said is that the unification enterprise is a perfectly valid thing for smart physicists to get involved with if they choose to, but what is "wishful thinking" are the following common claims we often see:
1) the result of the unification will be the final answer, a "theory of everything", and
2) the unification should work the way we want it to, because it would be strange if it didn't.
It isn't Michio Kaku's "life's work" to assert those two claims, but every time he does, he is still engaging in wishful thinking.
Yet, we wouldn't know what tools to make if we didn't try and expect how something might work. In fact, you wouldn't have a brain (arguably the best tool of the universe) with exceptional reasoning if organisms didn't benefit from expecting how the world works.
No, there is no requirement to expect a theory to work a certain way; it is completely acceptable to adopt complete agnosticism, and indeed it is far more important for the scientist to practice skepticism than it is to embrace faith. This has always been true; it wasn't just true for Galileo. Even Newton didn't really believe his own theory-- he once said that the requirements of "action at a distance" are so physically absurd that no intelligent physicist could ever believe it would be the actual truth of the situation, or words directly to that effect. The role of expectation is quite simple: we benefit from expecting the same things to happen in the same circumstances, and a principle of physics is simply an inductive grouping of all the noticed similarities. When the group is expanded to include fundamentally new members, the principles usually require modification in some surprising way. That has been the history of physics from the start; what I don't get is why so many people seem to expect it to work differently this time.
Einstein, one of the most successful physicists, if not the most, once said that imagination is the most important thing.
Exactly, but imagination is much more about breaking from the mold, than it is about converting dogma into expectation. Step one is to imagine a universe that functions completely differently than was expected. That's what string theorists did at the outset of their enterprise-- the worst thing they could do now is to fall back into dogmatic expectations.
General relativity predicts gravitational waves, yet we still haven't detected any. His theory still doesn't account for gravity at the scale of black holes or the big bang.
I'm not sure why you think GR doesn't account for gravity at those scales-- the theories of black holes and the big bang are pretty much 100% GR, and GR works splendidly in predicting everything about them that we can actually test (notwithstanding the difficulties in detecting gravitational waves, but that is likely a technological problem, because everything that has been observed astronomically is consistent with gravitational wave generation). Yes, we don't know what a singularity would look like, or why there's a cosmological constant (if that's what it is), and perhaps string theory might give us a way to understand those things differently, but at present we have no observational constraints on those questions, and string theory gives us way too many ways to understand them to be useful. The situation is so bad that the anthropic principle is often invoked to explain why things are the way they are, which is pretty darn close to giving up on the question. I really don't mind giving up on a question, some might just be too hard to solve, but anthropic reasoning is not an explanation, it's a logical necessity.
 
Last edited:
  • #33
I had a very long and comprehensive reply to your post and took it for granted that when I pressed the button "post reply" that it would work properly. I tried to retrieve the information but couldn't. So, I guess that ends our discussion haha, sorry. Thanks for the info though!
 
  • #34
Yeah, I really hate it when that happens. I've taken to copying every post into a buffer before I hit "post."
 
  • #35
Ken G said:
Yeah, I really hate it when that happens. I've taken to copying every post into a buffer before I hit "post."

Hey, I decided that I'm probably going to re-write what I was going to say. I was discouraged from doing this last night after I lost everything I had written. Keep in mind that I would like to continue this discussion (if you would like to), not for the sake of arguing but for the sake of learning something new.
If you agree to continue our discussion, I will post my reply as soon as I have the time to write it and save it.
 
  • #36
Ken G said:
it is completely acceptable to adopt complete agnosticism, and indeed it is far more important for the scientist to practice skepticism than it is to embrace faith.
I agree that it's acceptable and that it's far more important to practice skepticism, but I don't think it's good to say "I don't think we can explain what is actually happening because it looks complicated. So, if you try to explain what is actually happening I'm going to think it's wrong, even though I don't have a better explanation."
That's what Newton said about the ontology of gravity, and I'm glad Einstein was able, somewhat, to prove him wrong with GR (gravitational effects come from the warping of space-time by mass/energy).
Unfortunately, it seems as though Einstein's approximations don't hold up in all circumstances in the universe, just as Newton's didn't.

But it seems like what you're saying is that we've reached a point in time where physics and metaphysics cross paths, and to the untrained eye one might confuse metaphysical claims with actual science. I'm saying that, I think, I do a good job of distinguishing those claims when choosing what to believe as true, false or probable. This, to me, is the essence of skepticism. I think the probable part is important, though, because I believe that for a claim or theory to be probable it must be based on or influenced by scientific facts (other claims proven by experiment) and explain a certain phenomenon accurately.

The role of expectation is quite simple: we benefit from expecting the same things to happen in the same circumstances, and a principle of physics is simply an inductive grouping of all the noticed similarities.

"Inductive reasoning allows for the possibility that the conclusion is false, even where all of the premises are true." (John Vickers. The Problem of Induction. The Stanford Encyclopedia of Philosophy.)
So, according to you, String Theory is a principle of physics. String Theory, from what I know (prove me wrong), is nothing more than a way of explaining something through an "inductive grouping of all the noticed similarities". To go farther than similarities, I would like to know how String Theory isn't mostly based on or influenced by scientific facts (proven by experiment).

When the group is expanded to include fundamentally new members, the principles usually require modification in some surprising way. That has been the history of physics from the start; what I don't get is why so many people seem to expect it to work differently this time.
General Relativity + Standard Model = Surprising new way to really explain
black holes and the big bang. (see below)

"Black holes draw audiences, because they are weird, they are profound, they are Albert Einstein and Steven Hawking rolled into a singularity. Or some such – except, none of this is actually the case. The black hole is a much more mundane concept, older than relativity, and despite much misinformation in popular and pseudo science, black holes have in a certain sense little to do with relativity (and I say this although and because I worked for many years on black holes and used general relativity when doing so).

...Let us get one issue out of the way right now, before even discussing escape velocity, which I will introduce below: A black hole is a body so massive that its escape velocity v exceeds the speed of light c.
That’s it – no more – that’s what it was in 1783 already, and this is what it is still today, and relativity did not change a thing about it! Yes, you read correctly: this is still today the only and proper definition of a black hole. Read it again, learn it once and for all, and remember that it does not involve anything weird, like singularities or pathways to other universes, at all. Moreover, black holes are by now well known astronomical objects – they are out there and we have observational evidence...[W]hat did relativity add? Relativity added two issues: Firstly, special relativity found out that nothing can go faster than light. Only in this sense does the “hole” aspect of the black hole become established by relativity. However, dear Wikipedia writer, NOT BY GENERAL RELATIVITY! The fact that light velocity is the limit is mere special relativity!
What general relativity added is basically only confusion: If general relativity holds true inside the black hole, there could be, in some cases should be a singularity inside. This however is no more than a sign; a little red flag indicating that general relativity is probably not true far inside a black hole. A singularity is here related to an infinite (divergent) density. This is not weird, not philosophy, not time travel or warp drive, not worm hole or quantum healing, dear Hawking and Caroll and so on, although such silly interpretations do sell silly books. A divergence to infinity in a physical theory is no more than a sign that the theory has left its domain of applicability and should be replaced by something better in the future.
Why do I ride on this singularity issue? I like to teach science properly, like in the boring universe,
so that people learn something and do not just go home with their heads full of misleading rubbish plus the notion that I am awesome. People who are under the misconception that black holes involve singularities are also under the impression that black holes have not been found in astrophysics, and that is just wrong. It is well established that there are black holes in every spiral and elliptical galaxy. The best observational evidence derives from our own galaxy, the Milky Way.
So, the next time somebody rambles on about that he or she knows all about the mysterious physics of black holes, the answer is not “Ohhhhhha! Wow!”, but “Do you even know what a black hole is at all?” "
In replies that aim to discount what he said about GR and SR, he adds the comment: "I also like to stay close to what experimental observation actually tells us. It tells us that black holes exist all over the universe, but your big name singularities however only exist on paper."
and "infinite density is nonsense. Newton knew that, Einstein knew that (that's why he initially put in the cosmological constant - he only later changed his mind, due to Hubble's discovery), and we nowadays also know that, since there is no infinite density at the start of the universe (there is inflation before the big bang)."
and, finally, rhetorically adds "So why then does String Theory work without Singularities?"
(It's probably better if you read the entire blog post + comments, if you are interested: http://www.science20.com/alpha_meme/black_holes_demystified-71881)
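The blogger's 1783-era definition is easy to make concrete with a back-of-the-envelope check (this is my own sketch, not from the blog; the constants are rounded SI values and the function names are just illustrative). Setting the Newtonian escape velocity v = sqrt(2GM/r) equal to c gives the critical radius r = 2GM/c², which happens to coincide numerically with the Schwarzschild radius of general relativity:

```python
import math

# Rounded SI constants
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # one solar mass, kg

def critical_radius(mass_kg):
    """Radius at which the Newtonian escape velocity sqrt(2GM/r) equals c.

    Numerically the same as the Schwarzschild radius r_s = 2GM/c^2.
    """
    return 2.0 * G * mass_kg / c**2

def escape_velocity(mass_kg, r_m):
    """Newtonian escape velocity at radius r_m from a mass mass_kg."""
    return math.sqrt(2.0 * G * mass_kg / r_m)

r_s = critical_radius(M_sun)
print(f"Critical radius for one solar mass: {r_s / 1000:.2f} km")  # ~2.95 km
# Sanity check: at that radius, the escape velocity is exactly c
print(f"v_esc / c = {escape_velocity(M_sun, r_s) / c:.3f}")
```

So the purely Newtonian argument already says that a solar mass compressed inside roughly 3 km would trap light, with no singularities or relativity needed for the definition itself, which is the blogger's point.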

Other problems:
"In the case of a charged (Reissner–Nordström) or rotating (Kerr) black hole it is possible to avoid the singularity. Extending these solutions as far as possible reveals the hypothetical possibility of exiting the black hole into a different spacetime with the black hole acting as a wormhole. The possibility of traveling to another universe is however only theoretical, since any perturbation will destroy this possibility. It also appears to be possible to follow closed timelike curves (going back to one's own past) around the Kerr singularity, which lead to problems with causality like the grandfather paradox. It is expected that none of these peculiar effects would survive in a proper quantum mechanical treatment of rotating and charged black holes.

The appearance of singularities in general relativity is commonly perceived as signaling the breakdown of the theory. This breakdown, however, is expected; it occurs in a situation where quantum mechanical effects should describe these actions due to the extremely high density and therefore particle interactions. To date it has not been possible to combine quantum and gravitational effects into a single theory. It is generally expected that a theory of quantum gravity will feature black holes without singularities.

...Although general relativity can be used to perform a semi-classical calculation of black hole entropy, this situation is theoretically unsatisfying. In statistical mechanics, entropy is understood as counting the number of microscopic configurations of a system that have the same macroscopic qualities (such as mass, charge, pressure, etc.). Without a satisfactory theory of quantum gravity, one cannot perform such a computation for black holes. Some progress has been made in various approaches to quantum gravity. In 1995, Andrew Strominger and Cumrun Vafa showed that counting the microstates of a specific supersymmetric black hole in string theory reproduced the Bekenstein–Hawking entropy. Since then, similar results have been reported for different black holes both in string theory and in other approaches to quantum gravity like loop quantum gravity." (http://en.m.wikipedia.org/wiki/Black_hole)
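The semi-classical entropy mentioned in that Wikipedia passage (the quantity the Strominger-Vafa microstate counting reproduced, for a different, supersymmetric black hole) is just S = k_B·A·c³/(4Gℏ) with horizon area A = 4πr_s². As an illustrative sketch (rounded constants, hypothetical function name), it takes a few lines to see how enormous the number of implied microstates is for an ordinary solar-mass black hole:

```python
import math

# Rounded SI constants
G = 6.674e-11      # gravitational constant
c = 2.998e8        # speed of light
hbar = 1.055e-34   # reduced Planck constant
k_B = 1.381e-23    # Boltzmann constant

def bh_entropy(mass_kg):
    """Semi-classical Bekenstein-Hawking entropy S = k_B * A * c^3 / (4 * G * hbar),
    using the Schwarzschild horizon area A = 4 * pi * r_s^2, r_s = 2GM/c^2."""
    r_s = 2.0 * G * mass_kg / c**2
    area = 4.0 * math.pi * r_s**2
    return k_B * area * c**3 / (4.0 * G * hbar)

S = bh_entropy(1.989e30)  # one solar mass
print(f"S / k_B ~ {S / k_B:.2e}")  # on the order of 1e77
```

The dimensionless number S/k_B is roughly 10^77 for one solar mass: this is the count (in log terms) of microscopic configurations that any candidate quantum-gravity theory, string theory or otherwise, is being asked to account for.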


This is where I compare String Theory - and all other unifying, yet-to-be-unifying and un-unifying theories - to the marvelous example of the geocentric model vs. the heliocentric model.

A long time ago a very smart man named Ptolemy created a beautiful, though complicated, mathematical theory describing a model of our solar system in which the sun and all other planets revolved around the earth. His theory explained something that almost everyone expected at that time to be true.

“The astronomical predictions of Ptolemy's geocentric model were used to prepare astrological charts for over 1500 years. The geocentric model held sway into the early modern age, but was gradually replaced from the late 16th century onward by the heliocentric model of Copernicus, and Kepler.”

It took 1500 years to come up with a mathematically different theory!
After Copernicus’s new theory of heliocentrism, it then took an additional 200 years to officially prove his theory, and thereby disprove the geocentric model, with the observational evidence of William Herschel, using the newly invented telescope.

The moral of the story is that String Theory explains something that our current theories do not, and does so without experimental evidence. Then again, there seems to be no experimental evidence for singularities either, so in that case String Theory does a much better job of explaining the mechanisms of physical reality than General Relativity alone. Anyhow, String Theory could be, and probably is, a geocentric model. We won't know until we can test it, and we probably can't test it until we come up with something as revolutionary as the telescope. If it is, I congratulate the founders for explaining something (wrongly) and for influencing research to find the truth, just as I would the intelligent man who created the geocentric model, and just as the geocentric model itself probably did.

P.S. I saw that you mentioned "cosmic censorship", is this a philosophical argument?

It seems as though Hawking lost a bet on it. Doesn't mean it isn't credible though... Just wondering.
 
  • #37
Gunner B said:
I don't think it's good to say "I don't think we can explain what is actually happening because it looks complicated. So, if you try to explain what is actually happening I'm going to think it's wrong, even though I don't have a better explanation."
But I would never say that, because I have a more defensible view of the meaning of the term "explain" than what is normally used. The defensible meaning of "explain" is merely "forming language that gives us both a sense of understanding and useful predictive power". That's it, that's all "explain" ever means-- it certainly never means "find the truth behind" or "discover an irreducible description of", even though you'd think it did mean that the way some people use the term. So Newton explained Kepler's laws with a theory of gravity that wasn't correct. We still use that incorrect theory to explain Kepler's laws, and indeed they still do explain them! Explaining something, and being right, are very different animals, so I think we explain all kinds of complicated things, and I don't think we are right, I think we are doing physics.
I'm saying that, I think, I do a good job of distinguishing those claims when choosing what to believe as true, false or probable.
But what is this claim really saying? Would not Aristotle have said that? Newton? Einstein before quantum mechanics? What does it mean to "do a good job"? Yes, we do a very good job of explaining many things, but we should not then conclude that we are "probably" correct, in fact we are probably quite wrong, and a thousand years someone will look at us as quite naive to have imagined that we were probably right about almost everything that we now imagine we are probably right about.
String Theory, from what I know (prove me wrong), is nothing more than a way of explaining something through an "inductive grouping of all the noticed similarities". To go farther than similarities, I would like to know how String Theory isn't mostly based on or influenced by scientific facts (proven by experiment).
String theory is not really an inductive grouping of phenomena at all-- the grouping of phenomena was done by quantum mechanics and general relativity, in two different groups. String theory tries to unite them in a single group, but there is nothing about the phenomena themselves that is similar; the unification is a kind of shotgun wedding. Granted, it is valuable to unify if possible, but nothing about string theory comes from observations. The number of observers working on string theory is few or none, whereas most areas of physics have observers outnumbering the theorists.
The moral of the story is that String Theory explains something that our current theories do not, and does so without experimental evidence.
But it isn't clear to me what string theory is supposed to explain. The examples I see from that blog are the entropy of a black hole, but the blog also says that loop quantum gravity can do that too! So apparently it's not that hard to do. Or, it might be that string theory explains the absence of singularities, but the blog also mentions that Kerr black holes don't have singularities either, and of course all black holes will be Kerr black holes! So I just don't see what string theory is explaining here. The analogy with the heliocentric model isn't all that clear either, because the blogger appears to associate string theory more with the geocentric model anyway-- basically what you get when you form a highly rationalistic theory without close observational constraints. Even if one thinks that string theory is like heliocentrism, there still was no way for physics to tell which model was better until the observations got better. So the one thing I can agree with from that blog is that the real value of any of these modes of thought is not whether they are likely to be true, but simply how they might help us figure out what observations we need to do to tell. That's what science is really all about, not guessing what is probable vs. "bizarre." (I'm not objecting to the blog, I think it raises some good talking points, I just don't really agree.)
P.S. I saw that you mentioned "cosmic censorship", is this a philosophical argument?
It is true that the phrase comes up a lot when discussing "naked singularities", which GR appears to allow but which causes most physicists distress to imagine. But I just mean it in the more general sense of any seemingly unphysical behavior that some theory allows, and how the universe must find some way to avoid having it actually happen, without the theory actually being wrong. So the list includes closed timelike loops, the "many worlds" of unitary quantum mechanics, FTL communication via entanglements, etc.
 
  • #38
Ken G said:
But I would never say that, because I have a more defensible view of the meaning of the term "explain" than what is normally used. The defensible meaning of "explain" is merely "forming language that gives us both a sense of understanding and useful predictive power". That's it, that's all "explain" ever means-- it certainly never means "find the truth behind" or "discover an irreducible description of", even though you'd think it did mean that the way some people use the term. So Newton explained Kepler's laws with a theory of gravity that wasn't correct. We still use that incorrect theory to explain Kepler's laws, and indeed they still do explain them! Explaining something, and being right, are very different animals, so I think we explain all kinds of complicated things, and I don't think we are right, I think we are doing physics.

I completely agree that 'to explain' means "forming language that gives us both a sense of understanding and useful predictive power". But isn't it a personal preference if you desire to find the ultimate understanding behind something (or everything), and by understanding it you also get useful predictive power? You say "it certainly never means "find the truth behind" or "discover an irreducible description of", even though you'd think it did mean that the way some people use the term."

I don't want to spend my life working on physics mainly so people can use it for "practical" purposes. If I wanted to do that, I would become an engineer. Mind you, I don't think engineering is a waste of time at all, but I think that in the pursuit of understanding, wisdom and the ontology of science you get the new physics that becomes the foundation of engineering. This is the main reason why I prefer theoretical physics over experimental. Although the more I learn, the more I feel that if I were to pursue a career in theoretical physics I would be seen as a philosopher of metaphysics and not a physicist or scientist. That worries me. Perhaps I can become both a theoretical and experimental physicist. I don't know, but all I do know is that I want to find truth. Is this a bad thing? Or a wrong way of looking at physics and science in general?

For example, in the late 17th century, when Isaac Newton discovered the first force of nature - gravity - he was able to create a whole new field of mechanics that gave rise to the industrial revolution. This new age lifted human society from its primitive ways and relieved countless numbers of people from painstaking poverty through the development of large-scale farming. Not long after, discoveries made by Michael Faraday, James C. Maxwell and Nikola Tesla led to our understanding of the second force - electromagnetism - which made possible the invention and operation of every electrical device ever made, including television, radio, radar, computers and the Internet. Finally, when Albert Einstein discovered that mass could be turned into energy and vice versa, through his famous equation E = mc², it helped unlock the secrets of the final two forces - the strong and weak nuclear forces - thereby allowing us to harness the profound energy of nuclear power and understand the violent processes of the stars in the heavens.

I might be misinterpreting what I've read about these scientists, and Einstein specifically, but I thought they were motivated by the desire to understand the universe, which led them to make these discoveries, and then people discovered the practical use and "predictive power" behind them.

I'm not sure if what you meant by "predictive power" was 'in order to predict how an event will evolve according to the laws of physics (like the big bang and black holes)' or 'in order to predict when the sun will rise every day of the year so I know when to plant crops, and so we can build steam engines and a Global Positioning System with special relativity'. I kind of interpreted it the second way because of the contrast with the word "understanding" in the ontological sense.

If this is just your personal opinion then I suppose we should just agree to disagree but if my thinking is somehow universally wrong I would like to know.
String theory is not really an inductive grouping of phenomena at all-- the grouping of phenomena was done by quantum mechanics and general relativity, in two different groups. String theory tries to unite them in a single group, but there is nothing about the phenomena themselves that is similar; the unification is a kind of shotgun wedding. Granted, it is valuable to unify if possible, but nothing about string theory comes from observations. The number of observers working on string theory is few or none, whereas most areas of physics have observers outnumbering the theorists.

You may be completely correct, but I have no way of knowing, so in that way I suppose it's my fault for posting about something I have such an unclear understanding of. Do you dislike it when people post about physics while having almost no understanding of the math behind it?
It is true that the phrase comes up a lot when discussing "naked singularities", which GR appears to allow but which causes most physicists distress to imagine. But I just mean it in the more general sense of any seemingly unphysical behavior that some theory allows, and how the universe must find some way to avoid having it actually happen, without the theory actually being wrong. So the list includes closed timelike loops, the "many worlds" of unitary quantum mechanics, FTL communication via entanglements, etc.

I don't think you mean this in an anthropomorphic way, so does this mean the theory has a good chance of being wrong in explaining (the understanding part) while being correct in explaining (the "predictive power" part)? It would seem to me that if a theory isn't correct in explaining all phenomena that are observed, it isn't completely, universally, correct. For example, I agree that "Newton explained Kepler's laws with a theory of gravity that wasn't correct. We still use that incorrect theory to explain Kepler's laws, and indeed they still do explain them!" but his theory only explains things like going to the moon or an OK approximation of the motion of the planets (GR is much better), etc.
If that's all you think that physics should do then that is your opinion; it certainly isn't mine.
Or do you mean that the problems of "closed timelike loops, the "many worlds" of unitary quantum mechanics, FTL communication via entanglements, etc." are physically impossible or physically impossible to observe? In which case the theories are never (universally) wrong. (seems unlikely)
 
Last edited:
  • #39
I think you might be misunderstanding me. What I have said is not opinion, it is just the facts about what physics has always been. Physics has always been a search for effective truth, not for actual truth. Physics has no idea what an actual truth even looks like, but it knows a lot about what an effective truth is-- its history is rife with them. So you needn't worry that a theoretical physicist is doing metaphysics unless they imagine they are seeking absolute or ultimate truth. If they recognize what they are actually doing, seeking usable, meaningful, approximate, idealized, effective truths, just like their forebears were doing, then they can be completely confident that what they are doing is physics, not metaphysics. Newton's revolution in physics is a perfect example-- he changed the course of physics, yet not a single one of the theories he introduced is correct. Does this make him a failure? No. Does this mean he didn't understand gravity or motion? No, he understood them very well-- but he wasn't right about them. See the difference?
 
  • #40
Sorry, I was incredibly unclear about what I meant by "ultimate truth". I define it as a view that links the mechanisms of objectivity to the mechanisms of subjectivity in the closest (possible) harmony. (I hope this definition is clear enough for you to understand my thinking haha but I think it matches what you are saying about effective truth.) These views can only be approximate because they are obscured by our subjective experience (the wavelength of light reacts with chemicals in our brain that give us a way to differentiate things in our environment), and this is the same with physics and the universe. In that way we can never have an absolute truth, but we can understand something very well and have better approximations. I understand, but I don't think it's good to assume how limited the approximations can or will be. Imagine explaining quantum physics to Socrates or even Newton.

After all, we are a mechanism of the universe. I think this quote from Einstein sums it up: "A human being is part of a whole, called by us the Universe, a part limited in time and space. He experiences himself, his thoughts and feelings, as something separated from the rest - a kind of optical delusion of his consciousness. This delusion is a kind of prison for us..."

Perhaps we will evolve in a way that subjective experience and understanding will be a limit approaching the objective absolute truth, without ever reaching it completely.
 
  • #41
Gunner B said:
These views can only be approximate because they are obscured by our subjective experience (the wavelength of light reacts with chemicals in our brain that give us a way to differentiate things in our environment), and this is the same with physics and the universe. In that way we can never have an absolute truth, but we can understand something very well and have better approximations. I understand, but I don't think it's good to assume how limited the approximations can or will be. Imagine explaining quantum physics to Socrates or even Newton.
Yes, that's the point-- what Socrates and Newton viewed as "understanding" was quite different; it was the understanding they were ready for. So it is for us today: we seek the understanding we are ready for. That's very much what I mean by "effective truths." So it is a quest for understanding, certainly, but the understanding we achieve is provisional, approximate, idealized, and useful to us, but not "true." And that's OK, it is supposed to be like that-- we only run into trouble when we imagine that our current understanding is the "ultimate" one, or that the "theory of everything" is just around the corner. What generally happens, instead, is that each new insight opens even more profound mysteries than the ones we had before. The way I like to say that is, science is not about demystification, it is about replacing superficial mysteries with much more profound ones. This is physics, not metaphysics, as long as there is still mystery.
I think this quote from Einstein sums it up: "A human being is part of a whole, called by us the Universe, a part limited in time and space. He experiences himself, his thoughts and feelings, as something separated from the rest - a kind of optical delusion of his consciousness. This delusion is a kind of prison for us..."
Yes, that "prison" is very much what most of my above posts were about as well. But we shouldn't view it as a bug of science, it is a feature-- we can do experiments from the safety of our prison cell, and generate understanding of what is outside (a metaphor I do not mean literally), without needing to pretend that we discover what is outside. What is outside is itself just a useful metaphor.
Perhaps we will evolve in a way that subjective experience and understanding will be a limit approaching the objective absolute truth, without ever reaching it completely.
My point is that there is no such thing as the "objective absolute truth"-- there is the scientific truth, and the metaphysical truth, and other types of truth. The scientific truth is never absolute, and is not even supposed to be. Some people are disappointed by this demonstrable fact about science, but I'd say that's the fun of science right there-- it's not a bunch of stodgy books, it is vibrant and alive and always on the move.
 