http://arxiv.org/abs/0908.1832
Testing Einstein's special relativity with Fermi's short hard gamma-ray burst GRB090510
Authors: Fermi GBM/LAT Collaborations (Submitted on 13 Aug 2009)

Abstract: Gamma-ray bursts (GRBs) are the most powerful explosions in the universe and probe physics under extreme conditions. GRBs divide into two classes, of short and long duration, thought to originate from different types of progenitor systems. The physics of their gamma-ray emission is still poorly known, over 40 years after their discovery, but may be probed by their highest-energy photons. Here we report the first detection of high-energy emission from a short GRB with measured redshift, GRB 090510, using the Fermi Gamma-ray Space Telescope. We detect for the first time a GRB prompt spectrum with a significant deviation from the Band function. This can be interpreted as two distinct spectral components, which challenge the prevailing gamma-ray emission mechanism: synchrotron - synchrotron self-Compton. The detection of a 31 GeV photon during the first second sets the highest lower limit on a GRB outflow Lorentz factor, of >1200, suggesting that the outflows powering short GRBs are at least as highly relativistic as those powering long GRBs. Even more importantly, this photon sets limits on a possible linear energy dependence of the propagation speed of photons (Lorentz-invariance violation) requiring for the first time a quantum-gravity mass scale significantly above the Planck mass.

******************

As I said elsewhere, the violation might be statistical, not a naive effect that a single photon can eliminate.
Great find! The number 1200 in your headline may be somewhat inaccurate, however. See Table 4 on page 23 of the supporting material here: http://gammaray.nsstc.nasa.gov/gbm/grb/GRB090510/supporting_material.pdf And also Table 2 in the main paper, which is essentially the same as the one in the supporting material but gives less explanation. They give several lower bounds for the Mqg/Mplanck ratio, based on different lines of reasoning. None of the estimates says > 1200 (that number is the lower limit on the outflow Lorentz factor). What they call their "most conservative" estimate says > 1.19. Their "least conservative", i.e. most risky, estimate says > 102.
Alright, then please correct the title to "Lorentz violation severely restricted: 1.19 < Mqg/Mplanck < 102". I didn't find this article myself, even though I check the astrophysics listings every day. Someone sent it to LM, and he posted the link on his blog. He used a line from the TV comedy "The Big Bang Theory" to argue that LQG is ruled out by this article. No kidding.
I can't edit other people's posts, but you could PM a request to a Mentor. I would suggest saying Mqg/Mplanck > 1.2. That would be a correct interpretation of their result. It is good to use conservative language in a headline; you can always say later in your post that one possible interpretation of the data leads to a more stringent conclusion, namely Mqg/Mplanck > 102.

Unfortunately no one has yet been able to derive Lorentz violation from the main LQG or spinfoam models (in the 4D case). So this result is very interesting but does not disfavor LQG. There have been both string and LQG papers which suggested there might be some finite Mqg, but even in the string case I know only of suggestion and speculation. So in neither case does anything get falsified. This kind of data from Fermi-LAT is a valuable guide to LQG researchers.

The Fermi mission looks like it is going to make a big contribution to beyond-standard physics and the topics discussed in this forum. In July there was a paper by Doug Finkbeiner interpreting some Fermi-LAT data relating to the possible make-up of dark matter. They are helping to figure out what a possible WIMP could be like. Another great Fermi development.

Under no conditions would you want to say Mqg/Mplanck < 102. They did not show this. It is better to simply say Mqg/Mplanck > 1.2. It could easily be that Mqg is infinite, which is equivalent to saying there is no violation or modification of Lorentz invariance at all (at least to first order). No one has shown an upper bound; we only have lower bounds. The higher you can push them, the more it looks like the ratio is infinite and there is no modification, no dispersion.
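For anyone who wants to see roughly where a number like 1.2 comes from, here is a back-of-the-envelope sketch. The linear-dispersion delay formula, the cosmological parameters, and the ~0.86 s emission-to-arrival window are my own illustrative assumptions chosen to be close to the paper's; this is an order-of-magnitude illustration, not the collaboration's actual analysis.

```python
import math

# Sketch of the linear Lorentz-violation time delay for the 31 GeV
# photon of GRB 090510 (z = 0.903). Assumed, not from the paper:
# H0 = 71 km/s/Mpc, Om = 0.27, OL = 0.73, and a ~0.86 s arrival window.
H0 = 71.0 * 1000 / 3.0857e22   # Hubble constant in s^-1
Om, OL = 0.27, 0.73            # matter / dark-energy density fractions
E_photon = 31.0                # photon energy, GeV
M_planck = 1.22e19             # Planck mass, GeV
z = 0.903                      # redshift of GRB 090510

# Cosmological factor K(z) = integral_0^z (1+x)/sqrt(Om*(1+x)^3 + OL) dx,
# evaluated with a simple midpoint rule
n = 10000
dz = z / n
K = sum((1 + (i + 0.5) * dz) / math.sqrt(Om * (1 + (i + 0.5) * dz) ** 3 + OL)
        for i in range(n)) * dz

# Delay a 31 GeV photon would accumulate if M_QG equaled the Planck mass
dt_planck = (E_photon / M_planck) * K / H0   # comes out near a second

# If the photon cannot have been delayed by more than ~0.86 s, then
# M_QG / M_Planck > dt_planck / 0.86 -- in the ballpark of the
# paper's conservative ~1.2 bound
bound = dt_planck / 0.86
print(f"delay for M_QG = M_Planck: {dt_planck:.2f} s")
print(f"implied lower bound: M_QG/M_Planck > {bound:.2f}")
```

With these rough inputs the implied bound lands a little above the paper's 1.19, as expected for a simplified calculation; the exact value depends on the cosmology and on how conservatively the arrival window is chosen.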
Or it could be that Mqg doesn't make sense at all: whatever delays exist may be a statistical effect of photon/spacetime fluctuations. Try looking for a moving peak as you move to higher energies.
Don't be completely silly, Marcus. Every single model marketed as loop quantum gravity, spinfoam, causal dynamical triangulations, Horava-Lifshitz gravity, and dozens of other names violates the Lorentz symmetry by first-order terms, with a coefficient of order one, and is simply safely dead after this paper. The "only" way the paper may be useful to researchers in LQG or any other field mentioned above is to show them that they have wasted their professional lives, because their whole reasoning was based on a fundamentally wrong assumption, namely a complete denial of Einstein's 1905 theory of relativity. There's no way to revive a hypothesis that has been falsified as cleanly as Fermi has falsified all the discrete models of spacetime at the Planck scale.

You're also completely deluded when you say that there are doubts whether Lorentz symmetry at the Planck scale has to be respected by string theory. It is a fundamental law that holds everywhere in string theory. If you read at least one section of any textbook on string theory, you will see that string theory is first motivated by the Lorentz-invariant Nambu-Goto action - the proper area of the worldsheet - and this Lorentz invariance is preserved by all interactions, objects, and known vacua in string theory. It may at most be spontaneously broken, by the configuration of spacetime (e.g. a B-field), but it surely holds at the fundamental scale.

If you're unable to comprehend that this game and debate about LQG and similar stupidities is simply over, you're just unteachable crackpots.
I'll comment on what I feel qualified to comment on. Models that use causal dynamical triangulations do not suggest Lorentz invariance violations in nature, simply because the continuum limit is always taken; the approach does not suppose that spacetime is discrete. Obviously Horava violates Lorentz. As for spin foams/loops, I'm not sure whether these do or not. They certainly quantize areas and volumes in loops, but I don't think this necessarily means that Lorentz is violated.

Also, calling people silly and stupid because they disagree with you is a bit off. And you go on to lump CDT, loops, spin foams and Horava together, which clearly shows your ignorance of these different approaches.
He didn't say Planck scale; instead, he was thinking about this talk: http://www.ift.uni.wroc.pl/~planckscale/lectures/3-Wednesday/6-Mavromatos.pdf
The question of Lorentz invariance in LQG was already addressed as an "FAQ" by Ashtekar two years ago, in Loop Quantum Gravity: Four Recent Advances and a Dozen Frequently Asked Questions. You may send an email to Ashtekar to notify him that he is an "unteachable crackpot". Or you may reconsider your credibility. Not even to mention the fact that we're talking about one single observation.
Dear Finbar, respecting the Lorentz symmetry takes more than not being "discrete". Even if you take the continuum limit (while working in Minkowski space), the "triangles" in the triangulation inevitably pick a privileged reference frame (another version of an aether!) and therefore break the Lorentz symmetry. Only the continuum limit of lattice-like structures in the Euclidean signature would have a chance to reproduce the Euclidean version of the Lorentz symmetry.

Every theory in which areas are quantized has to violate the Lorentz symmetry at the Planck scale (or the scale of the quanta). This is easy to see by applying a big boost. Almost-null surfaces must have a very small proper area, but whenever the area is calculated as a sum over intersections with anything resembling a spinfoam or spin network, it inevitably comes out as large as that of similarly large (in coordinate space) spacelike surfaces. Moreover, if one computes it from the spinfoam, the areas can never become imaginary, i.e. the formalism cannot distinguish timelike from spacelike areas. The conclusion is that theories with discrete spectra for areas can't possibly respect the Lorentz symmetry.

The violation of the Lorentz symmetry is actually huge at all distance scales, but these people were sticking to a lot of wishful thinking, hoping that the symmetry would only be broken at the Planck scale and would get restored at low energies. Even this very unlikely wishful thinking has now been ruled out, because the Lorentz violation doesn't exist even at the Planck scale.
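For reference, the discrete area spectrum this argument refers to is the standard one of loop quantum gravity (gamma is the Barbero-Immirzi parameter, l_P the Planck length); the boost estimate below is only a schematic sketch of the objection, not a derivation from any specific paper:

```latex
% LQG area spectrum for a surface punctured by spin-network edges
% carrying spins j_i, with a nonzero minimum at j = 1/2:
A \;=\; 8\pi\gamma\,\ell_P^2 \sum_i \sqrt{j_i(j_i+1)}
  \;\;\ge\;\; 4\sqrt{3}\,\pi\gamma\,\ell_P^2 .
% Schematically, a spacelike 2-surface tilted toward the light cone by
% rapidity eta has proper area
A' \;\sim\; \frac{A}{\cosh\eta} \;\longrightarrow\; 0
\quad (\eta \to \infty),
% so a real spectrum bounded below cannot track this continuous
% shrinking -- this is the "big boost" objection sketched above.
```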
For papers showing that loop quantum gravity - and all other non-stringy theories of quantum gravity, for that matter - have to violate the Lorentz symmetry (and contradict the GZK cutoff), see, for example:

http://arxiv.org/abs/gr-qc/0411101
http://prola.aps.org/abstract/PRL/v93/i19/e191301
http://prola.aps.org/abstract/PRD/v67/i8/e083003
http://arxiv.org/abs/hep-th/0501091
http://arxiv.org/abs/hep-th/0605052
http://arxiv.org/abs/gr-qc/0404113

MtD2, your statement about a "statistical violation" is completely meaningless. It doesn't matter that the conclusion stands primarily on one, highest-energy photon, unless there is a risk that the photon didn't come from the burst, which is extremely unlikely. Assuming the photon has something to do with the burst, one can reconstruct the statistical distribution of the times at which such photons should arrive. The probability that the photon would arrive at the observed time, if the journey created delays corresponding to a Planckian Lorentz-violating mass scale, is de facto zero. This is the relevant statistics here, and it shows that, at a very high confidence level, the coefficient of the violating term must be much smaller than the inverse Planck scale.

I do distinguish all the approaches and know all the critical differences between them. But that doesn't change one feature common to all of them: they have been proved wrong, and I think that only stupid people will continue to work on them after this result. Sorry, but this follows from my detailed understanding of physics and of the term stupidity.
I have to admit that Smolin has done a terrible job by claiming Lorentz violation for sure in LQG. You have to admit that publishing LQG "predictions" in Nucl. Phys. B is (to say the least) suspicious. The rest of your references shoots you in the foot: interestingly, two of them are commented on critically in your own first reference. Your manners are quite arrogant, I should say. There is no need to be so aggressive.
Sotiriou, Visser and Weinfurtner, http://arxiv.org/abs/0905.2798:

"we can drive the Lorentz breaking scale arbitrarily high by suitable adjustment [...] Since the ultraviolet dominant part of the Lorentz breaking is sixth order in momenta, it neatly evades all current bounds on Lorentz symmetry breaking."

I wonder whether this comment would still hold. I remember from one of marcus's posts that Visser was soon to give a conference talk titled something like "Who's afraid of Lorentz symmetry breaking?"
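To see why a sixth-order term evades the time-of-flight bounds, one can compare group-velocity corrections for a generic modified dispersion relation (the form and signs here are schematic illustrations, not taken from that paper):

```latex
% Modified dispersion omega^2 = k^2 (1 + (k/M_*)^n):
% n = 1 is the linear case; n = 4 is the "sixth order in momenta"
% case, i.e. a k^6 term in omega^2.
v_g \;=\; \frac{d\omega}{dk}
  \;\simeq\; 1 + \frac{n+1}{2}\left(\frac{k}{M_*}\right)^{\!n}.
% For k ~ 31 GeV and M_* near M_Planck ~ 1.22e19 GeV:
%   n = 1:  (k/M_*)   ~ 3e-18  -> second-scale delays over z ~ 1 (testable)
%   n = 4:  (k/M_*)^4 ~ 1e-70  -> hopelessly unobservable
```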
Dear humanino, all the papers are "blurred" because the whole of loop quantum gravity and all similar theories are ill-defined, vague piles of nonsensical, unphysical formalism, so no calculation based on these theories can ever be trusted about anything, and the authors only state that very fact while giving it a positive spin. But what's important is that there is no candidate calculation based on these theories in which the Lorentz violations would cancel. There can't be any, because these approaches "fundamentally" contradict the Lorentz symmetry at the Planck scale, by their very philosophy. For example, proper areas are taken to be sums of real numbers such as sqrt(j(j+1)), which can't go imaginary, as needed for timelike two-surfaces. For relativity to hold, the areas, whenever they can be defined, must be allowed to be continuous and must be allowed to go imaginary. This is a simple way to see that all possible "revivals" of any of these discrete pictures will fail in the future, too.

Give me a break with the arrogance. I am just alarmed that some people want to dilute this experimental result and its consequences for physics. But physics is all about direct and indirect comparison of observations with theories, and this observation happens to be extremely clean: it settles the question. It proves that people like me have always been right and people around loop quantum gravity have always been wrong, using their poor education, weak intelligence, and lacking intuition to study questions that go well beyond their abilities. The result proves that all sponsors and foundations who have funded theories built on the assumption that Lorentz symmetry has to be broken have wasted their money, and as soon as they care about the empirical data, they should learn a lesson and fire all these people. I will not allow anyone to create fog around this very clear situation.
But if the LQG formalism is ill-defined in the first place, how can it make predictions? If it doesn't make predictions, how can it be falsified by experiment?
I can only say that I am glad not to work with you around. I have never seen so little respect among professionals. As a matter of fact, the people I meet at conferences who disagree about philosophical approaches still tend to be interested in each other's technical constructions and to talk in a civilized manner. That you can even think that being wrong about a theory implies being fired from one's position is beyond all credible discussion. Have you never made a mistake? How can one come up with such lines? Even if you happened to be right about the science, the way you present it makes it hard to swallow.

Let me put the theory aside, since you made it clear you do not want to hear from the (quite significant) part of the LQG community that disagrees with you. You are ready to put all your eggs on a single observation of a single event? How long have you been following historical developments in science?
Dear atty, it may seem remarkable. But while the formalism is not well-defined enough to actually calculate precise numbers, it states enough specific qualitative facts that one can determine the results can't be Lorentz-invariant, even if one can't calculate what the results are. So LQG successfully fails on both counts: it is ill-defined, and despite its being ill-defined, one can show that it is wrong.

It's like a theory of angels pushing the planets from behind to make them orbit around the Sun. It's not good enough to make quantitative predictions, because the behavior of the angels is not determined, but it is specific enough to be proved wrong: observations show that the angels push the planets from the outer side ;-), like Newton's attractive gravitational force.

Of course, the goal in physics is just the opposite. We want theories that allow us to calculate things quantitatively, and we want theories that make correct predictions, not wrong ones. We want theories such as QCD or string theory.

Cheers
LM
I don't have any respect for them because they don't deserve it. Academia and professional science have literally been flooded by low-quality people who justify their existence (and funding) by brainwashing, lies, victimism, and whining. Most of this stuff is paid for by the taxpayer. Science has lost much of its standards and is becoming unworthy of respect as a whole, and I think that is a very bad evolution. So what I want is a return to the standards. People predicting correct things must get advantages, while people predicting wrong things - and people who are generally incompetent - should never be getting the same, regardless of the amount of demagogy and disgusting, pathetic whining like yours. They must be eliminated; otherwise science and mankind will face real trouble soon.

I have always been interested in all the approaches, and I know all of them in more detail than most of those you call "specialists", but that doesn't mean I think it is correct to fill science with zombies, or right for science to be overwhelmed by theories and approaches that have already been falsified. This approach was really falsified in 1905, by Einstein's special relativity, and I think that 104 years of tests that speak such a clear language is a long enough time for people who reject relativity itself to be called crackpots. Although there's no doubt that this is the real situation, many people even on the "correct side" fail to say things that clearly, because what they're really after in science is money, and it is useful for them to team up with the crackpots. Sorry, I find it immoral and I will never join in such behavior.
Oh, so this is how science works. Someone writes down a theory, then we gather evidence that supports that theory for an arbitrary amount of time, say, I dunno, 104 years, and then we conclude that the theory is correct and unquestionable.
Lubos, what about condensed-matter analogue approaches like those of Volovik and Wen? Perhaps the "atoms" of spacetime are discrete, but they give rise, via collective emergent properties, to a superfluid spacetime that appears continuous and Lorentz-invariant to particles (in 4D, SUSY optional).