Lorentz violation severely restricted: M_QG/M_Planck > 1200

  • Thread starter: MTd2
  • Tags: Lorentz
  • #91
lumidek said:
That's what the QCD fathers finally got their Nobel prize for. Before that point, one couldn't say anything sensible, deep, or useful about QCD, which is why no one should have studied it.

There would be no QCD fathers if they thought like that before studying it.
 
  • #92
lumidek said:
In practice, I would bet 999:1 that these Planck-suppressed terms will never be measured. The only way how they could be measured would be to isolate an effect that doesn't exist without these terms at all, but appears as their consequence. I don't think that any such a phenomenon may exist, even in principle, because the higher-derivative terms mix with the lower-derivative terms if one changes the RG scale, so one can't even objectively say what the coefficients of these terms are - they depend on the RG scale. The only exceptions could be higher-derivative terms that violate a conservation law that is "accidentally" satisfied by the leading terms.

I agree with Woodard that quantum gravity has to agree with GR in most limits - in fact, I independently wrote it above. But I completely disagree that it is disappointing in any way.

Lubos,

thank you again for the clear answer. The reason I added "disappointing" is that the situation you describe is now the following: general relativity (with Einstein-Hilbert action) and string theory differ only by terms which cannot be measured in experiments (your 999:1 bet).

It is fun to see, when you google for "string theory" and "deviations from general relativity", that many pages come up. So thank you for stating so clearly that in fact, these deviations are probably not of any "measurable" importance.

Obviously, this distinguishes string theory from all other theories that predict deviations from general relativity, but it does not distinguish string theory from general relativity itself. We thus can confirm string theory only in the particle physics domain, not in the gravitation domain. I think that is a powerful conclusion. Thanks for saying this so clearly!

heinz
 
  • #93
lumidek said:
1. QCD is not only asymptotically "safe": it is asymptotically free (which means that the coupling goes to zero in the UV, instead of a finite constant, as in asymptotically safe theories). And it is a great example of yours showing what I meant.


Yes, QCD is asymptotically safe; asymptotic freedom is the special case of asymptotic safety where the fixed point is Gaussian, as I said. But note that Newton's constant would be asymptotically free in asymptotically safe gravity, since it has a negative mass dimension.

lumidek said:
QCD only became a sensible theory worth studying when the people understood why it was asymptotically free - the negative beta-function. That's what the QCD fathers finally got their Nobel prize for. Before that point, one couldn't say anything sensible, deep, or useful about QCD, which is why no one should have studied it.

But by that logic the people who found the negative beta-function shouldn't have been studying it. People study the RG flow of gravity to see whether the beta-functions of the gravitational couplings have fixed points. Evidence has been found for such a fixed point, so it also makes sense to look at the physical implications.
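To make the "asymptotically free Newton's constant" point concrete, here is a minimal numerical sketch, assuming a schematic logistic beta function for the dimensionless coupling g = G k^2 with an illustrative coefficient omega (a toy model, not a computed gravity result):

```python
# Toy illustration (not a real gravity computation): flow of the dimensionless
# Newton coupling g(k) = G(k) k^2 with a schematic beta function
#   dg/d(ln k) = 2*g - omega*g**2 ,
# which has a Gaussian fixed point g = 0 and a non-Gaussian one g* = 2/omega.
# "omega" is an illustrative placeholder, not a value derived from gravity.

import numpy as np

omega = 1.0           # placeholder coefficient
g_star = 2.0 / omega  # non-Gaussian UV fixed point
g0 = 0.1              # starting value of the dimensionless coupling

def flow(g, t_max=20.0, dt=1e-3):
    """Integrate dg/dt = 2g - omega*g^2 with t = ln(k/k0), simple Euler steps."""
    t = 0.0
    while t < t_max:
        g += dt * (2.0 * g - omega * g**2)
        t += dt
    return g

g_uv = flow(g0)                            # dimensionless coupling runs toward g*
k_ratio = np.exp(20.0)                     # k/k0 after the flow
G_ratio = (g_uv / g0) / k_ratio**2         # dimensionful G(k)/G(k0) = (g/g0)*(k0/k)^2

print(f"g(k) -> {g_uv:.3f}   (fixed point g* = {g_star:.3f})")
print(f"G(k)/G(k0) ~ {G_ratio:.2e}   (the dimensionful coupling goes to zero in the UV)")
```

The only point of the toy flow is that a finite UV fixed point for the dimensionless coupling g = G k^2 automatically sends the dimensionful G(k) to zero like 1/k^2, which is the sense in which Newton's constant would be "asymptotically free" in an asymptotically safe scenario.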



I was thinking of fields on black hole spacetimes in the entropy discussion. Sorry. Can you give me a reference for the calculation that the entropy goes as T^d in a full QFT? I was looking at Susskind's book (Black Holes, Information and the String Theory Revolution), where he notes the relationship S ~ V T^3 for a free scalar field. He then goes on to show that this implies the entropy diverges near the horizon. This, though, is only a semi-classical calculation. Obviously a full theory of quantum gravity should solve this problem. It also doesn't seem unreasonable that within asymptotic safety this problem could be solved: the coupling of gravity to matter fields is asymptotically free, so effects near the horizon should be reduced.
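For reference, here is the standard free-field estimate behind Susskind's statement, in a minimal sketch (textbook blackbody thermodynamics plus the Tolman temperature near the horizon; nothing here is specific to asymptotic safety):

```latex
% Free massless scalar gas in volume V at temperature T (natural units):
%   F = -(pi^2/90) V T^4  =>  S = -dF/dT = (2 pi^2/45) V T^3 .
% Near the horizon the locally measured (Tolman) temperature blows up, so the
% local entropy density s ~ T_loc^3 diverges unless a short-distance
% ("brick wall") cutoff is introduced.
\begin{equation}
  S \;=\; \frac{2\pi^{2}}{45}\, V T^{3},
  \qquad
  T_{\mathrm{loc}}(r) \;=\; \frac{T_H}{\sqrt{-g_{tt}(r)}}
  \;\xrightarrow[\;r\to r_H\;]{}\; \infty,
  \qquad
  s(r) \;\sim\; T_{\mathrm{loc}}^{3}(r).
\end{equation}
```

The divergence of the local entropy density as r approaches the horizon is exactly the divergence that the brick-wall cutoff is introduced to regulate in the semi-classical calculation.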
 
  • #94
lumidek said:
a discussion of this paper by Thiemann involves some technicalities which are not terribly interesting. (...)

Thanks for your clarifications (although I was expecting a more detailed elaboration; but never mind). This is certainly something I should work through myself.

lumidek said:
Do you really think that people like me should be wasting time with obscure LQG papers a week after the event that has falsified LQG and all similar research programs?

Evidently, every one is free to do whatever one desires, I'm not here to try to convince anyone otherwise.

Also, there is nothing particularly wrong in establishing one's own standard against which a given theory is found not to deserve further investment of one's energy and time, so I have nothing to criticize in your thinking that LQG is a waste of your time. However, I do not agree with concluding that it has been "falsified, period". I think there is still a long way to go to ascertain the situation. This is quite normal in science. I am generally as skeptical as science requires, and I think it is healthy to stay that way. Evidently there is a limit to that, and the limit is often not as clear as one would like. The situation in quantum gravity is exactly like that.

You should realize that your standard is not necessarily in agreement with other people's, not because other people are stupid and you are a genius, but just because in the present case it is clear that there are still subtleties in the LQG formalism (I believe you would agree with this?). These are open for debate, and people are interested in investigating them further. This is not a big issue (although you do often make a big issue of it). You are free to put an end to your own curiosity about LQG, by your own standards. But this does not mean that you are 100% correct. What is needed are clear-cut predictions and clear-cut observations/experiments. LQG is not at that point yet; the Fermi data lead to some interesting (possible) constraints that still need to be established with more data, more understanding of the source, and a clear bound on the emission time of the highest-energy photon.

A similar situation concerns your preferred approach, string theory, which is often claimed to be under construction. One needs clear-cut predictions of the theory in order to falsify it.

Thanks.
Christine
 
  • #95
Hi Lubos,
as Christine and others pointed out, we are talking about one photon. I don't think it is appropriate to say that some theory has been falsified by measuring one photon. Never. Of course this does not change anything in all the arguments exchanged in this very interesting thread. You can always add the small if-clause "if the result is confirmed then..." and then we can put a probability on that. But by completely dropping it, I think you make your position attackable at a point where it is not necessary. Because whether this is confirmed or not, we can just wait and see. There is no point putting energy into this if you are a theorist.
 
  • #96
Lubos:

While I am myself unsure of the merits of LQG, you make some fundamentally flawed claims concerning discrete spacetime. Small-scale physics that breaks some laws deduced from macroscopic observations is perfectly capable of reproducing those macroscopic properties in the large-scale limit. As an elementary example (sketched below), just think of wave propagation on a mass-spring chain, which is dispersive but reproduces continuum wave propagation for wavelengths much larger than the grid spacing.
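Here is a minimal numerical sketch of that example (the standard monatomic mass-spring chain; K, m and a below are arbitrary illustrative values):

```python
# Dispersion relation of a 1D mass-spring chain with spring constant K, mass m,
# lattice spacing a:  omega(k) = 2*sqrt(K/m)*|sin(k*a/2)|.
# For k*a << 1 this reduces to the continuum relation omega = c*k with
# c = a*sqrt(K/m): the discrete microphysics reproduces linear ("Lorentz-like")
# dispersion at long wavelengths, with corrections of order (k*a)^2.

import numpy as np

K, m, a = 1.0, 1.0, 1.0          # arbitrary illustrative parameters
c = a * np.sqrt(K / m)           # continuum wave speed

for ka in [1.0, 0.3, 0.1, 0.01]:
    k = ka / a
    omega_lattice = 2.0 * np.sqrt(K / m) * abs(np.sin(k * a / 2.0))
    omega_continuum = c * k
    print(f"k*a = {ka:5.2f}   omega_lattice/omega_continuum = "
          f"{omega_lattice / omega_continuum:.6f}")
```

The ratio approaches 1 as k*a goes to zero, i.e. the phase velocity deviates from c only by terms of order (k*a)^2, which is exactly the kind of suppressed deviation the discreteness arguments in this thread are about.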

My impression of you and your like is that you are terrified of the possibility that all these fancy theories you have invested your life in will be falsified by some future experiment. To claim victory over one photon merely shows desperation. But even if these dispersion predictions are falsified, at least that's more than string theory can lay claim to.

Since you seem to know so well, do you dare to make any predictions that might be tested at the LHC? If supersymmetric particles are not found, will that mean anything for your position on string theory?
 
  • #97
Eelco said:
will be falsified by some future experiment.

Not really, just the string theory models he likes best. The one by Mavromatos gives an average distribution for the speed of light, not of a fundamental nature, and could possibly fit delayed photons.
 
  • #98
You know MTd2, Lubos admitted that he is in a political fight against LQG. I am not sure what the point is of continuing any scientific argumentation for or against here; we merely have lobbying activity, and it is against PF rules. So I am not sure how much more will be necessary and whether it is worth pushing in this direction, for instance Lubos made several references to "God" which should be enough for moderation of a "regular" member.
 
  • #99
humanino said:
for instance Lubos made several references to "God" which should be enough for moderation of a "regular" member.

That statement is completely ridiculous and you know it. The reference to God was clearly not made in a scientific sense.
 
  • #100
I would like to present the relevance from a slightly different viewpoint.

There are two basic options:
a) LQG predicts dispersion of the speed of light. Then it has been falsified.

b) It doesn't predict that dispersion. Then it doesn't predict anything measurable, AFAIK. That's contrary to one of its declared main purposes: to sacrifice the "ambition" of being a unified theory in exchange for predictive power.

Anyway this result is very bad for LQG.

Let's go through some of the subtleties. Some people claim that LQG in fact doesn't predict those dispersions. Lubos, on the contrary, gives a general argument about the lack of imaginary values for areas in LQG (something shared by all the approaches to LQG: canonical, spin foams, CDT, if I am not wrong), implying those dispersions whether the LQG people agree or not. Well, I would like to see if Lubos has a reference to an actual paper where that argument is elaborated in detail.

The other subtlety I see is that some people claim that, as an experimental result, the conclusions are not absolutely settled, because it is argued that one needs results for ensembles of photons, since the dispersion is an average effect. About this particular point, Lubos says that the natural thing would be to do statistics on the number of collisions of the photon with the "atoms of spacetime" (in the Markopoulou justification of the phenomenon). From this viewpoint, a single photon will have by far enough collisions to give a good statistical average, as far as I see (I think that is the essence of Lubos's argument).
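A crude order-of-magnitude check of that "many collisions per photon" picture (a heuristic sketch only, not a statement of any particular LQG calculation; the distance below is a rough round number for a burst at z ~ 0.9):

```python
# Heuristic order-of-magnitude estimate, NOT a result of any specific LQG model:
# number of Planck-length "steps" a photon takes crossing a cosmological distance,
# and the naive 1/sqrt(N) relative fluctuation of an accumulated random effect.

l_planck = 1.6e-35   # Planck length in metres
D = 1.0e26           # rough light-travel distance for z ~ 0.9, in metres (illustrative)

N = D / l_planck     # number of Planck-scale steps along the path
rel_fluct = N ** -0.5

print(f"N ~ {N:.1e} Planck-length steps")
print(f"1/sqrt(N) ~ {rel_fluct:.1e}")
```

With N of order 10^60, a single photon already averages over an enormous number of putative Planck-scale events, which is the sense in which one photon can be statistically meaningful in this picture.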

About the question of string theory predictions, surely Lubos can give more details (and correct me if I make some mistake in what I say about F-theory). I can point to the F-theory GUT scenarios. At Strings 2009, Vafa gave a brief account of results stating two clear predictions. One was that there were no WIMP candidates for dark matter. That implied that the apparent excess of positrons observed by ATIC and PAMELA was false. Curiously, FERMI/GLAST could also test this.

For the LHC, some particular particle is predicted (I don't remember the name just now), with a very clear trace when leaving the detectors.

If those results are found, that particular approach, the F-theory GUTs, will be clearly favoured. If not, that particular approach to string theory (a very good one, which reproduces all the characteristics of the standard model) will have been falsified. Still, it is possible that other phenomenological models based on string theory could be shown to be valid.
 
  • #101
Sauron said:
Let's go through some of the subtleties. Some people claim that LQG in fact doesn't predict those dispersions. Lubos, on the contrary, gives a general argument about the lack of imaginary values for areas in LQG (something shared by all the approaches to LQG: canonical, spin foams, CDT, if I am not wrong), implying those dispersions whether the LQG people agree or not. Well, I would like to see if Lubos has a reference to an actual paper where that argument is elaborated in detail.

Nice post. I just want to know more about this lack of imaginary areas. Does it stem from the Hamiltonian approach of LQG, whereby spacetime is split as 3+1 so that areas can only be real, i.e. spatial? I can see how this could also crop up in CDT, as there they seem to give time a preferred direction. As I see it, singling out time could well be the downfall of these theories if it makes them break Lorentz invariance physically. On the other hand, it could be that this singling out of time is no more than a gauge-fixing procedure: for example, gauge-fixing a Lagrangian in the path-integral approach breaks manifest Lorentz invariance in the Lagrangian, but the theory still gives the correct gauge-independent results (a standard example is sketched below).
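For concreteness, the familiar example from electromagnetism (textbook Coulomb-gauge QED, quoted here only as an analogy):

```latex
% Coulomb gauge in QED: the gauge condition singles out a frame (a "3+1 split"),
% so the gauge-fixed propagator is not manifestly Lorentz covariant,
\begin{equation}
  \nabla\!\cdot\!\mathbf{A} = 0,
  \qquad
  D_{00}(\mathbf{k}) = \frac{1}{\mathbf{k}^{2}}
  \quad\text{(instantaneous Coulomb piece)},
\end{equation}
% yet gauge-invariant quantities (S-matrix elements, cross sections) come out
% exactly Lorentz invariant.
```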

Clearly a lack of imaginary areas looks like a serious restriction on the set of metrics included in a path-integral approach like CDT. Perhaps this restriction is too severe; on the other hand, some restriction is needed so that double counting doesn't occur.

In my opinion, if your starting principles are general relativity and quantum mechanics and you end up with a theory that breaks local Lorentz invariance, you haven't applied those principles. If that is so, then you should either restate your guiding principles, change your approach so that you retain Lorentz invariance, or give up on the theory altogether. I must say that the first option seems the least appealing theoretically, but experimentally it obviously leads to predictions.
 
  • #102
Sauron said:
...
Anyway this result is very bad for LQG.
...

I don't understand your reasoning, Sauron. LQG researchers tried for some years to derive a prediction of dispersion, but could not make the 4D theory yield such a prediction.
This observation makes dispersion less likely. If it is borne out by other similar observations then this will help guide their development of LQG and save them trouble.

It certainly does not falsify the approach :biggrin: since there was no prediction that actually derived from the theory. I see this kind of Fermi-LAT observation as stimulating for LQG and the other QG approaches.

The task of deriving predictions still remains, and various avenues are being explored. But that is a separate topic. All this observational result does is give more direction and focus to the effort. Or? Please explain if you see it differently.
 
  • #103
marcus said:
I don't understand your reasoning, Sauron. LQG researchers tried for some years to derive a prediction of dispersion, but could not make the 4D theory yield such a prediction.
This observation makes dispersion less likely. If it is borne out by other similar observations then this will help guide their development of LQG and save them trouble.

It certainly does not falsify the approach :biggrin: since there was no prediction that actually derived from the theory. I see this kind of Fermi-LAT observation as stimulating for LQG and the other QG approaches.

The task of deriving predictions still remains, and various avenues are being explored. But that is a separate topic. All this observational result does is give more direction and focus to the effort. Or? Please explain if you see it differently.

What's your take on Henson's http://arxiv.org/abs/0901.4009 ? He claims spin foams violate Lorentz invariance, specifically photon dispersion tests.
 
  • #104
marcus said:
I don't understand your reasoning, Sauron. LQG researchers tried for some years to derive a prediction of dispersion, but could not make the 4D theory yield such a prediction.
This observation makes dispersion less likely. If it is borne out by other similar observations then this will help guide their development of LQG and save them trouble.

It certainly does not falsify the approach :biggrin: since there was no prediction that actually derived from the theory. I see this kind of Fermi-LAT observation as stimulating for LQG and the other QG approaches.

The task of deriving predictions still remains, and various avenues are being explored. But that is a separate topic. All this observational result does is give more direction and focus to the effort. Or? Please explain if you see it differently.

I must have been dreaming when I read Smolin's book. Lubos Motl concludes that LQG is dead. Where is your voice, Marcus?
 
  • #105
atyy said:
What's your take on Henson's http://arxiv.org/abs/0901.4009 ? He claims spin foams violate Lorentz invariance, specifically photon dispersion tests.

I guess you know Joe Henson is not an LQG researcher. He is a postdoc currently at Perimeter who has done almost all his work in Causal Sets. He indicates that Carlo Rovelli and Daniele Oriti (more experienced representatives of mainstream LQG) had serious objections to the paper. The paper is iffy and handwaving. It says if such and such, then maybe so and so. Ultimately it doesn't derive a hard prediction.

You asked my take. Well, in essence that paper seems to have been Joe's contribution to one of the parallel sessions at the Potsdam Loops 2005 conference,
http://loops05.aei.mpg.de/index_files/abstract_henson.html
I am not sure why it didn't get published earlier. The preprint is January 2009. Before spending a lot of time on it, I would wait to see how it fares in peer review and the normal publication channels. If he thinks the idea is good maybe he will follow it up with something less tentative.
 
  • #106
Micha said:
... Lubos Motl concludes that LQG is dead. Where is your voice, Marcus?

Hi Micha. I have never found L especially reliable on the facts or successful at anticipating the future course of research. I really have no business getting involved in this thread. It is an exciting lively discussion and everybody is doing a great job. I think I will try to keep "my voice" out of it (unless it gets too tempting to resist) but thanks so much for asking! Maybe I will start a quieter thread trying to anticipate how QG research is likely to go, emphasis-wise, over the next 6 months or so. Then if I make predictions, and they turn out wrong, anybody who bothers to read can laugh at me.
 
  • #107
It is interesting that a paper (cited in the OP) suggesting that there is no frequency-dependent photon dispersion (to some constraint) rests on the capture of one high-energy photon in one observation. That's not really good science, regardless of the number of names and sponsoring agencies on the paper. That signal could have been unrelated to the GRB in question.

Astronomy is an observational science, and if it is going to be used to test and constrain cosmology (a really good idea, IMO), people have to take a breath and wait for trends in repeatable observations to lead the way.
 
  • #108
atyy said:
I agree that the major question about asymptotic safety is its existence. But suppose the UV fixed point for gravity exists, and the critical surface is finite dimensional - will that be enough to make predictions, or will there still be a problem coming in from electroweak theory not having a continuum limit (ie. can asymptotically safe gravity and electroweak theory be combined at Planck scale energies)?
Dear atyy, this is a whole research project of yours. Great questions. ;-)

But there are no clear-cut answers known to these questions. It is not known whether the UV critical surface is finite-dimensional even without the electroweak stuff added. So of course, it's also unknown whether it would remain finite-dimensional with the non-gravitational forces included. And it is unclear how the Landau poles would be treated. At any rate, it's clear that one cannot "neglect" the non-gravitational forces at the Planck scale, because they're actually *guaranteed* to be stronger and more important than gravity, so one would be neglecting the bulk of the forces; see

http://arxiv.org/abs/hep-th/0601001

Also, it is not known how to actually extract predictions from asymptotically safe theories, although it should be possible: no systematic procedure is known. If it were known, it would also have to be possible to prove the finiteness of the UV critical surface. And so on. So asymptotic safety's only realistic goal at this moment is to defend an idea, not to predict or explain things beyond this idea, which I find too little, too late.
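As a reminder of what the Landau-pole worry refers to, here is the standard one-loop estimate for an abelian coupling (with illustrative inputs; nothing below is specific to the asymptotic-safety literature):

```python
# One-loop running of the QED coupling with a single charged fermion, the
# textbook example of a Landau pole:
#   d(1/alpha)/d(ln mu) = -2/(3*pi)
#   => 1/alpha(mu) = 1/alpha(mu0) - (2/(3*pi)) * ln(mu/mu0),
# which hits zero (the coupling blows up) at a finite but enormous scale.
# Inputs are illustrative; with the full Standard Model particle content the
# pole sits much lower, though still above the Planck scale.

import math

alpha_0 = 1.0 / 137.0   # fine-structure constant at low energy (illustrative input)
mu_0 = 0.511e-3         # reference scale in GeV (electron mass)

ln_ratio_pole = (3.0 * math.pi / 2.0) / alpha_0   # ln(mu_pole/mu_0) where 1/alpha -> 0
mu_pole = mu_0 * math.exp(ln_ratio_pole)

print(f"ln(mu_pole/mu_0) ~ {ln_ratio_pole:.0f}")
print(f"mu_pole ~ {mu_pole:.2e} GeV   (Planck scale ~ 1.2e19 GeV)")
```

The point is only that an abelian coupling with a positive beta function formally diverges at a finite scale, and it is this kind of behaviour that would have to be reconciled with a gravitational UV fixed point in any "asymptotically safe everything" scenario.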
 
  • #109
Micha said:
Hi Lubos,
as Christine and others pointed out, we are talking about one photon. I don't think it is appropriate to say that some theory has been falsified by measuring one photon. Never. Of course this does not change anything in all the arguments exchanged in this very interesting thread. You can always add the small if-clause "if the result is confirmed then..." and then we can put a probability on that. But by completely dropping it, I think you make your position attackable at a point where it is not necessary. Because whether this is confirmed or not, we can just wait and see. There is no point putting energy into this if you are a theorist.
Dear Micha, what you write is just nonsense.

Strict and careful analysis implies that it doesn't matter whether a theory is falsified by one photon or 2009 photons. What matters is the confidence level. There is no rule in science that one needs at least two particles to falsify a hypothesis, and there can't be one. Such a rule would be completely arbitrary. And the confidence level that the photon couldn't have been delayed/sped up by minutes is well above 99%: just look at how these things are argued in the paper. The last line of page 16 explains, for example, what conclusion holds at the 99% confidence level:

http://arxiv.org/ftp/arxiv/papers/0908/0908.1832.pdf

It's the inequality with 1.22 times the Planck mass. If one looks at somewhat lower confidence levels, one gets to those 100 times the Planck mass. A 99% confidence level is higher than the confidence level declared by the IPCC that the 20th-century warming was mostly man-made - yet the latter statement is often said to be "settled" (and even I tend to agree that this particular IPCC statement is true, except that I don't think that 0.4 deg C per century - a majority of the observed warming - is worrisome in any way).

I am not "dropping" any confidence levels. I am just saying that they are not simple functions of the number of photons and that, even with the observations they made, they're so high that the question is de facto settled, especially if the LQG people were really predicting between minutes and weeks (!) of lags - this is surely not above one second and probably not even above 10 ms. (And the actual prediction of Lorentz-breaking theories is that the speed of light is completely arbitrary!) The question will never be "quite" settled, at 100%, because this is impossible in the real world. One can only get "closer" to 100%. However, if you want, I am ready to make a bet that future photons from similar events will just confirm the same thing: no lag. Ready to make a bet? USD 1,000?

Before you decide, let me say that there was also a 3.4 GeV photon about 0.2 seconds from the 31 GeV photon. With 99% confidence, more than 90% of the future photons will arrive within the same 2-second window during all future measurements. That's because the theory - relativity - predicting this statement has been supported by a 99% CL evidence. Forget all crazy comments (in the media, and not only media) about a 4-minute delay produced in MAGIC etc. The delay accumulated by a photon crossing the whole visible Universe can't exceed 2 seconds, at 99.9% confidence level. That's what this Fermi observation shows.
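For orientation, the standard back-of-the-envelope estimate behind these numbers (a naive linear-dispersion formula that drops the proper cosmological redshift integral; the distance below is only an illustrative round number for a burst at z ~ 0.9):

```python
# Naive linear Lorentz-violating time lag for a photon of energy E travelling a
# distance D, with dispersion scale M_QG:  delta_t ~ (E / M_QG) * (D / c).
# The cosmological redshift integral is dropped, so treat this as an
# order-of-magnitude estimate only.

c = 3.0e8            # speed of light, m/s
E = 31.0             # photon energy in GeV (the 31 GeV photon discussed above)
M_planck = 1.22e19   # Planck mass in GeV
D = 1.0e26           # illustrative light-travel distance in metres for z ~ 0.9

for M_QG in [M_planck, 100 * M_planck]:
    delta_t = (E / M_QG) * (D / c)
    print(f"M_QG = {M_QG / M_planck:6.0f} * M_Planck  ->  delta_t ~ {delta_t:.3f} s")
```

With these rough inputs the lag comes out below a second for M_QG near the Planck mass and at the 10 ms level for M_QG around 100 Planck masses, consistent with the figures quoted from the Fermi paper above.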

I actually don't believe that you believe otherwise. It would be downright preposterous.
 
  • #110
MTd2 said:
There would be no QCD fathers if they thought like that before studying it.
You're completely wrong about the history, too. Read, for example, 25 years of asymptotic freedom by Gross

http://arxiv.org/abs/hep-th/9809060

It contains a detailed section about his path to asymptotic freedom.

It is very clear that he - and others - never studied theories that they thought were wrong (and not even theories that had no good circumstantial evidence of being right). So instead of a non-existent QCD (or QCD without a proof of the right behavior), he focused on sum rules and good no-go theorems (about positive beta functions), which were then shown to have a loophole, and that is what really created QCD. But there was no QCD before this discovery. There had been Yang-Mills theory - for decades - but its relevance for the strong interactions couldn't have been understood.

The QCD discovery *is* the discovery of its asymptotic freedom, so there are no fathers of QCD (I mean the theory primarily with gluons) before the discovery of asymptotic freedom. The closest people may be Yang and Mills but saying that Yang-Mills theory would have been relevant for the strong force would have been pure and vacuous wishful thinking before some evidence was known - and the negative beta-function was the first evidence.
 
  • #111
humanino said:
You know MTd2, Lubos admitted that he is on a political fight against LQG. I am not sure what is the point to continue any scientific argumentation for or against here, we merely have lobbying activity and it is against PF rules. So I am not sure how much will be necessary and whether it is worth pushing in this direction, for instance Lubos made several references to "God" which should be enough for moderation of a "regular" member.
I didn't write that I am in a political war with LQG. I am in a political war with some people behind it, like Smolin, who want to destroy science as we've known it for centuries. It just happens that they also defend LQG - but it's not a coincidence that the people who defend unscientific methods to determine the truth also end up with unscientific theories.

"My" well-known signature about God who wrote the world in the language of mathematics is due to Galileo, and very similar comments came from Einstein as well as the majority of famous physicists, too. Feel free to suggest censoring but it shows something about you, not about Galileo, Einstein, or me, for that matter.
 
  • #112
marcus said:
I don't understand your reasoning, Sauron. LQG researchers tried for some years to derive a prediction of dispersion, but could not make the 4D theory yield such a prediction.
This observation makes dispersion less likely.

That's very interesting, Marcus. And what about e.g. these papers

http://arxiv.org/abs/gr-qc/9809038
http://arxiv.org/abs/hep-th/0108061
http://arxiv.org/abs/gr-qc/0411101
http://arxiv.org/abs/gr-qc/0403053
http://arxiv.org/abs/hep-th/0603002
http://arxiv.org/abs/hep-th/0111176
http://arxiv.org/abs/hep-th/0208193
http://arxiv.org/abs/hep-th/0501116
http://arxiv.org/abs/gr-qc/0207030
http://arxiv.org/abs/gr-qc/0207031
http://arxiv.org/abs/gr-qc/0207085
http://arxiv.org/abs/hep-th/0501091
http://arxiv.org/abs/hep-th/0605052
http://arxiv.org/abs/gr-qc/0404113

and hundreds of papers that cite them and almost unanimously conclude that LQG predicts an energy-dependent speed of light? What about all those loud LQG people who were so proudly saying that they had a "prediction"? What about the basic physics knowledge that makes it completely obvious that every discrete model of space - with discrete real spectra - is inevitably incompatible with Lorentz symmetry, because Lorentz symmetry implies that these spectra are both continuous and allowed to be imaginary?
If it is borne out by other similar observations then this will help guide their development of LQG and save them trouble.

It certainly does not falsify the approach :biggrin: since there was no prediction that actually derived from the theory. I see this kind of Fermi-LAT observation as stimulating for LQG and the other QG approaches.

The task of deriving predictions still remains, and various avenues are being explored. But that is a separate topic. All this observational result does is give more direction and focus to the effort. Or? Please explain if you see it differently.

I don't think that you can be both mentally healthy and have elementary human honesty at the same moment if you're able to write these things. This story is over.
 
  • #113
lumidek said:
I am in a political war with some people behind it, like Smolin, who want to destroy science as we've known it for centuries. It just happens that they also defend LQG - but it's not a coincidence that the people who defend unscientific methods to determine the truth also end up with unscientific theories.

In what way does Smolin want to destroy science? We all know that Smolin is no longer as keen on string theory as he maybe was a long time ago, but assuming you're not equating "science in physics" with string theory, can you be a little bit more specific?

Smolin seeks a little more diversity, and suggests we should not put all our eggs in one basket. Never ever have I read him say that string theory should not be researched. Smolin is quite open-minded, in contrast to some other people who appear very intolerant of differently thinking minds, for no convincing reason.

I don't see how trying to actively suppress variety is good scientific method. The rational choice would be to allocate resources according to probable potential. Of course, each scientist may (without contradiction) rate differently which programs are more probable. That's why there is diversity in the community.

So is the politics what you have in mind? Or are you talking about Smolin's rejection of eternal timeless laws?

If I'm not mistaken, you're a hired professional, right? Why would other programs threaten you? If you are right and everyone else is wrong, given time you will be the hero. Why not, for the sake of healthy diversity, give your opponents a break?

I don't think anyone here suggests you should drop doing string theory. You obviously burn for it, so go for it with all you've got.

Also, I don't like the main LQG program, for other reasons. But fortunately LQG and string theory aren't the only options either. And if there is no existing program, no intolerance should ban me from exploring one.

You seem to lump all "non-string approaches" together and seem to think that the string framework is certainly true and unquestionable, and that whatever comes next will fit into your string world.

This is the intolerance I react to. I don't see Smolin displaying even a fraction of such intolerance.

Can you enlighten me as to what is so horrible about Mr Smolin? Also, as far as I know, Smolin isn't into pure LQG; he has been elaborating a lot of ideas, including the idea of evolving laws, which really doesn't fit into the LQG framework. I find Rovelli's and Smolin's reasoning to be in strong contrast on several important points.

/Fredrik
 
  • #114
Certainly the finding in this thread would - given that the confidence can be established - constrain only certain linear LIV models (which appear to be a small subset of possible "non-stringy approaches"), which is nothing that worries me anyway, and it is no convincing argument for string theory.

/Fredrik
 
  • #115
lumidek said:
That's very interesting, Marcus. And what about e.g. these papers

That's because you call "LQG papers" papers that are actually about non-LQG quantum gravity, or ones that failed to see Lorentz-breaking effects in LQG! :)
 
  • #116
lumidek said:
It is very clear that he - and others - never studied theories that they thought were wrong (and not even theories that had no good circumstantial evidence to be right).
As opposed to people who study theories that they think are wrong? This is not the issue. People study new things because they don't yet know their final form. This is the same situation with gravitational asymptotic safety.
 
  • #117
Hans de Vries said:
Now, while agreeing with you, how would you explain that your favorite theory doesn't exhibit the same problem? Why doesn't the photon propagator become "fuzzy" with all these complicated geometry paths at the Planck scale?

The (not so well known) "photon self-propagator", which has the photon field itself as a source rather than the current j, does a wonderful job of canceling out the contributions of all paths other than the light-cone path (see sect. 1.19 of: http://physics-quest.org/Book_Chapter_EM_basic.pdf), but it needs a flat geometry at the Planck scale.

Regards, Hans

lumidek said:
Because all the interactions that govern physics in the 10/11-dimensional space are Lorentz-invariant, i.e. invariant under SO(3,1), all effective actions encoding how the subtle massive and wound objects will influence photons in 3+1D will be Lorentz-invariant, too. So there's never any predicted dispersion or anything like that. All the fundamental laws of string theory are exactly Lorentz-invariant, which means that all the effective laws obtained from them - and all predictions - will be Lorentz-invariant, too. Lorentz invariance implies that the speed of light is a universal constant.

Hi, Lubos

I see you went into this issue by answering ensabah6's post. Thank you for doing so.

lumidek said:
Lorentz invariance implies that the speed of light is a universal constant.

Nevertheless, the propagation speed of massless particles, in this case the photon, is often not equal to c even if the theory is Lorentz invariant, the best-known example being flat spaces with an even number of spatial dimensions, which have photon Green functions that are non-zero inside the light cone (Figure 1 of http://physics-quest.org/Higher_dimensional_EM_radiation.pdf).
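For reference, the textbook form of that effect in the simplest even-spatial-dimension case (2+1 dimensions), quoted here only to illustrate the point:

```latex
% Retarded Green's function of the massless wave equation in 2+1 dimensions:
\begin{equation}
  G_{\mathrm{ret}}(t,\mathbf{r}) \;=\; \frac{c}{2\pi}\,
  \frac{\theta\!\left(ct - r\right)}{\sqrt{c^{2}t^{2} - r^{2}}},
\end{equation}
% which is non-zero everywhere *inside* the forward light cone (ct > r), not just
% on it: a sharp pulse develops a tail, even though the theory is fully Lorentz
% invariant. In 3+1 dimensions the support collapses onto the light cone itself.
```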

Gravitational lensing is another case where you get a dispersion of the photon propagator.

So it doesn't seem that easy for string theory to escape dispersion due to its complicated geometry paths at the Planck scale (except for the trivial brane-world case, of course).

Do you want to say that you expect the holonomy requirements which lead to CY and G2 manifolds to be responsible for the cancellation of the non-light-cone paths? And if so, do you have any interesting links?

Regards, Hans
 
  • #118
A counterpart to Lubos' reaction would be to affirm that the recent results by LIGO, e.g.,

http://egregium.wordpress.com/2009/08/20/news-from-ligo/

completely falsify the string theory programme (whereas in fact they only appear to rule out some ST models that favor the existence of cosmic strings).
 
  • #119
The great grandfather of spin foams is Penrose. If we take him seriously, our 4-dimensional spacetime emerges from an underlying twistor space. In this context, light-cone structures are maintained even with quantum fluctuations of the geometry, so Lorentz invariance should hold firm. Witten took an interest in this direction, as you know, and it's quite an interesting possibility even for string theory. Also, in that case, locality should be reconsidered.
 
  • #120
lumidek said:
...
Strict and careful analysis implies that it doesn't matter whether a theory is falsified by one photon or 2009 photons.
...
Ready to make a bet?

What if the single photon was due to a detector malfunction? Do you know all of its inner workings well enough to be sure?

In the real world you never get 100% certainty, yes. But if a different group with a different detector got the same result, then I think people would start to accept a strong word like "falsify". This is exactly not about betting, but about careful language.

The LHC has two technically completely different detectors constructed and operated by different groups. Is it a waste of resources?
 
