LHC, supersymmetry and strings

If LHC finds supersymmetry, will you find string theory more appealing?


  • Total voters
    19
  • #51
arivero said:
Hmm I should check (it has been one year since I heard the lectures), but I believe that the model allowed for seesaw.

Perhaps. This would be the dimension-five operator HLHL over a mass scale which has to be something like 10^14 GeV. I didn't really understand any of the physics in the paper, only the parts that said ``a neutrino coupling comparable to the tau lepton is required for such and such a cancellation''.
 
  • #52
arivero said:
"Also didn't Connes predict a Higgs mass of 170 GeV? Is that reasonable? Do I have some orders of magnitude wrong? "

A point here is that if you postulate you can run up to GUT scale (or down from it), the Higgs is always in the order of magnitude of these 170.

The Tevatron has almost excluded an SM at 160 GeV, and 170 is pretty close. Does this mean that the Tevatron may have ruled out Connes model by next summer?
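
For orientation (a rough sketch of the generic running argument, not Connes's actual computation): at tree level

m_H^2 = 2\,\lambda(m_H)\, v^2, \qquad v \simeq 246\ \mathrm{GeV},

and demanding that \lambda stay perturbative (triviality) and positive (vacuum stability) when run all the way up to ~10^16 GeV confines m_H to roughly the 130--180 GeV window, which is why running-to-the-GUT-scale arguments keep landing in the neighbourhood of 170 GeV.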
 
  • #53
an SM Higgs, that is. Duh.
 
  • #54
BenTheMan said:
Perhaps. This would be the dimension-five operator HLHL over a mass scale which has to be something like 10^14 GeV. I didn't really understand any of the physics in the paper, only the parts that said ``a neutrino coupling comparable to the tau lepton is required for such and such a cancellation''.

I see, then you argue that to seesaw a 1.7 GeV neutrino down to 0.01 eV one needs an intermediate scale, and that such an intermediate scale would most probably change the renormalization group running and hence the predictions, right?
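
Just to put rough numbers on it (orders of magnitude only, using the standard seesaw formula):

m_\nu \simeq \frac{m_D^2}{M}, \qquad M \simeq \frac{(1.7\ \mathrm{GeV})^2}{0.01\ \mathrm{eV}} \approx 3\times 10^{11}\ \mathrm{GeV},

while the generic dimension-five estimate m_\nu \simeq v^2/\Lambda with v \simeq 174 GeV and \Lambda \simeq 10^{14} GeV gives a few tenths of an eV. Either way there is a new heavy scale in the game.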
 
  • #55
josh1 said:
I haven't "softened" my claim. I've explained to you what string theorists mean when they say that there is a unique theory underlying stringy physics as it's currently understood. There's just no way that these relations have no deeper physical meaning.
I agree, there must be a deeper meaning to all this. I do not necessarily agree with the conservative point of view of T. Banks. Nevertheless, I do not want to accept a definite claim about something that has not yet been found.
 
  • #56
BenTheMan said:
So the fact that one can derive A/4 for black hole radiation in loops isn't an achievement?
I don't think one can derive it in LQG. All one can derive is that the entropy of a surface is proportional to its area, with a universal (but undetermined!) constant of proportionality. I can hardly find this result surprising.
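
Schematically, what the state counting gives is (the exact constant depends on the counting and on the Barbero-Immirzi parameter \gamma; this is just the commonly quoted form):

S = \frac{\gamma_0}{\gamma}\,\frac{A}{4\,\ell_P^2},

so the 1/4 is not derived independently: \gamma is fixed to \gamma_0 by matching the Bekenstein-Hawking value.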

What IS a big achievement of LQG in my view, is the result that the theory is ultraviolet finite owing to the diffeomorphism invariance.
 
  • #57
arivero said:
I see, then you argue that to seesaw a 1.7 GeV neutrino down to 0.01 eV one needs an intermediate scale, and that such an intermediate scale would most probably change the renormalization group running and hence the predictions, right?

Well, it doesn't screw up gauge coupling unification because the right handed neutrino is a singlet and has zero hypercharge. One can give it a mass at 10^14 GeV, and then use something like the seesaw to get realistic neutrino masses.

Something else that is nice is that the mass term for a right handed neutrino explicitly breaks U(1)_{B-L} by two units. So neutrino masses are in some sense tied to this scale.
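
Explicitly, in schematic notation, the term in question is the Majorana mass for the right-handed neutrino,

\mathcal{L} \supset -\tfrac{1}{2}\, M_R\, \overline{\nu_R^c}\,\nu_R + \mathrm{h.c.},

which carries lepton number |\Delta L| = 2 (and no baryon number), hence |\Delta(B-L)| = 2, exactly the two units mentioned above.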

This is why people like SO(10)---you get a right handed neutrino for free, along with SU(3)xSU(2)xU(1)xU(1)_{B-L}.

I don't think one can derive it in LQG. All one can derive is that the entropy of a surface is proportional to its area, with a universal (but undetermined!) constant of proportionality. I can hardly find this result surprising.

I agree 100%. I wish I could scale all of MY answers by an arbitrary constant :)

What IS a big achievement of LQG in my view, is the result that the theory is ultraviolet finite owing to the diffeomorphism invariance.

This is a bit confusing---marcus will complain that ``LQG'' is a generic term for non-stringy QG research, and link you to all of the talks at LOOPS 07 (the name of the conference, ironically, sort of proves your point). IF all of these approaches to QG are as different as he claims, but they are all supposedly ``background independent'', why should I be surprised by being able to find a background independent UV finite theory of QG when there are so many to choose from?
 
Last edited:
  • #58
BenTheMan said:
Well, it doesn't screw up gauge coupling unification because the right handed neutrino is a singlet and has zero hypercharge. One can give it a mass at 10^14 GeV, and then use something like the seesaw to get realistic neutrino masses.

Something else that is nice is that the mass term for a right handed neutrino explicitly breaks U(1)_{B-L} by two units. So neutrino masses are in some sense tied to this scale.

Ok, I see, no direct contradiction then, only that we get a new scale to justify! Talk of the desert.
 
  • #59
Hi Ben, you seem to be carrying on a one-sided conversation with me---and giving a not-always-accurate interpretation of what I would say, and mean by it, in various cases.

The fact is that people do use "LQG" in two different senses. Demy was using it in the restricted sense, I think----the canonical approach developed mostly in the 1990s.
That's clear and fine. String-thinkers often use the word as a catch-all generic for the Loop community---the nonstring competition in general. That includes a lot of approaches that you only get an idea of if you look at Loops '07. And then they may make false statements about the nonstring competition because they don't know what it actually looks like.

I don't complain about this. It is just how people use the word. Sometimes the ambiguity can cause confusion and needs to be "disambiguated" (as the Wikipedia people say.)

Most non-string QG approaches do not currently have a BH entropy result. What Demy said seems clear, and I think it is obvious he is talking about canonical "LQG proper."

"LQG proper" does have a BH entropy result. But mathematically speaking it is not, if I remember correctly, what Demy says. I may be wrong about this but I think that according to the best current interpretation (e.g. Hanno Sahlmann and references therein) the BH entropy is NOT proportional to the area.

To first order, yes. But there are some correction terms. People have different attitudes about higher-order terms. Some dismiss them as inconsequential, others may decide they are interesting. Sahlmann is at 't Hooft's institute at Utrecht and has a pretty amazing track record for a young person---I think he is perceptive. He didn't have to look at Corichi's results (Sahlmann is good in a lot of areas and can investigate what he pleases), but he did find them interesting. So that's a cue that there might be something interesting in the fact that BH entropy is not exactly proportional to area in "LQG proper".

I could be misremembering, so I will get a link to Hanno's paper:
http://arxiv.org/abs/0709.2433
Toward explaining black hole entropy quantization in loop quantum gravity
Hanno Sahlmann
14 pages, 5 figures
(Submitted on 15 Sep 2007)

"In a remarkable numerical analysis of the spectrum of states for a spherically symmetric black hole in loop quantum gravity, Corichi, Diaz-Polo and Fernandez-Borja found that the entropy of the black hole horizon increases in what resembles discrete steps as a function of area. In the present article we reformulate the combinatorial problem of counting horizon states in terms of paths through a certain space. This formulation sheds some light on the origins of this step-like behavior of the entropy. In particular, using a few extra assumptions we arrive at a formula that reproduces the observed step-length to a few tenths of a percent accuracy. However, in our reformulation the periodicity ultimately arises as a property of some complicated process, the properties of which, in turn, depend on the properties of the area spectrum in loop quantum gravity in a rather opaque way. Thus, in some sense, a deep explanation of the observed periodicity is still lacking."
 
Last edited:
  • #60
Ok, I see, no direct contradiction then, only that we get a new scale to justify! Talk of the desert.

Yeah, but in this case the scale is linked to two things---neutrino masses and B-L breaking, so it's not quite so bad. One could also conceive of adding a SUSY breaking sector at this scale, giving us THREE things that are linked.

I will point out that, of course, only one of these things has been observed.
 
  • #61
Arivero, in sum, what is the situation about Connes predictions?

I would like it if there were a definite competing prediction from some other person. Then we could see who is right.

So far all I find in Connes is this explicit figure for Higgs 170 GeV.

It seems now that no one wants to challenge this and give a different figure! No competition? Can that be right?

About the neutrino, people apparently impute things to him and say this or that, but I cannot find any explicit prediction by Connes. Did he actually make one? Am I missing some obvious statement of his? If anybody knows of an explicit prediction, please tell me what paper and on what page, so I can read it.

It would be nice to have a couple of precise predictions by reputable people on record, so we could see who is closer when there finally are results.
 
Last edited:
  • #62
marcus said:
Arivero, in sum, what is the situation about Connes predictions?

I would like it if there were a definite competing prediction from some other person. Then we could see who is right.

So far all I find in Connes is this explicit figure for Higgs 170 GeV.

It seems now that no one wants to challenge this and give a different figure! No competition? Can that be right?

About the neutrino, people apparently impute things to him and say this or that, but I cannot find any explicit prediction by Connes. Did he actually make one?

Not exactly; the neutrino must be seesawed (seesawaw? seesawt?) away, so there is no prediction. What he remarked is that if the Dirac mass of the tau neutrino were big enough, it should be accounted for in the sum of the total mass of the fermions when making predictions.

It is not that there is no competition. It is that more or less all the competitors land on the same target, and most of them can even tune the target, or move the arrow while it is flying. Note that any theory with a very massive Higgs is automatically discarded, because it cannot run up to the GUT scale.
What happens is that most of the theories include susy and therefore more parameters to play with. Connes's model is old-fashioned, so he can complete the exercise and show numbers; I believe that was the whole point of the "prediction": to hint that NCG models can give numbers, while others need more input data.
 
  • #63
Fascinating to watch this play out!

If I understand correctly, Connes and Smolin are both at risk from what the LHC could find.
I believe that Smolin's ball-and-tube model (which he has been investigating since 2005) cannot accommodate susy or extra dimensions at the scale to be probed---so it is dead if the LHC sees either susy or an extra dimension.
And if I understand something you said earlier, Connes's NCG standard model has no room for susy, so it is dead if susy is observed. Also Connes has less wiggle room than the others about the 170 GeV, because he has fewer parameters to adjust. He cannot shift the arrow in mid-flight, as you say. So his model is at risk of being seriously compromised if something very different is seen. Please correct me if I am misremembering what you said.

Can you think of anybody else's theory that is at risk from what the LHC might reasonably see?
 
Last edited:
  • #64
marcus said:
Demy was using it in the restricted sense, I think----the canonical approach developed mostly in the 1990s.

What Demy said seems clear, and I think it is obvious he is talking about canonical "LQG proper."

... the BH entropy is NOT proportional to the area. To first order, yes. But there are some correction terms.
Marcus, let me just confirm everything you said above. Of course, I have been talking about the first order approximation to the BH entropy.
 
  • #65
The reason people only care about the first-order contribution to BH entropy is because it's a direct result of one-loop truncated quantum gravity, the original regime in which Hawking and others made the calculation.

You are free to go to two loops and look for h.c., but you get much larger pathologies, as the divergent structure of gravity is exponentially stronger at two loops. Moreover, the contributions of these terms are on the order of the Planck scale or transplanckian, which is right around the place where perturbation theory no longer makes sense, so including them is questionable in the first place.

So yes, insofar as the semiclassical approximation is valid (read: in galaxies, etc.), that first area term will dominate.
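
Rough numbers to illustrate the point (order of magnitude only): for a solar-mass black hole

\frac{A}{4\,\ell_P^2} \sim 10^{77}, \qquad \ln\!\left(\frac{A}{\ell_P^2}\right) \sim 10^2,

so any logarithmic or higher-order correction is completely swamped by the area term in the semiclassical regime.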
 
  • #66
So far all I find in Connes is this explicit figure for Higgs 170 GeV.

I would be interested to see how one arrives at this figure, given that the precision electroweak data prefer a light Higgs. Typically one shows the chi-squared analysis in this situation:

[Image: s06_blueband.jpg --- the LEP electroweak working group's "blue band" plot of Delta chi^2 vs. Higgs mass]


This is what the LEP-II data say about the Higgs mass---it should be light to fit the rest of the electroweak parameters. The Higgs at 160(ish) GeV that Fermilab supposedly saw and then un-saw was the CP-odd Higgs from the MSSM, not the SM Higgs.
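
For readers who haven't seen that plot: the quantity on the vertical axis is, roughly,

\Delta\chi^2(m_H) = \chi^2(m_H) - \chi^2_{\min},

from the global fit to the precision electroweak observables; the minimum sits at a fairly light Higgs mass, and the one-sided 95% CL upper bound on m_H is read off where \Delta\chi^2 \approx 2.7.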

Of course, anything IS possible. And Connes's paper is pretty opaque.

And Marcus, I have quoted you the section of Connes's paper before (http://arxiv.org/abs/hep-th/0610241). Connes essentially says that the tau neutrino Yukawa coupling should be similar in magnitude to that of the tau lepton, to get some essential cancellations. It's somewhere in section 5 of that 80-page paper.

This seems pretty wrong to me, unless he's talking about getting third-generation Yukawa coupling unification at some renormalization scale---this is probably what he means, I guess. Like I said, I didn't understand his paper---I skipped over all of the confusing math to the phenomenology part.

What happens is that most of the theories include susy and therefore more parameters to play with. Connes's model is old-fashioned, so he can complete the exercise and show numbers; I believe that was the whole point of the "prediction": to hint that NCG models can give numbers, while others need more input data.

I agree with this statement. In this vein, I would like to see one string model analyzed in this much detail, down to the order of the neutrino masses and such. It would presumably be wrong, but it would still be nice to see a Higgs mass prediction.

As an aside, whenever people say ``String theorists never predict anything'', I show them the abstract of Faraggi's paper in 1992, where he predicts a top mass of 175 GeV based on general arguments from heterotic string models: http://www.sciencedirect.com/science?_ob=ArticleURL&_udi=B6TVN-46YPJFM-14Y&_user=10&_coverDate=01%2F02%2F1992&_rdoc=1&_fmt=&_orig=search&_sort=d&view=c&_acct=C000050221&_version=1&_urlVersion=0&_userid=10&md5=b47f4c34504d94420fb8ff06944d8deb

Can you think of anybody else's theory that is at risk from what the LHC might reasonably see?

Theories are very tricky to kill---just look at how many people still work on technicolor.
 
Last edited:
  • #67
For my part in this interesting discussion, I would like to see a clear explicit standoff.

Connes has publicly signed off on 170 GeV. It is out there even in the ABSTRACT of one of his recent papers---where you don't have to flip pages. Presumably, with some reasonable range of uncertainty, his beautiful NCG standard model is committed to that number, or so it looks to this observer.

I would like to see another prominent person sign off on a substantially different number.

Someone with roughly comparable reputation and body of theory at stake. Once we have a fair, even standoff, then in a sense I don't care who. Even though I admire Connes enormously, I would be delighted if he were proven wrong---scientific theories are supposed to be proven wrong: it's how progress comes about. It would be real, and valuable, information if Connes's standard model were falsified. And he would work like a demon (he works like a demon already, I gather from reports).

So I'm basically happy with the situation, but I'd prefer having another player in the game. Likewise in the arena of other particles. If anyone wants to bet that Connes will be refuted by the LHC, they should find a clear, explicit prediction that he makes, written out in black and white, and show me that (not just some inference about what he predicts, but what number he SAYS he predicts), and they should make their own predictions explicitly. And then we'll see.

It's a great time to be watching :biggrin:
 
Last edited:
  • #68
Someone with roughly comparable reputation and body of theory at stake.

What does Connes have at stake? Building a model that has been shown to be wrong? Nobody is going to forget his other contributions. I think the difference between Connes and other people who do phenomenology is that they absolutely DON'T stick to one model---and this is exactly as it should be, because there is no reason to do such a thing, especially for young people in the field. What will all the people who work on technicolor do when it isn't found? Staking everything on one model makes everybody except one person a loser---but if everybody hedges their bets, the field is better off in general, I think.

Why does it have to be all or nothing?

It's a great time to be watching

Not as great as it will be in five years :)
 
  • #69
BenTheMan said:
As an aside, whenever people say ``String theorists never predict anything'', I show them the abstract of Faraggi's paper in 1992, where he predicts a top mass of 175 GeV based on general arguments from heterotic string models: http://www.sciencedirect.com/science?_ob=ArticleURL&_udi=B6TVN-46YPJFM-14Y&_user=10&_coverDate=01%2F02%2F1992&_rdoc=1&_fmt=&_orig=search&_sort=d&view=c&_acct=C000050221&_version=1&_urlVersion=0&_userid=10&md5=b47f4c34504d94420fb8ff06944d8deb

The article can also be found at KEK, via SPIRES:
http://ccdb4fs.kek.jp/cgi-bin/img_index?9110375
http://www.slac.stanford.edu/spires/find/hep/www?rawcmd=FIND+A+FARAGGI&FORMAT=www&SEQUENCE=citecount%28d%29


Faraggi himself comments on the result: it was already known that the mass of the top was greater than 80 GeV, and the phenomenon of the quasi-infrared fixed point for the top Yukawa coupling makes the prediction almost independent of the GUT model: values between 0.5 and 1.5 at the GUT scale renormalise down to about O(1).
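
In other words (a rough sketch with standard numbers, not Faraggi's exact analysis):

m_t = \frac{y_t(m_t)\, v}{\sqrt{2}}, \qquad v \simeq 246\ \mathrm{GeV},

so if the quasi-infrared fixed point focuses y_t(m_t) towards roughly 1 for GUT-scale values anywhere between 0.5 and 1.5, one lands near m_t \approx 175 GeV almost regardless of the boundary condition.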

There was a lot of buzz about this infrared fixed point in the early nineties, but nowadays it is barely mentioned; I am not sure why, nor when the argument was superseded.
 
Last edited by a moderator: