Kea's remark about the LHC and 2009

In summary, the conversation discusses the importance of making quantitative predictions before the opening of the LHC in 2009. It is emphasized that the strongest evidence in science comes from predicting observations before they occur. It is suggested that now is the time for particle theorists to publish predictions of new phenomena that can be confirmed or denied by the LHC. The conversation also mentions the possibility of embedding the standard model in a larger, more fundamental theory to make predictions. The speaker makes a bet that by the end of the year, certain theorists will have made predictions that can be checked by the LHC. The conversation also raises the question of whether those commenting have published physics papers in peer-reviewed journals.
  • #1
marcus
Science Advisor, Gold Member, Dearly Missed
I happened to see this comment of Kea's on a blog and thought it made good sense.


Gil,

It is not at all reasonable to wait and see what happens at the LHC. Any responsible tax-funded theoretician should be working their ass off to come up with quantitative predictions before 2009.

http://www.math.columbia.edu/~woit/wordpress/?p=684#comment-38202

The strongest kind of scientific evidence is predicting an observation BEFORE the fact.
"Post-dicting" or "retro-dicting" doesn't count for much by comparison.
Like when the Microwave Background was predicted in 1948, almost 20 years before it was first observed, and they (Alpher and Herman) even got the temperature approximately right. That strongly validated the cosmology model from which the prediction was derived. In effect, a theory bets its life on its prediction of a new phenomenon that prior established theory doesn't predict. If that phenomenon then shows up, it is a big deal. If not, the theory is discredited. There's risk.

It's an important point and it gets overlooked. This is just to check whether others are on board with the idea. NOW is the time for particle theorists to be publishing predictions of new phenomena not already expected from prior theory. The opening of the LHC is an expensive event of a sort that does not repeat. To really get your money's worth, you need theorists to construct theories that make specific predictions of new phenomena which the LHC can then either see or fail to see. In other words, theories that live or die by LHC results.

That's how it's been in the past with other experiments.

Kea put the essential message concisely.
 
  • #2
  • #3
jal said:
AND telling everyone that you did make the prediction.

Thank you, Marcus. I am often astonished at how many scientists appear to forget what 2008 means for particle physics. And jal, fortunately the web world makes this possible. Imagine how different things would have been if Web 2.0 had arrived only a few years after 2008.
 
  • #4
There is an entire section of the arXiv devoted to exactly this: hep-ph, with literally tens of thousands of possible models for what we will or will not see at the LHC and beyond. In fact, there are so many models that it's almost a certainty we won't be able to pin down the exact physics with just a hadron collider (this is called the inverse problem).

Many of us have contributed many years to this endeavour, so all I can do is boggle at the implication =/
 
  • #5
Haelfix said:
In fact, there are so many models its almost a certainty that we won't be able to pin down the exact physics ...

Haelfix, from a more mathematical point of view, this is exactly the problem: we should be able to pin down some physics based on how it fits into some highly abstract unification scheme, and strings have somehow failed to focus on this problem. Sure, there are plenty of stringy hep-ph papers, but how many of them contain precise 10 decimal place predictions for the LHC?
 
  • #6
What does string theory have to do with anything? It is pretty divorced from LHC energies, other than setting up some models that bear a passing resemblance to some of its mathematical structures. Also, nothing in physics automatically implies UV-IR relationships. That would be like expecting the theory of gluons to have something to say about atomic physics. If it happens, that's nice, but logically they need not be correlated.
 
  • #7
In theoretical-physics dreamland, the consistency constraints of established theories would allow us to extrapolate by embedding the existing theories in simpler, more fundamental theories, and then to predict on that basis. (And there would be no inverse problem, because there are few such theories. In practice there is no inverse problem of this kind, because there are none.)

Unfortunately, the consistency constraints we do have concern physics that is very different from LHC physics, so nobody should expect us to be miraculously able to make meaningful extrapolations based on them. The extrapolations that embed the standard model consistently in a (possibly just slightly) vaster, more complicated theory have been well explored.
 
  • #8

Any responsible tax-funded theoretician should be working their ass off to come up with quantitative predictions before 2009.


I'm not sure what Kea had in mind exactly but for me that means predictions derived from some theory of geometry-and-matter----some explicit unique theory that will crash and burn if LHC doesn't see what it predicted.

And I'll make a bet. You can make fun of me on January 1 2009 if this doesn't happen. I'll wager that by the end of this year both Ali Chamseddine and Yidun Wan will be on record with theories of geometry-and-matter which predict some particles/masses/interactions that LHC can either confirm or deny.

Maybe Chamseddine and Connes already are on record with a spectral geometry version of the particle model that predicts checkable stuff (a mass or two maybe.) But I expect they will have some bolder and riskier predictions by yearend.

And I expect Wan and the other braid-matter network-geometry people won't be left sitting around in wait-and-see mode either. The models of geometry and matter they are working with are extremely constraining and therefore (at least in my view) highly vulnerable to disproof. I could be wrong of course, but that's what I expect will happen.
 
  • #9
As a corollary question: how many of the people commenting in this thread have actually published a physics paper in a peer-reviewed journal?

Criticisms mean much more to me when coming from within the field than from some outside observer who may or may not have a clue as to what actually happens.
 
  • #10
BenTheMan said:
As a corollary question: how many of the people commenting in this thread have actually published a physics paper in a peer-reviewed journal?

Criticisms mean much more to me when coming from within the field than from some outside observer who may or may not have a clue as to what actually happens.

I have, but I don't consider myself qualified enough to make bold statements in this thread :rofl:. I've read that the data analysis at the LHC actually involves very complicated algorithms that effectively solve an inverse problem, and that the theoretical models play an important part there.

So theory and experiment are not well separated, and one has to check very carefully that what comes out of the data analysis is not biased. That is done via extensive Monte Carlo simulations.
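To make that concrete, here is a minimal toy sketch (assuming numpy; the yields are invented for illustration and this is nothing like a real LHC analysis) of the basic bias check: throw many pseudo-experiments with a known injected signal, apply the same estimator one would use on data, and verify that the pull distribution has mean near 0 and width near 1.

```python
# Toy pseudo-experiment bias check (illustration only; made-up numbers).
import numpy as np

rng = np.random.default_rng(42)

S_TRUE = 50.0    # injected signal yield (invented)
B_KNOWN = 200.0  # background yield, assumed perfectly known here
N_TOYS = 10_000

# Each pseudo-experiment observes a Poisson count around s + b.
n_obs = rng.poisson(S_TRUE + B_KNOWN, size=N_TOYS)

# The estimator applied to "data": background-subtracted count.
s_hat = n_obs - B_KNOWN

# Pull = (estimate - truth) / statistical error; for a counting
# experiment the error on s_hat is sqrt(n_obs).
pulls = (s_hat - S_TRUE) / np.sqrt(n_obs)

# An unbiased, well-calibrated analysis gives mean ~ 0, width ~ 1.
print(f"mean pull = {pulls.mean():+.3f}  (want ~ 0)")
print(f"width     = {pulls.std():.3f}   (want ~ 1)")
```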
 
  • #11
marcus said:
I'm not sure what Kea had in mind exactly but for me that means predictions derived from some theory of geometry-and-matter----some explicit unique theory that will crash and burn if LHC doesn't see what it predicted.

And I'll make a bet. You can make fun of me on January 1 2009 if this doesn't happen. I'll wager that by the end of this year both Ali Chamseddine and Yidun Wan will be on record with theories of geometry-and-matter which predict some particles/masses/interactions that LHC can either confirm or deny.
Is there any scientific content in that statement?

Maybe Chamseddine and Connes already are on record with a spectral geometry version of the particle model that predicts checkable stuff (a mass or two maybe.)
That does sound interesting. Do you have any interest in looking at the papers and trying to understand what they obtained and discussing the physics with people here? That would be more informative (and relevant) than betting about who may, possibly, sometime in the future, come out with some predictions based on some undefined theory :wink:

But I expect they will have some bolder and riskier predictions by yearend.
Ah..back to unsubstantiated banter.

And I expect Wan and the other braid-matter network-geometry people won't be left sitting around in wait-and-see mode either. The models of geometry and matter they are working with are extremely constraining and therefore (at least in my view) highly vulnerable to disproof. I could be wrong of course, but that's what I expect will happen.

Is any of this based on actually understanding the models?


Sorry... But I had to point out what the problem is with those "sociology" posts that have no scientific content and would therefore belong in the General Discussion forum. I thought this had been settled, but apparently not.
 
  • #12
nrqed said:
...
That does sound interesting. Do you have any interest in looking at the papers and trying to understand what they obtained and discussing the physics with people here?
...

You are asking a personal question about me. I hope it is appropriate to answer. Yes I do have interest. I would have reported the key papers when they came out in August 2006. Connes and John Barrett came out with NCG standard model papers in the same week. I expect I started threads back then to discuss this with people here and we probably had some discussion. But it was pretty new to me at that point. So in Spring 2007 I participated in a seminar with some grad students at the math department where I live. That was helpful. NCG is hot and they were trying to get into some NCG papers---essentially to start their PhD research.

I'd be pleased if anyone here at PF wants to look at the relevant NCG-SM papers and talk about them. I can't carry on discussion by myself. It's nice you think it sounds interesting.
Here are the relevant papers to start with:
http://arxiv.org/find/grp_physics/1/au:+Connes/0/1/0/all/0/1
 
  • #13
Alain Connes's original August 2006 paper is rough going. I found I could understand more of John Barrett's paper, which came out the same week with the same results about the SM. The later papers Connes wrote with Chamseddine are clearer---Ali Chamseddine definitely adds something as a co-author. Here is the relevant list of titles, in case anyone reading this thread is unfamiliar with this line of research and coming to it completely new. Most recent listed first.

http://arxiv.org/abs/0706.3690
Conceptual Explanation for the Algebra in the Noncommutative Approach to the Standard Model****
Ali H. Chamseddine, Alain Connes
Physical Review Letters 99, 191601 (2007)

http://arxiv.org/abs/0706.3688
Why the Standard Model
Ali H. Chamseddine, Alain Connes
13 pages


http://arxiv.org/abs/hep-th/0610241
Gravity and the standard model with neutrino mixing
Ali H. Chamseddine, Alain Connes, Matilde Marcolli
71 pages, 7 figures
Adv. Theor. Math. Phys. 11 (2007) 991-1089

http://arxiv.org/abs/hep-th/0608226
Noncommutative Geometry and the standard model with neutrino mixing
Alain Connes
Journal of High Energy Physics 0611 (2006) 081

Based on what I remember of these papers, only parts of which I found readable, I would recommend the one starred here (****) titled "Conceptual..." It is the shorter, more recent, and more explanatory paper.

My guess is that the minicourse that Chamseddine will give at the Oporto Meeting in July will follow the second paper in this list "Why the Standard Model". This seems now to be of special interest. Its abstract reads in part:

"The Standard Model is based on the gauge invariance principle with gauge group U(1)xSU(2)xSU(3) and suitable representations for fermions and bosons, which are begging for a conceptual understanding. We propose a purely gravitational explanation: space-time has a fine structure given as a product of a four dimensional continuum by a finite noncommutative geometry F. "
 
  • #14
marcus said:
And I expect Wan and the other braid-matter network-geometry people won't be left sitting around in wait-and-see mode either. The models of geometry and matter they are working with are extremely constraining and therefore (at least in my view) highly vulnerable to disproof. I could be wrong of course, but that's what I expect will happen.

nrqed said:
Is any of this based on actually understanding the models?
...

Yes. I've been following braid-matter since 2005. The basics are pretty simple---take a subset of the LQG spin networks (say, the 3-valent or 4-valent ones) and allow twisting and braiding of the links in the network. You get structures that can act like particles. This came out at the Loops '05 conference at Golm, Germany in Fall 2005, and I'm hardly the only one who has been watching the subsequent papers come out with considerable interest.
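A toy illustration of the twist bookkeeping (this is the Bilson-Thompson-style scheme that the 3-valent braid papers build on, stripped to its simplest form; the class and names below are my own invention, not the actual Wan/He 4-valent model):

```python
# Toy model of twisted-ribbon "particles" (illustration only): a
# candidate is a braid of three ribbons, each carrying an integer
# twist, and electric charge is read off as (sum of twists)/3 in
# units of e, as in Bilson-Thompson's helon scheme.
from dataclasses import dataclass

@dataclass(frozen=True)
class Braid:
    twists: tuple  # integer twist on each of the three ribbons

    @property
    def charge(self) -> float:
        """Electric charge in units of e."""
        return sum(self.twists) / 3

    def conjugate(self) -> "Braid":
        """Toy charge conjugation: flip every twist."""
        return Braid(tuple(-t for t in self.twists))

# Example assignments, as I recall them from the helon papers:
positron = Braid((1, 1, 1))   # charge +1
up_quark = Braid((1, 1, 0))   # charge +2/3 (one of three variants)
neutrino = Braid((0, 0, 0))   # charge 0

for name, b in [("e+", positron), ("u", up_quark), ("nu", neutrino)]:
    print(f"{name}: charge {b.charge:+.2f} e, "
          f"antiparticle twists {b.conjugate().twists}")
```

The point of the toy is only that the combinatorics is rigid: once the twist-to-charge rule is fixed there is no parameter left to tune, which is why the approach is so vulnerable to falsification.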

My expectations are also based on other information. There has already been a paper dealing with the 3-valent case that is predictive. It makes a prediction about the number of generations that could actually cause the 3-valent case to be falsified---shown to be an invalid approach. Yidun Wan has not been involved in that but has been focusing on 4-valent network braid matter. I have seen references to several papers that he has in preparation and there is one in particular that I expect will have particle physics predictions. I know from other things that there is a drive at Perimeter to get testable predictions from the various non-string QG lines of research they work on.

But to answer your question about me personally. Yes, it is based in part on considerable study of the braid-matter models over the past two-and-a-half years. Not that braid-matter is my sole favorite approach---I follow several lines with interest.

It might be better if you would ask questions about the physics or even better discuss and explain what you understand of it, rather than directing personal questions to me. I'm not objecting to your making me the focus, but it's more in keeping to talk about the research and the QG research scene. No offense or reproach intended.
 
  • #15
So you think they will be able to extract highly constrained information about particles at the TeV scale from topological invariants of 4-valent graphs, which are conjectured to describe excitations at the Planck scale, and for which there is so far no indication that they behave anything like particles in a semiclassical spacetime state (which we can't construct for these graphs)?
 
  • #16
f-h said:
So you think they will be able to extract highly constrained information about particles at the TeV scale from topological invariants of 4-valent graphs, which are conjectured to describe excitations at the Planck scale, and for which there is so far no indication that they behave anything like particles in a semiclassical spacetime state (which we can't construct for these graphs)?

Yeah f-h, and I'm also happy with your savvy and skeptical way of putting it. You have a good insider's feel for this and I've often gotten useful signals from you. But in fact this time I am optimistic.

One way it could work out is if Wan and He's approach using 4-valent graphs leads to predictions which are prima facie wrong. Then we wouldn't even need to wait for LHC data.
That wouldn't be very satisfying, but it could happen. I'm betting that one way or another something falsifiable comes out of the Wan et al. work this year.

EDIT: Now you've got me worried. Should I retract my prediction, I wonder?

What I am basing this on is my general sense that braid-matter is itself highly constraining and highly vulnerable to falsification. It has no wiggle room. As they proceed to derive consequences of the model, it will very likely diverge from the standard model at some point, and diverge in a detectable way.

I hope it diverges in a way that is subtle enough to require checking by the new collider and does not turn out to be just plain wrong. (Which I gather would be your guess, or what do you think is likely?)
 
  • #17
short reflection on thinking

This comment is not in favour of or in disfavour of anything particular, it's just a reflection of mine in this context.

marcus said:
In effect, a theory bets its life on its prediction of a new phenomenon that prior established theory doesn't predict. If that phenomenon then shows up, it is a big deal. If not, the theory is discredited. There's risk.

If, for example, I were making a theory, I would be quite concerned with survival even in the event of destructive feedback. Ideally, the destructive feedback would not only tell me I was wrong, it would also constructively contain information about how I should remodel my expectations and theories, and improve. So it wouldn't be killing feedback, it would be deforming feedback. The feedback itself, whether positive or negative, is food for improvement.

This brings us to a theory of theories, and if there is such a thing, it is clearly much harder to kill. But is that a bad thing? IMO no. Because what is our goal? To predict, at all costs, what will happen tomorrow? No, we also want to predict what will happen in a year and in 100 years. So survival is also important, along with constructive learning and adaptation. Can we afford to take the chance that we invest in an approach and, after a long time, get stumped and have no clue how to resolve it?

Can a theory containing adaptive theories not be falsified? Sure it can, if the adaptive strategy fails. But it's more complex to disprove. It seems that the more appropriate comparison between strategies is one of efficiency of development and learning. This puts a constraint on such an approach: it is not the same as saying anything goes, or that we can have a massive landscape of theories that we can test randomly one by one. That is not efficient. Ideally, wouldn't we expect a BALANCE between minimizing risk and making specific predictions?

As for biological organisms: certainly organisms die, and species die, and new species come. But the important part is the continuous evolution. Life never dies; that would be a total failure.

It seems that many think a good theory is one that makes very specific predictions and is highly vulnerable, and that such theories are where we should focus?

Does anyone else feel that this is weird?

This is somewhat analogous, in biology, to asking only whether an organism is fit in a particular environment, and not even considering how fit that organism is at adapting, which is necessary for overall survival and growth.

Is being easily shot dead a trait? And what next? :)

/Fredrik
 
  • #19
marcus said:
What I am basing this on is my general sense that braid-matter is itself highly constraining and highly vulnerable to falsification.

That's true, I guess. My problem with braids is that they have been vastly overhyped by Lee Smolin. They are certainly interesting, but to expect them to correspond one-to-one to particles of the standard model seems optimistic at best. If the algebra turns out slightly wrong, who says that renormalization, or the way semiclassical spacetime emerges between the Planck and the low-energy scale, will not come to the rescue? Or who says that it won't destroy it?

In AQG it seems impossible to do this anyway, and Carlo Rovelli is looking at non-graph-changing Hamiltonians again. The renormalization procedures suggested for spinfoams so far do not seem to conserve them either (off the top of my head; it's been a while since I looked at that).

IIRC Yidun avoided talking about particles in his Morelia talk, it was much better: Look we've got some mathematical structure here, don't know what it is, let's study it.
 
  • #20
marcus said:
It might be better if you would ask questions about the physics or even better discuss and explain what you understand of it, rather than directing personal questions to me. I'm not objecting to your making me the focus, but it's more in keeping to talk about the research and the QG research scene. No offense or reproach intended.


I agree that it would be better to discuss the physics and ask questions about it or explain it. It seems to me that when you say things like "I bet this or that will happen" or make your opinions the central point of a post, it's really you who decides to make yourself the focus. This is the mark of social-science types of discussion, where the emphasis is often on opinions rather than facts.

And I mean no offense either.



Patrick
 
  • #21
f-h said:
That's true, I guess. My problem with braids is that they have been vastly overhyped by Lee Smolin. They are certainly interesting, but to expect them to correspond one-to-one to particles of the standard model seems optimistic at best. If the algebra turns out slightly wrong, who says that renormalization, or the way semiclassical spacetime emerges between the Planck and the low-energy scale, will not come to the rescue? Or who says that it won't destroy it?

Sometimes it is hard to distinguish between overhyping and excitement with a possibility. I have always viewed braid matter as an extremely risky longshot---but one that deserves to be explored and which presents really difficult combinatorial "knotty" problems.

I haven't ever heard Smolin claim that braid-matter would work or, as you say, correspond one-to-one with particles of the SM. But I think his excitement about this longshot possibility has helped get others working on it. Not just Yidun Wan but also Lou Kauffman, Jon Hackett, Song He, and others. It has also probably kept Bilson-Thompson working harder than he would have otherwise.

I also think that, win or lose, it is valid research. The structures involved are ones that SHOULD be explored. I also like Yidun Wan's attitude, as you describe it.

f-h said:
IIRC Yidun avoided talking about particles in his Morelia talk, it was much better: Look we've got some mathematical structure here, don't know what it is, let's study it.

That was a year ago, of course. Now he is beginning to talk more in the direction of particles----invariants, C,P,T, classification into braids that do or do not propagate/interact. But essentially he seems to be cool, detached, and not rushing.

But I wouldn't want everybody to have the same personality, would you? There's a division of labor, in the different parts people perform in science.
 
  • #22
I just meant my reference to the Wan-He work to be an illustration. They and several other theorists are obviously working very hard right now---to bring their new theories to the point of risking falsifiable predictions.

I think that was the essential point Kea was making:

Now is a time when you would naturally expect theorists NOT to be taking a "wait-and-see" attitude.
And you would expect there NOT to be a lull in theory research publication, other things being equal.
You would, as Kea indicated, expect theorists to now be working their tails off.

They will want to derive predictions of new phenomena from their models before the data arrives.
Because prediction before the onset of a flood of new data is traditionally worth far more than post-diction after the fact.
Einstein, for example, published General Relativity in 1915, BEFORE the 1919 solar eclipse when Eddington measured the deflection angle. It was more convincing that way. That's what risk and falsifiability are about. Kea reminded us of this, in case we needed reminding.
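The number at stake in that test is worth spelling out. For light grazing the Sun, General Relativity predicts a deflection (standard textbook result, plugging in round values for the constants):

```latex
\delta\theta \;=\; \frac{4 G M_\odot}{c^{2} R_\odot}
  \;\approx\; \frac{4\,(6.67\times 10^{-11})\,(1.99\times 10^{30})}
                   {(3.00\times 10^{8})^{2}\,(6.96\times 10^{8})}
  \;\approx\; 8.5\times 10^{-6}\ \text{rad} \;\approx\; 1.75''
```

That is twice the roughly 0.87'' one gets from a naive Newtonian corpuscular calculation, so Eddington's measurement could cleanly discriminate between the two predictions.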
 
  • #23
marcus said:
And you would expect there NOT to be a lull in theory research publication, other things being equal. You would, as Kea indicated, expect theorists to now be working their tails off.

But there isn't a lull. Here is the average number of hep-ph papers per month with "LHC" in the title, by year:

  • 2004: 10.2
  • 2005: 10.2
  • 2006: 12.7
  • 2007: 21.4
  • 2008 (to date): 25.6

I don't see a lull here.
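(For anyone who wants to reproduce or extend this count, here is a sketch against the public arXiv API at http://export.arxiv.org/api/query. The endpoint is real; the exact query syntax for cat:, ti: and submittedDate ranges is written from memory of the API documentation, so treat those details as assumptions to verify.)

```python
# Sketch: count hep-ph papers with "LHC" in the title, per year.
# Query syntax per the arXiv API docs as I recall them -- verify.
import re
import urllib.parse
import urllib.request

def lhc_title_count(year: int) -> int:
    """Total hep-ph submissions in `year` with 'LHC' in the title."""
    query = (f"cat:hep-ph AND ti:LHC AND "
             f"submittedDate:[{year}01010000 TO {year + 1}01010000]")
    url = ("http://export.arxiv.org/api/query?"
           + urllib.parse.urlencode({"search_query": query,
                                     "max_results": 0}))
    with urllib.request.urlopen(url) as resp:
        feed = resp.read().decode()
    # The Atom feed reports the hit count in <opensearch:totalResults>.
    match = re.search(r"totalResults[^>]*>(\d+)<", feed)
    return int(match.group(1)) if match else 0

for year in range(2004, 2009):
    n = lhc_title_count(year)
    print(f"{year}: {n} papers, {n / 12:.1f}/month")
```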
 
  • #24
I didn't say there was a decline in hep-ph posting.
Not surprised by what you report. Thanks for doing the checking, Vanadium.
There may be other issues which you are not addressing, of course.
hep-phenomenology is not the whole picture :smile:
 
  • #25
It is if you want papers dealing with 'falsifiable, testable predictions at the LHC'.

There seems to be some sort of cognitive dissonance on this board. You realize that in the five or six years now that I have been attending weekly theory, phenomenology, and cosmology seminars at my university (an Ivy League one, no less), the number of times 'braid-matter program', 'LQG', 'CDT', 'asymptotic safety', blah blah blah have been brought up is identically zero. Connes' model is the exception; it got a brief glance for a while to see if it sparked any interest. Yet, to be sure, the overriding theme is nearly always related to beyond-the-standard-model physics, the LHC, physical predictions, and so forth.

You realize all these glamorous programs are a tiny, minuscule fraction of what the majority of high-energy physicists and cosmologists actually *do* all day. Even string theory, with all its applications in various physical regimes, shows up fairly infrequently.

So yes, this board needs a bit of a reality check. The implication is that the rest of us are living in some sort of irrelevant la-la land, and that actual physics must be dominated by the whims of the fringe corners of the most speculative and out-of-reach areas of theory land. That is simply not true. It may be true on the internet, but it most certainly isn't true in academia.
 
  • #26
Haelfix said:
... You realize that in the five or six years now that I have been attending weekly theory, phenomenology, and cosmology seminars at my university (an Ivy League one, no less), the number of times 'braid-matter program', 'LQG', 'CDT', 'asymptotic safety', blah blah blah have been brought up is identically zero. Connes' model is the exception; it got a brief glance for a while to see if it sparked any interest. ... Even string theory, with all its applications in various physical regimes, shows up fairly infrequently.
...

That is valuable sociological data, Haelfix. Thanks for sharing. I got the impression years back that you were a grad student at an ivy place, I think in Pennsylvania. I thought it was Penn State for some reason, but maybe U Penn. Something you said. I may have misremembered or confused you with someone else. My impression was that you were doing phenomenology, actually, not theory or cosmology. But one can change fields in a PhD program.

The breadth of what you were exposed to, and your options, would have been different at another institution. But I suspect, sadly enough, not very different in many reputable US departments, Ivy or otherwise. That's the problem, basically, as I see it.

There are several places in Canada, several in the UK, and one or more in France, Holland, etc., where I expect you would have gotten a wider perspective, with wider thesis-research options. But that can be of little comfort now. Best wishes and good luck to you, in hopes you will finish up soon and get the all-important license (if that's what you are up to now).

In any case, thanks for giving us the perspective from inside the physics department of a US Ivy League institution!
 
  • #27
Haelfix said:
It is if you want papers dealing with 'falsifiable, testable predictions at the LHC'.

To be perfectly honest, Haelfix, I'm not convinced that you have a better idea of what will be seen at the LHC than some of us who are banned by the professional arxiv. It only takes a little electronic paper to write down some good predictions, and there is no reason to assume that these will appear on hep-ph. No offense to your work was intended.
 
  • #28
marcus said:
...
And I'll make a bet. You can make fun of me on January 1 2009 if this doesn't happen. I'll wager that by the end of this year both Ali Chamseddine and Yidun Wan will be on record with theories of geometry-and-matter which predict some particles/masses/interactions that LHC can either confirm or deny.
...

As I may have indicated several posts back, to f-h, I should have added "unless something else rules them out first!" It's not worth quibbling about, but particularly in the braid-matter case the model they are dealing with is highly restrictive, and it may be destined to produce predictions that rule it out on elementary grounds. So we don't really need to refer to the LHC in that case. What I expect is that they will have made the theory falsifiable one way or the other by yearend. If they haven't, remind me of this outburst of foolish optimism and I'll look sheepish. :smile:

That is just to clarify the wager. Kea and Haelfix, I don't mean to interrupt whatever you are talking about.
 

