
Mirror Matter Hypothesis?

  1. Apr 8, 2013 #1
    What is the current state of the hypothesis of mirror matter today?
    Are there any experimental data or theoretical arguments that exclude it by now, or is it still considered viable among physicists?

    The relationship between mirror matter and ordinary matter is different from that between matter and antimatter, which are related by a combined space-time reflection.

    As far as I can make out, mirror matter is hypothesised to be related to ordinary matter by space reflection (parity) alone.

    Mirror-matter theorists like Dr. Robert Foot believe that dark matter might be made up of mirror hydrogen and mirror helium.
  3. Apr 8, 2013 #2


    Science Advisor

    This was one of Foot's predictions:

    LHC results seem to rule that out.
  4. Apr 8, 2013 #3
    In "ATLAS and CMS hints for a mirror Higgs boson", Robert Foot states in the abstract:

    The latest Higgs result at 125 GeV doesn't seem to have this 50% anomaly.

    These are Robert Foot's latest papers mentioning the Higgs:

    Quark-lepton symmetric model at the LHC

    Electroweak scale invariant models with small cosmological constant

    Both too technical for me to see what he's now saying about mirror-matter theory and the Higgs boson!

    Could anyone enlighten me?
  5. Apr 8, 2013 #4



    Staff: Mentor

    In addition, any signal of 50% of the standard model value near 144 GeV is excluded now, if I remember correctly.
    That paper looks like a typical example of how useless some theory papers can be. There was a statistical fluctuation, a theorist adapted his pet theory to the fluctuation, the fluctuation went away, and we learned nothing new. That particular idea is excluded, but it was so finely tuned to the fluctuation that the exclusion is irrelevant anyway.
  6. Apr 8, 2013 #5
    That's a little unfair, I think. In science we need as rich a variety of hypotheses to test as possible, and the role of theorists is to generate them. Most of them are crazy, tuned, and implausible, but then again the Standard Model is incredibly tuned as well, so who knows? Maybe you are right and there is a good reason why we should know in advance that these ideas are not going to correlate with reality, but as far as I know our understanding of epistemology has not made it that far yet.

    Also it is the job of theorists to tell us of all the possible interpretations of any fluctuations, even though we all know that said fluctuation is probably just that. Sure, it's not very exciting when the fluctuation goes away, but the field still moves forward inch by inch. Maybe these papers are only interesting to other theorists who work on similar things, but so what? This is the case with minor advances in every field.
  7. Apr 9, 2013 #6



    Staff: Mentor

    Well, if the models are so flexible that we can postdict everything, how can we trust any predictions? How can we test a model if there is a version of it for every possible measurement?
    The Standard Model is able to predict hundreds of measurement results with just ~25 free parameters. If a new theory requires more free parameters than it can predict measurements (different from the SM) within the reach of current experiments, it is hard to test.
  8. Apr 9, 2013 #7
    Well, this is a philosophical question now, and the answer depends on what you think about the problem of induction. If you adhere strictly to falsificationism then you cannot rely on any predictions that go outside the domain of your current data (i.e., even if all the swans you have ever seen, say in England, are white, the hypothesis that a swan you see when you go to Australia will be black is still not falsified, so you are not justified in generalising that all swans are white).

    Broadly speaking, this is the approach to hypothesis testing adopted in physics today. A model is not falsified until it is falsified; that is, a model with one set of parameters is not damned by the failure of the same model with a different set of parameters. From a frequentist perspective, a separate p-value is computed for every set of parameters, so each point in parameter space is judged independently.

    A model is the theoretical framework plus the parameters, and you can only test them one set of parameters at a time. So each measurement rules out some parameters and not others, and does not tell you anything about the framework itself until *no* parameter set lets you fit the data.
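    For what it's worth, that per-point testing procedure can be sketched in a few lines of Python. Everything here is invented for illustration (the toy one-parameter model, the measurement, and its uncertainty); the point is just that each parameter point gets its own p-value, and the framework as a whole only dies when no point survives:

    ```python
    import math

    def two_sided_p(z):
        """Two-sided Gaussian p-value for a deviation of z standard deviations."""
        return math.erfc(abs(z) / math.sqrt(2))

    # Toy "theory": predicted signal strength depends on one free parameter g.
    # This relation is made up purely for illustration.
    def predicted_signal(g):
        return g ** 2

    observed, sigma = 1.0, 0.2  # made-up measurement with Gaussian uncertainty

    surviving = []
    for i in range(0, 31):
        g = i * 0.1                    # scan g from 0.0 to 3.0
        z = (observed - predicted_signal(g)) / sigma
        if two_sided_p(z) >= 0.05:     # point NOT excluded at 95% CL
            surviving.append(round(g, 1))

    # Each measurement carves away part of the parameter space; the framework
    # itself is only falsified once `surviving` is empty.
    print(surviving)
    ```

    Here a single measurement leaves a narrow band of g values unexcluded; only a further, incompatible measurement could empty the list and rule out the framework itself.
    
    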

    Yes, it is indeed hard to test it.


    So that is the orthodox story. But I share your general feelings on the matter, and consider it more of an indictment of falsificationism and frequentist hypothesis testing than I do a solid defense of current practice. Unfortunately, however, it is not really clear how to do better. It seems to me like Bayesian methods have the right philosophical angle (and can incorporate considerations of fine-tuning etc) but so far there is no agreement about how to correctly apply them to model comparison in this kind of grand setting.
  9. Apr 10, 2013 #8



    Staff: Mentor

    I see a big difference between the work of Robert Foot and some other models. As an example, compare those two:
    Paul Dirac: "for every fermion, there is an antiparticle with the same mass and spin, but opposite charge". Free parameters: zero. The whole theory can be falsified if no partner of the electron can be found.
    Robert Foot: "with mirror matter, with some new parameters with specific values, we get 50% of the SM Higgs signal at 144 GeV". Free parameters: at least as many as the number of measurements the model describes. I do not see any clear predictions in the paper apart from the precise amplitude the signal should have (relative to the SM).

    Sure, but if a model needs 10 parameters to describe 1 measurement I am "a bit" skeptical.
  10. Apr 10, 2013 #9
    Well, this is where yes, there are some principles that help guide our thinking, but what I meant was that there is no rigorous formalism that can be applied. The somewhat vague advice of falsificationist philosophy is that you should adopt as your working model whichever one is the *most* falsifiable, since this is the model that has been the most "bold", if you will, with its predictions. So yes, no one suggests there is any good reason to believe the predictions of theories you can tune the bejesus out of (and they are of course highly vulnerable to overfitting, as you point out), but nor are they excluded, so we should still study them just in case.

    Also, once we go beyond the Standard Model, it is pretty unclear which model best meets this criterion of being "most falsifiable", so casting the net broadly seems a good strategy to me.
  11. Apr 10, 2013 #10



    Staff: Mentor

    That is done on the experimental side anyway. 6 out of the last 8 ATLAS papers look for new particles, for example. CMS has published some papers about the SM Higgs boson recently; excluding those, most of its papers are searches for new particles as well.
  12. Apr 10, 2013 #11
    They certainly try, but (without checking...) if I remember correctly the vast majority of the new-particle searches are for SUSY particles, and they are mostly in weird simplified scenarios that make no theoretical sense and make strange assumptions, such as the new particle decaying 100% into one channel or another. Of course this is because they are just trying to cook up something simple enough to optimise a search around, with the hope that if they capture the phenomenology cleanly enough they might catch a glimpse of whatever the "real thing" is. This is their job after all: trying to find something. It is not really up to them to properly constrain realistic theories; it is better if they can produce broadly applicable results which theorists can use to constrain their models.

    Anyway, I guess I mean that it is good if their search strategies include some searches that would be useful for constraining even these weird, less popular models, and they are going to need the people working on those models to tell them what to look for. But I guess SUSY is still the go-to framework and will be the main focus of effort for a while yet (unless the experimentalists see something spectacularly non-SUSY...)
  13. Apr 10, 2013 #12

    Vanadium 50

    Staff Emeritus
    Science Advisor
    Education Advisor

    I don't think it's black and white, but a theory that when faced with a statistically insignificant excess claims "I predicted it all along" doesn't look so good when it goes away. Real chutzpah is when the theory is said to predict the excess going away as well.
  14. Apr 10, 2013 #13
    Yes, but it is still just an overfitting problem. People usually hedge their predictions, saying something more like "this excess *can* be explained by such and such theory", they practically never claim that an excess is an unambiguous prediction of the theory. Usually you just change the parameters a bit and boom, no more prediction of an excess. I think people should still publish these things though; if nothing else they tell you something about what parameters are going to be disfavoured when the excess does go away. It's not mind-blowingly interesting physics but something is still learned from doing it I think.

    ...within reason, that is. Excesses below 2 sigma are surely a waste of time, since the parameter points predicting them will not even be ruled out at 95% confidence when the excess vanishes (roughly speaking).
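    To put a rough number on that last point (a quick stdlib-Python check, nothing specific to this thread): the two-sided confidence level corresponding to an n-sigma Gaussian deviation is erf(n/√2), so a sub-2-sigma excess sits below the ~95.4% coverage of 2 sigma, and the points that predicted it cannot be excluded at 95% CL when it disappears:

    ```python
    import math

    def coverage(n_sigma):
        """Two-sided probability contained within +/- n_sigma of a Gaussian."""
        return math.erf(n_sigma / math.sqrt(2))

    for n in (1, 2, 3):
        print(f"{n} sigma -> {coverage(n):.4%} two-sided confidence")
    ```
    
    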