What potential solutions are there for the Hierarchy Problem?

  • Thread starter waterfall
In summary: The hierarchy problem is about the extreme sensitivity of the Higgs mass to physics at high energy scales. The one-loop correction to a scalar mass diverges quadratically with the cutoff, so keeping the Higgs mass far below the Planck scale requires fine-tuning the bare parameters to extraordinary precision. In the standard model, supersymmetry can remove this tuning: every known particle gets a superpartner whose loop contributions cancel the quadratic divergences. Technicolor and other approaches offer alternative solutions.
  • #1
waterfall
Let's deal with the easier problem. The hard problems are how to solve M-theory and how LQG can have exact GR as a solution.

I read about the Hierarchy Problem before in pop-sci books like Warped Passages and others, where they merely explained it using the idea of virtual particles, as if they were actual little balls. Now I'd like to delve more into the mathematical side. I dug through the archives here and saw the following description by nrqed:

"The connection is this. If we compute the one-loop correction to a scalar particle like the Higgs, we find a quadratic divergence (as opposed to the usual logarithmic divergences.). This means that to get a "low" mass (relative to the Planck mass which is, presumably, the natural scale for the cutoff) one needs a fine tuning to an extraordinary precision. Logarithmic divergences do not require such a high level of fine tuning since a log grows so slowly.

Supersymmetry takes care of this because the quadratic divergences introduced by the scalar loops are canceled by the quadratic divergences produced by fermion loops. There are no quadratic divergences at all in SUSY theories. In fact, almost all SUSY calculations are finite. There is only one class of logarithmically divergent graphs that are present and these can all be taken care of by a wavefunction renormalization.

Hope this helps"
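(For concreteness, the contrast nrqed describes has the standard schematic form below; this is textbook material rather than something from the thread. A scalar mass-squared picks up a correction that grows with the square of the cutoff Λ, while a fermion mass correction is proportional to the fermion mass itself times a slowly growing logarithm:

$$\delta m_H^2 \sim \frac{\lambda}{16\pi^2}\,\Lambda^2, \qquad \delta m_e \sim \frac{3\alpha}{4\pi}\, m_e \ln\frac{\Lambda}{m_e}.$$

With Λ near the Planck mass, the first term is enormous compared to the observed Higgs mass-squared, while the second stays a modest multiple of m_e, which is why only the scalar case forces a fine tuning.)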

My questions, which I haven't seen answered in the archives, are these.

1. We know our QED is treated as a free theory, with the interactions handled by perturbation, because we still don't have an exactly solvable interacting QED. But if we did, we could solve it directly, without perturbation. Would that make the Hierarchy Problem go away, since you would no longer have to deal with the quadratic divergences that come from the perturbative technique? An exactly solvable interacting QED wouldn't have any perturbative expansion or quadratic divergences, would it?

2. The LHC hasn't detected or seen any hint of the superpartners from supersymmetry. If they are never detected and the model is not true, what then would solve the Hierarchy Problem (if it is still present in the exactly solvable interacting QED theory)?
 
  • #2
waterfall said:
1. We know our QED is treated as a free theory, with the interactions handled by perturbation, because we still don't have an exactly solvable interacting QED. But if we did, we could solve it directly, without perturbation. Would that make the Hierarchy Problem go away, since you would no longer have to deal with the quadratic divergences that come from the perturbative technique? An exactly solvable interacting QED wouldn't have any perturbative expansion or quadratic divergences, would it?
"Pure QED" probably doesn't exist mathematically (except in a sense I will discuss), because of the Landau pole. The sense in which QED does exist mathematically, is as a quantum field theory which is defined at energies less than the Landau pole.

But first let's talk about what sort of QFTs do exist mathematically, up to unlimited energies. There might be some simple examples in the mathematical literature, but physically the most interesting is QCD, which is an "asymptotically free" theory. It is well-defined at high energies because the interaction grows weaker with high energy; the higher the energy goes, the more it resembles a "free theory", a completely non-interacting theory.
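(Here is a minimal numerical sketch of that contrast, using the standard one-loop running formula; the starting values and loop coefficients below are illustrative assumptions for the demo, not something from this thread.)

```python
import math

def run_coupling(alpha0, b, mu_ratio):
    """One-loop running: 1/alpha(mu) = 1/alpha(mu0) - b*ln(mu/mu0).
    b > 0 (QED-like): the coupling grows with energy and hits a Landau pole.
    b < 0 (QCD-like): the coupling shrinks with energy (asymptotic freedom)."""
    inv = 1.0 / alpha0 - b * math.log(mu_ratio)
    return 1.0 / inv if inv > 0 else float("inf")  # "inf" flags the Landau pole

b_qed = 2.0 / (3.0 * math.pi)    # one charged fermion in the loop
b_qcd = -7.0 / (2.0 * math.pi)   # = -(11 - 2*nf/3)/(2*pi) with nf = 6 quark flavors

for mu_ratio in (1e3, 1e15, 1e300):  # energy relative to the starting scale
    print(f"mu/mu0 = {mu_ratio:.0e}: "
          f"QED-like alpha = {run_coupling(1/137.0, b_qed, mu_ratio):.5f}, "
          f"QCD-like alpha_s = {run_coupling(0.3, b_qcd, mu_ratio):.5f}")
```

The QED-like coupling creeps upward and eventually blows up (the "inf" at the absurdly high last scale is the Landau pole), while the QCD-like coupling steadily shrinks toward a free theory.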

Let's suppose that most or all of the truly well-defined interacting QFTs are like QCD - they are free at high energies, but at lower energies there are interactions. At lower energies, you may not even be able to see the fundamental fields. In QCD, quarks and gluons are fundamental, but at low energies you only get mesons and baryons.

"QED" would then only exist as a low-energy approximate field theory (an "effective field theory"). But there might be an infinite number of "exact QFTs" which reduce to QED in some low energy range. It would only be as you increased the energy that the electron would be revealed as composite, or some other details took over and made it deviate from pure QED.

The ability to define QFTs that only work within a certain range of energies means that it may be difficult to work out the true fundamental theory (because different high-energy QFTs can look the same at low energies), but it has also allowed progress in particle physics to occur, even before we had a possible complete theory.
2. The LHC hasn't detected or seen any hint of the superpartners from supersymmetry. If they are never detected and the model is not true, what then would solve the Hierarchy Problem (if it is still present in the exactly solvable interacting QED theory)?
Let's compare the meaning of the Landau pole problem for QED and the hierarchy problem for the standard model.

No-one believes that the world is described just by QED - there are other forces. So the question of whether pure QED is defined at ultra-high energies is a mathematical question.

On the other hand, the standard model does describe all the data. Unlike pure QED, experimentally it is a candidate to be the exact and total theory of the world. So if you want to treat the standard model as the theory of everything, and not just an approximation, then the mathematical problems of the exact standard model are physical problems and not just mathematical ones.

However, there is a catch here. The standard model without gravity behaves in a certain way as you extrapolate upwards to infinite energies. But reality contains gravity, so really you need to consider how the standard model plus gravity behaves at high energies.

The standard view is that once you get to Planck-scale energies, particle interactions must include things like short-lived micro black holes. That is, when you collide, say, two protons at those ultra-high energies, sometimes they will create a black hole which then evaporates via Hawking radiation, and in fact the Hawking radiation from the death of the micro black hole will be the "output" of the proton-proton collision. Micro black holes aren't part of the standard model without gravity, so this energy scale represents the limit of the validity of the "standard model without gravity" as an approximate description of physics.

In discussions of the effective field theories which provide approximate descriptions of physics up to a particular energy scale, you will find references to "bare mass", "renormalized mass", "physical mass", and so on. These approximate theories contain parameters which are supposed to be mass, charge, etc, but if you then calculate the mass or charge that would be observed, you get quantities which get larger and larger, the more you take into account short-range processes. In the continuum limit, the observed mass and charge would be infinite, which is experimentally wrong. The "bare mass" is the mass parameter appearing in the basic equation, and then the calculated mass is the bare mass plus a huge correction.

The way people used to describe renormalization was to say that it involved assuming that the "bare mass", the mass parameter appearing in the basic equations, was a huge value which happened to offset the quantum corrections. That is, experimentally the observed mass m of a particle is tiny; theory says the observed mass is the bare mass m_bare plus a huge quantum correction M_correction; so therefore the bare mass must equal "observed mass minus the correction", i.e. m_bare = m - M_correction.

Even worse, the size of M_correction depends on how fine-grained you make your calculations. If you consider arbitrarily short-lived processes, M_correction ends up being infinite, so m_bare has to be "m - infinity".

Later on, the renormalization group came to the rescue somewhat, by describing in detail how M_correction varies as a function of energy scale. You adopt the philosophy of effective field theory; you say, of course the bare mass isn't actually "m_observed - infinity". What's really happening is that your approximate theory is incomplete, and at some high energy, new physical processes show up, and change how the effective mass (charge, etc) varies with energy, so that the "bare" quantities are more reasonable.

(I should probably add that this informal discussion of renormalization may have been simplified to the point of error in some places. I think it gives the correct impression, but in reality you're concerned with the Higgs field energy density, quantum corrections can be multiplicative rather than additive, and there's a whole universe of further technical details that I haven't bothered to check.)

So let us now return to the possibility that the standard model plus gravity is the true theory of everything. Let us suppose that the micro black holes I mentioned are the only new addition to particle physics that gravity introduces. Then this would be the place at which the philosophy of effective field theory runs out and we have to take seriously the parameters appearing directly in the fundamental equations.

Now if it turned out that for the standard model plus gravity, M_correction is still absolutely huge (a Planck-scale mass), that would be a problem, because it looks like m_Higgs is about 125 GeV (and it's definitely true that the masses of the W and Z particles are a little less than 100 GeV). So the bare mass parameter appearing in the theory will have to be something like m_observed - M_correction. That would be fine-tuning to about 1 part in 10^16, the ratio of m_observed to M_correction.

This is what people want to avoid - theories in which there are fundamental parameters along the lines of "m_Higgs = 1.000000000000000125 Planck masses", with the "1" out the front disappearing when the quantum corrections are taken into account, so that the observed mass is just .000000000000000125 Planck masses. This is just an example, the actual numbers appearing in a fine-tuned theory wouldn't be so neatly decimal, but they would have a similar degree of artificiality.
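(To put rough numbers on the tuning: here is a toy calculation that treats the correction to the Higgs mass-squared as c·Λ²/16π², with c an assumed O(1) coefficient; the schematic form is standard, the numbers are only illustrative.)

```python
import math

M_H = 125.0   # observed Higgs mass in GeV
c = 1.0       # assumed O(1) coefficient

for cutoff in (1e3, 1e10, 1.22e19):   # TeV scale ... Planck scale, in GeV
    corr_sq = c * cutoff**2 / (16 * math.pi**2)   # quadratic correction, in GeV^2
    bare_sq = M_H**2 - corr_sq                    # bare (mass)^2 needed to land on 125 GeV
    print(f"cutoff {cutoff:.2e} GeV: bare m^2 = {bare_sq:+.3e} GeV^2, "
          f"correction / observed = {corr_sq / M_H**2:.1e}")
```

With a Planck-scale cutoff, the mass-squared has to be tuned to roughly one part in 10^32, i.e. about one part in 10^16 in the mass itself, which is the figure quoted above; with a TeV-scale cutoff, no significant tuning is needed, which is why new physics near the TeV scale would dissolve the problem.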

So one way to avoid this is to have quantum corrections cancel themselves - there are negative and positive corrections and they mostly cancel out. Supersymmetry can give you that. Another way is to have an asymptotically free theory like QCD, in which the "deconfinement scale" is not too far above 100 GeV. This might imply that the Higgs, at those higher energies, just comes apart into "preons" or "subquarks", so the short-scale physics is completely different. This is the "technicolor" approach to the Higgs, and a lot of people seem to think it can't work for a Higgs at 125 GeV, but I think a few other assumptions are going into this dismissal.

Supersymmetry and technicolor would be the two main solutions proposed to the hierarchy problem. Then there are other approaches, like "little Higgs", an idea using "asymptotic safety", and I'm sure there are others.
 
  • #3
Mitchell, here's what atyy shared about higher spin (more than 2) gravity:

http://arxiv.org/pdf/1007.0435v3.pdf

If you have time, please go over it. It's about a gauge theory of gravity. But I thought gravity as geometry can't be modeled as a gauge theory like QED and the Yang-Mills fields, so I still don't understand what the paper is saying. If you have encountered it already, please say in a few words what in the world it is talking about. Thanks very much.
 
Last edited:
  • #4
mitchell porter said:
"Pure QED" probably doesn't exist mathematically (except in a sense I will discuss), because of the Landau pole. The sense in which QED does exist mathematically, is as a quantum field theory which is defined at energies less than the Landau pole.

But first let's talk about what sort of QFTs do exist mathematically, up to unlimited energies. There might be some simple examples in the mathematical literature, but physically the most interesting is QCD, which is an "asymptotically free" theory. It is well-defined at high energies because the interaction grows weaker with high energy; the higher the energy goes, the more it resembles a "free theory", a completely non-interacting theory.

Let's suppose that most or all of the truly well-defined interacting QFTs are like QCD - they are free at high energies, but at lower energies there are interactions. At lower energies, you may not even be able to see the fundamental fields. In QCD, quarks and gluons are fundamental, but at low energies you only get mesons and baryons.

"QED" would then only exist as a low-energy approximate field theory (an "effective field theory"). But there might be an infinite number of "exact QFTs" which reduce to QED in some low energy range. It would only be as you increased the energy that the electron would be revealed as composite, or some other details took over and made it deviate from pure QED.

The ability to define QFTs that only work within a certain range of energies means that it may be difficult to work out the true fundamental theory (because different high-energy QFTs can look the same at low energies), but it has also allowed progress in particle physics to occur, even before we had a possible complete theory. Let's compare the meaning of the Landau pole problem for QED and the hierarchy problem for the standard model.

No-one believes that the world is described just by QED - there are other forces. So the question of whether pure QED is defined at ultra-high energies is a mathematical question.

On the other hand, the standard model does describe all the data. Unlike pure QED, experimentally it is a candidate to be the exact and total theory of the world. So if you want to treat the standard model as the theory of everything, and not just an approximation, then the mathematical problems of the exact standard model are physical problems and not just mathematical ones.

However, there is a catch here. The standard model without gravity behaves in a certain way as you extrapolate upwards to infinite energies. But reality contains gravity, so really you need to consider how the standard model plus gravity behaves at high energies.

The standard view is that once you get to Planck-scale energies, particle interactions must include things like short-lived micro black holes. That is, when you collide, say, two protons at those ultra-high energies, sometimes they will create a black hole which then evaporates via Hawking radiation, and in fact the Hawking radiation from the death of the micro black hole will be the "output" of the proton-proton collision. Micro black holes aren't part of the standard model without gravity, so this energy scale represents the limit of the validity of the "standard model without gravity" as an approximate description of physics.

In discussions of the effective field theories which provide approximate descriptions of physics up to a particular energy scale, you will find references to "bare mass", "renormalized mass", "physical mass", and so on. These approximate theories contain parameters which are supposed to be mass, charge, etc, but if you then calculate the mass or charge that would be observed, you get quantities which get larger and larger, the more you take into account short-range processes. In the continuum limit, the observed mass and charge would be infinite, which is experimentally wrong. The "bare mass" is the mass parameter appearing in the basic equation, and then the calculated mass is the bare mass plus a huge correction.

The way people used to describe renormalization was to say that it involved assuming that the "bare mass", the mass parameter appearing in the basic equations, was a huge value which happened to offset the quantum corrections. That is, experimentally the observed mass m of a particle is tiny; theory says the observed mass is the bare mass m_bare plus a huge quantum correction M_correction; so therefore the bare mass must equal "observed mass minus the correction", i.e. m_bare = m - M_correction.

Even worse, the size of M_correction depends on how fine-grained you make your calculations. If you consider arbitrarily short-lived processes, M_correction ends up being infinite, so m_bare has to be "m - infinity".

Later on, the renormalization group came to the rescue somewhat, by describing in detail how M_correction varies as a function of energy scale. You adopt the philosophy of effective field theory; you say, of course the bare mass isn't actually "m_observed - infinity". What's really happening is that your approximate theory is incomplete, and at some high energy, new physical processes show up, and change how the effective mass (charge, etc) varies with energy, so that the "bare" quantities are more reasonable.


What you are saying above is that in the renormalization group, instead of

M_correction = infinity
m_bare = m - infinity = -infinity

one simply assumes m_bare is some definite value?
And what about M_correction: how does its value become finite?

But I went to many references after reading this message. In the book The Story of Light, it was mentioned:

"With the bare mass also taken to be of infinite value, the two infinities - the infinities coming out of the perturbation calculations and the infinity of the bare mass - cancel each other out leaving us with a finite value for the actual, physical mass of an electron".

So, in more detailed accounts of renormalization, it is not just m_bare = m - infinity; rather, the infinity coming out of the perturbation calculation and the infinity of the bare mass cancel, leaving m_observed. Do you agree?

Now, about renormalization group calculations. According to http://fds.oup.com/www.oup.co.uk/pdf/0-19-922719-5.pdf [Broken], the fine structure constant, for example, is altered, and this altered value is entered into the perturbation equation, as are the mass and charge. But how do you make a power series with an altered fine structure constant no longer diverge? A Landau pole is still a Landau pole, whatever the value of the fine structure constant.

Also you said "What's really happening is that your approximate theory is incomplete, and at some high energy, new physical processes show up, and change how the effective mass (charge, etc) varies with energy, so that the "bare" quantities are more reasonable.".

What is an example of a new physical process showing up at high energy that can change how the effective mass varies with energy? I have a rough idea of the renormalization group; I checked out many references for hours, but I want to get the essence and gist of it. I think the details of how new physical processes showing up at high energy change the way the effective mass (and the fine structure constant) varies with energy are the heart of the understanding.

Also, I think you should write a book like "An Idiot's Guide to QFT" that includes the renormalization group for laymen, which almost no popular book touches. They only reach the usual infinity-minus-infinity story and then suddenly jump to string theory.

Thanks. I analyzed every paragraph of yours with deep thought.


 
Last edited by a moderator:
  • #5
I have tried to think of the simplest way to explain this...

Quantum field theory is about randomly fluctuating fields. There are waves in the fields. The mass of a particle is a statement about how the waves in its field travel. The charge of a particle is a statement about how the waves in its field interact with the waves in other fields.

What we call "bare mass" and "bare charge" are actually numbers which tell us how to calculate the probabilities of the basic possible fluctuations in the fields. "Physical mass" and "physical charge" are statements about how the waves behave on average. So "physical mass" is calculated using "bare mass" but it's not exactly the same thing.

When you calculate the average behaviors of the fields, as a first approximation you may only consider situations where the fields change slowly. Then for a better approximation you might add situations where they oscillate a little faster; for an even better approximation, even faster oscillations. But for a divergent quantum field theory, when you take this to the limit and try to consider arbitrarily fast oscillations in the fields, you get infinite values for the physical quantities - unless you add specially constructed infinities ("divergent power series") to the "bare" parameters you use to calculate the probabilities, constructed specifically to cancel out the infinite part of the calculated result.

But even before you "go to infinity", you can consider the way that the predictions of physical mass, charge, ... vary, as you vary the "cutoff" - the highest frequency of field oscillation that you will include in your approximation. This is what renormalization group theory describes - the way that physical values vary with the cutoff. For a particular quantum field theory, even before you set the parameters, it is possible to figure out the exact way that physical quantities vary with the cutoff on field oscillations. The fact that the predicted mass goes to infinity if the "cutoff goes to infinity" (i.e. if you include even infinitely fast field oscillations) is just the extreme extrapolation of this "renormalization group flow".
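(A toy version of that flow, reusing the schematic quadratic correction from the sketch earlier in the thread, with an assumed O(1) coefficient: hold the bare parameters fixed and watch the predicted physical mass-squared grow as faster and faster oscillations are included.)

```python
import math

bare_m_sq = 125.0**2   # bare (mass)^2 in GeV^2, held fixed for the demo
c = 1.0                # assumed O(1) coefficient

# predicted physical (mass)^2 as ever-faster field oscillations are included
for cutoff in (1e3, 1e6, 1e9, 1e12):   # frequency cutoff, in GeV
    phys_m_sq = bare_m_sq + c * cutoff**2 / (16 * math.pi**2)
    print(f"cutoff {cutoff:.0e} GeV -> physical m^2 ~ {phys_m_sq:.3e} GeV^2")
```

Removing the cutoff entirely sends the prediction to infinity, which is the extreme extrapolation of the flow described above.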

No-one seriously believes that the bare parameters are infinite; that's just a sign that the theory is incomplete. But a theory which gives useless infinite answers in the extreme limit of no cutoff, can still give useful answers over a finite range of approximations. What this means is that there is some highest frequency of field oscillations beyond which something extra happens, e.g. new fields, or new interactions between the known fields. So really, talking about infinite bare masses is just a reductio ad absurdum. What we should care about is the bare mass implied at the limit of the theory's validity.

I have talked about the physical mass at a given energy scale (which is the same thing as a frequency cutoff) as being equal to bare mass plus quantum corrections. So let's say we measure the physical mass of the standard model Higgs boson to be 125 GeV at low energies. And suppose we think that the standard model becomes incomplete at some high energy - that new fields or forces show up, maybe micro black holes, maybe an X boson that causes proton decay - but we don't know at what energy this happens, exactly. That means we can't say exactly how big the quantum corrections at that scale are, just that they are big. But that is enough to imply a fine-tuning problem, because it means that the bare mass or uncorrected mass of the Higgs at that scale is going to be "125 GeV plus millions or billions of GeV", with the millions or billions part being just enough to cancel out the quantum corrections, so that the final physical mass is the small 125 GeV that we see.

I know I was using a minus sign before, but it's simpler to think of the finetuned bare mass as a large positive number, and the quantum corrections as something that you subtract from that. So I should have said bare mass = physical mass plus quantum corrections, and physical mass = bare mass minus quantum corrections. The point is that if the quantum corrections are huge, then the bare mass also has to be huge, but finetuned enough to give the tiny physical mass when the quantum corrections are taken into account. But it's just more rational to expect that the standard model is incomplete, and that the quantum corrections aren't actually huge - that physical processes not part of the standard model (like supersymmetry) cancel out most of the quantum corrections coming from the standard model.

Incidentally, finetuning is only the first part of the hierarchy problem. Even if you have a theory where you don't have huge quantum corrections destabilizing the Higgs mass, there is still the "problem" of why the Higgs mass is so small compared to the Planck mass. There is a principle called "naturalness", which says that the basic numbers appearing in a theory ought to be of order 1. So in a theory with really small quantum corrections, you no longer have to finetune the Higgs mass to keep it small, but you still might wonder why it's small rather than big. My attitude to the naturalness principle is that it only has a limited usefulness. You should expect that the values of the fundamental constants have an explanation, but that doesn't mean you should expect the fundamental constants to be small, or that you should rule out theories which would have large fundamental constants in them. In other words, having very large or very small numbers is not intrinsically a problem, it's only when those numbers also have to be finetuned that it is clearly a problem.
 
  • #6
I should say something about why there are "quantum corrections". It's because of interactions. A bare electron has a charge, but the fluctuations of the electron field produce virtual electrons and positrons which have a charge too, and then those virtual particles produce their own charged virtual particles, and so on. So the physical charge is the bare charge plus the charges of all the virtual particles, and the virtual particles result from the interaction between the electron field and the photon field.

If you try to naively figure out the physical charge, it comes out as infinite. Imposing a cutoff means that you don't allow yourself to consider infinitely nested sets of virtual particles. So the predicted physical charge won't be infinite, but it will still be very large. Renormalization is a practical philosophy which says, we want to match experiment, so we just assume that the bare charge is whatever it has to be to match the physical charge when the quantum corrections are added. The neat thing about renormalizable field theories is that, once you do this assumption for mass, charge, and maybe a few other fundamental quantities, they become predictive again. If you want to predict something about how five electrons interact with each other, you don't and can't calculate a separate "bare" nonsense prediction which then gets corrected by experiment; once you have correctly renormalized mass and renormalized charge, you now have a functioning model of the physical electron at energy scales of interest, and you can make predictions about interacting electrons using that model of the individual renormalized physical electron.
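(The textbook version of this for the electron's charge: a single loop of virtual pairs screens the bare charge, and summing the nested bubbles, the clouds within clouds, gives a geometric series, so the effective coupling at momentum scale Q is, schematically,

$$\alpha_{\text{eff}}(Q) = \frac{\alpha(m_e)}{1 - \dfrac{2\alpha(m_e)}{3\pi}\ln\dfrac{Q}{m_e}},$$

which grows with Q and, taken literally, blows up at the Landau pole discussed earlier in the thread.)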

What this seems to be saying is that there is something fractal about particles, because they are surrounded by these clouds of virtual particles which have their own sub-clouds of second-order virtual particles, and so on. But this fractalness isn't infinitely deep; we expect that quantum gravity provides an objective cutoff to this behavior, and also that at energy scales somewhere short of the Planck scale, new heavy particles show up in the fractal cloud, corresponding to physics beyond the standard model. And the renormalization group is just a way to talk about the charge and mass of a fractal cloud of virtual particles, even when you don't know what the bare mass and bare charge of a single bare particle would be.
 
  • #7
A different philosophy:

http://fds.oup.com/www.oup.co.uk/pdf/0-19-922719-5.pdf [Broken]
http://arxiv.org/abs/hep-th/9210046
http://www.solvayinstitutes.be/events/doctoral/Bilal.pdf [Broken]

"What if one allowed to include non-renormalizable interactions ... More generally, a non-renormalizable interaction ... Thus, as long as |pj| << M one can neglect the effect of these non-renormalizable interactions."

"Theories like QED are presently thought to be only effective theories, in the sense that they provide the effective description of electromagnetic interactions at energies that are low compared to some scale at which new physics could be expected, like e.g. the grand unification scale of 1015GeV or even the Planck scale of 1019 GeV. Such an effective theory then has an effective Lagrangian obtained by “integrating out” the very heavy additional fields that are present in such theories. (We will discuss such integrating out a bit in the next section). This necessarily results in the generation of (infinitely) many non-renormalizable interactions in this effective Lagrangian ... . From the previous argument it is then clear that at energies well below this scale these additional non-renormalizable interactions are completely irrelevant, and this is why we only “see” the renormalizable interactions. Our “low-energy” world is described by renormalizable theories like QED not because such theories are somehow better behaved, but because these are the only relevant ones at low energies: Renormalizable interactions are those that are relevant at low energies, while non-renormalizable interactions are irrelevant at low energies.
 
Last edited by a moderator:
  • #8
mitchell porter said:
I should say something about why there are "quantum corrections". It's because of interactions. A bare electron has a charge, but the fluctuations of the electron field produce virtual electrons and positrons which have a charge too, and then those virtual particles produce their own charged virtual particles, and so on. So the physical charge is the bare charge plus the charges of all the virtual particles, and the virtual particles result from the interaction between the electron field and the photon field.

If you try to naively figure out the physical charge, it comes out as infinite. Imposing a cutoff means that you don't allow yourself to consider infinitely nested sets of virtual particles. So the predicted physical charge won't be infinite, but it will still be very large. Renormalization is a practical philosophy which says, we want to match experiment, so we just assume that the bare charge is whatever it has to be to match the physical charge when the quantum corrections are added. The neat thing about renormalizable field theories is that, once you do this assumption for mass, charge, and maybe a few other fundamental quantities, they become predictive again. If you want to predict something about how five electrons interact with each other, you don't and can't calculate a separate "bare" nonsense prediction which then gets corrected by experiment; once you have correctly renormalized mass and renormalized charge, you now have a functioning model of the physical electron at energy scales of interest, and you can make predictions about interacting electrons using that model of the individual renormalized physical electron.

What this seems to be saying is that there is something fractal about particles, because they are surrounded by these clouds of virtual particles which have their own sub-clouds of second-order virtual particles, and so on. But this fractalness isn't infinitely deep; we expect that quantum gravity provides an objective cutoff to this behavior, and also that at energy scales somewhere short of the Planck scale, new heavy particles show up in the fractal cloud, corresponding to physics beyond the standard model. And the renormalization group is just a way to talk about the charge and mass of a fractal cloud of virtual particles, even when you don't know what the bare mass and bare charge of a single bare particle would be.

Thanks very much for your detailed explanation.

Have you come across any papers that describe new physics at low energy? I think the mistake of our search for unification in physics research is focusing only on new physics at high energy (small scales). Are you sure there is nothing lurking at low energy (which I take to mean large scales)? What if the quantum vacuum were really a Dirac sea of electrons, or some other exotic configuration? Then by controlling the vacuum, we could change the physics even at low energy. Please share any papers or concepts you have encountered with a similar theme about new physics at low energy, or state why you think it's "impossible". Thanks.
 
Last edited:
  • #9
atyy said:
A different philosophy:

http://fds.oup.com/www.oup.co.uk/pdf/0-19-922719-5.pdf [Broken]
http://arxiv.org/abs/hep-th/9210046
http://www.solvayinstitutes.be/events/doctoral/Bilal.pdf [Broken]

"What if one allowed to include non-renormalizable interactions ... More generally, a non-renormalizable interaction ... Thus, as long as |pj| << M one can neglect the effect of these non-renormalizable interactions."

"Theories like QED are presently thought to be only effective theories, in the sense that they provide the effective description of electromagnetic interactions at energies that are low compared to some scale at which new physics could be expected, like e.g. the grand unification scale of 1015GeV or even the Planck scale of 1019 GeV. Such an effective theory then has an effective Lagrangian obtained by “integrating out” the very heavy additional fields that are present in such theories. (We will discuss such integrating out a bit in the next section). This necessarily results in the generation of (infinitely) many non-renormalizable interactions in this effective Lagrangian ... . From the previous argument it is then clear that at energies well below this scale these additional non-renormalizable interactions are completely irrelevant, and this is why we only “see” the renormalizable interactions. Our “low-energy” world is described by renormalizable theories like QED not because such theories are somehow better behaved, but because these are the only relevant ones at low energies: Renormalizable interactions are those that are relevant at low energies, while non-renormalizable interactions are irrelevant at low energies.

atyy, I noticed in your profile that you are a biologist studying "living organisms"... why are you interested in high energy physics? Do you think it has relevance to living organisms? Quantum entanglement has been found in photosynthesis in plants. Would there be a similar counterpart in ATP synthesis in humans? And could there be new physics lurking in living organisms? Do you believe in Penrose's microtubule idea that our subjective experience is related to Planck-scale physics? Funding in physics can be split across these, so if you can debunk the idea, those lines can be drained of resources and everything focused on string theory or LQG.
 
Last edited by a moderator:
  • #10
waterfall said:
atyy, I noticed in your profile that you are a biologist studying "living organisms"... why are you interested in high energy physics? Do you think it has relevance to living organisms? Quantum entanglement has been found in photosynthesis in plants. Would there be a similar counterpart in ATP synthesis in humans? And could there be new physics lurking in living organisms? Do you believe in Penrose's microtubule idea that our subjective experience is related to Planck-scale physics? Funding in physics can be split across these, so if you can debunk the idea, those lines can be drained of resources and everything focused on string theory or LQG.

My annoying friends read Smolin and Woit a few years back and kept on talking about it for weeks on end. I had no choice in the end but to "see the movie" myself to understand what the hell they were talking about.
 
  • #11
waterfall said:
Thanks very much for your detailed explanation.

Have you come across any papers that describe new physics at low energy? I think the mistake of our search for unification in physics research is focusing only on new physics at high energy (small scales). Are you sure there is nothing lurking at low energy (which I take to mean large scales)? What if the quantum vacuum were really a Dirac sea of electrons, or some other exotic configuration? Then by controlling the vacuum, we could change the physics even at low energy. Please share any papers or concepts you have encountered with a similar theme about new physics at low energy, or state why you think it's "impossible". Thanks.


Mitchell, what I was asking above was whether there is something beneath the quantum vacuum... as if the quantum vacuum were merely ripples on the surface of an ocean, with an entire ocean underneath. To avoid defining the quantum vacuum again, let me refer to the thread https://www.physicsforums.com/showthread.php?t=343049 where Science Advisor Born2bwire mentioned: "Since the vacuum state has infinite energy, it has infinite photons. Every time we add energy into the electromagnetic fields, we just pull a photon out of the vacuum state." True? If true, it has implications: there is a source, and new physics there.

Born2bwire wrote:

A quantum vacuum is simply a fancy name for the ground state. That is, it is the lowest energy state of the system. The interesting thing about the electric and magnetic fields in quantum electrodynamics is that their ground state is represented by zero photons. However, their ground state is not zero energy. In fact, in a completely empty space, the quantum vacuum can have an infinite number of frequencies of fluctuating fields occurring, a continuous spectrum. Each frequency represents a mode, a possible excitation of the fields in the system, and each mode has a certain discrete energy density. So the quantum vacuum has infinite energy if we do not restrict the possible frequencies of electric and magnetic fields. One way to think of this is that in quantum electrodynamics, we think of the photons as being the energy packets (quanta) that occur when we excite the electromagnetic waves. Each energy level of the electric and magnetic fields represents an additional photon being excited. These photons "come" from the vacuum state. Since the vacuum state has infinite energy, it has infinite photons. Every time we add energy into the electromagnetic fields, we just pull a photon out of the vacuum state. It's an interesting idea; I recall it was Dirac, I think, who mentioned it.

Where this energy comes from we do not say. All we know is that in quantum mechanics, we often get systems where the energy cannot go to zero. Since we have an energy "bath" that we can draw upon, it forces fluctuations in the system (this is an idea from the fluctuation-dissipation theorem that I mentioned earlier). For example, let's say I have a system that draws energy from a heat bath that surrounds it. It is constantly drawing energy from the bath but it cannot put energy back in. We find that this stipulates that the system must have fluctuations. In the same way, we must have fluctuations in the vacuum state as well. But since these fluctuations are about a mean of zero, they are not measurable in the macroscopic world. So we never truly see them. Sure, we can get non-zero measurements should we attempt them, but statistically we will only get a zero measurement in the long run.

So again, we can't say where the energy comes from; it's a definition of the quantum system. The fluctuations of the field can be explained in a few ways. We can show that it must occur via the mathematical rigor of quantum mechanics. The closest "physical" reason I have found is that the vacuum energy is an energy bath that couples with the electric and magnetic fields. Because of this, the fields must have fluctuations, as shown by statistical mechanics. Photons are nothing more than the energy quanta of the electric and magnetic fields. We can think of them as being drawn out of the energy of the vacuum state. When they are created they come from the vacuum, and when annihilated they return. Of course this may not be a truly physical picture. Anytime we add energy to the fields we create photons. Since they are nothing more than massless particles of energy/momentum, it is hard to say what they are created of. So if I dump energy into the fields using an antenna, am I drawing the photons up from the vacuum or just creating them from the energy injected by my antenna?

As for virtual particles, they are not real. It is hard to say what they are but I have not heard of them as being any physically real object. They can be useful calculation tools though in Feynman diagrams. In the quantum vacuum, we can represent the vacuum fluctuations as virtual photons. The idea is that we momentarily create the photon let it interact and then destroy it. In the end, because we created and destroyed the particle we add no energy to the fields, but by allowing the particle to interact it is the same as allowing the field fluctuations to have interacted. For example, in the Casimir force, we can calculate it from the force induced by the fluctuation fields or we could calculate it as the "radiation" pressure force of the equivalent virtual photons. The results are identical.
 
  • #12
I certainly don't believe in the Dirac sea. A positron (anti-electron) is a particle of its own, it's not just a hole in an infinitely deep "sea" of electrons filling space.

I also don't believe in the related idea of there being an infinite amount of energy in every small region of space. Because of the uncertainty principle, the ground state of a quantum oscillator has a "zero-point energy", and the Fourier analysis of a continuum field models the field as the sum of an infinite continuum of oscillators (the Fourier modes), so if you naively apply the quantum approach, there will be a zero-point energy in each of the infinitely many modes, and you deduce that the quantum field will have infinite energy even in its ground state.
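(In formulas: each field mode of wavevector k is an oscillator with ground-state energy ħω_k/2, so for a massless field with ω_k = |k| the naive vacuum energy density is

$$\rho_{\text{vac}} = \int^{\Lambda} \frac{d^3k}{(2\pi)^3}\,\frac{\hbar\omega_k}{2} \sim \Lambda^4,$$

which diverges as the fourth power of the mode cutoff Λ; this is the infinity referred to in the next paragraph.)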

But there's no real reason to believe that that infinite energy is real. It's just one of the infinities that you have to cancel out with infinite bare parameters in renormalization. In a supersymmetric field theory, the zero-point energy is zero because the ZPE of the fermionic oscillators cancels the ZPE of the bosonic oscillators. Or maybe there's some other way to get rid of it, like Sundrum's "energy-parity symmetry".
 

1. What is the Hierarchy Problem?

The Hierarchy Problem is a puzzle in physics that arises from the large discrepancy between the weak force and gravity. It asks why the weak force is about 10^32 times stronger than gravity, even though both are fundamental forces whose strengths are tied to the masses of particles.

2. Why is the Hierarchy Problem important?

The Hierarchy Problem is important because it suggests that there may be unknown forces or particles at play that we do not yet understand. It also raises questions about the fundamental nature of gravity and the Standard Model of particle physics.

3. What are some proposed solutions for the Hierarchy Problem?

Some potential solutions for the Hierarchy Problem include supersymmetry, extra dimensions, and the anthropic principle. These theories suggest that there may be additional particles or dimensions that can help explain the large difference between the weak and gravitational forces.

4. How does supersymmetry address the Hierarchy Problem?

Supersymmetry is a theory that proposes the existence of a new symmetry between particles and their superpartners. It predicts that every known particle has a superpartner, and the superpartners' quantum corrections cancel the dangerous quadratic corrections to the Higgs mass, which could solve the Hierarchy Problem.

5. Are there any experimental tests for proposed solutions to the Hierarchy Problem?

Yes, there are ongoing experiments at the Large Hadron Collider (LHC) and other particle accelerators that are searching for evidence of supersymmetry and other proposed solutions to the Hierarchy Problem. However, so far, no definitive evidence has been found, and the Hierarchy Problem remains an open question in physics.
