
#1
Feb18-12, 05:07 PM

P: 381

Let's deal with the easier problem. The hard problems are how to solve M-theory and how LQG can have exact GR as a solution.
I read about the hierarchy problem before in pop-sci books like Warped Passages and others, where they merely explained it using the idea of virtual particles as if they were actual little balls. Now I'd like to delve more into the mathematical side. I dug up the archives here and found the following description by nrqed:

"The connection is this. If we compute the one-loop correction to a scalar particle like the Higgs, we find a quadratic divergence (as opposed to the usual logarithmic divergences). This means that to get a "low" mass (relative to the Planck mass, which is presumably the natural scale for the cutoff), one needs fine-tuning to an extraordinary precision. Logarithmic divergences do not require such a high level of fine-tuning, since a log grows so slowly. Supersymmetry takes care of this because the quadratic divergences introduced by the scalar loops are cancelled by the quadratic divergences produced by fermion loops. There are no quadratic divergences at all in SUSY theories. In fact, almost all SUSY calculations are finite. There is only one class of logarithmically divergent graphs, and these can all be taken care of by a wavefunction renormalization. Hope this helps."

My questions, which I haven't seen answered in the archives, are these:

1. We know our QED is non-interacting, with the interactions handled by perturbation theory. This is because we still don't have an exactly solvable interacting QED. But when we do, we could solve it directly without perturbation. Would this make the hierarchy problem go away, because you would no longer have to deal with the quadratic divergences that came from the perturbation technique? A purely interacting QED won't have any perturbative expansion or quadratic divergences, will it?

2. The LHC hasn't detected or seen any hint of the superpartners (from supersymmetry). If they are never detected and the model is not true, what would then solve the hierarchy problem (if it is still present in the exact interacting QED theory)? 



#2
Feb19-12, 06:16 AM

P: 748

But first let's talk about what sort of QFTs do exist mathematically, up to unlimited energies. There might be some simple examples in the mathematical literature, but physically the most interesting is QCD, which is an "asymptotically free" theory. It is well-defined at high energies because the interaction grows weaker with energy; the higher the energy goes, the more it resembles a "free theory", a completely non-interacting theory.

Let's suppose that most or all of the truly well-defined interacting QFTs are like QCD: free at high energies, interacting at lower energies. At lower energies, you may not even be able to see the fundamental fields. In QCD, quarks and gluons are fundamental, but at low energies you only get mesons and baryons. "QED" would then only exist as a low-energy approximate field theory (an "effective field theory"). But there might be an infinite number of "exact QFTs" which reduce to QED in some low-energy range. It would only be as you increased the energy that the electron would be revealed as composite, or some other details took over and made it deviate from pure QED.

The ability to define QFTs that only work within a certain range of energies means that it may be difficult to work out the true fundamental theory (because different high-energy QFTs can look the same at low energies), but it has also allowed progress in particle physics to occur, even before we had a candidate complete theory. No one believes that the world is described just by QED; there are other forces. So the question of whether pure QED is defined at ultra-high energies is a mathematical question. On the other hand, the standard model does describe all the data. Unlike pure QED, it is experimentally a candidate to be the exact and total theory of the world. 
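The asymptotic freedom described above can be illustrated with the standard one-loop running formula for the QCD coupling. This is only a sketch: the reference value alpha_s(M_Z) ≈ 0.118 and n_f = 5 are conventional inputs, and higher-loop effects and flavour thresholds are ignored.

```python
import math

def alpha_s(Q, alpha_ref=0.118, M_ref=91.2, n_f=5):
    """One-loop running QCD coupling; alpha_ref is alpha_s at the Z mass (GeV)."""
    b0 = 11 - 2 * n_f / 3  # one-loop beta-function coefficient
    return alpha_ref / (1 + (b0 / (2 * math.pi)) * alpha_ref * math.log(Q / M_ref))

# The coupling shrinks as the energy Q grows: asymptotic freedom.
for Q in (10, 91.2, 1000, 1e16):
    print(f"Q = {Q:>8} GeV   alpha_s = {alpha_s(Q):.4f}")
```

Run downwards in energy instead and the coupling grows, which is the qualitative reason quarks confine at low energies.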
So if you want to treat the standard model as the theory of everything, and not just an approximation, then the mathematical problems of the exact standard model are physical problems and not just mathematical ones. However, there is a catch here. The standard model without gravity behaves in a certain way as you extrapolate upwards to infinite energies. But reality contains gravity, so really you need to be considering how the standard model plus gravity behaves at high energies. The standard view is that once you get to Planck-scale energies, particle interactions must include things like short-lived micro black holes. That is, when you collide, say, two protons at those ultra-high energies, sometimes they will create a black hole which then evaporates via Hawking radiation, and in fact the Hawking radiation from the death of the micro black hole will be the "output" of the proton-proton collision. Micro black holes aren't part of the standard model without gravity, so this energy scale represents the limit of the validity of the "standard model without gravity" as an approximate description of physics.

In discussions of the effective field theories which provide approximate descriptions of physics up to a particular energy scale, you will find references to "bare mass", "renormalized mass", "physical mass", and so on. These approximate theories contain parameters which are supposed to be mass, charge, etc., but if you then calculate the mass or charge that would be observed, you get quantities which get larger and larger, the more short-range processes you take into account. In the continuum limit, the observed mass and charge would be infinite, which is experimentally wrong. The "bare mass" is the mass parameter appearing in the basic equation, and the calculated mass is the bare mass plus a huge correction. 
The way people used to describe renormalization was to say that it involved assuming that the "bare mass", the mass parameter appearing in the basic equations, was a huge value which happened to offset the quantum corrections. That is: experimentally, the observed mass m of a particle is tiny; theory says the observed mass is the bare mass m_bare plus a huge quantum correction M_correction; so therefore the bare mass must equal "observed mass minus the correction", i.e. m_bare = m - M_correction. Even worse, the size of M_correction depends on how fine-grained you make your calculations. If you consider arbitrarily short-lived processes, M_correction ends up being infinite, so m_bare has to be "m - infinity".

Later on, the renormalization group came to the rescue somewhat, by describing in detail how M_correction varies as a function of energy scale. You adopt the philosophy of effective field theory; you say, of course the bare mass isn't actually "m_observed - infinity". What's really happening is that your approximate theory is incomplete, and at some high energy, new physical processes show up and change how the effective mass (charge, etc.) varies with energy, so that the "bare" quantities are more reasonable.

(I should probably add that this informal discussion of renormalization may have been simplified to the point of error in some places. I think it gives the correct impression, but in reality you're concerned with the Higgs field energy density, quantum corrections can be multiplicative rather than additive, and there's a whole universe of further technical details that I haven't bothered to check.)

So let us now return to the possibility that the standard model plus gravity is the true theory of everything. Let us suppose that the micro black holes I mentioned are the only new addition to particle physics that gravity introduces. 
Then this would be the place at which the philosophy of effective field theory runs out and we have to take seriously the parameters appearing directly in the fundamental equations. Now if it turned out that for the standard model plus gravity, M_correction is still absolutely huge (a Planck-scale mass), that would be a problem, because it looks like m_Higgs is about 125 GeV (and it's definitely true that the masses of the W and Z particles are a little less than 100 GeV). So the bare mass parameter appearing in the theory would have to be something like m_observed - M_correction. That would be fine-tuning to about 1 part in 10^16, the ratio of m_observed to M_correction.

This is what people want to avoid: theories in which there are fundamental parameters along the lines of "m_Higgs = 1.000000000000000125 Planck masses", with the "1" out the front disappearing when the quantum corrections are taken into account, so that the observed mass is just .000000000000000125 Planck masses. This is just an example; the actual numbers appearing in a fine-tuned theory wouldn't be so neatly decimal, but they would have a similar degree of artificiality.

So one way to avoid this is to have the quantum corrections cancel themselves: there are negative and positive corrections and they mostly cancel out. Supersymmetry can give you that. Another way is to have an asymptotically free theory like QCD, in which the "deconfinement scale" is not too far above 100 GeV. This might imply that the Higgs, at those higher energies, just comes apart into "preons" or "subquarks", so the short-scale physics is completely different. This is the "technicolor" approach to the Higgs, and a lot of people seem to think it can't work for a Higgs at 125 GeV, but I think a few other assumptions are going into this dismissal. Supersymmetry and technicolor are the two main solutions proposed to the hierarchy problem. 
Then there are other approaches, like "little Higgs", an idea using "asymptotic safety", and I'm sure there are others. 
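A toy numerical sketch of the fine-tuning arithmetic above, using the convention bare mass = observed mass + correction (the numbers are schematic: the correction is simply set to a round Planck-scale value):

```python
from fractions import Fraction

M_corr = Fraction(10) ** 19   # quantum correction, of order the Planck mass (GeV)
m_obs  = Fraction(125)        # observed Higgs mass (GeV)
m_bare = m_obs + M_corr       # bare parameter the fundamental theory would need

# Fractional precision to which the cancellation must work:
tuning = m_obs / M_corr
print(f"m_bare/M_corr - 1 = {float(tuning):.2e}")
```

Exact rationals are used deliberately: a 64-bit float cannot even represent m_bare = 10000000000000000125 GeV, because the trailing "125" falls below machine precision, which is itself a vivid way to see how delicate the required cancellation is.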



#3
Feb19-12, 06:56 PM

P: 381

Mitchell, here's what atyy shared about higher spin (more than 2) gravity:
http://arxiv.org/pdf/1007.0435v3.pdf If you have time, please go over it. It's about a gauge theory of gravity. But I thought gravity-as-geometry can't be modelled as a gauge theory like QED and the Yang-Mills fields, so I still don't understand what the paper is saying. If you have encountered it already, please say in a few words what in the world it is talking about. Thanks very much. 



#4
Feb19-12, 08:57 PM

P: 381

What you are saying above is that in the renormalization group, instead of M_correction = infinity and m_bare = m - infinity = -infinity, one simply assumes m_bare is some definite value? What about M_correction; how does its value get lowered to something finite?

I went to many references after reading this message. In the book The Story of Light, it was mentioned: "With the bare mass also taken to be of infinite value, the two infinities - the infinities coming out of the perturbation calculations and the infinity of the bare mass - cancel each other out, leaving us with a finite value for the actual, physical mass of an electron". So in more detailed accounts of renormalization, it is not just m_bare = m - infinity, but rather the perturbation-calculation infinity minus the infinite bare mass = m_observed. Do you agree?

Now, about renormalization group calculations. According to http://fds.oup.com/www.oup.co.uk/pdf/0199227195.pdf, the fine structure constant, for example, is altered, and this altered value is entered into the perturbation equation, as are the mass and charge. But how do you make a power series with an altered fine structure constant no longer diverge? A Landau pole is still a Landau pole whatever the value of the fine structure constant.

Also, you said: "What's really happening is that your approximate theory is incomplete, and at some high energy, new physical processes show up, and change how the effective mass (charge, etc) varies with energy, so that the "bare" quantities are more reasonable." What is an example of new physical processes showing up at high energy that can make the effective mass vary with energy? I have a rough idea of the renormalization group, and I've checked out many references for hours, but I want to get the essence and gist of it. 
I think the details of how new physical processes showing up at high energy can make the effective mass vary with energy (and likewise the fine structure constant) are the heart of the understanding. Also, I think you should write a book like "An Idiot's Guide to QFT", one that would include the renormalization group for laymen, which almost no books for laymen touch. They only get as far as the usual infinity-minus-infinity and then suddenly jump to string theory. Thanks. I analyzed every paragraph of yours with deep thought. 



#5
Feb20-12, 02:49 AM

P: 748

I have tried to think of the simplest way to explain this...
Quantum field theory is about randomly fluctuating fields. There are waves in the fields. The mass of a particle is a statement about how the waves in its field travel. The charge of a particle is a statement about how the waves in its field interact with the waves in other fields. What we call "bare mass" and "bare charge" are actually numbers which tell us how to calculate the probabilities of the basic possible fluctuations in the fields. "Physical mass" and "physical charge" are statements about how the waves behave on average. So "physical mass" is calculated using "bare mass", but it's not exactly the same thing.

When you calculate the average behaviors of the fields, as a first approximation you may only consider situations where the fields change slowly. Then for a better approximation you might add situations where they oscillate a little faster; for an even better approximation, even faster oscillations. But for a divergent quantum field theory, when you take this to the limit and try to consider arbitrarily fast oscillations in the fields, you get infinite values for the physical quantities, unless you add specially constructed infinities ("divergent power series") to the "bare" parameters you use to calculate the probabilities, constructed specifically to cancel out the infinite part of the calculated result.

But even before you "go to infinity", you can consider the way that the predictions of physical mass, charge, etc. vary as you vary the "cutoff", the highest frequency of field oscillation that you will include in your approximation. This is what renormalization group theory describes: the way that physical values vary with the cutoff. For a particular quantum field theory, even before you set the parameters, it is possible to figure out the exact way that physical quantities vary with the cutoff on field oscillations. The fact that the predicted mass goes to infinity if the cutoff goes to infinity (i.e. if you include even infinitely fast field oscillations) is just the extreme extrapolation of this "renormalization group flow". No one seriously believes that the bare parameters are infinite; that's just a sign that the theory is incomplete. But a theory which gives useless infinite answers in the extreme limit of no cutoff can still give useful answers over a finite range of approximations. What this means is that there is some highest frequency of field oscillations beyond which something extra happens, e.g. new fields, or new interactions between the known fields. So really, talking about infinite bare masses is just a reductio ad absurdum. What we should care about is the bare mass implied at the limit of the theory's validity.

I have talked about the physical mass at a given energy scale (which is the same thing as a frequency cutoff) as being equal to bare mass plus quantum corrections. So let's say we measure the physical mass of the standard model Higgs boson to be 125 GeV at low energies. And suppose we think that the standard model becomes incomplete at some high energy, i.e. that new fields or forces show up, maybe micro black holes, maybe an X boson that causes proton decay, but we don't know at exactly what energy this happens. That means we can't say exactly how big the quantum corrections at that scale are, just that they are big. But that is enough to imply a fine-tuning problem, because it means that the bare or uncorrected mass of the Higgs at that scale is going to be "125 GeV plus millions or billions of GeV", with the millions-or-billions part being just enough to cancel out the quantum corrections, so that the final physical mass is the small 125 GeV that we see. I know I was using a minus sign before, but it's simpler to think of the fine-tuned bare mass as a large positive number, and the quantum corrections as something that you subtract from it.

So I should have said: bare mass = physical mass plus quantum corrections, and physical mass = bare mass minus quantum corrections. The point is that if the quantum corrections are huge, then the bare mass also has to be huge, but fine-tuned enough to give the tiny physical mass when the quantum corrections are taken into account. But it's just more rational to expect that the standard model is incomplete, and that the quantum corrections aren't actually huge: that physical processes not part of the standard model (like supersymmetry) cancel out most of the quantum corrections coming from the standard model.

Incidentally, fine-tuning is only the first part of the hierarchy problem. Even if you have a theory where you don't have huge quantum corrections destabilizing the Higgs mass, there is still the "problem" of why the Higgs mass is so small compared to the Planck mass. There is a principle called "naturalness", which says that the basic numbers appearing in a theory ought to be of order 1. So in a theory with really small quantum corrections, you no longer have to fine-tune the Higgs mass to keep it small, but you still might wonder why it's small rather than big. My attitude to the naturalness principle is that it only has a limited usefulness. You should expect that the values of the fundamental constants have an explanation, but that doesn't mean you should expect the fundamental constants to be small, or that you should rule out theories which would have large fundamental constants in them. In other words, having very large or very small numbers is not intrinsically a problem; it's only when those numbers also have to be fine-tuned that it is clearly a problem. 
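A schematic sketch of why a quadratic divergence forces so much more fine-tuning than a logarithmic one, along the lines of nrqed's quote in post #1. The normalizations are invented (couplings set to 1, loop factors dropped); only the growth with the cutoff matters.

```python
import math

def quadratic_correction(Lambda, g=1.0):
    """Schematic scalar (Higgs-like) mass-squared correction: grows like Lambda**2."""
    return g * Lambda**2

def log_correction(Lambda, m=0.000511, g=1.0):
    """Schematic fermion-like correction: grows only like log(Lambda)."""
    return g * m**2 * math.log(Lambda / m)

for Lambda in (1e3, 1e10, 1e19):  # cutoff in GeV
    print(f"cutoff {Lambda:.0e} GeV: quadratic {quadratic_correction(Lambda):.2e}, "
          f"log {log_correction(Lambda):.2e}")
```

Pushing the cutoff from a TeV to the Planck scale multiplies the quadratic correction by ~10^32, while the logarithmic one grows only by a factor of a few; that is the whole asymmetry between "needs extraordinary fine-tuning" and "doesn't".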



#6
Feb20-12, 03:13 AM

P: 748

I should say something about why there are "quantum corrections". It's because of interactions. A bare electron has a charge, but the fluctuations of the electron field produce virtual electrons and positrons which have a charge too, and then those virtual particles produce their own charged virtual particles, and so on. So the physical charge is the bare charge plus the charges of all the virtual particles, and the virtual particles result from the interaction between the electron field and the photon field.
If you try to naively figure out the physical charge, it comes out as infinite. Imposing a cutoff means that you don't allow yourself to consider infinitely nested sets of virtual particles. So the predicted physical charge won't be infinite, but it will still be very large. Renormalization is a practical philosophy which says: we want to match experiment, so we just assume that the bare charge is whatever it has to be to match the physical charge when the quantum corrections are added. The neat thing about renormalizable field theories is that, once you make this assumption for mass, charge, and maybe a few other fundamental quantities, they become predictive again. If you want to predict something about how five electrons interact with each other, you don't and can't calculate a separate "bare" nonsense prediction which then gets corrected by experiment; once you have correctly renormalized mass and renormalized charge, you have a functioning model of the physical electron at the energy scales of interest, and you can make predictions about interacting electrons using that model of the individual renormalized physical electron.

What this seems to be saying is that there is something fractal about particles, because they are surrounded by these clouds of virtual particles which have their own subclouds of second-order virtual particles, and so on. But this fractalness isn't infinitely deep; we expect that quantum gravity provides an objective cutoff to this behavior, and also that at energy scales somewhere short of the Planck scale, new heavy particles show up in the fractal cloud, corresponding to physics beyond the standard model. And the renormalization group is just a way to talk about the charge and mass of a fractal cloud of virtual particles, even when you don't know what the bare mass and bare charge of a single bare particle would be. 
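The screening by this virtual-particle cloud has a concrete counterpart in the one-loop running of the QED coupling, which also exhibits the Landau pole asked about in post #4. This is the standard textbook one-loop formula with only the electron loop kept (all other charged fields ignored), so the numbers are illustrative, not precise.

```python
import math

ALPHA_0 = 1 / 137.035999  # fine-structure constant near the electron mass scale
M_E     = 0.000511        # electron mass in GeV

def alpha_qed(Q):
    """One-loop running QED coupling (electron loop only)."""
    denom = 1 - (2 * ALPHA_0 / (3 * math.pi)) * math.log(Q / M_E)
    return ALPHA_0 / denom

# Screening weakens as you probe closer: the effective charge grows with energy.
print(alpha_qed(91.2))  # slightly larger than 1/137 at the Z mass

# The denominator vanishes at the Landau pole, an absurdly high energy,
# far beyond where pure QED could be trusted anyway.
landau_Q = M_E * math.exp(3 * math.pi / (2 * ALPHA_0))
print(f"Landau pole near {landau_Q:.1e} GeV")
```

The pole is usually read the way mitchell porter describes: not as a physical catastrophe, but as a sign that pure QED cannot be the complete theory up to arbitrary energies.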



#7
Feb20-12, 12:18 PM

Sci Advisor
P: 8,009

A different philosophy:
http://fds.oup.com/www.oup.co.uk/pdf/0199227195.pdf
http://arxiv.org/abs/hep-th/9210046
http://www.solvayinstitutes.be/event...oral/Bilal.pdf

"What if one allowed to include non-renormalizable interactions ... More generally, a non-renormalizable interaction ... Thus, as long as pj << M one can neglect the effect of these non-renormalizable interactions."

"Theories like QED are presently thought to be only effective theories, in the sense that they provide the effective description of electromagnetic interactions at energies that are low compared to some scale at which new physics could be expected, like e.g. the grand unification scale of 10^{15} GeV or even the Planck scale of 10^{19} GeV. Such an effective theory then has an effective Lagrangian obtained by “integrating out” the very heavy additional fields that are present in such theories. (We will discuss such integrating out a bit in the next section.) This necessarily results in the generation of (infinitely) many non-renormalizable interactions in this effective Lagrangian ... From the previous argument it is then clear that at energies well below this scale these additional non-renormalizable interactions are completely irrelevant, and this is why we only “see” the renormalizable interactions. Our “low-energy” world is described by renormalizable theories like QED not because such theories are somehow better behaved, but because these are the only relevant ones at low energies: Renormalizable interactions are those that are relevant at low energies, while non-renormalizable interactions are irrelevant at low energies." 



#8
Feb20-12, 05:39 PM

P: 381

Have you come across, or do you know of, papers that describe new physics at low energy? I think the mistake of our search for unification, and of physics research generally, is focusing only on new physics at high energy (small scales). Are you sure there is nothing lurking at low energy (which I take to mean large scales)? What if the quantum vacuum were really a Dirac sea of electrons, or some other exotic configuration? Then by controlling the vacuum, we could change the physics even at low energy. Please share any papers or concepts you have encountered with a similar theme or sense of new physics at low energy, or state why you think it's "impossible". Thanks. 




#11
Feb21-12, 05:39 PM

P: 381

Mitchell, what I was asking above was whether there is something beneath the quantum vacuum... like, what if the quantum vacuum were merely ripples on the surface of an ocean, and there is an entire ocean underneath it? To avoid defining the quantum vacuum again, let me refer to the thread at http://www.physicsforums.com/showthread.php?t=343049 where Science Advisor Born2bwire mentioned: "Since the vacuum state has infinite energy, it has infinite photons. Everytime we add energy into the electromagnetic fields, we just pull a photon out of the vacuum state." True? If true, it has implications: there is a source, and new physics there. 



#12
Feb23-12, 03:04 AM

P: 748

I certainly don't believe in the Dirac sea. A positron (anti-electron) is a particle in its own right; it's not just a hole in an infinitely deep "sea" of electrons filling space.
I also don't believe in the related idea of there being an infinite amount of energy in every small region of space. Because of the uncertainty principle, the ground state of a quantum oscillator has a "zero-point energy", and the Fourier analysis of a continuum field models the field as the sum of an infinite continuum of oscillators (the Fourier modes). So if you naively apply the quantum approach, there will be a zero-point energy in each of the infinitely many modes, and you deduce that the quantum field will have infinite energy even in its ground state. But there's no real reason to believe that that infinite energy is real. It's just one of the infinities that you have to cancel out with infinite bare parameters in renormalization. In a supersymmetric field theory, the zero-point energy is zero because the ZPE of the fermionic oscillators cancels the ZPE of the bosonic oscillators. Or maybe there's some other way to get rid of it, like Sundrum's "energy-parity symmetry". 
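The boson/fermion cancellation can be sketched mode by mode. This is a toy model assuming unbroken supersymmetry, so every bosonic mode has a fermionic partner at exactly the same frequency (with hbar set to 1); the frequencies themselves are arbitrary sample values.

```python
# Each bosonic oscillator contributes +omega/2 of zero-point energy,
# each fermionic oscillator contributes -omega/2.
omegas = [1.0, 2.5, 7.0, 42.0]  # arbitrary sample of mode frequencies (hbar = 1)

zpe_bosons   = sum(+0.5 * w for w in omegas)
zpe_fermions = sum(-0.5 * w for w in omegas)  # superpartner modes, same frequencies

print(zpe_bosons + zpe_fermions)  # exactly zero when the spectra match
```

If supersymmetry is broken, the partner frequencies no longer match and the cancellation is only partial, which is why broken SUSY still softens, rather than eliminates, the vacuum-energy and hierarchy problems.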

