Understanding Renormalization in Quantum Mechanics: Examples and Context

In summary, renormalization is a process that occurs when taking the "continuum limit" in systems with many degrees of freedom. It is often used in condensed matter physics and quantum field theory. An example is the calculation of the scattering amplitude of two particles with nearly zero energy, where the amplitude may become infinite. This is resolved by treating the theory as a low energy effective theory and introducing a cutoff value [tex]\Lambda[/tex], which is usually unknown. The issue of renormalization was previously controversial but has been resolved through the concept of effective field theories. However, there are still questions regarding the arbitrariness of the renormalization procedure and its relation to physical quantities.
  • #1
ObsessiveMathsFreak
I keep hearing about renormalization in quantum mechanics, frequently in the context of its mathematical dubiousness. But no one ever gives an example of this process. Could anyone give an example of a quantity that must be renormalised, along with some context? Even a very general one would do.
 
  • #2
Let's say someone gives you a function which you can show to be an energy eigenstate, because acting on it with the Hamiltonian just returns the energy times the state. If you take the inner product of the function/state/ket with itself, it might come out to be one, or it might not. If not, it isn't normalised, and you need to put a constant out front, take the inner product again, and set it equal to one to determine the required constant.
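
For concreteness, here is a minimal worked example of that procedure (taking the harmonic-oscillator ground state in units [tex]\hbar = m = \omega = 1[/tex] purely as an illustration):

[tex]
\psi(x) = e^{-x^2/2}, \qquad \hat{H}\psi = \left(-\tfrac{1}{2}\tfrac{d^2}{dx^2} + \tfrac{1}{2}x^2\right)\psi = \tfrac{1}{2}\,\psi,
[/tex]
[tex]
\langle\psi|\psi\rangle = \int_{-\infty}^{\infty} e^{-x^2}\,dx = \sqrt{\pi} \neq 1
\quad\Rightarrow\quad
\psi_{\rm normalised}(x) = \pi^{-1/4}\, e^{-x^2/2}.
[/tex]

So [tex]\psi[/tex] is an energy eigenstate with [tex]E = 1/2[/tex], but it needs the constant [tex]\pi^{-1/4}[/tex] out front to have unit norm.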
 
  • #3
FYI, renormalization ceased to be "mathematically dubious" a few decades ago. Unfortunately, the stigma remains.

Also, unfortunately, FunkyDwarf's reply doesn't seem to be the same renormalization that you are asking about, and is usually just called "normalizing the wavefunction".

A very good explanation of renormalization is given in Zee's quantum field theory book. Renormalization is something that happens when you take the "continuum limit", i.e., when you consider a system with many degrees of freedom and you take the limit where many becomes infinite. This is usually the case in condensed matter physics and quantum field theory.

For instance, let's say you consider the scattering of two particles with nearly zero energy, and you call the amplitude of that process [tex]g[/tex]. When you calculate the scattering amplitude, you end up having to do integrals in momentum space that usually blow up (their integrands grow like some positive power of momentum, for instance). Naively you say that there is something wrong. However, taking a cue from condensed matter physics, you might reason that your theory only gives a good "long distance" or "low energy" description of nature, and that at "short distances" or "high energies" it should be replaced by something else. It turns out that some theories are insensitive to what happens at "high energies", and those theories are called renormalizable. In essence, instead of pretending that your theory is good up to arbitrarily high momenta, you say that it is good up to some value [tex]\Lambda[/tex], where [tex]\Lambda[/tex] is large and, importantly, unknown. The important feature of renormalizable theories is that you can experimentally measure [tex]g[/tex] at one energy scale, and the theory will tell you how it changes with energy. The actual value of [tex]\Lambda[/tex] doesn't enter your final calculation.
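
Schematically, a one-loop running of this kind looks like (this is only an illustration of the statement above, with a theory-dependent constant [tex]b[/tex] whose sign and value I leave open)

[tex]
\frac{1}{g(\mu)} = \frac{1}{g_0} + b \ln\frac{\Lambda}{\mu}
\quad\Rightarrow\quad
\frac{1}{g(\mu_2)} = \frac{1}{g(\mu_1)} - b \ln\frac{\mu_2}{\mu_1},
[/tex]

so once [tex]g(\mu_1)[/tex] is measured, the prediction for [tex]g(\mu_2)[/tex] involves neither the bare coupling [tex]g_0[/tex] nor the cutoff [tex]\Lambda[/tex].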

The problem people had with renormalization had to do with getting rid of these seemingly infinite functions of [tex]\Lambda[/tex] in a consistent way. If you believe that your theory is complete, then, yes, you are on shaky grounds. But if you think of it as a low energy effective theory (which was explained by Kenneth Wilson), then you're ok.
 
  • #4
ObsessiveMathsFreak said:
I keep hearing about renormalization in quantum mechanics, frequently in the context of its mathematical dubiousness. But no one ever gives an example of this process. Could anyone give an example of a quantity that must be renormalised, along with some context? Even a very general one would do.

To see an application in quantum mechanics in the spirit of effective field theories, let me suggest this simple introduction:

http://aps.arxiv.org/abs/hep-ph/9209266
 
  • #5
Questions on RG-related concepts

lbrits said:
FYI, renormalization ceased to be "mathematically dubious" a few decades ago. Unfortunately, the stigma remains. [...]

I really appreciated your nice explanation.
I have also been reading about the renormalisation part of QFT recently, and there are some points of the renormalisation procedure I can't figure out, namely how the physical predictions relate to the arbitrariness of the procedure. Here are my questions:

(1) The finite parts of the counter terms are actually arbitrary; to fix them, we have to use some renormalisation prescription (or scheme). Once we choose a renormalisation prescription, we have definite renormalized parameters; however, these renormalized parameters are in general NOT the physical quantities we measure in the laboratory, right? Only if we choose the "physical subtraction" prescription do we get the physically measurable quantities. And the renormalized parameters in a given momentum-subtraction prescription run with an arbitrary mass scale [tex]\mu[/tex], right? Even the measurable physical quantities obtained from the physical subtraction would run with the scale.
Is my understanding correct?

(2) The reason we need so many prescriptions is that choosing different finite parts of the counter terms is equivalent to rearranging the perturbation series. So, in some cases, if we choose the renormalisation prescription appropriately, we get a perturbation theory in which the first few terms of the series carry the large contributions and the higher order terms are just tiny corrections. Is my understanding correct?

(3) Even if we fix the renormalisation prescription, the renormalized parameters still depend on the arbitrary mass scale [tex]\mu[/tex], and the differential equation that the Green function satisfies with respect to changes in [tex]\mu[/tex] is called the Renormalisation Group Equation (RGE), right?
But in the case of the MS scheme, the finite parts of the counter terms are all zero, so do the renormalized parameters not run with the scale?

Thanks so much to everyone who discusses this with me or corrects my understanding.
 
  • #6
Regarding the arbitrariness of the prescriptions, the idea is that you know what your calculation is doing at low energies, but you don't know what it is doing at high energies. Now, there could be many theories that look different at high energies but the same at low energies (it is said that they flow to the same theory in the IR), so you essentially deform your theory so that it goes to one of those theories in the UV. For instance, you could deform it by adding mass to some particles (Pauli-Villars) or by varying the number of dimensions (dimensional regularization). You pick whatever regularization procedure works for you.

In the Wilsonian sense of renormalization, when you do calculations at low energies, you "integrate out" the high energy modes, and how you do that doesn't matter. When you suppress the high energy modes, the highest energy modes that are left become your new "UV". You continue this process until the UV starts looking the same (albeit with a different scale factor). In other words, the functions you calculate become self-similar over different scales. At this point you have a UV fixed point, and the stuff that you suppressed earlier becomes irrelevant (it is perfectly hidden).

The whole subtraction business is just a way of doing the calculation, when you do perturbation theory, but the physics itself is contained in the RG equations. As far as MS is concerned, the fact that the finite parts are zero doesn't matter, since you still have a momentum dependent remainder.
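
As a toy illustration of the Wilsonian picture above: decimating (summing over) every other spin of a one-dimensional Ising chain gives back an Ising chain with a renormalized coupling, [tex]\tanh K' = \tanh^2 K[/tex], and iterating that map exhibits the flow to a fixed point. This model isn't discussed above; it is simply the most elementary concrete RG step I can think of. A minimal sketch:

[code=python]
import math

# Toy Wilsonian RG step: integrate out ("decimate") every other spin of a
# 1D Ising chain. The remaining spins again form an Ising chain with a
# renormalized dimensionless coupling K' = arctanh(tanh(K)^2).
def decimate(K):
    return math.atanh(math.tanh(K) ** 2)

K = 1.5  # starting coupling J/(k_B T); the value is arbitrary
for step in range(8):
    print(f"step {step}: K = {K:.6f}")
    K = decimate(K)

# K flows toward the trivial fixed point K* = 0: after a few iterations the
# coarse-grained chains look self-similar, and the microscopic details that
# were integrated out no longer matter.
[/code]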
 
  • #7
http://arxiv.org/abs/hep-th/0212049v3

Abstract:
An elementary introduction to perturbative renormalization and renormalization group is presented. No prior knowledge of field theory is necessary because we do not refer to a particular physical theory. We are thus able to disentangle what is specific to field theory and what is intrinsic to renormalization. We link the general arguments and results to real phenomena encountered in particle physics and statistical mechanics.
 

1. What is renormalization in quantum mechanics?

Renormalization is a mathematical technique used to account for the effects of quantum fluctuations in physical systems. It involves adjusting the parameters of a theory to account for the interactions between particles and their fluctuations, allowing for more accurate predictions of physical phenomena.

2. Why is renormalization important in quantum mechanics?

Renormalization is important because it allows for the prediction of physical phenomena that would otherwise be impossible to calculate accurately. It also reconciles the formally divergent intermediate quantities that appear when a quantum theory is pushed to arbitrarily short distances with the finite values actually measured in experiments.

3. What are some examples of renormalization in quantum mechanics?

One example of renormalization is in quantum electrodynamics, where the coupling constant, which describes the strength of the interaction between charged particles, is renormalized to account for the effects of virtual particles. Another example is in the theory of the weak force, where the masses of the W and Z bosons are renormalized to account for quantum corrections, including those involving the Higgs field.
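
For the QED example, the familiar leading-logarithm form of the running coupling (written here for a single fermion loop, with both scales well above the fermion mass) is

[tex]
\alpha(Q^2) \;\approx\; \frac{\alpha(\mu^2)}{1 - \dfrac{\alpha(\mu^2)}{3\pi}\,\ln\dfrac{Q^2}{\mu^2}},
[/tex]

so the value of the coupling measured at one scale fixes its value at another.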

4. How does renormalization work in practice?

In practice, renormalization involves a combination of analytical and numerical techniques. The parameters of a theory are first adjusted to account for the effects of quantum fluctuations, and then these adjusted parameters are used to make predictions about physical phenomena. These predictions can then be compared to experimental data to determine the accuracy of the renormalization process.
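
As a minimal numerical sketch of that workflow, using purely hypothetical numbers and a schematic one-loop running law [tex]1/g(\mu_2) = 1/g(\mu_1) - b\ln(\mu_2/\mu_1)[/tex] assumed only for illustration:

[code=python]
import math

# Hypothetical, purely illustrative inputs: a coupling g "measured" at one
# scale, plus a one-loop coefficient b assumed to come from the theory.
b = 0.05                   # theory-dependent one-loop coefficient (assumed)
g_mu1, mu1 = 0.30, 10.0    # measured coupling at scale mu1 (arbitrary units)
mu2 = 1000.0               # scale at which we want a prediction

# Schematic one-loop running: 1/g(mu2) = 1/g(mu1) - b*ln(mu2/mu1).
# Note that no cutoff Lambda appears anywhere in the prediction.
g_mu2 = 1.0 / (1.0 / g_mu1 - b * math.log(mu2 / mu1))
print(f"predicted g at mu2 = {mu2:g}: {g_mu2:.4f}")
[/code]

The predicted value can then be compared with a measurement at the second scale to test the theory.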

5. What are the implications of renormalization for the understanding of quantum mechanics?

Renormalization has been crucial in reconciling the formally divergent results of naive perturbation theory with the finite quantities measured in experiments. It is also central to quantum field theory, which has greatly advanced our understanding of the behavior of particles on a small scale. Furthermore, renormalization has shown that the effective behavior of particles depends on the energy scale at which they are probed and on their interactions with other particles and their environment, highlighting the interconnected nature of the universe at a fundamental level.
