[Novice] Feynman Diagram for Electron Excitation

In summary, my diagram shows an electron moving in time but not space until it is hit by a photon, after which it disappears and instantly reappears at a higher energy level. After some time in this state it reverts to its original state, releasing a photon.
  • #1
JDude13
I have drawn up what I believe happens during electron excitation in an atom.
However, it looks very... wrong somehow.

My diagram shows an electron moving in time but not space until it is hit by a photon, after which it disappears and instantly reappears at a higher energy level. After some time in this state it reverts to its original state, releasing a photon.

Could you please show me an accurate depiction of electron excitation using a Feynman Diagram?
 
  • #2
I think your understanding of Feynman diagrams is incomplete. Many Feynman diagrams look very simple and intuitive, but the theory behind them is deceptively subtle. I'm not up to writing a full treatise here, but maybe I can say a few things to point you in the right direction.

1. The first thing you have to understand is that a Feynman diagram represents a specific number. It's not exactly a diagram in the same way that a blueprint of a house is a diagram; it's much more akin to a variable. In the same way that sin(x) is just a number (if I fill in some value for x), the diagram for, say, Compton scattering is just a number -- it's a variable written graphically instead of using letters.

2. What is the number that a diagram represents? It's the quantum-mechanical matrix element, which stands for the amplitude for a process to happen a certain way (symbolized by that diagram). There are in general an infinite number of ways for any process to occur, and so an infinite number of diagrams. What do I mean by "a certain way?" Without too much math, the idea is that for some initial and final states, given by the bra and ket <f| and |i>, there are an infinite number of quantum-mechanical operators I can sandwich between <f| and |i> that give nonzero results; all of these contribute to the transition amplitude. These operators can be derived from the Lagrangian that describes whatever particle theory you're using.
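
Schematically -- writing M_D for the number that diagram D stands for (the notation here is just illustrative) -- the transition amplitude is the sum of all diagrams with the given initial and final states, up to an overall momentum-conserving factor that is made explicit later in this thread:

[tex]
\langle f | S | i \rangle \;\propto\; \sum_{\text{diagrams}\ D} \mathcal{M}_D
[/tex]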

The exact probability for a process to occur is the square modulus of the sum of all of the matrix elements. For realistic theories of interacting particles, there are no known exact, closed-form results -- the matrix elements get too complicated and it becomes impossible to add an infinite number of them. Fortunately, in many situations (but not all!) the more complicated terms are numerically smaller than the simpler terms, so it is possible to get very good approximations by just adding the first couple of diagrams. If you're familiar with Taylor series approximations, it's very much the same idea.
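
As a rough sketch of why that works in QED (the superscript n just counts the order of the diagram and is purely illustrative):

[tex]
P(i \to f) \;\propto\; \Big| \mathcal{M}^{(0)} + \mathcal{M}^{(1)} + \mathcal{M}^{(2)} + \cdots \Big|^2 ,
\qquad \frac{\mathcal{M}^{(n+1)}}{\mathcal{M}^{(n)}} \sim \alpha \approx \frac{1}{137} ,
[/tex]

so each successive order is suppressed by roughly another factor of the fine-structure constant, and keeping only the first diagram or two is often already a very good approximation.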

3. If it's all matrix multiplication, why do we use diagrams at all? Because of Richard Feynman. Before Feynman, people did calculate interaction matrices from scratch each time and sandwich them between the initial and final state vectors to calculate transition amplitudes. However, this process can be simplified. Feynman showed that the initial and final states could be pictured as lines going into an interaction, and lines going out. Each line stands for a number (okay, or a spinor, or vector, or tensor...) which is a solution to the equations of motion for that particle when it is not interacting. The interaction operator sandwiched between <f| and |i> is symbolized by a drawing that connects the final-state lines with the initial-state lines. Furthermore, Feynman showed that the drawing in the middle has to consist of only a couple of available pieces; you're not allowed to draw just anything (the rules for what you are allowed to draw can be derived from the underlying laws of physics for that theory, but once you have the rules you can forget about the theory, because they work for all possible diagrams -- this was the great simplification). And these pieces are not just drawings; they represent numbers (or matrices or vectors or tensors or...). To get the matrix element, you multiply all the pieces of the diagram together, and that's it.
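
For concreteness, here is roughly what those "available pieces" look like in QED, in one common convention (e.g. Peskin & Schroeder, Feynman gauge, iε factors omitted); the exact factors are convention-dependent:

[tex]
\begin{aligned}
&\text{vertex (two electron lines meet one photon line):} && -ie\gamma^\mu \\
&\text{internal photon line carrying momentum } q: && \frac{-i g_{\mu\nu}}{q^2} \\
&\text{incoming / outgoing electron:} && u(p) \;/\; \bar{u}(p) \\
&\text{incoming / outgoing positron:} && \bar{v}(p) \;/\; v(p)
\end{aligned}
[/tex]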

For example, consider this diagram:

[Attached image: Feynman-diagram-ee-scattering.png — an electron and a positron annihilating into a photon, which then produces an outgoing electron–positron pair]


Time flows from bottom to top. In words, an electron and a positron collide at the bottom vertex. They annihilate and emit a photon. The photon travels and emits an electron and a positron, which travel off. Each of the five lines and two vertices is a number, given by the Feynman rules (actually spinors, a tensor, and matrices), which are multiplied together. The result is the matrix element for this process.
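
Putting those pieces together for this diagram -- labeling the incoming electron and positron momenta p_1, p_2 and the outgoing ones p_3, p_4 (labels mine, not from the figure) -- the matrix element reads, up to convention-dependent overall signs:

[tex]
i\mathcal{M} \;=\; \big[\bar{v}(p_2)\,(-ie\gamma^\mu)\,u(p_1)\big]\;
\frac{-i g_{\mu\nu}}{(p_1+p_2)^2}\;
\big[\bar{u}(p_3)\,(-ie\gamma^\nu)\,v(p_4)\big] ,
[/tex]

i.e. one factor for each of the five lines and two vertices, multiplied together.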

That's not the only valid diagram for e+ e- goes to e+ e-, but it is one of them. And importantly, all the possible diagrams consist of only the pieces you see above: electron or positron lines, photon lines, and vertices where exactly two electron lines and one photon line meet. Although you can draw many different diagrams built of those pieces, the pieces themselves always represent the same thing. That is why Feynman diagrams are so useful.
 
  • #3
Ah... Maybe I should learn some more about quantum mechanics... Although I thought that Feynman Diagrams could represent particle reactions such as beta decay simply...
Nevertheless.
 
  • #4
JDude13 said:
Ah... Maybe I should learn some more about quantum mechanics... Although I thought that Feynman Diagrams could represent particle reactions such as beta decay simply...
Nevertheless.
No.. they can't. They are just pretty pictures for terms in a free-particle perturbation series (and as such not even applicable to electrons bound in atoms without jumping through lots of hoops).

Mike Pemulis' explanation is excellent. If you want to understand Feynman diagrams, you should work on understanding his post.
 
  • #5
cgk said:
No.. they can't. They are just pretty pictures for terms in a free-particle perturbation series (and as such not even applicable to electrons bound in atoms without jumping through lots of hoops).

Ever used Hugenholtz or Goldstone diagrams for anything in practice? I haven't.
Seems they were popular in the 1970's but have since almost entirely disappeared from the literature.
 
  • #6
alxm said:
Ever used Hugenholtz or Goldstone diagrams for anything in practice? I haven't.
Seems they were popular in the 1970's but have since almost entirely disappeared from the literature.
I personally haven't, because I prefer the algebraic approach. I like to think that everything a computer can do much better than a human should be done with a computer, and whether one programs algebra in terms of diagrams or Wick theorems directly makes little difference in practice.

But I know there are people who see this differently. For example, Kallay's ingenious high-order coupled cluster program (http://dx.doi.org/10.1063/1.1383290) does its algebra with Goldstone diagrams (and if you talk to him personally, he's quick to paint up diagrams as well). Hanrath's high-order coupled cluster program also seems to be based on some homebrewed variant of Goldstone diagrams, according to a statement in http://dx.doi.org/10.1063/1.3561739.

So they still seem to be alive... it's just that they are really only good for single-reference coupled cluster and single-reference perturbation theory, and in the meantime there is not much left to do in this area.
 
  • #7
JDude13 said:
Ah... Maybe I should learn some more about quantum mechanics... Although I thought that Feynman Diagrams could represent particle reactions such as beta decay simply...
Nevertheless.

I sympathize a lot. That's what I thought for a long time. Often Feynman diagrams appear to be just a straightforward schematic of what's going on. Particle A emits particles C and D, which fly away, or whatever. And of course, whatever they tell you, most professional physicists picture them that way -- just as a description of the reaction. Which is fine as long as you don't take the picture too literally. Because mathematically, Feynman diagrams are really terms in the matrix element like I tried to describe above. The fact that they do look so simple and intuitive is sort of a miracle to me; I'm not sure how to explain it well.

So how would I answer your question? There are at least two possible answers:

1. Historically, most of the work that went into evaluating Feynman diagrams has concentrated on initial and final states that consist of free particles, so none of the standard rules would apply to the reaction you want to describe. So that's a problem. But it might be possible to treat the hydrogen atom as a bound state in the initial and final states, and derive a whole new set of effective Feynman rules that describe the interaction of the H atom with a photon. alxm and cgk, is that what you're talking about? I think I'm in over my head there.

2. An easier way might be to abandon quantum field theory and go back to non-relativistic quantum mechanics. Because an electron in a light atom has kinetic and binding energies much smaller than its rest-mass energy, it is well described by ordinary QM, which is why you can use the Schrödinger equation to derive the orbitals and so on. The photon then becomes an applied classical potential which interacts with the electron. You could then calculate the transition amplitude between the initial and final states using quantum mechanics, and even get the cross section using the Born approximation or whatever. If you went this way, it would be incorrect to call your diagram of the process a "Feynman diagram," because that is a name reserved for scattering processes in relativistic quantum field theory.
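
One standard way to make that concrete -- not spelled out above; here H' denotes the perturbation, e.g. the dipole coupling of the atom to the classical light field -- is first-order time-dependent perturbation theory (Fermi's golden rule):

[tex]
w_{i \to f} \;=\; \frac{2\pi}{\hbar}\,\big|\langle f | H' | i \rangle\big|^2\,\rho(E_f) ,
[/tex]

where |i> and |f> are the Schrödinger-equation orbitals and ρ(E_f) is the density of final states.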
 
  • #8
Mike Pemulis said:
[...] But it might be possible to treat the hydrogen atom as a bound state in the initial and final states, and derive a whole new set of effective Feynman rules that describe the interaction of the H atom with a photon. alxm and cgk, is that what you're talking about? I think I'm in over my head there.
Sorry for hijacking that. Diagrams are also used for non-QFT purposes, like the calculation of matrix elements in non-relativistic correlated interacting many-body theories. That is what this was about. For example, if you wanted to calculate the ground state energy of a molecule, you could do that in terms of a coupled-cluster wave function, and then calculate the formulas for the matrix elements required to evaluate the cluster amplitudes and the energy in terms of a diagrammatic expansion:
http://www.ccc.uga.edu/lec_top/cc/html/node11.html
These expansions are supposed to cover the interaction of the electrons with each other (via the instantaneous Coulomb potential) and the interaction of the electrons with external potentials (like the nuclei) and the mean field. So there are no photons and no scattering. They look rather different from Feynman diagrams because they describe different processes and different variables, but the underlying principle is the same: the diagrams describe terms in some algebraic formula expansion in a pictorial manner.
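
For context, the coupled-cluster ansatz behind those diagrams looks schematically like this (Φ_0 is a reference determinant and T a sum of excitation operators; this is just the textbook form, not anything specific to the linked page):

[tex]
|\Psi_{\mathrm{CC}}\rangle = e^{T}\,|\Phi_0\rangle , \qquad
T = T_1 + T_2 + \cdots , \qquad
E = \langle \Phi_0 |\, e^{-T} H\, e^{T} \,| \Phi_0 \rangle ,
[/tex]

and the diagrams are bookkeeping for the many contractions that appear when the exponential is expanded against H.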

Actually, I was of the impression that those coupled-cluster/perturbation theory diagrams were the Goldstone diagrams, but I might have mixed up something there. I'm not sure if I got the term wrong, or if there was some term diffusion (like for the closely related "Wick theorem", which is also used for lots of different theorems on how to decompose second-quantized operators).

Anyway: what these diagrams are about is precisely what you suggested: dropping the QFT and describing the electron/potential interaction in terms of simpler quantities (no photons), because for excitation processes in atoms these are really not relevant (well, at least not for the energetic description of the initial and final states).
 
  • #9
Mike Pemulis said:
What is the number that a diagram represents? It's the quantum-mechanical matrix element, which stands for the amplitude for a process to happen a certain way (symbolized by that diagram). There are in general an infinite number of ways for any process to occur, and so an infinite number of diagrams.

...

The exact probability for a process to occur is the square modulus of the sum of all of the matrix elements.

Take a scalar toy theory and say we have a process with two incoming and two outgoing particles. When I have summed all the Feynman diagrams, i.e. matrix elements, of that process (tree diagrams and all the higher-order loop diagrams) and take the square modulus of that sum, I get the exact probability for that process to occur.

Question: what are the other processes that the other probabilities get assigned to? Are those the processes with a different number of incoming and outgoing particles?
 
  • #10
Yeah, exactly. If I want to find the probability for A + B goes to C + D, I have to sum up all the diagrams that have those external lines, or at least as many as is practical. For a scattering with different initial and final states, I would have a whole new set of diagrams to sum up.
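
In symbols, purely schematically:

[tex]
P(A+B \to C+D) \;\propto\; \Big| \sum_{\substack{\text{diagrams with external}\\ \text{lines } A,B,C,D}} \mathcal{M}_D \Big|^2 ,
[/tex]

and a process with a different set of external lines gets its own, separate sum.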
 
  • #11
Mike Pemulis said:
Yeah, exactly. If I want to find the probability for A + B goes to C + D, I have to sum up all the diagrams that have those external lines, or at least as many as is practical. For a scattering with different initial and final states, I would have a whole new set of diagrams to sum up.

So just for clarity: a process with four incoming and three outgoing particles would get a probability assigned, as would a process with six incoming and six outgoing particles, and so forth.

All the possible processes with n incoming and m outgoing particles get a probability assigned, which is computed for each process by summing its many possible Feynman diagrams.

So some processes are more likely than others. The sum of the probabilities of all processes is one.

For example, the process A + B to C + D has probability 0.3 and the process A + B to C + D + E has probability 0.2.

But what about the momenta of the particles in a process? Doesn't the momentum, i.e. how hard we smash the particles into each other, determine whether a process takes place, and with what probability?
 
  • #12
But what about the momenta of the particles in a process? Doesn't the momentum, i.e. how hard we smash the particles into each other, determine whether a process takes place?

Yeah, that's right. I glossed over that. The initial and final states are taken to be specific particles with specific momenta. So the "probability" that you finally get is really a differential probability at those momenta. To get a total probability, as a function of the initial-state momenta (which are controlled by your accelerator or whatever), you would integrate over all final-state momenta that satisfy conservation of energy and momentum. Actually, more useful than a raw probability are the cross section (for scattering events) and the rate (for decays). But it's the same idea -- you take your Feynman-diagram differential probability and integrate over all possible final momenta (and spins, if the particles have spin) to get a measure of the total probability of finding the specified final-state particles.
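
As a concrete instance of "integrate over all possible final momenta": for 2 → 2 scattering in the center-of-mass frame with all masses negligible, the standard result (in, e.g., Peskin & Schroeder's conventions) is

[tex]
\left(\frac{d\sigma}{d\Omega}\right)_{\mathrm{CM}} = \frac{|\mathcal{M}|^2}{64\pi^2 E_{\mathrm{CM}}^2} ,
\qquad
\sigma = \int \frac{d\sigma}{d\Omega}\, d\Omega ,
[/tex]

so the squared matrix element from the diagrams enters as the differential quantity, and the total cross section comes from the angular integral.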

I didn't see anything good on Wikipedia, but Introduction to Elementary Particles by David Griffiths has great explanations, at the undergrad level, of Feynman diagrams and how they are related to cross sections and decay rates.
 
  • #13
Thanks, great post!

Ok, say again we have a theory, i.e. some Lagrangian with interaction terms and all kinds of Feynman diagrams that we derive from it.

Does the square modulus of the sum of all these diagrams, with all kinds of outgoing and ingoing legs give exactly one? Is that what they mean when they say the S-matrix is unitary?
 
  • #14
Or is it rather that given an initial state |i> and all possible final states |f>, the sum over all these possible final states <i|S|f><f|S|i> gives one?

It is still not fully clear to me what we mean when we say the process A + B to C + D happens with a certain probability. What are the other processes?

Are the other processes that happen with other probabilities also processes where A + B are the incoming particles? Or are these other processes all the other processes with all possible incoming states, not only A + B?

thanks
 
  • #15
I don't mind talking to myself, so here goes another take...

We have <final|initial> = <k_1,k_2|p_1,p_2>, the S-matrix element for two incoming and two outgoing particles. It is the sum of many Feynman graphs with four external legs, and it gives the amplitude for two incoming and two outgoing particles.

Of course, when we vary the momentum, the amplitude changes.

Is it true that summing over all possible |final> in <initial|final><final|initial>, with some given |initial>, gives one? Is that what they mean when they say the S-matrix is unitary?

thanks
 
  • #16
Hi, sorry I was gone for a little. To be honest I had never really thought about overall normalization before. Just logically, I think that the following,

Or is it rather that given an initial state |i> and all possible final states |f>, the sum over all these possible final states <i|S|f><f|S|i> gives one?

is correct. Given some predefined initial state, the sum of the probabilities of all possible final states should be 1.

What does it mean to say that the S-matrix is unitary? I believe that we should think of the S-matrix as being the QFT version of the time-evolution operator in QM; that is, S = exp(-2iHT), in the limit that T goes to infinity, where H is the Hamiltonian for the theory. So S is the time-evolution operator connecting the remote past and remote future. (Hence the factor of 2; we are going from -T to +T).
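
To connect this directly to the unitarity question: schematically, for a normalized initial state and a complete set of final states,

[tex]
S^\dagger S = 1 \;\Longrightarrow\;
\sum_f \langle i | S^\dagger | f \rangle \langle f | S | i \rangle
= \sum_f \big| \langle f | S | i \rangle \big|^2
= \langle i | S^\dagger S | i \rangle = 1 ,
[/tex]

which is exactly the statement that the probabilities of all possible final states, for a fixed initial state, add up to one.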

Now, (from Peskin & Schroeder Chapter 4) I think the connection between the S-matrix and Feynman diagrams is explained as follows.

Consider a scattering with two initial particles (A, B) and n final state particles (1, 2, 3... n). The transition amplitude is,

<p1, p2, p3... pn | S | kA, kB>

If there is no interaction, S is the identity matrix, because the initial and final states must be identical. So the part of the S-matrix due to interaction is given by T, where:

S = 1 + iT

The Hamiltonian is Hermitian just as in QM, so all eigenvectors are orthogonal. Then if the final state is different from the initial state at all,

<p1, p2, p3... pn | 1 | kA, kB> = 0

So the transition amplitude is,

[tex]
\langle p_1, p_2, \ldots, p_n |\, iT \,| k_A, k_B \rangle = (2\pi)^4 \, \delta^4\!\Big(k_A + k_B - \sum_i p_i\Big) \cdot i\mathcal{M}
[/tex]

Where [tex]\mathcal{M}[/tex] is the matrix element. I think this equation is meant to be the definition of the matrix element; the idea is to separate conservation of momentum and energy (enforced by the delta function) from the "dynamics" unique to the field theory. It is the dynamics that are given by the matrix element, which is calculated using the Feynman rules.

Um, not sure how coherent that was. Someone with a better grasp is invited to clarify if necessary.
 
  • #17
thanks, Mike! another nice explanation, it's clear now, very much appreciated
 

1. What is a Feynman Diagram for Electron Excitation?

A Feynman diagram is a visual representation of the interaction between elementary particles, proposed by physicist Richard Feynman. In the context of electron excitation, the diagram shows the exchange of a virtual photon between an electron and another particle, resulting in the electron's energy changing.

2. How does a Feynman Diagram for Electron Excitation work?

The Feynman diagram for electron excitation shows the electron emitting a virtual photon, which is then absorbed by another particle. This interaction transfers energy to the electron, causing it to move to a higher energy state.

3. What is the significance of the Feynman Diagram for Electron Excitation?

The Feynman diagram provides a mathematical and visual representation of the quantum mechanical process of electron excitation. It helps scientists understand and predict the behavior of electrons and other particles at the subatomic level.

4. How is the Feynman Diagram for Electron Excitation related to quantum mechanics?

The Feynman diagram is a fundamental tool in quantum mechanics as it allows for the calculation of probabilities for particle interactions at the subatomic level. In the case of electron excitation, it shows the probability of an electron gaining energy through the exchange of a virtual photon.

5. Can the Feynman Diagram for Electron Excitation be applied to other interactions?

Yes, the Feynman diagram can be applied to a wide range of particle interactions, including those involving other fundamental particles such as protons and neutrons. It is a versatile tool in understanding the behavior of particles at the subatomic level.
