arlesterc said:
Thanks for the response. Is there any reason why one electron emits and the other absorbs? Or is it just that two electrons when they come close to each other produce a field which is seen as a photon the force of which they both create/feel simultaneously so to speak?
* This has more to do with the inadequacies of the English language, which naturally tries to fit events into a cause-and-effect sequence, than with any difference that matters to how the interaction works. The more neutral and precise language would be to say that each of the two electrons "couples" to the photon.
The Standard Model physical constant called the electromagnetic "coupling constant" alpha, which is approximately equal to 1/137, describes the probability of a photon coupling to a particle with an electromagnetic charge of magnitude one (such as an electron); this is "coupling" in the technical and precise sense.
"coupling" in this sense. The weak force has an analogous coupling constant that governs couplings between particles that interact via the weak force and weak force bosons (W and Z). And the strong force, in turn, has an analogous coupling constant that governs the couplings between gluons and either quarks or other gluons. (All three of these coupling constants are "dimensionless" numbers in contrast to the physical constant known as Newton's constant that governs the strength of the gravitational force which is not dimensionless in the conventional formulation of General Relativity - one more barrier to a theory of quantum gravity.)
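As a quick illustrative aside (the numerical values below are standard CODATA reference constants I'm supplying, not anything from this thread), the electromagnetic coupling constant alpha can be computed from more familiar constants:

```python
import math

# Standard CODATA values in SI units (quoted here for illustration)
e = 1.602176634e-19           # elementary charge, C (exact by definition since 2019)
epsilon_0 = 8.8541878128e-12  # vacuum permittivity, F/m
hbar = 1.054571817e-34        # reduced Planck constant, J*s
c = 2.99792458e8              # speed of light, m/s (exact by definition)

# Fine-structure constant: alpha = e^2 / (4 * pi * epsilon_0 * hbar * c)
alpha = e**2 / (4 * math.pi * epsilon_0 * hbar * c)

print(alpha)      # ~0.0072973...
print(1 / alpha)  # ~137.036, the famous "1/137"
```

Note that alpha is dimensionless: the units of charge, permittivity, action, and speed cancel exactly, which is the point made below about the contrast with Newton's constant.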
* Why would someone want to use a technical word like "coupling" instead of plain old vernacular English?
Because a Feynman diagram of an interaction is equally valid, and happens with the same probability, no matter how it is rotated in four-dimensional space-time, even though the way the depicted interaction would be described in English differs greatly between different rotations of the diagram, and between different interpretations of the same diagram at a given rotation.
For example, one way to describe a particular Feynman diagram is that two photons suddenly transform into an electron and a positron. An equivalent description is that a positron moving backward in time emits a photon backward in time and then reverses its direction, emitting another photon, to become an electron moving forward in time. Rotate that Feynman diagram 180 degrees, and a common way to describe the same interaction (from the perspective of the mathematics of quantum electrodynamics) is that an electron and a positron collide and annihilate into two photons. But that rotated diagram could also equivalently be described as an electron moving forward in time, emitting a photon, and then emitting another photon as it reverses its direction to go backward in time. All four descriptions describe equivalent Feynman diagrams with the same probability of occurring, and they count as a single possible type of interaction when you add up the probabilities of all possible ways that something can happen to evaluate the "path integral" that tells you the probability of a particular change in the status quo.
(I would draw pictures of the Feynman diagram in question and its 180 degree rotated equivalent if I knew how to do that with this forum interface, but I am not so talented. Maybe someone else could drop in a couple of images into a post.)
* Another question you asked, which wasn't really answered squarely, is what we mean when we say that a particle (composite or fundamental) "decays".
Basically, a decay is a transformation into other particles that is not prohibited by energy conservation even in the absence of any mass-energy other than the rest mass of the decaying particle. In other words, a decay is an interaction that produces different particles, with the excess rest mass showing up as kinetic energy carried off by the decay products.
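A concrete example of this definition is free neutron beta decay (neutron to proton + electron + antineutrino). Using standard published rest masses (my numbers, in MeV/c^2, not anything from the thread), the rest mass of the neutron exceeds that of its decay products, so the decay is allowed and the difference comes out as kinetic energy:

```python
# Rest masses in MeV/c^2 (standard reference values, quoted for illustration)
m_neutron = 939.565
m_proton = 938.272
m_electron = 0.511
# (the antineutrino's mass is negligibly tiny)

# The "Q-value": rest mass lost in the decay, released as kinetic energy
q_value = m_neutron - (m_proton + m_electron)

print(round(q_value, 3))  # ~0.782 MeV; positive, so the decay can happen
```

A positive Q-value is exactly the "not prohibited by energy conservation" condition above; if the products weighed more than the neutron, the decay simply could not occur for a free neutron at rest.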
(I'm not sure whether a transformation of one particle into another that was kinetic-energy neutral would count as a decay; I can't think of any good examples at the moment.)
Interactions that convert one or more particles plus additional energy into different particles without the excess energy (i.e., the inverse of a decay interaction) are also possible, because all interactions except weak force interactions are perfectly reversible, and even weak force interactions are reversible subject to a small adjustment for CP violation encoded in the CKM matrix, which contains a number of physical constants pertinent to weak force interactions. But, so far as I know, English has no catchy exact antonym to the word "decay" as used in the technical physics sense I have just described. (If I am wrong and simply limited in my vocabulary, I will take no offense if a reader or forum participant supplies the missing word and makes us all wiser.)
* Anyway, and finally, your original question - how can a bound neutron be stable when a free one is not? - seems to have been lost somewhere along the thread, but you deserve an answer.
A nice understandable presentation answering that question can be found at Matt Strassler's blog called Of Particular Significance:
https://profmattstrassler.com/artic...-together/neutron-stability-in-atomic-nuclei/
In a nutshell, this is true in some atoms due to conservation of mass-energy. Sometimes a bound nucleus that includes a neutron has less mass as a single unit than the free neutron and the rest of the nucleus would have if broken into parts. In those cases, the bound neutron is stable.
It is possible for the whole to have less mass than the sum of its parts because the nuclear binding energy of the nuclei of atoms lighter than iron is a negative number (which is why energy is emitted in nuclear fusion involving these elements).
(If I were more nimble I would insert a chart of the binding energy per nucleon of the various chemical elements here, but I'm not so you'll just have to imagine a U-ish shape with hydrogen at one end, iron at the bottom of the U, and a right hand high end that extends up into numbered elements with no name.)
Quantum mechanics can seem to cheat mass-energy conservation in interactions like quantum tunneling and interactions that involve virtual or "off shell" particles, but these little short-term "energy lending" transactions (to horribly bastardize the correct technical description of these processes in the interest of a heuristic) don't change the iron law that the mass-energy present in an initial state must always exactly equal that of the end state. (Indeed, this iron law of mass-energy conservation is the foundational principle of the two main ways the equations of particle physics can be written - Hamiltonians and Lagrangians - both of which are also used in classical mechanics.)
On the other hand, if you have a big nucleus (heavier than iron), the nuclear binding energy is positive, so the parts have less mass than the whole and splitting the atom releases energy. In those atoms, beta decay can be possible and the bound neutrons are not entirely stable, even though they may still be much more stable than free neutrons. So the statement that a bound neutron is stable is only partially true. Neutrons bound in stable isotopes of light elements really are stable, while other bound neutrons are merely "metastable," with a stability that depends on the particular isotope in question.
* Let's take the question one step further than Strassler does, because negative energy sounds like a pretty mysterious thing, and really, in this circumstance it isn't. The fact that binding energy is negative is simply a product of our arbitrary (but natural, convenient, and intuitively helpful) choice of where to put zero in the coordinate system by which we measure nuclear binding energy.
This negative binding energy is possible because the force that binds protons and neutrons together in an atomic nucleus is just a second-order effect of the strong force that binds the quarks within individual protons and neutrons together, and that strong force binding is the source of most of the mass of a proton or neutron.
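A rough back-of-the-envelope check on the claim that the strong force field supplies most of a nucleon's mass (using approximate published current-quark masses; these are my illustrative numbers, and quark masses are scheme-dependent, so treat them as ballpark figures only):

```python
# Approximate current-quark masses in MeV/c^2 (rough standard values)
m_up = 2.2
m_down = 4.7
m_proton = 938.272  # proton rest mass, MeV/c^2

# A proton is two up quarks and one down quark (uud)
quark_mass = 2 * m_up + m_down          # ~9.1 MeV
field_fraction = 1 - quark_mass / m_proton

print(round(quark_mass, 1))       # ~9.1 MeV: roughly 1% of the proton's mass
print(round(field_fraction, 3))   # ~0.99: the rest is strong-field energy
```

So the quarks' own rest masses account for only about one percent of the proton; the remaining ~99% is energy of the gluon fields (and quark kinetic energy) bound up inside it, which is why shuffling a little of that field energy around between nucleons can shift nuclear masses measurably.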
When arranged in the right kind of light atomic nucleus, the amount of strong force energy per quark necessary to hold all of the quarks in the entire nucleus together in the right configuration is slightly less than the amount of strong force energy per quark necessary to do so in a free proton or neutron (heuristically, it is helpful to think of it as similar to the way you need less material to build two buildings that share a party wall than to build two free-standing buildings).
If we instead measured the total strong force energy in a nucleus by starting with the total mass of the nucleus and then subtracting out the mass of the quarks in the nucleus and the mass-energy contribution of the electromagnetic fields in the nucleus, the gluon field mass-energy of the system would be positive in both the bound state and the divided state, and the difference in gluon field mass-energy would be equal to the difference in binding energy. The physics would be the same and the description equivalent, but the coordinate system used to measure the change in energy would have a true zero baseline, rather than the arbitrary baseline used for nuclear binding energy.
The usual zero for nuclear binding energy is analogous to measuring temperature in degrees Celsius, while the approach in the previous paragraph is analogous to measuring temperature in kelvins. So even though nuclear binding energy is negative in light nuclei, that doesn't really mean there is negative energy in the sense in which we talk about the hypothetical and probably unphysical concept of "true negative energy" in the theoretical physics of general relativity.