The values of the fundamental constants

In summary, the participants in this thread are asking why the fundamental constants have the values we observe and not others. The opening post suggests that ##\alpha## might be a numerical factor intrinsic to the electrodynamic laws, and that a reductio ad absurdum argument could determine its value.
  • #1
slow
I started this thread after reading another one, begun by Serra Nova here in the forum, available at the following address.
https://www.physicsforums.com/threads/fundamental-physical-constants.938267/

The opening post of that thread asks whether the terms called fundamental constants are really constant. Reflecting on that led me to the question in this thread's title: why do the fundamental constants have the values we know and not others? I want to lay out the little, imprecise reasoning I managed, hoping to receive help to filter my ideas, discard what should be discarded, and correct what needs correcting. Here it is.

The task of looking for an answer always takes place in a context. We have learned that quantum electrodynamics (QED) is the most reliable theory to date, so it is advisable to base ourselves on something that comes from that theory. It is also advisable to start with a mathematically simple expression.

There is a mathematically simple expression that comes from QED: the fine-structure constant ##\alpha##.
[tex]\displaystyle \alpha=\dfrac{e^2}{2 \ \varepsilon_o \ c \ h} [/tex]
The method of reductio ad absurdum is mathematically valid for demonstrating something. The way to apply it in this case is to treat as variables the four terms on the right-hand side that we normally accept as fundamental constants. That means treating ##\alpha## as a function of four variables: ##e##, ##\varepsilon_o##, ##c##, ##h##.
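As a sanity check on the formula above, one can plug in the CODATA 2018 SI values (assumed here for illustration; since the 2019 SI redefinition, ##e##, ##h## and ##c## are exact by definition and ##\varepsilon_o## is measured) and recover the familiar 1/137:

```python
# Numerical check of alpha = e^2 / (2 * eps0 * c * h)
# CODATA 2018 values; e, h and c are exact by SI definition, eps0 is measured.
e = 1.602176634e-19       # elementary charge, C
h = 6.62607015e-34        # Planck constant, J s
c = 299792458.0           # speed of light, m/s
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m

alpha = e**2 / (2 * eps0 * c * h)
print(alpha, 1 / alpha)   # roughly 0.00729735 and 137.036
```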

1. The term ##\alpha## is dimensionless, so we cannot exclude the possibility that ##\alpha## is a numerical factor intrinsic to the electrodynamic laws. If that were the case, the numerical value of ##\alpha## should emerge as the result of a theorem deduced from the electrodynamic laws, in a purely theoretical way and independently of all experimental data: pure theory, which with a theorem gives the value of ##\alpha##. In that case the term ##\alpha## could not be treated as a variable in the reductio method, and for that reason the four variables would be forced to adopt a tetrad of values consistent with the value of ##\alpha## imposed by the electrodynamic laws. The reductio method would then have a legitimate framework of application. Nobody has yet published a theorem that yields the value of ##\alpha## from the electrodynamic laws. What shall we do then? Treat ##\alpha## as a constant or as a variable? Many physicists believe such a theorem can be formulated and dedicate great effort to it. Let us trust them and decide to treat ##\alpha## as a constant in the reductio method.

2. We know that ##\varepsilon_o## and ##c## are interdependent, because ##\varepsilon_o## appears in the electrodynamic expression for ##c##.
[tex]\displaystyle c=\dfrac{1}{\sqrt{ \mu_o \ \varepsilon_o} }[/tex]
Now we face two really fundamental questions.
Is there complete interdependence, that is, does each variable depend on the other three? The electric field and the magnetic field are interdependent, so the corresponding variables are too. The reasonable assumption is that there is complete interdependence among the four variables. In that case there is only one tetrad of values consistent with the value of ##\alpha##. The method of varying the short list and demonstrating that the variation is absurd then leads to a univocal response, that is, a unique and unambiguous one.
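A quick numerical illustration of that interdependence, a sketch using the measured SI values (both ##\mu_o## and ##\varepsilon_o## are measured quantities since the 2019 SI redefinition):

```python
import math

# Check c = 1 / sqrt(mu0 * eps0) with CODATA 2018 SI values.
mu0 = 1.25663706212e-6    # vacuum permeability, N/A^2
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m

c = 1 / math.sqrt(mu0 * eps0)
print(c)                  # close to the defined 299792458 m/s
```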

The above does not answer the initial question of this thread. It simply shows that there is hope of reaching an answer based on the laws of electrodynamics.
 
  • #2
slow said:
So we can not exclude the possibility that ##\alpha## is a numerical factor proper to the electrodynamic laws. In case that happens, the numeric value of ##\alpha## should appear as a result of a theorem deduced from the electrodynamic laws, in a purely theoretical way and independent of all experimental data.
Hmm, why do you say this? This statement doesn’t seem right to me and I don’t know of any reference that supports it.

slow said:
Is there or not complete interdependence, that is to say that each variable depends on the other 3 ?
In SI units they are interdependent. In other unit systems one or more of those constants may not exist (meaning they have a numerical value of 1 and are unitless).

slow said:
The method of varying the short list and demonstrating that the variation is absurd leads to a univocal response, that is, a unique and unambiguous response.
What?
 
  • #3
You can get rid of all dimensionful constants by choosing different units, but the dimensionless constants stay the same, and currently we do not have any theory to predict them. We can measure them, but we don't know why they have the values we measure.
 
  • #4
Hello Dale, hello mfb. The way you have pointed out my mistakes is useful to me. I appreciate your presence in the thread and I begin to feel that physicsforums is a good house.
 
  • #5
I think you forgot big G, ##\lambda##, and a few dozen other free parameters from the Standard Model.

My understanding is that some people don't even think it's a question to be answered, i.e. you cannot do statistical analysis and assign probabilities with only one universe. Others think that a theory of everything will show us that all the free parameters are derivable from that theory, where they have those values by necessity. Others think that they do indeed show fine-tuning and that we need a multiverse and the law of large numbers to explain them. Others believe they point to theological conclusions.

There are numerous theories proposed in which the "constants" change with time, but I'm not sure of any consensus acceptance of these theories.

My understanding is that as we go to higher and higher energies in particle accelerators, the discovery of new particles, or the lack thereof, will tell us more about the apparent fine-tuning. But I am not sure on this point.
 
  • #6
slow said:
The method of reduction to the absurd is mathematically valid to demonstrate something.
I'm not sure I'm following your reasoning, but it could just be me; I don't know QED. If you want to use reductio ad absurdum to show that the "constants" are actually variables, then it seems to me that you should treat them as constants and show that the conclusion about the fine-structure "constant" is absurd (i.e. that the math is contradictory). I could be wrong.
 
  • #7
For laymanB. First of all, thank you for making yourself present and for summarizing the current state of the subject in a few sentences.

I am grateful that Dale, mfb and you have pointed out the errors in the initial post of this thread. Normally I do not add argumentation when my first idea is wrong, but out of courtesy I feel the need to answer something you asked me.
laymanB said:
I think you forgot big G, ##\lambda##, and a few dozen other free parameters from the Standard Model.

If you want to use the reduction to absurdity way of reasoning to show that the "constants" are actually variables, [...]
Regarding the constants not included in ##\alpha##, I agree with you: they cannot be ignored. I have taken into account only the four that appear in ##\alpha## because I am interested in a first attempt based on something reliable, without resorting to innovations inspired by cosmology, the anthropic hypothesis, the multiverse, etc. If the attempt based on electrodynamics fails, then attempts with riskier hypotheses seem even less likely to succeed.

The idea of my initial post was not to show that the terms taken as constants are really variables; that exceeds my focus. I do not ask whether they vary with spatio-temporal, astronomical or cosmic circumstances, the local intensity of the fields, etc. My concern was the following. Are all the constants interlinked, like the gears of a clock, so that the position of one gear cannot vary without all the other positions varying? The clock can be subject to the influence of gravity, which changes its rate, or of speed, which does the same in an inertial system, or of any circumstance that modifies the rate without disassembling the clock. While the clock is assembled, the gears remain interlinked, and there is only one way to see a given time on its face: when each gear is in a specific intrinsic position. I use the adjective intrinsic to indicate that the position refers to the clock's own coordinate system, fixed to its housing.

Suppose that some theorems, not yet formulated, are possible in electrodynamics. And suppose these theorems determine two details. 1) The theoretical value of ##\alpha##. 2) A mathematical link between the 4 terms that appear in ##\alpha##. In that case we would have a clock of four gears and a single way to read the time ##\alpha## on its face. In such a context, the reductio method would begin by denying uniqueness, assuming that there is more than one way to obtain the value ##\alpha##. We would then introduce a ##\delta## in each term, perform a variational analysis, and arrive at an absurdity, that is, at the conclusion that the four ##\delta## must be zero in order not to violate the physical laws. That, more or less, is the idea that was haunting my head at the start of the thread.

I had also been interested in another detail. Suppose the theorems are formulated and the theoretical value of ##\alpha## differs by 0.03% from the accepted empirical value. I specify a percentage simply to frame the question. A difference of that order seems excessive compared to the precision of electrodynamics. Would we be obliged to discard the theorems outright?
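To put that 0.03% in perspective, here is a rough comparison against the experimental precision of ##\alpha## (the CODATA relative uncertainty is of order 10⁻¹⁰; the numbers below are illustrative, not authoritative):

```python
alpha_exp = 7.2973525693e-3        # CODATA 2018 value of alpha
rel_uncertainty = 1.5e-10          # approximate relative standard uncertainty

alpha_theory = alpha_exp * (1 + 3e-4)   # hypothetical theorem off by 0.03%
rel_diff = abs(alpha_theory - alpha_exp) / alpha_exp
print(rel_diff / rel_uncertainty)  # the gap spans millions of error bars
```

So a 0.03% discrepancy would not be a borderline case: it would sit enormously far outside the measured error bar.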

That's what I was thinking when I started the thread.
 
  • #8
slow said:
2) A mathematical link between the 4 terms that appear in α
Which 4?
##\alpha = \frac{1}{4 \pi \varepsilon_0} \frac{e^2}{\hbar c} = \frac{\mu_0}{4 \pi} \frac{e^2 c}{\hbar} = \frac{k_\text{e} e^2}{\hbar c} = \frac{c \mu_0}{2 R_\text{K}} = \frac{e^2}{4 \pi}\frac{Z_0}{\hbar}##
This is just the SI. There are so many more ways to express the fine-structure constant with dimensionful parameters:
##\alpha = \frac{e^2}{\hbar c}## in cgs.
##\alpha = \frac{e^2}{4 \pi}## in natural units.
##\alpha = \frac{1}{c}## in atomic units.

All these equations are equally valid - in their respective unit systems. But unit systems are arbitrary man-made constructs that have no impact on physics. The only interesting thing here is ##\alpha##, which has the same value in every unit system. All the dimensionful constants on the right side are just unit conversion factors.
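As a numerical sketch of that point (SI values assumed), the SI expressions above can be checked and all give the same dimensionless number:

```python
import math

# CODATA 2018 SI values
e = 1.602176634e-19        # elementary charge, C
h = 6.62607015e-34         # Planck constant, J s
hbar = h / (2 * math.pi)   # reduced Planck constant
c = 299792458.0            # speed of light, m/s
eps0 = 8.8541878128e-12    # vacuum permittivity, F/m
mu0 = 1 / (eps0 * c**2)    # from the relation mu0 * eps0 * c^2 = 1
R_K = h / e**2             # von Klitzing constant

a1 = e**2 / (4 * math.pi * eps0 * hbar * c)
a2 = (mu0 / (4 * math.pi)) * e**2 * c / hbar
a3 = c * mu0 / (2 * R_K)
print(a1, a2, a3)          # all approximately 1/137.036
```

The dimensionful factors differ from one expression to the next, but the dimensionless result does not.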
slow said:
I had also been interested in another detail. Let us suppose that theorems are formulated and that the theoretical value of ##\alpha## differs by 0.03% from the accepted empirical value. I specify a percentage simply to frame the question. A difference of that order seems excessive compared to the precision of electrodynamics. Would the obligation be directly to discard the theorems?
The hypothesis would be ruled out clearly. A modification of it that predicts different values might still work.
 
  • #9
slow said:
That's what I was thinking when I started the thread.
OK, now I understand you better. Thanks for elaborating.
 
  • #10
I apologize for changing the perspective of the thread a bit. Please let me know if it would be better to open a new thread.

I was attracted to reading the following.

https://www.physicsforums.com/insights/struggles-continuum-part-6/

The text discusses the appearance of divergent series in QED and the dissatisfaction that some physicists feel, on formal grounds, with renormalization. That left me reflecting. I remembered reading some historical debates about QED and renormalization among prestigious and respected physicists and mathematicians. Whether with disgust or with pleasure, nobody found alarming reasons to object to QED or renormalization. The reading left me with a feeling summarized in the following analogy. A research group manages to build a system of three very precise lasers, mounted on fixed bases that guarantee the parallelism of the beams. The system is very refined, but it is also perfectly useless in practice, since it does not concentrate the lasers' energy at one point. Then some engineers modify the mounts so that the beams always converge. Is that criticizable? Is it defective? It does not seem so.

But a two-stage procedure, with a divergent QED and a renormalization that then rescues it, is not enough for the inquisitive spirit. That spirit seeks a one-stage convergent procedure. Yet if QED is well founded, within the quantum context the two stages are inevitable. "Then let's change the context!", the inquisitive minds protest. Can someone think of another context?

When thermodynamics, based on magnitudes such as pressure, volume, temperature, heat, enthalpy and entropy, was recast in another context, the new context was the molecular and atomic level, which is elementary with respect to the variables used by the founders of thermodynamics. When gravitation was recast in a different context, the new context was spacetime, which is elementary with respect to motion, energy, linear and angular momenta, fields, and everything that helps shape the metric. If the inquisitive quantum spirit is to proceed as in the other cases that history records, then the expectation is to investigate something that is elementary with respect to quantum properties. Can anyone think consciously and clearly about something like that? Or is the only hope that, in the middle of an investigation begun for other motives, evidence appears of something elementary, or underlying, with respect to the quantum properties?
 
  • #11
slow said:
Suppose that some theorems, not yet formulated, are possible in electrodynamics. And suppose these theorems determine two details. 1) The theoretical value of ##\alpha##.
I don’t think that there is any hope that such a theorem exists in QED. However, there is a hope that some future theory of everything would be able to derive the fine structure constant from first principles.

slow said:
2) A mathematical link between the 4 terms that appear in ##\alpha##.
This is not based on a theorem. This is simply a matter of definitions based on your system of units. There is no physical content to this. All of the physical content is in the dimensionless number.
 
  • #12
Dale said:
I don’t think that there is any hope that such a theorem exists in QED. However, there is a hope that some future theory of everything would be able to derive the fine structure constant from first principles.

This is not based on a theorem. This is simply a matter of definitions based on your system of units. There is no physical content to this. All of the physical content is in the dimensionless number.

Thank you very much Dale, once again, for correcting my mistakes.
 
  • #13
It's a matter of simplicity.

Consider a simpler case of an inertial system: x = vt, where x is displacement, v is velocity, and t is time. We can take the approach that this is precisely true for an inertial system, and then say we need two of the three variables to be fixed (standardized), and derive the third. The question "are two, or all three, varying in such a way that this equation is always exactly true?" is not something we can decide experimentally, so it is an improper question.

So we define a standard time: the period of a carefully designed pendulum. We define a standard length: the distance between two marks on a platinum bar in Paris. We notice that all the equations of physics are, to within experimental error, rather simple, simpler than under any other definition. Now the technology advances, and we discover that, according to the theory of relativity, the speed of light in a vacuum is very constant. Also, an isolated krypton-86 atom emits its radiation very stably. According to theory, that is, not according to our pendulum-platinum definitions of distance and time, which, for understandable reasons, are not stable enough to verify the theory. So we declare that the speed of light and the wavelength of the krypton atom are our new standards. All the equations of physics remain simple, so good.

But what if we come up with another clock, which according to our best theory, is linear in time, but which disagrees with our krypton clock in, let's say, a small, periodic manner, so small that, to within experimental error, all of the equations of physics (e.g. x=vt) can remain in their same simple form. The question of "which clock is correct?" is improper. The error may be due to any number of things - we don't understand the physics of one or both of our standard clocks, the speed of light should be taken to vary cyclically, or the equation x=vt for an inertial frame should be taken to be inexact. I say "taken to be" rather than "is", because while picking any of those choices is defensible mathematically, if our theories become less simple, we should want to reject such a choice.

The bottom line is simplification. We choose our "standards" because they allow great precision, and keep things simple, not because they are "constant". If we are faced with two competing standards, we have to question the whole chain of logic and pick and choose what to declare "fixed", guided by the simplicity principle. We may run into a conundrum, although that has not happened so far.

With regard to the fine structure constant, according to the best theories we have so far, using the standards we have chosen, it is a constant. This doesn't prove it's constant, only that declaring it to be constant keeps things simple. So far. If some future theory, in its simplest form, states that it is a constant, or better, gives a specific value that agrees with the measured value, then great. But a theory that, in its simplest form, rejects the idea that it is constant, should not be rejected out of hand, and to deal with it properly, the whole chain of logic should be re-examined.
 

1. What are the fundamental constants?

The fundamental constants are physical quantities that are believed to be universal and unchanging in nature. They represent the basic building blocks of our understanding of the physical world.

2. How many fundamental constants are there?

There is no single agreed count, but the Standard Model of particle physics contains roughly 26 free parameters; other well-known constants include the speed of light, Planck's constant, and the gravitational constant.

3. How are the fundamental constants determined?

The fundamental constants are determined through rigorous experimentation and measurement. Scientists use a variety of tools and methods to accurately measure these constants and refine their values over time.

4. Are the values of the fundamental constants constant?

Despite their name, the values of the fundamental constants are not necessarily constant. In fact, some scientists believe that these values may have varied in the early stages of the universe and may continue to change over time.

5. How do the values of the fundamental constants affect our understanding of the universe?

The values of the fundamental constants play a crucial role in our understanding of the physical laws that govern the universe. They help us make precise calculations and predictions in fields such as physics, astronomy, and chemistry.
