A Physical properties of the vacuum in GR vs. QFT

Thread starter: Iskandarani
Hi,

I'm trying to reconcile the view of the vacuum in General Relativity with the view in Quantum Field Theory. In GR, the vacuum is the stage for spacetime geometry, described by the metric tensor gμν. In QFT, it's a dynamic sea of virtual particles and zero-point energy.

My question is: Do these two views imply that the vacuum has measurable physical properties, like an impedance, permittivity, or permeability? In GR, are these properties considered emergent from the geometry, or are they fundamental constants imposed upon the geometry? For example, if spacetime curvature changes dramatically near a black hole, does the vacuum's local impedance to electromagnetic waves also change, or is it assumed to be constant everywhere?
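For concreteness, by "impedance" I mean the usual free-space value built from the electromagnetic constants:

```latex
Z_0 = \sqrt{\frac{\mu_0}{\varepsilon_0}} \approx 376.73\ \Omega,
\qquad
c = \frac{1}{\sqrt{\mu_0\,\varepsilon_0}},
```

so the question is whether quantities like Z0, ϵ0, and μ0 could acquire position dependence in curved spacetime.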
 
In GR the local tangent space at every point is the flat Minkowski geometry of SR. This is often, therefore, a good approximation for a local region of spacetime.

To reconcile QFT with GR you'll need a quantum theory of gravity.
 
Iskandarani said:
Do these two views imply that the vacuum has measurable physical properties, like an impedance, permittivity, or permeability? In GR, are these properties considered emergent from the geometry, or are they fundamental constants imposed upon the geometry?
Yes, gravitational fields in vacuum or a medium can be viewed as inducing effective dielectric properties therein. Here's the abstract of a 1980 paper on the subject:
Dielectric tensor and magnetic permeability in the weak field approximation of general relativity
by Pegoraro, F. ; Radicati, L. A.
Abstract: A treatment of classical electromagnetic theory in the presence of a weak gravitational field, both in a vacuum and in a material medium, is presented in terms of an effective dielectric and magnetic permeability tensor. It is shown that the gravitational red shift can be interpreted as the work done by the electric field of the light ray against the gravitationally induced polarization current. The dispersion relation is derived for an electromagnetic wave in a medium and it is shown that it depends upon the polarization state of light.
(https://ui.adsabs.harvard.edu/abs/1980JPhA...13.2411P/abstract)
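As a concrete illustration of the same idea (a standard result often attributed to Plebanski, sketched here for a static metric rather than quoted from the paper above), vacuum Maxwell theory on a curved background can be rewritten as flat-space electrodynamics in an effective medium:

```latex
\varepsilon^{ij} \;=\; \mu^{ij} \;=\; -\sqrt{-g}\,\frac{g^{ij}}{g_{00}},
\qquad
\text{weak field: }\;
\varepsilon = \mu \;\simeq\; 1 + \frac{2GM}{rc^{2}} .
```

Note that ε and μ scale together, so the effective index n = √(εμ) grows near a mass (consistent with gravitational light bending), while the effective impedance √(μ/ε) stays at its flat-space value, at least in this weak-field picture.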
 
Iskandarani said:
I'm trying to reconcile the view of the vacuum in General Relativity with the view in Quantum Field Theory.
Currently we don't have a good reconciliation of this. The best that we have is QFT in curved spacetime, but that's still not a complete reconciliation and leaves many open questions unresolved.

Iskandarani said:
In GR, the vacuum is the stage for spacetime geometry, described by the metric tensor gμν.
Sort of. "Vacuum" in GR just means the stress-energy tensor is zero. The fact that there are solutions of the Einstein Field Equation that are vacuum in this sense but are still curved geometries (such as Schwarzschild spacetime) is an indication that "vacuum" does have some kind of structure; but the metric tensor describes the spacetime geometry whether the solution is vacuum or not.

Iskandarani said:
In QFT, it's a dynamic sea of virtual particles and zero-point energy.
Not really; this view, while it sometimes can be useful heuristically, has lots of problems and should not be relied on. See this Insights article:

https://www.physicsforums.com/insights/misconceptions-virtual-particles/
 
renormalize said:
gravitational fields in vacuum or a medium can be viewed as inducing effective dielectric properties therein
There's another line of research that might be relevant to what the OP was asking about: work in, IIRC, the 1960s, pursued by Sakharov among others, drawing an analogy between gravity as an emergent property of spacetime and elasticity as an emergent property of solids, with Newton's gravitational constant as the analogue of Young's modulus and the Poisson ratio. This is discussed in item 6 of Box 17.2 in Misner, Thorne & Wheeler. I'm not sure this research ever led anywhere, though.
 
Thank you both for these clarifications.

@PeroK, you're right that the ultimate answer lies in a quantum theory of gravity. My question is more about the nature of the vacuum that such a theory would need to describe.

@PeterDonis, thank you for the correction on the meaning of "vacuum" in GR and the warning about the "sea of virtual particles" analogy. Your point that a vacuum solution can still have a rich geometric structure (like Schwarzschild spacetime) is what I find most fascinating. It implies that even when "empty" of matter and energy, spacetime itself possesses structure. This brings me back to my core question about its physical properties, which I see @renormalize and your other post address directly.


The paper treating a gravitational field as an effective dielectric medium, and Sakharov's idea of gravity as an emergent property analogous to elasticity, seem to be two sides of the same coin. Both strongly suggest that we can think of spacetime not as a fundamental, abstract stage, but as a medium with tangible physical properties.

This raises a bold question: If we take these analogue models seriously, should we re-evaluate what fundamental constants represent? For instance, are constants like the permittivity of free space (ϵ0), the permeability (μ0), and even the gravitational constant (G) truly fundamental, or are they better understood as constitutive properties of the vacuum medium itself, much like how bulk modulus and density are constitutive properties of a material?

Following that thought, could it be that General Relativity and Quantum Field Theory are not describing two different vacuums, but are rather two different—and currently unreconciled—effective theories describing the large-scale (geometric) and small-scale (quantum) behavior of this one physical medium?
 
Iskandarani said:
This raises a bold question
Iskandarani said:
Following that thought,
Of course these sorts of speculations have been made. The problem is, nobody has yet come up with a theoretical model that incorporates them and makes testable predictions. One way of looking at the search for a viable theory of quantum gravity is as the search for such a model.
 
PeterDonis said:
Of course these sorts of speculations have been made. The problem is, nobody has yet come up with a theoretical model that incorporates them and makes testable predictions. One way of looking at the search for a viable theory of quantum gravity is as the search for such a model.
That's the perfect way to frame the challenge. A model is only as good as its testable predictions, and I completely agree that this is where these "emergent" or "analogue" ideas must prove their worth to move from speculation to physics.

This brings up a clarifying question for me about the nature of such a prediction. When searching for a viable model of this type, is the primary goal to find new phenomena that would cause deviations from General Relativity or the Standard Model in some high-energy experiment?

Or, could a model also demonstrate its predictive power by successfully calculating the values of the fundamental constants that the Standard Model must currently treat as experimentally measured inputs? For example, if a theory of the vacuum as a physical medium could derive the mass hierarchy of the elementary particles from its own internal structure (using, say, topological configurations as a hypothetical mechanism), would that be considered a powerful and falsifiable prediction, even if it didn't predict a brand new particle?
 
Iskandarani said:
When searching for a viable model of this type, is the primary goal to find new phenomena that would cause deviations from General Relativity or the Standard Model in some high-energy experiment?
That would be the best way, yes.

Iskandarani said:
could a model also demonstrate its predictive power by successfully calculating the values of the fundamental constants that the Standard Model must currently treat as experimentally measured inputs?
That's something that has been attempted multiple times, but nobody has yet succeeded. If it could be done, that would be a reason to take such a model seriously, yes. It's a weaker reason than an actual new experimental prediction because the values that need to be produced are already known.
 
  • #10
Iskandarani, I would suggest you first study something similar but much simpler and much better understood: contrast the "vacuum" in classical electrodynamics with the vacuum in quantum electrodynamics. The former is any configuration of EM fields for which the charge density is zero. Once you learn something about this, your original question will look much simpler to you.
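For instance (sketching the standard one-loop result, not anything specific to this thread), QED corrects the classical vacuum Lagrangian with the Euler–Heisenberg terms, so the quantum vacuum already behaves like a weakly nonlinear medium (Heaviside–Lorentz natural units, weak slowly varying fields):

```latex
\mathcal{L}_{\text{eff}} \;\simeq\;
\frac{1}{2}\!\left(\mathbf{E}^{2}-\mathbf{B}^{2}\right)
\;+\;
\frac{2\alpha^{2}}{45\,m_e^{4}}
\left[\left(\mathbf{E}^{2}-\mathbf{B}^{2}\right)^{2}
+ 7\left(\mathbf{E}\cdot\mathbf{B}\right)^{2}\right],
```

which predicts, for example, vacuum birefringence in strong magnetic fields, an effect with no counterpart in the classical vacuum.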
 
  • #11
PeterDonis said:
That would be the best way, yes.


That's something that has been attempted multiple times, but nobody has yet succeeded. If it could be done, that would be a reason to take such a model seriously, yes. It's a weaker reason than an actual new experimental prediction because the values that need to be produced are already known.
@PeterDonis: That's a very helpful distinction between predicting new phenomena versus calculating existing constants. I understand why post-diction can be viewed as weaker evidence; there's always a risk that a model is simply complex enough to be fine-tuned to fit known data.

This raises a question about how we would evaluate the success of such a model. Let's imagine a hypothetical theory, starting from a few principles about the vacuum's structure, that could derive the entire pattern of the 12 fundamental fermion masses using, for example, only 2 or 3 of its own input parameters.

While the mass values are known, the Standard Model requires a separate free parameter (a Yukawa coupling) for each one. In this scenario, would the model's ability to explain this complex pattern of ~12 seemingly arbitrary numbers with a vastly simpler and more constrained theoretical structure still be considered 'weaker evidence'? Or would this dramatic reduction in free parameters be seen as powerful evidence in its own right, suggesting the model has captured a deeper layer of reality?
 
  • #12
Iskandarani said:
would this dramatic reduction in free parameters be seen as powerful evidence in its own right
Still not as powerful as predicting a new effect that hasn't been observed before, and having that prediction confirmed.

Note that theories along the lines you suggest already exist--for example, there are extensions of the Standard Model that claim to derive the SM symmetry group as a subgroup of some larger simple group, such as SU(5) or SO(10), and therefore reduce the number of free parameters (since the standard SM parameters are now derived from a smaller number of group parameters for the simple group). And so far none of them have worked out--they all have made predictions that have been falsified by further data. That is a reason to be very cautious about such proposals, even if mathematically they seem to be huge improvements.
 
  • #13
@Demystifier. Thank you, I appreciate the suggestion. If you have a favorite review/textbook section that walks through the hierarchy CED → QED (Heisenberg–Euler, running of α) → SM, and a standard treatment of Maxwell on curved backgrounds (constitutive-tensor viewpoint vs. "just" GR kinematics), I'd appreciate the citation. I think that ladder will help me frame the "vacuum properties" question within established theory.

@PeterDonis — the cautionary note is well taken.
To sharpen my understanding (and to keep this squarely within published work), could I ask for pointers on two things?
  1. Concrete case studies of “parameter-reduction” that failed. For example, reviews that lay out which specific predictions in simple GUTs (e.g., classic proton-decay channels and lifetimes in minimal setups) conflicted with later data, and how the community weighed those misses against the aesthetic appeal of unification/parameter economy.
  2. Methodology for weighing evidence. In practice, when a published framework yields tight, renormalization-group–stable relations that link several otherwise independent observables (masses, mixings, couplings) without extra dials to tune, and those relations later check out at the percent level, does that cross the bar for “predictive success”? Or is the consensus that only qualitatively new phenomena (a new line, resonance, or process) count as strong evidence, with parameter compression viewed as secondary?
I’m not advocating any specific model here; I’m trying to understand the criteria by which the community distinguishes genuine explanatory gain from retrofitting, with some canonical references to read.
 
  • #14
Iskandarani said:
I’m trying to understand the criteria by which the community distinguishes genuine explanatory gain from retrofitting, with some canonical references to read.
I don't think there is a "canonical" process by which the community does this. Every physicist has their own opinion about it. Consider the ongoing debate about string theory and whether it counts as giving any "genuine explanatory gain": opinions of physicists vary from one extreme (yes, it does, and the case that it does is such a slam dunk that anyone who doesn't accept it is just an ignorant contrarian) to the other (no, it doesn't, string theory is a complete waste of time and the fact that it's sucking up so much time and effort is a huge problem). And there is no one in a position to adjudicate the debate and reach a resolution.
 
  • #15
Iskandarani said:
@Demystifier. Thank you, I appreciate the suggestion. If you have a favorite review/textbook section that walks through the hierarchy CED → QED (Heisenberg–Euler, running of α) → SM, and a standard treatment of Maxwell on curved backgrounds (constitutive-tensor viewpoint vs. "just" GR kinematics), I'd appreciate the citation. I think that ladder will help me frame the "vacuum properties" question within established theory.

@PeterDonis — the cautionary note is well taken.
To sharpen my understanding (and to keep this squarely within published work), could I ask for pointers on two things?
  1. Concrete case studies of “parameter-reduction” that failed. For example, reviews that lay out which specific predictions in simple GUTs (e.g., classic proton-decay channels and lifetimes in minimal setups) conflicted with later data, and how the community weighed those misses against the aesthetic appeal of unification/parameter economy.
For the most part, the problem is that string theory and GUTs and other BSM models simply don't make predictions about parameters at all, even though they should be able to do so in principle.

Attempts to fix parameters within the supersymmetry paradigm have a long history: predictions of new supersymmetric particles at particular masses, "just around the corner" and soon observable, are ruled out, after which a new set of "just around the corner" predictions is made. See the second-to-last update from Woit at this link noting this reality. A preprint of a commissioned review article for the Encyclopedia of Particle Physics, submitted on May 3, 2025 by Hyun Min Lee and entitled "Supersymmetry and LHC era", updates the experimental bounds on supersymmetry (SUSY) theories based upon the Large Hadron Collider (LHC) data. A paper that all but spells out this problem clearly is Luis A. Anchordoqui, Ignatios Antoniadis, Karim Benakli, Jules Cunat, Dieter Lust, "SUSY at the FPF", arXiv:2410.16342 (October 21, 2024).

I have a running list of more interesting papers attempting this (far from comprehensive), none of which are particularly precise. Many were subsequently published, but it's more convenient to cite the arXiv links, which are never paywalled and are the first version you encounter.

Tejinder P. Singh, "Fermion mass ratios from the exceptional Jordan algebra" arXiv:2508.10131 (August 13, 2025) (90 pages).

Jean-Marcel Rax, "Gravity induced CP violation in mesons mixing, decay and interference experiments" arXiv:2503.09465 (March 12, 2025).

Boris Altshuler, "Quark mixing angles and weak CP-violating phase vs quark masses: potential approach" arXiv:2303.16568 (March 29, 2023) (substantial text overlap with arXiv:2210.09780).

Nobuhito Maru, Yoshiki Yatagai, "Fermion Mass Hierarchy in Grand Gauge-Higgs Unification" https://arxiv.org/abs/1903.08359

Van E. Mayes "All Fermion Masses and Mixings in an Intersecting D-brane World" https://arxiv.org/abs/1902.00983

Alexander Baur, Hans Peter Nilles, Andreas Trautner, Patrick K.S. Vaudrevange, "Unification of Flavor, CP, and Modular Symmetries"
https://arxiv.org/abs/1901.03251

Andrea Wulzer, "Behind the Standard Model" https://arxiv.org/abs/1901.01017

Junu Jeong, Jihn E. Kim, Se-Jin Kim, "Flavor mixing inspired by flipped SU(5) GUT" https://arxiv.org/abs/1812.02556

Gauhar Abbas, "A new solution of the fermionic mass hierarchy of the standard model" https://arxiv.org/abs/1807.05683

Emiliano Molinaro, Francesco Sannino, Zhi-Wei Wang, "Safe Pati-Salam" https://arxiv.org/abs/1807.03669

J. T. Penedo, S. T. Petcov, "Lepton Masses and Mixing from Modular S4 Symmetry" https://arxiv.org/abs/1806.11040

Yoshio Koide, Hiroyuki Nishiura, "Parameter-Independent Quark Mass Relation in the U(3)×U(3)′ Model" https://arxiv.org/abs/1805.07334

T. K. Kuo, S. H. Chiu, "Flavor Mixing and the Permutation Symmetry among Generations" https://arxiv.org/abs/1805.05600

M. Novello, V. Antunes, "Connecting the Cabbibo-Kobayashi-Maskawa matrix to quark masses" https://arxiv.org/abs/1804.00572

Astrid Eichhorn, Aaron Held, "Mass difference for charged quarks from asymptotically safe quantum gravity" https://arxiv.org/abs/1803.04027

HM Chan, ST Tsou, "The Framed Standard Model (I) - A Physics Case for Framing the Yang-Mills Theory?" https://arxiv.org/abs/1505.05472

Stephen F. King "A model of quark and lepton mixing"
https://arxiv.org/abs/1311.3295

J. Lemmon, "The origin of fermion families and the value of the fine structure constant" https://arxiv.org/abs/1307.2201

A less ambitious recent paper that tried to work out first generation SM fermion masses from self-interactions (not for the first time) is Eckart Marsch, Yasuhito Narita, "On the Lagrangian and fermion mass of the unified SU(2) ⊗ SU(4) gauge field theory" arXiv:2508.15332 (August 21, 2025) (13 pages).

Another less ambitious paper simply tries to calculate a maximum particle energy from quantum gravity considerations. Jarmo Mäkelä, "A Possible Quantum Effect of Gravitation" arXiv:2405.18502 (May 28, 2024).

Some papers focus on maximizing or minimizing some quantity such as:

Jesse Thaler, Sokratis Trifinopoulos, "Flavor Patterns of Fundamental Particles from Quantum Entanglement?" arXiv:2410.23343 (October 30, 2024)

Alexandre Alves, Alex G. Dias, Roberto da Silva, "Maximum Entropy Principle and the Higgs boson mass" (2015).

A review/analysis paper grappling with what GUT theories are and aren't inconsistent with observation is Giacomo Cacciapaglia, Aldo Deandrea, Konstantinos Kollias, Francesco Sannino, "Grand-unification Theory Atlas: Standard Model and Beyond" arXiv:2507.06368 (July 8, 2025).

A new review paper, which will be a chapter in an Encyclopedia of Particle Physics, summarizes various theories that have been advanced to explain the fundamental fermion masses in the Standard Model. While it isn't complete, its table of contents lays out some of the leading approaches and catalogues their introductions and failures:

2 Fermion masses and mixing angles
2.1 The Standard Model
2.2 Neutrino masses
Majorana neutrino masses
The seesaw mechanism
Type-I seesaw
Type-III seesaw
Type-II seesaw
Dirac neutrino mass
2.3 The data
3 In search of an organizing principle

4 Grand Unified Theories
4.1 SU(5) GUTs
4.2 SO(10) GUTs
5 Fermion masses from quantum corrections
5.1 Radiative fermion masses
5.2 Infrared fixed points
6 Composite Fermions
6.1 Massless composite fermions
6.2 Partial compositeness
7 Flavor Symmetries
7.1 The Froggatt-Nielsen Model
7.2 Variants and alternatives
8 Fermion masses in String Theory
8.1 Aiming at the SM from strings
8.2 Eclectic flavor symmetries from heterotic orbifolds
8.3 Flavor in models with D-branes
8.4 Metaplectic flavor symmetries from magnetized branes

Koide's 1981 rule for charged lepton masses is one of the few empirical successes of the genre to high precision, although there are multiple proposed theories about why it works. The value of the tau lepton mass predicted by Koide's rule is 1776.96894(7) MeV. The Particle Data Group's world average of the tau lepton mass is 1776.93 ± 0.09 MeV, a precision of slightly better than one part in 20,000. The experimentally measured tau lepton mass is consistent with the Koide's rule prediction made in 1981 at the 0.4 sigma level, even though the relevant masses were known much less precisely in 1981, and the experimental value has grown closer to the predicted value over time.
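Koide's relation is simple enough to check directly. Here is a minimal sketch (my own, using current PDG values for the electron and muon masses in MeV) that treats the relation as exact and solves it for the tau mass:

```python
from math import sqrt

# Koide's relation: (m_e + m_mu + m_tau) / (sqrt(m_e) + sqrt(m_mu) + sqrt(m_tau))**2 = 2/3.
# Treating it as exact and inserting the measured electron and muon masses
# turns it into a quadratic in x = sqrt(m_tau):
#   x**2 - 4*s*x + 3*(a**2 + b**2) - 2*s**2 = 0,
# with a = sqrt(m_e), b = sqrt(m_mu), s = a + b.
m_e, m_mu = 0.51099895, 105.6583755  # MeV (PDG)

a, b = sqrt(m_e), sqrt(m_mu)
s = a + b
x = 2 * s + sqrt(6 * s**2 - 3 * (a**2 + b**2))  # larger root is the physical one
m_tau = x**2

Q = (m_e + m_mu + m_tau) / (s + x) ** 2
print(f"predicted m_tau = {m_tau:.3f} MeV")  # ~1776.97, vs. PDG 1776.93 +/- 0.09
print(f"Koide Q = {Q:.9f}")                  # 2/3 by construction
```

The smaller root of the quadratic also satisfies the relation but gives a tau lighter than the muon, so it is discarded.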

Efforts have been made to generalize Koide's rule for charged lepton masses to quarks, such as A. C. Kleppe's preprint entitled "Quark mass matrices inspired by a numerical relation", which explores how the rule can be extended to quarks. A similar approach is taken in Alejandro Rivero, "An interpretation of scalars in SO(32)", arXiv:2407.05397 (July 7, 2024) (the published version is open access and appeared on October 15, 2024).

But part of the problem with any proposed explanation of the quark masses is that, unlike the electron, muon, and tau lepton masses, the quark masses aren't measured nearly so precisely (the first five because the quarks are confined in hadrons, and the top because its mass is hard to measure even though it can be measured directly), so you can't definitively confirm or rule out any proposal for explaining them. The theory space of proposals that can fit the quark masses to ± 2 sigma (the usual standard in high energy physics for a theory being consistent with the experimental evidence) is huge. The relative precision of the various fundamental physics constants and their values (some of which are a few years out of date) is summarized here:

[Image: table of the relative precisions and values of the fundamental physics constants]

There is also no consensus on whether a proposal to explain these masses should fit some formula at a single energy scale (since masses and other Standard Model parameters run with energy scale), or if it should, like Koide's rule, apply to "pole masses" (in the case of c, b, and t quarks) which are unique, and if so why.

Not long before the Higgs boson was definitively discovered and its properties were measured, one paper listed dozens of different predictions for its mass. Some were inevitably consistent with the final result, because some paper had guessed at almost the entire plausible range of Higgs boson masses.
 
  • #16
@ohwilleke Thank you for taking the time to put together such a detailed, source-rich reply. That was a lot of work, and it shows. The mix of reviews, historical context, and up-to-date bounds is exactly what I needed to orient myself. I'll treat your post as my reading list and work through it systematically, paying attention to the scheme/scale choices and the quark-mass caveats you highlighted. Sincere thanks; this is extremely helpful.
Again, many thanks for the exceptionally thorough response and the generous list of references. I really appreciate the time and care that went into curating both the big-picture reviews and the concrete case studies (SUSY expectations vs. LHC bounds, GUT constraints, mass-relation pitfalls, etc.). The practical cautions you flagged (scale/scheme dependence, hadronic uncertainties, and the risk of post-hoc retuning) are exactly the checkpoints I needed. I'm downloading all the PDFs and adding them to my BibTeX library; I see your post as a roadmap and will work through the sources in depth. Grateful for the help. So much new content 🙃🌀 1000x thanks
 