Confusion about 2016 Unruh paper - fluctuating vacuum energy density?

In summary, the conversation discusses a paper by Wang, Zhu, and Unruh which challenges the commonly held belief that the vacuum energy density is constant. The paper asserts that the vacuum energy density fluctuates wildly in time and space, and that these fluctuations drive the slow accelerating expansion of the universe. The original poster finds this hard to reconcile with the description of vacuum fluctuations in @A. Neumaier's Insight articles and Physics FAQ. The conversation also touches on the cosmological constant and its fate under renormalization, as well as the implications of the paper's findings and the role of vacuum fluctuations in the expansion of the universe.
  • #1
asimov42
TL;DR Summary
"The vacuum energy density is treated as a constant in the usual formulation of the cosmological constant problem. While this is true for the expectation value, it is not true for the actual energy density."
Hi all,

Just had a look at the 2016 paper by Wang, Zhu, and Unruh,

"How the huge energy of quantum vacuum gravitates to drive the slow accelerating expansion of the Universe," Qingdi Wang, Zhen Zhu, and William G. Unruh, Phys. Rev. D 95, 103504 – Published 11 May 2017

The paper states (Section III):

"The vacuum energy density is treated as a constant in the usual formulation of the cosmological constant problem. While this is true for the expectation value, it is not true for the actual energy density.

That is because the vacuum is not an eigenstate of the local energy density operator ##T_{00}##, although it is an eigenstate of the global Hamiltonian operator ##H##. This implies that the total vacuum energy all over the space is constant but its density fluctuates at individual points.
...
Furthermore, the energy density of the vacuum is not only not a constant in time at a fixed spatial point, it also varies from place to place. In other words, the energy density of vacuum is varying wildly at every spatial point and the variation is not in phase for different spatial points. This results in an extremely inhomogeneous vacuum."
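In symbols, as I read this claim: the expectation value of the energy density in the vacuum is constant, but its variance is not zero,
$$\langle 0|T_{00}(x)|0\rangle = \text{const}, \qquad \langle 0|T_{00}(x)^2|0\rangle - \langle 0|T_{00}(x)|0\rangle^2 \neq 0,$$
because ##|0\rangle## is an eigenstate of the global Hamiltonian ##H = \int T_{00}\,d^3x## but not of the local operator ##T_{00}(x)##.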

This seems to patently violate the description of vacuum fluctuations in @A. Neumaier's Insight articles and in his Physics FAQ. Also, since the fields are in the ground state everywhere in spacetime, how can energy density change? Lastly, the authors use a cutoff to recover the slow expansion... doesn't this break Lorentz invariance?

I'm very puzzled, but it's a paper with Unruh, so... clarifications welcome.

p.s. In a 2018 follow-on paper, the abstract states that "By treating the fluctuations in the vacuum seriously..." - I'm not clear what this means, either.

Thanks all.
 
Last edited:
  • #2
asimov42 said:
Summary: "The vacuum energy density is treated as a constant in the usual formulation of the cosmological constant problem. While this is true for the expectation value, it is not true for the actual energy density."

"How the huge energy of quantum vacuum gravitates to drive the slow accelerating expansion of the Universe," Qingdi Wang, Zhen Zhu, and William G. Unruh, Phys. Rev. D 95, 103504 – Published 11 May 2017

The arxiv preprint is here:

https://arxiv.org/abs/1703.00543
 
  • #3
asimov42 said:
Summary: "The vacuum energy density is treated as a constant in the usual formulation of the cosmological constant problem. While this is true for the expectation value, it is not true for the actual energy density."
The actual energy density is a quantum field, and one needs interpretation to say what it "is" in terms of observation. Observed is the expectation value, and it is obviously not constant, though it might be approximately constant when averaged over galactic scales, as in cosmology. A constant energy density would make an infinite total energy.
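In formulas (my restatement): for a spatially constant density ##\rho > 0## over an infinite universe,
$$E_{\text{total}} = \int_{\mathbb{R}^3} \rho \, d^3x = \infty ,$$
so only the energy density, not the total energy, can sensibly be a finite constant.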

The cosmological constant is just a contribution to the observable energy, and for lack of a good theory of quantum gravity, its fate under renormalization is questionable. Thus I'd regard all assertions as being tentative to speculative.
 
  • #4
Thanks @A. Neumaier - I'm a bit confused, however. In your FAQ and Insight article, you mention that the vacuum does not fluctuate - that it does not change in space or time.

Isn't the energy density defined by the ground state of all the fields in the vacuum state? And so isn't this simply the same case as you mention in your FAQ, where the energy density does not fluctuate, but it does not have a single well-defined value? The observations will fluctuate (multiple measurements with a nonzero standard deviation).

Is it not then the case that the observed energy density will fluctuate in space and time, but that this is due to measurement variance, and not to the dynamics (in space and time) of the vacuum itself?
 
  • #5
Let's also leave out the cosmological constant for now - in the paper by Wang et al. they note that the vacuum is not an eigenstate of the local energy density operator ##T_{00}## - this implies that energy density can vary (by huge amounts and over very short periods) in space and time in the vacuum state ...

The vacuum state may have a huge amount of energy (the standard 120-orders-of-magnitude issue, etc.) - if all the (known) fields are in their ground state, how can the energy density change over time anywhere in the vacuum? Changes in energy density require energy to move from one place to another over time (since the total energy is constant) ... this means dynamics, which contradicts @A. Neumaier's FAQ. And it seems to imply that a field in the ground state can 'give up' energy...

Should this also not mean that, e.g., because the energy density of the electron field is changing in time and space, I should be able to observe those fluctuations?
 
  • #6
1. Fluctuations are not events that happen but just a name for certain 2-point functions (see the formula below).

2. The universe is not in a vacuum state, hence vacuum fluctuations don't apply.
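To spell out point 1 in symbols (my notation, not a quote): what gets called a "vacuum fluctuation" of the energy density is a correlation function such as
$$\langle 0|\,T_{00}(x)\,T_{00}(y)\,|0\rangle \;-\; \langle 0|T_{00}(x)|0\rangle\,\langle 0|T_{00}(y)|0\rangle ,$$
i.e., a property of the (static) vacuum state, not an event unfolding in time.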
 
  • Like
Likes vanhees71 and weirdoguy
  • #7
I think the overall point in the paper is being missed.

@A. Neumaier - thank you again... Point 1 goes back to your FAQ, I believe? Could you clarify whether you are referring to your FAQ, i.e., that you are speaking about fluctuations in measurement statistics? See below - the authors really are discussing the vacuum only, so 2) above does not apply here (and vacuum fluctuations are applicable).
---

The Wang paper with Unruh asserts that a universe that is truly in a vacuum state would be expanding - they consider the vacuum state itself, noting that vacuum energy density (in an empty universe) will be "wildly varying [in time] at every spatial point."

They then use this extreme inhomogeneity of the vacuum energy density (their words) to draw physical consequences. By modifying the FLRW metric to allow the scale factor to have spatial dependence, they argue, for a simplified model, that in a universe that is a true vacuum it is the fluctuation in vacuum energy density that drives the slow accelerating expansion.
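Schematically, as I read the paper, the modified line element is of FLRW form but with a scale factor that depends on position as well as time:
$$ds^2 = -dt^2 + a^2(t,\mathbf{x})\left(dx^2 + dy^2 + dz^2\right),$$
so that different spatial points can expand and contract out of phase with one another.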

@A. Neumaier - so they are talking about the vacuum state only; I believe that vacuum fluctuations do apply (which is why the paper is based on them), and much of the work is about variation in space and time.

Can anyone comment on the authors' notion of the extreme inhomogeneity of the vacuum in the context of @A. Neumaier's FAQ - I'm lost because the two seem quite at odds... e.g., in @A. Neumaier's Insight article: "The vacuum is isotropic (i.e., uniform) in space and time and does not change at all."

Extreme inhomogeneity and isotropy are opposites - so what am I missing? Help!
 
  • #8
Last note: the final result in the paper is that "physically, [the] fluctuating features of ##\dot{a}/a## imply that, at any instant of time, if the space is expanding in a small region, it has to be contracting in neighboring regions; and at any spatial point, if the space is expanding now, it has to be contracting later."

This can only occur in a GR sense if there are huge fluxes of energy (and indeed the authors mention the vacuum energy flux, which, again, implies something 'happening' in time) ...
 
  • #9
Uh, isn't this just his way of solving the cosmological constant problem? I know Unruh has a different way of looking at the issue. Have you looked into that problem? Because this is what it seems like.
 
  • #10
Yes, the paper is an attempt to solve the cosmological constant problem.

Apologies all - I don't think I was clear with my questions about the paper:

1.) The article is written throughout in a way that makes it sound very much like energy is being redistributed in space and time in the vacuum state (this is attributed to vacuum fluctuations) - based on @A. Neumaier's helpful FAQ, and knowledge that the vacuum state is the ground state for all fields, this does not seem to be possible (or that vacuum fluctuations imply any spatiotemporal change at all).

Is it possible?

2.) If there are truly random fluctuations in the energy density of the vacuum (i.e., rapid or slow changes of the density in space and time), would this not also make the expansion rate of the universe a random variable?

That's hopefully more clear - thanks much all. And thanks @PeterDonis for posting the arXiv link (which I should have done).
 
  • #11
asimov42 said:
The article is written throughout in a way that makes it sound very much like energy is being redistributed in space and time in the vacuum state (this is attributed to vacuum fluctuations) - based on @A. Neumaier's helpful FAQ, and knowledge that the vacuum state is the ground state for all fields, this does not seem to be possible (or that vacuum fluctuations imply any spatiotemporal change at all).

I would describe what the article is saying differently. The article is saying that, at the classical level, i.e., when we're looking at the large scale dynamics of the universe using the Einstein Field Equation, up to now everyone has been assuming that the effective stress-energy tensor that appears in the EFE when quantum fields are involved is the expectation value of the corresponding quantum operator. The article is proposing that this is not correct, and that when other properties of the quantum vacuum state besides just the expectation value are taken into account, the effective stress-energy tensor is different. There is no requirement that those other properties must be interpreted as actual redistribution of energy in spacetime; and as @A. Neumaier pointed out, that interpretation is not really correct.
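In equation form (my paraphrase of the above): the standard assumption is the semiclassical Einstein equation
$$G_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^4}\,\langle \hat{T}_{\mu\nu}\rangle ,$$
i.e., only the expectation value of the quantum stress-energy operator is fed into the classical geometry; the paper's point is that this replacement discards information about how ##\hat{T}_{\mu\nu}## fluctuates about that expectation value.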
 
  • Like
Likes vanhees71
  • #12
Thanks @PeterDonis - that's very helpful and clarifies things significantly.

Given the above, I'm still uncertain about one thing (if I may bother you once more @PeterDonis): should the value of the global Hubble expansion rate ##H## in the paper be treated as a random variable? The result for ##H## appears to be exact, and I would expect that it would be if you 'integrated over' the fluctuations; I'm just having a bit of difficulty following the math to reach Eqn. (75) in the paper.
 
  • #13
asimov42 said:
should the value of the global Hubble expansion rate ##H## in the paper be treated as a random variable?

I don't think so. ##H## is a classical level parameter and at that level it should be a single number, not a random variable. The paper is simply arguing for a different classical level behavior of ##H## due to quantum field properties than previous models.
 
  • #14
Thanks so much @PeterDonis!

Really, really last question (and thanks again to both you and @A. Neumaier) - this may be difficult to answer: can one arrive at a classical level parameter exactly in this case by integrating or averaging "over the quantum behaviour" (for lack of a better term)? This is just a guess, since the vacuum state is isotropic, and possibly also because it is an eigenstate of the global Hamiltonian (so the total energy is fixed).

(@A. Neumaier might know the answer off the top of his head)
 
Last edited:
  • #15
asimov42 said:
can one arrive at a classical level parameter exactly in this case by integrating or averaging "over the quantum behaviour"?

That's basically what a classical parameter is.
 
  • #16
Thanks @PeterDonis - sorry (I'm sometimes far too dense), just to make sure I'm clear and that I didn't misspeak - to paraphrase:

A classical parameter is arrived at (determined) when one can eliminate the probabilistic behaviour of the quantum system (by integration or some other method) and end up with a single value rather than a distribution. Key here is that the parameter describes the system without being an approximation (unless computed as such), i.e., we haven't just 'swept the quantum bits under the rug' as it were.

Just wanted to verify, as I'd used the expression 'averaging over' above (which is not very precise at all), but I meant that in the sense of eliminating the probabilistic behavior, and not in the sense of computing an expectation over a quantity (which, of course, is still probabilistic). Probably obvious.

I also didn't realize how ambiguous English can be in certain cases 😊
 
  • #17
asimov42 said:
Key here is that the parameter describes the system without being an approximation

Any classical parameter and any classical equation is an approximation from the point of view of quantum mechanics. If quantum mechanics is the correct theory microscopically, then no classical parameter or equation can be exactly correct.
 
  • #18
So in this case, the global Hubble expansion rate is effectively then just an approximation ... although on average you might see space expanding according to the Hubble rate, in any portion of the universe the expansion rate will, in fact, be a random variable...
 
  • #19
asimov42 said:
So in this case, the global Hubble expansion rate is effectively then just an approximation

It is in the standard case too. In fact it is even in the case where we ignore all quantum effects and assume zero vacuum energy (i.e., zero cosmological constant/dark energy). This is because the whole concept of a "global Hubble expansion rate" is based on the assumption of exact homogeneity and isotropy--the density of everything is the same everywhere. Of course that's not really true, so the concept of a "global Hubble expansion rate" is already an approximation for that reason alone, apart from anything else.

asimov42 said:
although on average you might see space expanding according to the Hubble rate, in any portion of the universe the expansion rate will, in fact, be a random variable...

No, that's not what the paper is saying. The quantum effects it is talking about are only "random" on scales at or smaller than the Planck length. On any length scale we can probe experimentally, they're constant.

In other words, the paper is not saying that the effect of vacuum energy on the expansion of the universe is different in different places; it's the same everywhere, just as in the standard model of vacuum energy where the expectation value is used. The difference in this paper is that a different model is used for the effect of vacuum energy on the expansion of the universe, which gives a different answer from the standard model.

Bear in mind that the "standard model" of vacuum energy we are talking about here, the one that uses the expectation value, says that vacuum energy density should be either infinite (if we don't put a cutoff in the integral) or about 120 orders of magnitude larger than the dark energy density we actually observe (if we put the cutoff at the Planck length). So adopting a different model of the effect of vacuum energy that gives (with a Planck length cutoff) an answer roughly the same as what we actually observe is not just a matter of a small tweak, and doesn't have to involve a variation in the effective vacuum energy density from place to place. Just getting an answer that is of the right order of magnitude is a huge improvement.
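Just to make the "120 orders of magnitude" concrete, here is a quick back-of-the-envelope script (my own estimate, not a calculation from the paper; the exact exponent depends on conventions and on exactly where the cutoff is placed):

Python:
import math

# Rough comparison: naive vacuum energy density with a Planck-scale cutoff
# (about one Planck energy per Planck volume) vs. the observed dark energy density.
c    = 2.998e8       # speed of light, m/s
G    = 6.674e-11     # Newton's constant, m^3 kg^-1 s^-2
hbar = 1.055e-34     # reduced Planck constant, J s

rho_planck = c**7 / (hbar * G**2)                  # Planck energy density, J/m^3

H0 = 67.7e3 / 3.086e22                             # Hubble constant ~67.7 km/s/Mpc, in 1/s
rho_crit = 3 * H0**2 * c**2 / (8 * math.pi * G)    # critical energy density, J/m^3
rho_lambda = 0.7 * rho_crit                        # observed dark energy density (~70% of critical)

print(f"Planck-cutoff vacuum energy density: {rho_planck:.1e} J/m^3")
print(f"Observed dark energy density:        {rho_lambda:.1e} J/m^3")
print(f"Mismatch: roughly 10^{math.log10(rho_planck / rho_lambda):.0f}")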
 
  • #20
PeterDonis said:
It is in the standard case too. In fact it is even in the case where we ignore all quantum effects and assume zero vacuum energy (i.e., zero cosmological constant/dark energy). This is because the whole concept of a "global Hubble expansion rate" is based on the assumption of exact homogeneity and isotropy--the density of everything is the same everywhere. Of course that's not really true, so the concept of a "global Hubble expansion rate" is already an approximation for that reason alone, apart from anything else.

Thanks @PeterDonis - right, sorry, yes, I meant in the case of the vacuum only; the matter density, etc. of course has an effect. And agreed that the paper result is quite impressive, certainly.

PeterDonis said:
No, that's not what the paper is saying. The quantum effects it is talking about are only "random" on scales at or smaller than the Planck length. On any length scale we can probe experimentally, they're constant.

Ah, this cuts exactly to the heart of what I've been wondering. Although the scale of the random fluctuations is well below the Planck length, the effects are still random. Doesn't this unavoidably mean that, even at scales larger than the Planck length, there has to be variation... The probability that the observed expansion rate deviates from ##H## must be very small, to be sure (and smaller and smaller over larger spatial regions) - but it can't be zero, can it?

That is, although macroscopically the rate appears to be constant and we may treat it as such, in reality this is not possible, no? That is, you can't absolutely bound the rate.
 
  • #21
asimov42 said:
Although the scale of the random fluctuations is well below the Planck length, the effects are still random. Doesn't this unavoidably mean that, even at scales larger than the Planck length, there has to be variation...

No, because the random variation on sub-Planck length scales can average out to something constant on larger scales.
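A toy numerical illustration of that averaging (nothing quantum about it, just coarse-graining a wildly fluctuating random signal; the numbers are made up):

Python:
import numpy as np

rng = np.random.default_rng(0)
n_points = 1_000_000
# A "field" that fluctuates wildly from point to point around a small mean value.
field = rng.normal(loc=1.0, scale=50.0, size=n_points)

# Coarse-grain over larger and larger windows and look at how much the
# local averages still vary from window to window.
for window in (1, 100, 10_000):
    local_means = field.reshape(-1, window).mean(axis=1)
    print(f"window of {window:>6} points -> spread of local averages = {local_means.std():.3f}")

print(f"global average = {field.mean():.3f}  (the underlying 'constant' here is 1.0)")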

asimov42 said:
The probability that the observed expansion rate deviates from ##H##

Note that the expansion rate we observe in our actual universe is not just due to whatever properties the quantum vacuum has. The paper is dealing with an idealized universe in which only quantum vacuum is present.

In that idealized universe, ##H## on macroscopic scales is still constant everywhere--on those scales the classical spacetime geometry is still de Sitter (i.e., positive cosmological constant and nothing else). The difference is the value of ##H## that is predicted by the model--instead of being either infinite or 120 orders of magnitude too large, it is roughly of the order of magnitude we observe.
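For reference (standard FRW result, my addition): in a pure de Sitter universe the Friedmann equation with only a cosmological constant gives a constant expansion rate,
$$H^2 = \frac{\Lambda c^2}{3}, \qquad a(t) \propto e^{Ht},$$
so the question in the paper is effectively what effective ##\Lambda## (and hence ##H##) the fluctuating vacuum produces.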
 
  • #22
First @PeterDonis, just wanted to say thanks again for taking the time to answer some of these naive questions - very helpful as I'm learning a ton.

PeterDonis said:
No, because the random variation on sub-Planck length scales can average out to something constant on larger scales.

Let's work with the idealized universe as in the paper (not in our universe).

When you say that the sub-Planck scale fluctuations can average to something constant, do you mean that this is an exact result? I.e., the result of the mathematical averaging is a single value, which does not vary at all above a given length scale (which must itself sit at some exact cutoff threshold, I think)?

I am just not familiar enough with the math in this case - earlier in the thread, you'd noted that if quantum mechanics is the correct theory on the small scale, then no classical parameter or equation can be exactly correct (which makes good sense). But here, if via averaging we have arrived at an ##H## in our idealized universe that is an exact constant, variance = 0, haven't we produced an exact classical parameter?
 
  • #23
asimov42 said:
When you say that the sub-Planck scale fluctuations can average to something constant, do you mean that this is an exact result? I.e., the result of the mathematical averaging is a single value

Of course it's a single value, that's what mathematical average means.

I think you mean to ask, is it the same value everywhere in the universe at a given instant of time (more precisely, of time according to comoving observers). On any scale well above the Planck scale, yes.

asimov42 said:
But here, if via averaging we have arrived at an ##H## in our idealized universe that is an exact constant, variance = 0, haven't we produced an exact classical parameter?

No, because we arrived at it via averaging. An exact classical parameter would be, for example, a cosmological constant ##\Lambda## that we just declared by fiat was a term in the Einstein Field Equation, treating classical GR as an exact theory of everything. But nobody actually believes that classical GR is an exact theory of everything, so the fact that, when you use classical GR and plug in the result of the averaging in this paper, you get a value of ##H## that is the same everywhere in the universe at an instant of time, does not mean you have an exact classical parameter. It just means you are using classical GR as an approximation valid within its domain (which in this case is any scale well above the Planck scale).
 
  • #24
@PeterDonis - thanks - you're going to hate me - the last part of the above makes complete sense. However, still fuzzy on the first part - I'll try to explain why:

PeterDonis said:
Of course it's a single value, that's what mathematical average means.

I think you mean to ask, is it the same value everywhere in the universe at a given instant of time (more precisely, of time according to comoving observers). On any scale well above the Planck scale, yes.

Apologies, I should have said an exact value in comparison to a distribution. And yep, re: comoving observers (I should have specified that, too).

The fluctuations at the sub-Planck scale are stochastic - we've then taken an average of many stochastic variables ... as far as I know (although in this case I'm very much out of my league), the average of many stochastic processes is also a stochastic process.

To end up with a "process" that has a single value only (i.e., a variance of precisely zero throughout the whole universe at time t), there would need to be some type of limit-taking step that says "as we approach this length scale (e.g., the Planck length), the averaging process reduces the variance of the resulting stochastic process to zero exactly at t."
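(For concreteness, the standard statistics result: for ##N## independent fluctuations with common variance ##\sigma^2##, the average has
$$\mathrm{Var}\!\left(\frac{1}{N}\sum_{i=1}^{N} X_i\right) = \frac{\sigma^2}{N},$$
which vanishes only in the limit ##N \to \infty##; whether and how that carries over to correlated quantum fluctuations is exactly what I'm unsure about.)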

It dawned on me as I was writing this that you've actually already answered the above, I think - the limit-taking step is precisely applying GR to the average. Presumably, since GR is indeed an approximation, there might be other (stochastic-y) things going on - but for that, we'd need quantum gravity.

Am I close?

Thanks again for your patience @PeterDonis - it truly is a fascinating paper (there's a follow-on one from 2018 that does backpedal a little bit, but the main results still hold essentially).
 
  • #25
asimov42 said:
the limit-taking step is precisely applying GR to the average.

Yes. We are treating the averaged value that the paper predicts as the classical value in the classical GR model.

asimov42 said:
Presumably, since GR is indeed an approximation, there might be other (stochastic-y) things going on - but for that, we'd need quantum gravity.

Yes, if we had a full quantum gravity theory, we would either see other things going on that the model in the paper does not include, or we would confirm that the model in the paper correctly captures what is going on at the Planck scale and correctly predicts the average that should be used at scales much larger than the Planck scale.
 
  • #26
asimov42 said:
there's a follow-on one from 2018 that does backpedal a little bit

Do you have a link?
 
  • #28
asimov42 said:
the arXiv preprint is available here

Thanks!
 

FAQ: Confusion about 2016 Unruh paper - fluctuating vacuum energy density?

1. What is the Unruh effect?

The Unruh effect is a prediction of quantum field theory that a uniformly accelerating observer perceives the Minkowski vacuum differently from an inertial observer: to the accelerated observer it looks like a thermal bath of particles. (Note that the paper discussed in this thread is not about the Unruh effect itself; Unruh is one of its authors.)

2. What is the 2016 Unruh paper about?

The paper discussed in this thread, "How the huge energy of quantum vacuum gravitates to drive the slow accelerating expansion of the Universe" by Wang, Zhu, and Unruh (Phys. Rev. D 95, 103504), addresses the cosmological constant problem. It argues that the vacuum energy density should be treated as a wildly fluctuating quantity rather than a constant value.

3. What is the cosmological constant problem?

The cosmological constant problem refers to the discrepancy between the observed value of the cosmological constant (a measure of the energy density of the vacuum) and the predicted value based on theoretical calculations. This problem has been a major challenge in cosmology and has led to various proposed solutions, including the one presented in the 2016 Unruh paper.

4. How does the 2016 Unruh paper address the cosmological constant problem?

The paper suggests that the vacuum energy density is not a constant value, but rather a wildly fluctuating quantity whose effects on the large-scale expansion largely cancel. With a Planck-scale cutoff, the model yields a slow accelerating expansion of roughly the observed order of magnitude, rather than one that is about 120 orders of magnitude too large.

5. What are the implications of the 2016 Unruh paper?

If the proposed solution in the 2016 Unruh paper is proven to be valid, it could have significant implications for our understanding of the cosmological constant problem and the nature of vacuum energy. It could also have implications for other areas of physics, such as quantum field theory and general relativity.
