
The Schwarzschild Geometry: Part 4



 

In the last article, we looked at various counterintuitive features of the Schwarzschild spacetime geometry, as illustrated in the Kruskal-Szekeres spacetime diagram. But counterintuitive, in itself, does not mean physically unreasonable or unlikely. So the obvious next question is, how much of the entire spacetime geometry we have been looking at is actually believed to be physically reasonable?

We can get a handle on this by observing that the geometry we have been looking at is vacuum everywhere–the stress-energy tensor is zero. But in our real universe, of course, there is matter and energy, so the stress-energy tensor is not zero everywhere. It is true, though, that, at least on the distance scales we deal with most of the time (basically in any context except cosmology), we can view the universe as consisting of isolated objects containing nonzero stress-energy, separated by large regions of zero stress-energy. (True, strictly speaking, there is very sparse matter and energy present in these regions, but it is much too sparse to have any significant effect on the spacetime geometry, so its stress-energy tensor can be considered to be effectively zero.) And since most of the isolated objects are rotating, if at all, very slowly (here “very slowly” means their angular momentum is very small compared to their mass, when both are normalized to geometric units), they can be considered, at least to a good approximation for most purposes, as being spherically symmetric, which means that the vacuum regions around them, at least at distances that are small compared with the distance to other isolated objects, can also be considered to be spherically symmetric.

This is important because there is a theorem known as Birkhoff’s Theorem, which says that any vacuum solution to the Einstein Field Equation that is spherically symmetric must be (at least a portion of) the Schwarzschild geometry. So if we idealize a single isolated object as a spherically symmetric region of nonzero stress-energy surrounded by vacuum out to infinity, then the spacetime geometry must be Schwarzschild from ##r \rightarrow \infty## down to some finite value of ##r##, which we can call ##R##, corresponding to the surface of the isolated object. (Inside this surface, the geometry will be different because the region is not vacuum; we’ll go into that below.)

The obvious next question is, what values of ##R## are possible? Of course this question is very general, but we can start by asking a more specific version of it: what values of ##R## are possible for an isolated object that is static, i.e., ##R## does not change with time? (Here it doesn’t matter whether we interpret “time” to be Schwarzschild coordinate time, Gullstrand-Painleve coordinate time, or the proper time of some observer sitting on the surface.) It turns out that there is another theorem, known as Buchdahl’s Theorem, that answers this question: for a spherically symmetric static object, we must have ##R > \frac{9}{4} M##, i.e., 9/8 of the Schwarzschild radius ##r = 2M##. The basic reason is that, for a static object, gravity must be balanced by internal pressure; and it turns out that, as ##R \rightarrow \frac{9}{4} M##, we have ##p \rightarrow \infty## at the center of the object.
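To get a feel for these numbers, here is a short back-of-the-envelope sketch (mine, not part of the original text; the constants and the solar mass value are illustrative): converting a mass to geometric units and comparing the Schwarzschild radius ##2M## with the Buchdahl limit ##\frac{9}{4} M##.

```python
# Geometric units: a mass in kilograms corresponds to a length G*M/c^2
# in meters.  Buchdahl's bound says a static spherical object must have
# surface radius R > (9/4) M, i.e. 9/8 of the Schwarzschild radius 2M.

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s

def geometric_mass(m_kg):
    """Mass expressed as a length (meters)."""
    return G * m_kg / c**2

def schwarzschild_radius(m_kg):
    return 2.0 * geometric_mass(m_kg)

def buchdahl_limit(m_kg):
    """Minimum surface radius allowed for a static spherical object."""
    return 2.25 * geometric_mass(m_kg)

M_sun = 1.989e30     # solar mass in kg (illustrative value)
print("M in geometric units:", geometric_mass(M_sun) / 1e3, "km")   # ~1.5 km
print("Schwarzschild radius:", schwarzschild_radius(M_sun) / 1e3, "km")
print("Buchdahl limit:      ", buchdahl_limit(M_sun) / 1e3, "km")
```

So a static object of one solar mass can in principle be compressed to a radius of a few kilometers, but no further; anything smaller must be collapsing or expanding, as discussed below.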

(By the way, there is another closely related theorem, due to Einstein, which says that a spherically symmetric system composed of particles in circular free-fall orbits, whose overall mass and radius does not change with time, must have a surface radius ##R## which is larger than ##3M##. Here the reason is that, as ##R \rightarrow 3M##, the free-fall orbits have orbital speeds approaching the speed of light–more precisely, the worldlines of the orbits become null instead of timelike. We now know that this is because ##r = 3M## is what is called the “photon sphere” in the Schwarzschild geometry, the radius at which light rays can have closed circular orbits. Einstein concluded–incorrectly–that his theorem showed that black holes could not form; we now understand that this is a misconception, similar to the one discussed in the first article of this series, that objects can never reach a black hole’s horizon.)

Buchdahl’s Theorem tells us something important: there cannot be a static object with a surface radius arbitrarily close to ##r = 2M##, let alone equal to or less than that value. However, there is still a way for an object with surface radius ##R## smaller than ##\frac{9}{4} M## to exist: if ##R## is changing with time. This opens up two possibilities: ##R## could be decreasing with time, or ##R## could be increasing with time.

The first possibility is just gravitational collapse: an object that can no longer support itself against gravity will collapse and form a black hole. The first idealized model of this kind of process was presented in a classic paper by Oppenheimer and Snyder in 1939. The vacuum region of their model has a spacetime diagram, in coordinates similar to Kruskal-Szekeres coordinates, that looks similar to this (courtesy of PF Science Advisor DrGreg):

Oppenheimer-Snyder spacetime diagram

As you can see, this diagram only includes a portion of regions I and II from the full Kruskal-Szekeres spacetime diagram that we looked at in previous articles. The boundary on the left of the diagram is the surface of the object: the region to the left of that surface, not shown on the diagram, has a spacetime geometry like that of a portion of a collapsing universe, i.e., a collapsing FRW spacetime. The point at the upper left of the diagram is where the collapsing object reaches ##R = 0##, i.e., its surface collapses to a point; that is where the singularity forms (in this idealized model), and the hyperbola at the top is the singularity itself. So if we were to fill in the non-vacuum region occupied by the collapsing matter, it would have a “width” from right to left that gradually decreased from the bottom to the top of the diagram, tapering to zero width at the top left corner where it meets the singularity. The left boundary of this region would be labeled ##r = 0## if we adopted appropriate coordinates in this region (which would not be precisely the same as any of the ones we have looked at in these articles for the vacuum region); and in this region, ##r = 0## means what your ordinary intuition would think it means: the spatial point at the center of the collapsing object. In more technical language, the curve ##r = 0## is timelike up to the point where the singularity forms; only after that does it become spacelike, like a moment of time instead of a place in space.

Notice that in this diagram there is also an event horizon–the 45 degree dotted line going up and to the right. The event horizon also extends inside the collapsing matter; the 45 degree line just extends into the non-vacuum region until it intersects its left boundary at ##r = 0##. It is instructive to think about what this means physically. Consider a series of light rays emitted radially outward from the spatial point ##r = 0## inside the collapsing matter. Any such rays emitted before the event horizon line will reach the surface of the matter and continue outward to infinity. But there will be some ray that is emitted exactly at the event horizon line. This ray will reach the surface of the collapsing matter at the exact instant that the radius ##R## of the collapsing matter is equal to ##2M##. This ray will then, since it is now in the vacuum region with Schwarzschild geometry, remain at ##r = 2M## forever. So the event horizon can be thought of as simply the set of all such light rays, emitted in all possible directions. (These rays, or more properly the null curves that describe their worldlines, are called the “generators” of the horizon.)
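The behavior of these horizon generators can be pictured numerically (this sketch is mine, not part of the original article). In ingoing Eddington-Finkelstein time, which unlike Schwarzschild time is regular at the horizon, an outgoing radial light ray obeys ##dr/dt = (1 - 2M/r)/(1 + 2M/r)## in geometric units, so a ray starting exactly at ##r = 2M## stays there forever while one starting just outside slowly escapes.

```python
# Outgoing radial null rays near a Schwarzschild horizon, integrated in
# ingoing Eddington-Finkelstein time (geometric units G = c = 1).
# dr/dt = (1 - 2M/r) / (1 + 2M/r) vanishes exactly at r = 2M, so a ray
# emitted there is "frozen" on the horizon: it is a horizon generator.

M = 1.0

def drdt(r):
    """Slope of an outgoing radial light ray in ingoing EF time."""
    return (1.0 - 2.0 * M / r) / (1.0 + 2.0 * M / r)

def evolve(r0, t_end=200.0, dt=0.01):
    """Simple Euler integration of the outgoing ray starting at r0."""
    r = r0
    for _ in range(int(t_end / dt)):
        r += drdt(r) * dt
    return r

print(evolve(2.0 * M))      # stays at the horizon, r = 2M
print(evolve(2.001 * M))    # creeps outward, eventually escaping to infinity
```

The near-horizon ray initially moves outward exponentially slowly (roughly as ##e^{t/4M}## in the separation from the horizon), which is why a ray emitted just inside the collapsing matter at the right moment can hover arbitrarily close to ##r = 2M## for a long time before escaping.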

This model presents a consistent (if highly idealized) picture of gravitational collapse, at least until we get close to the singularity and issues of arbitrarily large spacetime curvature arise. (We’ll talk about that further below.) Consequently, the portions of the Schwarzschild geometry that appear in this model, namely the portions of regions I and II and the event horizon between them, would appear to be physically reasonable: these portions of the geometry could exist in our universe, at least as far as classical GR is concerned. But there is no reason to think that the rest of the geometry–the “past horizon” and regions III and IV–is physically reasonable, at least not based on models of gravitational collapse.

What about the other possibility mentioned above, that the surface radius ##R## of an object could be increasing with time? A model of this sort would basically look like the time reverse of the Oppenheimer-Snyder model: we would have a vacuum region consisting of portions of regions I and IV from the Kruskal-Szekeres diagram, and the past horizon between them, plus a non-vacuum region containing expanding matter, with a geometry like that of a portion of an expanding FRW spacetime. The past horizon would extend into this region and intersect the spatial point ##r = 0##. The term “white hole” could be used to describe this kind of model (although it is more frequently used to describe region IV in the full, vacuum everywhere Schwarzschild spacetime).

If we consider whether such a model could describe an isolated region of our universe where a white hole is exploding, much as the Oppenheimer-Snyder model describes an isolated region where an object is collapsing into a black hole, there is a serious difficulty: where does the initial singularity come from? There is no known physical process that could create one, since the past horizon isolates it from all other objects in the universe–much as nothing can get out of a black hole, nothing can get into a white hole. So such a white hole singularity would have to be “built into” the universe from the beginning. That seems highly implausible, and as far as I know nobody has seriously tried to defend such a model.

The question of whether our entire observable universe could be a portion of such a white hole spacetime is a bit more interesting, because such a model would imply that our observable universe is somewhere inside the non-vacuum region of such a model, and depending on where (and when) we are in that non-vacuum region, it is possible that no light signals from the vacuum region outside could have reached us yet. In other words, since the non-vacuum portion of the white hole model is a portion of an expanding FRW universe, the fact that our observable universe looks like a portion of an expanding FRW universe is not enough, in itself, to rule out a white hole model for the universe as a whole (as opposed to just a full expanding FRW universe and nothing else).

However, there is another reason for thinking that a white hole model for the entire universe is unlikely. This is basically the converse of the reason for thinking isolated white holes in our observable universe are unlikely. There the question was where the initial singularity would come from; here the question is where the region outside the past horizon would come from. Basically, our universe would have to be an isolated white hole inside some much larger “universe”, so this model doesn’t really give a final answer; it just pushes the question back a step. The model of our entire universe as an expanding FRW spacetime does not have this issue, because an expanding FRW spacetime–the full model, not just a portion–is self-contained, with no need to postulate anything outside it.

To summarize, our best current belief is that regions I and II of the Schwarzschild geometry, as shown in the Kruskal-Szekeres spacetime diagram, are physically reasonable, but the rest of the geometry, including the “antihorizon” boundary between those two regions and the other two, is not. At least, that is the best answer we can give according to classical General Relativity; quantum effects might change this picture. In fact they are expected to, at the very least, when spacetime curvature becomes large enough, where “large enough” is thought to be, heuristically, when the radius of curvature of spacetime is of the order of the Planck length. In the Schwarzschild geometry, this would happen for some value of ##r## that was sufficiently small, i.e., sufficiently close to the singularity at ##r = 0##.

We don’t know exactly how quantum effects would change the picture in this regime, and won’t until we have a good theory of quantum gravity. However, it seems likely that, if quantum effects do change the picture, it will be in the direction of making less of the full geometry physically reasonable, not more, by making the portion of region II below some positive value of ##r## not physically reasonable, because, as above, quantum effects are expected to become relevant when the spacetime curvature gets large enough. It is even possible that quantum effects might make all of region II and its event horizon boundary not physically reasonable, in the sense that this region would no longer occur in the classical limit of whatever quantum model ends up being confirmed, because quantum effects would prevent a true event horizon from ever forming. This is an active area of research, and we’ll have to wait and see what comes out.

References

(1) Einstein’s 1939 paper on a stationary system of particles in free-fall orbits:

http://www.cscamm.umd.edu/tiglio/GR2012/Syllabus_files/EinsteinSchwarzschild.pdf

(Note that Einstein uses isotropic coordinates in this paper; these have a radial coordinate which is not the same as the areal radius ##r##. In the article above I have translated his result into a form that uses the areal radius ##r## as the radial coordinate.)
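For readers who want to do this translation themselves, here is a small sketch (mine, not part of the original note): the standard relation between the isotropic radial coordinate ##\rho## and the areal radius ##r## in the Schwarzschild exterior is ##r = \rho (1 + M/2\rho)^2## in geometric units, with the horizon ##r = 2M## at ##\rho = M/2##.

```python
import math

# Isotropic radius rho vs. areal radius r in the Schwarzschild exterior,
# geometric units G = c = 1:  r = rho * (1 + M/(2*rho))**2.
# A quick bisection inverts this to find the isotropic radius that
# corresponds to Einstein's areal radius r = 3M.

M = 1.0

def areal_radius(rho):
    return rho * (1.0 + M / (2.0 * rho))**2

def isotropic_of_areal(r, lo=0.5 * M, hi=100.0):
    """Invert areal_radius on the exterior branch (rho >= M/2) by bisection."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if areal_radius(mid) < r:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(areal_radius(0.5 * M))          # horizon: areal r = 2M at rho = M/2
print(isotropic_of_areal(3.0 * M))    # about (1 + sqrt(3)/2) M, roughly 1.87 M
```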

 

 

  1. Haelfix
    Haelfix says:

    There are a few other really interesting points about region III and region IV.

    1) There is a serious Cauchy problem with having a past singularity that is allowed to communicate information off to infinity.

    2) The white hole horizon is conceptually really bizarre…

    Since nothing is allowed to get in, that means that 'test' particles traveling in orbits around the white hole horizon (more precisely the particle horizon) will accumulate there, and there will be a severe blue shift when viewed from infinity.  This blue sheet is a sort of classical instability, and it is argued that it leads to gravitational collapse, and thus there is likely a singularity in the future as well!   See:

    Death of White Holes in the Early Universe – Eardley, Douglas M. Phys.Rev.Lett. 33 (1974) 442-444

    3) Quantum mechanically, if you believe in Hawking radiation/evaporation and black hole thermodynamics, in some sense black hole and white hole microstates have to be the same thing!  See: 

    Black Holes and Thermodynamics – Hawking, S.W. Phys.Rev. D13 (1976) 191-197

  2. PeterDonis
    PeterDonis says:

    There is a serious Cauchy problem with having a past singularity that is allowed to communicate information off to infinity.

    If "Cauchy problem" is intended to mean that the spacetime has a Cauchy horizon, this is not true. The Schwarzschild spacetime is globally hyperbolic.

    It is true that the past singularity seems highly unphysical, but I'm not sure "Cauchy problem" is the best way to describe why.

    Since nothing is allowed to get in, that means that 'test' particles traveling in orbits around the white hole horizon (more precisely the particle horizon) will accumulate there

    Which test particles are these? If they are test particles in stable orbits in region I, they can equally well be viewed as orbiting the black hole; they certainly don't accumulate near the white hole horizon.

    If you mean test particles that are close to the white hole horizon, there are no stable orbits there; there are no stable orbits inside ##r = 6M##, and there are no orbits at all, even unstable ones, inside ##r = 3M##. So any freely falling object below ##r = 3M## will fall into the black hole, region II; it won't "accumulate" at the white hole horizon.
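These orbit claims can be checked numerically with a small sketch (mine, not part of the original comment; geometric units ##G = c = 1## are assumed). Circular geodesic orbits sit at the extrema of the effective potential ##V(r) = (1 - 2M/r)(1 + L^2/r^2)##, which gives two roots that merge at the ISCO ##r = 6M## when ##L = \sqrt{12}\,M##, with the unstable root always confined between ##3M## and ##6M##.

```python
import math

# Circular geodesic orbits in Schwarzschild spacetime (G = c = 1).
# Setting V'(r) = 0 for V(r) = (1 - 2M/r)(1 + L^2/r^2) gives
#   r = (L**2 / (2*M)) * (1 ± sqrt(1 - 12*M**2/L**2)).
# The '+' root (stable) is always >= 6M; the '-' root (unstable) lies
# between 3M and 6M, approaching 3M (the photon sphere) as L grows.

M = 1.0

def circular_orbit_radii(L):
    """Return (unstable, stable) circular-orbit radii, or None if L is too small."""
    disc = 1.0 - 12.0 * M**2 / L**2
    if disc < 0.0:
        return None          # L < sqrt(12) M: no circular orbits at all
    s = math.sqrt(disc)
    return (L**2 / (2.0 * M) * (1.0 - s), L**2 / (2.0 * M) * (1.0 + s))

print(circular_orbit_radii(3.464))   # None: just below the critical L
print(circular_orbit_radii(3.465))   # both roots close to the ISCO at r = 6M
print(circular_orbit_radii(10.0))    # unstable root near 3M, stable root far out
```

No choice of angular momentum produces a circular orbit below ##r = 3M##, matching the statement above.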

    there will be a severe blue shift when viewed from infinity

    Not for objects that are free-falling radially inward. They will see incoming light from infinity to be redshifted.

    Objects in free-fall orbits will see incoming light from infinity to be blueshifted, but at the lowest possible orbit, ##r = 3M##, the blueshift is quite modest.

    Objects that have nonzero proper acceleration can "hover" close to the horizon and will indeed see a large blueshift in light coming in from infinity. But this is due to their proper acceleration, which increases without bound as the horizon is approached.
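The numbers behind this are easy to tabulate (my sketch, with the standard static-observer formulas in geometric units ##G = c = 1##): the blueshift factor for light from infinity is ##1/\sqrt{1 - 2M/r}## and the proper acceleration needed to hover is ##M/(r^2 \sqrt{1 - 2M/r})##; both diverge together as ##r \rightarrow 2M##.

```python
import math

# Static ("hovering") observer at areal radius r outside a Schwarzschild
# black hole, geometric units G = c = 1 with M = 1.  The blueshift of
# light from infinity and the required proper acceleration share the
# same 1/sqrt(1 - 2M/r) divergence as the horizon is approached.

M = 1.0

def blueshift(r):
    """Received/emitted frequency ratio for light falling in from infinity."""
    return 1.0 / math.sqrt(1.0 - 2.0 * M / r)

def proper_acceleration(r):
    """Proper acceleration needed to hover at radius r."""
    return M / (r**2 * math.sqrt(1.0 - 2.0 * M / r))

for r in (10.0, 4.0, 2.1, 2.001):
    print(f"r = {r:6.3f}   blueshift = {blueshift(r):10.3f}   "
          f"accel = {proper_acceleration(r):10.3f}")
```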

    All of this is standard Schwarzschild spacetime physics; none of that changes when we include the full maximally extended spacetime in our model.

    See:

    Death of White Holes in the Early Universe – Eardley, Douglas M. Phys.Rev.Lett. 33 (1974) 442-444

    Unfortunately this paper is behind a paywall so I can't access it. If you want to email me a copy, I'm at peterdonis@alum.mit.edu. I would be curious to read the paper and see exactly what spacetime geometry it is assuming. Since it is dealing with the early universe, it obviously is not using a vacuum geometry, and the Schwarzschild spacetime I am discussing in this series is a vacuum solution (except for the Oppenheimer-Snyder model, which has a non-vacuum region, but that model also has no region III or IV so it's not relevant here). In short, I'm not sure the term "white hole" in that paper means the same thing as I mean by "white hole" in these articles.

  3. PeterDonis
    PeterDonis says:

    Quantum mechanically, if you believe in Hawking radiation/evaporation, and blackhole thermodynamics, in some sense black hole and white hole microstates have to be the same thing!

    I'm aware of this hypothesis by Hawking, but I don't know if it has led to anything in the field of quantum gravity.

  4. Haelfix
    Haelfix says:

    If "Cauchy problem" is intended to mean that the spacetime has a Cauchy horizon, this is not true. The Schwarzschild spacetime is globally hyperbolic.

    A Cauchy problem ('initial value problem') in GR is a statement about taking surfaces of initial data (in GR, spacelike surfaces, but they could in principle also involve data from other matter fields) and developing them forward in some regular way subject to the relevant partial differential equations, such that the process satisfies certain constraints (basically you want reversibility, avoiding many-to-one mappings, etc).  Here, the initial data surface is singular as there is geodesic incompleteness, and physically this manifests itself as a loss of predictability between any 'two' distinct states in the theory, provided the singular surface is in at least one's past lightcone.  Basically you are taking an infinite amount of information (states) and allowing that to propagate throughout spacetime.  This language is often used when discussing formulations of cosmic censorship, but for some reason that I don't understand the FRW singularity and the white hole singularity seem to be excluded from theorems about cosmic censorship (probably b/c they are trivial). 

    Since it is dealing with the early universe, it obviously is not using a vacuum geometry, and the Schwarzschild spacetime I am discussing in this series is a vacuum solution (except for the Oppenheimer-Snyder model, which has a non-vacuum region, but that model also has no region III or IV so it's not relevant here). In short, I'm not sure the term "white hole" in that paper means the same thing as I mean by "white hole" in these articles.

    Sorry, I'm not being clear here.  The geometry I'm referring to is not vacuum, but it is somewhat similar to Oppenheimer-Snyder, which you were discussing.  It is the *perturbed* extended Schwarzschild solution with an infalling sheet of spherically symmetric null dust.  Unfortunately I'm now away from my institution for the holidays, and it seems hard to find material discussing this that's not behind a paywall (there is a whole chapter about white hole instabilities in Novikov and Frolov), but for the Eardley instability I found roughly the picture I was looking for in the following paper, as well as some of the discussion of the setup:  See figure 1

    http://gravityresearchfoundation.org/pdf/awarded/1989/blau_guth.pdf

    I'm aware of this hypothesis by Hawking, but I don't know if it has led to anything in the field of quantum gravity.

    Hawking's argument is a statement about semiclassical states and thermal equilibrium, and in my opinion is pretty convincing.  Of course without knowing the degrees of freedom of quantum gravity, it's hard to speculate whether a similar thing holds true in the full theory or not.

  5. PeterDonis
    PeterDonis says:

    A Cauchy problem ('initial value problem') in GR is a statement about taking surfaces of initial data (in GR– spacelike surfaces but they could in principle also involve data from other matter fields) and developing them forward in some regular way subject to the relevant partial differential equations

     

    Ah, ok. I had seen that language before but got confused thinking of a Cauchy horizon.

     

    Here, the initial data surface is singular as there is geodesic incompleteness

     

    I am still confused by this, however. As I said before, the maximally extended Schwarzschild spacetime is globally hyperbolic; that means it automatically has a well-posed initial value problem. As an example of how to formulate it, the spacelike surface ##T = 0## in Kruskal-Szekeres coordinates is a Cauchy surface for the spacetime; appropriate initial data on that surface (basically the geometry of all the 2-spheres that make it up, which is equivalent to specifying the one free parameter ##M## in the line element) determines the entire spacetime. It is true that the entire spacetime thus determined is geodesically incomplete–more precisely, it is timelike geodesically incomplete. But that is not inconsistent with the spacetime being globally hyperbolic and having a well posed initial value problem.

     

    The geometry i'm referring to is not vacuum, but it is somewhat similar to Oppenheimer Snyder which you were discussing. It is the *perturbed* extended Schwarschild solution with an infalling sheet of spherically symmetric null dust.

     

    I'll look at the paper you linked to and comment further after I've read it.

  6. martinbn
    martinbn says:

    A Cauchy problem ('initial value problem') in GR is a statement about taking surfaces of initial data (in GR, spacelike surfaces, but they could in principle also involve data from other matter fields) and developing them forward in some regular way subject to the relevant partial differential equations, such that the process satisfies certain constraints (basically you want reversibility, avoiding many-to-one mappings, etc).  Here, the initial data surface is singular as there is geodesic incompleteness, and physically this manifests itself as a loss of predictability between any 'two' distinct states in the theory, provided the singular surface is in at least one's past lightcone.  Basically you are taking an infinite amount of information (states) and allowing that to propagate throughout spacetime.  This language is often used when discussing formulations of cosmic censorship, but for some reason that I don't understand the FRW singularity and the white hole singularity seem to be excluded from theorems about cosmic censorship (probably b/c they are trivial).

    Can you elaborate, because as written it doesn't seem right? The initial hypersurface of the initial value problem is not singular. It is a complete Riemannian manifold. Its future (and past Cauchy) development is incomplete (Lorentzian manifold), but the initial data is as regular as it gets.

  7. Haelfix
    Haelfix says:

    So there are certainly spacelike Cauchy surfaces that one can construct that will have finite values for all physical quantities arbitrarily 'near' the singularity, but I don't believe this is a sufficient condition for being a well posed surface ('regular' is, I agree, an incorrect word choice). There are other technical restrictions on the form of the initial data and I'd have to consult a textbook (I'm currently away) for the exact statements.  Clearly having arbitrarily large (but finite) tidal forces is not what one would want for well behaved data.

  8. martinbn
    martinbn says:

    So there are certainly spacelike Cauchy surfaces that one can construct that will have finite values for all physical quantities arbitrarily 'near' the singularity, but I don't believe this is a sufficient condition for being a well posed surface ('regular' is, I agree, an incorrect word choice). There are other technical restrictions on the form of the initial data and I'd have to consult a textbook (I'm currently away) for the exact statements.  Clearly having arbitrarily large (but finite) tidal forces is not what one would want for well behaved data.

     I don't think there is any problem, but I would like to know, so I'd like to see it when you find it.

    It seems that you expect the initial hypersurface to be as far back in the past as possible, but that is not needed; any surface could be used. For example, a horizontal line that goes right through the middle of the diagram is as good as any other.

  9. Haelfix
    Haelfix says:

    Yes, but think about it: any such line has access to the singularity region in its causal past.  Surfaces that include data with arbitrarily large curvature invariants are thus being evolved forward with Einstein's equations, when they likely don't even obey the equation to begin with.  The entire future spacetime is thus built out of that dubious development.  When people formulate statements about cosmic censorship they are trying to formalize that notion somehow (and I know there are difficulties with making the statement precise).  I'll look into it when I get the chance.

  10. stevendaryl
    stevendaryl says:

    2) The white hole horizon is conceptually really bizarre…

    Since nothing is allowed to get in, that means that 'test' particles traveling in orbits around the white hole horizon (more precisely the particle horizon) will accumulate there, and there will be a severe blue shift when viewed from infinity.  This blue sheet is a sort of classical instability, and it is argued that it leads to gravitational collapse, and thus there is likely a singularity in the future as well!

    Exactly what the white hole is, is a little mysterious to me. It seems that there is a sense in which there is no difference between the spacetime geometry of a black hole and a white hole; the difference is simply the initial conditions of the test particles traveling in that geometry.

    Let me explain why I think that.

    To simplify, let's talk about purely radial motion, so we can treat the Schwarzschild geometry as if there were only one spatial dimension. Let [itex]Q[/itex] be the Schwarzschild factor defined by: [itex]Q \equiv 1 - \frac{2GM}{c^2 r}[/itex]. Let [itex]\tau[/itex] be proper time. Let [itex]U^\mu \equiv \frac{\partial x^\mu}{\partial \tau}[/itex]. Then for a test particle of mass [itex]m[/itex] moving along a radial timelike geodesic, we have the following conserved quantities:

    1. [itex]K \equiv m c Q U^t[/itex]. This is sort of the "momentum" in the t-direction.
    2. [itex]H \equiv \frac{m c^2}{2} Q (U^t)^2 - \frac{m (U^r)^2}{2Q}[/itex]. This is actually [itex]\frac{mc^2}{2} \frac{ds^2}{d\tau^2}[/itex], so it's just equal to [itex]\frac{mc^2}{2}[/itex].

    Putting these together gives an equation for [itex]U^r[/itex]:

    [itex]\frac{m}{2} (U^r)^2 - \frac{GMm}{r} = \mathcal{E}[/itex]

    where [itex]\mathcal{E} = \frac{K^2}{2 mc^2} - \frac{mc^2}{2}[/itex]

    I wrote it in this way so that you can immediately see that it's just the energy equation for a test particle moving under Newtonian gravity. So without any mathematics, we can immediately guess the qualitative behavior:

    If [itex]\mathcal{E} < 0[/itex], and initially [itex]U^r > 0[/itex], then the test particle will rise to some maximum height, [itex]r_{max} = \frac{GMm}{|\mathcal{E}|}[/itex], and then will fall back to annihilation at [itex]r=0[/itex] in a finite amount of (proper) time. The interesting case is [itex]r_{max} > \frac{2GM}{c^2} \equiv r_S[/itex], where [itex]r_S[/itex] is the black hole's Schwarzschild radius. In that case, this scenario represents a particle rising from below the event horizon and then turning around and falling back through the event horizon.
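A quick numerical check of this energy equation (my sketch, not part of the original post; geometric units ##G = c = 1## with ##M = 1## assumed): a particle released from rest at ##r_{max}## has ##E/m = -M/r_{max}##, so ##(dr/d\tau)^2 = 2M(1/r - 1/r_{max})##, and integrating gives the classic cycloid result that the fall to ##r = 0## takes proper time ##(\pi/2)\sqrt{r_{max}^3/2M}##.

```python
import math

# Proper time for radial free fall from rest at r_max to r = 0 in
# Schwarzschild spacetime (geometric units G = c = 1, M = 1), obtained
# by midpoint integration of dtau/dr from the energy equation above.
# The 1/sqrt singularity of the integrand at r = r_max is integrable.

M = 1.0
r_max = 10.0

def dtau_dr(r):
    """|dtau/dr| along the infalling radial geodesic."""
    return 1.0 / math.sqrt(2.0 * M * (1.0 / r - 1.0 / r_max))

N = 200_000
h = r_max / N
tau_numeric = h * sum(dtau_dr((i + 0.5) * h) for i in range(N))

# Analytic cycloid result: tau = (pi/2) * sqrt(r_max**3 / (2*M))
tau_exact = (math.pi / 2.0) * math.sqrt(r_max**3 / (2.0 * M))
print(tau_numeric, tau_exact)   # agree to better than 1 percent
```

Note that nothing special happens in this equation at ##r = 2M##: the horizon crossing is invisible in proper time, which is exactly the point being made here.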

    That seems to contradict the fact that nothing can escape from the event horizon, but to see why it doesn't, you have to see what the time coordinate [itex]t[/itex] is doing: In the time period between the particle rising out of the event horizon and falling back into the event horizon, only a finite amount of proper time passes, but an infinite amount of coordinate time passes. In the far past, [itex]t \rightarrow -\infty[/itex], the particle arises from the event horizon, and in the far future, [itex]t \rightarrow +\infty[/itex], the particle sinks below the event horizon. The time period while the particle is rising up to the event horizon, and the time period while the particle is falling below the event horizon, is not covered by the coordinate [itex]t[/itex] (well, you can still have a [itex]t[/itex] coordinate there, but its connection to the [itex]t[/itex] coordinate above the horizon is broken by the event horizon). So from the point of view of someone far from the black hole, using the [itex]t[/itex] coordinate for time, nothing ever crosses the event horizon (in either direction) for any finite value of [itex]t[/itex].

    Going back to the test particle, we can identify the various parts of the Schwarzschild geometry:

    1. During the time that the particle is rising below the event horizon, the particle is traveling through Region IV, the white hole interior.
    2. During the time that the particle is above the event horizon, the particle is traveling through Region I, the black hole exterior.
    3. During the time that the particle is falling below the event horizon, the particle is traveling through Region II, the black hole interior.

    (A fourth region, Region III, is not visited by the test particle, but is a black hole exterior like Region I).

    The point is that nothing about the local geometry of spacetime changes in going from Region IV (the white hole interior) to Region II (the black hole interior). The only difference is the sign of [itex]\frac{dr}{d\tau}[/itex]. So the difference between a black hole and a white hole is simply the initial conditions of the test particle. So it's not that the particle is repelled by the white hole and attracted by the black hole. It's true by definition that:

    • If the test particle is below the event horizon and [itex]\frac{dr}{d\tau} > 0[/itex], then the particle is in the white hole interior.
    • If the test particle is below the event horizon and [itex]\frac{dr}{d\tau} < 0[/itex], then the particle is in the black hole interior.

    As for the exterior, the same region, Region I, serves as the exterior of both the white hole and the black hole. The same event horizon looks like a white hole in the far past, [itex]t \rightarrow -\infty[/itex], because the test particle is rising from it, and looks like a black hole in the far future, [itex]t \rightarrow +\infty[/itex], because the test particle is falling toward it. (For a realistic black hole formed from the collapse of a star, there is no event horizon in the limit [itex]t \rightarrow -\infty[/itex], so there is no corresponding white hole.)

    Here are some puzzles having to do with the test particles:

    1. In the case of many test particles instead of just one, do all the particles have the same sign of [itex]\frac{dr}{d\tau}[/itex]? They are all falling in the black hole interior, and all rising in the white hole interior. Why aren't there some particles that are rising while other particles are falling? It turns out that there is a simple answer to this question. If you have [itex]\frac{dr}{d\tau} > 0[/itex], you can make it [itex]\frac{dr}{d\tau} < 0[/itex] by reparametrizing: [itex]\tau \rightarrow -\tau[/itex]. So it's possible to arrange it so that all particles have the same sign of [itex]\frac{dr}{d\tau}[/itex].
    2. A followup to the first puzzle: If you just arbitrarily flip the sign of [itex]\tau[/itex] for a test particle, it makes no difference, since a test particle has no internal state. But if instead you have not a test particle but a physical object, such as a clock or a human being, then flipping the sign of proper time means reversing the usual progression of states. The clock will start running backwards, and the human will start getting younger instead of older. That's not technically a contradiction, because the laws of physics are reversible, so it's possible for a human to age backwards. But it's a violation of the law of increasing entropy. So if it happens to be the case (and it sure seems to be) that all processes in the universe have the same thermodynamic arrow of time–entropy increases as proper time increases–then puzzle number 1 becomes the question of why there is a universal thermodynamic arrow of time. This boils down to the question: Why was entropy lower in the far past? General Relativity doesn't answer this question. (I'm not sure what does.)
    3. Another complication is to include test particles that don't move on geodesics, because of non-gravitational forces. How does that affect the picture?
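    The reparametrization answer to puzzle 1 can be illustrated concretely. A sketch (assuming geometric units ##G = c = 1## with ##M = 1##) using the closed-form radial free-fall solution ##r(\tau) = \left(r_0^{3/2} - \tfrac{3}{2}\sqrt{2M}\,\tau\right)^{2/3}##, which has ##dr/d\tau < 0##; replacing ##\tau \rightarrow -\tau## gives the same worldline traversed with ##dr/d\tau > 0##:

```python
import math

M = 1.0  # geometric units G = c = 1

def r_infall(tau, r0=10.0):
    """Radial free fall from rest at infinity: r(0) = r0, dr/dtau < 0."""
    return (r0 ** 1.5 - 1.5 * math.sqrt(2.0 * M) * tau) ** (2.0 / 3.0)

def deriv(f, x, h=1e-6):
    """Central-difference numerical derivative."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

tau = 1.0
falling = deriv(r_infall, tau)               # dr/dtau on the original parametrization
rising = deriv(lambda t: r_infall(-t), tau)  # dr/dtau after tau -> -tau
print(falling, rising)  # falling < 0, rising > 0: same curve, opposite parametrization
```

    The geodesic equation is second order in ##\tau##, so it is invariant under ##\tau \rightarrow -\tau##; both parametrizations describe the same worldline.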
  11. PeterDonis
    PeterDonis says:

    It's true by definition that:

    • If the test particle is below the event horizon and ##\frac{dr}{d\tau} > 0##, then the particle is in the black hole interior.
    • If the test particle is below the event horizon and ##\frac{dr}{d\tau} < 0##, then the particle is in the white hole interior.

    You have these backwards.

    Why was entropy lower in the far past? General Relativity doesn't answer this question. (I'm not sure what does.)

    We don't have a final answer to this question, because we don't know what preceded the hot, dense, rapidly expanding "Big Bang" state. We only know that the entropy of that state was much lower than the present entropy of the universe.

    Another complication is to include test particles that don't move on geodesics, because of non-gravitational forces. How does that affect the picture?

    In regions IV and II (the white hole and black hole), it doesn't really change things at all: all test particles must still leave the white hole, and all test particles that enter the black hole still can't escape.

    In region I (and III as well), it allows test particles that would otherwise fall into the black hole to avoid it and stay in region I (or III). It still doesn't allow anything to enter the white hole.

  12. PAllen
    PAllen says:

    To me there is a simple inverse symmetry between BH and WH: for a WH, the singularity is in the past light cone of every event in the interior, while for a BH it is in the future light cone of every interior event.
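    This light cone characterization can be made concrete on the Kruskal diagram itself. A small sketch (conventions assumed to match the article: horizons on ##T = \pm X##, singularities on ##T^2 - X^2 = 1##, with ##T > 0## the future singularity):

```python
def kruskal_region(T, X):
    """Classify a Kruskal-Szekeres point (T, X) in the maximally extended
    Schwarzschild geometry. Horizons lie on T = +/-X; the singularities
    lie on T**2 - X**2 = 1 (future for T > 0, past for T < 0)."""
    if T * T - X * X >= 1.0:
        return "future singularity" if T > 0 else "past singularity"
    if T > abs(X):
        return "II (black hole interior)"   # singularity in every future light cone
    if T < -abs(X):
        return "IV (white hole interior)"   # singularity in every past light cone
    if X > abs(T):
        return "I (exterior)"
    if X < -abs(T):
        return "III (exterior)"
    return "horizon"

for point in [(0.0, 1.0), (0.5, 0.0), (-0.5, 0.0), (0.0, -1.0), (2.0, 0.0)]:
    print(point, kruskal_region(*point))
```

    In region II every future-directed timelike curve runs into ##T^2 - X^2 = 1## with ##T > 0##; in region IV every such curve emerges from it with ##T < 0##, which is exactly the inverse symmetry described above.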

  13. PeterDonis
    PeterDonis says:

    Surfaces that include data with arbitrarily large curvature invariants are thus being evolved forward with Einstein's equations

    You don't have to evolve them forward. You can evolve the initial data on the hypersurface ##T = 0## in Kruskal-Szekeres coordinates both forwards and backwards. Doing so will give you the complete globally hyperbolic region, all the way back to the past singularity and forward to the future singularity. Since the equations are time symmetric, this is perfectly well-defined and justified.

    when they likely don't even obey the equation to begin with.

    I don't know what you're basing this on. The subject under discussion is a well-defined solution of the classical Einstein Field Equation. Any event with finite spacetime curvature invariants, including arbitrarily large ones, can occur in such a solution. The solution might not end up describing anything physically relevant, but that doesn't mean the points with large spacetime curvature values "don't obey the equation"; it just means physics, unlike this particular mathematical model, chooses some other equation at that point.

  14. martinbn
    martinbn says:

    Yes, but think about it: any such line has access to the singularity region in its causal past. Surfaces that include data with arbitrarily large curvature invariants are thus being evolved forward with Einstein's equations, when they likely don't even obey the equation to begin with. The entire future spacetime is thus built out of that dubious development.

    Well, that's not how it works. The initial data doesn't include anything from the past of the Cauchy surface. In fact, until you solve the equations, there is no past or future. The initial data consists of fields defined on the surface. Whatever the values of the past and future evolution may be, say arbitrarily large, they are not part of the initial conditions. So there is nothing dubious here, and by construction you get solutions to the Einstein equation.

    When people formulate statements about cosmic censorship they are trying to formalize that notion somehow (and I know there are difficulties with making the statement precise). I'll look into it when I get the chance.

    I am not sure if this is relevant, but one way the strong cosmic censorship conjecture is formulated is that the maximal Cauchy development is not extendible, which is the case in Schwarzschild, but not Kerr. The weak version usually asks for completeness of future null infinity.

  15. Haelfix
    Haelfix says:

    Well, that's not how it works. The initial data doesn't include anything from the past of the Cauchy surface. In fact, until you solve the equations, there is no past or future. The initial data consists of fields defined on the surface. Whatever the values of the past and future evolution may be, say arbitrarily large, they are not part of the initial conditions. So there is nothing dubious here, and by construction you get solutions to the Einstein equation.

    Sure, you can formally do this. But then I can formally take a line in the middle of the diagram, evolve it arbitrarily far backwards to the singularity region, then evolve it forward again back to the start. The two resulting hypersurfaces won't necessarily agree anymore, depending upon details of what takes place near the singularity. This is why it's often said that naked singularities yield problems for determinism. So I would say the propriety of those sorts of manipulations is basically equivalent to whether you accept (weak) cosmic censorship or not.

  16. stevendaryl
    stevendaryl says:

    You have these backwards.

    Right. In the black hole interior, [itex]\frac{dr}{d\tau} < 0[/itex], and in the white hole interior, [itex]\frac{dr}{d\tau} > 0[/itex].

    So now I'm a little confused: What is it that prevents having two nearby test particles with opposite signs of [itex]\frac{dr}{d\tau}[/itex]?

  17. Ben Niehoff
    Ben Niehoff says:

    Sure, you can formally do this. But then I can formally take a line in the middle of the diagram, evolve it arbitrarily far backwards to the singularity region, then evolve it forward again back to the start. The two resulting hypersurfaces won't necessarily agree anymore, depending upon details of what takes place near the singularity. This is why it's often said that naked singularities yield problems for determinism. So I would say the propriety of those sorts of manipulations is basically equivalent to whether you accept (weak) cosmic censorship or not.

    I think the problem you are trying to highlight is merely that you can't use the white-hole singularity as a Cauchy surface.  This doesn't mean that Cauchy surfaces don't exist.  Informally, anything can come out of a white hole, much like anything can fall into a black hole.

    I agree this leads to problems with causality in the eternal black hole spacetime, because effectively one cannot evolve from the infinite past into the infinite future.  So one cannot answer the question, "What happens if I put a white hole in spacetime?"  However, the Cauchy problem is not "What happens if I do something undefined?", but rather "Given that the current state is A, what happens next?"

    You don't have to evolve them forward. You can evolve the initial data on the hypersurface ##T = 0## in Kruskal-Szekeres coordinates both forwards and backwards. Doing so will give you the complete globally hyperbolic region, all the way back to the past singularity and forward to the future singularity. Since the equations are time symmetric, this is perfectly well-defined and justified.

    I disagree with the terminology "globally hyperbolic" here.  The equations of motion fail at the singularities, and the singularities are reachable in finite proper time.  Thus the hyperbolic region is not "global".

    The main issue here is the geodesic incompleteness at the singularities.  This means you cannot just excise the singularities, as you could if they were "infinitely far away".
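    The finite-proper-time point can be made quantitative: for the ##E = 0## radial geodesic, ##\left(dr/d\tau\right)^2 = 2M/r - 1## inside the horizon, so the (maximal) proper time from ##r = 2M## to the singularity is ##\int_0^{2M} dr/\sqrt{2M/r - 1} = \pi M##. A numerical sketch (geometric units, ##M = 1## assumed), using the substitution ##r = 2M\sin^2\theta## to regularize the endpoint:

```python
import math

M = 1.0  # geometric units G = c = 1

def proper_time_horizon_to_singularity(n=100000):
    """Proper time from r = 2M to r = 0 along the E = 0 radial geodesic:
    tau = integral_0^{2M} dr / sqrt(2M/r - 1).  With r = 2M sin^2(theta)
    the integrand becomes 4M sin^2(theta) on [0, pi/2] (midpoint rule)."""
    h = (math.pi / 2.0) / n
    return sum(4.0 * M * math.sin((i + 0.5) * h) ** 2 * h for i in range(n))

tau = proper_time_horizon_to_singularity()
print(tau, math.pi * M)  # tau converges to pi * M
```

    So any observer inside the horizon reaches the singularity within ##\pi M## of proper time, which is why the singularity cannot simply be excised as if it were infinitely far away.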

    I don't know what you're basing this on. The subject under discussion is a well-defined solution of the classical Einstein Field Equation. Any event with finite spacetime curvature invariants, including arbitrarily large ones, can occur in such a solution. The solution might not end up describing anything physically relevant, but that doesn't mean the points with large spacetime curvature values "don't obey the equation"; it just means physics, unlike this particular mathematical model, chooses some other equation at that point.

    The issue is that the singularities don't obey the equation.  There is no sense in which they do (in contrast, e.g., to the singularity in the electric field of a point charge, which can be dealt with by using distributions).

  18. PeterDonis
    PeterDonis says:

    What is it that prevents having two nearby test particles with opposite signs of ##\frac{dr}{d\tau}##?

    The convention you just implicitly adopted for the direction along the worldline in which ##\tau## increases. To be fair, I slipped it in there without saying so. :wink:

    A more explicit unpacking would be this: first, at every event in the spacetime, we make a choice of which half of the light cone is the "future" half, and which half is the "past" half, in such a way that the choice is continuous throughout the spacetime. There are only two ways of doing this: we can choose the half that points towards region II on the Kruskal diagram as the "future" half, or we can choose the half that points towards region IV. But once we've made that choice at one event, for continuity we have to make the same choice at every event. The usual convention is to choose the "future" half to point towards region II.

    Then we just define ##\tau## along every timelike worldline such that it increases from the past to the future, as defined by the halves of the light cones. Once we've done that, then we must have ##dr / d\tau > 0## in region IV and ##dr / d\tau < 0## in region II along every timelike worldline.

    If you think about it, you will see that there is no actual loss of generality in doing all this, because the spacetime as a whole is time symmetric.

  19. PAllen
    PAllen says:

    Right. In the black hole interior, [itex]\frac{dr}{d\tau} < 0[/itex], and in the white hole interior, [itex]\frac{dr}{d\tau} > 0[/itex].

    So now I'm a little confused: What is it that prevents having two nearby test particles with opposite signs of [itex]\frac{dr}{d\tau}[/itex]?

    Applying an orientation to an orientable spacetime involves choosing a consistent labeling of past/future of all light cones. Then, for any world line, a tangent directed one way (one sign, in your case) is future directed, while the other sign is past directed.
