Insights The Schwarzschild Geometry: Part 4 - Comments

  1. Dec 21, 2016 #1

    PeterDonis

    2016 Award

    Staff: Mentor

  3. Dec 21, 2016 #2

    Haelfix

    Science Advisor

    There are a few other really interesting points about region III and region IV.
    1) There is a serious Cauchy problem with having a past singularity that is allowed to communicate information off to infinity.

    2) The white hole horizon is conceptually really bizarre...
    Since nothing is allowed to get in, that means that 'test' particles traveling in orbits around the white hole horizon (more precisely the particle horizon) will accumulate there, and there will be a severe blue shift when viewed from infinity. This blue sheet is a sort of classical instability, and it is argued that it leads to gravitational collapse, and thus there is likely a singularity in the future as well! See:

    Death of White Holes in the Early Universe - Eardley, Douglas M. Phys.Rev.Lett. 33 (1974) 442-444

    3) Quantum mechanically, if you believe in Hawking radiation/evaporation and black hole thermodynamics, in some sense black hole and white hole microstates have to be the same thing! See:
    Black Holes and Thermodynamics - Hawking, S.W. Phys.Rev. D13 (1976) 191-197
     
  4. Dec 21, 2016 #3

    PeterDonis

    2016 Award

    Staff: Mentor

    If "Cauchy problem" is intended to mean that the spacetime has a Cauchy horizon, this is not true. The Schwarzschild spacetime is globally hyperbolic.

    It is true that the past singularity seems highly unphysical, but I'm not sure "Cauchy problem" is the best way to describe why.

    Which test particles are these? If they are test particles in stable orbits in region I, they can equally well be viewed as orbiting the black hole; they certainly don't accumulate near the white hole horizon.

    If you mean test particles that are close to the white hole horizon, there are no stable orbits there; there are no stable orbits inside ##r = 6M##, and there are no orbits at all, even unstable ones, inside ##r = 3M##. So any freely falling object below ##r = 3M## will fall into the black hole, region II; it won't "accumulate" at the white hole horizon.
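    (For anyone who wants to check these thresholds numerically, here is a minimal sketch, not from the Insights article, in geometric units ##G = c = 1## with ##M = 1##. Circular orbits of a test particle with angular momentum per unit mass ##L## are extrema of the effective potential ##V^2 = (1 - 2M/r)(1 + L^2/r^2)##, which gives ##M r^2 - L^2 r + 3 M L^2 = 0##.)

    [code]
    # Sketch (illustration only): circular-orbit radii in Schwarzschild spacetime,
    # geometric units G = c = 1, M = 1.  Radii solve  M r^2 - L^2 r + 3 M L^2 = 0,
    # from extremizing the effective potential V^2 = (1 - 2M/r)(1 + L^2/r^2).
    import numpy as np

    M = 1.0
    for L in [3.0, 3.5, 4.0, 10.0, 100.0]:          # angular momentum per unit mass
        disc = L**4 - 12.0 * M**2 * L**2
        if disc < 0.0:
            print(f"L = {L:7.2f}: no circular orbits for this L")
            continue
        r_stable   = (L**2 + np.sqrt(disc)) / (2.0 * M)   # outer root: stable
        r_unstable = (L**2 - np.sqrt(disc)) / (2.0 * M)   # inner root: unstable
        print(f"L = {L:7.2f}: stable orbit at r = {r_stable:9.3f} M, "
              f"unstable orbit at r = {r_unstable:6.3f} M")

    # The stable branch never drops below r = 6M (the ISCO), and the unstable
    # branch stays between 3M and 6M, approaching r = 3M only as L -> infinity.
    [/code]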

    Not for objects that are free-falling radially inward. They will see incoming light from infinity to be redshifted.

    Objects in free-fall orbits will see incoming light from infinity to be blueshifted, but at the lowest possible orbit, ##r = 3M##, the blueshift is quite modest.

    Objects that have nonzero proper acceleration can "hover" close to the horizon and will indeed see a large blueshift in light coming in from infinity. But this is due to their proper acceleration, which increases without bound as the horizon is approached.
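    (A quick numerical illustration of that last point, not from the article; geometric units ##G = c = 1##, ##M = 1##. For a static observer at radius ##r##, light from infinity is blueshifted by ##1/\sqrt{1 - 2M/r}## and the proper acceleration needed to hover is ##M / (r^2 \sqrt{1 - 2M/r})##; both blow up as ##r \rightarrow 2M##.)

    [code]
    # Sketch (illustration only): blueshift and proper acceleration for a "hovering"
    # (static) observer in Schwarzschild spacetime, geometric units G = c = 1, M = 1.
    import numpy as np

    M = 1.0
    for r in [10.0, 3.0, 2.1, 2.01, 2.001]:
        f = 1.0 - 2.0 * M / r                  # the Schwarzschild factor
        blueshift = 1.0 / np.sqrt(f)           # observed / emitted frequency ratio
        accel = M / (r**2 * np.sqrt(f))        # proper acceleration needed to hover
        print(f"r = {r:6.3f} M: blueshift = {blueshift:8.2f}, "
              f"proper acceleration = {accel:10.2f} (units of 1/M)")
    [/code]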

    All of this is standard Schwarzschild spacetime physics; none of that changes when we include the full maximally extended spacetime in our model.

    Unfortunately this paper is behind a paywall so I can't access it. If you want to email me a copy, I'm at peterdonis@alum.mit.edu. I would be curious to read the paper and see exactly what spacetime geometry it is assuming. Since it is dealing with the early universe, it obviously is not using a vacuum geometry, and the Schwarzschild spacetime I am discussing in this series is a vacuum solution (except for the Oppenheimer-Snyder model, which has a non-vacuum region, but that model also has no region III or IV so it's not relevant here). In short, I'm not sure the term "white hole" in that paper means the same thing as I mean by "white hole" in these articles.
     
  5. Dec 21, 2016 #4

    PeterDonis

    2016 Award

    Staff: Mentor

    I'm aware of this hypothesis by Hawking, but I don't know if it has led to anything in the field of quantum gravity.
     
  6. Dec 22, 2016 #5
    Fantastic series Peter, thanks!
     
  7. Dec 22, 2016 #6

    Haelfix

    Science Advisor

    A Cauchy problem ('initial value problem') in GR is a statement about taking surfaces of initial data (in GR, spacelike surfaces, but they could in principle also involve data from other matter fields) and developing them forward in some regular way, subject to the relevant partial differential equations, such that the process satisfies certain constraints (basically you want reversibility, avoiding many-to-one mappings, etc.). Here, the initial data surface is singular as there is geodesic incompleteness, and physically this manifests itself as a loss of predictability between any 'two' distinct states in the theory, provided the singular surface is in at least one's past light cone. Basically you are taking an infinite amount of information (states) and allowing that to propagate throughout spacetime. This language is often used when discussing formulations of cosmic censorship, but for some reason that I don't understand the FRW singularity and the white hole singularity seem to be excluded from theorems about cosmic censorship (probably because they are trivial).

    Sorry, I'm not being clear here. The geometry I'm referring to is not vacuum, but it is somewhat similar to the Oppenheimer-Snyder model you were discussing. It is the *perturbed* extended Schwarzschild solution with an infalling sheet of spherically symmetric null dust. Unfortunately I'm now away from my institution for the holidays, and it seems hard to find material discussing this that's not behind a paywall (there is a whole chapter about white hole instabilities in Novikov and Frolov), but for the Eardley instability I found roughly the picture I was looking for in the following paper, as well as some of the discussion of the setup: see figure 1
    http://gravityresearchfoundation.org/pdf/awarded/1989/blau_guth.pdf

    Hawking's argument is a statement about semiclassical states and thermal equilibrium, and in my opinion is pretty convincing. Of course without knowing the degrees of freedom of quantum gravity, it's hard to speculate whether a similar thing holds true in the full theory or not.
     
  8. Dec 22, 2016 #7

    PeterDonis

    2016 Award

    Staff: Mentor

    Ah, ok. I had seen that language before but got confused thinking of a Cauchy horizon.

    I am still confused by this, however. As I said before, the maximally extended Schwarzschild spacetime is globally hyperbolic; that means it automatically has a well-posed initial value problem. As an example of how to formulate it, the spacelike surface ##T = 0## in Kruskal-Szekeres coordinates is a Cauchy surface for the spacetime; appropriate initial data on that surface (basically the geometry of all the 2-spheres that make it up, which is equivalent to specifying the one free parameter ##M## in the line element) determines the entire spacetime. It is true that the entire spacetime thus determined is geodesically incomplete--more precisely, it is timelike geodesically incomplete. But that is not inconsistent with the spacetime being globally hyperbolic and having a well-posed initial value problem.
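    (As a concrete illustration of what that initial data looks like--a small numerical sketch, not part of the Insights series, in geometric units ##G = c = 1##: on the ##T = 0## slice the areal radius of each 2-sphere is fixed by the standard Kruskal relation ##X^2 - T^2 = (r/2M - 1) e^{r/2M}##. Note that on this slice ##r \geq 2M## everywhere, so the initial data never touches either singularity.)

    [code]
    # Sketch (illustration only): areal radius r of the 2-spheres on the T = 0
    # Kruskal-Szekeres slice, from  X^2 - T^2 = (r/2M - 1) * exp(r/2M),
    # geometric units G = c = 1.  The left-hand side is >= 0 on T = 0, so r >= 2M.
    import numpy as np
    from scipy.optimize import brentq

    M = 1.0

    def areal_radius(X, T=0.0):
        lhs = X**2 - T**2
        g = lambda r: (r / (2.0 * M) - 1.0) * np.exp(r / (2.0 * M)) - lhs
        return brentq(g, 1e-9, 100.0 * M)   # g is monotone in r, so the root is unique

    for X in [0.0, 0.5, 1.0, 2.0, 5.0]:
        print(f"X = {X:4.1f}:  r = {areal_radius(X):.4f} M")
    [/code]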

    I'll look at the paper you linked to and comment further after I've read it.
     
  9. Dec 23, 2016 #8

    martinbn

    Science Advisor

    Can you elaborate? Because as written it doesn't seem right. The initial hypersurface of the initial value problem is not singular. It is a complete Riemannian manifold. Its future (and past) Cauchy development is incomplete (as a Lorentzian manifold), but the initial data is as regular as it gets.
     
  10. Dec 23, 2016 #9

    Haelfix

    Science Advisor

    So there are certainly spacelike Cauchy surfaces that one can construct that will have finite values for all physical quantities arbitrarily 'near' the singularity, but I don't believe this is a sufficient condition for being a well-posed surface (I agree 'regular' is an incorrect word choice). There are other technical restrictions on the form of the initial data, and I'd have to consult a textbook (I'm currently away) for the exact statements. Clearly, having arbitrarily large (but finite) tidal forces is not what one would want for well-behaved data.
     
  11. Dec 24, 2016 #10

    martinbn

    Science Advisor

    I don't think there is any problem, but I would like to know, so I'd like to see it when you find it.

    It seems that you expect the initial hypersurface to be as far back in the past as possible, but that is not needed; any surface could be used. For example, a horizontal line that goes right through the middle of the diagram is as good as any other.
     
  12. Dec 24, 2016 #11

    Haelfix

    Science Advisor

    Yes, but think about it: any such line has access to the singularity region in its causal past. Surfaces that include data with arbitrarily large curvature invariants are thus being evolved forward with Einstein's equations, when they likely don't even obey the equations to begin with. The entire future spacetime is thus built out of that dubious development. When people formulate statements about cosmic censorship, they are trying to formalize that notion somehow (and I know there are difficulties with making the statement precise). I'll look into it when I get the chance.
     
  13. Dec 24, 2016 #12

    stevendaryl

    Staff Emeritus
    Science Advisor

    Exactly what the white hole is is a little mysterious to me. It seems that there is a sense in which there is no difference in the spacetime geometry of a black hole and a white hole; the difference is simply initial conditions of the test particles traveling in that geometry.

    Let me explain why I think that.

    To simplify, let's talk about purely radial motion, so we can treat the Schwarzschild geometry as if there were only one spatial dimension. Let [itex]Q[/itex] be the Schwarzschild factor defined by: [itex]Q \equiv 1 - \frac{2GM}{c^2 r}[/itex]. Let [itex]\tau[/itex] be proper time. Let [itex]U^\mu \equiv \frac{dx^\mu}{d\tau}[/itex]. Then for a test particle of mass [itex]m[/itex] moving along a radial timelike geodesic, we have the following conserved quantities:
    1. [itex]K \equiv m c Q U^t[/itex]. This is sort of the "momentum" in the t-direction.
    2. [itex]H \equiv \frac{m c^2}{2} Q (U^t)^2 - \frac{m (U^r)^2}{2Q}[/itex]. This is actually [itex]\frac{m}{2} \frac{ds^2}{d\tau^2}[/itex], so it's just equal to [itex]\frac{mc^2}{2}[/itex].
    Putting these together gives an equation for [itex]U^r[/itex]:

    [itex]\frac{m}{2} (U^r)^2 - \frac{GMm}{r} = \mathcal{E}[/itex]

    where [itex]\mathcal{E} = \frac{K^2}{2 m} - \frac{mc^2}{2}[/itex]

    I wrote it in this way so that you can immediately see that it's just the energy equation for a test particle moving under Newtonian gravity. So without any mathematics, we can immediately guess the qualitative behavior:

    If [itex]\mathcal{E} < 0[/itex], and initially, [itex]U^r > 0[/itex], then the test particle will rise to some maximum height: [itex]r_{max} = \frac{GMm}{|\mathcal{E}|}[/itex], and then will fall back to annihilation at [itex]r=0[/itex] in a finite amount of (proper) time. The interesting case is [itex]r_{max} > \frac{2GM}{c^2} \equiv r_S[/itex], where [itex]r_S[/itex] is the black hole's Schwarzschild radius. In that case, this scenario represents a particle rising from below the event horizon and then turning around and falling back through the event horizon.
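    (A quick numerical check of this behavior--just a sketch, not part of the argument above, in geometric units [itex]G = c = 1[/itex] with [itex]M = 1[/itex] and everything per unit test-particle mass, so [itex]E = \mathcal{E}/m[/itex]. Differentiating the energy equation gives [itex]\frac{d^2 r}{d\tau^2} = -\frac{M}{r^2}[/itex], which is easy to integrate.)

    [code]
    # Sketch (illustration only): integrate d^2 r / d tau^2 = -M / r^2 for a radial
    # timelike geodesic with E < 0, geometric units G = c = 1, M = 1, per unit mass.
    # The particle rises from small r, turns around at r_max = M/|E|, and falls back,
    # all in finite proper time.
    import numpy as np

    M, E = 1.0, -0.2                    # E < 0  =>  r_max = M/|E| = 5 M (outside r = 2 M)
    dtau = 1e-4
    r = 0.1                             # start deep inside the horizon, moving outward
    v = np.sqrt(2.0 * (E + M / r))      # dr/dtau fixed by the energy equation
    tau, r_max_seen = 0.0, r

    while not (v < 0.0 and r < 0.1):    # stop once the particle has fallen back to the start
        v += -M / r**2 * dtau           # symplectic Euler step
        r += v * dtau
        tau += dtau
        r_max_seen = max(r_max_seen, r)

    print(f"predicted turning radius M/|E| = {M/abs(E):.3f}, numerically reached {r_max_seen:.3f}")
    print(f"proper time for the full rise and fall: tau = {tau:.2f} (in units of M)")
    [/code]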

    That seems to contradict the fact that nothing can escape from the event horizon, but to see why it doesn't, you have to see what the time coordinate [itex]t[/itex] is doing: In the time period between the particle rising out of the event horizon and falling back into the event horizon, only a finite amount of proper time passes, but an infinite amount of coordinate time passes. In the far past, [itex]t \rightarrow -\infty[/itex], the particle emerges from the event horizon, and in the far future, [itex]t \rightarrow +\infty[/itex], the particle sinks below the event horizon. The time period while the particle is rising up to the event horizon, and the time period while the particle is falling below the event horizon, are not covered by the coordinate [itex]t[/itex] (well, you can still have a [itex]t[/itex] coordinate there, but its connection to the [itex]t[/itex] coordinate above the horizon is broken by the event horizon). So from the point of view of someone far from the black hole, using the [itex]t[/itex] coordinate for time, nothing ever crosses the event horizon (in either direction) for any finite value of [itex]t[/itex].

    Going back to the test particle, we can identify the various parts of the Schwarzschild geometry:
    1. During the time that the particle is rising below the event horizon, the particle is traveling through Region IV, the white hole interior.
    2. During the time that the particle is above the event horizon, the particle is traveling through Region I, the black hole exterior.
    3. During the time that the particle is falling below the event horizon, the particle is traveling through Region II, the black hole interior.
    (A fourth region, Region III, is not visited by the test particle, but is a black hole exterior like Region I).

    The point is that nothing about the local geometry of spacetime changes in going from Region IV (the white hole interior) to Region II (the black hole interior). The only difference is the sign of [itex]\frac{dr}{d\tau}[/itex]. So the difference between a black hole and a white hole is simply the initial conditions of the test particle. So it's not that the particle is repelled by the white hole and is attracted by the black hole. It's true by definition that:
    • If the test particle is below the event horizon and [itex]\frac{dr}{d\tau} > 0[/itex], then the particle is in the black hole interior.
    • If the test particle is below the event horizon and [itex]\frac{dr}{d\tau} < 0[/itex], then the particle is in the white hole interior.
    As for the exterior, the same region, Region I, serves as the exterior of the white hole and the black hole. The same event horizon looks like a white hole in the far past [itex]t \rightarrow -\infty[/itex], because the test particle is rising from it, and looks like a black hole in the far future [itex]t \rightarrow +\infty[/itex], because the test particle is falling toward it. (For a realistic black hole formed from the collapse of a star, there is no event horizon in the limit [itex]t \rightarrow -\infty[/itex], so there is no corresponding white hole.)

    Here are some puzzles having to do with the test particles:
    1. In the case of many test particles instead of just one, do all the particles have the same sign of [itex]\frac{dr}{d\tau}[/itex]? They are all falling in the black hole interior, and all rising in the white hole interior. Why aren't there some particles that are rising, while other particles are falling? It turns out that there is a simple answer to this question. If you have [itex]\frac{dr}{d\tau} > 0[/itex], you can make it [itex]\frac{dr}{d\tau} < 0[/itex] by reparametrizing: [itex]\tau \rightarrow -\tau[/itex]. So it's possible to arrange it so that all particles have the same sign of [itex]\frac{dr}{d\tau}[/itex].
    2. A followup to the first puzzle: If you just arbitrarily flip the sign of [itex]\tau[/itex] for a test particle, it makes no difference, since test particles have no internal state. But if instead you don't have a test particle, but a physical object, such as a clock or a human being, then flipping the sign of proper time means reversing the usual progression of states. The clock will start running backwards, and the human will start getting younger, instead of older. That's not technically a contradiction, because the laws of physics are reversible, so it's possible for a human to age backwards. But it's a violation of the law of increasing entropy. So if it happens to be the case (and it sure seems to be) that all processes in the universe have the same thermodynamic arrow of time--entropy increases as proper time increases--then puzzle number 1 switches to the question of why there is a universal thermodynamic arrow of time. This boils down to the question: Why was entropy lower in the far past? General Relativity doesn't answer this question. (I'm not sure what does.)
    3. Another complication is to include test particles that don't move on geodesics, because of non-gravitational forces. How does that affect the picture?
     
  14. Dec 24, 2016 #13

    PeterDonis

    2016 Award

    Staff: Mentor

    You have these backwards.

    We don't have a final answer to this question, because we don't know what preceded the hot, dense, rapidly expanding "Big Bang" state. We only know that the entropy of that state was much lower than the present entropy of the universe.

    In regions IV and II (the white hole and black hole), it doesn't really change things at all: all test particles must still leave the white hole, and all test particles that enter the black hole still can't escape.

    In region I (and III as well), it allows test particles that would otherwise fall into the black hole to avoid it and stay in region I (or III). It still doesn't allow anything to enter the white hole.
     
  15. Dec 24, 2016 #14

    PAllen

    Science Advisor
    Gold Member

    To me there is a simple inverse symmetry between BH and WH: for a WH, the singularity is in the past light cone of every event in the interior, while for a BH it is in the future light cone of every interior event.
     
  16. Dec 24, 2016 #15

    PeterDonis

    2016 Award

    Staff: Mentor

    You don't have to evolve them forward. You can evolve the initial data on the hypersurface ##T = 0## in Kruskal-Szekeres coordinates both forwards and backwards. Doing so will give you the complete globally hyperbolic region, all the way back to the past singularity and forward to the future singularity. Since the equations are time symmetric, this is perfectly well-defined and justified.

    I don't know what you're basing this on. The subject under discussion is a well-defined solution of the classical Einstein Field Equation. Any event with finite spacetime curvature invariants, including arbitrarily large ones, can occur in such a solution. The solution might not end up describing anything physically relevant, but that doesn't mean the points with large spacetime curvature values "don't obey the equation"; it just means physics, unlike this particular mathematical model, chooses some other equation at that point.
     
  17. Dec 26, 2016 #16

    martinbn

    Science Advisor

    Well, that's not how it works. The initial data doesn't include anything from the past of the Cauchy surface. In fact, until you solve the equations, there is no past or future. The initial data consists of fields defined on the surface. Whatever the values of the past and future evolution may be, say arbitrarily large, they are not part of the initial conditions. So there is nothing dubious here, and by construction you get solutions to the Einstein equation.

    I am not sure if this is relevant, but one way the strong cosmic censorship conjecture is formulated is that the maximal Cauchy development is not extendible, which is the case for Schwarzschild but not for Kerr. The weak version usually asks for completeness of future null infinity.
     
  18. Dec 26, 2016 #17

    Haelfix

    Science Advisor

    Sure, you can formally do this. But then I can formally take a line in the middle of the diagram, evolve it arbitrarily far backwards to the singularity region, then evolve it forward again back to the start. The two resulting hypersurfaces won't necessarily agree anymore, depending on the details of what takes place near the singularity. This is why it's often said that naked singularities yield problems for determinism. So I would say the propriety of those sorts of manipulations is basically equivalent to whether you accept (weak) cosmic censorship or not.
     
  19. Dec 26, 2016 #18

    stevendaryl

    Staff Emeritus
    Science Advisor

    Right. In the black hole interior, [itex]\frac{dr}{d\tau}< 0[/itex] and in the white hole interior, [itex]\frac{dr}{d\tau} > 0[/itex].

    So now I'm a little confused: What is it that prevents having two nearby test particles with opposite signs of [itex]\frac{dr}{d\tau}[/itex]?
     
  20. Dec 26, 2016 #19

    Ben Niehoff

    Science Advisor
    Gold Member

    I think the problem you are trying to highlight is merely that you can't use the white-hole singularity as a Cauchy surface. This doesn't mean that Cauchy surfaces don't exist. Informally, anything can come out of a white hole, much like anything can fall into a black hole.

    I agree this leads to problems with causality in the eternal black hole spacetime, because effectively one cannot evolve from the infinite past into the infinite future. So one cannot answer the question, "What happens if I put a white hole in spacetime?" However, the Cauchy problem is not "What happens if I do something undefined?", but rather "Given that the current state is A, what happens next?"

    I disagree with the terminology "globally hyperbolic" here. The equations of motion fail at the singularities, and the singularities are reachable in finite proper time. Thus the hyperbolic region is not "global".

    The main issue here is the geodesic incompleteness at the singularities. This means you cannot just excise the singularities, as you could if they were "infinitely far away".

    The issue is that the singularities don't obey the equation. There is no sense in which they do (in contrast, e.g., to the singularity in the electric field of a point charge, which can be dealt with by using distributions).
     
  21. Dec 26, 2016 #20

    PeterDonis

    2016 Award

    Staff: Mentor

    The convention you just implicitly adopted for the direction along the worldline in which ##\tau## increases. To be fair, I slipped it in there without saying so. :wink:

    A more explicit unpacking would be this: first, at every event in the spacetime, we make a choice of which half of the light cone is the "future" half, and which half is the "past" half, in such a way that the choice is continuous throughout the spacetime. There are only two ways of doing this: we can choose the half that points towards region II on the Kruskal diagram as the "future" half, or we can choose the half that points towards region IV. But once we've made that choice at one event, for continuity we have to make the same choice at every event. The usual convention is to choose the "future" half to point towards region II.

    Then we just define ##\tau## along every timelike worldline such that it increases from the past to the future, as defined by the halves of the light cones. Once we've done that, then we must have ##dr / d\tau > 0## in region IV and ##dr / d\tau < 0## in region II along every timelike worldline.
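    One way to make that sign statement explicit, using nothing but the standard Kruskal relation: the areal radius satisfies ##X^2 - T^2 = \left( \frac{r}{2M} - 1 \right) e^{r/2M}##, and the right-hand side is a strictly increasing function of ##r##, so ##dr/d\tau## has the same sign as ##\frac{d}{d\tau}(X^2 - T^2) = 2 \left( X \frac{dX}{d\tau} - T \frac{dT}{d\tau} \right)##. With the future halves of the light cones chosen to point towards region II, every future-directed timelike worldline has ##dT/d\tau > |dX/d\tau|##. In region II, ##T > |X|##, so the ##T \, dT/d\tau## term dominates and ##dr/d\tau < 0##; in region IV, ##T < -|X|##, so the same term dominates with the opposite overall sign and ##dr/d\tau > 0##.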

    If you think about it, you will see that there is no actual loss of generality in doing all this, because the spacetime as a whole is time symmetric.
     