|Dec2-12, 11:11 PM||#103|
Julian Barbour on does time exist
Sheaf: No, I wouldn't dare to tamper with the words of the great Muslim philosopher Omar Khayyam! See Rubaiyat of Omar Khayyam, quatrain No. 51.
|Dec3-12, 12:16 PM||#104|
The words came to mind without their original punctuation and are so transcribed.
The moving finger writes, and having writ
moves on––nor all your piety nor wit
shall lure it back to cancel half a line,
nor all your tears wash out a word of it.
One can well ask "why" the apparent connection between AFTERNESS, as in odtAa, and algebraic unswitchability. John Baez put in a related comment at Jeff Morton's blog (of the "I think this is cool..." sort) when they were discussing Rovelli's thermal time idea.
I'll get a link.
The joking reference to "many-fingered time" was sly of Sheaf and a bit arcane. It is a modern hypothesis that a few people have explored. (Including Demystifier among others.) I think it comes in different versions. One picture (not Demy's) might be of a block universe past that grows forwards in time from many different points, in a sort of uncoordinated way. Sheaf must know a lot more about it than I. The idea would have baffled Messrs Fitzgerald and Khayyam, I imagine. We don't really need to consider it here, since the thermal time construction gives us one unique universal time (to which we can compare local and observer times).
Here's a link to Jeff Morton's blog post about TT.
Here's the Baez quote from "Theoretical Atlas":
John Baez Says:
October 30, 2009 at 12:08 am
I think every von Neumann algebra has a ‘time-reversed version’, namely the conjugate vector space (where multiplication by i is now defined to be multiplication by -i) turned into a C*-algebra in the hopefully obvious way. And I think the Tomita flow of this time-reversed von Neumann algebra flows the other way!
I know that every symplectic manifold has a ‘time-reversed version’ where the symplectic structure is multiplied by -1. This is equivalent to switching the sign of time in Hamilton’s equations.
I think it’s cool how time reversal is built into these mathematical gadgets.
|Dec3-12, 09:42 PM||#105|
Thanks for the pointer to Jeff Morton's blog. It's a gem. And for translating Sheaf's post -- I didn't know about that particular gadget; all mathematical gadgets are most definitely cool, like Omar's stuff. The wonder for me is how so many of them are practical, as well!
|Dec3-12, 10:12 PM||#106|
I'm in general agreement about Jeff Morton's blog, especially the October 2009 post, and with the spirit of your remarks. I notice I got Jeff's location wrong, a few posts back. He was at Lisbon until recently but is now at Uni Hamburg. That's become a pretty good place for Quantum Gravity, as well as Mathematical Physics (Jeff's field). He could continue to be interested and well-informed about QG (whatever direction his own research takes) which would be our good fortune.
The only recent question no one has responded to in this thread is from H. Cow about the current situation in QG. It's changing rapidly, and strongly affected by what's happening in Quantum Cosmology, since that is where the effects of quantum geometry are most apt to be visible, in the aftermath of the Bounce, or whatever happened around the start of the present expansion. Since that is not the main topic of this thread, I would urge H. Cow to start a thread asking about that---and also take a look at my thread about the current efforts at REFORMULATING Loop QG.
Getting back to the topic of TIME. I'd be very curious to know how a cosmological bounce looks in the general covariant Heisenberg picture----there is just one timeless state, which is a positive linear functional on a C* algebra of observables. An algebra [A] and a functional ρ defined on it which gives us, among other things, expectation values ρ(A) and correlations e.g. ρ(AB) - ρ(A)ρ(B) and the like.
Taken together, ([A], ρ) give us a one-parameter group α_t which acts like the passage of time on the observables---mapping each A into the corresponding observable taken a little while later---a PROCESS that mixes and morphs and stirs the observables around, a "flow" defined on the algebra [A].
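To make the pair ([A], ρ) concrete, here is a minimal numpy sketch of my own: the *-algebra is just the 2x2 complex matrices, and the state is given by a made-up density matrix (all the numbers are purely illustrative).

```python
import numpy as np

# Toy version of the pair ([A], rho): the *-algebra [A] is all 2x2 complex
# matrices, and the state rho is given by a made-up density matrix rho_m.
rho_m = np.array([[0.7, 0.0],
                  [0.0, 0.3]])            # positive, trace 1

def state(A):
    """The positive linear functional rho(A) = Tr(rho_m A)."""
    return np.trace(rho_m @ A)

# Two observables (Pauli matrices), and the quantities mentioned above:
Z = np.array([[1, 0], [0, -1]], dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)

exp_Z = state(Z)                             # expectation value rho(Z) -> 0.4
corr = state(Z @ X) - state(Z) * state(X)    # correlation rho(ZX) - rho(Z)rho(X)

# Positivity: a state must satisfy rho(A*A) >= 0 for any A.
A = np.array([[1 + 2j, 0.5], [0.0, 3j]])
assert state(A.conj().T @ A).real >= 0
assert abs(exp_Z - 0.4) < 1e-12
```

Everything about expectation values and correlations in the discussion above reduces, in this toy, to traces against rho_m.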
So if the theory has a bounce, one intuitively feels there should be an energy density observable, call it A, corresponding to a measurement made pre-bounce, so that we can watch the expectation value of α_t(A) evolve through the bounce. In other words ρ(α_t(A)) should start low, as in a classical universe of the sort we're familiar with, rise to some extremely high value within a few powers of ten of the Planck density, and then subside back to low densities comparable to pre-bounce.
Now the state ρ being timeless means that it does not change. So the challenge is to come up with an algebra of observables which undergoes a bounce, when given the appropriate timeless positive linear state functional ρ defined on it.
Thermal time could be starting to attract wider attention. I noticed that it comes up in the latest Grimstrup Aastrup paper, "C*-algebras of Holonomy-Diffeomorphisms & Quantum Gravity I", pages 37-39
G&A's reference  is to the paper by Connes and Rovelli:
==sample excerpt from pages 37-39==
...A more appealing possibility is to seek a dynamical principle within the mathematical machinery of noncommutative geometry. In particular, the theory of Tomita and Takesaki states that given a cyclic and separating state on a von Neumann algebra there exist a canonical time flow in the form of a one-parameter group of automorphisms. If we consider the algebra generated by HD(M) and spectral projections of the Dirac type operator, then the semi-classical state will, provided it is separating, generate such a flow. This would imply that the dynamics of quantum gravity is state dependent13 - an idea already considered in  and . Since Tomita-Takesaki theory deals with von Neumann algebras it will also for this purpose be important to select the correct algebra topology.
Hidden within the two issues concerning of the dynamics and the complexified SU(2) connections lurks a very intriguing question. If it is possible to derive the dynamics of quantum gravity from the spectral triple construction – for instance via Tomita Takesaki theory – then it should be possible to read off the space-time signature (Lorentzian vs. Euclidean) from the derived dynamics, for instance a moduli operator.
I don't follow the Grimstrup et al. work at all closely, but note their interest.
|Dec4-12, 02:28 AM||#107|
|Dec4-12, 07:11 AM||#108|
I won't be able to answer your post right away. John Baez is a teacher. I think he is using his comment to hint at how the Tomita flow of time is sensed or tasted from the algebra [A] and the function rho defined on it (that basically just gives expectation values of the observables separately and in combination).
The way the flow is constructed has very much to do with replacing i with -i.
That is what the * operation of a C*-algebra does, on a larger scale.
The slow way to digest this business intuitively begins with thinking a little about the complex conjugate operation that takes x+iy into x - iy. It flips the plane of complex numbers over along the horizontal axis.
Then you think about generalizing that to matrices. For a one-by-one matrix, well, that is just a single complex number, so you just take the conjugate. For a two-by-two, that is 4 complex numbers, and there is an analogous thing involving conjugates and taking the "transpose" of the matrix (exchange the upper-right and lower-left entries).
It's a swapping operation that, if repeated, gets you back what you had to start with---the mathematics term is "involution".
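A quick numpy check of the swap/involution behavior just described (toy matrices of my own choosing):

```python
import numpy as np

# Conjugation of a complex number: do it twice and you get back what you had.
z = 3 + 4j
assert z.conjugate().conjugate() == z

# For matrices, the "star" is the conjugate transpose: conjugate each entry
# and swap across the diagonal.
A = np.array([[1 + 2j, 5j],
              [3.0,    2 - 1j]])
A_star = A.conj().T

assert np.allclose(A_star.conj().T, A)      # (A*)* = A: an involution

# The star also reverses products, (AB)* = B* A*.
B = np.array([[0.0, 1j],
              [1.0, 0.0]])
assert np.allclose((A @ B).conj().T, B.conj().T @ A.conj().T)
```

The product-reversing property is the "unswitchability" flavor mentioned earlier in the thread.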
The first thing the Tomita time-flow construction does is define a "swap" operator S that does the star involution in a very concrete way, analogous to simply multiplying one matrix by another: taking A --> SA, where SA turns out to have the same effect as A*.
And then Tomita analyzes S into two factors, one of which is self-adjoint. THIS IS A WAY OF TASTING THE ESSENTIAL FLAVOR OF TIME-REVERSAL. Squeezing the juice out of time-reversal.
That is the idea Baez is trying to plant in the reader's mind by making that innocent-sounding observation. It is really important. The Tomita flow is based on that self-adjoint factor of S. In the usual notation this factor is called uppercase Delta, Δ.
This is vague and handwavy. I'll try to say it better later today.
Here's a WikiP about an operation on matrices that is analogous to conjugate of complex numbers:
You may be thoroughly familiar with this already but I'll try to supply detail for other readers who might not be.
|Dec5-12, 03:45 PM||#109|
For math buffs fond of rigorous proof, the best paper I've found online about Tomita flow is this 1977 one by Marc Rieffel and Alfons van Daele
Only selected parts of it are directly about Tomita flow; it delves into a bunch of related matters. The whole article is some 34 pages long.
Pages 187-221 of an issue of Pacific Journal of Mathematics.
It would be nice if someone could point us to a more concise, say a 10 page, treatment of just the T-theorem. Or could extract the essential line of reasoning from this paper.
There is a short explanatory article commissioned by Elsevier's ENCYCLOPEDIA OF MATHEMATICAL PHYSICS, written by Stephen Summers.
But it does not give proofs of the hard parts.
It seems that Tomita-Takesaki theory is deep and non-trivial. The general idea is easy to state and not difficult to grasp, but drilling down to logical bedrock takes effort. The original approach involved unbounded operators, so one had to wonder whether, and where, they were well-defined. Rieffel and van Daele work with bounded operators and take more steps---lots of lemmas.
There's a Master's Thesis by someone named Duvenhage at Pretoria that takes essentially the same approach as Rieffel van Daele but could be helpful because it puts in more background algebra and analysis.
To give an example of the kind of questions that come up, recall we have ([A], ρ), a *-algebra and a state---from which, by well-known means (the GNS construction), we get ([H], ψ), a Hilbert space with a cyclic and separating vector, which together represent the algebra and the state in a way familiar to physicists. Algebra elements A are represented as operators in customary fashion.
Then a new operator S is defined by S(Aψ) = A*ψ. How do we know this is well-defined? We are only told what S does to vectors of the form Aψ. And do we think of S as an operator on the Hilbert space or on the algebra? Both, but can this be done consistently?
Then this operator S is resolved into two factors: S = JΔ or, in other papers, S = JΔ^{1/2} (the conventions differ). How do we know we can do this? OK as operators on the Hilbert space. The first factor is conjugate-linear, a kind of flip or involution. It is its own inverse: J^2 = I. The second factor is positive and self-adjoint, as an operator on the Hilbert space. That means you can diagonalize it with positive real numbers down the diagonal, as learned in undergrad linear algebra class. And you can raise it to the (it)-th power to make Δ^{it}, which will be unitary.
Then we define the Tomita flow: α_t(A) = Δ^{it} A Δ^{-it}.
I guess that makes sense as operators on the Hilbert space, but how do we know that the flow actually stays in the original *-algebra?
How do we know that αt(A) is still in [A]?
This turns out to be a large part of the Tomita-Takesaki theorem: the statement that
Δ^{it} [A] Δ^{-it} = [A]
If you take the original star-algebra and advance each item in it by the same time-interval t, then what you get is the same star-algebra. The time flow just shifts or shuffles or permutes the items among themselves.
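In finite dimensions all of this can be checked by hand. Here is a toy numpy sketch of my own (the standard finite-dimensional illustration: the algebra of 2x2 matrices with a made-up faithful state; the real theorem of course concerns von Neumann algebras, where none of this is so easy):

```python
import numpy as np

# Toy Tomita setup: algebra [A] = 2x2 matrices acting on the left of the
# GNS space C^2 (x) C^2; state given by a made-up density matrix diag(d).
n = 2
d = np.array([0.8, 0.2])                 # faithful state: positive, trace 1
sqrt_rho = np.diag(np.sqrt(d))
I = np.eye(n)
vec = lambda X: X.reshape(-1)            # row-major vec: vec(AXB) = (A kron B^T) vec(X)

# GNS cyclic vector Omega = vec(rho^{1/2}); the algebra acts as A (x) I.
Omega = vec(sqrt_rho)

# Modular operator Delta vec(X) = vec(rho X rho^{-1}); diagonal here.
Delta_diag = np.kron(d, 1.0 / d)

def J(v):                                # antilinear flip: vec(X) -> vec(X*)
    return vec(v.reshape(n, n).conj().T)

def S(v):                                # the "swap" S = J Delta^{1/2}
    return J(np.sqrt(Delta_diag) * v)

rng = np.random.default_rng(1)
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
AOmega = np.kron(A, I) @ Omega

# S does the star involution concretely: S(A Omega) = A* Omega, and S^2 = 1.
assert np.allclose(S(AOmega), np.kron(A.conj().T, I) @ Omega)
assert np.allclose(S(S(AOmega)), AOmega)

# The flow Delta^{it} (A kron I) Delta^{-it} stays inside the algebra:
# it equals (rho^{it} A rho^{-it}) kron I, i.e. "stirs without splashing".
t = 0.7
U = np.diag(Delta_diag ** (1j * t))      # Delta^{it}, unitary
flowed = U @ np.kron(A, I) @ U.conj().T
expected = np.kron(np.diag(d ** (1j * t)) @ A @ np.diag(d ** (-1j * t)), I)
assert np.allclose(flowed, expected)
```

In this toy the Tomita flow is just α_t(A) = ρ^{it} A ρ^{-it}, which makes the "stays in the algebra" statement visible by inspection.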
The fame of Tomita rests on the fact that he was able to show this, not all the stuff leading up to it, but this. So if you look at the kind of tutorial paper by Summers http://arxiv.org/abs/math-ph/0511034
it is precisely this which you see as "Theorem 1.1" on page 2.
This, and also a seemingly inconsequential fact about J. Namely that if you apply J front and back to every item in [A] it picks out for you all the items that commute with everything in [A], the so-called "commutant", customarily denoted with a prime: in this case [A]', with J[A]J = [A]'. I have seen mathematicians make nimble use of this fact but its significance is not obvious, so I think of the content of "Theorem 1.1" as primarily
Δ^{it} [A] Δ^{-it} = [A]
Delta, when turned into a unitary operator, stirs the pot without splashing any of the soup out.
I suspect that this Delta, which is the positive real heart of a "swap" or "reversal" operator, will eventually become part of our language because it encapsulates the intrinsic TIME inherent in a world (of observables) and a state (what we think we know about that world). And so whatever this Delta is eventually called, it will probably settle into our collective awareness.
BTW the Princeton Companion to Mathematics (page 517) points out that Δ = S*S which makes excellent sense and for economy of notation they don't bother to introduce the symbol Δ. They just use S*S, the product of the "swap" S with its adjoint S*.
Minoru Tomita's work went unpublished for several years until discovered and made more presentable by Takesaki, whose name can be remembered by resolving it into "take saki".
Alain Connes, in a 2010 interview, says "I am too young to have met von Neumann, but I was much more influenced at a personal level by the Japanese: Tomita and also Takesaki.”
The interviewer adds: "Minoru Tomita (1924) is a Japanese mathematician who became deaf at the age of two and, according to Connes, had a mysterious and extremely original personality. His work on operator algebras in 1967 was subsequently refined and extended by Masamichi Takesaki and is known as Tomita–Takesaki Theory..."
|Dec5-12, 09:15 PM||#110|
Since we have this concept of a universal standard time it can be useful to compare other times with it. E.g. associated with an accelerated observer or with a location in the gravitational field.
Back in 1934 R.C. Tolman defined a local temperature of space associated with depth in a gravitational field---now known as the Tolman-Ehrenfest effect---and it turns out that this temperature is proportional to the RATIO of two rates: intrinsic Tomita time divided by the proper time of a local observer.
If ds is a local observer's proper time-interval and dτ is the corresponding interval of Tomita time, then the Tolman-Ehrenfest temperature is proportional to dτ/ds.
So the temp is a comparison of ticking rates. The local temperature is high if Tomita time is ticking a lot faster than the local observer's clock.
There is a connection here to the Hawking BH temp and the Unruh temp of an accelerated observer in Minkowski space. The details are interesting and tend to validate the thermal time (i.e. Tomita time) idea. I won't go into detail at this point (supposed to help with supper) but will simply link to a relevant article:
Thermal time and the Tolman-Ehrenfest effect: temperature as the "speed of time"
Carlo Rovelli, Matteo Smerlak
(Last revised 18 Jan 2011)
The notion of thermal time has been introduced as a possible basis for a fully general-relativistic thermodynamics. Here we study this notion in the restricted context of stationary spacetimes. We show that the Tolman-Ehrenfest effect (in a stationary gravitational field, temperature is not constant in space at thermal equilibrium) can be derived very simply by applying the equivalence principle to a key property of thermal time: at equilibrium, temperature is the rate of thermal time with respect to proper time - the `speed of (thermal) time'. Unlike other published derivations of the Tolman-Ehrenfest relation, this one is free from any further dynamical assumption, thereby illustrating the physical import of the notion of thermal time.
btw the constant of proportionality is hbar over Boltzmann's k. If T is the Tolman temperature then
T = (ℏ/k) dτ/ds
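Since the temperature is a comparison of ticking rates, here is a small stdlib-Python sketch of the Tolman law in the familiar Schwarzschild exterior (my own toy numbers and units; in a static spacetime the local clock rate relative to the global time is sqrt(g_00), so T_local * sqrt(g_00) is constant):

```python
import math

# Tolman-Ehrenfest in a static spacetime: T_local * (local clock rate) is
# constant, so T_local = T_inf / sqrt(g_00). Schwarzschild exterior sketch,
# with made-up numbers (units where the Schwarzschild radius rs = 1).
rs = 1.0
T_inf = 2.7        # hypothetical temperature measured far from the mass

def tolman_temperature(r):
    """Local equilibrium temperature at radius r > rs."""
    clock_rate = math.sqrt(1.0 - rs / r)   # d(proper time)/d(global time)
    return T_inf / clock_rate              # slower local clock -> hotter

# The product T_local * sqrt(1 - rs/r) is the same at every depth...
for r in (1.1, 2.0, 10.0):
    assert abs(tolman_temperature(r) * math.sqrt(1.0 - rs / r) - T_inf) < 1e-12

# ...and far from the mass the local temperature approaches T_inf.
assert abs(tolman_temperature(1.0e9) - T_inf) < 1e-6
```

Deeper in the well the local clock ticks slower relative to the global time, so the local temperature is higher, exactly the "speed of time" reading of temperature in the Rovelli-Smerlak abstract below.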
|Dec6-12, 03:07 AM||#111|
Marcus: small queries. In one of your posts that I now can't find I'm sure you mentioned that self-adjoint matrices are the analogs of the set of real numbers. Is this (to me interesting) statement just common knowledge, or have you a reference for it? And do you have a pointer to the "Master's Thesis by someone named Duvenhage at Pretoria" that you mentioned in post #109?
|Dec6-12, 11:52 AM||#112|
The photo shows Hermite with a dour scowl (having drunk some bad wine, or found a mistake in a proof by one of his students). He was the thesis advisor of Henri Poincaré and Thomas Stieltjes.
The analogy is very nice. It is undergrad math, which is the longest-lasting and most beautiful kind of math. You have to know what a BASIS of a vector space is (a set of vectors in terms of which all the rest can be written as unique combinations). It is a CHOICE OF AXES, or a choice of framework. And a matrix is a way of describing a linear transformation by saying what it does to each member of some particular basis. I don't at the moment have an online undergrad linear algebra textbook link. There might be a Khan Academy treatment. Many years ago we used a book by Paul Halmos.
You can DIAGONALIZE a hermitian (self-adjoint) matrix by finding a new basis for the vector space in which the same linear map is expressed by a diagonal matrix. When you do this the numbers down the main diagonal (upper L to lower R) turn out ALL REAL.
A POSITIVE hermitian or self-adjoint matrix is one where the numbers down the diagonal turn out to be all positive real numbers. This means the linear map is just re-scaling along each of a fixed set of directions. No rotations, no funny business. Just expanding a bit in this direction and perhaps contracting a bit in that other.
There is a strong analogy between a matrix that is simply real numbers down the diagonal (and zero elsewhere) and the real numbers themselves. A self-adjoint matrix is like a bunch of real numbers applied in an assortment of specified directions. So it is the HIGHER DIMENSIONAL ANALOG of a real number.
The beautiful thing is that the DEFINING CONDITION A* = A of self-adjointness is also analogous to the defining condition of realness, which we can write as z* = z if you use * to mean the conjugate of a complex number (exchange i and -i; if z = x+iy then z* = x-iy).
The only way a complex number z can have z*=z is if the imaginary part y = 0.
Conjugation is flipping the complex number plane over keeping the real axis fixed, so the only way a number can have z*=z is if it is on the real axis.
The business of diagonalizing matrices---finding the right axis framework for a given linear map so that its matrix will be very simple---comes under the heading of the SPECTRAL THEOREM. The "spectrum" of an operator is the list of numbers down the diagonal when you put it in diagonal form. It's like analyzing some light into its different wavelengths with a prism. You really know the beast when you know that list of numbers. I think calling it the spectrum is metaphorical, a kind of 19th-century physicist's poetical flight of language, from a time when the most exciting thing physicists did was heat various chemical elements and separate out the colors of the light they gave off when hot. Determining the spectrum was the pinnacle act of analysis. We still have their word for the list of numbers down the diagonal.
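For readers who like to see it happen, a short numpy check of these claims (a random matrix of my own choosing):

```python
import numpy as np

# Build a self-adjoint matrix by symmetrizing a random complex one.
rng = np.random.default_rng(7)
B = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
A = B + B.conj().T                  # forces the defining condition A* = A

assert np.allclose(A, A.conj().T)   # the matrix analog of z* = z

# Diagonalize: the "list of numbers down the diagonal" is the spectrum,
# and for a self-adjoint matrix it comes out entirely real.
spectrum, axes = np.linalg.eigh(A)
assert spectrum.dtype.kind == 'f'   # eigh returns real eigenvalues

# A positive self-adjoint matrix (here A*A) has an all-positive spectrum:
# pure rescaling along its eigen-directions, no rotations, no funny business.
P = A.conj().T @ A
assert np.linalg.eigvalsh(P).min() >= -1e-9
```

The basis returned in `axes` is exactly the "choice of axes" in which the linear map is just real numbers down the diagonal.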
I can't recommend you go to Rocco Duvenhage's thesis. It is over a hundred pages and you can get lost if you don't already know roughly what you are looking for, but I will put the link just in case I'm wrong and it actually is helpful to you or someone else.
Quantum statistical mechanics, KMS states and Tomita-Takesaki theory
|Dec6-12, 11:23 PM||#113|
Paulibus and others: I hope the foregoing account of self-adjointness, the analogy with real numbers, and diagonalizing was not too elementary. I tend to want to cover a range of levels: the topic is interesting enough that I think people with all different backgrounds might want to read about it. So some posts in the thread can be at a basic level, others less basic. Here's a more advanced treatment which has the merit of being very concise. It is from The Princeton Companion to Mathematics, edited by Fields Medalist Tim Gowers, a good math source book. I found a passage treating Tomita-Takesaki theory, and transcribed a sample excerpt.
This is from page 517.
========quote Princeton Companion to Mathematics (2008)==========
Modular theory exploits a version of the GNS construction (section 1.4). Let M be a self-adjoint algebra of operators. A linear functional φ: M → C is called a state if it is positive in the sense that φ(T*T) ≥ 0 for every T in M (this terminology is derived from the connection described earlier between Hilbert space theory and quantum mechanics). For the purposes of modular theory we restrict attention to faithful states, those for which φ(T*T) = 0 implies T = 0. If φ is a state, then the formula
⟨T_1, T_2⟩ = φ(T_1* T_2)
defines an inner product on the vector space M. Applying the GNS procedure, we obtain a Hilbert space H_M. The first important fact about H_M is that every operator T in M determines an operator on H_M. Indeed a vector V in H_M is a limit V = lim_{n→∞} V_n of elements in M, and we can apply an operator T in M to the vector V using the formula
TV = lim_{n→∞} T V_n
where on the right-hand side we use multiplication in the algebra M. Because of this observation, we can think of M as an algebra of operators on whatever Hilbert space we began with.
Next, the adjoint operation equips the Hilbert space H_M with a natural "antilinear" operator
S: H_M → H_M by the formula [see footnote]
S(V) = V*.
Since U_g* = U_{g^{-1}} for the regular representations, this is indeed analogous to the operator S we encountered in our discussion of continuous groups. The important theorem of Minoru Tomita and Masamichi Takesaki asserts that, as long as the original state φ satisfies a continuity condition, the complex powers
U_t = (S*S)^{it}
have the property that
U_t M U_{-t} = M for all t.
The transformations of M given by the formula T → U_t T U_{-t} are called the modular automorphisms of M.
Alain Connes proved that they depend only in a rather inessential way on the original faithful state φ. To be precise, changing φ changes the modular automorphisms only by inner automorphisms, that is, transformations of the form T → U T U^{-1} where U is a unitary operator in M itself. The remarkable conclusion is that every von Neumann algebra M has a canonical one-parameter group of "outer automorphisms," which is determined by M alone and not by the state φ that is used to define it.
[footnote] The interpretation of this formula on the completion HM of M is a delicate matter.
I like their expression for Δ, namely S*S. It makes sense because we know JJ = I, so therefore
S*S = Δ^{1/2} J J Δ^{1/2} = Δ
|Dec7-12, 03:38 AM||#114|
Thanks for that full and clear reply, Marcus. My interest in the spectrum of a selfadjoint (Hermitian) matrix being regarded as the "higher dimensional analog of a real number" (as you put it) was provoked by Eugene Wigner's remark in his essay "The Unreasonable Effectiveness of Mathematics in the Natural Sciences" (Communications on Pure and Applied Mathematics, vol. 13, No. 1, February 1960):
“The miracle of the appropriateness of the language of mathematics for the formulation of the laws of physics is a wonderful gift which we neither understand nor deserve”.
I think Wigner overstated things a bit; the supportive match between maths and physics is perhaps a bit less than a miraculous, wonderful gift. Looking at mathematics from the outside, it seems likely to me that the set of real numbers lies close to the heart of much maths. For me an interesting aspect of the real numbers is that they are commutatively symmetric under arithmetic operations like addition and subtraction; it doesn’t matter where zero is located; numbers are like a line of labels that with impunity can be translated along its length.
At the heart of physics lies the symmetry of the dimensions we inhabit. As far as we know, physics is ruled by the same mathematical laws everywhere and everywhen. That’s why successful physics theories must be covariant. And why momentum and energy are conserved; because space and time have commuting translational symmetries.
It seems to me that physics (an evolving description of physical reality) and mathematics (an evolving universal language used by physicists and many others) are founded on similar symmetries. Perhaps the close match between them is quite mundane and may yet come to be better understood (counting numbers were probably an abstraction invented to quantify resources, like goats; real numbers and much else evolved from these humble roots). Even the need for a spectrum of numbers to statistically quantify observations on a quantum scale can be understood, up to the mysterious finiteness of h. Now it seems, from the work of folk like Barbour, Connes and Rovelli, that this statistical quantification promises a new understanding of time. Great stuff.
An understanding of space may take longer; for practical purposes, it’s what we can swing a cat in!
|Dec8-12, 11:56 AM||#115|
Paulibus, one thing your post reminds me of is that significant advances in physics have often been accompanied by maturing philosophical sophistication. E.g. in the early 20th century: the role of the observer and of measurement, no fixed prior geometry, nonexistence of the continuous trajectory, irreducible uncertainty. Taking certain philosophical (epistemic?) proposals seriously actually helped the physics develop in some cases.
So progress is not always "physics as usual". Sometimes a dialog with philosophy of science people is helpful.
I suspect we will be seeing a General Covariant Quantum theory of Time emerge along lines suggested in this thread. A world (*-algebra) of possible observations and a state (defined on it, giving correlations and expectation values) which expresses what we think the laws are, what we know from prior observations, and what we predict, deduce, or expect.
The laws and constants of physics are after all merely correlations among actual and possible measurements, involving--like everything else--uncertainty. They are "regularities" in the *-algebra (call it M for "measurements" if you like), and are embodied in the state functional, along with what we think has been observed. The state is an elementary mathematical object, just a positive linear functional ω: M → ℂ.
I suppose that this model (M, ω) will replace the model consisting of a space-time manifold with fixed geometry and fields defined on it, in part because the "block universe" picture has philosophical shortcomings: it is incompatible with quantum theory.
This last is the theme of a conference opening in Cape Town in a couple of days (10-14 December).
Main theme: ideas of time and challenges to block universe idea.
Abstracts of scheduled talks (scroll down to get to the abstracts)
BTW we should also keep an eye on tensorial group field theory ("TGFT"). I just watched the first 50 minutes of Sylvain Carrozza's PIRSA talk:
It was interesting. Also the last 9 minutes (64:00-73:00) where he gives conclusions, outlook, and answers questions. Most of the questions were from Dittrich and (someone I think was) Ben Geloun.
|Dec8-12, 12:27 PM||#116|
|Dec8-12, 01:38 PM||#117|
See also the thought experiment on page 12 of his earlier paper
You may have already looked at his "evolving/crystallizing" spacetime papers and would like to hear him present them in person.
One of the points Ellis makes is that as far as we know the future space-time geometry is in principle unpredictable. As un-predetermined as are the times of radioactive decay, which conventional QM tells us are not pre-determined. Therefore the conventional block universe, extending into future with a predetermined spacetime geometry, cannot exist.
I assume by "Avi" you mean Avshalom Elitzur, one of the other participants at this week's Cape Town Time conference.
|Dec8-12, 01:43 PM||#118|
|Dec8-12, 03:10 PM||#119|
(Greek roots: on- = being, reality; epistem- = knowledge)
Anyway Ruta you said you wished you could hear Ellis' talk about the evolving block. I don't especially go for Ellis' proposed solution, but I like the clear way he describes the problem. This 2008 essay for wide audience communicates really well, and other readers of thread might enjoy it. It got the FQXi second community prize, right after Rovelli's essay.
==quote Ellis page 2==
To motivate this, consider the following scenario: A massive object has rocket engines attached at each end to make it move either left or right. The engines are controlled by a computer that decides what firing intervals are utilised alternately by each engine, on the basis of a non-linear time dependent transformation of signals received from a detector measuring particle arrivals due to random decays of a radioactive element. These signals at each instant determine what actually happens from the set of all possible outcomes, thus determining the actual spacetime path of the object from the set of all possible paths (Figure 1). This outcome is not determined by initial data at any previous time, because of quantum uncertainty in the radioactive decays. As the objects are massive and hence cause spacetime curvature, the spacetime structure itself is undetermined until the object’s motion is determined in this way. Instant by instant, the spacetime structure changes from indeterminate (i.e. not yet determined out of all the possible options) to definite (i.e. determined by the specific physical processes outlined above). Thus a definite spacetime structure comes into being as time evolves. It is unknown and unpredictable before it is determined.
Something essentially equivalent has already occurred in the history of the universe. According to the standard inflationary model of the very early universe, we cannot predict the specific large-scale structure existing in the universe today from data at the start of the inflationary expansion epoch, because density inhomogeneities at later times have grown out of random quantum fluctuations in the effective scalar field that is dominant at very early times...
...It follows that the existence of our specific Galaxy, let alone the planet Earth, was not uniquely determined by initial data in the very early universe. The quantum fluctuations that are amplified to galactic scale are unpredictable in principle. Thus spacetime evolution is not predictable even in principle in physically realisable cases. The outcome is only determined as it happens.
An arxiv link to the same essay:
List of the 2008 time essay contest winners: