Did the Universe Begin? III: BGV Theorem

There is a theorem due to Borde, Guth, and Vilenkin which might be taken as evidence for a beginning of time.

Roughly speaking, this theorem says that in any expanding cosmology, spacetime has to be incomplete to the past.  In other words, the BGV theorem tells us that while there might be an "eternal inflation" scenario where inflation lasts forever to the future, inflation still has to have had some type of beginning in the past.  BGV show that "nearly all" geodesics hit some type of beginning of the spacetime, although there may be some which can be extended infinitely far back to the past.

If we assume that the universe was always expanding, so that the BGV theorem applies, then presumably there must have been some type of initial singularity.

The fine-print (some readers may wish to skip this section):
[BGV do not need to assume that the universe is homogeneous (the same everywhere on average) or isotropic (the same in each direction on average).  Although the universe does seem to be homogeneous and isotropic so far as we can tell, they don't use this assumption.

More precisely, let H be the Hubble constant which says how rapidly the universe is expanding.  In general this is not a fully coordinate-invariant notion, but BGV get around that by imagining a bunch of "comoving observers", one at each spatial position, and defining the Hubble constant by the rate at which these observers are expanding away from each other.  The comoving observers are assumed to follow the path of geodesics, i.e. paths through spacetime which are as straight as possible, that is without any acceleration.

Next, let us consider a different type of geodesic—the path taken by a lightray through spacetime.  If the average value H_\mathrm{avg} along some lightlike geodesic is positive, then BGV prove that it must reach a boundary of the expanding region in a finite amount of time.  In other words, these lightlike geodesics reach all the way back to some type of "beginning of time" (or at least the beginning of the expanding region of spacetime which we are considering).

We can also consider timelike geodesics, describing the motion of particles travelling at less than the speed of light.  For nearly all timelike geodesics, if H_\mathrm{avg} > 0 then that geodesic also begins at a beginning of time.  However, the theorem only applies to geodesics which are moving at a nonzero velocity with respect to the original comoving geodesics which we used to define H_\mathrm{avg}.  The original set of comoving observers is allowed to extend infinitely far back in time.

As an example of this, one can consider a spacetime metric of the following form:

ds^2 = dt^2 - a(t)^2 (dx^2 + dy^2 + dz^2).

If we set the "scale factor" to be exponentially inflating:

a(t) = e^{Ht},

then such a universe extends infinitely far to the past from the perspective of an observer who remains at a fixed value of (x,\,y,\,z).  But nevertheless, observers travelling at a nonzero velocity relative to those hit a beginning of time (or else exit the region of spacetime where this metric is valid).
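To make this concrete, here is a toy numerical check (my own illustration, not from the BGV paper).  A massive geodesic observer's momentum relative to the comoving observers redshifts as p(t) = p_0/a(t), so her proper time obeys d\tau/dt = m/\sqrt{m^2 + p_0^2 e^{-2Ht}}.  With illustrative units m = p_0 = H = 1, the proper time back to t = -\infty converges to a finite value, which is the past-incompleteness BGV describe:

```python
import numpy as np

def proper_time_back(T, n=400_001):
    """Proper time along a boosted timelike geodesic from t = -T to t = 0
    in the flat slicing a(t) = exp(H t), with toy units m = p_0 = H = 1.
    Integrates dtau/dt = 1 / sqrt(1 + exp(-2 t)) by the trapezoid rule."""
    t = np.linspace(-T, 0.0, n)
    integrand = 1.0 / np.sqrt(1.0 + np.exp(-2.0 * t))
    dt = t[1] - t[0]
    return dt * (integrand.sum() - 0.5 * (integrand[0] + integrand[-1]))

for T in (5.0, 10.0, 40.0):
    print(T, proper_time_back(T))
# The values converge to arcsinh(1) = ln(1 + sqrt(2)) ~ 0.8814: a *finite*
# amount of proper time, despite integrating over an infinite range of
# coordinate time.
```

(The exact answer follows from substituting u = e^{t}, which turns the integral into \int_0^1 du/\sqrt{1+u^2} = arcsinh(1).)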

Since the BGV theorem only refers to the average value of the expansion, it applies even to cosmologies which cyclically oscillate between expanding and contracting phases, so long as there is more expansion (during the expanding phases) than there is contraction (during the contracting phases).
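A quick way to see this is to average H = \dot{a}/a for a toy oscillating scale factor with net growth (again my own illustration, not from the paper): a(t) = e^{\epsilon t}(1 + A \sin \omega t) with 0 < A < 1.  The oscillatory piece averages to zero over whole cycles, leaving H_avg = \epsilon > 0 even though H itself is negative during part of each cycle:

```python
import numpy as np

# Toy values (illustrative, not physical): net growth rate eps, oscillation
# amplitude A and angular frequency omega, so the period is exactly 1.
eps, A, omega = 0.05, 0.5, 2.0 * np.pi

# a(t) = exp(eps*t) * (1 + A*sin(omega*t));  H = a'/a written analytically:
t = np.linspace(0.0, 10.0, 1_000_001)   # ten full oscillation cycles
H = eps + A * omega * np.cos(omega * t) / (1.0 + A * np.sin(omega * t))

dt = t[1] - t[0]
H_avg = dt * (H.sum() - 0.5 * (H[0] + H[-1])) / (t[-1] - t[0])

print(H.min())   # negative: the universe contracts during part of each cycle
print(H_avg)     # ~0.05 = eps: positive on average, so the theorem applies
```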

On the other hand, in certain cases even an expanding cosmology may have 0 average expansion, due to the fact that we are averaging over an infinite amount of time.  So the BGV theorem does not rule out e.g. a universe where the scale factor a(t) approaches some constant value in the distant past.]
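For example (another toy sketch of my own), take a(t) = 1 + e^{Ht} in units H = 1: the universe is expanding at all times, H(t) = e^t/(1+e^t) > 0, yet the average of H over (-T, 0) behaves like ln(2)/T and tends to zero as T \rightarrow \infty, so the BGV hypothesis fails:

```python
import numpy as np

def H_avg_back(T, n=400_001):
    """Average Hubble rate over [-T, 0] for a(t) = 1 + exp(t) (units H = 1).
    H(t) = a'/a = exp(t) / (1 + exp(t)) is positive at every time."""
    t = np.linspace(-T, 0.0, n)
    H = np.exp(t) / (1.0 + np.exp(t))
    dt = t[1] - t[0]
    return dt * (H.sum() - 0.5 * (H[0] + H[-1])) / T

for T in (10.0, 100.0, 1000.0):
    print(T, H_avg_back(T))   # positive, but tending to zero as T grows
```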
The fine print is now over.

All right, everyone who skipped the details section is back, yes?

The BGV theorem is sometimes referred to as a "singularity theorem", but it is not really very closely connected to the others, because it doesn't use an energy condition or any other substantive physical assumption.  It's really just a mathematical statement that all possible expanding geometries have this property of not being complete.

Carroll correctly observes that the BGV theorem relies on spacetime being classical:

So I’d like to talk about the Borde-Guth-Vilenkin theorem since Dr. Craig emphasizes it. The rough translation is that in some universes, not all, the space-time description that we have as a classical space-time breaks down at some point in the past. Where Dr. Craig says that the Borde-Guth-Vilenkin theorem implies the universe had a beginning, that is false. That is not what it says. What it says is that our ability to describe the universe classically, that is to say, not including the effects of quantum mechanics, gives out. That may be because there’s a beginning or it may be because the universe is eternal, either because the assumptions of the theorem were violated or because quantum mechanics becomes important.

It is quite true that the BGV theorem is proven only for classical metrics, although I see no particular reason to believe that its conclusion (if the universe is always expanding, then it had an edge) breaks down for quantum spacetimes.

However, Carroll's secondary point that the assumptions of the theorem might not hold seems even more devastating.  The theorem says that there must be a beginning only if the universe is always expanding.  So maybe have it contract first, and then expand.  That's an easy way around the BGV theorem, and (as Carroll points out) there are a number of models like that.  On this point I agree with Carroll that the BGV theorem is not by itself particularly strong evidence for a beginning.

About Aron Wall

I am a postdoctoral researcher studying quantum gravity and black hole thermodynamics at UC Santa Barbara. Before that, I studied the Great Books program at St. John's College, Santa Fe, and got my Ph.D. in physics from U Maryland.

26 Responses to Did the Universe Begin? III: BGV Theorem

  1. TY says:

    Aron:
    Interesting stuff and thank you.

    I have a few questions.
    1) Based on your understanding of all the cosmological models and hypotheses floating around (including yours), what is the view of most physicists on the question of whether the universe has a beginning and whether the Big Bang is the beginning of time? It seems logical to conclude that if most physicists accept the Big Bang as the standard cosmological theory, then most (must?) accept the notion of a universe with a beginning. But I could be wrong.
    2) In “Did the Universe Begin? II: Singularity Theorems” you mentioned "Generalized Second Law" “(GSL), which says that the Second Law of thermodynamics applies to black holes and similar types of horizons.” On the bouncing universe scenario, Stephen Barr writes in “Modern Physics and Ancient Faith”:
    “The Second Law of Thermodynamics suggests that the “entropy”, or the amount of disorder, of the universe will be greater with each successive bounce……Given this, it would seem unlikely that the universe has already undergone an infinite number of bounces in the past.”
    3) Is the thermodynamics argument of Barr about the bouncing scenario the same as yours? That is, the universe could not have gone through an infinite number of “Big Crunches” and “Big Bangs”, each bounce as fresh as the previous one, and still satisfy the Second Law of Thermodynamics. So there is a contradiction.

    I realise you’re a busy man and thanks for your time.

  2. Aron Wall says:

    The Big Bang Model really refers to the history of the universe after the Big Bang. I would guess most cosmologists would say they don't know for sure whether there really was a beginning, that it sort of looks like there was, but that quantum gravity effects might resolve the singularity. But we don't really do science using opinion polls, so I don't really know. I'm trying to summarize the evidence the best I can.

    I'll get to the argument from the Second Law of Thermodynamics in a later post. This is a separate argument from the Penrose Singularity Theorem. It turns out that you can recast both of these arguments in terms of the GSL, but even so they remain separate arguments. If you apply my singularity theorem to an initial singularity such as the Big Bang, you actually have to use the time-reverse of the GSL, so it's not the same argument. Hopefully I'll be able to explain the distinctions when it comes up.

  3. TY says:

    Aron:
    Thank you for the response.
    As a non-physicist, please explain in plain English how “quantum gravity effects might resolve the singularity.” I am assuming you mean that using Einstein’s equations (with no “quantum gravity effects”), at t=0 the result is that both the density of matter and the curvature of spacetime become infinite. But with the quantum gravity effects, this initial singularity vanishes, so that we cannot speak of a beginning of the universe and therefore a t=0.
    I would be keen to know about the plausibility of these alternative models that attempt to explain away a beginning; what simplifying assumptions are used. I am not suggesting that the motivation is anti-Biblical, though (from experience) I am not persuaded that personal beliefs and biases have no influence on how one goes about building a model.
    If you would be taking up this topic in your upcoming Posts, I can wait.

  4. David J. Marcus says:

    A nit... please fix the typo in the sentence "(or else exist the region of spacetime where this metric is valid)."

    I think you mean 'exit', not 'exist'.

    Great blog.

    -David

    [Fixed. Thanks--AW]

  5. Aron Wall says:

    TY,
    As you say, Einstein's equations would predict that the curvature of spacetime goes to infinity as we approach the Big Bang. But we also know that if the curvature gets large enough (certainly once the distance scale of curvature is as tiny as the Planck scale) then the classical Einstein equation doesn't apply.

    You could tell several different stories about the early universe which might go under the general heading "quantum gravity effects resolving the singularity". For example, it could be that once the curvatures get large enough, there is a "bounce" connecting our expanding universe to a prior contracting phase, which is also approximately classical. Or it could be that time reaches an end, but the curvatures don't go to infinity. Or it could be that once you go back early enough, there is a "region" of spacetime where ordinary classical concepts of spacetime break down and something else replaces them. This "something else" might or might not be eternal to the past (if the question even has a meaning once the classical concept of time goes away...). All of this is just unpacking the plain English phrase "We don't know for sure what happens." (Although, in this series I am trying to discuss the limited evidence we have, such as it is.)

  6. Jack Spell says:

    Aron,

    I was wondering if you would be so kind as to offer a few points of clarification. Is it possible that you might be mistaken to write,

    Carroll correctly observes that the BGV theorem relies on spacetime being classical . . . It is quite true that the BGV theorem is proven only for classical metrics, although I see no particular reason to believe that its conclusion (if the universe is always expanding, then it had an edge) breaks down for quantum spacetimes.

    The reason I ask is, based on everything that I have read, it seems to me that the BGV theorem does not rely on spacetime being classical:

    What can lie beyond this boundary? Several possibilities have been discussed, one being that the boundary of the inflating region corresponds to the beginning of the Universe in a quantum nucleation event. The boundary is then a closed spacelike hypersurface which can be determined from the appropriate instanton.

    Whatever the possibilities for the boundary, it is clear that unless the averaged expansion condition can somehow be avoided for all past-directed geodesics, inflation alone is not sufficient to provide a complete description of the Universe, and some new physics is necessary in order to determine the correct conditions at the boundary. This is the chief result of our paper. The result depends on just one assumption: the Hubble parameter H has a positive value when averaged over the affine parameter of a past-directed null or noncomoving timelike geodesic. (Borde, A., Guth, A., and Vilenkin, A. (2003) Inflationary spacetimes are not past-complete. Physical Review Letters 90, 151301; emphasis added)

    Vilenkin would later seem to state this point even more explicitly:

    A remarkable thing about this theorem is its sweeping generality. We made no assumptions about the material content of the universe. We did not even assume that gravity is described by Einstein’s equations. So, if Einstein’s gravity requires some modification, our conclusion will still hold. The only assumption that we made was that the expansion rate of the universe never gets below some nonzero value, no matter how small. This assumption should certainly be satisfied in the inflating false vacuum. The conclusion is that past-eternal inflation without a beginning is impossible. (Vilenkin, A. (2007) Many Worlds in One: The Search for Other Universes, p.175; emphasis added)

    Unless I’m missing something, Vilenkin seems to unquestionably affirm the applicability of BGV to the quantum regime—if we include the relevant data for the quantum regime in our calculation of Hav and still find that Hav>0, then we would know that our spacetime (i.e., the combination of classical and quantum) is necessarily past-incomplete.

    Moreover, this exact question came up in Dr. Craig’s debate with Prof. Lawrence Krauss. During his opening statement, Krauss produced a personal email from Vilenkin that read, “Note for example that the BGV theorem uses a classical picture of spacetime. In the regime where gravity becomes essentially quantum, we may not even know the right questions to ask.” Given the apparent conflict with his previous proclamations, Craig personally wrote (http://www.reasonablefaith.org/honesty-transparency-full-disclosure-and-bgv-theorem) to Vilenkin for clarification:

    In that vein, I do have a question about your statement: “the BGV theorem uses a classical picture of spacetime. In the regime where gravity becomes essentially quantum, we may not even know the right questions to ask.” Elsewhere you’ve written:

    “A remarkable thing about this theorem is its sweeping generality. . . . We did not even assume that gravity is described by Einstein’s equations. So, if Einstein’s gravity requires some modification, our conclusion will still hold. The only assumption that we made was that the expansion rate of the universe never gets below some nonzero value” [Vilenkin, 2006, p. 175].

    How are these statements compatible? The 2006 statement sounds as if a quantum theory of gravitation would not undo the theorem. But the letter to Krauss sounds as if we are awash in uncertainty.

    Vilenkin answered:

    The question of whether or not the universe had a beginning assumes a classical spacetime, in which the notions of time and causality can be defined. On very small time and length scales, quantum fluctuations in the structure of spacetime could be so large that these classical concepts become totally inapplicable. Then we do not really have a language to describe what is happening, because all our physics concepts are deeply rooted in the concepts of space and time. This is what I mean when I say that we do not even know what the right questions are.

    But if the fluctuations are not so wild as to invalidate classical spacetime, the BGV theorem is immune to any possible modifications of Einstein's equations which may be caused by quantum effects.

    Nevertheless, I fully acknowledge my limitations regarding subject-matter expertise here and am well-aware of the possibility that I may have misunderstood something. If so, I would appreciate correction.

    However, if I am correct in my interpretation of Vilenkin’s assessment of BGV, then it would seem to follow that the theorem is strong evidence for an absolute beginning: if classical spacetime reaches a boundary in the finite past because Hav > 0 over its history, then the only way to restore past completeness is to find some plausible mechanism for whatever is on the other side of that boundary to exist eternally in a state of Hav≤0; but my novitiate survey of the literature found no such mechanism. Even leaving aside any philosophical objections that might arise, there still do not seem to be any viable cosmogonic models that do not fail on scientific grounds.

    In “emergent universe” scenarios, one of the extreme difficulties (in addition to the fatal philosophical problem of internal incoherence) is crafting a system that somehow either (1) oscillates “forever” or (2) remains static “forever,” and then proceeds to evolve into the expansion phase. As it was noted here (http://arxiv.org/abs/1306.3232):

    Although we have analyzed only one version of the Emergent Universe, we would argue that our analysis is pointing to a more general problem: it is very difficult to devise a system – especially a quantum one – that does nothing “forever,” then evolves. A truly stationary or periodic quantum state, which would last forever, would never evolve, whereas one with any instability will not endure for an indefinite time. Moreover, the tendency of quantum effects to destabilize even classically stable configurations suggests that even if an emergent model were possible, it would have to be posed at the quantum (and quantum-gravitational) level, largely undermining the motivation to provide an early state in which quantum gravitational effects are not crucial.

    This paper (http://arxiv.org/abs/1204.4658) agrees: even if a mechanism could be found to facilitate the transition from the asymptotically static/periodic state, still, “there do not seem to any matter sources that admit solutions that are immune to collapse.” And one final affirmation (http://arxiv.org/abs/1110.4096):

    Our analysis in this paper indicates that oscillating and static models of the universe, even though they may be perturbatively stable, are generically unstable with respect to quantum collapse. Here we focused on the simple harmonic universe with matter content described by Eq. (8), but we expect our conclusions to apply to a wider class of models. In particular, one could investigate the quantum stability of braneworld, loop quantum cosmology, and other models.

    Another seemingly possible way to evade BGV is via a cyclic model. However, as you touched on, BGV is applicable here provided we average the scale factor over each individual cycle. And if we were to do so, we would surely find Hav>0 due to the effects of entropy increase and accumulation in each successive cycle.

    Finally, we come to models positing an infinite contraction. I would like to ask about the part where you say the following:

    However, Carroll's secondary point that the assumptions of the theorem might not hold seems even more devastating. It says that there must be a beginning if the universe is always expanding. So maybe have it contract first, and then expand. That's an easy way around the BGV theorem, and (as Carroll points out) there are a number of models like that.

    Sure, it would seem that a model that posits a prior contracting phase certainly evades BGV—the time coordinate τ will vary monotonically from −∞ to +∞ as spacetime contracts for all τ < 0 and expands for all τ > 0. But how does that entail that models of this sort are in fact viable options? In a personal communication with James Sinclair, George Ellis identifies two problems that plague these models:

    The problems are related: first, initial conditions have to be set in an extremely special way at the start of the collapse phase in order that it is a Robertson-Walker universe collapsing; and these conditions have to be set in an acausal way (in the infinite past). It is possible, but a great deal of inexplicable fine tuning is taking place: how does the matter in widely separated causally disconnected places at the start of the universe know how to correlate its motions (and densities) so that they will come together correctly in a spatially homogeneous way in the future??

    Secondly, if one gets that right, the collapse phase is unstable, with perturbations increasing rapidly, so only a very fine-tuned collapse phase remains close to Robertson-Walker even if it started off so, and will be able to turn around as a whole (in general many black holes will form locally and collapse to a singularity).

    So, yes, it is possible, but who focused the collapse so well that it turns around nicely? (William Lane Craig and James Sinclair, The Kalam Cosmological Argument, in The Blackwell Companion to Natural Theology, ed. Wm. L. Craig and J. P. Moreland (2009), p. 143)

    But let’s leave all of that aside. As it turns out (if I understand correctly), the same incompleteness theorem that proved past incompleteness in a spacetime with H_av>0, will also prove future incompleteness (http://arxiv.org/abs/1403.1599) in a spacetime with H_av<0:

    Our results in this paper suggest that expansion should on average prevail, at least in the models that we considered here. But suppose for a moment that there is some more general bouncing multiverse model in which H_av<0. The spacetime in such a model might be past geodesically complete, but then it would have a different problem, of a rather unusual kind. If H_av<0, then the same argument that proved incompleteness to the past in Ref. [6] would now prove incompleteness to the future. This would be a somewhat bizarre and perplexing conclusion. Future-incomplete geodesics would indicate that the spacetime can be extended beyond what appears to be its future boundary. But the evolution of our model from given initial conditions is completely specified (at least in a statistical sense) by the field equations, complemented by a semiclassical model of bubble nucleation. Since future incompleteness of inflating spacetimes appears rather unlikely, the above argument suggests that our past incompleteness result is more general, extending well beyond the patchwork models for which we proved it here.

    Isn’t it true that the reason BGV proves past incompleteness in a spacetime with Hav>0 is correctly explained by the following: if we trace the worldline of a geodesic observer (timelike or null trajectory) as she moves through an expanding spacetime, we will find that the observer slows down relative to a congruence of comoving test particles. Thus, if we follow the observer backwards, we will see that she speeds up relative to the comoving test particles; and the calculation shows that she will reach the speed of light in a finite proper time. Therefore, the geodesic is incomplete to the past.

    Consequently, would not the relative velocity between a congruence of comoving test particles and a geodesic observer moving through a contracting spacetime from t = −∞ be measured as increasing? If so, then won’t the observer reach the speed of light in a finite proper time (finite affine length, in the null case)? If she will, then doesn’t that show that she will never make it to the bouncing phase at t = 0? I think that it does, which would therefore entail that, necessarily, no such model positing an infinite contraction could ever make it to the bounce phase and subsequent expansion. Therefore, as it turns out, models of this sort don’t in fact escape BGV.

  7. Aron Wall says:

    Jack,

    You make some good comments about difficulties with many of the pre-Big Bang models. However, in the case of Aguirre-Gratton, the arrow of time reverses prior to the bounce. This makes Ellis' comments about the need to fine-tune the conditions at t = -\infty inapplicable, because the "initial conditions" are instead specified at t = 0, the moment of the bounce. Although this approach raises some philosophical issues of its own, which I plan to discuss later.

    Regarding inconsistency with BGV, you write:

    Isn’t it true that the reason BGV proves past incompleteness in a spacetime with Hav>0 is correctly explained by the following: if we trace the worldline of a geodesic observer (timelike or null trajectory) as she moves through an expanding spacetime, we will find that the observer slows down relative to a congruence of comoving test particles. Thus, if we follow the observer backwards, we will see that she speeds up relative to the comoving test particles; and the calculation shows that she will reach the speed of light in a finite proper time. Therefore, the geodesic is incomplete to the past.

    That paragraph seems correct to me, but I don't think the next one is right:

    Consequently, would not the relative velocity between a congruence of comoving test particles and a geodesic observer moving through a contracting spacetime from t = −∞ be measured as increasing? If so, then won’t the observer reach the speed of light in a finite proper time (finite affine length, in the null case)? If she will, then doesn’t that show that she will never make it to the bouncing phase at t = 0? I think that it does, which would therefore entail that, necessarily, no such model positing an infinite contraction could ever make it to the bounce phase and subsequent expansion. Therefore, as it turns out, models of this sort don’t in fact escape BGV.

    Yes, the relative velocity is increasing during the contracting phase, but starting from any specific time t < 0, there is only a finite amount of time before the bounce, so no conflict with the BGV theorem. The infinite length of time contracting in the past can't conflict with BGV any more than an infinite amount of time expanding to the future can do so.

    If you consider the case of de Sitter space (which contracts and then expands), you can work out explicitly that there is no contradiction with BGV. (As you get farther and farther away from the bounce, the relative velocity to the comoving particles gets smaller and smaller, and thus you have more and more time before the BGV theorem causes trouble.)

  8. Jack Spell says:

    Aron,

    Thanks for taking the time to respond to my post. I am aware of the fact that the Aguirre-Gratton model evades BGV due to the highly speculative arrow of time reversal. Additionally I affirm that this model is but one of many that fail to satisfy the only condition assumed by BGV, and thus can be said to evade the theorem. But as I say, a model’s mere avoidance of the condition Hav > 0 is far from a demonstration of its plausibility. And in the AG model what we have is an approach that aims to simply transfer the problem of enforcing low-entropy boundary conditions to the bounce hypersurface, rather than at t=−∞. Far from resolving all the difficulties, this approach actually creates many more (which you touch on in your wonderful paper http://iopscience.iop.org/0264-9381/30/16/165003).

    Let’s see if I can sufficiently articulate my contention that the BGV seems to render models positing an infinite contraction geodesically past (future?) incomplete. You’ll recall that I took a moment to briefly outline just how exactly BGV proves kinematic incompleteness for Hav > 0 spacetimes: given that the relative velocity between a geodesic observer and an expanding congruence of comoving test particles decreases over time, it follows that the velocity will increase as we rewind the time while tracing the observer's worldline to the past; and we reach the speed of light in a finite proper time.

    With that in mind, my point is simply this: in an assessment of an expanding spacetime, the act of “rewind[ing] the time while tracing the observer's worldline to the past” produces conditions identical to what we find for t < 0 in infinitely-contracting spacetimes—namely, the relative motion between each member of a congruence of comoving test particles is of an approaching sort, unlike the recessionary motion found in expanding spacetimes. Now, you agree with me that “the relative velocity is increasing during the contracting phase.” But tell me this: if BGV proves that the observer will reach the speed of light in a finite proper time, then, given the infinite duration of time from t = -∞ to t = 0, shouldn't the observer reach the speed of light short of arriving at t = 0? How could she not? No matter how long BGV says that proper time will be, it is still finite. So an infinite amount of time would surely be sufficient for arriving at the speed of light.

    You maintain that, “starting from any specific time t < 0, there is only a finite amount of time before the bounce, so no conflict with the BGV theorem.” I wholeheartedly agree—for any arbitrary point before t = 0 there is only a finite duration until the bounce is reached. But in order to have geodesic completeness, the worldline must necessarily extend all the way back to t = -∞, not just to some arbitrary point t < 0. Therefore, the observer will be measuring the relative velocity between herself and the contracting congruence of comoving test particles for an infinite amount of time. Therefore, given that BGV proves she will reach the speed of light in some finite amount of time, the infinite amount of time she spends in the contracting phase prior to t = 0 is sufficient to necessitate geodesic incompleteness.

  9. Ron Cram says:

    Hi Aron,
    Thank you for this post and your helpful discussion with Jack Spell. I have a question regarding the Aguirre model. In Alan Guth's paper "Eternal inflation and its implications" ( http://arxiv.org/pdf/hep-th/0702178.pdf?origin=publication_detail ), Guth discusses the Aguirre model and puts it in the category of not being reasonable or plausible. I assumed it was because the model required a reversal in the arrow of time. Can you discuss the Guth paper in this context and do you agree with Guth that the Aguirre model is not reasonable or plausible?

  10. Aron Wall says:

    Welcome Ron,

    I looked at the Guth paper and I didn't see where he calls the Aguirre-Gratton model not "reasonable" or "plausible". He says in the abstract that under reasonable assumptions, you can prove that inflation had a beginning. He also says on page 14 that under "plausible" assumptions, the universe is finite to the past (due to the BGV theorem). But I don't think that Guth is saying the Aguirre-Gratton model is unreasonable or implausible. "Plausible" just means reasonably likely, and I feel that to say that some idea is plausible or reasonable is a relatively weak statement, not nearly as strong as saying that the opposite idea is implausible or unreasonable. That's just how I parse the English terms; other people may feel differently about it. Apparently nowadays Guth thinks the universe probably didn't have a beginning (as mentioned by Carroll in the debate).

    Anyway, I'll be giving my own opinion of Aguirre-Gratton and related models in future posts.

  11. Aron Wall says:

    Jack writes:

    But tell me this: if BGV proves that the observer will reach the speed of light in a finite proper time, then, given the infinite duration of time from t = -∞ to t = 0, shouldn't the observer reach the speed of light short of arriving at t = 0? How could she not? No matter how long BGV says that proper time will be, it is still finite. So an infinite amount of time would surely be sufficient for arriving at the speed of light.

    The BGV theorem says that if the observer starts at a particular time with a finite nonzero velocity relative to the comoving observers, and then waits a long enough time, she'll reach the speed of light in a finite amount of time. The smaller the initial velocity, the greater the time before there is a problem. But, in the limit that t \to -\infty, her starting velocity also gets smaller and smaller. As you go towards t = -\infty, her starting velocity gets infinitesimally small, and the time needed before there is a conflict gets infinitely long. So there is no conflict with the BGV theorem.

    If you do the calculation explicitly in the case of de Sitter space, you will see that what I am saying is true.
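    [A quick numerical sketch of that de Sitter calculation, as my own illustration with assumed values H = 1 and initial peculiar momentum p0 = 0.5: in the flat slicing a(t) = e^{Ht}, peculiar momentum redshifts as p ∝ 1/a, so tracing the geodesic toward the past the speed approaches light speed only as t → -∞, and yet the proper time back to t = -∞ is finite, just as BGV say.]

```python
import math

H = 1.0   # Hubble rate (illustrative choice of units)
p0 = 0.5  # assumed peculiar momentum (in units of mc) at t = 0

def v(t):
    """Speed of a non-comoving geodesic observer at coordinate time t.
    Peculiar momentum redshifts as p = p0 * e^{-Ht}, and v = p/sqrt(1+p^2)."""
    p = p0 * math.exp(-H * t)
    return p / math.sqrt(1.0 + p * p)

# Toward the past, the speed approaches (but never equals) light speed:
for t in [0.0, -5.0, -10.0]:
    print(t, v(t))

def proper_time_back(t_far, n=200_000):
    """Proper time elapsed between t_far and t = 0 along the geodesic:
    dtau = dt/gamma = dt/sqrt(1 + p0**2 * e^{-2Ht}), via the midpoint rule."""
    total, dt = 0.0, -t_far / n
    for i in range(n):
        t = t_far + (i + 0.5) * dt
        total += dt / math.sqrt(1.0 + p0**2 * math.exp(-2.0 * H * t))
    return total

# Converges (analytically to ln((1 + sqrt(1 + p0**2))/p0)/H) as t_far -> -infinity:
print(proper_time_back(-20.0))
```

    [So the observer hits v = 1 only "at" t = -∞, which she reaches after only a finite proper time; there is no finite t at which a contradiction occurs.]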

  12. Jack Spell says:

    Aron,

    Thanks again for finding the time to respond; I hope you are able to do so further when you return from France. Unfortunately, though, I'm sorry to say I still cannot agree that there is no conflict between BGV and spacetimes positing an infinite contraction. I do want to be clear about something, though: I am well aware that all of this is a little above my pay grade; thus my disagreement is probably due to ignorance of some fundamental point on my part, and I would greatly appreciate any additional expertise that you might provide to help me gain a proper understanding.

    Before engaging in another endeavor to articulate why I believe BGV to conflict with spacetimes where H_\mathrm{av} < 0, I want to restate what I take to be common ground. In any spacetime where H_\mathrm{av} > 0, BGV proves that there will be causal geodesics that, when extended to the past of an arbitrary point, reach the boundary of the inflating region of spacetime in a finite proper time \tau (finite affine length, in the null case).
    The measure of temporal duration t_0 \rightarrow -\infty is a quantity that is actually infinite (\aleph_0) rather than potentially infinite (\infty).
    If we agree on the veracity of those three statements, then I want us to keep them in mind as we turn to what was said in your last comment. I'll break it down into two parts:

    The BGV theorem says that if the observer starts at a particular time with a finite nonzero velocity relative to the comoving observers, and then waits a long enough time, she'll reach the speed of light in a finite amount of time. . . .

    That seems to be in accord with what was implicit in (2): if the velocity of a geodesic observer \mathcal{O} (relative to comoving observers in an expanding congruence) in an inertial reference frame is measured at an arbitrary time t to be any finite nonzero value, then she will necessarily reach the speed of light at some later time t^\prime. But in a bouncing model (if a singularity at the bounce is to be avoided), the time coordinate \tau will run monotonically from -\infty \rightarrow +\infty as spacetime contracts during \tau < 0 and expands during \tau > 0. In other words, the limit would not be t \rightarrow -\infty because \mathcal{O} wouldn't approach -\infty; rather, it seems to me t = -\infty \rightarrow t_0 would be the limit because \mathcal{O} would slowly approach t_0.

    Thus, if what I have argued above is correct (and it very well may not be), then the implications are unmistakable: as long as \mathcal{O}:

    is a non-comoving geodesic observer;
    is in an inertial reference frame;
    is moving from t = -\infty \rightarrow t_0;
    is tracing a worldline through a contracting spacetime where H_\mathrm{av} < 0;
    it therefore follows that the relative velocity of \mathcal{O} will get faster and faster as she approaches the bounce at t_0. Moreover, since we know that she will reach the speed of light in a finite proper time \tau, coupled with the fact that the interval t = -\infty \rightarrow t_0 is infinite, we can be sure that she will reach the speed of light well before ever making it to the bounce, and therefore cannot be geodesically complete.

  13. Aron Wall says:

    Jack,
    Regarding your summary of the argument:
    1. Correct
    2. I would add that BGV prove that this very geodesic is incomplete, assuming it is not comoving with the original set of geodesics used to define the expansion of the universe.
    3. Normally physicists leave talk of "actual" vs. "potential" infinite to the philosophers; in any case I don't see how it is relevant to what BGV showed.

    Regarding your continued argument that the BGV rules out AG, I think that in your last sentence there is the same problem as before. Let me make an analogy this time.

    The following premises are all true:
    I. If a finite positive quantity doubles in time every second, then it becomes greater than 10 in a finite time
    II. The quantity 2^t is finite and positive for every finite value of t.
    III. The quantity 2^t doubles in time at each elapsed second (t \to t + 1).
    IV. The function 2^t is defined for arbitrarily negative times, t \to -\infty.

    However, the following argument based on these premises is not:
    V. Since the amount of time from t = -\infty to t = 0 is infinite, it follows that 2^t will become greater than 10 some finite time after t = -\infty. This proves that 2^t > 10 for values of t well before t = 0, which is a contradiction. Therefore, the function 2^t is impossible.

    The mistake in the argument is a change in the order of limits. At any finite time, the amount of time needed is finite, but one cannot conclude that at t = -\infty, the amount of time needed is finite, because 2^t is infinitesimal at t = -\infty. The situation with BGV is exactly like this.
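    [The order-of-limits point can be made concrete with a few lines of arithmetic; this is my own illustration, not anything in BGV. The waiting time for 2^t to exceed 10 is finite for every finite start time t0, but it diverges as t0 → -∞, so no contradiction ever arises at any finite time:]

```python
import math

def wait_to_exceed(threshold, t0):
    """Waiting time w such that 2**(t0 + w) = threshold,
    i.e. w = log2(threshold) - t0."""
    return math.log2(threshold) - t0

# Finite for every finite start time t0, but divergent as t0 -> -infinity:
for t0 in [0, -10, -100, -1000]:
    print(t0, wait_to_exceed(10, t0))
```

    [Exactly as in the BGV case: "finite waiting time from every finite start" does not imply "finite waiting time from t = -∞".]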

    Again I say, your extension of BGV cannot possibly be correct, because if it were correct, it would rule out de Sitter space, but de Sitter space is a perfectly consistent spacetime metric.

    This is now my third comment making these same points. If you still do not agree, I suggest you go figure out how to actually do the calculation in the case of de Sitter space. You will see that there is no contradiction.

  14. Jack Spell says:

    Aron,

    I appreciate you taking the time necessary for your three comments. Still, I do not agree with them. I do want to thank you, though, for your patience through all of my questioning. Though your last reply seems to indicate that this discussion has ceased to warrant your interest, I would still like to make this effort to provide clarification for several misunderstandings.

    It’s unfortunate that you hastily dismissed my inquiry regarding infinities as irrelevant, when this seems to me to be not only relevant, but also at the very heart of this disagreement (as I shall try once more to show). In response to your points in the order they were stated:

    Regarding your summary of the argument:
    1. Correct
    2. I would add that BGV prove that this very geodesic is incomplete, assuming it is not comoving with the original set of geodesics used to define the expansion of the universe.
    3. Normally physicists leave talk of "actual" vs. "potential" infinite to the philosophers; in any case I don't see how it is relevant to what BGV showed.

    1. Finally, something we agree on! :)
    2. I’m fine with the addition.
    3. I suppose that I am to blame for your dismissive attitude here; I guess I should’ve elaborated a bit. Perhaps if I had done so you would’ve known that the relevant talking in this case is that which is done by the mathematicians.

    When I speak of a potential infinite, I am referring to the same thing (the lemniscate) that Cantor called the “variable finite” and denoted with the sign \infty. The role of this infinite is to serve as an ideal limit, and it is most certainly the infinite that you continue to use in this discussion.

    Contrast that with an actual infinite. This infinite, pronounced by Cantor to be the “true infinite,” is denoted by the symbol \aleph_{0} (aleph zero). This infinite represents the value that indicates the number of all the numbers in the series 1, 2, 3, . . . This is the infinite to which I continue to refer in this discussion.

    Just to be clear, an actual infinite is a collection of definite, distinct objects whose size is the same as that of the set of natural numbers. A potential infinite contains a number of members whose membership is not definite, but can be increased without limit. Thus, a potential infinite is more appropriately described as indefinite. The most crucial distinction that I am attempting to convey is that an actual infinite is a collection comprising a determinate whole that actually possesses an infinite number of members; a potential infinite never actually attains an infinite number of members, but does perpetually increase.

    With that distinction in mind, let’s talk about your analogy.
    I. No dispute here.
    II. Again, looks good.
    III. Despite what was asserted prior to the first premise, this premise is clearly false: the quantity 2^t does not double in time at each elapsed second unless the value of \mathrm{t} is both finite and positive. What you’ve done by plugging in a negative value, far from doubling the quantity each second, actually causes the quantity to be cut in half each second. Mathematically, this premise formulated as such helps shed light on the task of distinguishing a potential infinite from an actual infinite. Namely, while you can continue indefinitely to divide each successive quantity in half, the series of subintervals generated consequently is merely potentially infinite, in that infinity serves as a limit that one endlessly approaches but can never reach. Time, in much the same way as space, is infinitely divisible only in the sense that the divisions can proceed indefinitely, but time is never actually infinitely divided, in exactly the same way that one simply cannot arrive at a single point in space.
    IV. Okay.
    V. Even though I believe this analogy to be a very poor likeness of my position, I am optimistic that it can actually be used to show where our misunderstanding lies. In order to construe the analogy properly so that it parallels contracting spacetimes, let’s take a closer look at each premise:

    I. We don’t need to do anything to change this premise because it beautifully mirrors the implications of BGV: namely, the part where the relative velocities increase successively, and then the observer reaches the speed of light in a finite time.
    II. While this premise is true, I’ve already pointed out that if the time is to double each second then the value chosen must be positive. So let’s add in that property.
    III. If we make the necessary change called for in (II), then this premise will be correct.
    IV. Again, in order to mimic the relative velocities of the Observer and test particles the value chosen must be positive. Otherwise the velocity does not successively increase.
    V. And finally, in order to best understand my contention, you must evacuate the notion of limits and \mathrm{t} = 0 from your thought; just forget about them for a moment. Now, the analogy requires that the value be + \infty rather than - \infty for the premises all to be true. Thus, I would consequently reason as follows:

    1. If a finite positive quantity doubles every second, then it becomes greater than 10 in a finite duration of time.
    2. 2^t is a finite positive quantity that doubles every second if and only if \mathrm{t} is positive.
    3. \mathrm{t} is positive.
    4. Therefore, 2^t is a finite positive quantity that doubles every second. (MP, 2,3)
    5. Therefore, 2^t becomes greater than 10 in a finite duration of time. (MP, 1,4)

    Having concluded the truth of (5), ask yourself, if the proposition,

    6. A finite duration of time has elapsed.

    is also true, then how does the conclusion,

    7. 2^t > 10

    not follow necessarily?

    That is the crux of this whole thing, and it has nothing to do with the order of limits. It is simply my perception that, if there were an infinite contraction phase, it therefore follows that there has elapsed an actually infinite duration of time. And if an infinite amount of time is not enough for our Observer to hit lightspeed, nothing is. In any case, what do you understand the following to mean:

    Our results in this paper suggest that expansion should on average prevail, at least in the models that we considered here. But suppose for a moment that there is some more general bouncing multiverse model in which Hav < 0. The spacetime in such a model might be past geodesically complete, but then it would have a different problem, of a rather unusual kind. If Hav < 0, then the same argument that proved incompleteness to the past in Ref. [6] would now prove incompleteness to the future. This would be a somewhat bizarre and perplexing conclusion. Future-incomplete geodesics would indicate that the spacetime can be extended beyond what appears to be its future boundary. But the evolution of our model from given initial conditions is completely specified (at least in a statistical sense) by the field equations, complemented by a semiclassical model of bubble nucleation. Since future incompleteness of inflating spacetimes appears rather unlikely, the above argument suggests that our past incompleteness result is more general, extending well beyond the patchwork models for which we proved it here. (http://arxiv.org/abs/1403.1599)

    As far as de Sitter space goes, my contention wouldn’t rule out dS as a viable metric; it would rule it out as a viable eternal metric. You seem to think that it is eternal, but how do you understand:

    A simple example is a ‘comoving’ geodesic x = const in de Sitter space with flat spatial slicing,

    ds^2 = dt^2 - e^{2Ht} dx^2

    Observers evolving along such geodesics will see inflation continue from the infinite past. This is true, but all other past-directed geodesics reach t = -\infty in a finite proper time. The null surface t = -\infty plays the role of the boundary B in this example. We say that inflation must have a beginning in the sense that some physical process has to enforce the boundary conditions on that surface (or on some surface in its future) (http://arxiv.org/pdf/1305.3836v2.pdf)

    and on p.18 here (http://online.kitp.ucsb.edu/online/strings_c03/guth/pdf/KITPGuth_2up.pdf)?

  15. Aron Wall says:

    Jack,

    As you gathered I don't really have the energy to continue this discussion much further, but here are three parting comments:

    1. When I said that the function f = 2^t doubles every second, I mean that f(t+1) = 2f(t). This is true even for negative values of t.
    2. If the question really comes down to actual vs. potential infinities, why couldn't the proponent of an infinite past just say that it's a potential infinity rather than an actual one?
    3. The flat slicing of de Sitter is only a patch of the complete de Sitter space. The complete de Sitter space is described by a geometry where space at one time is a sphere, and it contracts down from infinite size (at t = -\infty, it's finite for any finite t), bounces, and then expands again.
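    [The global slicing is easy to check directly; in my choice of units with H = 1, the radius of the spatial sphere goes like a(t) = cosh(Ht)/H, which is finite at every finite t, reaches its minimum at the bounce t = 0, and grows without bound as t → ±∞:]

```python
import math

def a(t, H=1.0):
    """Radius of the spatial sphere in the global slicing of de Sitter space."""
    return math.cosh(H * t) / H

# Contracts from arbitrarily large (but always finite) size, bounces at t = 0,
# then re-expands symmetrically:
for t in [-3.0, -1.0, 0.0, 1.0, 3.0]:
    print(t, a(t))
```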

  16. Jack Spell says:

    I guess, Aron, it seems to me that the entire issue comes down to the concept of "necessary and sufficient conditions." I would simply ask, Is an actualized infinite amount of time sufficient to necessitate arriving at light speed? If yes, then why is my argument incorrect? If no, then you've basically just undermined and disagreed with the crux of BGV.

  17. Jack Spell says:

    Actually, let me rephrase my dilemma:

    1. If an actually infinite amount of time is sufficient to reach light speed, then why isn't light speed reached and therefore my argument correct?

    2. If an actually infinite amount of time is not sufficient to reach light speed, then what is sufficient to reach it?

  18. Aron Wall says:

    Jack,

    Starting from any finite nonzero speed, it takes a finite amount of time to reach light speed. The smaller the initial speed, the longer it takes to reach light speed.

    Starting from an infinitesimal speed, it takes an infinite amount of time to reach light speed.

    The latter is the situation in Aguirre-Gratton and related models. That's all.

    Since you insist on using the distinction between actual and potential infinities, I will insist that the infinite past of AG is a potential rather than an actual infinity. That is, there is not a real time t = -\infty, there are only times with arbitrarily large negative values of t. When I say that the speed is infinitesimal at t = -\infty, I really mean that \mathrm{speed} \to 0 as t \to -\infty.

  19. Jack Spell says:

    Aron,
    You stated:

    Starting from any finite nonzero speed, it takes a finite amount of time to reach light speed. The smaller the initial speed, the longer it takes to reach light speed.

    Starting from an infinitesimal speed, it takes an infinite amount of time to reach light speed.

    That's fine. But the point is that the temporal duration prior to t_{0} is an infinite amount of time. Thus, even starting at an infinitesimal speed (and therefore requiring an infinite amount of time to reach light speed), light speed will be reached because we would in fact have an infinite temporal interval elapse prior to t_{0}.

    As far as your comment that "I will insist that the infinite past of AG is a potential rather than an actual infinity" goes, I have to respectfully say that it seems that you have misunderstood the difference between the two infinities. As I said, the potential infinite is never actualized (i.e., achieved) while the actual infinite is realized. In order for us to have traversed the necessary infinite temporal interval of an infinite contraction, that infinite cannot be merely potential; it must be actual because we would have already traversed it.

  20. Jack Spell says:

    Please disregard my last paragraph--I overlooked the part where you were specifying that the AG model is potential. I agree with you completely. I thought you were saying all infinitely contracting models are potential. Sorry.

  21. Aron Wall says:

    That's fine. But the point is that the temporal duration prior to t_{0} is an infinite amount of time. Thus, even starting at an infinitesimal speed (and therefore requiring an infinite amount of time to reach light speed), light speed will be reached because we would in fact have an infinite temporal interval elapse prior to t_{0}.

    No, that just doesn't follow. You have to be VERY CAREFUL when making arguments like that about infinity, because there are lots of things that can go wrong. You've probably heard about those paradoxes about things like infinity over infinity. It could be one, but it could also be any other number. You can't just reason about infinity as if it were a finite number; mathematically there are any number of examples where it doesn't work out. Talking about "infinitesimals" is sloppy anyway, and I'm sorry if that confused you. A mathematician would tell you that you should instead formulate all the infinitesimal and infinite quantities being considered here as limits of finite quantities.

    I've already said more than enough times why it doesn't work in this particular case. The correct conclusion in the case of AG is that there exists some time t, such that a) there is an infinite amount of time before t, and b) the universe can be contracting on average until t. It does not follow that for any time t^\prime, if (a) then (b). These statements are no longer the same statement when the amounts of time involved are infinite.

    As I said before, you can see this quite explicitly by thinking about simple examples like the function x = 2^t, for which it is true that 2^t exceeds any given value of x at some sufficiently large t, and yet false that it exceeds any given value of x at every t. (Even though there is an infinite amount of time between t = -\infty and any finite t.)

    This is my absolute last reply on this subject; if you still disagree you can either reread what I've already written or else go to somebody you trust more.

  22. Jack Spell says:

    I realize that you said that the previous reply will be your absolute last on this subject, but I’ve asked you twice now for your take on a passage where I believe Vilenkin is in agreement with me. This will be the third time I am asking you to share your thoughts on the following:

    Our results in this paper suggest that expansion should on average prevail, at least in the models that we considered here. But suppose for a moment that there is some more general bouncing multiverse model in which H_{av} < 0. The spacetime in such a model might be past geodesically complete, but then it would have a different problem, of a rather unusual kind. If H_{av} < 0, then the same argument that proved incompleteness to the past in Ref. [6] would now prove incompleteness to the future. (http://arxiv.org/abs/1403.1599)

    If he is not arguing exactly what I have been, what is he arguing?

  23. Aron Wall says:

    From the context of his paper, I don't think he was talking about the AG model (in which there is an expanding region in which H_{av} > 0), but rather a model with infinitely many AdS bounces, for which H_{av} < 0 throughout the whole spacetime history. He can't possibly have meant that BGV rules out AG, since it doesn't. But if you're still unsure, you can ask him about it.

    Obviously I am a pushover who can't enforce my own statements about not arguing anymore. Well, this time for real! :-)

  24. Jack Spell says:

    LOL. Yeah, you are a pushover :). Seriously, though, I appreciate all of your time. I have emailed Dr. Vilenkin for clarification and hope that he can find the time to respond.

    On a side note, I was not necessarily arguing against the AG model, because it can be interpreted in at least two different ways (not to mention that one's own theory of time can come into play to further complicate things). Nevertheless, thanks again for all of your thoughts.

  25. Aron Wall says:

    You're welcome!

    Oops, I responded to this thread again! Look what you made me do! :-)
