Did the Universe Begin? IX: More about Imaginary Time

In this post, I've put some more technical details about what the concept of imaginary time means, to help clarify the previous post about the Hartle-Hawking No Boundary Proposal.  If you don't want to have to understand equations, skip this.

First of all, a bit of remedial math.  There are a lot of functions which (even if they teach them to you in school as being functions of real numbers) actually make sense when extended to complex numbers of the form z = x + iy.  I already had to say something about complex numbers earlier in this series.  If you know how to add, subtract, multiply, and divide complex numbers, you can pretty easily make sense out of rational functions like f(z) = (z^3 + z)/(z^2 - 1), but you can also make sense out of things like sines and cosines and exponentials.  For example, if we take an exponential of an imaginary number we get

e^{iy} = \cos(y) + i \sin(y).

This formula allows you to turn all sines and cosines into exponentials, enormously simplifying trigonometry by making it so you don't have to memorize a bunch of weird trig identities.  So even though they call them complex numbers, they actually make your life simpler!

So when you see something in a scientific equation like e^{ix}, that looks like an exponential, but the power is imaginary, that's really something that's spinning around in the complex plane as you change x, without growing or shrinking in its absolute size.  It is a general rule that things which oscillate in the real direction correspond to things which exponentially grow and/or shrink in the imaginary direction, and vice versa.
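If you'd like to see this concretely, here's a quick numerical sketch using Python's standard cmath module, checking that e^{iy} really equals cos(y) + i sin(y), and that e^{ix} spins around the unit circle without ever changing its absolute size:

```python
import cmath
import math

# Check Euler's formula e^{iy} = cos(y) + i*sin(y) at several values of y.
for y in [0.0, 0.5, 1.0, math.pi, 2.7]:
    lhs = cmath.exp(1j * y)
    rhs = complex(math.cos(y), math.sin(y))
    assert abs(lhs - rhs) < 1e-12

# e^{ix} rotates in the complex plane as x changes, but its absolute
# value stays pinned at 1: it spins without growing or shrinking.
moduli = [abs(cmath.exp(1j * x)) for x in [0.1 * n for n in range(100)]]
max_deviation = max(abs(m - 1.0) for m in moduli)
```

The assertion inside the loop is the "no weird trig identities" claim in miniature: every trigonometric fact about cos and sin is packed into the algebra of the exponential.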

This process of extending functions to the complex plane is called analytic continuation, and functions which can be so continued are called (wait for it!) analytic.  (Not all functions are analytic: those which suffer from abrupt changes, like the absolute value function |x|, are not.  |x| changes unpredictably at x = 0; if someone told you what it looks like for x < 0, and you tried to extrapolate it to x > 0, you'd guess wrong.)

Now it turns out that there is a close mathematical connection between quantum mechanics and thermodynamics (a.k.a. statistical mechanics).  Quantum mechanics is all about how the phase of a wavefunction oscillates around as time passes.  The rate at which the phase spins around is proportional to the energy H of the state, as told to us by Schrödinger's equation:

H \Psi = i \hbar (d/dt) \Psi.

If you solve this equation, you find that a state with definite energy H = E spins around as time passes like \Psi(t) = \Psi(0) e^{-iEt/\hbar}, where \hbar is Planck's constant.
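Here is a small numerical sketch (in units where \hbar = 1) checking that this rotating-phase solution really satisfies Schrödinger's equation, and that the probability |\Psi|^2 never changes as the phase spins around:

```python
import cmath

E = 2.0             # energy of the state, in units where hbar = 1
psi0 = 0.6 + 0.8j   # initial amplitude, chosen so that |psi0| = 1

def psi(t):
    """A definite-energy state just rotates its phase: psi(t) = psi(0) e^{-iEt}."""
    return psi0 * cmath.exp(-1j * E * t)

# The probability |psi|^2 is constant in time: only the phase moves.
probs = [abs(psi(t)) ** 2 for t in (0.0, 1.0, 5.0, 37.2)]

# Finite-difference check of Schrodinger's equation  E*psi = i * d(psi)/dt :
dt = 1e-6
deriv = (psi(dt) - psi(0.0)) / dt
residual = abs(E * psi(0.0) - 1j * deriv)
```

The residual is tiny (of order dt), confirming that oscillation in the real time direction is exactly what the equation demands.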

On the other hand, statistical mechanics is all about thermal equilibrium states, and the rule of thermal equilibrium is that the probability to be in a given state falls off exponentially with the energy.  The probability is proportional to p = e^{-E/T}/Z, where T is the temperature, and Z is an extra random thing called the "partition function" you throw in to normalize the probabilities so they add up to 1.  It turns out that states like these maximize the entropy given how much energy they have.  If you squint at these two exponentials, they start looking quite similar to each other, if only you can accept the mystical truth that inverse temperature is like imaginary time:

1/(2T) = it/\hbar,

where the factor of 2 comes from the fact that the probability is the absolute value squared of the wave function.

If you start with an initial condition where all states have equal probability, and "evolve" for a finite quantity of "imaginary" time, you end up with a thermal state (after normalizing the total probabilities to be 1 at the end).  Better still, if you start with (almost any) state and evolve for an infinite amount of imaginary time, you end up with the "vacuum" state of lowest energy, all other states being exponentially damped by comparison to that one.
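Here's a toy numerical sketch of both claims, for a made-up system with just three energy levels.  Evolving equal amplitudes for imaginary time \tau and then renormalizing reproduces the thermal weights e^{-E/T} with 1/T = 2\tau (in units where \hbar = 1), and taking \tau large singles out the vacuum:

```python
import math

energies = [0.0, 1.0, 2.5]   # made-up energy levels for a toy system

def imaginary_time_state(tau):
    """Start with equal amplitudes, damp each one by e^{-E*tau},
    then normalize so the probabilities (amplitude squared) add up to 1."""
    amps = [math.exp(-E * tau) for E in energies]
    norm = math.sqrt(sum(a * a for a in amps))
    return [a / norm for a in amps]

# Finite imaginary time gives a thermal state with 1/T = 2*tau:
tau = 0.3
probs = [a * a for a in imaginary_time_state(tau)]
Z = sum(math.exp(-2 * tau * E) for E in energies)
boltzmann = [math.exp(-2 * tau * E) / Z for E in energies]

# A very long stretch of imaginary time projects onto the vacuum state:
ground_prob = imaginary_time_state(50.0)[0] ** 2
```

The factor of 2 relating \tau and 1/T appears here for exactly the reason given above: the probability is the square of the damped amplitude.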

Well, this may seem like a bit of mumbo-jumbo, but with the help of that complex number math I mentioned above, you can actually put it on a fairly rigorous footing, for ordinary QM systems, and even for quantum field theories.  So of course, Hartle and Hawking had to be more bold than that, and try to apply this idea in the context of quantum gravity.

In quantum gravity (to the extent that we understand it), the dynamics are not governed by an ordinary Hamiltonian.  Instead they are governed by a Hamiltonian constraint:

H \Psi = 0,

also known as the Wheeler-DeWitt equation.  This equation seems to say that nothing changes with time, but it really means that the choice of time slice is arbitrary and has no coordinate-invariant meaning.

Now the Hartle-Hawking prescription is really just a clever way to calculate one particular state which (at the level of formally manipulating equations that we can't really make sense of) solves the Wheeler-DeWitt equation.

It tells us the wavefunction of the universe, expressing the "quantum amplitude" for any possible metric of space at one time to exist.  (The quantum amplitude is just a term for the complex number saying what the wavefunction is for a particular possibility to occur.  Take the absolute value squared and you get the probability.) Since there are many ways to slice spacetime into moments of time, all of them have to exist side-by-side in this wavefunction, late moments in time no less than early ones.  That's what it means to solve the Wheeler-DeWitt equation!

It's not the only solution to the Wheeler-DeWitt equation, but it's an especially nice one.  In some ways it is like a "vacuum" state of the theory, a specially distinguished state to which others may be compared.  (In other ways, it's more like a thermal state, due to the fact that there is only a finite amount of imaginary time evolution before one reaches the end of imaginary time.)

In order to calculate the Hartle-Hawking amplitude that a given geometry for 3 dimensional space (call it \Sigma) will appear ex nihilo (as it were), all you have to do is this:

1. Consider the space of all 4 dimensional curved geometries whose only boundary is \Sigma;
2. For each geometry, integrate the total value of the Ricci scalar R over the 4 dimensional geometry, call that the action S, and assign to that geometry the weight e^{-S};
3. Figure out how to integrate e^{-S} over the infinite dimensional space of all possible 4 dimensional geometries.  This requires choosing a measure on this space of possibilities, which is quite tricky for infinite dimensional spaces;
4. Cleverly dispose of several different kinds of infinities which pop up; and
5. Consider all possible choices of \Sigma and figure out how to normalize the result so that the total probability adds up to 1 (nobody knows how to do this properly either).
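Nobody knows how to carry out steps 3-5 for real geometries, but just to illustrate the shape of the recipe, here is a toy sketch in which the infinite dimensional space of geometries is (absurdly) collapsed down to a single size parameter a, with a completely made-up action S(a).  Nothing in the formulas below has anything to do with real quantum gravity; it only shows what "weight by e^{-S}, choose a measure, and normalize" means:

```python
import math

def toy_action(a):
    """A made-up stand-in for the Euclidean action S of a 'geometry' of size a."""
    return (a * a - 1.0) ** 2

# Steps 2-3: weight each 'geometry' by e^{-S} and integrate over the
# (here: one dimensional) space of geometries.  A simple Riemann sum
# plays the role of the choice of measure.
da = 0.001
grid = [da * (n + 0.5) for n in range(int(5.0 / da))]   # sizes a in (0, 5)
weights = [math.exp(-toy_action(a)) for a in grid]
Z = sum(w * da for w in weights)

# Step 5: normalize so the total 'probability' adds up to 1.
probabilities = [w * da / Z for w in weights]
total = sum(probabilities)
```

In this toy version the most probable "geometry" sits at the minimum of the action, which is the caricature of the claim that the dominant Euclidean geometry is a classical solution.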

Good luck!

 

Posted in Physics, Reviews | 10 Comments

Did the Universe Begin? VIII: The No Boundary Proposal

The last bit of evidence from physics which I'll discuss is the "no-boundary" proposal of Jim Hartle and Stephen Hawking (and some related ideas).  The Hartle-Hawking proposal was described in Hawking's well known pop book, A Brief History of Time.  This is an excellent pop description of Science, which also doubles as a somewhat dubious resource for the history of religious cosmology, as for example in this off-handed comment:

[The Ptolemaic Model of Astronomy] was adopted by the Christian church as the picture of the universe that was in accordance with Scripture, for it had the great advantage that it left lots of room outside the sphere of fixed stars for heaven and hell.^{[citation\,needed!]}

Carroll, after making some metaphysical comments about how outdated Aristotelian metaphysics is, and how the only things you really need in a physical model are mathematical consistency and fitting the data—this is Carroll's main point, well worthy of discussion, but not the subject of this post—goes on to comment on the Hartle-Hawking state in this way:

Can I build a model where the universe had a beginning but did not have a cause? The answer is yes. It’s been done. Thirty years ago, very famously, Stephen Hawking and Jim Hartle presented the no-boundary quantum cosmology model. The point about this model is not that it’s the right model, I don’t think that we’re anywhere near the right model yet. The point is that it’s completely self-contained. It is an entire history of the universe that does not rely on anything outside. It just is like that.

Temporarily setting aside Carroll's comment that he doesn't actually think this specific model is true—we'll see some possible reasons for this later—the first thing to clear up about this is that the Hartle-Hawking model doesn't actually have a beginning!  At least, it probably doesn't have a beginning, not in the traditional sense of the word.  To the extent that we can reliably extract predictions from it at all, one typically obtains an eternal universe, something like a de Sitter spacetime.  This is an eternal spacetime which contracts down to a minimum size and then expands, as we've already discussed in the context of the Aguirre-Gratton model.

This is because the Hartle-Hawking idea involves performing a "trick", which is often done in mathematical physics, although in this case the physical meaning is not entirely clear.  The trick is called Wick rotation, and involves going to imaginary values of the time parameter t.  The supposed "beginning of time" actually occurs at values of the time parameter that are imaginary!  If you only think about values of t which are real, most calculations seem to indicate that with high probability you get a universe which is eternal in both directions.

Now why is the Hartle-Hawking model so revolutionary?  In order to make predictions in physics you need to specify two different things: (1) the "initial conditions" for how a particular system (or the universe) starts out at some moment of time, and (2) the "dynamics", i.e. the rule for how the universe changes as time passes.

Most of the time, we try to find beautiful theories concerning (2), but for (1) we often just have to look at the real world.  In cosmology, the effective initial conditions we see are fairly simple but have various features which haven't yet been explained.  What's interesting about the Hartle-Hawking proposal is that it is a rather elegant proposal for (1), the actual initial state of a closed universe.

One reason that the Hartle-Hawking proposal is so elegant is that the rule for the initial condition is, in a certain sense, almost the exact same rule as the rule for the dynamics, except that it uses imaginary values of the time t instead of real values.  Thus, in some sense the proposal, if true, unifies the description of (1) and (2).  However, the proposal is far from inevitable, since there is no particularly good reason (*) to think that this special state is the only allowed state of a closed universe in a theory of quantum gravity.  There are lots of others, and if God wanted to create the universe in one of those other states, so far as I can see nothing in that choice would be inconsistent with the dynamical Laws of Nature in (2).

(Hawking has a paragraph in his book asserting that the proposal leaves no room for a Creator, but I'll put my comments on that into a later post!)

In the context of a gravitational theory, imaginary time means that instead of thinking about metrics whose signature is (-, +, +, +), as normal for special or general relativity, we think about "Euclidean" (or "Riemannian") signature metrics whose signature is (+, +, +, +).  So we have a 4 dimensional curved space (no longer spacetime).

The assumption is that time has an imaginary "beginning", in the sense that it is finite when extended into the imaginary time direction.  However, because there is no notion of "past" or "future" when the signature of spacetime is Euclidean, it's arbitrary which point you call the "beginning".  What's more, unlike the case of the Big Bang singularity in real time, there's nothing which blows up to infinity or becomes unsmooth at any of the points.

All possible such metrics are considered, but they are weighted with a probability factor which is calculated using the imaginary time dynamics.  However, there are some rather hand-waving arguments that the most probable Euclidean geometry looks like a uniform spherical geometry.  The spherical geometry is approximately classical, but there are also quantum fluctuations around it.  When you convert it back to real time, a sphere looks like de Sitter space: hence the Hartle-Hawking state predicts that the universe should have an initial condition that looks roughly like de Sitter space, plus some quantum fluctuations.
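The statement that a sphere "looks like de Sitter space" in real time can be checked with one line of algebra (a standard continuation; here I set the radius of the sphere to 1 for simplicity).  The round metric on the 4 dimensional sphere is

ds^2 = d\tau^2 + \cos^2(\tau) \, d\Omega_3^2,

and substituting the imaginary time \tau = it, so that d\tau^2 = -dt^2 and \cos(it) = \cosh(t), turns it into the metric of de Sitter space, an eternal universe which contracts to a minimum size and then re-expands:

ds^2 = -dt^2 + \cosh^2(t) \, d\Omega_3^2.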

I say handwaving, because first of all nobody really knows how to do quantum gravity.  The Hartle-Hawking approach involves writing down what's called a functional integral over the space of all possible metrics for the imaginary-time geometry.  There is an infinite-dimensional space of these metrics, and in this case nobody knows how to make sense of the integral.  Even if we did know how to make sense of it, nobody has actually proven that there isn't some classical geometry which is even more probable than the sphere.  Worst of all, it appears that for some of the directions in this infinite dimensional space, the classical geometries are a minimum of the probability density rather than a maximum!  This gives rise to instabilities, which if interpreted naively give you a "probability" distribution which is unnormalizable, meaning that there's no way to get the probabilities to add up to 1.

So Hartle and Hawking do what's called formal calculations, which is when you take a bunch of equations that don't really make sense, manipulate them algebraically as if they did make sense, cross your fingers and hope for the best.  In theoretical physics, sometimes this works surprisingly well, and sometimes you fall flat on your face.

Unfortunately, it appears that the predictions of the Hartle-Hawking state, interpreted in this way, are also wrong when you use the laws of physics in the real universe!  The trouble is that there are two periods of time when the universe looks approximately like a tiny de Sitter space, (a) in the very early universe during inflation, and (b) at very late times, when the acceleration of the universe makes it look like a very big de Sitter space.  Unfortunately, the Hartle-Hawking state seems to predict that the odds the universe should begin in a big de Sitter space is about 10^{120} times greater than the odds that it begins in the little one.  That's a shame because if it began in the little one, you would plausibly get a history of the universe which looks roughly like our own.  Whereas the big one is rather boring: since it has maximum generalized entropy, nothing interesting happens (except for thermal fluctuations).  St. Don Page has a nice article explaining this problem, and suggesting some possible solutions which even he believes are implausible.
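Where does a crazy number like 10^{120} come from?  The handwaving version (a standard back-of-the-envelope estimate, nothing rigorous) is that the dominant Euclidean geometry for a cosmological constant \Lambda is a round sphere, whose Euclidean action comes out negative, S = -3\pi/(G\Lambda) in units where \hbar = c = 1, so the Hartle-Hawking weight of that geometry is

e^{-S} = e^{3\pi/(G\Lambda)}.

The exponent 3\pi/(G\Lambda) is just the generalized entropy of the corresponding de Sitter space, which for the tiny \Lambda observed today is a number of order 10^{120} in Planck units—vastly larger than the corresponding exponent for the big effective \Lambda during inflation.  That lopsided comparison of exponents is where the 10^{120} ballpark comes from.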

Alex Vilenkin has suggested a different "tunnelling" proposal, in which the universe quantum fluctuates out of "nothing" in real time rather than imaginary time.  This proposal doesn't actually explain how to get rid of the initial singularity, and requires at least as much handwaving as the Hartle-Hawking proposal, but it has the advantage that it favors a small de Sitter space over a big one.  From the perspective of agreeing with observation, this proposal seems better.  And it has an actual beginning in real time, something which (despite all the press to the contrary) isn't true for Hartle-Hawking.

(*) There is however at least one bad reason to think this, based on a naive interpretation of the putative "Holographic Principle" of quantum gravity, in which the information in the universe is stored on the boundary.  A closed universe has no boundary, and therefore one might think it has no information, meaning that it has only one allowed state!  (The argument here is similar to the one saying the energy is zero.)  At one time I took this idea seriously, but I now believe that such a strong version of the Holographic Principle has to be wrong.   There are lots of other contexts where this "naive" version of the Holographic Principle gets the wrong answer for the information content of regions, and actual calculations of the information content of de Sitter-like spacetimes give a nonzero answer.  So I'm pretty sure this isn't actually true.

Posted in Physics, Reviews | 64 Comments

Separation of Physics and Theology?

Down in the comments section of this post, reader St. TY has the following kind thing to say about me:

What an excellent blog. I have been looking for one like this for a long time. I tell what I like about it: Although we all know St. Aron’s Christian bias, but he does not let it intrude into his physics and, as one with a mathematical background, I like that separation of Church and State.

As for the format I’m old fashioned and I like the written word because good writing demands clarity and coherence I must add honesty, and so I like reading Aron’s pieces and the comments.

I would like Aron to put all of this meaty stuff in a book.
Would you, Aron?
Thank you.

Thanks so much for your gracious compliments about my blog!  It's too bad really, that I must strongly disagree with you when you say that

Although we all know St. Aron’s Christian bias, but he does not let it intrude into his physics and, as one with a mathematical background, I like that separation of Church and State.

Your proposal that I keep a separating wall is not really very undivided, is it?  I expressed a different aspiration in my About page:

"Undivided Looking" expresses the aspiration that, although compartmentalized thinking is frequently helpful in life, one must also step back and look at the world as a whole. This involves balancing specialized knowledge with common sense to keep both kinds of thinking in perspective.

So in response I would say that one's physics views can and should be influenced by one's theological views (or vice versa), if there is a legitimate reason why they should be.  There is, after all, only one universe, and therefore no compartments can be kept completely watertight.  For example, most economists don't need to know much about chemistry, but if they're talking about buying things that might explode then there needs to be some cross-talk.

Christianity is not a "bias", but a "belief", one which happens to be true.  Deducing things from one's beliefs is not bias unless it is done in an irrational and capricious manner.  But perhaps you were speaking in a semi-humorous way, in the way that we might say that all scientists seek to be biased towards the truth!

Reasonable physicists will probably have similar intuitions about how physics should be done (I'm excluding unreasonable people like Young Earth Creationists), regardless of whether they are atheists or theists.  Or rather, people have different intuitions about physics but they mostly don't correlate with religious views!  But if on a particular matter (e.g. the universe having a beginning in time) somebody happens to be influenced by their religion (or lack thereof) to think that one viewpoint is more likely than another, I don't think that should be taboo.

Far from corrupting the scientific process, I think science usually works better when people explore a variety of intuitions and options.  As I said in discussing the importance of collaboration in science:

Healthy scientific collaboration encourages reasonable dissent.   Otherwise group-think can insulate the community from effective criticism of accepted ideas.  Some people say that scientists should proportion their beliefs to the evidence.  However, there's also some value in diversity of opinion, because it permits subgroups to work on unpopular hypotheses.  I suppose things work best when the scientific community taken as a whole proportions its research work to the evidence.

It doesn't necessarily matter whether the source of the original intuition is something that could be accepted by all scientists.  What matters is that the resulting idea can be tested.  Sometimes, the original motivation for a successful scientific theory is rather dubious (e.g. the Dirac sea motivation for antimatter), but nevertheless the resulting theory is confirmed by experiment and later is motivated by a different set of considerations.

So I don't believe in the complete separation of Physics and Theology, hence the blog.  But maybe I believe in something else which has some similar effects on my writing.  You must after all be detecting something about what I am doing which provoked your favorable statement.

Perhaps it is this: I believe in being honest.  I must to the best of my ability weigh the evidence on fair scales, and be open about what I am doing.  It would be dishonest if, because I want to prove the truth of Theism, I were to report the relevant Physics data in an imbalanced way, playing up anything which might seem to help my case and playing down anything which does not.  People often do this kind of thing reflexively when they argue, even to the extent of first deceiving themselves before they deceive others.  But it's still unfair tactics, especially when deployed by the expert against the layman.

It is not dishonesty for me to have my own views about what's important in Physics and what's not, but it would be dishonest if I implied that all physicists agreed with me about that when they don't.  Nor would it be dishonest if my views about speculative physics are influenced to some extent by my theological views—I think this is inevitable, and possibly not even fully conscious—but to pretend that a view is based on purely physical considerations when it is not, or to distort the data about Physics to match a preconceived agenda (theological or otherwise) is repugnant to me.

So I'll do the best I can to be honest, and hopefully that will tilt the scales in the right direction.

Once upon a time, a college friend and I planned to write a book about Science-and-Religion topics, but that never got off the ground.  A few of the ideas from that time are being recycled here.

I originally started this blog because an elder Christian whom I respect back in Maryland told me (and gave me to understand that it was a divine revelation to him, and I trust him to know the difference) that I should not neglect my gift of teaching when I went to Santa Barbara.  At first I tried to start a Bible study with my church, but it already had lots of other groups, and it kept not working out for various reasons; then I thought of the idea of blogging instead.

Once I reach a critical mass on the blog, perhaps some of them could be organized into book format.  But I don't need to decide that yet.  For the time being, the informal blogging environment seems more fruitful for developing ideas.

Posted in Blog, Ethics, Scientific Method, Theological Method | 14 Comments

Server back up

The blog was down for several days due to computer troubles.  There was a power failure in Mountain View while my parents were out of town, and then when it ended the wall.org computer booted up with the wrong operating system.  Just wanted to let everyone know that the problem is fixed now.

Posted in Blog | 1 Comment

Did the Universe Begin? VII: More about Zero Energy

A reader who wishes to be anonymous writes in with the following question:

I heard your paper referenced in the Carroll vs Craig debate, attempted to read it, then looked you up and found your blog (which I really like!!).  I’m fascinated by the origin of the universe and think it is a great argument for a creator.  I have a question I’m hoping you can help me with, or better yet, do a blog post on so I have something to reference!

Frequently when I debate an atheist online, they will bring up the argument that the net energy of the universe is zero and so the First Law of Thermodynamics was not violated at the origin of the universe since energy was still conserved.  As they explain it, the positive energy of matter is countered by the negative energy of gravity.  Our universe formed from a freak quantum fluctuation and is the ultimate free lunch.  I understand this at a very simple level, but what I do not understand is how a zero-energy universe matches what we observe.  If matter only makes up ~5% of the universe, 30% if you include dark matter, then how does the universe have a net energy balance of zero if 70% of it is dark energy pushing the universe apart through repulsive gravity?  It seems the expansion of the universe indicates a net positive energy.  Could you please give a simple layperson explanation for why folks like Hawking, Krauss, Guth, etc claim the universe has a net energy of zero?  It feels like there is a slight-of-hand going on and dark energy is being excluded, but I don’t know enough or have any sources to point to that say otherwise.

Dear Reader, thanks for your question.  I notice there's an interesting inversion here from the Carroll-Craig debate.  In that debate, St. Craig was trying to argue that the universe had a beginning, and Carroll was trying to outmaneuver him with the "Quantum Eternity Theorem", saying that the universe couldn't have begun unless its total energy is zero.  He then opened himself up to the retort that the energy probably is zero.

On the other hand, in your debate, it's the atheist who seems to be championing the position that the energy of the universe is zero.  Presumably this is because he wants to say that the universe emerged from a Nothing somewhat like the one Krauss has in mind (though all this talk of Nothing doing things as if it were Something keeps reminding me of "The Nothing" in The Neverending Story...) and therefore `no room for a Creator' etc.  In this case the theist might argue that Energy Conservation makes this impossible (absent a miracle), opening herself up to the retort that the energy probably is zero.

So perhaps if you and Craig were locked in a room together, you might discuss whether a physics-type beginning of the universe is helpful or unhelpful, when arguing for Theism.  Alternatively, there could be a Krauss-Carroll debate about whether there's less "room" for a Creator with or without a beginning of time (both of them granting that the idea is absurd either way).  One could more or less construct such a debate just from their remarks directed against Theism already linked to on this blog.  Carroll could argue that in models like Aguirre-Gratton:

There is no room in such a conception [an eternal universe with the entropy lowest in the middle] for God to have brought the universe into existence at any one moment.

and Krauss could respond that:

It has become clear that not only can our universe naturally arise from nothing, without supernatural shenanigans, but that it probably did.

and Carroll could retort that:

That is not what the universe does even in models where the universe has a beginning, a first moment. Because the verb popping, the verb to pop, has a temporal connotation, is the word I'm looking for. It sounds as if you waited a while, and then, pop, there was the universe. But that's exactly wrong. The correct statement is that there are models that are complete and consistent in which there is a first moment of time. That is not the same as to say there was some process by which the universe popped into being.

Apologies to Krauss and Carroll for wrenching their remarks totally out of context, but I believe I have not done any violence to their actual views.  If you'd rather see what the real Carroll actually said about Krauss' conception, you can find that on his blog here.

But that wasn't your question.  Setting aside which team benefits more from it, what does physics say about whether the energy is zero?

As I said when discussing the "Quantum Eternity Theorem", there are lots of different concepts of energy in General Relativity, and even the experts sometimes find the relationships between them tricky to think about.  It's no wonder laypeople get confused when the "experts" make definitive sounding pronouncements about the subject.  If the energy at every point in the universe is positive, how could it possibly be true that the total adds to zero?

Well, the "simple layperson" explanation is that in cosmology, there are contributions to the energy both from 1) matter (baryons, dark matter, dark energy, etc.) and 2) spacetime itself, i.e. energy stored in the gravitational fields.  There's a notion of energy density where you only count category #1, and then the energy density is positive.  But this notion isn't very useful for discussing things like energy conservation, since it isn't conserved in situations where space is changing with time (e.g. expanding).  There's another notion where we count both #1 and #2, and then it turns out that the contribution from #2 is negative and (in a finite sized "closed" universe) the total is zero.

That's the best I can do without launching into technicalities.  But I can't resist trying to say more about the real story, even if what follows may not really count as a simple layperson explanation.

Perhaps it would be easiest to explain if we start with a theory that's simpler than GR.  GR is in many ways quite similar to an easier theory of physics, namely Maxwell's equations.  Like the gravitational field, the electromagnetic field is sourced by a particular type of matter.  Gravitational fields are produced by the flow of energy and momentum through a spacetime, while electric and magnetic fields are produced by the flow of charge.

Let's focus on just one of the Maxwell equations right now, the Gauss Law.  This is a special type of Law of Physics called a constraint.  That means, instead of telling you how things change with time, it places restrictions on what is allowed to be the case at a single moment of time.

The Gauss Law is written in equations like this:

\nabla \cdot E = \rho.

Here E is the electric field vector at any given point, and \rho is the rate at which charge is flowing through time at a given point.  Which is a really fancy way of saying, the charge density.  \nabla \cdot E means \nabla_x E^x + \nabla_y E^y + \nabla_z E^z, where \nabla_i means taking the derivative with respect to the i-th spatial coordinate.

But maybe you hate equations: if so you are in good company.  When I was at St. John's College we read a funny letter which St. Faraday wrote to St. Maxwell, saying that he loved his work, but why did he have to write it using math?   St. Faraday, you see, lived in a time when you could still be a respectable scientist and explain everything using words.  Very carefully chosen words, expressing precise quantitative relationships.

Anyway, Faraday figured out this brilliant way to visualize the Gauss Law, which we still use as a crutch today.  Instead of thinking of E as a vector, you can think of it as a density of electric field lines passing through a point.  The direction of the vector says which direction the lines are going in, and the magnitude says how many there are.  I'm sure you've seen electric and magnetic field lines before, but if not, here are some pretty pictures on Google.

The Gauss Law says that electric field lines can only begin or end on charges.  The number of electric field lines coming out of (into) a charge is proportional to the positive (negative) charge of the particle.  (We say "number" to make it easy to visualize, but in fact the field lines form a continuum.)

This means that if you have a region of space R, you can do a census of the total charge in that region, simply by measuring the total amount of electric field lines coming into or out of that region.   One can write this as an equation too:

Q_R = \int_{\partial R} E_n\,dA.

Here Q_R is the total charge inside the region R, \partial R is fancy-schmancy notation for the boundary of R, E_n is the number of electric field lines poking out per unit area, and \int dA tells you to integrate that over the whole area to get the total number of electric field lines poking out.  (Faraday would have said, why work so hard to invent these silly symbols when you could just say "count the number of electric field lines poking out"?)  We physicists call an integral like this a boundary term, because—go figure—it's the integral over a boundary of a region.
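Here is a numerical sketch of that census (again my own toy, with units chosen so that a unit charge has E_n = q / (4\pi r^2) on a sphere of radius r): doing the boundary integral by midpoint-rule quadrature over a sphere recovers the enclosed charge.

```python
import numpy as np

# Charge census Q_R = ∫_{∂R} E_n dA for a unit point charge at the origin,
# with ∂R a sphere of radius R.  Units assumed: E_n = q / (4π r²).
q, R, n = 1.0, 2.0, 400
dth, dph = np.pi / n, 2 * np.pi / n
theta = (np.arange(n) + 0.5) * dth            # polar-angle midpoints on [0, π]
phi = (np.arange(n) + 0.5) * dph              # azimuth midpoints on [0, 2π]
TH, PH = np.meshgrid(theta, phi, indexing="ij")

E_n = q / (4 * np.pi * R**2)                  # field lines poking out per unit area
dA = R**2 * np.sin(TH) * dth * dph            # area element on the sphere
Q = np.sum(E_n * dA)                          # "count the lines poking out"
print(Q)                                      # ≈ 1.0: the enclosed charge
```

Notice that the radius R drops out: the 1/R² falloff of the field exactly cancels the R² growth of the area, which is why the census gives the same answer for any sphere around the charge.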

We are now in a position to appreciate the following interesting truth.  Suppose the universe is closed.  (That means, finite in size but without any boundary.  For example, space at one time could be shaped like a giant hypersphere; as we all know a sphere is finite in size but has no end.  Or like one of those video games where if you go off the edge of the screen on one side, you "wrap around" and appear on the other side, so that there isn't really an edge there.)  In a closed universe, the total electric charge is always EXACTLY ZERO.

If you're Faraday, that's because each electric field line has to either circle around in loops, or else begin on a positive charge and end on a negative charge.  So everything has to balance out.  If you're Maxwell, it's because if you take the region R to be the whole universe, then \partial R is the empty set, and so the Gauss Law just says Q_R = 0.

This doesn't necessarily have to be true if space is infinitely big.  You could just have a single electric charge sitting in infinite empty space, and this would be OK because the field lines beginning at the charge would go out to infinity, so they don't need another endpoint.
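A toy numerical version of the closed-universe statement (my own construction: a periodic grid playing the role of the "wrap around" video-game space): whatever electric field you write down, the wrap-around differences that define the discrete divergence sum to exactly zero, so the total charge has to vanish.

```python
import numpy as np

# A random "electric field" on a periodic 2-D grid (a discrete torus).
rng = np.random.default_rng(0)
N = 32
Ex = rng.standard_normal((N, N))   # arbitrary x-components
Ey = rng.standard_normal((N, N))   # arbitrary y-components

# Discrete divergence, with periodic (wrap-around) differences:
div = (np.roll(Ex, -1, axis=0) - Ex) + (np.roll(Ey, -1, axis=1) - Ey)

# Total charge = sum of div E over the whole closed space.  Every field value
# appears once with a + and once with a -, so everything cancels.
total_charge = div.sum()
print(abs(total_charge) < 1e-10)   # True: exactly zero, up to rounding
```

The cancellation is not an accident of the random numbers: it is the discrete version of the statement that a closed space has no boundary for field lines to escape through.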

Now what about GR?  It turns out that things work in a very similar way, only using energy instead of charge.  If the universe were a single star or a galaxy sitting in an otherwise empty infinite space, then the gravitational "field lines" coming out of the mass would extend out to infinity.  This allows the total "ADM" energy of the spacetime to be nonzero.  In fact, there is a Positive Energy Theorem in GR which says that, for reasonable types of matter, this energy is always positive for any state besides the vacuum (which has 0 energy).

On the other hand, if the universe is closed, then the total energy is zero because there's no boundary for gravitational field lines to go off to.  But how can this be, when the cosmologists tell us that the universe consists of about 5% ordinary matter, about 25% dark matter and 70% dark energy, and each of these components of energy is positive?

(I hate the term "dark energy", by the way, since it makes people think it's related to dark matter.  The two are nothing alike.  Dark matter is just some other kind of stuff, which clumps into structures.  The so-called dark energy is most likely just a cosmological constant, i.e. a constant positive energy density throughout all of space.)

To answer this, I need to remind you of how Einstein's equation of GR works.  The Einstein equation says how energy and momentum lead to spacetime curvature.  It can be written like this (in units where 8\pi G_N = 1):

G_{ab} = T_{ab}.

The symbol G_{ab} = R_{ab} - (1/2) g_{ab} R is called the Einstein tensor; basically it's a 4x4 symmetric matrix which encodes certain properties of the curvature of spacetime.  On the other hand, T_{ab} is the stress-energy tensor of matter.  This is also a 4x4 symmetric matrix, which encodes the rate at which momentum in the a-direction is flowing in the b-direction.  (The T_{tt} component, where both indices are chosen to be time, is just the energy density, since energy is momentum in the time direction.)
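If you are curious what "encoding properties of the curvature" cashes out to in practice, here is a sketch that computes the G_{tt} component symbolically for a flat FRW metric ds^2 = -dt^2 + a(t)^2 (dx^2 + dy^2 + dz^2).  The choice of metric is my own illustrative assumption, not something discussed above.

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
a = sp.Function('a')(t)          # the cosmological scale factor
X = [t, x, y, z]
n = 4

# Flat FRW metric, signature (-,+,+,+)
g = sp.diag(-1, a**2, a**2, a**2)
ginv = g.inv()

# Christoffel symbols Γ^i_{jk} = (1/2) g^{id} (∂_k g_{dj} + ∂_j g_{dk} - ∂_d g_{jk})
Gam = [[[sum(ginv[i, d] * (sp.diff(g[d, j], X[k]) + sp.diff(g[d, k], X[j])
             - sp.diff(g[j, k], X[d])) for d in range(n)) / 2
         for k in range(n)] for j in range(n)] for i in range(n)]

# Ricci tensor R_{jk} = ∂_i Γ^i_{jk} - ∂_j Γ^i_{ik} + Γ^i_{id} Γ^d_{jk} - Γ^i_{jd} Γ^d_{ik}
def ricci(j, k):
    r = sum(sp.diff(Gam[i][j][k], X[i]) - sp.diff(Gam[i][i][k], X[j])
            + sum(Gam[i][i][d] * Gam[d][j][k] - Gam[i][j][d] * Gam[d][i][k]
                  for d in range(n))
            for i in range(n))
    return sp.simplify(r)

Rscalar = sp.simplify(sum(ginv[i, j] * ricci(i, j)
                          for i in range(n) for j in range(n)))
G_tt = sp.simplify(ricci(0, 0) - g[0, 0] * Rscalar / 2)
print(G_tt)   # 3*(da/dt)**2 / a**2
```

The answer, 3(\dot{a}/a)^2, is the left-hand side of the Friedmann equation; setting it equal to the energy density T_{tt} = \rho gives the standard law governing cosmological expansion.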

A key point here is that T_{ab} only counts the energy and momentum in matter.  It does not count the energy and momentum stored in the gravitational field (although by convention, these days most people include the cosmological constant or "dark energy" in T_{ab}).  When the cosmologists tell you about the "energy budget" of the universe, they are really talking only about T_{tt}.  They are ignoring the gravitational field, which also contributes to the total energy of the universe.  It turns out that in a closed universe, the gravitational part (due to G_{tt}) counts negatively, and this exactly cancels the matter contribution.

Defining the total energy of the universe is, as I said, quite tricky, since in the Hamiltonian formalism energy is related to time, and you have to make an arbitrary decision about what counts as the "time" direction.  You have to decide this separately for every single point, so there's actually a lot of arbitrariness here.  Once you've picked a time coordinate, if you want to evaluate the total energy on a t = \mathrm{constant} slice \Sigma, the total energy H ends up being given by something like the following integral over the volume of \Sigma:

H = \int_\Sigma (T^t_t - G^t_t)\,dV + \mathrm{boundary\,term}.

(If you don't know about tensor notation, just don't worry about the fact that one of the t's moved upstairs.  If you do: I've raised an index using the inverse metric g^{ab}.)  The boundary term is an integral \int_{\partial \Sigma} of something I'm not bothering to write down.

Now the t-t component of the Einstein equation, a.k.a. the Hamiltonian constraint, tells us that T^t_t = G^t_t.  So the whole thing boils down to a boundary term, and in a closed universe that has to be zero.  Thus, the ambiguity about time doesn't matter in the end, since "0" is conserved no matter what.
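This "the bulk integral boils down to a boundary term" move has a simple discrete analogue (my own toy lattice, not the actual GR computation): when you sum a lattice divergence over a region, all the interior contributions telescope away, leaving only the fluxes through the region's boundary.

```python
import numpy as np

# A field on a periodic 2-D grid, and its discrete divergence.
rng = np.random.default_rng(1)
N = 16
Ex = rng.standard_normal((N, N))
Ey = rng.standard_normal((N, N))
div = (np.roll(Ex, -1, axis=0) - Ex) + (np.roll(Ey, -1, axis=1) - Ey)

# A rectangular region R: rows 3..9, columns 4..11 (kept away from the
# wrap-around edge so we can read off its boundary directly).
i0, i1, j0, j1 = 3, 10, 4, 12
bulk = div[i0:i1, j0:j1].sum()            # "volume integral" over R

# Net flux through ∂R: outgoing minus incoming on each pair of faces.
flux = ((Ex[i1, j0:j1] - Ex[i0, j0:j1]).sum()
        + (Ey[i0:i1, j1] - Ey[i0:i1, j0]).sum())
print(np.isclose(bulk, flux))             # True: bulk sum = boundary term
```

And if you let R grow to swallow the whole torus, the boundary disappears and the "total energy" is forced to be exactly zero, just as in the closed-universe argument above.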
