Ratio Christi talk [UPDATED]

This Monday, I will be giving a talk on "Science and the Resurrection" at Rutgers University.  In my talk, I will describe the basic principles of Science and compare them to Christianity to see how it measures up.  Then there will be a Q&A period.  It is being hosted by a chapter of Ratio Christi, an apologetics organization (their blog is here).

The talk will be on this upcoming President's Day, Monday evening, Feb 15 at 8pm in the Busch Student Center, Room 174.  This is located on the Busch campus of Rutgers–New Brunswick.  Non-students are welcome to attend, and I think there is going to be pizza as well.  So if any of my readers happen to be in the neighborhood, you are welcome to come!

However, if you do plan to come, please register at the Facebook event page, so that they can have an estimate of how many people to prepare for.  This will prevent us from needing to pray for a miraculous multiplication of food.

The talk is free.  However, if you plan to park your vehicle in Lot 51 right next to the Busch Student Center, you will need to get a parking permit for $5 from this website.  Alternatively, you can park for free in Lot 48 by the Visitor Center, and then walk 15 minutes north.

Posted in Talks | 8 Comments

Black Swans

A reader asks:

After a lot of reading, I've come to realize that the Bayes factor for the resurrection is so high that, if the event in question weren't a supernatural occurrence, no rational person would think that the event did not occur. However, I've stumbled upon an argument by a philosopher who argues against the resurrection argument by using Bayes' theorem as well.

I've included a link of a debate where he presented his arguments in a long mathematical form in case you wanted to refer to it, but the gist of his argument is that the prior probability of God raising Jesus from the dead is always going to be orders of magnitude lower than that of God *not* raising Jesus from the dead. He is a theist himself, so he argues that he doesn't follow Hume in his argument against miracles, but rather he claims to be making an argument from natural theology: Every experimental confirmation of a scientific theory that we observe counts as evidence of the fact that God created and ordered the world in an orderly and causally closed way and does not intervene. In another presentation, he puts forth a statistical inference of this sort (I didn't copy and paste it so it might be a flawed syllogism, but I think it captures the gist of what he's saying):

(1) For every dead person, 99.9999...% of the time God does not intervene

(2) Jesus died

(3) Therefore, we can be 99.9999....% certain that God did not intervene in Jesus' death

He argues that for every instance of a "miracle" being reported, we have experimental confirmations of the laws of nature of a much higher frequency. So, he concludes from all of this that the prior probability that God would raise Jesus from the dead is so astronomically low that however high our Bayes factor is *for* the resurrection, the prior improbability of God wanting to intervene with the laws of nature is always going to be much higher such that the posterior probability (or final probability) of the resurrection is always going to be really low.

This argument is unlike any other because it doesn't assume naturalism, in fact it assumes theism. It doesn't assume that God cannot or could not have raised Jesus from the dead, but that it is highly improbable that God would have intervened.

As a scientist, what do you think of this argument (Since your career involves seeing confirmations of God's love for order in the universe everyday?)

[Dr. Robert Cavin vs. St. Calum Miller]

(He presents his argument from the 14th minute to the 30 minute mark)

What do you think of this argument?

(1) For every American citizen who lives during a presidential election, 99.9999...% of the time they do not become President.
(2) St. Barack Obama was a living American citizen in 2008.
(3) Therefore, we can be 99.9999....% certain that Barack Obama did not become President of the United States.

Clearly there is something wrong with this argument.  What's wrong with it is that Obama is not a randomly selected [or typical] citizen.  He belonged to a special class of people who is unusually likely to become President (a Senator, a charismatic speaker, wanted to become president, went on to receive the nomination of a major party...).  Since we have additional information, it is fallacious to use the background rate to decide the chances of him becoming President.  [And of course, we also have excellent posterior evidence, coming from the period after the election, that he did in fact become President.]
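To put toy numbers on this (every figure below is invented purely for illustration), here is how conditioning on the extra information swamps the citizen-wide base rate:

```python
# Toy Bayes calculation (all numbers invented for illustration):
# why the citizen-wide base rate is the wrong prior to use for Obama.

prior = 1 / 200_000_000            # P(a random citizen becomes President)
prior_odds = prior / (1 - prior)

# Extra information: he was a major-party nominee.  Suppose roughly
# 1 in 100,000,000 non-presidents is a nominee, while every President
# was one.  The likelihood ratio for this evidence is then:
likelihood_ratio = 1.0 / (1 / 100_000_000)

posterior_odds = prior_odds * likelihood_ratio
posterior = posterior_odds / (1 + posterior_odds)
print(posterior)                   # ~0.33: no longer astronomically small
```

The exact numbers don't matter; the point is that once we condition on everything we know, the background rate over all citizens simply drops out of the calculation.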

In the same way, Jesus is not a randomly selected human being.  He was a person who claimed to be the Jewish Messiah and the Son of God, fulfilled certain prophecies, did other miracles, and so on.  So the prior probability that God will dramatically intervene shortly after Jesus' death is a lot larger than the probability that he will dramatically intervene when one of my uncles dies.  (Although, actually God DOES plan to raise 100% of human beings from the dead when Jesus returns; the difference in the case of Jesus is that he did it right away.)

The reasonable question is, what is the prior probability that God would make some special person to be the Messiah and raise that person from the dead?  (Just as we could ask what is the probability that any person becomes President.)  Once we believe that somebody is going to be President, or that somebody is going to be the Messiah, we shouldn't be all that surprised to learn that any one particular person turns out to be President, or the Messiah, so long as they are qualified for the position.

The argument in the video is even more fallacious.  First of all, I should say you should be VERY SUSPICIOUS of any person who starts their argument by making concessions that huge to the other side. Factors of 10^{297} are ridiculous numbers that should never be thrown around in almost any real life situations, and if he concedes something that ridiculous to his opponent, he ought to be guaranteed to lose, plain and simple.  He's like a stage magician who makes a big show of how he's blindfolded and his hands are tied behind his back and so on.  You can be very sure there's a trick somewhere, and that all that patter is there to distract you from the way he actually does the trick.

(The other guy, St. Calum Miller, is also making a fallacy when he quotes a likelihood factor of 10^{43} for the Resurrection; this number incorrectly assumes that the evidence from each apostle's testimony counts independently.  The odds of a group conspiracy to lie are certainly bigger than 10^{-43}, which is an astronomically tiny number.  No real historical event is ever that certain.  That being said, he's right that the evidence for the Resurrection is extremely strong, as far as historical evidence goes!  It's just that nothing in life is really that certain.)
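To sketch the independence point numerically (with invented numbers), here is why a single correlated explanation caps the combined Bayes factor, no matter how many witnesses there are:

```python
# Toy sketch (numbers invented) of why per-witness Bayes factors
# cannot simply be multiplied when the testimonies may be correlated.

per_witness = 100.0        # Bayes factor from one testimony, if independent
n_witnesses = 12

naive_factor = per_witness ** n_witnesses    # 10^24, assuming independence

# But a single "group conspiracy" hypothesis would explain every
# testimony at once.  If its probability is at least, say, one in a
# million, the combined Bayes factor can never exceed about a million:
p_conspiracy = 1e-6
capped_factor = 1 / p_conspiracy

print(naive_factor, capped_factor)   # the honest bound is vastly smaller
```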

By the way, Cavin is derisive about St. Craig Keener's statement that there are a hundred million miracle reports, but this is not actually all that silly of a number.  If 2% of the world's population claims to have seen a miracle, that's 140 million right there, assuming none of the events are redundant.  So I don't think this claim can be dismissed quite so easily.

Anyway, in his argument, Cavin compares the likelihood ratios of L (the laws of nature are always valid), M (at least once, God acts miraculously), and ~(M v L) (neither one is true).  The last comes in because L and M are not exhaustive, since there might be neither laws of nature nor divine interventions.

The actual fallacy in his argument is displayed on the slides at the 33:45 mark of the video.  He claims that ~M (i.e. not M, which would include both L and ~(M v L)), because it is maximally unspecific and does not necessarily predict that there are any laws of nature at all, is disconfirmed every time anything happens in accordance with a natural law.  Then he claims that M, because it only adds to ~M the claim that at least one miracle happens, is at least as bad off as ~M!

But this is clearly quite absurd.  Not even the most ardent believer in the supernatural thinks that every time I drop a ball, there is a 50% chance that it will miraculously fall up instead of down.  Not even the most tempestuous skeptic really halves their chance that God does miracles, every single time they see a ball drop!

Obviously, miracles don't happen all the time.  What Christians actually believe is:

M': the usual laws of Nature are almost always valid, but on rare occasions (especially at important moments in salvation history) God intervenes to perform miracles.

(By important moments in salvation history, I mean things like: critical events in ancient Israel, the ministry of Jesus and the Apostles, times when missionaries preach the Gospel to a group of people for the first time, or sometimes for the conversion of a particular individual.  Aside from this, sometimes God heals people in answer to prayer and so on, but my point is that miracles are not randomly tossed into history like darts shot into a dartboard; they tend to happen in specific kinds of situations.)

Now M' clearly does predict that balls will normally fall down.  So it is just as good as L (the laws of nature always hold) for purposes of everyday life.  So his huge probability factor of 2^{gazillion} goes away.  But M' is better than L in situations like Jesus' ministry, where there is significant historical evidence that miracles really occurred.

Incidentally, this implies that he was quite wrong to rank the probability of ~M (no miracles) so low.  Even though it is a very unspecific hypothesis, we shouldn't consider randomly selected examples of ~M; instead we should focus on whatever are the most plausible versions of ~M.  And clearly, the most plausible versions of ~M are scenarios where the laws of nature are followed, at least most of the time.  In fact, the most plausible version of ~M is L.  Thus he is guilty of a clear-cut violation of the laws of probability theory here, since he simultaneously argues that ~M is very improbable, and L very probable, even though L actually implies ~M!  This is an example of the Conjunction Fallacy.
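Here is a quick sanity check of the probability rule being violated (the three weights below are invented; M, L, and ~(M v L) form an exhaustive partition):

```python
# P(~M) can never be lower than P(L), since L implies ~M.
# The three weights are invented for illustration.

p_L = 0.90           # laws always hold (one way for ~M to be true)
p_neither = 0.05     # ~(M v L): no laws of nature, but no miracles either
p_M = 0.05           # at least one miracle occurs

p_not_M = p_L + p_neither
print(p_not_M >= p_L)    # True for ANY choice of non-negative weights
```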


Had St. Miller realized this, he could have totally eviscerated Cavin's argument in a couple seconds, in a way that would have been completely humiliating and decisive.  However as far as I can tell (I skimmed through his remarks very quickly) he mostly just ignored that argument and presented the positive case for the Resurrection.

Similarly, the most plausible version of M is not a scenario where God intervenes half the time we do a science experiment (I agree THAT is ruled out), instead it is a scenario along the lines of M' or similar.

To give another illustration, consider the famous proposition

W: All swans are white.

For a long time, Europeans noticed that every swan they ever looked at was white.  You could take this as huge experimental confirmation for W.  Every time you look at a swan, W predicts it is white and is therefore confirmed by a factor of at least 2 over ~W, which says the swan could be any color (and that's if there were only one other color besides white).  Since there were millions of observations of white swans, doesn't this mean that W is a gazillion times more probable than ~W?

And yet, there are black swans!

The fallacy is to assume that the most plausible version of ~W is that each individual swan's color is random.  In fact all the swans in Europe are white; the black swans are not only rarer, they live in Australia.  So it is no surprise the Europeans didn't notice them until they came to Australia.  So actually ~W was almost as good a theory as W, aside from being slightly more complicated.
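To put toy numbers on the swan example (again, the figures are invented for illustration):

```python
import math

# Against the straw-man version of ~W ("each swan's color is random"),
# a million white-swan sightings give an absurdly huge Bayes factor:
n_obs = 1_000_000
log10_bf_strawman = n_obs * math.log10(2)     # ~301,000 orders of magnitude

# But the PLAUSIBLE rival, "European swans are white; swans elsewhere
# may differ," predicts every European observation just as well as W:
log10_bf_plausible = n_obs * math.log10(1.0)  # = 0: no discrimination at all

print(log10_bf_strawman, log10_bf_plausible)
```

All those observations discriminate between W and the straw man, but not between W and the version of ~W that anyone should actually have taken seriously.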

As a scientist, what do you think of this argument (Since your career involves seeing confirmations of God's love for order in the universe everyday?)

That is indeed the exact point.  We worship a God who loves order, and therefore he does not do miracles haphazardly.  No scientific experiment can ever be evidence against miracles, unless you have some theological reason to believe that God would have been likely to intervene in that particular experiment.  For most experiments, the opposite is true—it would frustrate the ability of his creatures to learn about the world, without providing any particular benefit.

(I am assuming here that the goal of the particular experiment was not specifically to look for evidence of God, as in e.g. prayer experiments.  In that case, we all know that God does not usually respond to challenges to show his existence by striking a nearby tree with a lightning bolt.  The fact that he doesn't do that may be evidence against a certain sort of deity, but even there I don't see what is gained by dressing up the challenge with a veneer of science, when the whole point is simply to challenge God to act.)

Note: I only answered this question as a special favor to the particular reader in question.  I hate watching long web videos, and I tried to watch as few seconds of this one as I possibly could, to answer the question accurately!  I much prefer to interface with texts, which can be read at the speed I want, and then quoted accurately using the copy-and-paste function!

[Edit: In an earlier version of this blog post I misspelled the name "Cavin"; I apologize for this mistake.  Also, I would like to make it clear that, except in the portions of this blog post where I respond directly to the video debate, I am responding to the arguments as presented by my interlocutor, without asserting that it is necessarily an accurate summary of Cavin's position.

A few other changes made after the fact are in square brackets.]

Posted in Reviews, Theological Method | 20 Comments

Quantum Mechanics II: Decoherence & States

Strictly speaking, most of the other rules about QM are already implicit in what I've already said.  But a few implications of this setup are worth pointing out.

First note that, in QM, the "state" includes information about every single object in the system.  So, when you add up the different histories, they only interfere if the final states are exactly the same in every respect.  If even one tiny particle is in a different place than it otherwise would be, then they don't interfere.  In that case, you just add up the probabilities normally.

This is why measurement is such a significant thing in QM.  If you try to catch out Nature by explicitly measuring which slit the particle went through, then YOU are now different as a result of knowing which slit it went through.  As a result, the two histories don't interfere.  But it needn't be a person which does the "measurement".  Even if you refuse to look at it, the detector being different still prevents the interference from happening.  As far as we know experimentally, there is no special relationship between consciousness and QM (although some people have proposed interpretations of QM in which there is a connection between the two).

Usually, once histories become sufficiently different from each other, for a long enough period of time, their random interactions with the environment will tend to be different, so that the chances of getting everything perfectly the same become tiny, and the histories won't interfere anymore.  This phenomenon is called decoherence.  People argue about what this tells us about the interpretation of QM, but the phenomenon itself can be studied in the laboratory, so my use of this word should not be regarded as an endorsement of any particular interpretation.

Secondly, if you have two or more distinct states, then it's possible to take a quantum superposition of the two states, formed by adding them up with complex coefficients.  For example, if X and Y are two distinct states, then

(\mathbf{X} + \mathbf{Y}) / \sqrt{2}


(\mathbf{X} - \mathbf{Y}) / \sqrt{2}


(2\mathbf{X} +i \mathbf{Y}) / \sqrt{5}

are all equally valid states!  (The reason for the square root in the denominator is to make it so that, by the Born Rule, the total probability of the state is still 1.)  These states are just as much valid states as X or Y themselves would be.
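If you like, you can check the normalization yourself.  Here the three superpositions above are written as pairs of complex amplitudes (a, b), meaning a**X** + b**Y**:

```python
from math import sqrt

# The three superpositions from the text, as (a, b) amplitude pairs.
# Born Rule check: the square-root denominators make the total
# probability |a|^2 + |b|^2 come out to 1.
states = [
    (1 / sqrt(2), 1 / sqrt(2)),      # (X + Y)/sqrt(2)
    (1 / sqrt(2), -1 / sqrt(2)),     # (X - Y)/sqrt(2)
    (2 / sqrt(5), 1j / sqrt(5)),     # (2X + iY)/sqrt(5)
]

for a, b in states:
    total_probability = abs(a) ** 2 + abs(b) ** 2
    print(total_probability)         # each total is 1 (up to rounding)
```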

The possibility of quantum superpositions is implicit in the quantum probability rules, since if you start with a particular state A, in general it will evolve to a superposition of different states as time passes.  And there's no particularly good reason you couldn't also have started out the experiment with a quantum superposition.

(Note that if we take any state like (\mathbf{X} + \mathbf{Y}) / \sqrt{2}, and we multiply it by a phase (a number on the unit circle of complex numbers, e.g. i, or -1, or (1+i)/\sqrt{2}) then we can't tell the difference between that and the original state in any way!  That's because, when we work out the patterns of interference, we only care about the relative phases between different histories, not the absolute phase of the whole system.  So it's good to remember that there is a slight redundancy in our description here: two states that differ by a phase are really the same state.)

Now if we have a system with N possible states, then we can imagine a higher dimensional geometry consisting of all possible superpositions of these N possible states (including, for mathematical convenience, those for which the probability doesn't add to 1).  This is called the Hilbert Space of that system.  It is a kind of vector space with N complex dimensions, which means in terms of real numbers it's a 2N-dimensional space.  But don't worry about these details for the moment.

(It's kind of hard to visualize a Hilbert space when N is greater than about 2, but it's still very useful mathematically!)

The simplest nontrivial Hilbert Space is the one with N = 2 states.  (I'll give a physical example in a moment.)  This would normally involve a 4-dimensional space, but to keep things as simple as possible, I give you permission to ignore the bit about complex numbers and just think about a 2-dimensional plane.  (This is the space of all states of the form

a\mathbf{X} + b\mathbf{Y}

where a and b are now real numbers.)  Then we can think of X as a unit vector pointing along the x-axis, and Y as a unit vector pointing along the (wait for it...) y-axis.

Perhaps a picture will help:

The Hilbert space for a system with 2 states.

As you can see, the Hilbert space has an origin, which is the point in the middle which represents "zero".  Each state is represented by a vector coming out of the origin, pointing in some direction.  (But remember that -X is really the same state as +X, since they differ by a -1 phase.  I didn't draw -X on the picture, but if I had it would be 180º around from X.)  The Born Rule tells us that the total probability is the square of the length.  That means that in order for a vector to be a state-in-good-standing, it needs to be length 1.  (In other words, by the Pythagorean Theorem, the sum of the squares of its (x,y) coordinates needs to add up to 1.)  So don't ask me what the physical meaning of the "zero" vector is, since it doesn't have one.

A physical example of an N = 2 state system would be the polarization of a photon coming straight at you from your computer screen.  Light can be either horizontally polarized (the X state, corresponding to an electric field that points in the x direction) or it can be vertically polarized (the Y state, corresponding to an electric field that points in the y direction).  Now since physics is rotationally symmetric, it's obvious that if light can be horizontal or vertical, it can also be diagonal.  So you might have naïvely thought the photon would have infinitely many possible states.  And in a sense this is true, but each of these diagonal states is really just a quantum superposition of the X and Y states.

Yet on a plane, the choice of axes is arbitrary.  You can rotate the coordinate system by 45º, and it would be just as good as the original coordinate axis.  In the same way, we are currently thinking of X and Y as the two possible states of the system (with every other state being a superposition of X and Y)—but this is an arbitrary choice!  We could just as well say that every state is a superposition of (\mathbf{X} + \mathbf{Y}) / \sqrt{2} and (\mathbf{Y} - \mathbf{X}) / \sqrt{2}!  So actually every state is a quantum superposition, of certain other states.

Although the choice of coordinate axes is arbitrary, it is important that the states you pick are all "orthogonal" to each other (i.e. at right angles in the Hilbert space).  That is what tells you that it represents a set of mutually exclusive possibilities.  Any such set of N orthogonal states is called a basis of the Hilbert space.  (The plural of "basis" is "bases", pronounced BASE-EES.  Just like the plural of "index" is "indices".)  A basis gives the possible set of outcomes for some particular way to measure the system.

For example, suppose we start with a diagonal photon in the (\mathbf{X} + \mathbf{Y}) / \sqrt{2} state, and we measure it to see whether it is horizontally or vertically polarized.  (Maybe by passing it through some kind of material in which these two polarizations follow different trajectories.)  What happens?

Well, people disagree about interpretation (what is ultimately going on), but everyone agrees on the practical set of rules you'd use in the laboratory.  We just look at the state (\mathbf{X} + \mathbf{Y}) / \sqrt{2}.  It has an amplitude of 1/\sqrt{2} to be X, and also 1/\sqrt{2} to be Y.  By the Born Rule, we've got to square these numbers, so we get a 1/2 chance for it to be horizontal, and a 1/2 chance for it to be vertical.

Let's suppose it turns out to be vertical (the Y state).  Then from now on, the particle behaves just as if it had been in the Y state all along.  (This is called "projection" or sometimes "collapse of the wavefunction"; but see my remarks on decoherence earlier in this post.)  For example, if we measure it a second time, we will definitely find it in the Y state.  If we check to see whether it is in the X state, it is definitely not.

But now we can ask a separate question: is it in the (\mathbf{X} + \mathbf{Y}) / \sqrt{2} state, or the (\mathbf{Y} - \mathbf{X}) / \sqrt{2} state?  This corresponds to sending it through a different kind of filter, which discriminates between the two 45° diagonal polarization choices.  We would then find a 1/2 chance of it being the former, and a 1/2 chance of it being the latter.

Supposing it turns out to be (\mathbf{Y} - \mathbf{X}) / \sqrt{2}, this is a bit paradoxical.  Since if we had just started off asking whether the (\mathbf{X} + \mathbf{Y}) / \sqrt{2} photon was in the (\mathbf{Y} - \mathbf{X}) / \sqrt{2} state, Nature's answer would have been "Nope.  Definitely not.  Those states are orthogonal and therefore if it's the one, it's not the other!"

But somehow, merely by answering a series of questions about the photon's polarization, we managed to trick Nature into converting the photon from its original polarization to one 90° away, which is inconsistent with the first.  By measuring the photon we have affected it!
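The whole sequence of measurements can be checked with a few lines of Born-Rule arithmetic, representing each state as a pair of amplitudes (a, b) meaning a**X** + b**Y**:

```python
from math import sqrt

plus  = (1 / sqrt(2), 1 / sqrt(2))    # (X + Y)/sqrt(2), the starting state
minus = (-1 / sqrt(2), 1 / sqrt(2))   # (Y - X)/sqrt(2)
Y     = (0.0, 1.0)                    # vertical polarization

def prob(outcome, state):
    """Born Rule: square the magnitude of the overlap of the two states."""
    overlap = (outcome[0].conjugate() * state[0]
               + outcome[1].conjugate() * state[1])
    return abs(overlap) ** 2

print(prob(minus, plus))   # 0: orthogonal, so asked directly, "definitely not"
print(prob(Y, plus))       # ~0.5: chance the X-vs-Y measurement gives Y
print(prob(minus, Y))      # ~0.5: after collapsing to Y, the "impossible"
                           #       diagonal outcome can now occur
```

The intermediate measurement is what makes the difference: projecting onto Y destroys the original diagonal polarization, re-opening an outcome that was initially forbidden.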

So we see that, somehow, we can get the photon to be definitely - or | polarized, or definitely / or \ polarized.  But we can't get both of these things to be definite simultaneously.  This is an uncertainty relationship.  It's analogous to the "Heisenberg uncertainty principle", where you can't measure position and momentum at the same time: measuring one makes the other uncertain.  (Although it's not exactly the same, since position and momentum are continuous variables, while each polarization choice is a yes-no question.)

In the case we are considering, we've been lucky that the Hilbert space is directly related to two dimensions of the physical space.  That means that a rotation of axes in the Hilbert space is the same thing as a rotation of physical space.  In general, however, we are not so lucky and the Hilbert space is more abstract.  But it is still true that there are a bunch of different possible bases of the Hilbert space, that are related by rotations in the Hilbert space.  (Since the Hilbert space is complex, we are really only interested in those rotations that don't mess with the notion of "multiplying-by-i".  These are called unitary transformations.)

As long as I'm talking about complex numbers, I should mention that there's also such a thing as circularly polarized photons, which involve complex superpositions like (\mathbf{X} \pm i \mathbf{Y}) / \sqrt{2}.  But most of the bizarreness of superpositions can be illustrated without thinking about complex numbers.

Posted in Physics | 28 Comments

Theology: Less Speculative than Quantum Gravity

A reader, Martin B, asked me a question in response to my review of Krauss' talk on “A Universe from Nothing”.  I had written:

"Atheists such as Krauss scorn theology as being completely non-empirical. They claim it is not based on evidence of any sort. I find it extremely ironic when this sort of atheist thinks that speculative quantum gravity ideas are just the right thing to further bolster their atheism. Suppose you think that Science is better than Religion because it is based on evidence, and suppose you also want to refute Religion by using Science. Here's a little hint: consistency would suggest using a branch of Science that actually has some experimental data!”

Martin asks:

But isn't there empirical data that suggests "speculative quantum gravity” is real? It's not taken out of the blue, is it?

Anyway, the problem I have with religion/faith is that it's so arbitrary. Depending on who you ask there are all kinds of ideas of what's "true” when it comes to theology. May I ask what it is that makes you think Christianity stands out and is more believable than other religions and faiths on this planet?


It is common for atheists to assert that religion is based entirely on speculation, and that therefore there is "no evidence" for it.  Now I don't agree that religion is based primarily on speculation, but I also don't agree that speculation counts as "no evidence".  Let me explain.

Speculation, in the particular sense we are considering, is defined by various dictionaries as follows:

  • "the forming of a theory or conjecture without firm evidence" (Google)
  • "ideas or guesses about something that is not known" (Merriam-Webster)
  • "reasoning based on inconclusive evidence; conjecture or supposition" (American Heritage)

In other words, speculation is essentially what you do when you don't know something for sure, so you sit around without guidance and try to figure out what makes the most sense.

Now sometimes when we sit around and think about things, we find a really good reason to think that something is in fact the case.  For example, we might find a rigorous mathematical argument.  In that case, we would talk about having a "proof" instead of mere speculation.

More controversially, many philosophers have also believed themselves to have deduced certain propositions by thinking about them carefully.  The track record for this is not very good, since philosophers can't agree on which things are in fact provable in this way, and some of them have claimed to prove things which later turned out to be false (e.g. Kant thought that Euclidean geometry and Newtonian mechanics were necessary truths!).  However, it is plausible that at least some philosophical arguments are strong enough to be considered "proofs".  (Even if you are a skeptic about the ability to deduce most truths about the world by philosophical reflection, you probably came to that conclusion by thinking about it philosophically, so there's no escape.)  Also, Logic and Probability Theory are sometimes considered branches of Philosophy, and these seem to be on fairly solid footing for most purposes (at least if we ignore the puzzles raised by quantum mechanics).

Be that as it may, normally our experience is that, at least about most subjects, "armchair reasoning" is not very likely to lead people to the truth, unless it is supplemented by some source of data which is based on empirical evidence.  Two particular fields of study which do involve large quantities of empirical data, are History and Science.  The former is based on testimonies, documents, and artifacts left behind by those who lived in the past, while the latter is based on repeatable observations carefully scrutinized by the scientific method.

I would judge that normally the strength of evidence we obtain from the fields I've mentioned is as follows:

\text{Math & Logic > Science > History > Most Philosophy}

However, this is just a general expectation based on averages; specific cases might turn out differently.  As I said before, some philosophical arguments are very strong (e.g. if you don't believe the philosophical arguments that we can learn things about the external world based on observation, you can't have any grounds for believing in Science either.)  Math proofs are supposed to be completely certain, but if they are thousands of lines long it is easy for errors to sneak in.

And, in cases where historians or scientists don't have strong enough evidence to prove the truth about something they care about, they too will resort to weaker evidence, including (educated) speculation.  Just because an argument is made by people who work in a History or Science Department doesn't necessarily make it non-speculative.  You have to look at what evidence (if any) actually supports the statement!

Now, it is clear that educated speculation is right more often than chance would predict.  It has often happened that scientists have brilliantly guessed in advance correct theories of Nature, based on partial or incomplete evidence.  This is the sort of thing theorists get Nobel prizes for.  (If they were guessing based on chance, you'd expect they'd never get it right, since the space of logically possible ideas is huge.)  On the other hand, it also often happens that the brilliant conjectures turn out to be completely false.  So reasonable forms of speculation do involve a kind of evidence.  It's just not a very strong kind of evidence.  How strong it is, depends on just how many leaps of conjecture one takes, beyond what is already known.

Therefore, we should not conflate "speculative" with "no evidence".


So when you say:

But isn't there empirical data that suggests "speculative quantum gravity” is real? It's not taken out of the blue, is it?

I entirely agree with you.  Quantum gravity isn't an idea which just comes out of the blue with no evidence whatsoever.  If I thought that were true, I wouldn't work on it professionally!

We know that Quantum Mechanics is a good description of the world of atoms and other small stuff.  We know that General Relativity is a good description of situations in which gravitational fields and/or the speed of light are important.  It stands to reason that there must be some mathematical model which embraces both sets of ideas into one, mathematically consistent description.  Since the physical world exists, there must be some description of it in situations where both quantum and gravitational effects are important.  (I suppose conceivably the description might not involve math and equations, but if not that would be a total surprise in light of previous experience with new models of physics.  Normally math is the best language for describing Nature in a precise way.)

So the mere fact that there is such a thing as quantum gravity is not particularly speculative.  But most of our specific ideas about quantum gravity are highly speculative.  Some reasons for this:

  • Dimensional analysis suggests that in order to see actual effects from quantum gravity, we'd have to look at distance scales equal to the Planck length, which is about 10^{-35} meters (details here if you want the math).  For comparison, the Bohr radius (the approximate size of atoms) is about 5 \times 10^{-11}\,\text{m}, and the smallest distance scale we've ever been able to probe with the Large Hadron Collider is about (\hbar c) / (14\,TeV) = 8.8 \times 10^{-20}\,\text{m}.  So the quantum gravity scale is smaller, compared to the tiniest thing we can measure, than atoms are compared to us!  So in the absence of some really clever and dramatic experiment, it will be a really long time (if ever) before we have any direct experimental evidence of quantum gravity effects.
  • One could also try to look at what happened in the very, very early universe, but once again this puts quantum gravity earlier than anything we have good evidence for, with the possible exception of inflation (there is decent evidence for inflation, although it is not confirmed for sure; also we don't know whether it happened at the same time scale as quantum gravity or not.)
  • The attempt to combine quantum mechanics with gravity leads to severe conceptual difficulties, making it difficult to say what we even mean by a quantum spacetime.  In addition there are seeming paradoxes which nobody knows how to resolve.
  • Our current best candidate for a theory of quantum gravity, string theory, is understood well only when the strings are weakly interacting (or when it is dual to certain other theories which don't involve gravity.)  In truly quantum gravitational situations, even if we assume string theory is right, we're still in the dark about how to formulate it precisely, let alone calculate what it says.  Also string theory, although it has certain very beautiful aspects, is a very complicated construction which includes many elements (supersymmetry, extra dimensions, GUTs, etc.) that have not been confirmed experimentally as separate ideas, let alone as a combined package.
  • In the next most popular candidate, loop quantum gravity, space at the Planck scale is described by a network labelled by numbers, but there is no agreement on how to describe time evolution, nor is it clear whether a continuous-seeming spacetime emerges as we zoom out to larger distance scales.
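The dimensional-analysis estimate in the first bullet point is easy to check numerically.  Here is a minimal sketch in Python; the digits for the constants are standard CODATA values, and the comparison at the end is my own illustration rather than anything from the derivation linked above:

```python
# Planck length: l_P = sqrt(hbar * G / c^3)
hbar = 1.054571817e-34  # reduced Planck constant, J*s
G = 6.67430e-11         # Newton's gravitational constant, m^3 / (kg s^2)
c = 2.99792458e8        # speed of light, m/s

l_planck = (hbar * G / c**3) ** 0.5
print(f"{l_planck:.2e} m")  # roughly 1.6e-35 m

# For comparison, the Bohr radius (the approximate size of an atom):
bohr_radius = 5.29e-11  # m
print(f"{bohr_radius / l_planck:.1e}")  # an atom is ~3e24 Planck lengths across
```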

So the situation is desperate, but for that reason also exciting!

Now the particular idea which Krauss was using, the Hartle-Hawking "no boundary wavefunction of the universe", has in some ways even less evidential support than string theory itself (it certainly doesn't seem to logically follow from string theory, though it might or might not be combined with it).  It's just a particularly beautiful proposal for the state of the universe.  The best that can be said for it is that it is specific, simple, and elegantly relates the laws of physics to the initial conditions.  The worst that can be said about it is that it may be mathematically ill-defined, and probably contradicts observational data (such as the fact that the universe contains any stuff at all).

So I think I was justified in saying that:

The crucial physics here is totally speculative!  It was entirely based on speculative ideas about quantum gravity which anyone working in the field would admit are not proven.

But when I say totally speculative, I don't mean there's no support at all!  I just mean really really weak evidence.  I'm not trying to bash Hartle or Hawking here, who I'm sure would agree with my assessment.  Quantum gravity is hard!  We're doing the best we can.

(Commenter St. Scott Church said something similar here.)

But I think it's crazy, if an atheist thinks religion is based entirely on silly speculations, to turn to this as their paradigmatic example of something which is supported by strong evidence.  I've also criticized Quentin Smith (a better philosopher than Krauss) for the same offense.


Now let's talk about religion.

On this blog, I've discussed before certain philosophical arguments for Theism, which I think are pretty good, so far as armchair reasoning goes.  But I don't think that the strongest evidence for religion comes from this source, and indeed I had a huge long disclaimer at the beginning of that series in which I said so.

What these philosophical arguments point to, in my opinion, is something like Ethical Monotheism, which is sort of the lowest common denominator shared by traditions as diverse as Judaism, Platonism/Stoicism, Christianity, Islam, Sikhism, Baha'i, certain sects of Hinduism, and Deism.  (So believing in Christianity does not require that you think everything about other religions is false and misguided.)

But it's clearly impossible to prove something like Christianity from purely abstract philosophical arguments, since it involves a lot of particular doctrines about Jesus (particularly the Trinity and Incarnation etc.) which are much too specific and weird to derive by philosophical plausibility arguments.  (Is this similar to what you mean by saying religion / faith is "arbitrary"?)

Instead, I would say that the primary reason for believing in Christianity comes from History—although some elements of philosophical reasoning and personal religious experience come into it as well.  I said above that History was based on collecting testimonies and documents from past eras.  And this is what the New Testament is.

The primary event on which the Christian faith is based is the Crucifixion and Resurrection of Jesus.  (Followed by his Ascension into heaven, and the giving of the Holy Spirit at Pentecost in order to start the Church.)  These events were observed by normal human beings like us, using their ordinary sense data.  Those people are no longer alive, but they left behind documents, collected in the New Testament, which describe the teachings and miracles of Jesus Christ and his Apostles (those who were the eyewitnesses to his Resurrection, listed by St. Paul about 20-25 years after the event here, although he omits the women who first went to the empty tomb and were the first to see Jesus, as described in the Four Gospels.)

Now whatever the New Testament is, it is not philosophical speculation.  (I will get to other religions in just a moment.)  Various of its documents clearly claim to be the records of people who literally saw supernatural events with their own eyes.  It could be lies, or some sort of mistake, or perhaps legends which grew up later (although I find all of these theories implausible for various reasons, in part because of the large number of claimed eyewitnesses and in part because the claims arose so early and clearly in the development of the religion).  What it certainly is not is a bunch of philosophers, theologians, and mystics sitting around meditating on the nature of the universe and trying to figure out what makes sense to them.

As I have argued before, the type of evidence in question (multiple written claimed testimonies) is considered by historians to be strong evidence whenever it supports non-supernatural events, for example the Assassination of Julius Caesar.  (Indeed, ancient history would be basically impossible without it.)  The quality of the historical documentation compares quite favorably to that supporting similar events at around that time and place.  So unless we have a strong prejudice against the supernatural—or have some other specific reason to disbelieve it—we should believe it.

(And, incidentally, you should not have a strong prejudice against the Supernatural, among other reasons because of the abundant documentation of miracles which have occurred in more modern times.)

I argued above that History is, in general, more reliable than Philosophy.  For this reason, I would argue that the accounts of the Resurrection of Jesus are more evidentially important than things like e.g. philosophical arguments for Materialism / Naturalism, arguments about how a good God could allow evil in the world, and so on.  Those things are speculation, this is data.

Of course, once you accept the Christian data-points, recorded in the New Testament, you still have to do some philosophical/theological analysis to figure out exactly how to explain the extraordinary event.  I'm not claiming that e.g. the doctrine of the Trinity was directly observed by human beings.  Instead people had to work through the facts (e.g. Jesus claims to be divine in some way and this is backed up by his ability to do miracles; but he also prays to God as the Father, and accepts the Jewish teaching that there is only one God; then he promises to send the Holy Spirit to live in the hearts of those who follow him, who also seems to carry the authority and power of God) and when they worked everything out they had the doctrine of the Trinity.  Using the language of Science, this is a theory rather than a fact, but it is a good theory because it is the simplest explanation of the facts in question.  (Of course atheists and members of other religions will generally deny that the facts were as the New Testament claims, but that is a completely different question than whether the reported facts support the theory.  Just as, if there is controversy over whether a scientist falsified his data, this is a separate question from whether the data, if true, supports the theory.)

I don't want to give the impression that Christianity is only about stuff that's happened in the past: Christians also believe that the Holy Spirit is present in believers, in order to guide us into the truth and to form in us the kind of loving character that Jesus had.  Some Christians have also had a few dramatic communications from God or other mystical experiences, but this is quite secondary compared to learning to live life together as a holy community of people.  Once you come to believe it is true, then faith is indeed necessary to continue along the path even when nothing much seems to be happening.

Religion is about the encounter of the soul with God.  It seems clear that most people don't come to faith by robotically analyzing the evidence (or to disbelief, for that matter).  But I still think people should carefully consider the evidence when deciding whether to believe.  It is important to check that one is not being deceived by something false.


May I ask what it is that makes you think Christianity stands out and is more believable than other religions and faiths on this planet?

Gladly.  When analyzing a religion for truth, I would ask questions such as these (none of these criteria are necessarily intended to be definitive when taken in isolation):

  1. Has the religion persuaded a significant fraction of the world population, outside a single ethnic group, to believe in it?
  2. How does the religion relate to previous and subsequent religions?
  3. Did the religious founder claim his message came from supernatural revelation, or is it only the reflections of some wise philosopher who didn't claim to have divine sanction for their teaching?
  4. Are the primary texts describing some sort of mythological pre-history, or are they set in historical times?
  5. Related, does it sound like fiction, or does it sound like history?
  6. How long was it between the time when the supposed supernatural events took place, and when they were first written down (in a document that has had copies of it preserved)?  Is it early enough to suggest the text is based on testimony rather than later legends?
  7. What are the odds that the purported supernatural events could have occurred for non-supernatural reasons?
  8. Did the main witnesses benefit materially from their testimony, or did they suffer for it?
  9. Is there significant evidence of fraud among the originators of the religion?
  10. What is the general moral character of the religious teaching?
  11. Do people who are serious about this religion generally feel that they are put into an actual relationship with the divine?

In a future blog post, I will try to provide my own personal answers for how well various religions satisfy these criteria, and why I think Christianity is the most convincing case of divine revelation that has occurred.  However, I've included these questions separately from my answers, in order to encourage you to think about them on your own.

Sometimes I meet people with a sort of learned epistemic helplessness, just in the area of religion.  The attitude is: well, group A claims this miracle, and group B claims this divine revelation, and I am completely at a loss and unable to even begin to say which claim is more plausible!  Therefore I won't accept any of them.

Yet when it comes to less important matters in their everyday life, they are perfectly able to use their brain to decide what is credible and what is not.  If you really want to know what is true, I'm convinced you are able.

Look, and maybe you'll find.  Ask, and you might just get it.  Keep on knocking at that door, without giving up, and—if there's anyone on the other side—surely it will be opened to you.

Posted in Scientific Method, Theological Method | 32 Comments

Quantum Mechanics I: Interference

A bunch of people have been asking me about the interpretation of QM.  Now, every interpretation of QM predicts (or claims to predict) the same experimental results in any experiment (or at least, any realistically feasible experiment).  Otherwise they wouldn't be rival interpretations, they would be rival theories, and we would just do an experiment to see who is right.

So before discussing what QM actually means, it's good to get the ground rules down—the ones that all physicists agree are the right ones to use to predict the results of actual experiments.

Let's suppose we're doing a physics experiment, which I am going to describe in an extremely abstract way, because that's the kind of person I am.  A (somewhat idealized) way to describe a certain class of experiments is as follows:  We start out by preparing the initial configuration of the apparatus to be in some particular configuration (or "state"), let's call it A.  For example, we start with a radioactive atom.  We can let the experimental apparatus evolve on its own, isolated from the rest of the world, until it reaches some final configuration.  Then we look inside, e.g. 10 minutes later, and check what the current state of the experiment is.  Perhaps the atom has now decayed into something else.  Let's call this final configuration B.

Several aspects of this description are clearly idealizations.  There are always some limitations in our ability to control and/or know the initial condition A; the system is never going to be completely isolated from the rest of the world no matter how hard I try.  And in some experimental setups this may be a good thing—that is, we may want to deliberately reach in to measure and/or adjust the system, part way through its "time evolution".  (Unlike biologists, we physicists use the word evolution any time anything changes!)   And, at the end of the process, we're never going to be able to measure the final outcome with perfect precision either.

But I'm a theorist so I can ignore the messiness of real life, whenever it pleases me to do so.

Now if the laws of physics were deterministic (and if we know what they are, and we know the initial state completely precisely...) then in principle we could simply solve all of the relevant equations and find out what exactly the final state would be.   So after 10 minutes, any given A will become some particular B with probability 1.

(In practice, this calculation is often impossible because of phenomena like chaos, where (in some systems, not others) the final outcome depends very, very sensitively on the initial conditions.  For chaotic systems, you need to know the initial conditions to exponential precision in order to predict the future.  This is why we can't predict the weather accurately for more than a few days out: the number of digits of accuracy you'd need to measure things to is proportional to the number of days!)
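This exponential sensitivity is easy to see in a toy model.  The logistic map x \to 4x(1-x) is a standard textbook example of a chaotic system (my choice of illustration here, not anything specific to weather):

```python
# Two trajectories of the chaotic logistic map x -> 4x(1-x),
# starting a distance of only 1e-12 apart.
def iterate(x, steps):
    for _ in range(steps):
        x = 4 * x * (1 - x)
    return x

x0 = 0.2
separations = [abs(iterate(x0, n) - iterate(x0 + 1e-12, n)) for n in range(60)]
# The separation grows roughly like 2^n until it saturates at order one:
# each extra step of prediction costs about one more bit of initial precision.
```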

But the actual laws of physics are stranger than that.  They are not deterministic.  I think I've read that some Philosophers of Science claimed that Determinism was an important assumption underlying the possibility of doing Science at all.  Well, Determinism is false, and yet we scientists still have jobs.  I know, 20-20 hindsight, but it was still a dumb thing to say (if anyone in fact ever said it, which I haven't done the research to confirm...).

So let's try again.  Once again, we'll set up our experiment in a particular initial state A.  But now, there are several possible final states B, B', B'' etc.  Let's suppose we want to calculate the probability for some specific one: B.  So the sane, sensible way of doing this would be to think of all the different ways that A could evolve in time to become B.  To actually do calculations, you apply the rules of probability theory:


  • For any particular process by which A can evolve to B (a history), we survey all the events which happened in that history, and calculate the probability of each individual event (using our knowledge of the laws of physics, as worked out from experiment or theory).  Then we multiply those probabilities to calculate the probability of that particular history.
  • If there is more than one distinct history going from A to B, then we add up the probabilities of each history (since each of them are separate possible ways to get B), to get the total probability of observing B.
  • At the end of the day, the probabilities for all possible final outcomes should add up to 1.

Note that, since probabilities are between 0 and 1, multiplying them makes them smaller, as befits situations where multiple unlikely things need to happen to get from A to B.  On the other hand, adding them makes them bigger, as makes sense if there's more than one way for something to happen.
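As a toy numerical illustration of these two rules (the probabilities here are made up for the example, not taken from any real experiment), suppose A can reach B by two distinct histories, each consisting of two events:

```python
# Classical rules: multiply the probabilities along each history,
# then add the probabilities of the alternative histories.
def history_prob(event_probs):
    """Probability of one particular history: the product of its events."""
    p = 1.0
    for prob in event_probs:
        p *= prob
    return p

history_1 = [0.5, 0.2]  # hypothetical event probabilities along history 1
history_2 = [0.1, 0.4]  # hypothetical event probabilities along history 2

p_B = history_prob(history_1) + history_prob(history_2)
print(round(p_B, 3))  # 0.5*0.2 + 0.1*0.4 = 0.14
```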

This is the sort of probability theory which would make sense a priori to our rational minds.  The kind from which you can prove sensible results like Bayes' theorem.  But the universe doesn't really work that way either!

One way to think about QM is that it's like a Through the Looking Glass version of probability theory, where things almost work like how you expect them to, but not quite.  The basic weird idea of quantum mechanics is that instead of assigning each path from A to B a probability (which is a real number between 0 and 1) you assign to each path an amplitude (which is a complex number whose absolute value is less than or equal to 1).

A complex number can be thought of as just a vector lying in a two-dimensional plane.  In order to specify it, you need to know how long it is (the "absolute value" of the complex number) and what direction it points in (the "phase" of the complex number).  Of course, if the absolute value is zero, then the phase is meaningless, since the complex number is just 0.

In QM, the absolute value squared of an amplitude represents the probability for an event to happen.  This is called the Born rule, and it is the necessary interface for getting actual predictions about the world out of the theory.

So let's suppose you have two different possible ways to go from A to B.  (A classic example is the double slit experiment, where a particle passes through a screen which has two holes in it, and then reaches one of several possible locations on the detector.)

If the two possible histories have the same phase, then they constructively interfere and the probability of B happening is more than you would expect, from adding up the probabilities of the two histories.  On the other hand, if the two possible histories have opposite phases, then they destructively interfere, and the final probability is less than you would expect.  In fact, if the amplitudes are equal and opposite, then the total probability of getting to B is exactly 0!

(More generally, amplitudes constructively interfere if they are at an acute angle in the complex plane, and destructively interfere if they are at an obtuse angle.  For right angles, the Pythagorean Theorem + the Born Rule tells you that you get the naive expected answer from just adding up the probabilities.)
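Here is a quick numerical sketch of the Born rule and interference.  The amplitudes are chosen by hand for illustration (two histories, each with amplitude of absolute value 0.5), not derived from any particular physical system:

```python
def born_prob(amplitudes):
    """Born rule: add the amplitudes, then take the absolute value squared."""
    return abs(sum(amplitudes)) ** 2

a = 0.5  # amplitude of each individual history (hypothetical value)

p_same = born_prob([a, a])        # same phase: constructive interference
p_opposite = born_prob([a, -a])   # opposite phase: destructive interference
p_right = born_prob([a, a * 1j])  # right angle in the complex plane

print(round(p_same, 6))      # 1.0 -- more than the naive 0.25 + 0.25
print(round(p_opposite, 6))  # 0.0 -- the two histories cancel exactly
print(round(p_right, 6))     # 0.5 -- equals the naive sum of probabilities
```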

So, to summarize, instead of doing the thing that makes sense, you do this instead:


  • For any particular process by which A can evolve to B (a history), we survey all the events which happened in that history, and calculate the amplitude for each individual event (using our knowledge of the laws of physics, as worked out from experiment or theory.)  Then we multiply those amplitudes to calculate the amplitude of that particular history.
  • If there is more than one history going from A to B, then we add up the amplitudes of each history (since each of them are separate possible ways to get B), to get the total amplitude of ending up at B.
  • The probability of observing B is given by taking the absolute value squared of the total amplitude.  Unlike amplitudes, this is always a real number between 0 and 1.  Also, the laws of physics are chosen so that, at the end of the day, the probabilities of all possible final outcomes still add up to 1.  (This requirement is called unitarity).  QM may be weird, but it's not that weird.

(You may wish to go back and compare this, point by point, with the Not-Batshit-Crazy-Probability-Theory earlier in the post.)

So, if you have a system with N different initial states (and therefore N possible final states), you can specify the time evolution over any given time t by writing all of the possible transition amplitudes from each possible initial state A, A', A''... to B, B', B''... in an N x N matrix U(t), with complex numbers in each slot.  If you know about the math of matrices, this matrix is required to be unitary: UU^\dagger = U^\dagger U = I.  That's what enforces unitarity, the rule that probabilities add to 1 no matter which state you start with.
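Here is a sketch of that unitarity condition for a toy two-state system.  The particular matrix is the standard "beam splitter" example (my choice, not one from this post), and I use plain Python lists of complex numbers rather than a linear algebra library:

```python
s = 2 ** -0.5  # 1/sqrt(2)

# A 2x2 unitary matrix: equal-magnitude amplitudes with a relative phase.
U = [[s,      s * 1j],
     [s * 1j, s     ]]

def dagger(M):
    """Conjugate transpose of a square matrix."""
    n = len(M)
    return [[M[j][i].conjugate() for j in range(n)] for i in range(n)]

def matmul(A, B):
    """Multiply two square matrices of the same size."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

UdU = matmul(dagger(U), U)  # should be the identity matrix

# Unitarity in action: starting from the first initial state, the outgoing
# probabilities |U[0][0]|^2 + |U[1][0]|^2 add up to 1.
total_prob = abs(U[0][0]) ** 2 + abs(U[1][0]) ** 2
```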

Now, if you wanted to know which specific states are allowed, or which specific unitary matrix to use, then you need to specify a particular quantum mechanical theory, e.g. a harmonic oscillator, or Quantum Electrodynamics, or the Standard Model.  QM is a framework for constructing theories, not a specific theory.  Just as Newton's Law F = ma, and the other rules of classical physics, form a general framework: only experiments can tell you which particular forces actually exist in Nature.

In the next post of the series, I'll spell out some of the implications of this framework, and then maybe I'll be in a position to talk about interpretation.

Posted in Physics | 14 Comments