Interpreting the Quantum World II: What Does It Mean?

In the first installment of this series, we immersed ourselves in the quantum realm that lies beneath our everyday experience and discovered a universe that bears little resemblance to it. Instead of the solid, unambiguously well-behaved objects we’re familiar with, we encountered a unitary framework (\hat U) in which everything (including our own bodies!) is ultimately made of ethereal “waves of probability” wandering through immense configuration spaces along paths deterministically guided by well-formed differential equations and boundary conditions, and acquiring the properties we find in them as they rattle through a random pinball machine of collisions with “measurement” events (\hat M). This is all very elegant—even beautiful… but what does it mean? When my fiancée falls asleep in my arms, her tender touch, the warmth of her breath on my neck, and the fragrance of her hair hardly seem like mere probabilities being kicked around by dice-playing measurements. The refreshing drink of sparkling citrus water I just took doesn’t taste like one either. What is it that gives fire to this ethereal quantum realm? How does the Lord God breathe life into our probabilistic dust and bring about the classical universe of our daily lives (Gen. 2:7)? We finished by distilling our search for answers down to three fundamental dilemmas:

1)  What is this thing we call a wave function? Is it ontologically real, or just mathematical scaffolding we use to make sense of things we don’t yet understand?

2)  What really happens when a deterministic, well-behaved \hat U evolution of the universe runs headlong into a seemingly abrupt, non-deterministic \hat M event? How do we get them to share their toys and play nicely with each other?

3)  If counterfactual definiteness is an ill-formed concept, why are we always left with only one experienced outcome? Why don’t we experience entangled realities?

Physicists, philosophers, and theologians have been tearing their hair out over these questions for almost a century, and numerous interpretations have been suggested (more than you might imagine!). Most attempt to deal with 2), and from there, back out answers to 1) and 3). All deserve their own series of posts, so let me apologize in advance for only having time to do a fly-by of the more important ones here. In what follows I’ll give an overview of the most viable and well-received interpretations to date, and finish with my own take on all of it. So, without further ado, here are our final contestants…

Copenhagen

This is the traditionally accepted answer given by the founding fathers of QM. According to Copenhagen, the cutting edge of reality is in \hat M. The world we exist in is contained entirely in our observations. Per the Born Rule, these are irreducibly probabilistic and non-local, and result in classically describable measurements. The wave function and its unitary history \hat U are mere mathematical artifices we use to describe the conditions under which such observations are made, and have no ontic reality of their own. In this sense, Copenhagen has been called a subjective, or epistemic, interpretation because it makes our observations the measure of all things (pun intended :-) ). Although few physicists and philosophers would agree, some of the more radical takes on it have gone as far as to suggest that consciousness is the ultimate source of the reality we observe. Even so, few Copenhagen advocates believe the world doesn’t exist apart from us. The tree that falls in the woods does exist whether we’re there to see and hear it or not. What they would argue is that counterfactuals regarding the tree’s properties and those of whatever caused it to fall don’t instantiate if we don’t observe them. If no one sees the tree fall or experiences any downstream consequence of its having done so, then the question of whether it has or not is irreducibly ambiguous, and we’re free to make assumptions about it.

Several objections to Copenhagen have been raised. The idea that ontic reality resides entirely in non-local, phenomenologically discrete “collapse” events that are immune to further unpacking is unsatisfying. Science is supposed to explain things, not explain them away. It’s also difficult to see how irreducibly random \hat M events could be prepared by a rational, deterministic \hat U evolution if the wave function has no ontic existence of its own. To many physicists, philosophers, and theologians, this is less a statement about the nature of reality than the universe’s way of telling us that we haven’t turned over enough stones yet, and may not even be on the right path.

For their part, Copenhagen advocates rightly point out that this is precisely what our experiments tell us—no more, no less. If the formalism correctly predicts experimental outcomes, they say, metaphysical questions like these are beside the point, if not flat-out ill-formed, and our physics and philosophy should be strictly instrumentalist—a stance for which physicist David Mermin coined the phrase “shut up and calculate.”

Many Worlds

One response to Copenhagen is that if \hat U seems to be as rational and deterministic as the very real classical physics of our experience, perhaps that’s because it is. But that raises another set of questions. As we’ve seen, nothing about \hat U allows us to grant special status to any of the eigenstates associated with observable operators. That leaves us with no reason other than statistical probability to consider any one outcome of an \hat M event more privileged than another. Counterfactuals to what we don’t observe should have the same ontic status as what we do. If so, then why do our experiments seem to result in discrete, irreducibly random, and non-local “collapse” events with only one outcome?

According to the Many Worlds interpretation (MWI), they don’t. The universe consists of one ontically real, deterministic wave function described by \hat U that’s local (in the sense of being free of “spooky action at a distance”), and there’s no need for hidden variables to explain \hat M events. What we experience as wave function “collapse” is a result of various parts of this universal wave function separating from each other as they evolve. Entangled states within it remain entangled while their superposed components stay in phase with each other. If/when they interact with some larger environment within it, they eventually lose their coherence with respect to each other and evolve to a state where they can be described by the wave functions of the individual states. When this happens, the entanglement has (for lack of a better term) “bled out” to a larger portion of the wave function containing the previous entanglement and the environment it interacted with, and the states are said to have decohered. Thus, the wave function of the universe never actually collapses anywhere—it just continues to decohere into the separate histories of previously entangled states, each of which carries on with its own \hat U history, never interacting with the others again. As parts of the same universal wave function, all are equally real, and questions of counterfactual definiteness are ill-formed.
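Decoherence can be made concrete with a toy calculation. The sketch below is my own illustration, not part of any formal MWI treatment: a qubit in superposition becomes entangled with a two-state "environment," and tracing the environment out leaves a density matrix whose off-diagonal (interference) terms have vanished. That disappearance of interference between branches is what "collapse" amounts to on this view.

```python
import numpy as np

# Toy model of decoherence: a qubit in the superposition (|0> + |1>)/sqrt(2)
# interacts with a two-state "environment" that records which branch it's in.
# Tracing the environment out kills the qubit's off-diagonal terms.

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# Before the interaction: product state (|0> + |1>)/sqrt(2) (x) |E0>
psi_before = np.kron((ket0 + ket1) / np.sqrt(2), ket0)

# After the interaction: (|0>|E0> + |1>|E1>)/sqrt(2), the branches have
# become correlated with distinguishable environment states.
psi_after = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)

def reduced_density(psi):
    """Partial trace over the second (environment) qubit."""
    rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)
    return np.trace(rho, axis1=1, axis2=3)

rho_before = reduced_density(psi_before)  # off-diagonals = 1/2: coherent
rho_after = reduced_density(psi_after)    # off-diagonals = 0: decohered
```

Nothing here ever "collapsed": the joint state evolved unitarily the whole time, yet the qubit on its own now behaves like a classical coin flip.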

The advantages of MWI speak for themselves. From a formal standpoint, a universe grounded on \hat U and decoherence that’s every bit as rational and well-behaved as the classical mechanics it replaced certainly has advantages over one based on subjective hand grenade \hat M events. It deals nicely with the relativity-violating non-locality and irreducible indeterminacy that plague Copenhagen as well. And for reasons I won’t get into here, it also lends itself nicely to quantum field theory and the Feynman path integral (“sum over histories”) methods that have proven to be very powerful.

But its disadvantages speak just as loudly. For starters, it’s not at all clear that decoherence can fully account for what we directly experience as wave function collapse. Nor is it clear how MWI can make sense of the extremely well-established Born Rule. Does decoherence always lead to separate well-defined histories for every eigenstate associated with every observable that in one way or another participates in the evolution of \hat U? If not, then what meaning can be assigned to probabilities when some states decohere and others don’t? Even if it does, what reasons do we have for expecting that it should obey probabilistic constraints?

And of course, we haven’t even gotten to the real elephant in the room yet—the fact that we’re also being asked to believe in the existence of an infinite number of entirely separate universes that we can neither observe, nor verify, even though the strict formalism of QM doesn’t require us to. Physics aside, for those of us who are theists this raises a veritable hornet’s nest of theological issues. As a Christian, what am I to make of the cross and God’s redemptive plan for us in a sandstorm of universes where literally everything happens somewhere to infinite copies of us all? It’s worth noting that some prominent Christian physicists like Don Page embrace MWI, and see in it God’s plan to ultimately gather all of us to Him via one history or another, so that eventually “every knee shall bow, and every tongue confess, and give praise to God” (Rom. 14:11). While I understand where they’re coming from, and the belief that God will gather us all to Himself some day is certainly appealing, this strikes me as contrived and poised for Occam’s razor.

In the end, despite its advantages, and with all due respect to Hawking and its other proponents, I don’t accept MWI because, to put it bluntly, it’s more than merely unnecessary—it’s bat-shit crazy. According to MWI there is, quite literally, a world out there somewhere in which I, Scott Church (peace be upon me), am a cross-dressing, goat worshipping, tantric massage therapist, with 12” Frederick’s of Hollywood stiletto heels (none of that uppity Victoria’s Secret stuff for me!), and D-cup breast implants…

Folks, I am here to tell you… there isn’t enough vodka or LSD anywhere on this lush, verdant earth to make that believable! Whatever else may be said about this vale of tears we call Life, rest assured that indeterministic hand grenade \hat M events and “spooky action at a distance” are infinitely easier to take seriously. :D

De Broglie–Bohm

Bat-shit crazy aside, another approach would be to try separating \hat U and \hat M from each other completely. If they aren’t playing together at all, we don’t have to worry about whether they’ll share their toys. Without pressing that analogy too far, this is the basic idea behind the De Broglie-Bohm interpretation (DBB).

According to DBB, particles do have definite positions and momenta, and these play the role of hidden variables. \hat U is real and deterministic, and per the Schrödinger equation governs the evolution of a guiding, or pilot, wave function that exists separately from the particles themselves. This wave function is non-local and does not collapse. For lack of a better word, particles “surf” on it, and \hat M events acting on them are governed by the local hidden variables. In our non-local singlet example from Part I, the two electrons were sent off with spin-state box lunches. All of this results in a formalism like that of classical thermodynamics, but with predictions that look much like those of the Copenhagen interpretation. In DBB the Born Rule is an added hypothesis rather than a consequence of the inherent wave nature of particles. There is no particle/wave duality issue of course, because particles and the wave function remain separate, and Bell’s inequalities are accounted for by the non-locality of the latter.

There’s a naturalness to DBB that resolves much of the “weirdness” that has plagued other interpretations of QM. But it hasn’t been well-received. The non-locality of its pilot wave \hat U still raises the whole “spooky action at a distance” issue that physicists and philosophers alike are fundamentally averse to. Separating \hat U from \hat M and duct-taping them together with hidden variables adds layers of complexity not present in other interpretations, and runs afoul of all the issues raised by the Kochen-Specker Theorem. We have to wonder whether our good friend Occam and his trusty razor shouldn’t be invited to this party. And like MWI, it’s brutally deterministic, and as such, subject to all the philosophical and theological nightmares that go along with that, not to mention our direct existential experience as freely choosing people. Even so, for a variety of reasons (including theories of a “sub-quantum realm” where hidden variables can also hide from Kochen-Specker) it’s enjoying a bit of a revival and does have its rightful place among the contenders.

Consistent Histories

As we’ve seen, the biggest challenge QM presents is getting \hat U and \hat M to play together nicely. Most interpretations try to achieve this by denying the ontological reality of one, and somehow rolling it up into the other. What if we denied the individual reality of both, and rolled them up into a larger ontic reality described by an expanded QM formalism? Loosely speaking, Consistent Histories (or Decoherent Histories) attempts to do this by generalizing Copenhagen to a quantum cosmology framework in which the universe evolves along the most internally consistent and probable histories available to it.

Like Copenhagen, CH asserts that the wave function is just a mathematical construct that has no ontic reality of its own. Where it parts company is in its assertion that \hat U represents the wave function of the entire universe, and it never collapses. What we refer to as “collapse” occurs when some parts of it decohere with respect to larger parts, leading, it is said, to macroscopically irreversible outcomes that are subject to the ordinary additive rules of classical probability. In CH, the potential outcomes of any observation (and thus, the possible histories the universe might follow) are classified by how homogeneous and consistent they are. This, it’s said, is what makes some of them more probable than others. A homogeneous history is one that can be described by a unique temporal sequence of single-outcome propositions, such as, “I woke up” > “I got out of bed” > “I showered” … Those that cannot be, such as ones that include statements like “I walked to the grocery store or drove there,” are not. These events can be represented by projection operators \hat P from which histories can be built, and the more internally consistent they are (per criteria encoded in a class operator \hat C), the more probable they are.

Thus, in CH \hat M is not a fundamental QM concept. The evolution of the universe is described by a mathematical construct, \hat U, that can be interpreted as decohering into the most internally consistent (and therefore probable) homogeneous histories possible for it. The paths these histories take give us a framework in which some sets of classical questions can be meaningfully asked, and others can’t. Returning to our electron singlet example, CH advocates would maintain that the wave function wasn’t entangled in any real physical sense. Rather, there are two internally consistent histories for the prepared electrons that could have emerged from a spin measurement: Down/Up and Up/Down. Down/Up/Up/Down isn’t a meaningful state, so it’s meaningless to say that the universe was “in” it. Rather, when the entire state of us/laboratory/observation is accounted for, we will find that the universe followed the history that was most consistent with it. There is no need to discriminate between observer and observed. Decoherence is enough to account for the whole history, so \hat M is a superfluous construct.

CH advocates claim that it offers a cleaner, less paradoxical interpretation of QM and classical effects than its competitors, and a logical framework for discriminating boundaries between classical and quantum phenomena. But it too has its issues. It’s not at all clear that decoherence is as macroscopically irreversible as it’s claimed to be, or that by itself it can fully account for our experience of \hat M. It also requires additional projection and class operator constructs not required by other interpretations, and to date no one has formulated these with enough rigor to yield a complete theory.

Objective Collapse Theories

Of course, we could just make our peace with \hat U and \hat M. Objective collapse, or quantum mechanical spontaneous localization (QMSL) models maintain that the universe reflects both because the wave function is ontologically real, and “measurements” (perhaps interactions is a better term here) really do collapse it. According to QMSL theories, the wave function is non-local, but collapses locally in a random manner (hence, the “spontaneous localization”), or when some physical threshold is crossed. Either way, observers play no special role in the collapse itself. There are several variations on this theme. The Ghirardi–Rimini–Weber theory, for instance, emphasizes random collapse of the wave function to highly probable, stable states. Roger Penrose has proposed another theory based on energy thresholds. Particles have mass-energy that, per general relativity, will make tiny “dents” in the fabric of space-time. According to Penrose, in the entangled states of their wave function these will superpose as well, and there will be an associated energy difference that entangled states can only sustain up to a critical threshold (which he theorizes to be on the order of one Planck mass). When they decohere to a point where this threshold is exceeded, the wave function collapses per the Born Rule in the usual manner (Penrose, 2016).
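To get a rough feel for the scales involved, here's a back-of-envelope sketch. The self-energy estimate E_G ~ Gm^2/R and the particular masses and sizes are my own illustrative assumptions, not Penrose's actual calculation; the point is only that the Planck mass sits at a suggestive crossover between "collapses almost instantly" and "stays superposed essentially forever."

```python
import math

# Order-of-magnitude look at the scales in gravitationally induced collapse.
# Crude model: a mass m superposed over a displacement comparable to its own
# size R has a gravitational self-energy difference E_G ~ G m^2 / R, and the
# superposition survives for roughly tau ~ hbar / E_G.

hbar = 1.054571817e-34  # J*s
G = 6.67430e-11         # m^3 kg^-1 s^-2
c = 2.99792458e8        # m/s

planck_mass = math.sqrt(hbar * c / G)   # about 2.18e-8 kg

def collapse_time(m, R):
    """Rough lifetime (seconds) of a superposition of mass m and size R."""
    E_G = G * m**2 / R
    return hbar / E_G

# A dust grain (~1e-12 kg, ~1e-5 m across) decoheres in a small fraction
# of a second on this estimate, while a single electron would remain
# superposed far longer than the age of the universe.
grain_tau = collapse_time(1e-12, 1e-5)
electron_tau = collapse_time(9.1e-31, 1e-15)
```

That dividing line is why laboratory tests of QMSL focus on mesoscopic objects like the mirrors and resonators in the experiments cited below.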

For our purposes, this interpretation pretty much speaks for itself and so do its advantages. Its disadvantages lie chiefly in how we understand and formally handle the collapse itself. For instance, it’s not clear this can be done mathematically without violating conservation of energy or bringing new, as-yet undiscovered physics to the game. In the QMSL theories that have been presented to date, if energy is conserved the collapse doesn’t happen completely, and we end up with left-over “tails” in the final wave function state that are difficult to make sense of with respect to the Born Rule. It has also proven difficult to render the collapse compliant with special relativity without creating divergences in probability densities (in other words, blowing up the wave function). Various QMSL theories have handled issues like this in differing ways, some more successfully than others, and research in this area continues. But to date, none of the theories on the table offers a slam-dunk.

The other problem QMSL theories face is a lack of experimental verification. Random collapse theories like Ghirardi–Rimini–Weber could be verified if the spontaneous collapse of a single particle could be detected. But these are thought to be extremely rare, and to date, none have been observed. However, several tests for QMSL theories have been proposed (e.g. Marshall et al., 2003; Pepper et al., 2012; or Weaver et al., 2016 to name a few), and with luck, we’ll know more about them in the next decade or so (Penrose, 2016).


There are many other interpretations of QM, some of which are more far-fetched than others. But the ones we’ve covered today are arguably the most viable, and as such, the most researched. As we’ve seen, all have their strengths and weaknesses. Personally, I lean toward Objective Collapse scenarios. It’s hard to believe that something as well-constrained and mathematically coherent as \hat U isn’t ontologically real. Especially when the alternative bedrock reality being offered is \hat M, which is haphazard and difficult to separate from our own subjective consciousness (the latter in particular smacks of solipsism, which has never been a very compelling, or widely-accepted point of view). Of the competing alternatives that would agree about \hat U, MWI is probably the strongest contender. But for reasons that by now should be disturbingly clear, it’s far easier for me to accept a non-local wave function collapse than its take on \hat M. Call me unscientific if you will, but ivory towers alone will never be enough to convince me that I have a cross-dressing, goat-worshipping, voluptuous doppelganger somewhere that no one can ever observe. Other interpretations don’t fare much better. Most complicate matters unnecessarily and/or deal with the collapse in ways that render \hat M deterministic.

It’s been said that if your only tool is a hammer, eventually everything is going to look like a nail. It seems to me that such interpretations are compelling to many because they’re tidy. Physicists and philosophers adore tidy! Simple, deterministic models with well-defined differential equations and boundary conditions give them a fulcrum point where they feel safe, and from which they think they can move the world. This is fine for what it’s worth of course. Few would dispute the successes our tidy, well-formed theories have given us. But if the history of science has taught us anything, it’s that nature isn’t as enamored with tidiness as we are. Virtually all our investigations of QM tell us that indeterminism cannot be fully exorcized from \hat M, and the term “collapse” fits it perfectly. Outside the laboratory, everything we know about the world tells us we are conscious beings made in the image of our Creator. We are self-aware, intentional, and capable of making free choices—none of which is consistent with tidy determinism. Anyone who disputes that is welcome to come up with a differential equation and a self-contained set of data and boundary conditions that required me to decide on a breakfast sandwich rather than oatmeal this morning… and then collect their Nobel and Templeton prizes and retire to the lecture circuit.

The bottom line is that we live in a universe that presents us with \hat U and \hat M. As far as I’m concerned, if the shoe fits I see no reason not to wear it. Yes, QMSL theories have their issues. But compared to other interpretations, its problems are formalistic ones of the sort I suspect will be dealt with when we’re closer to a viable theory of quantum gravity. When we as students are ready, our teacher will come. Until then, as Einstein once said, the world should be made as simple as possible, but no simpler.

When I was in graduate school my thesis advisor used to say that when people can’t agree on the answer to some question one of two things is always true: Either there isn’t enough evidence to answer the question definitively, or we’re asking the wrong question. Perhaps many of our QM headaches have proven as stubborn as they are because we’re doing exactly that… asking the wrong questions. One possible case in point… physicists have traditionally considered \hat U to be sacrosanct—the one thing that, above all others, only the worst apostates would ever dare to question. Atheist physicist Sean Carroll has gone so far as to claim that it proves the universe is past-eternal, and God couldn’t have created it! [There are numerous problems with that of course, but they’re beyond the scope of this discussion.] However, Roger Penrose is now arguing that we need to do exactly that (fortunately, he’s respected enough in the physics community that he can get away with such challenges to orthodoxy without being dismissed as a crank or heretic). He suggests that if we started with the equivalence principle of general relativity instead, we could formulate a QMSL theory of \hat U and \hat M that would resolve many, if not most, QM paradoxes, and this is the basis for his gravitationally-based QMSL theory discussed above. Like its competitors, Penrose’s proposal has challenges of its own, not the least of which are the difficulties that have been encountered in producing a rigorous formulation of \hat M along these lines. But of everything I’ve seen so far, I find it to be particularly promising!

But then again, maybe the deepest secrets of the universe are beyond us. Isaac Newton once said,

“I do not know what I may appear to the world, but to myself I seem to have been only like a boy playing on the seashore, and diverting myself in now and then finding a smoother pebble or a prettier shell than ordinary, whilst the great ocean of truth lay all undiscovered before me.”

As scientists, we press on, collecting our shiny pebbles and shells on the shore of the great ocean with humility and reverence as he did. But it would be the height of hubris for us to presume that there’s no limit to how much of it we can wrap our minds around before we have any idea what’s beyond the horizon. As J. B. S. Haldane once said,

"My own suspicion is that the Universe is not only queerer than we suppose, but queerer than we can suppose." (Haldane, 1928)

Who knows? Perhaps he was right. God has chosen to reveal many of His thoughts to us. In His infinite grace, I imagine He’ll open our eyes to many more. But He certainly isn’t under any obligation to reveal them all, nor do we have any reason to presume that we could handle it if He did. But of course, only time will tell.

One final thing… Astute readers may have noticed one big elephant in the room that I’ve danced around, but not really addressed yet… relativity. Position, momentum, energy, and time have been a big part of our discussion today… and they’re all inertial frame dependent, and our formal treatment of \hat U and \hat M must account for that. There are versions of the Schrödinger equation that do this—most notably the Dirac and Klein–Gordon equations. Both, however, are semi-classical equations—that is, they dress up the traditional Schrödinger equation in a relativistic evening gown and matching handbag, but without an invitation to the relativity ball. For a ticket to the ball, we need to take QM to the next level… quantum field theory.

But these are topics for another day, and I’ve rambled enough already… so once again, stay tuned! 



Haldane, J. B. S. (1928). Possible worlds: And other papers. Harper & Bros. Available online; accessed May 17, 2017.

Marshall, W., Simon, C., Penrose, R., & Bouwmeester, D. (2003). Towards quantum superpositions of a mirror. Physical Review Letters, 91(13). Available online; accessed June 9, 2017.

Pepper, B., Ghobadi, R., Jeffrey, E., Simon, C., & Bouwmeester, D. (2012). Optomechanical superpositions via nested interferometry. Physical Review Letters, 109(2). Available online; accessed June 9, 2017.

Penrose, R. (2016). Fashion, faith, and fantasy in the new physics of the universe. Princeton University Press. ISBN: 0691178534. Available online; accessed May 16, 2017.

Weaver, M. J., Pepper, B., Luna, F., Buters, F. M., Eerkens, H. J., Welker, G., ... & Bouwmeester, D. (2016). Nested trampoline resonators for optomechanics. Applied Physics Letters, 108(3). Available online; accessed June 9, 2017.


Interpreting the Quantum World I: Measurement & Non-Locality

In previous posts Aron introduced us to the strange, yet compelling world of quantum mechanics and its radical departures from our everyday experience. We saw that the classical world we grew up with, where matter is composed of solid particles governed by strictly deterministic equations of state and motion, is in fact somewhat “fuzzy.” The atoms, molecules, and subatomic particles in the brightly colored illustrations and stick models of our childhood chemistry sets and schoolbooks are actually probabilistic fields that somehow acquire the properties we find in them when they’re observed. Even a particle’s location is not well-defined until we see it here, and not there. Furthermore, because they are ultimately fields, they behave in ways the little hard “marbles” of classical systems cannot, leading to all sorts of paradoxes. Physicists, philosophers, and theologians alike have spent nearly a century trying to understand these paradoxes. In this series of posts, we’re going to explore what they tell us about the universe, and our place in it.

To quickly recap earlier posts, in quantum mechanics (QM) the fundamental building block of matter is a complex-valued wave function \Psi whose squared amplitude is a real-valued number that gives the probability density of observing a particle (or particles) in any given state. \Psi is most commonly given as a function of the locations of its constituent particles, \Psi\left ( \vec{r_{1}}, \vec{r_{2}}... \vec{r_{n}} \right ), or their momentums, \Psi\left ( \vec{p_{1}}, \vec{p_{2}}... \vec{p_{n}} \right ) (but not both, which as we will see, is important), but will also include any of the system’s other variables we wish to characterize (e.g. spin states). The range of possible configurations these variables span is known as the system’s Hilbert space. As the system evolves, its wave function wanders through this space exploring its myriad probabilistic possibilities. The time evolution of its journey is derived from its total energy in a manner directly analogous to the Hamiltonian formalism of classical mechanics, resulting in the well-known time-dependent Schrödinger equation. Because \left | \Psi \right |^{2} is a probability density, its integral over all of the system’s degrees of freedom must equal 1. This irreducibly probabilistic aspect of the wave function is known as the Born Rule (after Max Born who first proposed it), and the mathematical framework that preserves it in QM is known as unitarity. [Fun fact: Pop-singer Olivia Newton-John is Born’s granddaughter!]
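The normalization condition is easy to check numerically. The following minimal sketch (my own toy example: a Gaussian wave packet on a 1-D grid, arbitrary units) builds a wave function, enforces normalization, and verifies that the total probability is 1:

```python
import numpy as np

# The Born Rule's bookkeeping on a 1-D grid: |psi|^2 is a probability
# density, so its integral over all space must equal 1.

x = np.linspace(-20.0, 20.0, 4001)
dx = x[1] - x[0]

# Gaussian wave packet with a momentum "kick" (the complex phase factor)
psi = np.exp(-x**2 / 4) * np.exp(1j * 1.5 * x)

# Enforce normalization: divide by the square root of the current norm
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

total_probability = np.sum(np.abs(psi)**2) * dx  # = 1 to numerical precision
```

Unitarity is the statement that Schrödinger evolution preserves this total probability for all time, which is why the Born Rule remains consistent as the system evolves.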

Notice that \Psi is a single complex-valued wave function of the collective states of all its constituent particles. This makes for some radical departures from classical physics. Unlike a system of little hard marbles, it can interfere with itself—not unlike the way the countless harmonics in sound waves give us melodies, harmonies, and the rich tonalities of Miles Davis’ muted trumpet or Jimi Hendrix’s Stratocaster. The history of the universe is a grand symphony—the music of the spheres! Its harmonies also lead to entangled states, in which one part may not be uniquely distinguishable from another. So, it will not generally be true that the wave function of the whole system is the product of the individual particle wave functions,

\Psi\left ( \vec{r_{1}}, \vec{r_{2}}... \vec{r_{n}} \right ) \neq \Psi\left ( \vec{r_{1}} \right )\Psi\left ( \vec{r_{2}} \right )... \Psi\left ( \vec{r_{n}} \right )

until the symphony progresses to a point where individual particle histories decohere enough to be distinguished from each other—melodies instead of harmonies.
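The distinction between factorable and entangled states can be tested mechanically. In this sketch (my own two-qubit toy example, not from the post's formalism), a two-particle state factors into single-particle wave functions exactly when its coefficient matrix has a single nonzero singular value, i.e. Schmidt rank 1:

```python
import numpy as np

# A two-qubit state psi = sum_ij c_ij |i>|j| factors as psi1 (x) psi2
# exactly when the 2x2 coefficient matrix c has rank 1 (one nonzero
# singular value). Rank 2 means the state is entangled.

def schmidt_rank(state):
    """Number of nonzero singular values of the 2x2 coefficient matrix."""
    coeffs = np.asarray(state, dtype=complex).reshape(2, 2)
    return int(np.sum(np.linalg.svd(coeffs, compute_uv=False) > 1e-12))

# Product state |0>(|0>+|1>)/sqrt(2): factors, so rank 1
product_state = np.kron([1, 0], [1, 1]) / np.sqrt(2)

# Bell state (|00> + |11>)/sqrt(2): entangled, rank 2, no factorization
bell_state = np.array([1, 0, 0, 1]) / np.sqrt(2)
```

The Bell state's rank-2 coefficient matrix is exactly the inequality above in matrix form: no choice of single-particle wave functions reproduces it.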

Another consequence of this wave-like behavior is that position and momentum can be converted into each other with a mathematical operation known as a Fourier transform. As a result, the Hilbert space may be specified in terms of position or momentum, but not both, which leads to the famous Heisenberg Uncertainty principle,

\Delta x\Delta p \geqslant \hbar/2

where \hbar is the reduced Planck constant. It’s important to note that this uncertainty is not epistemic—it’s an unavoidable consequence of wave-like behavior. When I was first taught the Uncertainty Principle in my undergraduate Chemistry series, it was derived by modeling particles as tiny pool ball “wave packets” whose locations couldn’t be observed by bouncing a tiny cue-ball photon off them without batting them into left field with a momentum we couldn’t simultaneously see. As it happens, this approach does work, and is perhaps easier for novice physics and chemistry students to wrap their heads around. But unfortunately, it paints a completely wrong-headed picture of the underlying reality. We can pin down the exact location of a particle, but in so doing we aren’t simply batting it away—we’re destroying whatever information about momentum it originally had, rendering it completely ambiguous, and vice versa (in the quantum realm paired variables that are related to each other like this are said to be canonical). The symphony is, to some extent, irreducibly fuzzy!
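We can see the Fourier-transform origin of this bound numerically. A Gaussian wave packet saturates it, and the sketch below (units with \hbar = 1; the grid sizes and width are my own choices) transforms the position-space wave function to momentum space and checks that \Delta x \, \Delta p comes out to 1/2:

```python
import numpy as np

# Uncertainty relation for a Gaussian wave packet, with hbar = 1.
# The momentum-space wave function is the Fourier transform of the
# position-space one, and a Gaussian saturates dx * dp >= 1/2.

N, L = 4096, 80.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx_grid = x[1] - x[0]

sigma = 1.7
psi_x = np.exp(-x**2 / (4 * sigma**2))
psi_x /= np.sqrt(np.sum(np.abs(psi_x)**2) * dx_grid)  # normalize

# Momentum-space wave function via FFT (hbar = 1, so p = k)
p = 2 * np.pi * np.fft.fftfreq(N, d=dx_grid)
dp_grid = p[1] - p[0]  # uniform momentum-grid spacing
psi_p = np.fft.fft(psi_x) * dx_grid / np.sqrt(2 * np.pi)

def spread(values, density, weight):
    """Standard deviation of a variable under a discretized density."""
    mean = np.sum(values * density) * weight
    return np.sqrt(np.sum((values - mean)**2 * density) * weight)

dx_unc = spread(x, np.abs(psi_x)**2, dx_grid)   # = sigma
dp_unc = spread(p, np.abs(psi_p)**2, dp_grid)   # = 1 / (2 * sigma)
uncertainty_product = dx_unc * dp_unc           # = 0.5
```

Squeezing sigma narrows the position spread and broadens the momentum spread in exact proportion, so the product never drops below 1/2: the trade-off is baked into the Fourier transform itself, not into any disturbance caused by measurement.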

So… the unfolding story of the universe is a grand symphony of probability amplitudes exploring their Hilbert space worlds along deterministic paths, often in entangled states where some of their parts aren’t entirely distinct from each other, and acquiring whatever properties we find them to have only when they’re measured, many of which cannot simultaneously have exact values even in principle. Strange stuff to say the least! But the story doesn’t end there. Before we can decipher what it all means (or, I should say, get as close to doing so as we ever will) there are two more subtleties to this bizarre quantum world we still need to unpack… measurement and non-locality.


The first thing we need to wrap our heads around is observation, or in quantum parlance, measurement. In classical systems matter inherently possesses the properties that it does, and we discover what those properties are when we observe them. My sparkling water objectively exists in a red glass located about one foot to the right of my keyboard, and I learned this by looking at it (and roughly measuring the distance with my thumb and fingers). In the quantum realm things are messier. My glass of water is really a bundle of probabilistic particle states that in some sense acquired its redness, location, and other properties by the very act of my looking at it and touching it. That’s not to say that it doesn’t exist when I’m not doing that, only that its existence and nature aren’t entirely independent of me.

How does this work? In quantum formalism, the act of observing a system is described by mathematical objects known as operators. You can think of an operator as a tool that changes one function into another one in a specific way—like say, “take the derivative and multiply by ten.” The act of measuring some property A (like, say, the weight or color of my water glass) will apply an associated operator \hat A to its initial wave function state \Psi_{i} and change it to some final state \Psi_{f},

\hat A \Psi_{i} = \Psi_{f}

For every such operator, there will be one or more states \Psi_{i} could be in at the time of this measurement for which \hat A would end up changing its magnitude but not its direction,

\begin{aligned} \hat A \Psi_{1} &= a_{1}\Psi_{1}\\ \hat A \Psi_{2} &= a_{2}\Psi_{2}\\ &\ \ \vdots\\ \hat A \Psi_{n} &= a_{n}\Psi_{n} \end{aligned}

These states are called eigenvectors, and the constants a_{n} associated with them (the eigenvalues of \hat A) are the values of A we would measure if \Psi is in any of these states when we observe it. Together, they define a coordinate system associated with A in the Hilbert space that \Psi can be specified in at any given moment in its history. If \Psi_{i} is not in one of these states when we measure A, doing so will force it into one of them. That is,

\hat A \Psi_{i} \rightarrow \Psi_{n}

and a_{n} will be the value we end up with. The projection of \Psi_{i} onto any of the n axes gives the probability amplitude for that outcome; its squared magnitude is the probability that measuring A will put the system into that state, with the associated eigenvalue being what we measure,

P(a_{n}) = \left | \Psi_{i} \cdot \Psi_{n} \right |^{2}
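Here is a minimal numpy sketch of this machinery (an illustration, not any particular experiment): diagonalize a spin-1/2 observable, project the initial state onto its eigenvector axes to get the Born-rule probabilities P(a_{n}), and then simulate a run of \hat M events by sampling eigenvalues with those probabilities:

```python
import numpy as np

# Measure S_x on a spin-1/2 particle prepared spin-up along z.
hbar = 1.0
Sx = (hbar / 2) * np.array([[0.0, 1.0], [1.0, 0.0]])  # the S_x operator

eigvals, eigvecs = np.linalg.eigh(Sx)   # the a_n and the Psi_n "axes"
psi_i = np.array([1.0, 0.0])            # spin-up along z

# Born rule: P(a_n) = |projection of psi_i onto eigenvector n|^2
probs = np.abs(eigvecs.conj().T @ psi_i) ** 2
for a, P in zip(eigvals, probs):
    print(f"a = {a:+.2f}, P = {P:.2f}")   # each outcome: P = 0.50

# Repeated M events: each measurement collapses psi_i onto one axis
rng = np.random.default_rng(0)
outcomes = rng.choice(eigvals, size=100_000, p=probs)
print(outcomes.mean())   # close to 0.0, the expectation value of S_x
```

Note what the simulation does not contain: any mechanism selecting which eigenvalue comes up on a given trial. That irreducible randomness is exactly the \hat M puzzle discussed below.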

So… per the Schrödinger equation, our wave function skips along its merry, deterministic way through a Hilbert space of unitary probabilistic states. Following a convention used by Penrose (2016), let’s designate this part of the universe’s evolution as \hat U. All progresses nicely, until we decide to measure something—location, momentum, spin state, etc. When we do, our wave function abruptly (some would even be tempted to say magically) jumps to a different track and spits out whatever value we observe, after which \hat U starts over again in the new track.

This event—let’s call it \hat M—is not governed by the wave function’s own deterministic dynamics. The tracks it jumps to are determined by whatever properties we observe, and the outcomes of these jumps are irreducibly indeterminate. We cannot say ahead of time which track we’ll end up on even in principle. The best we can do is state that measuring some property A has such and such probability of knocking \Psi into this or that state and returning its associated value. When this happens, the wave function is said to have “collapsed.” [Collapsed is in quotes here for a reason… as we shall see, not all interpretations of quantum mechanics accept that this is what actually happens!]


It’s often said that quantum mechanics only applies to the subatomic world, but on the macroscopic scale of our experience classical behavior reigns. For the most part this is true. But… as we’ve seen, \Psi is a wave function, and waves are spread out in space. Subatomic particles are only tiny when we observe them to be located somewhere. So, if \hat M involves a discrete collapse, it happens everywhere at once, even over distances that according to special relativity cannot communicate with each other—what some have referred to as “spooky action at a distance.” This isn’t mere speculation, nor a problem with our methods—it can be observed.

Consider two electrons in a paired state with zero total spin. Such states (which are known as singlets) may be bound or unbound, but once formed they will conserve whatever spin state they originated with. In this case, since an electron cannot have zero spin, the paired electrons would have to preserve antiparallel spins that cancel each other. If one were observed to have a spin of, say, +1/2 about a given axis, the other would necessarily have a spin of -1/2. Suppose we prepared such a state unbound, and sent the two electrons off in opposite directions. As we’ve seen, until the spin state of one of them is observed, neither will individually be in any particular spin state. The wave function will be an entangled state of two possible outcomes, +/- and -/+ about any axis. Once we observe one of them and find it in, say, a “spin-up” state (+1/2 about a vertical axis), the wave function will have collapsed to a state in which the other must be “spin-down” (-1/2), and that will be what we find if it’s observed a split second later.

But what would happen if the two measurements were made over a distance too large for a light signal to travel from the first observation point to the second one during the time delay between the two measurements? Special relativity tells us that no signal can propagate faster than the speed of light, so how would the second electron know that it was supposed to be in a spin-down state? Light travels 11.8 inches in one nanosecond, so it’s well within existing microcircuit technology to test this, and it has been done on many occasions. The result…? The second electron is always found in a spin state opposite that of the first. Somehow, our second electron knows what happened to its partner… instantaneously!
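The perfect anticorrelation falls straight out of the formalism. In this numpy sketch (an illustrative calculation under the standard QM rules, with measurement axes confined to the x–z plane for simplicity), the singlet state is projected onto joint spin outcomes along a common axis; "both up" never happens and "up–down" occurs half the time, no matter which axis is chosen:

```python
import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)   # Pauli x
sz = np.array([[1, 0], [0, -1]], dtype=complex)  # Pauli z

# The spin singlet: (|01> - |10>)/sqrt(2)
singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def projector(theta, sign):
    """Projector onto spin +/- about an axis at angle theta from z."""
    n_sigma = np.cos(theta) * sz + np.sin(theta) * sx
    return (I2 + sign * n_sigma) / 2

def joint_prob(theta, s1, s2):
    """P(outcome s1 on electron 1 AND s2 on electron 2), common axis."""
    P = np.kron(projector(theta, s1), projector(theta, s2))
    return np.real(singlet.conj() @ P @ singlet)

for theta in [0.0, 0.7, 2.1]:   # any shared measurement axis
    # P(+,+) is ~0 and P(+,-) is ~0.5 for every angle
    print(joint_prob(theta, +1, +1), joint_prob(theta, +1, -1))
```

The rotational invariance of the singlet is doing the work here: the anticorrelation is built into the entangled state itself, with no reference to how far apart the two electrons are.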

If so, this raises some issues. Traditional QM asserts that the wave function gives us a complete description of a system’s physical reality, and that the properties we observe it to have are instantiated when we see them. At this point we might ask ourselves two questions:

1)  How do we really know that prior to our observing it, the wave function truly is in an entangled state of two as-yet unrealized outcomes? What if it’s just probabilistic scaffolding we use to cover our lack of understanding of some deeper determinism not captured by our current QM formalism?

2)  What if the unobserved electron actually had a spin-up property that we simply hadn’t learned about yet, and would’ve had it whether it was ever observed or not (a stance known as counterfactual definiteness)? How do we know that one or more “hidden” variables of some sort hadn’t been involved in our singlet’s creation, and sent the two electrons off with spin state box lunches ready for us to open without violating special relativity (a stance known as local realism)?

Together, these comprise what’s known as local realism, or what physicist John Bell referred to as the “Bertlmann’s socks” view (after Reinhold Bertlmann, a colleague of his at CERN). Bertlmann was known for never wearing matching pairs of socks to work, so it was all but guaranteed that if one could observe one of his socks, the other would be found to be differently colored. But unlike our collapsed electron singlet state, this was because Bertlmann had set that state up ahead of time when he got dressed… a “hidden variable” one wouldn’t be privy to unless they shared a flat with him. His socks would already have been mismatched when we discovered them to be, so no “spooky action at a distance” would be needed to create that difference when we first saw them.

In 1964 Bell proposed a way to test this against the entangled states of QM. Spin state can only be observed about one axis at a time. Our experiment can look for +/- states about any axis, but not about several at once. If an observer “Alice” finds one of the electrons in a spin-up state, the second electron will be in a spin-down state. What would happen if another observer “Bob” then measured its spin state about an axis at, say, a 45-deg. angle to vertical?

The projection of the spin-down wave function on the eigenvector coordinate system of Bob’s measurement will translate into probabilities of observing + or – states in that plane. Bell produced a set of inequalities bearing his name which showed that if the electrons in our singlet state had in fact been dressed in different-colored socks from the start, experiments like this would yield outcomes that differ statistically from those predicted by traditional QM. This too has been tested many times, and the results have consistently favored the predictions of QM, leaving us with three options:
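To see the quantum side of the ledger, the sketch below computes the singlet correlations E(a, b) between spin measurements along axes a and b, evaluated at the standard angles used in the CHSH form of Bell's inequality (a variant due to Clauser, Horne, Shimony, and Holt; the axis angles here are the textbook choices, not anything specific to this post). Local realism caps the combination S at 2, while QM reaches 2√2:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)   # Pauli x
sz = np.array([[1, 0], [0, -1]], dtype=complex)  # Pauli z
singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def spin(theta):
    """Spin observable (eigenvalues +/-1) along an axis at angle theta from z."""
    return np.cos(theta) * sz + np.sin(theta) * sx

def E(a, b):
    """QM correlation <A(a) B(b)> for the singlet; equals -cos(a - b)."""
    op = np.kron(spin(a), spin(b))
    return np.real(singlet.conj() @ op @ singlet)

# Standard CHSH measurement angles
a, a2, b, b2 = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
S = abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))
print(S)   # ~2.828 = 2*sqrt(2), beyond the local-realist bound of 2
```

No assignment of pre-existing +/- values to the four measurement settings can push S past 2; the quantum value of 2√2 is what the experiments keep finding.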

a)  Local realism is not valid in QM. Particles do not inherently possess properties prior to our observing them, and indeterminacy and/or some degree of “spooky action at a distance” cannot be fully exorcised from \hat M.

b)  Our understanding of QM is incomplete. Particles do possess properties (e.g. spin, location, or momentum) whether we observe them or not (i.e. – counterfactuals about measurement outcomes exist), but our understanding of \hat U and \hat M doesn’t fully reflect the local realism that determines them.

c)  QM is complete, and the universe is both deterministic and locally real without the need for hidden variables, but counterfactual definiteness is an ill-formed concept (as in the "Many Worlds Interpretation" for instance).

Nature seems to be telling us that we can’t have our classical cake and eat it. There’s only room on the bus for one of these alternatives. Several possible loopholes have been suggested to exist in Bell’s inequalities through which an underlying locally real mechanics might slip. These have led to ever more sophisticated experiments to close them, which continue to this day. So far, the traditional QM framework has survived every attempt to up the ante, painting Bertlmann’s socks into an ever-shrinking corner. In 1966 Bell, and independently in 1967 Simon Kochen and Ernst Specker, proved what has since come to be known as the Kochen-Specker Theorem, which tightens the noose around hidden variables even further. What they showed was that regardless of non-locality, hidden variables cannot account for indeterminacy in QM unless they’re contextual; that is, unless the values they assign to an observable depend on which other compatible observables are measured along with it. Essentially, this all but dooms counterfactual definiteness in \hat M. There are ways around this (as there always are if one is willing to go far enough to make a point about something). The possibility of “modal” interpretations of QM has been floated, as has the notion of a “subquantum” realm where all of this is worked out. But these are becoming increasingly convoluted, and poised for Occam’s ever-present razor. As of this writing, hidden variable theories aren’t quite dead yet, but they are in a medically induced coma.

In case things aren’t weird enough for you yet, note that a wave function collapse over spacelike distances raises the specter of the relativity of simultaneity. Per special relativity, the time ordering of spacelike-separated events is frame-dependent: a Lorentz boost can reverse which of the two happens “first.” In situations like these it’s unclear whether the wave function was collapsed by the first observation or the second one, because which one is in the future of the other is a matter of which inertial reference frame one is viewing the experiment from. Considering that you and I are many-body wave functions, anything that affects us now, like say, stubbing a toe, collapses our wave function everywhere at once. As such, strange as it may sound, in a very real sense it can be said that a short while ago your head experienced a change because you stubbed your toe now, not back then. And it will experience a change shortly because you did as well. Which of these statements is correct depends only on the frame of reference from which the toe-stubbing event is viewed. It’s important to note that this has nothing to do with the propagation of information along our nerves—it’s a consequence of the fact that as “living wave functions”, our bodies are non-locally spread out across space-time to an extent that slightly blurs the meaning of “now”. Of course, the elapsed times associated with the size of our bodies are far too small to be detected, but the basic principle remains.

Putting it all together

Whew… that was a lot of unpacking! And the world makes even less sense now than it did when we started. Einstein once said that he wanted to know God’s thoughts, the rest were just details. Well, it seems the mind of God is more inscrutable than we ever imagined! But now we have the tools we need to begin exploring some of the ways His thoughts have been written into the fabric of creation. Our mission, should we choose to accept it, is to address the following:

1)  What is this thing we call a wave function? Is it ontologically real, or just mathematical scaffolding we use to make sense of things we don’t yet understand?

2)  What really happens when a deterministic, well-behaved \hat U symphony runs headlong into a seemingly abrupt, non-deterministic \hat M event? How do we get them to share their toys and play nicely with each other?

3)  If counterfactual definiteness is an ill-formed concept and every part of the wave function is equally real, why do our observations always leave us with only one experienced outcome? Why don’t we experience entangled realities, or even multiple realities?

In the next installment in this series we’ll delve into a few of the answers that have been proposed so far. The best is yet to come, so stay tuned!


Penrose, R. (2016). Fashion, Faith, and Fantasy in the New Physics of the Universe. Princeton University Press. ISBN: 0691178534; ASIN: B01AMPQTRU. Accessed June 11, 2017.

Posted in Metaphysics, Physics | 10 Comments

Scott Church guest blogging

In order to cover for my recent wrist injury (which is getting better, but slowly, thanks for asking), I've invited St. Scott Church to fill in a bit.

Scott is a frequent commenter; going out of his way to be helpful when explaining things to non-experts, but with little taste for nonsense from those who ought to know better.  He got an M.S. in Applied Physics from the University of Washington in 1988, and now works as a photographer in Seattle.  Here is his personal website.

Scott will be writing a series of at least 2 posts on the Interpretation of Quantum Mechanics (and maybe later on, some other topics).

Also, to all those who have left thoughtful questions, sorry that I can't attend to them all right now.  But I still might respond to a few of them.  (I will of course continue to read and moderate the comments.)

Posted in Blog | 1 Comment

Spam filter problems

For the past couple months my spam filter (Akismet) has falsely identified a rather large number of legitimate comments as spam.

(For those of you who arrived on the internet yesterday, "spam" is off-topic comments trying to get people to click on links to buy things.  Mostly it is left by "bots" that automatically scan the internet.   When I installed a second layer of protection called "Cookies for Comments" a few months ago, Akismet was processing over a million spam comments a month, causing a slowdown on the server!  The vast majority of these were caught and removed by the filter, but sometimes it gets it wrong and lets spam through (a "false negative") or rejects legit comments (a "false positive").)

I'm periodically checking the spam filter to rescue these false positives (just did 2 today), but you can help me out by doing the following:

  • Send me an email if you try to leave a legitimate comment and it does not appear on the website within a few minutes.  You can find a working email for me on my personal website, which is linked to in the bar at the top of the page.
  • If convenient, go ahead and include a copy of your comment in the email.  (Generally it's a good idea to save long comments on your home computer before submitting, but if you didn't do this, you can often reclaim it by pressing the `back' button on your browser.)
    My spam filter keeps a copy of all comments flagged as spam for 15 days, so I probably don't actually need this, but rarely there are other technical problems that cause comments to disappear.
  • Please don't take it personally if your comment doesn't appear.  The spam filtering is done automatically by a hidden algorithm, and I don't have anything to do with it!
    If you are an insecure person, please don't waste time worrying that maybe you stepped over an invisible line and accidentally insulted me, and therefore I blocked your comments without telling you.   If you are a flesh-and-blood human being, your comment was probably legitimate.
    While I do occasionally remove "by hand" comments that violate the rules, I generally try to notify the person by email, or in that comments section, except for the worst offenders.  So unless you went on a blasphemous tirade or are an obviously-insulting troll, that's probably not you!  (And even if that is you, you are certainly entitled to respectfully ask by email—once, anyway—for an explanation of why your comment was deleted.)
  • All this assumes you left me a real email address.  Of course, if you violated the rules by leaving a fake email address, then you might not receive my explanation.  In that case, you deserve what you get, and I may also delete your comment!  (But sometimes, in the case of commenters otherwise engaging in good faith, I have looked the other way on this issue, in order to show mercy to the weak.)
    Obviously, I promise not to give your email address to the spammers, or otherwise share this information without your permission!
  • It is also necessary for your web browser to accept "cookies" in order for you to successfully leave a comment.  If your browser doesn't accept them, you will be redirected to a page with an explanation & instructions.  If you are wrongly redirected to this page, please send me an email saying so.  Also, if for some reason you don't want to accept cookies from other websites, you can add an exception for Undivided Looking.

Christ is risen from the dead,
Trampling down death by death,
And upon those in the tombs
Bestowing life!

-Paschal troparion

Posted in Blog | 2 Comments

Christ is Risen!

Alleluia!  Christ is Risen!


The Resurrection of Christ by St. Raphael.

Tapestry version in the Vatican museum.

Actually, I don't have a lot more than that to say right now.  But it seemed relevant, so I thought I'd post it.  If you want to read more about the significance of this event, click here.

Posted in Theology | 14 Comments