I finally got a copy of Cox and Forshaw’s The Quantum Universe, and a little time to read it, in hopes that it would shed some light on the great electron state controversy. I haven’t finished the book, but I got through the relevant chapter and, well, it doesn’t, really. That is, the discussion in the book doesn’t go into all that much more detail than the discussion on-line, and still requires a fair bit of work to extract a coherent scientific claim.

The argument basically boils down to the idea that the proper mathematical description of a universe containing more than one fermion is a many-particle wavefunction that is overall antisymmetric under the exchange of any two electrons. That is, if you numbered every electron in the universe, you would write the wavefunction down one way, and if you swapped the numbers on two of the electrons, then re-wrote the wavefunction, you would get the same thing you had the first time, but with an overall negative sign. Thus in seeking to make a quantum model of a hydrogen atom in your living room, you would need to write down some sort of 10^{90}x10^{90} Slater determinant (or whatever the actual total number of electrons in the universe is) to get the proper state of the many-(many-many-many…)-body system. The total energy of this state will depend on the energy of all of the individual electrons, and some complicated overlap integrals between every single electron state and every single other electron state, so I hope you have a lot of paper or a really fast (quantum?) computer to help you work it out.
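For concreteness, here's a minimal sketch of that antisymmetry in the two-electron case: build the Slater determinant from two orbitals, and swapping the electron labels flips the overall sign of the wavefunction. The Gaussian-ish orbitals here are made up purely for illustration, not anything from the book.

```python
import numpy as np

# Hypothetical 1D orbitals, just for illustration: Gaussians with a
# polynomial factor to make two distinct single-particle states.
def phi(n, x):
    return (x ** n) * np.exp(-x ** 2 / 2.0)

def slater(x1, x2):
    # Two-electron Slater determinant: Psi(x1, x2) is the determinant
    # of the matrix [phi_i(x_j)], normalized by 1/sqrt(2).
    m = np.array([[phi(0, x1), phi(0, x2)],
                  [phi(1, x1), phi(1, x2)]])
    return np.linalg.det(m) / np.sqrt(2.0)

# Swapping the two electron coordinates flips the sign of Psi:
a = slater(0.3, 1.1)
b = slater(1.1, 0.3)
print(a, b)  # b == -a
```

The determinant structure is what makes the sign flip automatic: exchanging two electrons exchanges two columns of the matrix, and that negates the determinant, for 2 electrons or 10^{90}.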

In a sort of hyper-strict technical sense, this is true (though it would not produce any measurable consequences), and this is the origin of the claim that fiddling around with a piece of matter here on Earth changes the energy of every other electron in the entire universe. The problem here arises from the fact that Cox is being sloppy with his language, trying to have it both ways. If you’re going to get hyper-strict and insist that the proper description of the wavefunction of the universe is a 10^{90}-body wavefunction that is overall antisymmetric, then you can’t simultaneously talk about the energy state of a single electron in a particular location as if that is a meaningful thing. If the proper description is a gigantic many-body wavefunction, then there aren’t any single-electron states in specific locations. Everything is a gigantic antisymmetric mush filling the entire universe, and good luck talking sensibly about that.

(This is sort of characteristic of the entire book to this point, but I’ll write a more coherent review of it once I finish the last couple of chapters.)

There’s also a bit of a problem with the specific nature of the claim being made. There are two ways to take the claim that changing the state of an electron here on Earth changes the state of every other electron in the universe. One is to imagine that you have driven the electron from its ground state to some excited state. As several clever people pointed out, this doesn’t work for the rubbing-a-bit-of-diamond example Cox uses, unless his hands are somehow hotter than the surface of the Sun, and thus able to provide the electron volt or so needed to promote an electron to another band. Leaving that aside, the proper hyper-strict description would be that the many-body electron wavefunction for the entire universe moves from its ground state to some low-lying excited state that is also overall antisymmetric. This might seem like it would involve shifting a lot of stuff around, but those are the only energy states available to the system, so it doesn’t actually amount to much.

The other way to read it is that rubbing a piece of diamond here on Earth dumps some extra energy into the lattice of carbon atoms making up the local potential energy for electrons in the vicinity, and this changes the available energy states very slightly. Such a change might be adiabatic– that is, if the many-body electron wavefunction of the entire universe was in the ground state initially, it would still be in the ground state. If it was in the *n*^{th} excited state of the electron wavefunction for the entire universe, it would continue to be in the *n*^{th} excited state. The absolute energy of those states would change very slightly as a result, but the relative position wouldn’t change. People in AMO physics use this sort of trick all the time, particularly in BEC experiments– as long as you change the potential slowly compared to the motion of the particles within it, you can manipulate the total energy without taking it out of the ground state.
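You can see the adiabatic theorem at work in a toy two-level model, standing in for the real many-body problem. All the numbers are made up, with ħ = 1: sweep the Hamiltonian slowly and the state tracks the instantaneous ground state; sweep it quickly and it doesn't.

```python
import numpy as np

def ground_state(H):
    # eigh returns eigenvalues in ascending order, so column 0 of the
    # eigenvector matrix is the instantaneous ground state
    w, v = np.linalg.eigh(H)
    return v[:, 0]

def evolve(total_time, steps, coupling=1.0, sweep=5.0):
    # Ramp the diagonal of a 2x2 Hamiltonian from -sweep to +sweep
    # over total_time, starting in the initial ground state.
    dt = total_time / steps
    psi = ground_state(np.array([[-sweep, coupling],
                                 [coupling, sweep]])).astype(complex)
    for k in range(steps):
        d = -sweep + 2.0 * sweep * (k * dt) / total_time
        H = np.array([[d, coupling], [coupling, -d]])
        w, v = np.linalg.eigh(H)
        # exact step for piecewise-constant H: psi -> exp(-i H dt) psi
        psi = v @ (np.exp(-1j * w * dt) * (v.conj().T @ psi))
    H_final = np.array([[sweep, coupling], [coupling, -sweep]])
    return abs(np.vdot(ground_state(H_final), psi)) ** 2

# Slow sweep stays in the ground state; a fast sweep does not.
print(evolve(100.0, 5000))  # close to 1
print(evolve(0.5, 5000))    # well below 1
```

This is just the Landau-Zener setup: the slow sweep is adiabatic because it's slow compared to the gap, which is the same criterion that lets you deform a trap under a BEC without exciting it.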

(The other possibility is that the change isn’t adiabatic, in which case after the shift, the proper description of the state of the universe is a superposition of all possible many-body energy states for all of the electrons in the universe, which will evolve at some incredibly slow rate. Which is so messy I don’t even want to think about it. I suspect that the adiabaticity criteria wouldn’t be that hard to meet, though– you’d need to be fast compared to something like the age of the universe, and slow compared to the motion of the electrons themselves, and neither of those is particularly constraining.)

In either of these situations, a hyper-strict description of the situation would say that the energy of the many-body wavefunction of all the electrons in the universe changed. So, in that regard, the claim is technically true, even though it’s not remotely useful.

The really relevant question is what you can measure in an experiment, in which case you’re looking at the probability of obtaining a certain value for the energy of a sub-part of the gigantic many-body system through its interaction with a measuring apparatus at a particular position in space and time. And none of the changes above will produce any effect that there would be any hope of ever measuring.

Cox and Forshaw try to justify all this with an appeal to the example of a double-well potential, which gives a way of estimating what sort of difference you’re talking about. They give a wavefunction down at the bottom of the page describing how an electron localized on one side of the system evolves in time, and with a little complex algebra, you can show that the probability of finding it on one side or the other oscillates at a frequency equal to the energy splitting between the states divided by Planck’s constant. For the case of the smallish barrier they animate, the difference is about 0.005 energy units, leading to an oscillation over about 200 time steps. But they also give an example of a barrier that’s about 100 times higher, and that gives an energy difference of about two parts in 10^{50}, which would take 10^{50} time steps to oscillate. Which isn’t too bad if your time steps are Planck times, I suppose, but I wouldn’t wait around for that in seconds.
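The two-state bookkeeping here is easy to reproduce: an electron localized in the left well is an equal superposition of the symmetric and antisymmetric states, and the probability of finding it on the left oscillates as cos²(ΔE·t/2ħ). A quick sketch in dimensionless units with ħ = 1, plugging in splittings of the same size as the book's toy numbers:

```python
import numpy as np

def p_left(t, dE, hbar=1.0):
    # Probability of finding the electron in the left well at time t,
    # for a symmetric/antisymmetric doublet split by dE
    return np.cos(dE * t / (2.0 * hbar)) ** 2

def transfer_time(dE, hbar=1.0):
    # Time for the electron to tunnel completely to the right well
    # (half the full oscillation period, 2*pi*hbar/dE)
    return np.pi * hbar / dE

print(transfer_time(0.005))   # smallish barrier: hundreds of time units
print(transfer_time(2e-50))   # tall barrier: ~10^50 time units
```

The point of the exercise is just the scaling: shrink ΔE by fifty orders of magnitude and the tunneling time grows by fifty orders of magnitude.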

The relevant quantity for determining the energy difference will be something like the negative exponential of the area under the barrier between wells (in dimensionless units, measuring energy as some multiple of a characteristic energy of the system, and distance in de Broglie wavelengths), in which case, almost any real situation is going to involve differences many orders of magnitude smaller than the part-in-10^{50} difference they give for their toy model. So any realistic treatment of the system will completely ignore these shifts.
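As a rough sketch of that scaling, here is the WKB-style estimate for a made-up flat barrier, in dimensionless units with ħ = m = 1 and order-one prefactors ignored:

```python
import numpy as np

def splitting_scale(height, width, energy=0.0):
    # Relative size of the tunnel splitting for a flat barrier:
    # exp(-integral of sqrt(2m(V - E)) dx / hbar), which for a flat
    # barrier is just exp(-kappa * width)
    kappa = np.sqrt(2.0 * (height - energy))
    return np.exp(-kappa * width)

# Making the barrier 100x taller multiplies kappa by 10, which shows
# up in the exponent: the splitting collapses by dozens of orders of
# magnitude.
small = splitting_scale(height=1.0, width=10.0)
tall = splitting_scale(height=100.0, width=10.0)
print(small, tall)
```

Since the suppression is exponential in the barrier area, any realistic barrier (say, interstellar distances in de Broglie wavelengths) drives the splitting to something absurdly far below even the part-in-10^{50} toy number.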


You know, this whole Coxgate thing is a little bit baffling, because they also make the mistake of taking the Feynman sum-over-paths formalism as a literal description of what is going on between measurements, and they do this quite early on, well before the “everything is connected” argument.

I think I can confidently say that whatever is going on between measurements, it is certainly not this. It does not make any sense as a literal description, and it is completely basis-dependent. Why has no one criticized them on this issue? Is it just because Feynman and Hawking do the same thing, so it must be OK?

I think it’s a combination of the literalized Feynman picture being fairly common in pop-QM and it being a little hard to figure out exactly what they are saying in the early parts. The idea of doing a Feynman-picture treatment of QM from the very beginning is sort of appealing (in roughly the same way that, say, Sakurai’s “start with spin-1/2” approach is appealing), but I’m not wild about the execution. I’ll talk about that more when I write up the whole book.

I think there are stronger things you can say. Will messing with your crystal change the expectation value of any operator with support on some finite region of space-time separated from you by a spacelike interval? It will not.

Not even by a too-tiny-to-measure amount. Now, that doesn’t necessarily refute Cox’s claim, because I don’t think “the energy of an electron,” sufficiently strictly defined, is such an operator. But that just further points out how separated from empirical reality the claim is.

Why restrict it to a many-body wave function for all electrons? Electrons interact with other particles, so why not construct a many-body wave function of all particles and fields in the universe? Clearly a little warming of a diamond crystal will have an effect on all particles and fields in the universe of order 0.

@Matt Leifer: Isn’t the sum-over-classical-paths taken as just a means to calculate the desired amplitude? It’s not really meant as a literal description – do they imply in the book that it describes what is “really” happening at intermediate times?

@twistor Anyone who says that the sum-over-paths formalism implies that particles really do take all possible paths is overstating things, and they do indeed do this, and so does Hawking in his latest book.

Is it really necessary for his claim to be testable? Scientists only accept testable theories for good reason, but an interesting fact based on current theory, told to an audience, doesn’t seem like it necessarily needs to be testable.

*Is it really necessary for his claim to be testable? Scientists only accept testable theories for good reason, but just as an interesting fact based on current theory to tell an audience doesn’t seem like it necessarily needs to be testable.*

The problem is, not only is this claim untestable, it plays directly into one of the most common lines of quantum chicanery, namely that the interconnectedness of all things can somehow be leveraged to achieve magical effects.

Were it not giving aid and comfort to people trying to make a dishonest buck convincing the gullible that quantum mechanics is magic, I’d be happy to shrug it off. But despite the brief disclaimers in the book, this *does* feed directly into the worst sorts of kookery, and as a result, I’m not too happy with it.

So long as we’re being hyper-strict, the number of electrons fluctuates, so this should really be a superposition of different high-dimensional universe-filling mushes, should it not?