Quantum Consciousness and the Penrose Fallacy

Over at Neurophilosophy, Mo links to an article by a physicist, posted on the arXiv, that claims to explain visual perception using quantum mechanics:

A theory of the process of perception is presented based on von Neumann’s quantum theory of measurement and conscious observation. Conscious events that occur are identified with the quantum mechanical “collapses” of the wave function, as specified by the orthodox quantum theory. The wave function, between such perceptual events, describes a state of potential consciousness which evolves via the Schrödinger equation. When a perceptual event is observed, where a wave-function “collapse” occurs through the process of measurement, it actualizes the corresponding neural correlate of consciousness (NCC) brain state. Using this theory and parameters based on known values of neuronal oscillation frequencies and firing rates, the calculated probability distribution of dominance duration of rival states in binocular rivalry under various conditions is in good agreement with available experimental data. This theory naturally explains recently observed marked increase in dominance duration in binocular rivalry upon periodic interruption of stimulus and yields testable predictions for the distribution of perceptual alternation in time.

This sort of “quantum physics explains consciousness” stuff has a long history, and most of it is gibberish. In particular, there tends to be a good deal of circularity in most arguments invoking von Neumann. The “testable predictions” line is new, though, so I decided to download the paper and take a look.

There’s a lot of quantum background in the first couple of sections, but the basic phenomenon being studied is “binocular rivalry”: visual illusions– like the spinning dancer that’s been all over the place recently– that can be seen in one of two states. Manousakis says that these should be modeled as a quantum superposition of two brain states, and constructs a model that appears to reproduce the distribution of times between “flips” from one state to the other.

So, is he really on to something?

Well, the model he produces does reproduce the general shape of the curves, but it’s not clear to me that there’s anything especially quantum about it. He engages in a good deal of quantum handwaving to justify the model, but in the end, what he’s got is just a system in which there’s some probability of seeing either state, and the relative probabilities of the two oscillate (that is, when the probability of seeing one is high, the other is low, and some time later, they switch). He attributes the switching to quantum measurements collapsing the state of the system, and generates a distribution of switching times for various parameters.
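To see how little quantum machinery the switching statistics actually require, here’s a minimal classical sketch (every parameter here is invented for illustration, not taken from the paper): a two-state system whose probability of flipping oscillates in time, simulated as a plain time-varying coin toss.

```python
import math
import random

def simulate_dominance_durations(n_flips=500, dt=0.01, freq=1.0, bias=0.02):
    """Classical toy model of perceptual switching: the probability of
    flipping to the other percept oscillates in time.  This is just a
    time-varying Bernoulli process -- no wavefunction, no collapse."""
    random.seed(0)
    durations = []          # time spent in a percept before each flip
    t = 0.0                 # elapsed time
    t_since_flip = 0.0      # time since the last flip
    while len(durations) < n_flips:
        # Per-step switching probability oscillates between 0 and `bias`.
        p_switch = bias * (1 + math.sin(2 * math.pi * freq * t)) / 2
        if random.random() < p_switch:
            durations.append(t_since_flip)
            t_since_flip = 0.0
        t += dt
        t_since_flip += dt
    return durations
```

Histogramming the output gives a skewed, peaked distribution of dominance durations. The point is only that oscillation plus randomness is enough to generate curves of that general character, with no quantum ingredients at all.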

The thing is, there’s nothing intrinsically quantum about this arrangement– lots of things oscillate, lots of things are probabilistic, and the mere fact that something is oscillating and probabilistic does not mean that it’s quantum. He goes through a bunch of work to derive these probabilities from a unitary matrix, but that’s just linear algebra, and again, the use of matrix algebra doesn’t make it a quantum system. If it did, then Excel would be quantum, because I can use it to do least-squares curve fitting, and you can describe those in matrix notation.
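The least-squares point can be made concrete. Ordinary linear regression is “derived from matrices” in exactly the same sense, via the normal equations beta = (X^T X)^(-1) X^T y, and nobody would call it quantum (this is a generic textbook calculation, not code from the paper):

```python
def least_squares_line(xs, ys):
    """Fit y = a + b*x by solving the 2x2 normal equations,
    (X^T X) beta = X^T y, written out by hand.  Matrix algebra
    all the way down, and not a qubit in sight."""
    n = len(xs)
    sx = sum(xs)
    sy = sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    det = n * sxx - sx * sx          # determinant of X^T X
    a = (sy * sxx - sx * sxy) / det  # intercept
    b = (n * sxy - sx * sy) / det    # slope
    return a, b
```

For example, `least_squares_line([0, 1, 2, 3], [1, 3, 5, 7])` returns `(1.0, 2.0)`, recovering y = 1 + 2x exactly.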

I’m also a little dubious about calling this a “prediction.” I mean, yes, the model he uses does generate a distribution of switching times with the right general shape, while a different model apparently does not. But it’s really more of a fit than a prediction– the parameters used in the model do not appear to have come from any other measurement, but rather have been chosen to give the best possible agreement between the model distribution and the distributions measured in psychology experiments. Which is fine as far as it goes, but unless you can compare the parameters (a couple of time scales, one for the oscillation and one for the perception of time in the brain) to similar parameters measured by other means, or use those parameters to generate curves for some other condition, you haven’t really proved anything. He applies the model to two different situations (two versions of the same experiment, one involving subjects who were on LSD), but it’s essentially a curve fit both times– none of the parameters are the same, and there’s no explanation for the changes.
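The fit-versus-prediction distinction is easy to illustrate. Any two-parameter family, say a gamma distribution, which is a common classical description of dominance-duration histograms, can be made to agree with the data by letting the data themselves choose the parameters (a hypothetical helper for illustration, not anything from the paper):

```python
import statistics

def gamma_moment_fit(durations):
    """Moment-match a gamma distribution to observed dominance durations.
    This is a *fit*, not a prediction: shape and scale are read off the
    data, not obtained from any independent measurement."""
    m = statistics.mean(durations)
    v = statistics.variance(durations)
    shape = m * m / v   # gamma shape parameter k
    scale = v / m       # gamma scale parameter theta
    return shape, scale
```

Agreement between such a fitted curve and the histogram proves very little by itself; the real test is whether independently measured parameters, or a single parameter set carried across conditions, still reproduce the data.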

It’s nice that the model reproduces the shape of the distribution, and this may or may not be something new– I don’t know anything about neuroscience, so I don’t know if the competing model he dismisses is actually the best anybody else has to offer, or if this is another case of an arrogant physicist reinventing the wheel for another discipline. But really, there’s nothing convincingly quantum here. There’s a slightly weird invocation of the Quantum Zeno Effect, but I really don’t see anything that couldn’t be done with classical probability distributions.

The whole “consciousness is quantum” business is a case of what I tend to think of (perhaps unfairly) as the “Penrose fallacy,” because I first ran into it in The Emperor’s New Mind. The argument always seems to me to boil down to “We don’t understand consciousness, and we don’t understand quantum measurement, therefore they must be related.” And that just doesn’t make any sense at all.

If you want me to believe that quantum processes are responsible for the workings of the brain, I need to see something that’s intrinsically, well, quantum. Oscillations and probability don’t cut it. Some sort of interference effect would. I need a reason to believe that we’re dealing with a wavefunction (or, better yet, a density matrix), and not just a probability distribution.
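What such a signature would look like is the standard two-path textbook calculation (again, nothing from the paper itself): classical probabilities for two alternatives simply add, while quantum amplitudes add before squaring, producing a phase-dependent cross term that no classical probability distribution can mimic.

```python
import cmath
import math

def detection_probabilities(phase):
    """Two equal-amplitude paths with a relative phase.  Classically the
    probabilities just add to 1; quantum mechanically the amplitudes add
    first, giving P = 1 + cos(phase) -- the interference term."""
    a1 = 1 / math.sqrt(2)
    a2 = cmath.exp(1j * phase) / math.sqrt(2)
    classical = abs(a1) ** 2 + abs(a2) ** 2   # always 1.0
    quantum = abs(a1 + a2) ** 2               # 1 + cos(phase)
    return classical, quantum
```

At phase = pi the quantum probability vanishes entirely while the classical one sits at 1. That kind of effect, visible in the brain, would actually demand a wavefunction rather than a probability distribution.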