Parallel Universes and Morality

A little while back, when I complained about the treatment of the multiverse in Anathem, a number of people commented to say that it wasn’t all that bad. And, indeed, they were right. Compared to last night’s History Channel program on “Parallel Universes,” Stephenson’s book is a miracle of subtle nuance, teasing out the crucial distinctions between different theories, and making them clear to the reader.

Yeesh. That was so actively irritating that I don’t know where to start. So I won’t– you can read what I wrote in the earlier post, and apply it to the History Channel, ten times over.

Instead, I’ll take this opportunity to complain about something else that bugs me whenever the subject of Many-Worlds or multiverses comes up: the attempt to claim that it has moral implications. There were at least two different points at which they made the argument that there must exist universes containing versions of ourselves in which all possible actions are explored. Then the narrator, or one of the talking heads, would come in with some breathless exclamation about how this has profound ethical implications, because there’s some universe out there in which you break the law and get away with it.

The claim that an infinite universe, parallel universes, or the Many-Worlds Interpretation of quantum theory has something to say about ethics is not quite as stupid as the “Atheism is evil because without God there is no morality” argument. It comes pretty close, though.

Sadly, this sort of thing is distressingly popular. It’s rare for Many-Worlds to come up in a science fiction context without somebody dredging up the old Larry Niven story in which somebody proves that Many-Worlds is true, and then the researchers all commit suicide as a result. I don’t think the reasoning is exactly “It must happen in some universe, so it might as well be this one,” but it’s not a whole lot less dumb.

Let me be clear: There is absolutely no moral or ethical content in Many-Worlds, multiverse cosmology, or the massive infinite-universe inflationary scenario. None whatsoever. Zip, zero, nada.

Ethics is about determining right action. This is not a calculation that is affected by the existence of alternate universes. Knowing that some alternate-universe version of me will kick a puppy doesn’t make it all right for me to kick puppies, any more than knowing that George Bush is down with waterboarding makes it all right to torture prisoners. The right thing to do is the right thing to do, regardless of what anybody else does or doesn’t do, in this universe or any other.

To the extent that we have free will, or believe that we have free will, it is incumbent upon us to choose to do the right thing. If some sort of quantum fluctuation causes some alternate-universe version of you to do the wrong thing, that does not reflect poorly upon your character, and noble deeds in an alternate universe do not mitigate discreditable actions in this universe. It’s up to us to do the right thing, whatever that may be.

The morality of multiple universes is a wonderful topic for dorm-room stoner bullshit (and crap television documentaries, more’s the pity), but absent a great deal of marijuana, I just don’t see any way to take this nonsense seriously. It’s not science. It’s not even philosophy.

34 thoughts on “Parallel Universes and Morality”

  1. I agree. I never bought into this many-worlds-affects-morality-and-behavior business even when Larry Niven first did it in “All the Myriad Ways”. The argument usually runs something like, “Why should I not kill Chad? Even if I choose not to, some other version of me chooses to do so!” That’s stupid – you’re stuck with your personal branch of the multiverse; you might as well make it one in which you’d like to live, to the extent that you can. No one is going to decide to, e.g., throw herself off a bridge because an alternate version of herself won’t jump. Who cares what alternate versions of you are doing?

  2. Yeah, but the problem is, if you take the conclusions of the many-worlds hypothesis out to their farthest logical extremes– that all possible actions are taken– that does have profound implications for our understanding of “free will,” which in turn has profound implications for our understanding of morality.

    If all actions are taken– if they are all, literally, acted out, in all possible permutations– what exactly is meant by freedom? What is meant by choice? You assert that it’s incumbent on us to “choose” right actions, but as you are using it, the word appears to be a meaningless noise.

    And contra your assertions, this sure as hell is philosophy. Kierkegaard would have had a field day with it.

  3. Actually there may be a quirk of MW theory which has possible moral implications, and which also casts doubt on the theory as such. It’s called “quantum suicide” (cf. quantum immortality; check Wikipedia). If we imagine that only your conscious selves serve as possible carriers of what “you observe” as you split up into MW, then you can do this:

    Set up the Schrödinger’s cat experiment with yourself inside. Well, there is always a chance you will survive even after many half-lives of the death-dealing source (a rough calculation of just how small that chance gets is sketched below). Hence there are always some “yous” which can enter some “branches” of MW. Sure, “you” die most of the time, but the conscious survivors are the ones saying “Hey, here I still am after all this time.” Hence you can never die! If you get into the chamber, you will always “still be there,” wondering WTH happened and how you survived millions of half-lives of an unstable nucleus. BTW this does not depend on whether consciousness collapses wavefunctions or is “weird,” since any sentient being can only notice its survival, right? Heh, check out my humor piece on this at http://www.tidewater.us.mensa.org/Newsletter/Current/mtides01.pdf. (That will move to archives next month.)

    It is eerily reminiscent of the multiverse (but with different constants) idea for explaining anthropic fine-tuning. OK, so the moral relevance: is it ethical to try to kill yourself this way? Your “self” always seems to survive, but what of all the other versions of you that you made die (and what if death is not sudden …)? Also, what if I talk someone else into doing it, say some nerd reading this doggerel right now … Or maybe I make them. If the theory is right, that person will not experience death – i.e. will only experience continued life (and you can get out of the machine too if it has a time limit on possible action). But I will likely see the person die, so I “caused his death” in most of the worlds he leaves behind. See the problem?

    Finally, having that outcome ruins the whole “proportional” argument that sophistical MW advocates put out to imply that each of us has the same “expectation” of outcomes in MW as in ordinary collapse. BTW I don’t believe “decoherence” is a non-circular argument for making “apparent” collapse work either.
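
    To put rough numbers on the quantum-suicide setup above (a sketch for illustration only; the sole input is the textbook survival probability of (1/2)^n after n half-lives of the trigger source):

```python
# Survival probability after n half-lives of the death-dealing source.
# Illustrative only: the point is how quickly the ordinary probability falls,
# since the quantum-immortality reading keeps only the surviving branches.
for n in (1, 10, 100, 1000):
    p_survive = 0.5 ** n
    print(f"after {n:>4} half-lives: P(survive) = {p_survive:.3e}")

# On a collapse reading, surviving 1000 half-lives (P of roughly 1e-301) is as
# good as impossible; on the quantum-immortality reading, those vanishingly
# rare branches are the only ones in which "you" are around to notice anything.
```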

  4. What good does it do me in THIS universe if one of my alternates in another universe can get away with murder?

    If I try it in this universe, I go to jail in this universe. So what if one of my alternates is a jackass or outright criminal? Let HIS universe deal with him. It has nothing to do with MY universe.

    I’m sure there’s a universe somewhere where Adolf Hitler was accepted into the art academy and became a renowned painter. What good does that do our universe?

  5. One can imagine a universe in which pain and fear do not exist, where living things are impervious to any kind of trauma, and where sentient beings enjoy their own deaths and manage to reproduce anyway. In such a place, ethical behavior would be very different from what it is here.

    In the meantime, when I stub my toe, it still hurts.

  6. ZacharySmith, to answer your first question: You ask “What good does it do me in THIS universe if …” but “ethics/morality” by definition is at least partly about what good or bad it does everyone else as well, for you to do this or that. Hence let’s leave aside the question of whether I have the right to see if I can kill myself – I think I do, since it is my own control over myself.

    However, do I have the right to coax someone else into the quantum “suicide machine”, knowing that person will have to experience his or her survival and therefore “never die” in there? But even though they can’t experience death as it were, “I” (whatever that means if a nutty idea like MW is taken as true) will almost surely see them die and therefore by all ordinary measures I am a murderer. Heh, I’d like to see some smartass build such a machine and then use that excuse to get away with killing the poor schmuck (get away in most universes as I am now tired of writing.)

    BTW it is quite possible to build this device. I challenge any supporter of MW to have the balls/etc. to actually get in one. We can even arrange a cash prize of $10,000,000 or so waiting outside, which you can only collect when the temporary danger period is over. Since we might set up a chance of only, say, 1 in 10,000,000 of a “you” getting out alive after the danger period, it is an eminently reasonable bet for the rest of us (the expected-value arithmetic is sketched below). But since you expect to experience surviving and winning, why not show your “faith” in MW and come out a multi-millionaire to boot? Not only that, but you can see the amazed expressions of the versions of us that see you walk out alive! We’d never live it down, heh.
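
    The expected-value arithmetic behind that challenge, as a minimal sketch (the $10,000,000 prize and the 1-in-10,000,000 survival odds are simply the figures quoted in the comment above):

```python
# Expected payout of the quantum-suicide wager, from two points of view.
p_survive = 1 / 10_000_000   # chance of a "you" walking out, as quoted above
prize = 10_000_000           # dollars, collectable only if you walk out

# Ordinary (single-world / collapse) bookkeeping:
print(f"expected payout per attempt: ${p_survive * prize:.2f}")  # $1.00

# The quantum-immortality bookkeeping argued for above instead treats the
# subjective probability of experiencing survival as 1, making the "expected"
# payout the full prize; that gap is exactly what the bet is meant to expose.
```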

  7. If all actions are taken– if they are all, literally, acted out, in all possible permutations– what exactly is meant by freedom? What is meant by choice? You assert that it’s incumbent on us to “choose” right actions, but as you are using it, the word appears to be a meaningless noise.

    That’s why I said “…or believe we have free will.” I would argue that there’s no practical difference between free will and the illusion of free will, in terms of what you ought to do with it.

    I also don’t think that parallel universes have any implication for free will that isn’t already raised by neuroscience. Which has the added bonus of being based on actual science, as opposed to fanciful speculation largely decoupled from reality.

  8. Well, at the risk of being too prevalent here, I think this is relevant: Suppose we change the quantum suicide scheme so it doesn’t actually kill you if the agent nucleus decays. Let’s say it injects you with something causing total but temporary unconsciousness. Let’s call this “quantum stupefaction” 😉 Well, the logic regarding “awareness” should work the same way at first. “I” expect to be one of the tiny minority that remains awake and knows it, for how can that be “experienced” any differently than if most of me actually died?

    Yet after a while, all the other mes start waking up, and there are so many more of them than those who never got put to sleep. So, now what? If “I” can “expect” to be able to say, “How fortunate, I didn’t even have to become unconscious,” what then takes over the expectations when the others wake up? So what do I really have to “likely” look forward to: not even being knocked out at all, or being put to sleep and then waking up? The first option follows the original logic of quantum suicide, but the latter follows what we expect from the total chance of being put to sleep and then waking up. Quantum [non-]suicide was at least a potentially viable if wacky notion, but I think this makes the whole idea (and by extension, many-worlds “theory”) look silly.

  9. I thought this was all worked out on that Star Trek episode where all the bad guys in the alternate universe had goatees.

  10. That’s why I said “…or believe we have free will.” I would argue that there’s no practical difference between free will and the illusion of free will, in terms of what you ought to do with it.

    Really? Tell me, what ought one to do with a lack of free will, and how might one decide to do it?

  11. Really? Tell me, what ought one to do with a lack of free will, and how might one decide to do it?

    I don’t care.
    Really, “What if we don’t have free will?” is one of the least interesting debates in the history of ideas. It’s right up there with “What if we’re all being simulated in a giant computer?” and “What if the entire universe was created last Tuesday, and just made to look really old?”

    “Free will does not exist” is the philosophical equivalent of the wavefunction that is zero everywhere. Yeah, it’s always a perfectly valid solution, but it’s not an interesting solution.

  12. I would argue that there’s no practical difference between free will and the illusion of free will, in terms of what you ought to do with it.

    Yes! Finally someone agrees with me.

    As for the ethical consequences of the MWI…
    I’m of the opinion that actions should be judged by all their possible results, weighted by their respective probabilities. But I came to this conclusion before I understood the MWI.

    Quantum immortality seems like a lot of bunk to me (the fun, philosophical type of bunk). I mean, what happens if there’s a branch of the many worlds in which you are guaranteed to die? Who is living in that branch? Why can’t it be you living in that branch? Sure, if quantum immortality were true, that would have a big impact on ethics, but then so would the existence of an afterlife which eternally rewards homophobes…
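
    The “judged by all their possible results, weighted by their respective probabilities” rule in the comment above reads as ordinary expected utility; here is a minimal sketch of the calculation it implies (the outcomes, probabilities, and utilities are invented purely for illustration):

```python
# Expected utility: score an action by summing the utility of each possible
# outcome, weighted by that outcome's probability. All numbers are made up.

def expected_utility(outcomes):
    """outcomes: iterable of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

# A mostly harmless action with a small chance of a catastrophic outcome:
risky = [(0.999, 1.0), (0.001, -1000.0)]
# A modest but certain good:
safe = [(1.0, 0.5)]

print(expected_utility(risky))  # roughly -0.001: the rare disaster dominates
print(expected_utility(safe))   # 0.5
```

    Nothing in the calculation changes if the weighted outcomes are thought of as branches of a wavefunction rather than as ordinary probabilities, which seems to be the commenter’s point.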

  13. Yeah, and therein lies the problem– you’re conflating, “I am not interested in this subject,” with “This subject has no meaning and no information content.”

    And really, it’s not my fault if you’re so bored by the subject that you make confused statements about the ought-ness of our actions under conditions where we cannot actually choose them.

    Not to mention, writing a big long blog post about something is a curious way of signaling your boredom with the topic.

  14. So…

    If MW were true, I could act altruistically towards my counterparts in parallel universes by committing some heinous crime in this universe. Because, if I did it here and suffered the consequences, at least one of my other “I”s would not do it, and would not be punished for it.
    (Of course, ignoring the “I”s that would be rewarded for committing the same crime in another universe, or those that would get punished for not committing it in their universe, or…)
    Hey, this is deep, man… anyone got a lighter?…so, what were we talking about again?…

  15. 1. The piece of Many-Worlds that nobody ever takes seriously in these sorts of discussions is that different futures (places on the wavefunction that have our own spot as probable history (disregarding crazy high-entropy possible histories)) are *weighted* by features of the present.
    Yes, there are some futures in which I flip out a minute from now and go on a killing spree, but given my current neurological makeup, I’m pretty confident that those futures are negligibly weighted. And things so negligibly weighted shouldn’t matter to our moral thought if we’re not getting bloody metaphysical about it.

    2. I agree that, if we were rational, Many-Worlds ought not to matter to our moral deliberations; but we’re not rational. One *positive* effect I think it has is that if you’re thinking about taking a substantial risk (e.g. driving home drunk), you can’t just “hope you get away with it”; you’re resigning yourself to committing homicide in a significant portion of your futures.
    If we were rational, then even in a single universe we’d weight proportionately against actions with a substantial probability of harm; but we human beings tend to fail at that on a regular basis. Thinking of all possible futures happening with the relevant weights actually mitigates that bias for me somewhat.

  16. I have not read all the comments here, but I do not believe anyone has considered this.

    First and foremost, where did the concepts of ethics and morality arise from?
    To put it quite simply, we made them up! These concepts do not arise without a social context such as the one we’ve created – one requiring stability, for if there were no stable social constructs, our species would not have advanced as far as we have. Morality is a human-borne concept, nothing further. Now it could be argued that other social species (on this planet) have rudimentary moral concepts, but the point stands.

    This universe, as well as any others if they exist, is NOT human-borne – human beings are a product of the universe.

    Similarly, it is a further human folly to think that the choices we make (“we” in this case implicitly referencing the concept of free will) lead to events in other universes, or, even more ludicrous, that the antithesis to each of our decisions creates alternate universes. It’s called self-interest – the drive to live, to not die, to avoid harm; those are all aspects of self-interest. The survival instincts of an insect and humans’ tendency to aggrandize their self-concepts are, in their essence, one and the same drive or motivation.

    If the multi-universe idea is correct, then we ourselves (including our choices we like to call free will) are the result of random consequences of the interactions between sub-atomic particles and the like. In the words of the late George Carlin, “Enjoy the ride.”

  17. Not to mention, writing a big long blog post about something is a curious way of signaling your boredom with the topic.

    I’m not bored with it, I’m contemptuous of it. That readily lends itself to blogging.

  18. I have never heard anyone suggest a connection between multiple universes and morality, so I am very surprised at your claim to have come across such suggestions with any regularity. Where do you find this stuff?

    I find that (some versions of) multiple universe theory are extremely depressing, as they imply that there’s nothing anyone can do about the total amount of suffering in existence, and that there are universes where everyone is doomed to suffer to the greatest possible extent, with the least possible relief. All we can do is to try and help people living in our own universe, and try not to think about all the others.

    I find this picture of reality impossible to swallow. We experience an analogous hopelessness in our own universe, but here it is multiplied infinity-fold. But again, these are my own thoughts on the matter, not a paraphrase of thoughts that I’ve read elsewhere.

  19. FYI, the actual argument that many-worlds theory has moral implications has nothing to do with free will. It’s premised on utilitarianism and goes something like this:

    1. If MWT is true, then each thing that could happen does happen in some branch.

    2. Whether I do this or that only makes a difference to what happens in *my* branch—it does not change the fact that each thing that could happen does happen in some branch.

    3. Utilitarianism: the right action is the one that has the best consequences *for the universe*—not just for *my* branch of the universe.

    4. Therefore, all actions are equally right, since all actions have the same consequences *for the universe*.

    The argument is a bad one, in my opinion, because it is premised on utilitarianism, which is a bunk theory of morality. Nevertheless, many people take utilitarianism very seriously, and hence ought to take this argument seriously. Unfortunately, Chad completely dropped the ball on this one, since he thought the argument too silly to even bother presenting.

    Further discussion of the argument, its history in analytic philosophy, and an attempt at constructing a non-utilitarian version of it can be found in a paper by Mark Heller, available here:

    http://www.springerlink.com/content/kh22l61jm4355716/

  20. Sorry, I think philosophers use terms slightly differently here than physicists do. Where I said “branch” you should understand “universe”, and where I said “universe” you should understand “multiverse”.

  21. dtlocke… how does (4) follow from (1)+(2)+(3)? Also, you describe a fairly generic version of consequentialism; could you be specific as to which flavor of utilitarianism you are using here?

    General comment for many of the posts: people tend to downplay that in many ethical systems out there (Kantian deontology, Mill’s utilitarianism, DCT, etc.) there is a critical role for motive and/or consequences in the evaluation of the moral agent/action. It is not clear how people claiming that many-worlds, parallel worlds, multiverses, etc. impact ethical evaluation take motive or consequences into account.

  22. Michael,

    Do you think that (4) does *not* follow from (1), (2), and (3)? [Of course, (4) should actually say “If MWT is true, then…”] What do you need help with?

    Also, I’m using the version of utilitarianism that I stated–i.e., the one that says the right action is the one that has the best consequences for the universe (= multiverse in physicist lingo). You may know this as “act utilitarianism”.

  23. dtlocke,

    I’m just needing help with the logical flow of the argument. (My logic classes were many years ago.)

    Like how would someone understand the following argument?
    (1) If my cat is asleep, then it must be noon.
    (2) I have to have a report turned in shortly after noon.
    (3) My report must be formatted in either APA or MLA style.
    (4) Therefore, if my cat is asleep, my cat ate breakfast, since she is purring.

    Each statement is true, but needs some unpacking before we actually get a sound argument.

    Or, for the more math-inclined, it’s the infamous problem in grad school of going from 1+1=2 to actually proving it.

    How do all actions have the same results for the multiverse? Any action made within a single universe would still only be judged moral/immoral within that universe, unless you are working with a consequentialist theory that accounts for judging a moral action both within a single universe and across the multiverse. Only by making it across the multiverse could you get close to making the net calculus work out so that all actions are equal (whatever that means).

    We need a more robust account of which moral theory you are trying to use here. Act-utilitarianism simply defines whether you judge individual acts (versus rule-utilitarianism, which will average out best results over time by following a rule). You still must account for what you mean by “best consequences for the universe”… e.g., terrorists in Iraq would probably vote for “best consequences for the universe” to be equivalent to “death to all U.S. soldiers and non-believers.” Hence, you actually need to define which utilitarian theory you are using, not a generic consequentialist frame.

    Once you establish “best consequences” and the scope of what counts in the calculation, then we can begin to articulate whether or not free will pops up, whether or not there are any moral implications related to many-world theory, and whether or not free will has anything to do with morality in this case.

  24. Michael,

    I find your response baffling. I said right in my original post that the utilitarianism I am working with says that “the right action is the one that has the best consequences *for the universe*—not just for *my* branch of the universe.”

    I also said in my second comment that by “branch” I mean what physicists mean when they say “universe” and by “universe” I mean what physicists often mean when they say “multiverse”. Hence, the utilitarianism I’m working with says, in physicist lingo, that the right action is the one that has the best consequences for the multiverse—not just my universe.

    Doesn’t this answer your question?

  25. dtlocke,
    I don’t know whether your argument is philosophically sound, but scientifically, it fails.

    It’s premised on the idea that we “choose” which branch we go down. No, that’s bad QM, bad! That’s on the level of What the Bleep. Quantum probabilities are independent of anything we might call “human will”. If anything, quantum probabilities determine your choices, not the other way around. And that’s presupposing that the brain is primarily governed by quantum randomness as opposed to just regular classical randomness.

    I think that to exercise your will, you basically ignore all that randomness and say, “I’m going to choose this anyway, regardless of what part of the universal wavefunction I’m in.”

  26. Dtlocke,

    I think you are getting closer, but I’m still as baffled as you. Your argument does not logically follow. As it stands, even with your modifications, it simply is not valid.

    Even if it were valid, it remains unsound because of the vacuous nature of some of the claims. I can’t help you with making it valid, but I’ve been trying to suggest that you can strengthen your argument if you give us a better account of the specific form of utilitarianism you are using. As it is, you continue to leave us with a very generic form of consequentialism. This is analogous to someone telling me I should believe in god to be saved, but then refusing to tell me what he means by god. Or telling me I should believe in evolution, but not clarifying which version of evolution he expects me to believe in, except something like ‘that idea that says we evolve.’ Fair enough, but there are various ideas about exactly how species evolve, and to effectively evaluate one and its impact on other areas of inquiry, we have to know some of the details.

    Your consequentialist account of ethics does not give enough detail to be useful for evaluation. I still have no idea what you mean by “best consequences for the multiverse.” Are you implying a Millian version of utilitarianism where you want to minimize pain/unhappiness and maximize pleasure/happiness? Or are you a preference utilitarian? Or perhaps the best universe is one where scientists rule and religious people are second class? Is your brand of utilitarianism favorable to individual liberty, or an extreme form that encourages slavery and experimenting on prisoners?

    What do you mean by claiming that all actions are equally right, since all actions have the same consequences?

    Are you claiming that the theory is inadequate because it is impossible to make ethical judgments? Or is it inadequate because it makes all ethical judgments equivalent?

    This is a key question, because if a moral agent’s actions locally increase the good but globally cause bad, how do you judge the moral act as wrong if there is no causal chain from a local universe to the global multiverse?

  27. The most irritating aspect about MWI is that when people make claims about histories, they only sum up the ones that look like reasonable classical histories.

    Most of the histories are blatantly inconsistent nonsense: A killing B, and B being resurrected by a conflagration of carnations that happened to take the shape of B, and then A remembering being killed by B but being resurrected by ants.

    Anyone who has studied quantum path integrals knows that only the really jagged trajectories contribute, and the smooth ones that would seem to make sense classically have measure zero.
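
    A toy illustration of the scaling behind that claim (not a real path-integral computation; it just uses the standard fact that typical paths contributing to the Euclidean path integral look like Brownian motion, whose increments scale as the square root of the time step):

```python
# Brownian-motion toy model of "typical" path-integral trajectories:
# increments scale like sqrt(dt), so the apparent velocity |dx|/dt blows up
# as dt -> 0, i.e. smooth, classically sensible paths carry no weight.
import numpy as np

rng = np.random.default_rng(0)

for n_steps in (10, 100, 1_000, 10_000):
    dt = 1.0 / n_steps
    dx = rng.normal(scale=np.sqrt(dt), size=n_steps)  # Wiener increments
    print(f"dt = {dt:.0e}: typical |dx|/dt ~ {np.mean(np.abs(dx)) / dt:.0f}")
# The apparent "speed" grows like 1/sqrt(dt), with no finite limit.
```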

  28. David B., I’m not a fan of the MWI myself, but to be fair, the histories used there are not the ones appearing in the path integral, but the consistent histories approximating the wavefunction due to decoherence. Only after we trace out the environmental degrees of freedom do we end up with classical-looking alternatives. Then one is free to use their favorite interpretation of classical probability theory. If you have a taste for the esoteric, you can refer to different values of a random variable as different “worlds”, but you can do that whenever you have different alternatives with assigned probabilities; it is not specific to quantum mechanics.

  29. Like dtlocke, I’m afraid Chad’s post simply doesn’t engage with the real philosophical arguments for why a many-worlds or multiverse hypothesis would have radical ethical implications. For more detail, see my old post on ‘Multiversal Ethics’.

    Chad claims: “The right thing to do is the right thing to do, regardless of what anybody else does or doesn’t do.”

    But this is plainly false. All else equal, it’s good and right for me to donate a bunch of money to Oxfam. This is because doing so would serve a valuable end, namely the welfare of people in the third world. But now suppose my action would influence others in ways that would ultimately undermine this end. Suppose, for example, that a nuclear-armed madman has credibly declared that he will nuke all of Africa if I donate to Oxfam. In that case, it would obviously be wrong for me to go ahead and donate. The downstream consequences — including those where other agents play an intermediary causal role — matter.

  30. Miller, when philosophers use “choose”, unless they specifically tie themselves to a position in the free will debate, they aren’t assuming some ontological significance to choice. That is, choice may well be an emergent phenomenon of quantum mechanics. The position that choice is irreducible and produces different routes is Libertarianism. I suspect that while it is popular with religious people, most don’t assume it. (And even among the religious, Calvinists adopt a position where Libertarianism is false.)

    Without going down the black hole that is the debate over free will, I’ll just say that the issue of choice seems orthogonal to the question at hand.

  31. Mundanes attribute moral sentiments to the Multiverse for the same delusional reasons that they attribute moral sentiments to Darwin. They do not object to General Relativity, nor Quantum Mechanics, nor Newtonian Universal Gravitation (except maybe in Joshua 10:13, King James Bible:
    “And the sun stood still, and the moon stayed, until the people had avenged themselves upon their enemies. Is not this written in the book of Jasher? So the sun stood still in the midst of heaven, and hasted not to go down about a whole day.”), because those don’t challenge their demon-haunted world. That is, such science may safely be ignored, except that it must be stamped out of the curriculum and explained away to children if it threatens cult beliefs. The issue for these “elect” is not that there are alternative interpretations of quantum evolution by unitary operators (though they twitch at the word “evolution” in quantum evolution or cosmic evolution), but that anything at all might undermine their cult belief in “Free Will.” If my students or their parents pull that one on me this semester, I will assign them to write an essay on the free will theorem of John H. Conway and Simon B. Kochen.
