The Simulation vs. Murphy

The more I think about the simulation hypothesis, the less likely I think it is.

Sure, it’s cool as can be to think that our distant descendants might create a gigantic computer simulation to replay their past, meaning us. There are even reasons why they might want to do it–ancestor worship, the study of history, the wish to see how things might have turned out if something were changed, even the rather sadistic yen to see how we would handle this (mwahaha!). They could even run it many times, a possibility that has led some to argue that if they could, they would, and that the real world would then be just one world among many, most of them simulations. By sheer numbers, then, the odds would favor the simulation hypothesis.
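The counting argument is simple enough to sketch in a few lines. This is a toy illustration with made-up numbers, not anyone’s official formulation: if one unsimulated world runs N indistinguishable simulations, a randomly placed observer’s chance of being in the real one is 1/(N+1).

```python
# Toy version of the counting argument (hypothetical numbers):
# one "base" world plus N simulations, all indistinguishable from
# the inside, so an observer is equally likely to be in any of them.

def p_base_reality(n_simulations: int) -> float:
    """Probability of being in the one unsimulated world."""
    return 1 / (n_simulations + 1)

for n in (0, 1, 100, 1_000_000):
    print(f"{n:>9} simulations -> P(base reality) = {p_base_reality(n):.6f}")
```

The more simulations you grant the descendants, the worse your odds of being real–which is the whole force of the argument.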

Could they do it? Some computer scientists argue that even a computer the size of the universe would not be big enough to simulate the entire universe. There are just too many atoms, electrons, photons, and so on to keep track of. Yet if we consider that the only way we have of knowing the universe is through our senses (yes–we use instruments such as radio telescopes, but they report to us through our senses), then the problem is no longer that of simulating the universe, but of simulating the stream of data that reaches the brain via the senses. That seems more manageable, and thus more likely.
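A back-of-the-envelope calculation shows just how much more manageable the sense-data version is. All the figures below are rough, commonly cited orders of magnitude (about 10^80 particles in the observable universe; sensory bandwidth and population are loose assumptions of mine, not measurements), so treat the output as an illustration, not a result.

```python
# Rough comparison: simulating every particle in the universe
# vs. simulating only the sense data fed to every human brain.
# All numbers are order-of-magnitude assumptions for illustration.

PARTICLES_IN_UNIVERSE = 1e80                      # commonly quoted estimate
SENSORY_BITS_PER_SEC = 1e9                        # generous guess per person
SECONDS_PER_LIFETIME = 80 * 365.25 * 24 * 3600    # ~80-year lifespan
POPULATION = 1e10                                 # round figure, ~10 billion

bits_for_all_senses = SENSORY_BITS_PER_SEC * SECONDS_PER_LIFETIME * POPULATION

print(f"Sense-data budget for everyone, ever: ~{bits_for_all_senses:.1e} bits")
print(f"Particles to track in a full simulation: ~{PARTICLES_IN_UNIVERSE:.1e}")
print(f"The full simulation is ~{PARTICLES_IN_UNIVERSE / bits_for_all_senses:.0e}x bigger")
```

Even with generous guesses, every human sense-stream for every lifetime comes to well under 10^30 bits–dozens of orders of magnitude below the particle count. That is the gap between “impossible” and “merely enormous.”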

Will they do it? That is, do we in fact live in a simulation? To answer this question, we must consider that eternal verity known as Murphy’s Law and its many corollaries: Whatever can go wrong, will (https://www.maths.nottingham.ac.uk/plp/pmzibf/some.html).

As most people know, Murphy’s Law is a particular curse of computer systems. Servers crash. Hardware glitches. With software, bugs exist in the code, and fixing them creates more bugs. Software may work quite well most of the time, but from time to time it does some pretty fluky things. This is so common that a major part of the software industry is devoted to creating patches to fix bad code, and then to fix the patches themselves. Indeed, contemporary software is so complex that it cannot possibly be perfect and bug-free (https://www.betabreakers.com/the-myth-of-bug-free-software/). Future software will be even more complex and imperfect. Patches will still be essential.

What would we see if the simulation software were buggy? Charles Fort was famous for collecting strange tales of disappearing people, appearing people, mysterious crop circles, animals falling from the sky, and so on (https://psi-encyclopedia.spr.ac.uk/articles/forteana). Such things might seem to qualify, but oddly, since the dawn of the computer age and social media–which one might expect to make reports of such things spread rapidly and widely–their numbers have fallen off. Sure, there are still crop circles, but social media are dominated by pictures of cute kitties, and there is nothing Fortean about them.

Now if the Moon disappeared one night, that would be a worthy fluke. And if it came back the next day, that would be a pretty convincing sign of a patch. You could say the same thing if Tasmania or Malta vanished, or if the force of gravity changed. If Washington, D.C., or the Georgia state house vanished, that might actually be due to a patch.

But such things just don’t happen. Not even rarely. Maybe the flukes are smaller. Like two days ago you were single, but you woke up yesterday morning with a spouse’s head on the pillow beside you (not just the head, idiot!), and this morning it was a different spouse, with kids yelling in the next room. What will the next patch do to you?

Nope. That doesn’t happen either. Maybe someone taps you on the shoulder and claims to have a relationship with you? Maybe that’s a bug in the software, but more likely it’s booze. Or perhaps a scam.

Suddenly you’re brushing your teeth with the other hand? Parting your hair on the wrong side of your head? Mildly disturbing, perhaps, but at some point the flukes get so small that you shrug and say, “Who cares?” At what point is the bug too small to bother patching?

The granularity of any changes also suggests interesting things. If you’re the only one who sees a problem, then what’s being simulated is surely just your personal stream of sense data. If some see it and others don’t, it’s still sense-data streams being simulated, only several of them at once–more of a server-scale issue. If everyone sees it–as they would a change in gravity–the bug affects the whole simulation. And if the whole universe, including you, is being simulated–which experts say looks impossible–no one should ever notice a bug at all, whether there are any or not.

Do little flukes happen? Sure. But are they because of buggy software? Murphy’s Law says such things happen anyway, and Occam’s Razor (which tells us to prefer the explanation requiring the fewest assumptions) says we should just blame Murphy. It would have to be a pretty drastic fluke–like the Moon or Tasmania disappearing and then returning–to make “simulation” the simpler hypothesis.

So. How much credence should we give the simulation hypothesis? The absence of any bugginess that Murphy’s Law cannot already explain says, “Not much.” To say otherwise would require believing in perfect software.

And that would require a genuine miracle.
