Friday, November 29, 2019

The Doomsday Argument

Series on the Anthropic Principle:
The Anthropic Principle
The Doomsday Argument

65 million years ago, dinosaurs roamed the earth. Then, all of a sudden, they were gone. Something happened, most likely an asteroid strike, that made the earth uninhabitable for them. 251 million years ago, a major volcanic eruption and the ensuing global climate change killed most of the life on the planet. In total, there have been five major mass extinctions in Earth's history.

Human stories are full of tales of the end of the world. In Norse mythology, there is Ragnarok. In the Bible, Armageddon. In modern times, we have the Terminator, Galactus, and disaster movies. We also have fears of climate change, asteroid strikes, nuclear war, and uncontrollable artificial intelligence. Clearly there is a question buried deep in our psychology: is humanity about to end?

The astrophysicist Brandon Carter thought so 35 years ago. His argument went like this: assume the human population continues to grow exponentially until some cataclysmic event reduces our numbers to near or total extinction. If the population doubles every 50 years, then the people alive in the final 50 years roughly equal everyone who came before them combined. So by the Anthropic Principle, you have a 50% chance of living in the final 50 years of human civilization, and a 50% chance of living at some point in the entirety of human history before then.
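
To see where that 50% figure comes from, here is a quick back-of-the-envelope check in Python. It's my own sketch, not Carter's math, and it simplifies history into 50-year blocks:

```python
# With a population that doubles every 50 years, the people alive in the
# final 50-year period make up about half of everyone who ever lived.

periods = 20                            # twenty 50-year doubling periods
pop = [2 ** i for i in range(periods)]  # people alive in each period

total = sum(pop)    # everyone across all of history
final = pop[-1]     # everyone in the last 50 years

print(f"fraction living in the final period: {final / total:.3f}")
# -> 0.500, since 2^(n-1) / (2^n - 1) is just over one half
```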

[Figure: If this were the entire human population over all time, probability says we would expect to find ourselves in the spike on the right side.]

This seems crazy, but it isn’t immediately obvious why. When I argued against it way back in the day, I was completely wrong. The first thing I said was that because you and I are you and I, we are not randomly selected from human history. This is nonsense. If we choose randomly from all humans who ever have or ever will live, you and I are among the possible choices. It’s a perfectly valid framework for asking questions, and that’s what the Anthropic Principle does.

The other bad argument I had against it was this: “Pick a point in history between the dawn of agriculture and the end of time, and the Doomsday Argument will give the same result: humanity is about to end.” This is true, but it is not a counter to the Doomsday Argument. The fallacy was that I switched from a random sampling of humans to a random sampling of time. And because population increases over time, a random sampling over time gives the earlier times undue weight.
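
A small simulation makes the difference between the two sampling schemes vivid. This sketch is mine, with an illustrative doubling rate, not anything from Carter:

```python
# Sampling uniformly over *time* vs. over *people* gives very different
# answers under exponential growth.

import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 1000)   # normalized span of human history
pop = 2.0 ** (t / 0.05)       # population doubling every 5% of the span

uniform_time = rng.choice(t, size=100_000)                     # my old fallacy
by_person    = rng.choice(t, size=100_000, p=pop / pop.sum())  # the Argument's sampling

print(f"median epoch, sampling time:   {np.median(uniform_time):.2f}")
print(f"median epoch, sampling people: {np.median(by_person):.2f}")
# Sampling over time lands in the middle (~0.50); sampling over people
# lands near the end (~0.95), because that's where most of the people are.
```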

So then what is the counter-argument? There are a few. The first one that jumps out at me is that it is really hard to think of a scenario that would make humanity go totally extinct. Think of your favorite existential disaster: catastrophic global warming, nuclear annihilation, a super-virus, a supervolcano, autonomous weapons that decide they want to destroy all humans. If any of these were to wipe out the vast majority of the human species, it would be a terrible, tragic event unparalleled by anything in human history. However, only a small number of people need to survive, and they would be able to repopulate the planet and rebuild civilization.

A big enough asteroid strike could make the Earth uninhabitable, but we have the technology to spot asteroids that big years, sometimes decades, before they would hit us, plenty of time to nudge them off course. There are no supernova or gamma-ray burst progenitors close enough to harm us with their explosions, nor alien civilizations close enough to invade. And the sun won't get hot enough to turn Earth into another Venus for another few hundred million years.

What could cause humanity to go extinct? There is only one realistic example that I can think of: superintelligent AI that wants to exterminate humanity. How likely is this? Well, that deserves a discussion of its own. We can rest assured that there are many extremely intelligent people thinking about this topic, and working hard to foresee the possible risks and dangers of developing artificial intelligence. Of course, there is always a possibility that we will miss something, but the more we think about it and work on it, the more likely we are to spot the dangerous paths and go around them.

The other thing that could cause humanity to go extinct lies in the unknown unknowns. It may be that some technology will be invented that is extremely easy to make, and can wipe out humanity. This is sometimes called a “black ball technology.” If such a potential technology exists, then anyone with the right equipment, materials, and recipe would be able to destroy humanity, and given that there are billions of people on the Earth, we would be in serious trouble.

However, the very thing that makes us vulnerable to black ball technology also guards us against it: technological progress and expansion. Once we are able to leave our home planet and start new civilizations on other planets and moons and giant artificial space habitats, then we will be able to survive even something that destroys all life on Earth.

A second problem with the Doomsday Argument is that it assumes humanity will continue to grow exponentially, and then be cut down to a level so low it cannot recover. We already know this model is incorrect, because the rate of human population growth is slowing. If we apply the Anthropic Principle to another function, say, a linear increase, we find that we have a slightly more than 50% chance (5/9, to be exact) of living within the final third of human history. Given that our species has been around for roughly 200,000 to 300,000 years, that final third could span anywhere from 100,000 to 150,000 more years. And a probability just over 50% means we are almost as likely to live outside that window as inside it. So with linear growth, the Doomsday Argument doesn't tell us much of anything at all!
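
For the curious, here is the arithmetic behind that "slightly more than 50%" figure, sketched in Python under the assumption that the population starts at zero and grows exactly linearly:

```python
# If the population n(t) grows linearly, cumulative births up to time t
# are proportional to t^2 (the area of a triangle), so the chance that a
# random person lives in the final third of history is 1 - (2/3)^2 = 5/9.

from fractions import Fraction

def born_before(t, total_time):
    """Fraction of all people born before time t under linear growth."""
    return Fraction(t, total_time) ** 2

p_final_third = 1 - born_before(2, 3)   # final third of the interval [0, 3]
print(p_final_third, float(p_final_third))   # 5/9, about 0.556
```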

[Figure: Since the area under the curve is about equal in each section, we would be about equally likely to find ourselves in either of them.]

The final problem with the Doomsday Argument is that it assumes the growth of civilization will stop with some catastrophic event. There are plenty of other models of human population that work perfectly well with the Anthropic Principle. For instance, if the curve looks more like a bell, we would be more likely to find ourselves near the top. If it levels off, we would be equally likely to find ourselves anywhere along the level period. If it looks bumpy and wavy, we are more likely to find ourselves near one of the peaks. It's very difficult to say what the human population curve will look like in the future, because every model carries with it plenty of assumptions.
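
To get a feel for how much the answer depends on the shape of the curve, here is a small Monte Carlo sketch. The model curves are hypothetical stand-ins I chose for illustration, not predictions:

```python
# Sample where a randomly chosen person lands under several population shapes.

import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1000)   # normalized human history

curves = {
    "exponential": 2.0 ** (t / 0.05),                # doomsday-style growth
    "bell curve":  np.exp(-((t - 0.5) ** 2) / 0.02),
    "plateau":     np.minimum(t / 0.2, 1.0),         # grows, then levels off
}

for name, pop in curves.items():
    births = rng.choice(t, size=100_000, p=pop / pop.sum())
    q25, q50, q75 = np.percentile(births, [25, 50, 75])
    print(f"{name:12s} median = {q50:.2f}, middle half = [{q25:.2f}, {q75:.2f}]")
# Exponential growth pushes a random person toward the very end; a bell
# curve clusters them near the peak; a plateau spreads them evenly across it.
```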

There is one sense in which the Doomsday Argument seems to get things right. When we look out into the universe, we see a vision of a possible future where life, humanity's descendants, and perhaps alien species thrive around every star and in the spaces between. The people in such a multitude of societies would astronomically outnumber every human who has ever lived. Therefore, regardless of what the population curve looks like, the odds of living on such a civilization's planet of origin, before it spreads out into the galaxy, are astronomically small. It's like winning the lottery and having the prize be another lottery ticket, which wins again and pays out a third ticket, which wins yet again.

Yet here we are. And we have two options. One, we can look at the staggeringly improbable odds that we would find ourselves at this time in such a universe, and hang our heads in despair, declaring that these odds mean such a civilization is doomed never to happen. The other option is to accept that the future has not happened yet; it depends on what we do now. So since we have already won the cosmic lottery, let’s do our part to build the machine that generates our winning tickets. The world hasn’t ended yet, so as Samuel L. Jackson says, let us act as though it intends to spin on.
