Note: this post no longer meets the standards of rationality on this blog. The arguments I present are flawed and lacking in detail, and my views on some of the topics have changed. I may revisit them in the future, and when I do, I will add links from here.
Science is the pinnacle of thought, plowing relentlessly into the unknown, conquering it and making it ours. Science has shown us great truths: that the universe is staggeringly old, that all life is related, that the sun is one star among billions in the galaxy, that the Milky Way is one galaxy among billions in the universe, and countless more. Science is a monolith of thought, knowledge, and understanding. Yet despite this, or perhaps because of it, a few ideas have made it into the mainstream consciousness of the scientifically minded community that do not meet the rigorous standards of a scientific claim. In this discussion, I will go through a few of them, explaining the arguments and then pointing out where they fail. We shall see that they all share a common problem.
Note: I am not talking about scientific ideas that are misunderstood. That is another discussion. This is about ideas that many scientists and science enthusiasts subscribe to, and perhaps should not.
The Doomsday Argument (updated discussion here)
Image credit: Indigodeep on DeviantArt
Because I want to believe humanity will flourish for the rest of the lifetime of the universe, I have a personal interest in debunking this argument. Luckily, that is quite easy to do. The Doomsday Argument assumes that you and I are randomly sampled from all of human history. But we are not random samples; we are you and I. Pick any point in history between the dawn of agriculture and the end of time, and the Doomsday Argument will return the same verdict: humanity is about to end.
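To see exactly what claim is being made, here is a sketch of the standard version of the argument (Gott's formulation). The function names and the 100-billion figure for humans born so far are my own illustration, not part of any canonical statement:

```python
import random

def doomsday_bound(rank, confidence=0.95):
    """Gott-style bound: IF your birth rank is uniformly sampled among
    all N humans who will ever live, then with the given confidence
    N < rank / (1 - confidence)."""
    return rank / (1 - confidence)

def coverage(total_humans, trials=100_000, confidence=0.95):
    """Monte Carlo sanity check: when the rank really is drawn uniformly
    at random, how often does the bound actually contain N?"""
    hits = 0
    for _ in range(trials):
        rank = random.randint(1, total_humans)
        if total_humans <= doomsday_bound(rank, confidence):
            hits += 1
    return hits / trials

# With roughly 100 billion humans born so far, the argument claims
# (at 95% confidence) that fewer than ~2 trillion humans will ever live.
print(doomsday_bound(100e9))   # ≈ 2e12
print(coverage(10_000))        # ≈ 0.95, but only under the sampling assumption
```

The math is internally consistent; the coverage check really does come out near 95%. The entire weight of the argument rests on the "uniformly sampled" assumption in the first docstring, which is precisely the assumption I am disputing above.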
The Doomsday Argument also fails to consider any particular mechanism for human extinction. It is not easy to wipe out an intelligent species, and the more we advance, the more scenarios we can avoid. We have space programs that can find and deflect asteroids on a collision course with Earth. Some disasters might wipe out a significant number of us, but we would be able to rebuild and recover, advancing further than before. We can weather a nuclear winter in underground bunkers. We can adapt to global warming. There are still possibilities we cannot avoid, like the sun suddenly increasing its temperature or a nearby star going supernova, but we can catalog these possibilities, and it is reasonable to assume that none of them will happen within a few billion years. Despite the cynicism of the Doomsday Argument, it looks like humanity will be here for a long time.
The Simulation Hypothesis
Our simulation capabilities are getting better and better. We are at the point where we can put a billion particles in a virtual box, shake it up, and watch them form into galaxies and galaxy clusters. The particles we use have the mass of a thousand suns, but if computing power continues to grow, there may be no limit to the complexity of the simulations we can run. In fact, a billion years from now, when we have mega-computers that surround entire stars, we may be able to simulate a universe the size of our own, with particle-perfect precision. Given the vast number of stars in the universe, there are bound to eventually be a staggering number of simulated universes, compared to our one real universe. By this reasoning, we are overwhelmingly more likely to find ourselves in one of the many virtual universes than in the one real universe.
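The counting at the heart of the argument can be made explicit with a toy model. The tree structure and parameter names here are my own illustration, chosen only to show how fast the "real" fraction shrinks:

```python
def fraction_real(sims_per_universe, depth):
    """Count universes in a uniform tree: one root ("real") universe,
    each universe running `sims_per_universe` child simulations,
    nested `depth` levels deep. Returns the fraction that are real."""
    total = sum(sims_per_universe ** level for level in range(depth + 1))
    return 1 / total

# Even modest numbers make the "real" fraction vanish:
for k, d in [(10, 1), (10, 3), (1000, 2)]:
    print(k, d, fraction_real(k, d))
```

With ten simulations per universe and three levels of nesting, fewer than one universe in a thousand is the real one. Note that this toy model quietly assumes simulated universes can be counted on the same footing as the real one, which is exactly where the objections below take hold.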
Except there are a few major flaws in this argument. First, we are conscious creatures. Our consciousness resides in our brains, and brains are very different from computers. We do not yet have a science of consciousness, and have no idea if consciousness can be created by a computer algorithm.
Second, the same logic that tells us we are probably in a simulation would tell us that the universe simulating us is also probably a simulation, and the universe simulating it is probably a simulation too. The argument looks at the complexity of our universe and concludes that, since we can make universe-precise simulations in our universe, our own universe is probably a simulation. But the higher up we go, the more complex the universe we find ourselves in, the more capable it is of generating universe-precise simulations, and thus the more likely it is to be a simulation itself! It is an infinite regress, which blows up the further out we go. Any valid argument would have to run the other way, with simpler universes being more likely to be simulations than complex ones.
Some universe has to be the real thing. And until it has been demonstrated otherwise, I see no reason not to live as if the one we find ourselves in is it.
The Technological Singularity
Again, consider how rapidly computation is progressing. Fifty years ago, the first people walked on the moon using less computing power than a Texas Instruments calculator. Now we have built programs that can beat humanity's best chess and Go players, and it is only a matter of time before a true Artificial Intelligence is born that can beat the best human at everything. Once this happens, it will upgrade itself, making it even better. Then it will upgrade itself again, and again, and again, and by the next day we will all be ruled by an all-powerful robot god. It is not a question of if, but when. And it is coming soon.
Except it’s a lot more complicated than that. Though human brains are often compared with computers, they really have very little in common. Computers may be able to do mathematical calculations far beyond the human capability to comprehend, but they do exactly what they are told, sometimes so exactly as to frustrate us to no end at their stupidity. For example, if we told an AI to “make paper clips,” and left the room, we might return to find it has turned the entire house into paper clips. And even if we can code some “common sense” into our AI, there are still things that humans will always be better at, like philosophy and the arts. Suggesting a computer that can outdo us at those things goes far into the realm of science fiction. The bottom line is, all it takes to avoid our AI worries is smart programming. We have to be responsible with what we create, no differently when we are talking about AI than any other technology.
There is also an implicit worry that a superintelligent Artificial Intelligence will become conscious, and want to wipe us out as evolutionary competitors. This is simply baseless worry, brought about by our human tendency to project our own perceptions and behaviors onto the universe. All life in nature, including us, evolved in competition with other life. That which was violent and destructive toward other life forms was more likely to survive. When we look back through our history and see the wars and the subjugation we wrought, it is because evolution favored those of our distant ancestors who had those things in their nature. AI is different; we decide its nature. So all we have to do to avoid our creation revolting against us is to program it with an unchangeable command to value above all else the freedom of human beings to pursue their own fulfillment.
The technological singularity is much more nuanced than I have covered here, and it is interesting enough that I might write a whole other discussion about it sometime in the future.
Conclusion
All of these arguments have the same problem: they rely on the extrapolation of trends, and neglect to take into account any of the real factors that come into play. There is no immediate existential danger that we are not in the process of taking steps to prevent. We have no examples of perfect-precision simulated universes, nor any evidence that consciousness can exist within them. And we have no evidence of any computer having any intent at all, much less the intent to destroy us. These ideas are gold mines for science fiction; it is quite popular these days to have stories about AI gone amok or Earth being threatened by some catastrophe, and The Matrix did fairly well too. But science fiction is all they are.
For a final word, I want to point out that just because these are bad arguments does not mean the scenarios they describe are impossible. It merely means it is not reasonable to take them as definite or inevitable at the present time. Science has such a good track record of providing solid, airtight arguments that we run the danger of letting our guards down and simply accepting everything that comes from a scientific source. But it is important to remain agnostic when we do not have enough information to say one way or another, and we must take these ideas as warnings, so that we can look to the future with clarity and responsibility.