
Friday, September 22, 2017

Moral Theory IV: Serving Oneself

Moral Theory:
I. Intuitionism
II. Authoritarianism
III. God
The Is-Ought Problem
IV. Ethical Egoism
V.  ???
VI. ???

Last time, we argued that objective moral Truth is independent of the existence of God. After learning this, one might come to believe that without a divine source, the only measure of morality left is for people to do what is best for themselves, to spend life pursuing their own goals, interests, and desires. In this view, if helping others does not benefit someone, they have no moral obligation to do so. Basing one’s moral principles on self-interest is called Ethical Egoism.

Ethical Egoism is a two-sided coin. On the one hand, it can inspire people to work hard and put lots of effort into mastering skills, learning, and becoming a better person. The book Atlas Shrugged by the infamous Ethical Egoist Ayn Rand, despite its mind-numbingly shallow characters, has inspired and motivated people to compose beautiful songs, write best-selling novels, or start successful businesses. A few hours after I started reading it, I put it down and cleaned my apartment. There is just something about the call to get up and do something worthwhile that puts fire in people’s veins.

The cover for Faith of the Fallen by Terry Goodkind,
who portrays Ethical Egoism as heroic.
On the other hand, Ethical Egoism passes no judgment on those who can screw other people over and get away with it. It would say there is nothing wrong with being a con artist or a thief or a money-grubber if doing so won’t have any negative effect on you or your future. And since competition inevitably favors those who play underhandedly, societies built on Ethical Egoism make it easy for the selfish and devious to rise to the top. Once in power, they rewrite the laws to further benefit themselves at the expense of everyone else. It is Social Darwinism, survival of the fittest applied to human wellbeing, and Ethical Egoism only shrugs.


It is a mixed package. The grandness of the life it offers is tempting, but there is a terrifying ugliness to the world it leads to. Our instincts pull us both ways. But we are looking for an objective foundation for morality, so for the moment, we will try to put aside our emotions and see whether Ethical Egoism has a rational foundation.

Ethical Egoism says that we should always aim toward that which would be the most fulfilling to ourselves, or in other words, we should always seek to maximize our own satisfaction. But if we follow that to its logical conclusion, we get a world of Social Darwinism, where the vast majority of people are not satisfied. This does not feel like what morality should be. However, that is not actually a refutation of the theory, but an appeal to intuition, and we have already shown that intuition is not the basis for morality.

One could, in fact, still make a case that Ethical Egoism is the ultimate answer to objective morality. It may be that by everyone doing what is best for themselves, the standard of living is raised for everyone, even though the inequality between the top and the bottom makes it look unfair. However, we can show that is not the case. There is a thought experiment in game theory called the Prisoner’s Dilemma, which exposes Ethical Egoism’s fatal flaw. The setup is as follows.

Suppose you and someone else are partners in crime. You are caught and questioned separately. The system is not very just, and you are given an offer: If neither you nor your partner confesses, you will both get one year in prison. If you both confess, you will both get three years in prison. If one of you confesses and the other does not, the one who confesses will get off free and the other will get four years in prison. You are told that your partner has also received the same offer.

Now you weigh the possibilities. If your partner confesses, you get three years if you confess and four if you don’t. If your partner does not confess, you get off free if you confess and one year if you don’t. Either way, it is better for you if you confess. If you and your partner are Ethical Egoists, you will both confess and get three years. However, if you and your partner are playing as a team, neither of you will confess, so that you both get one year, which is best overall.
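For readers who want to check the arithmetic, here is a minimal sketch in Python (my own illustration, not part of the original post) that encodes the sentences described above and confirms the reasoning: confessing is the better choice for a purely self-interested prisoner no matter what the partner does, yet mutual silence gives the best combined outcome.

```python
# Years in prison as (yours, partner's), indexed by (your choice, partner's choice).
# The numbers follow the offer described above; they are illustrative, not canonical.
YEARS = {
    ("silent",  "silent"):  (1, 1),
    ("confess", "confess"): (3, 3),
    ("confess", "silent"):  (0, 4),
    ("silent",  "confess"): (4, 0),
}

def best_response(partner_choice):
    """Return the choice that minimizes your own sentence, given the partner's choice."""
    return min(("confess", "silent"), key=lambda mine: YEARS[(mine, partner_choice)][0])

for partner in ("confess", "silent"):
    print(f"If your partner's choice is '{partner}', your best response is '{best_response(partner)}'.")

egoists = sum(YEARS[("confess", "confess")])   # both follow pure self-interest
team    = sum(YEARS[("silent", "silent")])     # both cooperate
print(f"Total years served: {egoists} if both are egoists, {team} if both stay silent.")
```

Whatever the partner does, the egoist's best response is to confess, and the pair ends up serving six years between them instead of two.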

In cases like the Prisoner’s Dilemma, Ethical Egoism actually leads to outcomes that are worse for the people making decisions than those a more altruistic moral system would produce. One could argue that the true option that lines up best with one’s self-interest would be to not commit crimes and to try to improve the legal system, but the Prisoner’s Dilemma is a proof of concept; only the logic matters, not the details. Ethical Egoism defeats itself, failing as a candidate for objective morality.

So far in the Moral Theory series, we have looked at the naive ideas of morality that people fall into with little or no thought, and which professional moral philosophers no longer consider worth pursuing. For the final two episodes, we will take on the big guns. We hope that the glimpses they provide us of moral Truth are as inspiring as Ethical Egoism, without the looming specter of Social Darwinism.

Friday, September 15, 2017

The Is-Ought Problem

Moral Theory:
I. Intuitionism
II. Authoritarianism
III. God
The Is-Ought Problem
IV. Ethical Egoism
V.  ???
VI. ???

When searching for an objective basis for morality, we run into a problem big enough to break from our search and devote a whole discussion to it. As we defined at the beginning of the series, a morality is a system that claims that some actions and consequences are inherently better than others, and that we ought to make our choices in accordance with these. When making a case for a way that people should act and a way things should be, it is very difficult to make an argument that is based on facts rather than mere opinions. This was formalized nearly three hundred years ago by the philosopher David Hume as the is-ought problem. We have already run into it in this series with the Euthyphro Dilemma, where we found out that even God's opinion is still just an opinion.


To tackle the is-ought problem, we will have to start with factual observations and hope to end up with a standard by which to measure moral systems. First, we observe that moral intuitions exist within us, planted partially by our upbringing and culture and partially by biology. We have two types of moral intuitions: normative moral intuition, the feeling that some actions should be taken and others not; and circumstantial moral intuition, the feeling that some circumstances you or others could be in have more of a rightful claim to existence than other possible circumstances. As we saw in the first part of this series, these intuitions are not sufficient as a basis for objective morality, because they are subjective.

In addition to moral intuitions, we also have a sense of satisfaction, which is a pleasant feeling we get when our actions and others' line up with our normative moral intuitions, and their consequences line up with our circumstantial moral intuitions. There are many different kinds of satisfaction. There is the feeling of rest after a productive day, the feeling of doing something productive, gratefulness for your life situation, many types of happiness and pleasure, and more. The kinds of things that can trigger satisfaction in us are ingrained in us by our DNA, though they vary slightly from person to person. It is worth noting that following our moral intuitions does not always lead us to satisfaction; they are different things. All moral systems are built to generate satisfaction, though each one is different in whose satisfaction matters and what specific types of satisfaction they emphasize.

So far, I have made only observations, "is" statements. Now we are ready to bring in the "ought." Because all moral systems are aimed toward satisfaction, if "ought" is to mean anything, it will raise up actions and systems that generate lots of satisfaction. After all, the moral intuition-satisfaction system is why we have a concept of "ought" in the first place.

We should give this breaking of the is-ought dichotomy a critical look. It gives off vibes of the naturalistic fallacy, the assumption that what is good is good because it is natural. However, what is considered natural is much broader and fuzzier than the argument that has been made in this discussion. We have specifically targeted the relevant, factual statements about morality, and come to the conclusion that the idea of "ought" exists because of our moral intuitions and capability of satisfaction. If we were to ask why "ought" should be pointed at satisfaction, it would be taking the concept out of its domain of meaning, like asking why logic is a valid way of thinking. The question ceases to make sense.

Without the concept of satisfaction, no one would have anything to strive toward, and all actions would turn out to be equally meaningless. The possibility of satisfaction exists within us, and therefore we have moral intuitions and develop moral theories. One could still argue that this is not a full answer to the is-ought problem, and that there is no reason that the natural purpose of morality should be the purpose of morality, but it is better than the alternatives, which are mere opinions and lead to Relativism. Knowing this, from now on in our search for objective morality we will pay special attention to how each theory deals with satisfaction.

Friday, September 8, 2017

More Musings on Consciousness

Recommended Pre-Reading:
Consciousness 1
The Scientific Jigsaw Puzzle
Equivalence


Eighty years ago, there was a puzzle in science. Sometimes atoms broke apart into smaller atoms, releasing even smaller particles and energy. But when all of the remaining energy and momentum was totaled up, some of it was missing. Conservation of energy and momentum are two of the foundational laws of physics, so this was a real problem. To try to solve it, the physicist Wolfgang Pauli proposed that there was another particle, a neutrino, that escaped detection. Decades passed, and then finally someone was able to build a detector to search for these neutrinos, and found them.

The story of the neutrino is one way that things are discovered in science. There is a puzzle with a missing piece, people make suggestions for what that piece might be, and eventually someone finds the answer. Consciousness, however, is a different story. When we look at the human brain, it seems to function completely fine on its own, with just matter and electricity. Human behavior, decision-making, aesthetic taste, and basically everything about us can be understood in terms of our DNA and the electrical signals running through our neurons. Unlike atomic decay with a neutrino-shaped hole, the brain puzzle is complete, and we perplexedly have a consciousness-shaped piece left over.

An extra puzzle piece that we cannot find a place for means something is wrong with the picture as a whole. In order to solve this problem, we have to identify and question our basic assumptions about our paradigm. Examining my own beliefs, I find I have always assumed an idea similar to substance dualism, that consciousness is its own substance, independent from other things. But what if it is simpler than that? What if consciousness is just what matter, or some forms of matter, looks like from the inside? This equivalence would mean that what seems to be its own piece of the puzzle is really just another way of looking at the pieces that are already in place.

At first, this seems preposterous. Rocks don’t have consciousness. Wind doesn’t have consciousness. Statues cannot see or hear. But this is where something like Integrated Information Theory comes in. A conscious experience would be meaningless unless it happens in a system of interconnected elements, each of which affects all of the others. Another way of saying this is that consciousness can only be meaningful if the information in the system is irreducibly complex and under constant change. Think of a brain. It is a vast network of neurons, connected in loops, branches, and highways. When a neuron fires, it pulses against other neurons, some of which are set off in turn. This is part of an unbroken, lifelong chain of cycles and patterns. This is integrated information.


Let's do a thought experiment, where we imagine a mind whose only experience is seeing the color black. This mind has no concept of the outside world, no concept of sound, or touch, or space or time. Its entire existence is the perception of the color black. But because it has nothing to compare the color black against, it does not even truly experience sight. So even though it experiences a quale, the color black, it is not conscious in the sense that we think of the word. Instead, we might call it proto-conscious.

Now imagine a mind whose subjective experience consists of only a single, unchanging scene, perhaps the equivalent of a photograph of a sunny day at the park. This mind does not think, it does not feel, and it does not get bored. Because it does not analyze the image nor remember having seen it moments before, the mind does not perceive time passing. To it, the entire lifespan of the universe is a single instant. This mind too is only proto-conscious.


The idea of proto-consciousness being everywhere in the universe, even in non-meaningful states devoid of change or organization, seems like something a mystic or spiritualist would claim. Nevertheless, given the information we have, I believe it is a sensible possibility. I am sure you have had the experience of waking up from sleep feeling a strong emotion, as if you were just in the middle of doing something stimulating, but could not remember what you were dreaming. When we are awake, our stream of consciousness is full of thoughts that flit in and out of our awareness. Most of the time we forget them instantly, and it is as if we never thought them in the first place. In other words, we have conscious and semi-conscious experiences that fade from existence and are forever lost to memory. Of course, even these examples require brains full of neurons in order to exist, but even so, is it unreasonable to hypothesize that qualia momentarily pop in and out of existence in matter all over the universe like virtual particles in a vacuum?

I often wonder if computers might have some kind of consciousness. They work completely differently from brains, storing and processing information as static bits instead of unbroken flows like neurons, but considering what we have been talking about so far, I think it is a reasonable question. If they are conscious, is it anything like human consciousness, or is it completely alien, impossible for us to imagine or comprehend? I see no reason why human consciousness should be typical of consciousness in the universe, or why there could not be senses and emotions impossible for human brains to have, or even other types of experiences we cannot imagine.


I don’t know whether mind-matter equivalence is the answer, but if it is, it would answer a lot of questions and eliminate a lot of assumptions. The puzzle would fit together, all its pieces intact, revealing a new edge to build from and explore. Perhaps we could figure out how to build artificial consciousness, or enhance our own to experience things never dreamed of. Perhaps we could learn exactly how conscious animals are, and how best to treat them ethically. A new ocean would be revealed, ready to be explored by science and science fiction.

Friday, September 1, 2017

The Profoundness of Equivalence

Toolbelt of Knowledge:
Algorithms
Skepticism
Equivalence


Have you ever had an argument with someone, only to find that you actually agree, but were just using different words? Have you ever memorized a long number in chunks, and then found out that someone else had memorized the number by breaking up the chunks differently? What you have stumbled upon is the principle of equivalence, the fact that something that is true can be thought about in different, yet equally valid ways.

For example, we all have our own ways of representing reality. Some of us think of reality in pictures. Some see everything in terms of words. Others see everything as equations. These are all different, but they are equally valid ways of representing reality. They are equivalent.

Equivalence appears all the time in physics. The most famous example is Einstein's Elevator, a thought experiment that led Einstein to formulate his theory of gravity, General Relativity. Einstein imagined waking up in an elevator, his body floating above the floor as if in space. There would be no way to tell if he was actually in space, or if he was freely falling down the elevator shaft. This showed him that zero-gravity and free-fall are equivalent, which, with some help from math and a few experiments, led to the realization that gravity is not truly a force but a distortion in spacetime.

For a more down-to-earth example, suppose you are holding a brick. We call it a solid object, and think of it as the stuff taking up the space within its boundaries. But we also know it is made of atoms, which are mostly empty space. So is the brick a solid substance, or is it mostly empty space? It is both. “Solid substance” and “mostly empty space of rigidly arranged atoms,” though they seem completely different, are two ways of saying the same thing. Equivalent.

Of course we also have to watch out for false equivalence, when we treat two things as the same when they are really not. Nothing is as simple as it first seems, and figuring out how to tell the difference is part of learning. The glass is half empty, and it is half full.

Being aware of equivalence and learning to identify it can help us make sense of the world and what is happening around us. Sometimes things that seem unrelated or even contradictory are actually the same thing. If we practice finding these connections, we start to see that the world is a lot more understandable than we may have realized.

Friday, August 25, 2017

Legendary Heroes

See also:
Legendary Villains

In each of our minds, we hold an ideal person that we want to be, a standard that we strive for. But none of us is perfect, and none of us measures up to the archetype we set up. Our failures discourage us. But in fiction we can find characters who pass the test, who succeed in embodying these ideals we want to see in ourselves. When we see the struggle and pain they overcome to keep from failing, our hope is renewed and we are inspired to pick ourselves up and try again, ever improving ourselves toward that unattainable standard we strive for. The characters who do this for us we call heroes. Some heroes resonate with us so well that they transcend the stories they come from and become icons of the culture. They become legends, and are known even to those who have never heard their stories.

Superman

There is no other way to start a discussion of heroes than with the cape-bearing face of heroism himself. With a heart of gold and always putting others before himself, Superman is hand-crafted to be the perfect image of what is good. Not only this, but he has the power to act on his compassion, with his ability to fly, his unlimited strength, his invulnerability, and many lesser-known superpowers. Yet with knowledge as limited as any other man’s, Superman has to deal with a heavy burden of responsibility. He knows he can save the world, but also that by not knowing enough or losing control for a moment he could become the cause of its destruction. All this combined gives us the closest thing to the hero archetype that humanity has ever put to the page.

Batman

Almost Superman's opposite, Batman has no superpowers, just loads of money and technology and a desire for justice. Batman deals with everything dark, from a dark city to a dark costume to enemies who embody the dark parts of humanity. Batman is the yin to Superman’s yang. His lack of powers and the fact that he continually has to face and conquer his own dark side make Batman more relatable than Superman. His nemesis, the Joker, embodies the monster that Batman is always in danger of slipping into if he gives up for even a moment. His strength in the face of humanity’s ugliest depths and his incredible drive to press on through it all have inspired many a child and adult alike.

Goku

Goku is the face—and the hair—that brought Japanese animation to the rest of the world. As far as powers go, he is basically Japanese Superman. However, in place of Superman’s compassion toward humanity and drive to service, Goku values honor above all else. To prove himself the best, Goku will not fight anyone except at their strongest, often risking the lives of his friends, innocent bystanders, and sometimes entire planets to do so. He will also let his friends fight for their lives and get beaten down to the brink of death—and sometimes past it, given that his universe has a few death-reversing loopholes—before interfering, because he wants them to push themselves to their limits and have an honorable defeat. Yet when there truly is no other option, Goku proves himself worthy of the title of hero by showing that he is willing to sacrifice himself to save those he cares about.

The Doctor

A nameless, immortal time-traveler who has a tool that can do almost anything, and who cheats death by generating a new face. He has saved Earth about fifty times in as many years, though he himself has aged thousands. He has even been known to save the entire universe now and then. After twelve personas and a life so long as to boggle the minds of mortals, the Doctor faces existential questions that humans almost never run into, questions that drive his enemies to hatred and nihilism. His greatest weapon: his wit. The Doctor almost never carries a weapon, opting instead to win all of his battles by thinking ahead of his opponent—or guessing; no one can ever be really sure he knows what he is doing. The Doctor is a legend and a myth, a savior and a destroyer, as wise as God and as foolish as a child. He is a hero, an angel, a mentor, a messenger, a destroyer, and the universe’s instrument of fate. A hero, but more than that. He is the Doctor.

This is the end of my list. I am sure there are others. I feel that some heroes from mythology like Beowulf or Hercules or Thor deserve a place among the legends, but I do not know enough about them to do them justice. I also did not mention any run-of-the-mill protagonists, defined by the story following their viewpoint and making us want them to succeed, as the term “hero” is popularly used today. These “heroes” include the likes of Luke Skywalker and Harry Potter, who are not exactly beacons of wisdom and self-discipline. The ones who made it are those who will be remembered long after their stories have been forgotten, who shine like the mythical figures of the ancient Greeks, half human, half god.

Friday, August 18, 2017

Moral Theory III: Looking to Divinity

Moral Theory:
I. Intuitionism
II. Authoritarianism
III. God
The Is-Ought Problem
IV. Ethical Egoism
V.  ???
VI. ???

Last time in the Moral Theory series, I talked about Authoritarianism and its pitfalls and dangers. But the big problems with authoritarianism all stem from the fallibility or deviousness of the authority we give our trust to. What if we could find an authority that we could be certain was perfect, all-good, all-knowing, and all-powerful? This is the central idea of the monotheistic religions, an authority called God who knows all, is infallible, and is the essence of moral perfection.


Up until now, I have used the word “morality” vaguely, using it to describe our sense that some actions are good and some bad, and the good actions are preferable over the bad ones. However, to go any further we are going to need a more precise definition. From now on, when I say morality, I mean a system by which to determine how good or bad a choice or action is. Objective morality, or moral Truth, or the source of morality, if it exists, is a morality derived from observation and reason that applies to everybody, and overrules personal morality if they come into conflict. Subjective morality or personal morality is a morality held by a single person or a group of people. It is possible for a morality to be subjective even if everyone in the universe agrees on it. Relativism is what happens if there is no objective morality, only subjective moralities, whether manifest or potential, for which there is no way to say one is better than another.

The common belief about God and morality is that God is the only possible grounding for objective morality. I believe morality is objectively grounded elsewhere, as we will see by the end of this series, but today we shall examine whether the existence of God is in fact a possible source for moral Truth at all.


If God is the source of objective morality, then how are they linked? The answer is not as simple as it might seem. There is a classic dialogue going back to Plato called the Euthyphro Dilemma. The story goes that the philosopher Socrates asked the priest Euthyphro what it means to be morally good. The priest replied that to be good is to follow the will of God (well, the gods, but the dialogue works just as well for a single God). The idea that morality comes from following the will of God is called Divine Command Theory. Socrates then asked if God wills actions because they are good, or if the actions are good because God wills them. At first glance this might seem like two ways to state the same thing, but the difference is critically important. If God wills actions because they are good, then God is following a moral Truth grounded outside of himself, which we should be able to figure out and understand for ourselves. If actions are good merely because God wills them, then who is to say that God is actually good? What reason do we have to follow him, other than the fact that he is the strongest? God’s will, in that case, is just another subjective morality. Thus, Divine Command Theory fails to link God and morality together.

There is an alternative to Divine Command Theory, which attempts to solve the Euthyphro Dilemma. It is the theory of Divine Attributes. It argues that moral actions are good not because God commands them, but because God embodies them in his character. We should love because God is loving. We should respect because God is respectful. We should speak the truth because God is truthful.

But this runs into a new problem. Which attributes of God count as moral attributes? Should we seek to gain ultimate power because God is all-powerful? Should we seek to gain authority over others because God has authority over us? Should we become arbiters of life and death because God is an arbiter of life and death? In order to know which characteristics of God we should emulate and which belong to God alone, we must already have knowledge of moral Truth by which to differentiate them.

And, though it has been obscured, the Euthyphro Dilemma still applies. The same arguments that applied to God’s will can be applied to God’s character. If God had a different character, would morality be different? If so, why should we say that God’s nature is good? Morality would then be relative. Or is there some principle that makes it impossible for God’s character to be different? If so, wouldn’t that principle be the ultimate grounding of moral Truth, rather than God’s character, which is forced to conform to it?

Both Divine Command Theory and Divine Attributes Theory fail to provide grounding for objective morality. Both fall prey to the Euthyphro Dilemma, and Divine Attributes Theory adds the question of which of God’s personal qualities count as moral attributes to be emulated and which do not. So we see that the existence of God does not affect whether or not there is moral Truth. If morality would be relative without the existence of God, it would also be relative with the existence of God. Simply being completely wise, infinitely powerful, etc. does not make one’s opinion objective, nor does it make one’s personal qualities an objective standard of moral perfection. If moral Truth exists, we must search for it elsewhere, and we will start on that next time.

Friday, August 11, 2017

Truth

Recommended Pre-Reading:
Representationalism
The Limit of Philosophy
Skepticism


The Sheikah symbol from the Legend of Zelda. It represents, among other things, truth forgotten by history.
At my undergraduate college, I took a course titled “What is Truth?” The class compared the methods of science, social science, and the humanities for obtaining knowledge. As you can probably guess, I liked the class a lot. However, as the weeks went on, the discourse seemed to be steering toward a conclusion so absurd I could hardly believe it was happening: Truth does not exist. As soon as I recognized this, I cut to the heart of the matter. “The statement, ‘there is no Truth,’” I said, “cannot be true. Therefore Truth must exist.” A few of my classmates accused me of cheating, and we spent the rest of the period arguing.

On one level, the argument against Truth is not entirely without justification. People have been peddling ideas that are unjustified or proven false as truth for as long as the concept has been around. When people are shown the cracks in the ideas they once believed, the natural reaction is to question the rest of their beliefs, and wonder if their reasons for believing them true are actually valid. Their questioning takes them to the fact that logic cannot prove itself, and they throw up their hands in defeat and say that Truth simply does not exist.

But this line of reasoning fails to see the difference between Truth and knowledge. Truth, as they search for it, is something foundational, beyond question, and from which absolutely certain knowledge can be built. But they are mistaken. To demonstrate, I propose a revolutionary concept: Truth-in-itself, the grand essence of reality, which is what it is, and exists independently of knowledge or perception. Truth-in-itself is unconditional. It depends on nothing; it simply is. Whenever I speak of Truth with a capital T, this is what I mean. Small-t truth I use whenever I speak of things I am certain of beyond reasonable doubt, or in the logical-mathematical sense of the word.

If Truth exists, then, what is it? Well I’m sorry, but I cannot tell you. To explain why, I will use an analogy. In the Bible, the second Commandment that God gave Moses was “Thou shalt not make for thyself an idol.” One of the popular interpretations of this is that if the Israelites made a symbol to represent God, they would end up worshipping the symbol, forgetting the God it was supposed to represent. Religious doctrines, ideologies, scientific paradigms, beliefs about the world and human nature—these are all Truth idols. When we worship our beliefs, taking their truth for granted, we forget that real Truth transcends all knowledge.

So are we then cursed to wander the Earth in search of something that cannot be found, never being any wiser or more knowledgeable than a newborn baby? Well, it is not so simple. Though we may never come to know the whole Truth with absolute certainty, we can be confident that on average, the search in itself brings us ever closer. Yes, we sometimes take wrong turns, but if we are always examining what we think we know, we will find ourselves on the right track again before long.


Before I finish this topic, I’d like to bring up a problem I have seen people fall into. I call it the trap of second belief. Most of us live the beginning section of our lives believing what we do without reason, simply because we always have. When we examine our natural beliefs, we are struck by the holes and blind spots in them. Stunned by learning that we have been ignorant our whole lives, we turn to what we have always seen as their opposite. We feel a great weight lift off our shoulders, and say, “I once was blind but now I see.” The reality is, though we may in fact see a little better, we are still mostly blind. This new belief, though we came to it with a measure of critical thinking instead of mere instinct and habit, is nonetheless still an idol. If we are wise we will see that throwing off our old, blind belief was not the end of our Truth-seeking journey, but the beginning.

The Truth is out there. We may never find it, and if we do we can never be justifiably certain, but it is there. Anyone who takes a belief as true and closes their mind to other possibilities, even if they arrived at their conclusion by rejecting a false belief, has substituted an idol for Truth. Anyone who gives up and says Truth does not exist is lost. To really serve Truth, we must always admit to the possibility that we are wrong and leave the door open to be persuaded by a good argument. The search for Truth is never-ending, but it is ever-satisfying.

Friday, August 4, 2017

The Wisdom of Skepticism

Toolbelt of Knowledge:
Algorithms
Skepticism
Equivalence


We find ourselves in a confusing and chaotic world, full of lights, colors, sounds, and forces pushing us this way and that. In order to find the comfort that comes when things make sense to us, we want to figure out why things appear the way they do. When we look to others and ask our questions, we get a multitude of answers, some of which are contradictory, and some of which seem more reasonable than others, though when pressed we may not be able to explain why. It may be tempting to go with whatever feels the most true when we hear it, but feeling true is not the same as being true. If we really want to search for truth, whatever it turns out to be, the first step is to recognize that anyone might be wrong, including ourselves. Therefore, when someone claims to know something, we should take a step back and ask ourselves whether we have good reason to believe what they are saying. Waiting to believe until an idea has been tested is an example of skepticism.

Skepticism is the mindset of putting beliefs, claims, and knowledge to the test. As humans, we are not perfect. There is always the possibility that we don’t know everything about a topic, or that we have assumptions we are not aware of, or we have made logical errors in our thinking. If we really want to believe what is as close to truth as possible, we should be skeptical of everything, even things we have believed and known for our entire lives. Not even the beliefs that we consider part of our identity, like social, political, or religious beliefs, should be allowed a free pass. This can be frightening. After all, we may find that our most fundamental beliefs, upon which we have built our worldview and all of the rest of our knowledge, might be based on nothing better than the fear of not knowing what the real answers could be.

Many people, not understanding what skepticism means, use the word as a rationalization for disbelief. No science denier wants to be called a denier. They would rather be called skeptics. But a devotion to disagreement is no more skepticism than devotion to agreement. To be truly skeptical, you must always have your mind open to being changed if presented with a good enough argument.

Of course, we should be skeptical about science. After all, that is how science gets done. Scientists are always testing their ideas, looking for the things that don’t fit right, and then pushing and pulling on them until something breaks. When an idea with the strength to overcome every rigorous test that can be thrown at it finally meets something it cannot handle, it is cause for celebration among the scientific community. Skepticism is what keeps science thriving. No one is more skeptical, in the true sense of the word, about science than the scientists themselves.


We live in a culture where agreement is linked with politeness, and changing our minds is linked with deviousness. If we hold back on belief when someone tells us something, they might feel like we don’t trust them. If what they are telling us is something they hold as part of their identity, like a religious or political belief, they may feel like we are attacking them personally. When we change our minds about things, which happens a lot for skeptical thinkers, society does not see it as growing as a person, but as being wishy-washy and lacking conviction. And if we are being honest, the scope of the dark parts of reality that we will have to say “I don’t know” about can be frightening to the point of existential crisis. But this is the price we must pay in search of truth, and in return we will find the greater sense of personal security that comes with being clear and honest with ourselves about the level of certainty we have for what we know.

The ability to hold back belief, to question, to examine ideas for consistency and whether they fit into the puzzle of reality, and to always keep in mind the possibility of being wrong, is a skill that everyone should want to practice. Reality is always bigger than we know, and the only way to get a larger view of it is to allow our imperfect constructions of it to fall down when the cracks appear, so that we may build new, more accurate constructions in their place. To do that, we must always be turning the things we hear over and examining them from all sides, including everything you read on this blog. Some of the things I say are pretty weird, I know. But with skepticism as your tool, you can look at every idea that comes your way with a critical eye and separate the mud from the gold.

Friday, July 28, 2017

What is the Darwinian Model?

Recommended Pre-Reading:
The Power of the Algorithm

The word “Darwinian” automatically brings thoughts of Evolution, molecules coming together which turn into fish, which turn into amphibians, which turn into dinosaurs and mammals, which turn into people. But this is only the result of evolutionary theory when applied to life within the context of the scientific puzzle. The theory itself is based on a mathematical model about self-replicating information. As with all science, understanding the model is key to understanding the theory. This discussion will be a little more technical than usual, but it will be worth it, because although the Darwinian model is widely misunderstood, it is relevant to philosophical and scientific discussions across the board.


Let us start off by putting life and all that baggage aside, and stripping the model down to its basic abstract concepts. We start off with two elements: a population of self-replicating objects, and an environment in which they exist and do their stuff. The objects are completely based on information that they hold within themselves. This information determines how they look, the way they work internally, and how they react to the environment.

If the objects all replicate themselves exactly, nothing interesting happens. But let's look at what happens when we throw in some variation. Perhaps when they replicate they take pieces of their information and splice it together with another of their kind's information, making something new. Variation causes each generation to be different from what came before, and each individual to be unique.

Now we have a bunch of objects, all that look and behave slightly differently from each other. What is going to happen? Well, depending on the environment, some of the objects are going to survive and replicate better than others. The environmental factors that affect this are called selection pressures. As a result, the looks and behaviors that are detrimental to replication in the environment are going to become scarce, perhaps until they are erased, and the looks and behaviors that make it easier to replicate are going to spread over the whole population. This process is called natural selection.


If natural selection were the whole story, the population would lose information over time, and eventually degenerate to the point where it has only the most basic skeleton of what it needs to survive within its environment. When this happens, there can be no more variation, and every object in the population will be exactly the same. But if we add randomness to the variation, either from copying errors or changes in the information before replication, the system flourishes anew. Random variations in the information of replicating objects are called mutations.


Once a mutation enters the information pool, it behaves according to natural selection, just like any other piece of information. If it is beneficial to replication, it spreads into the population. If not, it disappears. There is nothing special about a mutation after it has occurred; it is just another piece of information at that point. Mutations keep the pool of information fresh with options, so that the population can continue to adapt better to the environment, or readapt if the environment changes.

The union of these two concepts, natural selection and mutation, is Darwinian Evolution. A population of self-replicating, information-based objects, reproducing their information with small random variations, in an environment that causes some traits to be better at replicating than others, leading to information diversity and increasing complexity. There are many parameters that can be tweaked, including changing or adding environments, the methods of replication, the rate and severity of mutations, and many others.

So that is the theory. Does it work? Well, it is an algorithm, which means it is not too hard to test if you know how to program. You can create a bunch of self-replicating virtual objects and a virtual environment to put selection pressures on them, and run it and see what happens. Experiments like this show that the Darwinian model works exactly as predicted.
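As a proof of concept of my own (a minimal sketch, not code from any particular experiment), the whole model fits in a few lines of Python: each “object” is a bit string, the environment scores strings against a fixed target, the fitter half replicates, and copies occasionally mutate. The names and parameters here are arbitrary choices for illustration.

```python
import random

TARGET = [1] * 20    # the environment: fitness = how many bits match this target
POP_SIZE, MUTATION_RATE, GENERATIONS = 30, 0.02, 60

def fitness(genome):
    """Selection pressure: count how many bits match the target."""
    return sum(g == t for g, t in zip(genome, TARGET))

def replicate(parent):
    """Copy the parent's information, flipping each bit with a small probability (mutation)."""
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in parent]

# Start with a random population of information-carrying objects.
population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP_SIZE)]

for gen in range(GENERATIONS):
    # Natural selection: only the fitter half of the population gets to replicate.
    population.sort(key=fitness, reverse=True)
    survivors = population[: POP_SIZE // 2]
    population = [replicate(random.choice(survivors)) for _ in range(POP_SIZE)]
    if gen % 10 == 0:
        print(f"generation {gen:3d}: best fitness = {max(map(fitness, population))} / {len(TARGET)}")
```

Run it a few times and the best fitness climbs toward the maximum; set the mutation rate to zero and the population stalls once its initial variation is used up, just as described above.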

Okay, Darwinian Evolution works. Does this model apply to life? To find out, we have to ask whether life has all the required features to fit the model. Can life be understood as populations of information-based, self-replicating objects? Yes. The replication, we call reproduction. The information is in life’s DNA, a collection of molecules in every living cell. The information in DNA is in the form of nucleotides (C, G, A, or T), which make up the ladder rungs in the picture we are all familiar with. A single strand of DNA has enough information to fill entire libraries. Does DNA mutate? Yes. DNA mutations can be caused by all kinds of things, and can be as minor as a single different nucleotide, as major as tearing the DNA apart, or anything in between. Does life exist in one or more environments that exert pressure on it and lead to natural selection? Of course. Darwinian Evolution does indeed apply to life, and mountains of evidence from fossil studies to DNA studies to adaptation studies to lab experiments on microbes confirm it.


Darwinian Evolution of life is the theoretical foundation of biology and psychology, the cornerstone upon which everything in the disciplines is built and without which nothing would make sense. But the same model also applies to all kinds of systems, including ideas, cultural values, popular trends, businesses, and computer programs. Even if you never study biology, the Darwinian model is a powerful tool of understanding, and well worth the time and effort to learn.

Friday, July 21, 2017

Moral Theory II: Trust and Obey

Moral Theory:
I. Intuitionism
II. Authoritarianism
III. God
The Is-Ought Problem
IV. Ethical Egoism
V.  ???
VI. ???

Last time we looked at how the default starting point for moral principles is the conscience, and how, although there is overlap, each person’s conscience is unique. This time, we are going to look at another default starting point, deference to authority. Many times the first thing someone does when they see the failures of the individual conscience is to look for someone wise, with better moral intuition than their own, to instruct them in how to live. There is a responsibility that comes with freedom: the responsibility of thought. One has to think for oneself and find the answers through reason. This can, understandably, be frightening, which can drive many to give up their freedom and follow someone else's dictates. This view of morality is called authoritarianism.


Authoritarianism can also be imposed upon people in childhood by an environment of strict rules. These rules might be imposed by parents, a church, or an extortionist corporation or government agency, or more subtly as pressure to conform to the norms of the community. In this way, instead of coming to authoritarianism from a place of intuitionism, a person starts directly at authoritarianism.

Giving yourself completely to an authority can paradoxically feel tremendously freeing. After all, you do not have to go through the trouble and uncertainty that comes from making your own decisions. Working for a cause bigger than yourself, especially if you make a leap of faith because you do not fully understand it, can give you a great drive and sense of purpose. Authoritarianism can be tempting to those who see it from without, and intoxicating to those participating from within.

But authoritarianism has a dark side, and it is deep and filthy. Despite the fact that authorities come in all flavors, the most ardent followers of any of them will believe that they have discovered the one moral truth, and that their rules of right and wrong are absolute, for them, for you, and for everyone over all time. We find among those enthralled with this mentality followers of the worst of causes, from Nazis to Stalinists to Crusaders and Jihadists. Having given their moral judgment wholly to their dear leaders, those people, who may have begun with healthy consciences, participated in the worst of evils.


Of course, authoritarians are only as good or bad as the leaders they follow. Authoritarianism fails to answer the bigger question of morality's objective source, merely deferring it to someone who is assumed to have greater mental ability and better tuned intuitions. When questioned why they hold their moral principles, an authoritarian might say, “I cannot explain it, but I know it is true.” To me, this is a sign of hollow beliefs, devoid of any rational foundation.

Sometimes, those who begin authoritarian come to see the problems with it, and convert to Intuitionism, just like those who see intuitionism's problems may try to compensate by giving themselves to authority. However, as we will see soon, there are other options. In the next entry of the Moral Theory series, we will explore one particular case of authoritarianism that attempts to get around its problems, and after that, we will begin to look at basic principles and try to construct an objective moral theory from the ground up, which will rely neither on authority nor intuition.

Friday, July 14, 2017

When Will There be Another Einstein?


A hundred years ago, everything we thought we knew about space and time was turned upside down. Albert Einstein proposed the Special and General theories of Relativity, which described how space and time rotate into each other when something moves close to the speed of light, and how spacetime gets bent and warped by matter and energy. He also contributed plenty to all kinds of sub-disciplines of physics. Einstein is the iconic genius, instantly recognizable in photographs by his wild cloud of wispy white hair. For someone who made such a visible mark on scientific history, it is natural for us to wonder when someone else like him will come around.

But we have to recognize that extreme intelligence does not automatically lead to paradigm-changing theories. Einstein was in the right place at the right time; all the pieces of the puzzle were in place, and the world of physics was ready to hear what he had to say. After his theories hit the journals, the specific problems he had been working on were solved. Difficult as it is to turn the world upside down, it is much harder to turn the world upside down again. There simply was not enough information to solve the problems that were left, and the technology to run the necessary experiments was not ready.

So when we ask when the next Einstein will appear, we may just have to look a little harder to find them. Plenty of progress has been made in modern physics since Einstein’s time, some of which has had a comparable level of impact on the theoretical frontier as Einstein’s work. Here are some of the people I know of who might qualify as the next Einstein, some of whom are still alive today.

Leonard Susskind


While pondering the connection between the electromagnetic force and the nuclear forces, Susskind discovered that the equations he was working with described the vibrations of strings. These strings, if real, are as small compared to a proton as protons are compared to human beings. He found that this “String Theory” not only explained the electromagnetic force and nuclear forces, but gravity as well. If correct, it is a true Theory of Everything. String Theory has never been tested, but it has become incredibly popular among theoretical physicists.
Susskind also had a series of arguments and public debates against Stephen Hawking about what happens when matter falls into a black hole—and he won. He has also made other contributions to theoretical physics.

John von Neumann (NOY-man)


A true polymath. Von Neumann’s name pops up all over the place in physics, mathematics, computer science, and statistics. Among science fiction fans, he is best known for the von Neumann probe, a theoretical model for a robot that can create copies of itself from raw resources. They are often depicted as scourges of the galaxy, but could also be used for peaceful and safe exploration. Von Neumann’s contributions to science have played a significant role in making it the monolith it is in the 21st century.

Stephen Hawking


A theoretical astrophysicist who made a name for himself with his groundbreaking work on black holes, and who did it under a terrible handicap: he cannot move! He has been confined to a wheelchair for most of his life by ALS. He cannot hold a pen or chalk, nor type on a keyboard. He interfaces with his computer by twitching his cheek, and by doing this he writes mathematically intensive papers and books for a general audience, and participates in high-level intellectual conversations regarding the future of Earth, humanity, and technology. Hawking is a living testament to the power of technology and perseverance in the face of extreme challenge.

And these are only a few. Einstein himself was not the only genius of his own time. There was a group of people who pioneered modern physics, and there has been a strong community ever since. In science and outside of academia alike, many men and women with Einstein-level intelligence live and have lived. The theoretical frontier keeps moving, poetry gets written, songs get composed, and new and better ways of doing everyday things get discovered. When will there be another Einstein? We have never been without Einsteins. The next great paradigm shift of physics will happen when enough data has been collected, all the puzzle pieces are in place, and the scientific community is ready.

Friday, July 7, 2017

Moral Theory I: Intuition and the Conscience

Moral Theory:
I. Intuitionism
II. Authoritarianism
III. God
The Is-Ought Problem
IV. Ethical Egoism
V.  ???
VI. ???

Up until now, I have written about looking at things from different perspectives and trying to figure out descriptions of what they are and what they do. We humans are curious, always trying to understand the world and predict future possibilities from our understanding. Yet there is another dimension I have barely touched on, the idea of should. We prefer some actions and outcomes to others. We believe there is a collection of ways of life that all people should follow, and we call the actions that are in accordance with these ways of life, as well as their consequences and things that encourage these actions, good. And the entire system, we call morality.


As the atom opens the world of physics to chemistry, biology, neuroscience, and beyond, so a study of morality opens wide the space for discussions about humanity, politics, religion, and philosophy. To really get to the meat of these topics, we first need a solid intellectual grounding in the many colors and facets of morality. That is why I am starting this series, which will take us through the landscape of ideas and establish my own views so that you may understand where I am coming from.

In my second year of college, my class had a discussion. I don’t remember what it was about, but I remember raising my hand and saying something that I believed was so obvious that it did not need to be said: “What is wrong is wrong.” Everyone turned to look at me with strange expressions, and the professor said, “That is a bold statement. Can you back it up?” I was taken aback. I had believed it was logically trivial, the law of identity, that a thing is itself. The possibility that anyone could disagree had not even entered my mind.

So what did I miss? Well, I had a subconscious assumption that right and wrong are basic elements of reality, that actions are simply right or wrong, and that is the end of the story. I did not realize that there was more, that nuance, context, and circumstance change the game.

This was an example of the most common and naive theory of morality, Intuitionism. It is naive because it is what people follow before the philosophical question of morality enters their awareness. Intuitionism is the idea that morality is common sense. Everybody knows it, and simply has to pay attention to their conscience to live a good life and get along with others.

At first glance, there seem to be moral truths that are commonly accepted and understood the world over, like the wrongness of murder. But if you look closer, you will find that we have different ideas on what murder is. Does it count if it is self-defense, military action, abortion, honor killing, suicide, or capital punishment? It seems we don’t agree on the morality of killing after all, but only on the semantic point that the word for “killing that is wrong” should be “murder.”

Ask a random person if some action is right or wrong. For example, drinking alcohol. Some people will say it is good, and some people will say it is bad. But if you ask them why, you may find something intriguing: that they will either fumble around for an explanation they have never bothered to search for before, look at you as if you are stupid for not knowing, or even suspect that you are trying to corrupt vulnerable minds just by asking the question. Few will have made the effort to come to their view by starting from deeper principles.

The fact is, if we look at the way people act and the things they believe, we will see that everyone has different intuitions regarding morality. Should government support go to those who have gambled their lives away? Do men and women have specific roles to play in the household, or should people be free from gender-based constraints? Different people’s consciences tell them different things. Intuitionism as a moral theory leads directly to Relativism, the idea that morality has no objective basis, but depends only on the feelings of the person making a decision.


In light of this, we might be tempted to look down on Intuitionism. But in the heat of the moment when we don’t have time to weigh all the consequences in accordance with rational principles, we are all Intuitionists. If we want to be principled, then we must examine morality from a philosophical perspective in the times when our heads are clear, in order to train our intuition for the moments of decision. To find such a moral theory, we need to ground it in something outside of ourselves, which does not change and is universal to all of humanity. This is what the rest of this series is going to be about.

Friday, June 30, 2017

Reality and Its Images

The Nature of Reality:
Realism and Idealism
Quasi-Realism
Representationalism

A few hundred years ago, a philosopher named René Descartes had a revelation. He realized that he believed many things about the way the world was, but had never thought about why. When he examined his knowledge, he found a complex web of hierarchies and feedback loops, with no clear foundation. So he decided to doubt as many things as possible, to search for the immutable foundation on which true knowledge could be built. What he discovered was revolutionary: his beliefs, his perceptions, and even his very senses might be manipulated, and for all he knew, the very world he lived in could be an illusion.

We live in a world of sight, sound, touch, taste, smell, up and down, hot and cold. These, and more, are our senses, and they work together to paint a picture of a world we accept as real. We see our hand in front of our eyes. We can reach forward and touch the desk, feeling its smooth yet grippy texture. We can walk out the door and see the sky, feel the breeze, smell the summer air. It feels so real. So present. So true.

As I walked along the sidewalk on my way home from the university one day, I was struck by the sight of a tree. It was not unusual, just one of the average variety you can find all over the Midwestern United States. It had a long, thick trunk, branches that spread out like a shelter from the sky, and a shaggy head of leaves. I reached out and touched it, feeling the roughness of the bark, and was hit full force by the fact that this tree as I saw and felt it was a creation in my mind, an image built from nerve signals feeding into my brain. Yet the tree itself has its own existence, independent from and different from the way I perceive it to be.

We unconsciously assume that the world as we see it is reality, but in fact it is not. What we perceive is our mind's picture of reality, constructed from the data our senses bring to it. To illustrate, let's talk about what it means to see something. First, light falls on the object, bouncing off it. Some of that light goes into our eyes, where it is focused onto the retinas in the backs of our eyeballs. The rod and cone cells in our retinas translate the light into electrical signals, which travel up our optic nerves into our brain. Finally, the visual cortex in our brain processes the electrical signals into the colors and shapes that we experience. From object to light to electrical signal to picture, the image that we mistake for the object is three steps removed from the object itself.

In order for us to perceive something, there must be a representation of it constructed in our mind. This representation is not the thing itself; it is always at least one step removed. The stunning consequence of this is that reality-in-itself cannot be known. It is logically impossible. The only exception to this is experience itself. When we see a flash of light, we can know with absolute certainty that we are seeing a flash of light, but whether it came from a real lightning bolt or the illusion of a lightning bolt is open to question. This is the metaphysical theory of representationalism.

So what is real? If we cannot experience reality-in-itself, if everything, including the ideas constructed in your brain about what is written on this page, is a mere representation, is it possible to know anything? Can truth exist? I would say yes. Even though we can never know anything directly, that does not mean we cannot know it at all. The truth of our picture of reality is proportional to how well we can draw lines between the pieces of our picture and the pieces of reality. Of course, because we cannot perceive reality directly, neither can we be absolutely sure the connecting lines between our picture and reality actually work. But it does not matter whether we know it works, only whether it does work. This leaves us with uncertainty, but the uncertainty can be made small enough that we are justified in being pragmatic about it and assuming we know what is true.

Representationalism implies something that we may never have thought of before: any organization of information, be it neural impulses, computer bits, written symbols, frozen magnetic fields, or anything else, as long as it has a systematic correlation with reality, is equally true. This idea of different but equally true ways to represent reality can show us new ways to connect with others. I never feel like I truly understand any idea or concept until I can see it as a picture in my mind's eye. One of the physics professors at my university sees the world entirely in equations. Some of the students I teach think in terms of words. Finding ways to translate between these views makes for richer, more comfortable communication with people, and I even find it to be fun.

Friday, June 23, 2017

The Limit of Philosophy


Philosophy is one of the greatest strengths of humankind. It has given rise to science, ethics, political systems, and cultures. All it really is, is thinking about things and trying to figure them out. For example, if you ask, “what is a house?” you may think of a picture of a building with a door and a window, a roof, and interior walls that separate a living room from a kitchen from a bedroom from a bathroom, and consider the question answered. But philosophy recognizes that this is only part of the answer. Such an organization of matter alone does not make something a house. First, someone must imbue it with a purpose, specifically as a place to spend time in, to form a familiarity with, to store other bits of matter that we call our property in, and more.

But we can’t stop there. Philosophy digs deeper. What is familiarity? What is property? We can try to answer these, but for any answer you can ask a “but what if” question, casting the answer into doubt. Here you might start to notice a problem. Since any answer must be given as another statement, you can ask another “but what if” question about the answer. And another one about the next answer, and so on. It is like when a toddler asks “why” over and over and over again.

Let's take a common sense statement: “reality exists independently of our perceptions.” We can ask, “but what if reality is just a projection of our minds?” In response, we can say, “okay, we can't know it absolutely, but we can determine from the patterns in which our experiences fit together that the likelihood of an external reality not existing is infinitesimally small.” But we can ask, “but what if this perception of order is an illusion? Our minds might be fooled into thinking random, unrelated sensations are connected.” We can reply, “we can submit our senses and memory to the laws of logic to see that it is so.” And still we can ask, “but what if the very laws of logic are one of the things our minds confuse us about, and logic does not actually exist?”


Even “I think, therefore I am” is open to “but what if your mind is only fooling you into believing existence is necessary for experience?” As far as I can tell, the only statement that begs no “but what if” question is “I experience,” because experience itself is the only thing we experience directly. Of course, “but what if you’re wrong” is a catch-all, but it is not useful because it does not present an alternative possibility.

If you are like me, these questions, like the “why” game, sound silly and annoying. Sure, I can’t prove it, but like, I know I exist. And we have to assume the universe exists, and a lot of other things, if our lives are going to function. Accepting assumptions for their usefulness like this is called pragmatism. We should always be questioning things and exploring possibilities, but in order to function, we have to do things. Socrates, the ancient Greek father of Western philosophy, is credited with saying, “One thing only I know, and that is that I know nothing.” We would be wise to adopt a similar view ourselves, understanding that everything we know is pragmatic, based on assumptions. We should keep in mind that everything, even the most common sense of facts, even the very methods we use to determine what is true, might actually be wrong. I do not mean that we should throw our hands up and abandon everything, only that we should listen to people with alternative views, and be open to well-reasoned arguments.

Friday, June 16, 2017

The Delight of Waiting

It’s that time of the year again. The world's big video game companies have just gotten together at the Electronic Entertainment Expo, E3, to announce new games and release dates. It is a time of excitement, and the internet is full of hype, talking about all the trailers and teasers. But there is a bittersweet side to all this: we will have to wait for months before we can get our hands on those shiny games. In this era of instant gratification, when we have an unfathomable reservoir of entertainment to amuse ourselves with, it can be frustrating to have something withheld from us.


Sometimes what we want is not available. Perhaps we cannot afford it, like a house big enough for a family, or it has not been released yet, like the next generation of video games we just got an avalanche of trailers for, or we have not been able to find it, like the intimate connection of a romantic relationship. It may seem like the things we desire most are always those which are out of reach. This is not just confirmation bias; the grass on the other side of the river always looks greener. When something catches our eye and it is available right now, we can just order it up with a few clicks, and within a couple of days it is ours. We enjoy it for a little while, and then lose interest, finding ourselves back where we were before, wanting something new again.

Does this mean that we are stuck with an absurd hand of cards, plagued with desires that burn white hot within us but can never be sated? Before we despair, let's try looking at things from another perspective. What if it is not things that we truly crave, but the well-earned payoff of a struggle? We find that the more we have to wait for, look forward to, or work hard for something, the sweeter it is when it finally comes to us. Waiting a year or two after seeing the trailer for a movie or video game lets it stew and simmer, the aroma whetting our appetite for the moment it finally arrives.

So when the antsy feeling of anticipation comes, we should not look for ways to distract ourselves from it, but embrace it as an essential part of the greater experience of our desire. The longing, looking forward, imagining what it will be like, is the heat of the cooking fire that makes it fulfilling. Once we adopt this view, we may even find that things which are available now might be better if we wait for another time. The new episode of your favorite show can wait until some evening when you are tired and in the mood to sit back and enjoy something. As for now, I am going to patiently wait for the games we saw at E3, soak in the anticipation so that I am ready to fully enjoy each game when it is released, and in the meantime be happy and work on projects that will pay off in the future.

Friday, June 9, 2017

The Power of the Algorithm

Toolbelt of Knowledge:
Algorithms
Skepticism
Equivalence


We all know what it is like to find better ways to get things done, or to be shown a better way by someone else. Usually, this is due to trial and error. We drive the same route to the grocery store every time until we are hit with a flash of inspiration, and try a different one. If it works better, we make it the new norm. If it does not work as well, we go back to the old way. This is knowledge; we are blind, flailing about in the darkness, and when something works, we repeat it. But there is a Pool of Bethesda, an elixir that gives us sight, that elevates us from mere knowledge to the heights of mind beyond knowledge, which is understanding. This elixir is the algorithm.

An algorithm is a set of exact instructions. We don't normally use algorithms when talking to each other. If you are thirsty, you might say, "would you please bring me a glass of water?" and this is enough for the other person to know what you mean. To instruct someone to bring you a glass of water with an algorithm, you would say:

Stand up, turn toward the water pitcher, take steps until you are within an arm's reach of the pitcher, grab a glass, grab the pitcher, pour water from the pitcher into the glass until the glass is 85% full, put the pitcher down, turn toward me, take steps until you are within arm's reach of me, extend the arm holding the glass toward me, let go when I have the glass securely in my hand.

This algorithm looks obnoxious. It is not clear from this example why algorithms are useful. So let's look at another one:

If the ground beneath you is slanted, accelerate in the direction of steepest descent.

This is a simple algorithm that tells a ball what to do when it is on the ground, with nothing but gravity pushing or pulling on it.
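
To see how even a rule of nature can be written out as step-by-step instructions, here is a toy simulation of that rule in Python. The hill, the numbers, and the time step are all invented for illustration; it is only a sketch of the idea, not real physics.

# A toy version of the ball's rule, advanced in small time steps.
# The hill, the numbers, and the time step are all made up for illustration.

GRAVITY = 9.8     # meters per second squared
TIME_STEP = 0.1   # seconds

def slope(x):
    # A made-up hill that drops steadily to the right, so downhill is the +x direction.
    return -0.2

position = 0.0
velocity = 0.0
for _ in range(50):
    # The ball's entire instruction set: accelerate toward the downhill direction.
    acceleration = -GRAVITY * slope(position)
    velocity += acceleration * TIME_STEP
    position += velocity * TIME_STEP

print(round(position, 1))  # about 25.0: the ball has drifted steadily downhill

Every pass through the loop, the ball consults the same one-line instruction and ends up a little farther downhill, which is all the rule asks of it.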

It is said that math is the language of the Universe. Mathematical equations can be solved with algorithms, even the ones whose solutions cannot be written down exactly; an algorithm can approximate them to whatever precision we need. Algorithms are the link between the abstract, logical world of mathematics and the concrete, physical world of reality. They are the only things that computers understand: instructions for storing and loading data, and for performing tasks based on that data. Even the software that recognizes voices follows algorithms to figure out which algorithm you are trying to tell it to follow.

This is where the real potential of algorithms comes into view. Imagine you have a list of numbers and you want to know which is the biggest. Consider the following algorithm:

Start by writing the first number in the list as “biggest number so far.” Look at the next number in the list. If it is bigger than the biggest number so far, replace the current biggest number so far with the new number. Repeat this for every number in the list.


When this algorithm finishes, we can be completely certain that the number in “biggest number so far” is the largest number in the list. If there were a larger number in the list, it would have replaced the one in “biggest number so far.” This is a relatively simple example, but algorithms can be applied to much more complex questions. We could easily find the largest number on a small list just by looking at it, but this algorithm will just as easily find the largest number on a list with a million entries, even though that would take a human a long time, and he or she might make a mistake. Now we start to see the awesome power of algorithms. Whereas human trial and error always leaves room to wonder whether a better way exists, a properly constructed algorithm can bring you answers with absolute certainty, or at least with as much certainty as is logically possible given the data you have so far.
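
For those who like to see things in code, here is one way the “biggest number so far” algorithm might look written out in Python; the function name and the sample list are just for illustration.

def biggest_number(numbers):
    # Start by writing down the first number as the biggest so far.
    biggest_so_far = numbers[0]
    # Look at every remaining number in the list.
    for number in numbers[1:]:
        # If it is bigger than the biggest so far, it becomes the new biggest so far.
        if number > biggest_so_far:
            biggest_so_far = number
    return biggest_so_far

print(biggest_number([12, 7, 41, 3, 28]))  # prints 41

Whether the list has five entries or a million, the same loop runs and the same guarantee holds.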

Going back to the grocery store example, human guessing always leaves the possibility for a faster route to be found. But there are only a finite number of routes, so one of them must be the fastest. If you use the right algorithm, you can find it with certainty, leaving no possibility for a faster way to be found. A crude version of this algorithm might go something like this:

Given the time it takes to drive down each street, try every possible route, keeping a running total of the time as you go. If the running total passes the currently known fastest time, give up on that route and move on to the next one. If you reach the grocery store and your total time is less than the currently known fastest time, save the route as the new fastest route, and save its time as the new currently known fastest time.

After all routes have been tested, you will know the fastest route, leaving no possibility for a faster one to be found.

But reality is more complicated than that. What about traffic, or weather? Well, we can add those as parameters to an updated version of the algorithm. A parameter is something that can change, and when it does, the result of the algorithm will be different. Every time we think of something new to worry about, we can add a parameter to the algorithm to take care of it. This algorithm is also extremely inefficient. For instance, it includes routes where you just drive around in loops until your running time passes the best known time. But there are ways to make the algorithm more efficient. It could give up as soon as it reaches an intersection it has already visited on the current route. It could backtrack when it runs out of time instead of starting completely over. Slow segments could be remembered and excluded from future iterations. Modern GPS software has advanced, streamlined algorithms that update on the fly, which is how it can tell you the best way to get across the country and still work even if you take a wrong turn.
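
Here is a rough sketch in Python of what the pruned version of the search might look like. The street map, the drive times, and the function name are all made up for illustration, and real GPS software uses far more sophisticated algorithms than this.

# A made-up street map: each intersection lists (neighboring intersection, minutes to drive there).
STREETS = {
    "home":    [("park", 4), ("school", 7)],
    "park":    [("school", 2), ("grocery", 9)],
    "school":  [("grocery", 3)],
    "grocery": [],
}

def fastest_route(start, goal, streets):
    best = {"time": float("inf"), "route": None}

    def explore(here, elapsed, route):
        if elapsed >= best["time"]:
            return  # already slower than the best route found, so give up on this one
        if here == goal:
            best["time"], best["route"] = elapsed, route  # new fastest route
            return
        for neighbor, minutes in streets[here]:
            if neighbor not in route:  # never revisit an intersection, so no driving in loops
                explore(neighbor, elapsed + minutes, route + [neighbor])

    explore(start, 0, [start])
    return best["route"], best["time"]

print(fastest_route("home", "grocery", STREETS))  # (['home', 'park', 'school', 'grocery'], 9)

The two "if" checks are exactly the improvements described above: abandon any route that is already slower than the best one found, and never revisit an intersection on the same route, so no time is wasted driving in loops.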

But there is more. Not only can algorithms tell us the best ways to do things, but they can show us how nature works as well. Everything that exists has a way that it behaves, a nature. If it sometimes acts one way and sometimes acts another way, then there must be a reason why, a more basic nature that it follows all the time, which tells it when to behave this way and when that. Sometimes a ball rolls to the left and sometimes it rolls to the right, but this can be understood by knowing it is always pulled in the most downhill direction. Sometimes it does not roll at all, but bounces or floats. But this too can be understood by knowing that it is pulled on by gravity, among other properties and interactions. Every time we think of an exception to an object’s behavior, the exception can be understood by a deeper knowledge of its nature. The most basic levels of nature, where all behaviors can be understood, are called the laws of physics.

Since everything in reality has a nature, a way that it exists, algorithms are a perfect tool for science. Given a set of assumptions, an algorithm can prove a result. There are even algorithms, built on results such as Bayes' Theorem, which show us which assumptions are most likely to be valid given the data we have collected!
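
As a taste of how Bayes' Theorem turns data into a judgment about assumptions, here is a small worked example in Python. The scenario and every number in it are invented for illustration: a test for a rare condition, and the question of how much a single positive result should change our confidence.

# Bayes' Theorem: P(hypothesis | data) = P(data | hypothesis) * P(hypothesis) / P(data)
prior = 0.01           # P(H): 1% of people have the condition
true_positive = 0.95   # P(D|H): chance the test is positive when the condition is present
false_positive = 0.05  # P(D|not H): chance the test is positive when it is not

# P(D): overall chance of seeing a positive result, from both kinds of people
evidence = true_positive * prior + false_positive * (1 - prior)

posterior = true_positive * prior / evidence  # P(H|D)
print(round(posterior, 3))  # 0.161: a positive result raises our confidence from 1% to about 16%

Even though the test is right 95% of the time, the rarity of the condition keeps the updated probability near 16%, and the calculation tells us exactly how far the data should move our beliefs.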

You may think that there are surely some things algorithms don't apply to. Take human behavior, for example. Humans are notoriously unpredictable, and it is hard to imagine ever finding an algorithm that predicts what a person will do. But humans exist, so we must have some way that we exist, some nature. And having a nature means our behavior can, in principle, be described by an algorithm, if an extremely complicated one. If you are still skeptical, then beyond the logical argument, consider a piece of evidence we have already found: our DNA. DNA tells life what to do, through millions of parameters. It tells our cells how to build a brain, which can adapt to all kinds of circumstances. Everything you could ever possibly do is allowed for by your DNA.

Algorithms, sets of detailed and precise instructions, can bring us to another level of understanding. When properly constructed, they can show us the best ways to do things and what is most likely to be true with mathematical certainty. They are the Universe’s gift bestowed upon us mortals, so that we may understand the reality we find ourselves in. Though we may still need to use trial and error to find the correct algorithms to use, once we have them they validate themselves. As I sit here at my computer, I marvel at the circuits and switches that make it possible for the taps of my fingers to produce letters on the screen—proof of algorithms’ power. As a scientist and a scholar, I owe most of what I know to these abstract formulaic methods. What wonderful things.