Friday, July 27, 2018

What is Knowledge?


What does it mean to know something? We have this thing in our head called knowledge, which is a collection of notions about the world and how it works, and which helps us to act in ways that make sense. But we know that it is possible to be wrong, and wrong knowledge is not knowledge at all, but false belief. At this point in history, when we are waking up from our intellectual bubbles and seeing all the different views people have, many of which seem to go against common sense, it seems like a good idea to take a look at knowledge and find out what it is and how it works.

When we are children, knowledge is simple. Our parents and other people we trust tell us things, and we believe them. For the purpose of this discussion, I will call this method radical credulity. Of course, now that we are older, we understand that this way of thinking lets incorrect ideas in just as easily as correct ones. This is one reason we keep our kids in safe environments with trustworthy people.

A simple method for separating ideas that are probably correct from ideas that are probably incorrect is to believe the things that prove reliably useful. This is called pragmatism. How do we know the Earth is more sphere-like than flat? Because treating the Earth as a sphere gets our airplanes to their destinations, while treating it as flat does not. The pragmatist view is that we believe things that let us reliably predict the consequences of our actions, so that we can effectively do what we are trying to do. It's as the old defense of the scientific method says: we believe it because it works.

But pragmatism has its shortcomings. For example, most of the time, we live as if the Earth is flat, so there is not usually any problem with believing it to be so. However, there are circumstances where this belief could be catastrophic. Of course the pragmatist will say that we should treat the Earth as flat or round depending on the situation, and the real truth of its shape doesn’t matter. However, for many of us, it isn’t good enough to believe things because they are useful; we want to believe things because they are justifiably true, and pragmatism does not do this for us.

In the middle of the last century, psychologist Jean Piaget came up with a theory of knowledge called constructivism, which says we don't simply acquire knowledge; we create it as a logical network. When we hear a new claim, we evaluate it by how well it fits with what we already know, and if we find no contradictions, we add it to the network. If we do find a contradiction, we either toss it out or reevaluate the belief that it conflicts with. Right away, we see something in constructivism that was missing from radical credulity and pragmatism: logic. The beliefs we hold are connected to each other by threads of non-contradiction.
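To make that bookkeeping concrete, here is a toy sketch in Python. This is my own illustration, not anything from Piaget: the belief network is reduced to a bare set of claim strings, and the contradiction check is deliberately naive, catching only a claim and its direct negation. Real knowledge networks are far richer than this, but the add-or-reevaluate loop is the same.

    # Toy model of constructivist belief evaluation (illustrative only).
    # A claim like "not:earth-is-flat" is treated as the negation of "earth-is-flat".

    def negation(claim):
        """Return the direct negation of a claim string."""
        return claim[4:] if claim.startswith("not:") else "not:" + claim

    def evaluate(network, claim):
        """Add a claim if it doesn't contradict the network; otherwise flag it."""
        if negation(claim) in network:
            return "contradiction: reevaluate the new claim or the belief it conflicts with"
        network.add(claim)
        return "accepted"

    beliefs = {"earth-is-round", "not:earth-is-flat"}
    print(evaluate(beliefs, "airplanes-follow-great-circles"))  # accepted
    print(evaluate(beliefs, "earth-is-flat"))                   # contradiction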

However, as we all know, it is possible to have beliefs that are false. Adding a new belief that doesn’t conflict with a false belief doesn’t help us come to the truth. One way to attempt to rectify this is to take the beliefs that we are most confident and passionate about as an immutable foundation, and build our knowledge of the world around them. In philosophy lingo, these beliefs are called basic beliefs.

Of course, if people just take whatever they please as basic beliefs, we will find people with all kinds of beliefs that contradict each other's, and they'll stubbornly yell at each other until they're blue in the face. Faced with this problem, philosophers sought what could be called properly basic beliefs, truths which are so obvious and undeniable that it is impossible for them to be false. Descartes famously took his own existence to be properly basic, and the philosophy of empiricism holds the validity of logic, mathematics, and observation as such.

Unfortunately, we run into another problem: we cannot agree on what beliefs should count as properly basic! Take any belief that is proposed as properly basic, and you will be able to find people who doubt it. Mathematics? Can be doubted. Objective reality? Can be doubted. “I think, therefore I am”? Can be doubted! What’s more, since properly basic beliefs are supposed to be the foundation upon which all other knowledge is constructed, the only argument that can be made for a belief to be properly basic is, “can’t you see it’s obvious?” Not exactly up to academic standards!

In the absence of anything that could justifiably be called properly basic, we might, with heavy heart, be tempted to conclude that knowledge is, in fact, impossible, and that everything is just mights and maybes. This is a pessimistic outlook, and not one most of us are comfortable with. In order to avoid it, we might choose a basic belief on radical credulity, usually called “faith” in this context. Or we might revert to pragmatism, and choose a belief that has proved reliable time and again as our basic belief.

I, however, subscribe to a third option, and that is to view knowledge in terms of probabilities instead of just yes or no. Although it may be impossible to know anything with a justified certainty of 100% with an infinite number of decimal places, we can be justifiably 90% certain, or 99.999% certain. We may not be able to calculate the numbers, but with practice we can guess the ballpark.

How is the level of certainty of a belief determined? By how well it connects into the knowledge network. Reality itself is one giant network where everything connects to everything else, so the larger a person’s knowledge network and the more interconnected it is, the more likely the beliefs in the network are to be true. To understand why, the jigsaw puzzle analogy is apt. When building a puzzle, there is a small chance that two pieces will fit, even though they don’t actually go together. But the chance that the same piece will fit incorrectly on two sides is much smaller. So to be sure you have the right piece, you want to try to connect it to the picture by more than one side. The chance of it being the right piece is even higher if there is a fourth piece connecting the two connecting pieces together, so that you have a square of four pieces. And the more pieces that can be added on to the connecting pieces, the higher the chance of each of them being the right piece.
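To put rough numbers on the intuition (the figures here are invented purely for illustration): suppose the chance that a wrong piece happens to snap onto one neighboring piece is 1 in 20. If the fits are roughly independent, the chance of the same wrong piece fitting two neighbors is 1 in 400, and three neighbors, 1 in 8,000. Every additional connection multiplies away another factor of twenty, which is why a belief that locks into many others becomes vanishingly unlikely to fit by coincidence alone.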

Knowledge is like that, except there are plenty of extra pieces that don't belong to the puzzle, the chance of an incorrect connection is much higher, and the pieces can hook onto an arbitrarily large number of other pieces, which don't have to be right next to each other. The knowledge puzzle also gets scaled up to more complex levels. With knowledge, you can have two packages of tightly-knit beliefs, but these packages only have a few connections between them. Imagine two balls of string connected to each other by three threads. Each ball is tightly connected, so they each individually have a high chance of being true, but their connection to each other is tenuous. If you discover that the two packages of beliefs contradict each other, either by learning something new or by thinking about them both in new ways, then you might have to make the tough decision to let one of them go.

When a contradiction is found between two sets of beliefs that one holds, it is called cognitive dissonance, and depending on the complexity of the beliefs in question, as well as how attached we are to them, it can manifest as a physical headache. We instinctively want to get rid of the cognitive dissonance as quickly as possible. There are two ways to do this. The first is to commit to whichever beliefs are most important to you, taking them, at least temporarily, as basic beliefs. The second takes longer, but it leaves you in a more stable place: take apart each package of beliefs and reevaluate them in the broader context of your total knowledge network, learning about the relevant topics from a variety of external sources.

A mind well-practiced in the art of knowledge construction will take time every so often to reevaluate the pieces of its knowledge network, to make sure it all fits together properly. There are many techniques for this, which we explore on this blog in the "Toolbelt of Knowledge" series.

There is still one teeny tiny issue with constructivism without basic beliefs, which you may have picked up on. Constructivism itself is a model, a sub-network of nodes within the larger network of a person’s knowledge. In particular, the belief that “the more solidly integrated a belief is within the network, the more likely it is to be true,” is itself a node in the network. This means that it must be subject to the same reevaluation process as everything else, or be taken as properly basic on faith.

But we don’t do that kind of faith here at SciFic. As you know if you’ve read “The Limit of Philosophy,” we prefer to race headlong into the trippy world of metalogic. So what happens when we allow ourselves to doubt the very method we use to determine what is true? Well, we just do the same thing we do with everything else: evaluate it. If it does not measure up to its own standards, then we get rid of it. If it is self-consistent, and we don’t have any alternative methods that are more self-consistent than this one, then we might as well use it. But one last question: why should we use self-consistency as a measure for whether a method of determining truth is valid? Because, as human beings, we are psychologically driven toward consistency. Of course, that’s not a logical reason, but remember, the most fundamental question is not “what is true?” but “what should we do?” and our action is driven by our unconscious psychology rather than logic.

As children, we are told all kinds of claims, which we accept on radical credulity. Then, we evaluate new things by a combination of how useful they are and how well they integrate into our networks of knowledge. A mature, practiced thinker will not take any claim as foundational, but evaluate and reevaluate every part of their network by how well it connects with the rest. That is knowledge.

Friday, July 13, 2018

Beneath the Words



Why do we speak? Why do we write? It's because we have something to communicate, so we do it with the words whose definitions represent it most correctly, right? No, not usually. We naturally think that whenever we speak, the most important thing is the topic of conversation, but really the content of the words rarely matters; it's the context that lies beneath them that is important. A person wants to convey a feeling, or connect with another person, or find mental stimulation, or show dominance, and so they choose their words, tone, and body language in order to do it. The literal definitions of the words are often little more than a distraction.

When we engage in small talk, we aren’t really talking about our relatives and the weather, but showing the other person that they are worth our time. Or, we might be showing off our competence, that we have something useful to offer about any topic the conversation turns to. Or, by uttering specific phrases or sympathies with certain viewpoints, we might be signaling different aspects of ourselves, to see if we can connect with the other person.

Oftentimes people will repeat sayings or claims in order to signal they belong to a group. “Government isn’t the solution, it’s the problem,” “all white people are a little bit racist,” “Make America great again,” and “The Lord is risen,” are all examples. People who say these things may believe that they’re conveying a literal message, but really they’re feeling out whether the person they are talking to belongs to the same group as they do.

When someone says, "global warming is a hoax," they aren't talking about global warming. Instead, they are signaling to their in-group, as described above, but they are also doing something else. Because the claim contradicts the words of those in power, saying it gives them a thrill of freedom, effectively declaring, "you can't tell me what to say." Alternatively, when a representative of an oil company says the same words, they are trying to make money. Neither is actually talking about the scientific phenomenon of global warming.

When does the literal content of the words matter? Remember, the fundamental question is what we should do, not what is true. Everything in our lives comes down to determining what actions we choose and how much effort we put into them. Thus, the literal meanings of our words are only directly important when those who are speaking are trying to get those who are listening to cooperatively do something. More specifically: instructions, laws, teaching, and problem solving.

But what about philosophy? What about intellectual conversations and blog discussions? Aren't those impossible without taking the words at their literal meanings? That's true, but the meanings of the words are only important because they facilitate the purpose of the discussion, which is to engage the rational part of the brain. Let's take this blog, for example. It's very analytical, talking about my thoughts and observations about things. Its lifeblood is the content of the words. But it's not these words that matter; I could talk about any of a million things, and it would serve its purpose just fine. That purpose is to think out loud, to arrange my thoughts and perceptions into a picture that makes sense. And your purpose for reading it is most likely the same.

But seriously, I'm a writer. Shouldn't I, of all people, believe in the value of words? Of course I do. It's just that, as a writer, it's more important to me than to anyone to recognize that it's not the definitions of the words that are most important, but understanding the purpose beneath the words, so that I can use them in the best way to fulfill that purpose. A good story needs to make logical sense in order to keep the logical part of the brain satisfied, but more importantly it has to engage the reader's emotions and imagination. If a story just recounts how this happens and the characters go there and do that, it's boring and not worth reading. This is where the power of songs and poems comes from. You've probably noticed that the best ones aren't the ones that tell a logical story or are literally descriptive, but the ones that stir the emotions and the imagination. It's not the words that matter, but the purpose beneath the words.


Friday, July 6, 2018

A House United


This Fourth of July, I find myself thinking about the wellbeing of the United States. We find ourselves politically polarized, with a populist President whose hobby is trolling the world on Twitter. Income inequality continues to grow, with the working middle class staying where it is or slipping into poverty. Worries of global warming and technology getting out of hand fester beneath the ever-present slumbering form of nuclear war.

There are two general methods people believe will solve our problems. If you're on the political left, you might think the government needs to get its act together, increasing taxes on the rich and on industries which harm the environment, and increasing spending on the poor and on organizations and institutions that foster creativity, like education. If you're on the political right, you might think the government needs to pull back on regulations and taxes that keep businesses from utilizing the power of the market to its full potential.

On top of that, there is a gargantuan mess of social issues that our political groups have grabbed onto, so that if you disagree on what I just mentioned above, you probably disagree on the rest too. Regulation of weapons. Abortion. Equity for women, racial minorities, and LGBT+ people. For these and many more, the conversation rarely goes further than whether we are for it or against it, so that we can gauge whether the person we are talking to belongs to our group or the other group. This mindset is called tribalism, and it is a tragedy, because it treats important, complex, nuanced issues as nothing more than a means to determine sides.

There was a famous experiment called the Robbers Cave Experiment, named after the park where it took place, conducted by social psychologist Muzafer Sherif. At a summer camp, a group of boys did bonding activities and came to like each other. A few days in, it was revealed that there was another group of boys at the park. This group had gone through the same program as the first group, but neither of the groups had known about the other. Now that they did, phase two of the experiment began.

The groups were given activities where they had to compete with one another. The group that won got the prize, and the group that lost got nothing. After a few days of this, the groups hated each other. They called each other names, stole, and even got violent. At this point, the experimenters knew it was time for phase three.

In phase three of the experiment, a series of crises came up, fabricated by the experimenters. These crises affected all of the boys in both groups, and the only way to solve them was for everyone to work together. After a few of these crises, the groups started getting along, despite the bad blood that had built up during phase two.

The Robbers Cave Experiment suggests that when we humans compete, we think negatively about each other, but when we cooperate, we think positively about each other. Of course, it is not conclusive that this applies to everyone, since all of the subjects were boys with similar backgrounds. But the result feels right; we can put ourselves in the boys' shoes and see ourselves behaving the same way. The experiment was also carried out sixty years ago, and research in the field since then has broadly agreed with its results.

Progressives and conservatives, however, are not the same, on average. In the Big Five theory of personality, people high in openness to new ideas and experiences are more likely to be politically progressive—wanting to try new things to make the world better—and people high in conscientiousness are more likely to be conservative—wanting to make sure the world keeps running and doesn't fall apart. And while it is not a hard and fast rule, and personality traits are malleable with effort, the fact that these traits are partly heritable suggests that political orientation has a genetic component as well.

If you take a moment to think about what "progressive" and "conservative" mean (making things better, and not making things worse), you will see that both perspectives are absolutely necessary for keeping a society alive. There isn't a right side and a wrong one here. We, the US—no, the world—need both conservative and progressive heads at the table in order to make it through our 21st-century problems. The great human endeavor did not start on July 4, 1776, nor did it end there. Let's stop competing over the country, and take a serious look at the issues. Let's listen to each other, and work together to create a country and a world where all people can live freely and safely to pursue their own happiness in whatever form it takes.