When we are children, knowledge is simple. Our parents and other people we trust tell us things, and we believe them. For the purpose of this discussion, I will call this method radical credulity. Of course, now that we are older, we understand that this way of thinking lets incorrect ideas in just as easily as correct ones. This is one reason we keep our kids in safe environments with trustworthy people.
A simple method for separating ideas that are probably incorrect from ideas that are probably correct is to believe the things that are reliably useful. This is called pragmatism. How do we know the Earth is more sphere-like than flat? Because treating the Earth as a sphere gets our airplanes to their destinations, while treating it as flat does not. The pragmatist view is that we believe things that let us reliably predict the consequences of our actions, so that we can effectively do what we are trying to do. It’s as the old defense of the scientific method says: we believe it because it works.
But pragmatism has its shortcomings. For example, most of the time we live as if the Earth is flat, so there is not usually any problem with believing it to be so. However, there are circumstances where this belief could be catastrophic. Of course, the pragmatist will say that we should treat the Earth as flat or round depending on the situation, and that the real truth of its shape doesn’t matter. However, for many of us, it isn’t good enough to believe things because they are useful; we want to believe things because they are justifiably true, and pragmatism cannot give us that.
In the middle of the last century, the psychologist Jean Piaget came up with a theory of knowledge called constructivism, which says we don’t simply acquire knowledge; we construct it as a logical network. When we hear a new claim, we evaluate it by how well it fits with what we already know, and if we find no contradictions, we add it to the network. If we do find a contradiction, we either toss the claim out or reevaluate the belief it conflicts with. Right away, we see something in constructivism that was missing from radical credulity and pragmatism: logic. The beliefs we hold are connected to each other by threads of non-contradiction.
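To make the mechanics concrete, here is a minimal sketch of that accept-or-reevaluate loop in code. Everything in it is a hypothetical illustration (Piaget’s theory is psychological, not computational), and the contradiction check is deliberately naive:

```python
# Toy model of constructivist belief-evaluation. All names here are
# hypothetical illustrations; the contradiction test is deliberately
# naive (it only catches direct negations).

class BeliefNetwork:
    def __init__(self):
        self.beliefs = set()

    def contradicts(self, a, b):
        # Stand-in for real contradiction-checking.
        return a == f"not {b}" or b == f"not {a}"

    def consider(self, claim):
        conflicts = [b for b in self.beliefs if self.contradicts(claim, b)]
        if not conflicts:
            self.beliefs.add(claim)  # fits the network: accept it
        else:
            # A contradiction forces a choice: toss the claim out,
            # or reevaluate the beliefs it conflicts with.
            print(f"{claim!r} conflicts with {conflicts}; reevaluate")

network = BeliefNetwork()
network.consider("the Earth is round")
network.consider("not the Earth is round")  # triggers the conflict branch
```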
However, as we all know, it is possible to have beliefs that are false. Adding a new belief that doesn’t conflict with a false belief doesn’t help us come to the truth. One way to attempt to rectify this is to take the beliefs that we are most confident and passionate about as an immutable foundation, and build our knowledge of the world around them. In philosophy lingo, these beliefs are called basic beliefs.
Of course, if people just take whatever they please as basic beliefs, we will find people with all kinds of beliefs that contradict each other’s, and they’ll stubbornly yell at each other until they’re blue in the face. Faced with this problem, philosophers sought what could be called properly basic beliefs: truths so obvious and undeniable that it is impossible for them to be false. Descartes famously took his own existence to be properly basic, and the philosophy of empiricism holds the validity of logic, mathematics, and observation as such.
Unfortunately, we run into another problem: we cannot agree on what beliefs should count as properly basic! Take any belief that is proposed as properly basic, and you will be able to find people who doubt it. Mathematics? Can be doubted. Objective reality? Can be doubted. “I think, therefore I am”? Can be doubted! What’s more, since properly basic beliefs are supposed to be the foundation upon which all other knowledge is constructed, the only argument that can be made for a belief to be properly basic is, “Can’t you see it’s obvious?” Not exactly up to academic standards!
In the absence of anything that could justifiably be called properly basic, we might, with a heavy heart, be tempted to conclude that knowledge is, in fact, impossible, and that everything is just mights and maybes. This is a pessimistic outlook, and not one most of us are comfortable with. To avoid it, we might adopt a basic belief on radical credulity, usually called “faith” in this context. Or we might revert to pragmatism, and take a belief that has proved reliable time and again as our basic belief.
I, however, subscribe to a third option, and that is to view knowledge in terms of probabilities instead of just yes or no. Although it may be impossible to know anything with a justified certainty of 100% with an infinite number of decimal places, we can be justifiably 90% certain, or 99.999% certain. We may not be able to calculate the numbers, but with practice we can guess the ballpark.
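To make the “ballpark” idea concrete, here is one hypothetical way to cash it out, using Bayes’ theorem to update confidence after a single piece of supporting evidence. The numbers are invented purely for illustration; nothing about the method depends on them:

```python
# Hypothetical illustration: updating credence in a claim after one
# piece of supporting evidence, via Bayes' theorem. All numbers are
# made up for the example.

prior = 0.5            # initial credence in the claim
p_e_given_true = 0.9   # chance of seeing the evidence if the claim is true
p_e_given_false = 0.1  # chance of seeing it anyway if the claim is false

posterior = (p_e_given_true * prior) / (
    p_e_given_true * prior + p_e_given_false * (1 - prior)
)
print(f"{posterior:.2f}")  # 0.90: roughly "90% certain"
```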
How is the level of certainty of a belief determined? By how well it connects into the knowledge network. Reality itself is one giant network where everything connects to everything else, so the larger and more interconnected a person’s knowledge network is, the more likely the beliefs in it are to be true. To understand why, consider the analogy of a jigsaw puzzle. When building a puzzle, there is a small chance that two pieces will fit even though they don’t actually go together. But the chance that the same piece will fit incorrectly on two sides is much smaller. So to be sure you have the right piece, you want to connect it to the picture by more than one side. The chance of it being the right piece is even higher if there is a fourth piece connecting the two connecting pieces together, so that you have a square of four pieces. And the more pieces that can be added on to the connecting pieces, the higher the chance that each of them is the right piece.
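The arithmetic behind the analogy is worth seeing. If each connection has some small chance of being a false fit, and the fits on different sides are roughly independent, those chances multiply. The 1-in-100 figure below is invented purely for illustration:

```python
# Hypothetical numbers: suppose a wrong piece still happens to fit a
# neighbor 1 time in 100. If fits on different sides are roughly
# independent, the chance of a wrong piece passing every check
# shrinks geometrically with the number of connections.

p_false_fit = 0.01  # invented: chance a wrong piece fits on one side

for sides in range(1, 5):
    print(f"{sides} side(s): {p_false_fit ** sides:.0e} chance of a false fit")
```

The same multiplication is why a densely interconnected network is trustworthy: a false belief would have to get lucky on many independent checks at once.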
Knowledge is like that, except there are plenty of extra pieces that don’t belong to the puzzle, the chance of an incorrect connection is much higher, and a piece can hook on to an arbitrarily large number of other pieces, which don’t have to be right next to each other. The knowledge puzzle also scales up to more complex structures. You can have two tightly-knit packages of beliefs with only a few connections between them. Imagine two balls of string tied to each other by three threads. Each ball is tightly wound, so each individually has a high chance of being true, but their connection to each other is tenuous. If you discover that the two packages of beliefs contradict each other, either by learning something new or by thinking about them both in new ways, then you might have to make the tough decision to let one of them go.
The discomfort we feel when we find a contradiction between two sets of our own beliefs is called cognitive dissonance, and depending on the complexity of the beliefs in question, as well as how attached we are to them, it can even manifest as a physical headache. We instinctively want to get rid of the cognitive dissonance as quickly as possible. There are two ways to do this. The first is to commit to whichever beliefs are most important to you, taking them, at least temporarily, as basic beliefs. The second takes longer, but it leaves you in a more stable place: take apart each package of beliefs and reevaluate it in the broader context of your total knowledge network, learning about the relevant topics from a variety of external sources.
A mind well-practiced in the art of knowledge construction will take time every so often to reevaluate the pieces of its knowledge network, to make sure it all fits together properly. There are many techniques for this, which we explore on this blog in the “Toolbelt of Knowledge” series.
There is still one teeny tiny issue with constructivism without basic beliefs, which you may have picked up on. Constructivism itself is a model, a sub-network of nodes within the larger network of a person’s knowledge. In particular, the belief that “the more solidly integrated a belief is within the network, the more likely it is to be true” is itself a node in the network. This means it must be subject to the same reevaluation process as everything else, or be taken as properly basic on faith.
But we don’t do that kind of faith here at SciFic. As you know if you’ve read “The Limit of Philosophy,” we prefer to race headlong into the trippy world of metalogic. So what happens when we allow ourselves to doubt the very method we use to determine what is true? We do the same thing we do with everything else: evaluate it. If it does not measure up to its own standards, then we get rid of it. If it is self-consistent, and we have no alternative method that is more self-consistent, then we might as well use it. But one last question: why should we use self-consistency as the measure of whether a method of determining truth is valid? Because, as human beings, we are psychologically driven toward consistency. That is not a logical reason, of course, but remember: the most fundamental question is not “what is true?” but “what should we do?”, and our actions are driven by our unconscious psychology rather than by logic.
As children, we are told all kinds of claims, which we accept on radical credulity. As we grow, we evaluate new claims by a combination of how useful they are and how well they integrate into our networks of knowledge. A mature, practiced thinker will not take any claim as foundational, but will evaluate and reevaluate every part of their network by how well it connects with the rest. That is knowledge.