Think Logically
Lesson 4: Follow Reason (2 of 2)
Reason, we’ve said, is the faculty that takes us from the perceptual level to the conceptual level. It allows us to classify things, to form generalizations, to make judgments, to project the far-off future and analyze the distant past. It is what makes us thinkers.
But just as we’re born not knowing how to walk, we’re born not knowing how to think. The difference is that everyone eventually learns how to walk. You don’t find freshmen rolling down the halls of their high school. You do, however, find plenty of grown men and women uninterested in what’s true, or unable to see through lousy arguments and bullshit claims.
Becoming a thinker starts with caring what’s true. And here the biggest risk isn’t that you’ll openly declare, “To hell with the facts, I want this to be true.” It’s that you’ll engage in self-deception through motivated reasoning. Motivated reasoning isn’t true reasoning, but the pretense of reasoning: the goal is not actually to reach the truth, but to prop up your current set of beliefs, defend your self-image, protect yourself from painful emotions, and look good to your peers. You seek out evidence to confirm what you want to believe, ignore evidence that conflicts with what you want to believe, and reinterpret what you can’t ignore to avoid changing your mind.
I find that I’m much more prone to this on personal issues than intellectual issues. On intellectual issues, particularly when I know my views are outside the mainstream, my natural inclination is to wonder: Why do I think I’m right and so many intelligent people are wrong? Are they seeing something I’m not? What are the best arguments against my position? What biases do I have that might prejudice me? But on personal issues, I’m much more likely to dig in my heels and become defensive if I’m being criticized. “What do you mean I’ve been aloof and ignoring your needs? I asked you last Thursday how you were doing!”
Motivated reasoning is such a seductive trap that it’s not enough to set genuine reasoning as an intention. You have to actively work to expose and uproot any self-deception. Charles Darwin, for instance, made this an explicit policy as he developed his theory of natural selection.
I had also, during many years, followed a golden rule, namely that whenever a published fact, a new observation or thought came across me, which was opposed to my general results, to make a memorandum of it without fail and at once; for I had found by experience that such facts and thoughts were far more apt to escape from the memory than favourable ones.
In The Scout Mindset, Julia Galef recommends using various thought experiments to push back against motivated reasoning. Whenever you’re thinking through an issue or making a decision, you should ask yourself: If my incentives were different, would I reach a different conclusion? If I were an atheist, would I find this argument for God persuasive? If I were in my partner’s shoes, would I consider withholding this information a lie? If my peer group held the opposite view, would I still defend my current view? If this study contradicted rather than supported my position, would I still find it persuasive?
Galef says this last question dramatically changed her approach to her book. During her research, she found a paper claiming that motivated reasoning causes people to have more success in life. She was certain it must have major methodological flaws, and sure enough, she found them. “Then, somewhat grudgingly, I did a thought experiment.” She asked herself, what if the study had supported her thesis? “In that case, I realized, my reaction would have been: ‘Exactly as I suspected. I’ll have to find a place for this study in my book!’” This realization led her to reexamine all of the studies she was planning to cite with the same rigor she would have used had they contradicted her thesis. “Sadly, this ended up disqualifying most of them.”
It’s not enough to feel that you’re being rational. Rationality requires actually being rational. It requires putting in the work to ensure that you’re following the evidence wherever it leads rather than stacking the deck in favor of what you want to believe.
Becoming a thinker doesn’t just mean caring about what’s true in some narrow sphere. It means cultivating a deep curiosity about the world, other people, and yourself. In particular, a thinker is curious about causality: he wants to understand how things work. Not only to know who won the Battle of Shiloh, but to understand why the American Civil War was fought, why the Union won, why Reconstruction failed, how that impacted black Americans in ways that continue to have effects today.
At a more personal level, let’s say you’re a student struggling to maintain good grades. Or maybe you’re acing your classes, but at the expense of any fun or social life. To be a thinker is to ask why. Why am I struggling while others succeed? Why am I in the library twelve hours a day while some of my classmates are thriving both in class and out of it? Can I do better?
When Cal Newport, now the best-selling author of books such as Deep Work, arrived in college, he noticed he was spending hours and hours on his schoolwork, reading and re-reading textbooks and class notes until the information (hopefully) made its way into his head, constantly feeling behind, and pulling no small number of all-nighters.
It was a truly chaotic existence. But when I looked around, all of my friends seemed to be having the same experience—and none of them seemed willing to question it. This didn’t sit right with me. I wasn’t content to work in long, painful stretches and then earn only slightly above-average grades for my efforts.
So Cal became curious: how could he become more efficient? He started experimenting and ultimately found a method that allowed him to achieve straight A’s while spending less time studying. “By my senior year it got to the point where, during finals periods, I would sometimes pretend to be heading off to the library just so I wouldn’t demoralize my roommates, who were preparing for yet another grim all-nighter.”
He didn’t stop there. Cal started questioning other students who were able to perform at a high level while remaining relaxed and engaged with college life, trying to identify what they were doing that produced those results. He would go on to write a book on strategies for overcoming procrastination, taking targeted notes, preparing efficiently for exams, and writing standout term papers. Being curious about cause and effect is how we learn—and how we thrive.
But curiosity is itself a skill we can practice and improve. It involves asking questions, and we can learn which kinds of questions are most fruitful to ask. Some of the most important questions a thinker asks include:
“What is it?” This is the question that helps us identify the nature of the things we deal with.
“Why?” This is the question that helps us think causally and allows us to understand the past.
“What for?” This is the question that allows us to project long-range purposes and invent the future.
In the rest of this lesson, we’ll probe the kinds of questions and strategies that allow you to gain, validate, and use knowledge. The questions and strategies that will help you distinguish truth from falsehood and integrate what you know into an ever-expanding sum.
Learn to conceptualize
Reason operates by concepts. Aside from proper nouns, all words stand for concepts. Dog, cat, computer, neutron, justice—these and countless other concepts reflect our ability not just to see and hear the things in front of us, but to grasp increasingly complex relationships between the things we see and hear (and the complex relationships between things so small, large, or distant that we can never see or hear them).
A full theory of how and when to form concepts is outside the scope of this book. For those interested, I recommend starting with philosopher Harry Binswanger’s book How We Know. For our purposes, the key idea is that we can’t just arbitrarily group things together using concepts. Taking dogs, trash, and sulfur and calling them all “stinkies” is a mental dead end. Almost nothing you learn about a dog will apply to sulfur.
Concepts work by grouping together things that are essentially similar so that we can apply what we discover about some of the units to the others. We couldn’t grow food if seeds didn’t share certain properties. We couldn’t generate electricity if turbines didn’t share certain properties. If we weren’t confident there were essential similarities between airplanes, then no way in hell would we risk stepping aboard a passenger jet.
But the fact that concepts aren’t automatically valid carries with it an enormous responsibility. We have to do the work to make sure our concepts are valid. Most people don’t. Our most disastrous thinking errors often are not the result of fallacies like appeal to authority or begging the question, let alone formal deductive fallacies like affirming the consequent. All too often, our most disastrous thinking errors come from embracing illegitimate concepts.
For a concept to be valid, you have to be able to answer the question: “What facts of reality give rise to the need for this concept?” You need to know clearly what the units of the concept are, what they’re being distinguished from, and why it’s legitimate to treat them as units, i.e., why they are essentially similar. Many concepts don’t meet these criteria.
Some concepts don’t refer to anything at all. This includes concepts that are mystical in nature (“god,” “angel,” “afterlife”). It also includes bad philosophic concepts, like Kant’s “noumena” and Hegel’s “dialectic.” And it includes certain scientific concepts, such as “epicycle.” The point is not simply that these concepts refer to things we can’t perceive. We can’t perceive electrons. But we can infer the existence of electrons from what we do perceive. Concepts that lack units refer to things that cannot be connected at all to perceptual reality. (We do have legitimate concepts for imaginary things, like “Wookiee,” but in this case the units are “an alien species in the fictional world of Star Wars.”)
Far more important are concepts that misclassify things that do exist. Sometimes we treat superficially different things as essentially different. People who argue that gay couples shouldn’t be able to marry because marriage “is between a man and a woman,” for example, take an essentially similar phenomenon—a legal union of romantic partners committed to building a life together—and insist on an artificial distinction between opposite-sex couples and same-sex couples. Or take the concept “racism.” Some argue that racial minorities cannot be racist—that racism equals prejudice plus power. But this means drawing an artificial distinction within something that’s essentially the same: judging someone based on skin color.
The most common conceptual error, however, is treating superficially similar things as essentially similar. Rand calls these “package deals.” “‘Package-Dealing’ is the fallacy of failing to discriminate crucial differences. It consists of treating together, as parts of a single conceptual whole or ‘package,’ elements which differ essentially in nature, truth-status, importance or value.”
Package deals are everywhere. “Stakeholder,” for example, treats as essentially similar a company’s shareholders, employees, customers, local communities, and the government. All of these are lumped together as people “affected by the business,” ignoring a crucial distinction between the owners of a business, who have the right to control and profit from it, and groups whose main choice is whether or not to voluntarily deal with the company. The concept “stakeholder” obliterates that difference in order to strong-arm business owners into surrendering control and profits to non-owners.
Or take the concept “judgmental.” This concept treats as essentially similar uninformed, prejudicial judgments about people and informed, rational judgments. It equates someone who says, “Zoomers are lazy and entitled” with someone who thoughtfully concludes, “Lucas is lazy and entitled.” The result is that we’re taught to view all negative judgments, particularly negative moral judgments, as wrong per se (except the negative moral judgment that someone is judgmental).
Or take the concept “selfishness.” People often call two radically different kinds of people selfish—the short-term, predatory huckster, and the virtuous person who seeks his own happiness rationally; an Elizabeth Holmes and a Steve Jobs. The implication of this package deal is that our basic moral choice is either to lie, cheat, and steal without regard for other people—or to sacrifice our own interests for other people. There’s no category for the person who pursues his own interests, neither sacrificing himself to others nor others to himself.
The test for package deals is to look at how the concept is used in practice, to look at the specific concretes it is meant to apply to, and ask: do these things really belong together? Or is there some important difference that’s being ignored or denied? Is the concept equating the moral and the immoral, the true and the false, the rational and the irrational, the important and the unimportant?
Bad concepts equal bad thinking. To use a concept is to make a declaration: “This is the right way to look at the world.” Don’t be a conceptual slut. Do not use a concept unless you know exactly what it means and you’re rationally convinced it gets reality right.
Validate and Connect
Your senses give you direct access to reality. Your concepts allow you to go beyond what you perceive, to acquire knowledge that applies to all human beings, all organisms, all matter. You use your concepts to make judgments: this is true, that is good, this is false, that is bad.
Just as your concepts aren’t automatically legitimate, the judgments you make with these concepts aren’t, either. You need to work to ensure your ideas conform to facts. At the most basic level, this means distinguishing the cognitive from the non-cognitive: drawing a line between what I observe and infer on the one hand—and what I feel and what others say on the other.
Typically, this is what people mean when they talk about objectivity. Being objective means going by the facts—regardless of your (or anyone’s) wishes, hopes, fears, or desires. That’s a great start, but it’s only a start. To truly be objective, you need to self-consciously apply a method that validates your knowledge—a method that keeps your ideas connected to facts and that allows you to reliably go from what you perceive to what you don’t.
All genuine knowledge consists of what you directly perceive or what you logically infer from what you perceive. But what does it mean to be logical? Traditional logic classes focus on deductive arguments. You start with a general proposition and apply it to a less general case. All men are mortal; Socrates is a man; therefore, Socrates is mortal. At each step you’re guided by the basic law of logic: the law of contradiction. Since A is A, the same thing cannot be both A and non-A at the same time and in the same respect. Since contradictions can’t exist in reality, they must not exist in thought.
Deductive reasoning is vital, but logic is about far more than deduction. “Logic,” in Rand’s definition, “is the art of non-contradictory identification.” It includes both deduction and induction (forming generalizations). And, importantly, logic isn’t primarily about evaluating a single argument out of context. To be logical is to seek to integrate all your knowledge into a consistent whole—a whole rooted in the world you perceive through your senses. To achieve that goal requires two basic processes: reduction and integration.
Reason requires reduction
What ties your knowledge to reality? Think of a chain you use to tie your dog to a tree. If you wanted to know with total confidence that your dog wouldn’t escape, you’d start at her collar and check each link. You’d go back, link by link, to make sure the chain was strong, until you reached the starting point, the tree.
We learn by building more abstract knowledge on less abstract knowledge. We learn to count. Then we use that knowledge to learn to add and subtract. We use that knowledge to learn to multiply and divide. Then algebra. Then calculus.
Conceptual knowledge exists in a hierarchy—more abstract knowledge is built on less abstract knowledge. To validate conceptual knowledge means to reduce it by going back through that hierarchy, working to take less solid knowledge back to more solid knowledge—ultimately to what you can directly perceive.
Think of Darwin. After years as a naturalist, he developed a hypothesis about the origin of species: they evolve through the mechanism of natural selection. But he didn’t stop there. He wanted to know: is this really true?
After Darwin developed his hypothesis, he spent the next two decades trying to establish whether it was true. To do that, he had to be able to answer questions like, “Why do I believe that there is sufficient natural variation in organisms to allow for natural selection to take place?” He confirmed that in part by spending the better part of a decade studying barnacles and concluded that, yes, you do observe significant variation among specimens.
Or, “Why do I believe that natural processes can account for the global distribution of species?” He wondered whether it was possible for a seed to float great distances across the ocean and take hold in a far-off location. When botanists told Darwin the salt water would kill the seeds, he ran his own experiments and found that, in fact, seeds could be immersed in salt water and still germinate a month later.
Or, “Why do I believe that the minor variation we see in nature is capable of producing an entirely new species?” One line of evidence came from what he called artificial selection. He found, for example, that selective breeding of pigeons by human beings could produce a new species over the course of only a few centuries.
Whether you’re assessing a complex scientific theory, a political policy, or a career change, the key to making sure your ideas are connected to reality is to ask of any idea you hold: “Why do I believe this is true?”
For example, you read an article claiming that we need socialized medicine in the United States, and you find yourself getting angry. You say to yourself: “I disagree with this, but why do I believe that socialized medicine would be bad?”
Let’s say you can’t come up with any reasons. What you don’t do is google “why socialized medicine is bad,” click on the first study you see, and conclude: ah hah! Socialized medicine is bad because the Journal of Truth did a study that shows Canada’s system has longer wait times for cancer treatment than the US! That’s a prime example of motivated reasoning.
What you’re trying to get at is the actual reasons that persuaded you of an idea in the past. If you really can’t recapture that, or if you never did go through a thought process where you considered the evidence for and against socialized medicine, then you have no business believing it’s bad (or good). You have to file that conclusion as “A plausible idea I need to think more about.”
Now, let’s say you ask yourself, “Why do I believe socialized medicine would be bad?” and you do come up with an answer: socialized medicine destroys medical progress. Great. Now why do you believe that? You think: socialized medicine makes healthcare “free” at the point of purchase. But it still has costs: the medical staff, the medical equipment, the drugs, the hospital, the electricity that powers the hospital. So, you think, what’s the impact if a person can use these resources without paying for them?
Just as a “free” restaurant would see its costs skyrocket as everyone ordered steak and lobster, so socialized medicine would cost taxpayers an unsustainable amount as people demanded the best tests and the most expensive treatments. Eventually, the government would have to control costs by rationing care. You’d get huge waiting lists for treatment, as we see in Canada and the United Kingdom. Drug companies and medical-device makers, meanwhile, would have to accept a tiny fraction of what they earn today, meaning they could do less R&D, and certainly wouldn’t invest in highly expensive, speculative treatments—say, mRNA vaccines—since they wouldn’t be able to recoup their costs. You might think: this is why most medical innovation happens in the US, precisely because drug prices aren’t fixed by the government.
In most contexts, that’s a sufficient reduction. You’ve taken your idea, “socialized medicine is bad,” and you’ve made it more precise, closer to what you can perceive. It’s not a full reduction because you’re not literally going all the way down to the perceptual level. If you later get into an argument or want to write a book on freedom in healthcare, you might have to do more work to fully reduce the idea. But for now you’ve done the work to know what you believe and why you believe it.
Reason requires integration
The most obvious fact about great thinkers is that they see connections no one else has noticed. Maybe the most striking example in history is Newton grasping that the same force that causes objects to fall to the earth causes the motions of planets in the sky. But more relatable examples abound. I always think of a young Steve Jobs helping to explain computers to a world unfamiliar with them as “a bicycle for your mind.”
Mental connections are integrations. All knowledge involves integration. Concepts integrate percepts. Generalizations integrate observations. Principles integrate generalizations. Philosophy integrates principles into a single, unified, consistent view of the world.
With each step you can see more of reality and see it more clearly. When Newton integrated planetary motion with terrestrial motion, that allowed us to apply what we learned about terrestrial motion to astronomy and vice versa.
Integration doesn’t just expand your knowledge—it checks it. Integration is how you discover contradictions among your ideas. As Rand puts it, “No concept man forms is valid unless he integrates it without contradiction into the total sum of his knowledge. To arrive at a contradiction is to confess an error in one’s thinking; to maintain a contradiction is to abdicate one’s mind and to evict oneself from the realm of reality.”
We’re all familiar with people trying to trap us in an argument by showing that we’re contradicting ourselves. “You believe this. You believe that. But this and that are inconsistent, so you must be wrong about this, that, or both.” People pointing out your inconsistencies isn’t always fun, but it is a gift. To the extent the contradiction is real, your critic is helping you integrate.
But the value of integration in rooting out contradictions goes far beyond intellectual debates. How many times have you been tempted to tell a “white lie” to a friend to protect their feelings? “Oh, no. You haven’t gained weight.” “That haircut looks great on you.” “Bro, someone needs to call the police because those guns should be illegal.” But think about what you’re actually doing when you tell these kinds of lies. On the one hand, you want the best for your friend. On the other hand, you’re not giving them the information they need to make good decisions. You’re treating them as children incapable of handling the fact that their weight is unhealthy, their haircut isn’t flattering, their workout regimen isn’t panning out. Well, that’s a contradiction. “I want the best for my friend” and “I’m lying to my friend” don’t integrate.
Integration doesn’t happen automatically. It requires volitional effort. You have to choose to integrate—you have to actively work to relate your knowledge. What does this mean in practice? Integration doesn’t mean that every time you hear an idea you sit down and go through every other thing you know in search of connections and contradictions. At the simplest level, it just means asking yourself questions: “What does this remind me of? What other things is this related to? Do I sense that this is connected to anything else I know, and if so, can I make that dim sense more vivid?”
Integration is what makes knowledge useful. An idea disconnected from the rest of your ideas isn’t knowledge. Your intellectual firepower consists of your ability to bring the full sum of what you know to every issue you encounter.
Follow evidence
One of the most important thinking skills you can develop is “mental filing.” Most people engage in haphazard mental filing. Every idea in their head has the same standing: “Stuff I believe.” So whether it’s 2+2=4, slavery is evil, Epstein didn’t kill himself, matter is made up of atoms, the FDA saves lives, there is life on other planets, or a vegan diet is the healthiest way to eat, they make no cognitive distinctions. The same goes for new ideas they encounter: the reaction is binary, either “I believe this” or “I don’t believe this.”
Mental filing means expanding your cognitive vocabulary and then carefully assessing ideas accordingly. For example, it’s valuable to have a file for “Interesting things I’ve heard.” These are ideas that are plausible, but where you haven’t done the work to assess them. “The world is made out of atoms.” You’ve heard it since you were a kid, but unless you studied at least some of the steps scientists went through to prove it, you don’t know it.
Another valuable file is, “Things I find confusing.” Typically, people treat anything that confuses them as false. Worse, if it comes from an authority they respect, it can get labeled in effect as, “Something I believe but don’t understand.” Proper mental filing means putting stuff that you don’t understand into the “confusing” folder until you eventually come to understand and evaluate it.
Maybe your most important set of folders is for ideas where you have to assess evidence.
Assessing evidence
Some ideas are binary: you either know them or you don’t. “There’s milk in the fridge.” There’s no collecting of evidence—you just go look in the damn fridge. But many ideas require you to collect and assess evidence over time, and there your knowledge moves through stages. You start out not knowing something. Then you get a little evidence for it—it’s possible. You get more evidence—it’s probable. You get sufficient evidence—it’s certain. Good thinking requires knowing where you are in that progression and filing ideas in the appropriate evidentiary folder.
One of my favorite shows is the History Channel’s Pawn Stars. It’s about a real pawn shop in Las Vegas that specializes in rare, high-end items. In one episode, a guy brings in a guitar he claims was owned by Jimi Hendrix. The question is: Is this really Hendrix’s guitar? The pawn stars bring in an expert.
The expert examines the guitar carefully. It’s a white ’63 Fender Stratocaster, and Hendrix was known to have played that type of guitar. At this point the expert might think: it’s possible this was Hendrix’s guitar. It’s the right kind of guitar from the right time period. That constitutes some evidence. But it’s not sufficient. There were lots of ’63 Fender Stratocasters not owned by Hendrix.
Next, the expert observes that the guitar has scuff marks on the top of the neck, which indicate that it had been played by a left-hander. The guitar also has a whammy bar that’s been straightened, which Hendrix was known to have done. The owner then shows the expert photos of Hendrix playing a guitar that looks exactly like the one in the shop. Now the expert can say: this is probably Hendrix’s guitar.
Finally, the owner shows the expert documents that explain the guitar’s “chain of custody” from Hendrix to intermediaries and finally to him. Once the expert sees that the guitar’s serial number matches the serial number listed in the documentation, he has sufficient knowledge to conclude: “Yes, this was Jimi Hendrix’s guitar. I’m certain of it.”
Notice what the expert is doing. He is taking the idea, “This is Jimi Hendrix’s guitar,” which he doesn’t yet know to be true, and connecting it to what he does know. He is able to see that everything about this Strat is consistent with the claim that it was owned by Hendrix, that nothing is inconsistent with that claim, and that there is sufficient evidence to meet the standard of proof used to authenticate memorabilia.
That, in essence, is the process that’s involved in assessing evidence: you define a standard of proof, and then you evaluate the extent to which the evidence you have meets it. If you meet the standard of proof, then there are no longer rational grounds for doubting the conclusion.
Note that it isn’t easy to define a standard of proof. It takes real thought and expertise. For example, an authenticator has to know a lot about what can and can’t be faked by counterfeiters and formulate the standard of proof in such a way that it guards against fakes. Similarly, a scientist can’t leap from “the evidence is consistent with my hypothesis” to “therefore the evidence supports only my hypothesis.” He has to know enough to be able to say, “This is the range of rational hypotheses, and so this evidence supports my hypothesis and only my hypothesis.”
The case for certainty
Certainty has a bad rap today. The one thing you’re allowed to be certain of is that no one can be certain of anything. But that’s because almost everyone misunderstands what certainty is.
The basic building block of epistemology is the concept “fact.” A fact is some aspect of reality that is what it is whether we know it or not. Three hundred years ago, it was a fact that matter was made up of atoms—but it was a fact that nobody knew.
The concept that distinguishes ignorance from our grasp of a fact is “knowledge.” Knowledge, as Rand puts it, is “a mental grasp of a fact(s) of reality, reached either by perceptual observation or by a process of reason based on perceptual observation.” A scientist who understands the evidence for the atomic theory has knowledge: “Atoms are real.”
“Fact,” then, is a metaphysical term. “Knowledge” is metaphysical and epistemological. “Certainty” is a purely epistemological term: it says that you’ve met the standard of proof and are entitled to regard your conclusion as knowledge. It is illogical not to believe the conclusion.
But this has an important implication: you can be certain of something—and later discover that your conclusion was imprecise or wrong. “Certainty” doesn’t mean you’re omniscient or infallible. It means you’ve gone through the process to achieve knowledge, and there are no longer grounds for doubt.
But human knowledge doesn’t stand still. You continue to learn and to expand your knowledge. This can lead you to qualify a past conclusion. For example, Newton discovers the laws governing the behavior of macroscopic objects. Einstein later comes along and qualifies what Newton discovered: Newton’s laws apply only at small relative velocities and in weak gravitational fields, and only to a certain degree of precision. This new knowledge doesn’t overthrow old knowledge—it expands it. Einstein didn’t invalidate Newton: he discovered something more than Newton did.
But new knowledge can also uncover old errors. I served on a jury once in a spine-chilling stalking case. The evidence for the defendant’s guilt was overwhelming. He briefly dated the woman in question. Then he started acting strange and aggressive and she broke off the relationship. After that, she started receiving threatening texts—from a disposable cell phone bought with the defendant’s credit card. The cell phone company could place the defendant’s regular cell at the location where the texts were sent. The victim saw the defendant around her house at odd times. Fliers were plastered around her work calling her a slut and whore—and police confirmed that someone who looked like the defendant was seen passing them out. The defense blamed the defendant’s twin brother (who had never met the victim and didn’t live in the area), but said they couldn’t locate the brother to have him testify. We convicted the defendant.
I regard our conclusion that the defendant was guilty as certain. But now suppose that, years later, I learned there was evidence the twin brother was in the area, that he hated the defendant, and that he told people he wanted to frame him for a crime. And let’s even say the twin felt remorseful after a decade and confessed that he concocted the whole scheme. I would conclude: I was certain—and I was wrong. I made a mistake based on incomplete information that was apt to mislead.
If it’s possible to be certain and wrong, then what good is certainty? Why not just say the best you can do is achieve probability? Well, for starters, you can’t assign something a probability if you have no idea what would count as certainty. If you have no clue what it would mean to prove something, then you have no clue whether something counts as evidence, i.e., whether it tends to prove a hypothesis.
But, second, you need a concept to distinguish when you have rational grounds for doubting a conclusion from when you don’t. When you lack sufficient evidence for a hypothesis, you have rational grounds for doubt. In a jury context, this is precisely what counts as reasonable doubt: there’s some—maybe a lot—of evidence implicating the defendant. But not enough to meet the standard of proof.
But when you do have sufficient evidence, there are no longer rational grounds for doubt. The only doubts that can be offered are irrational “maybes.” “I don’t have any evidence, but maybe the twin brother did it.” “I don’t have any evidence, but maybe there will be new evidence that overturns the hypothesis.” “I don’t have any evidence, but maybe another hypothesis nobody has thought of explains these facts.” “I can’t point to any errors you’ve made in reasoning, but maybe you’ve made one.”
“Maybe,” Leonard Peikoff has said, is a fighting word. Just as you need grounds to say something is true, so you need grounds to say it might be true. Just as you need grounds for belief, so you need grounds for skepticism. Every assertion, no matter how tentative, requires you to have reasons. An assertion—any assertion—made without grounds, without evidence, without reasons is a claim based on emotion. Claims based on emotion aren’t possible, they aren’t probable, they aren’t certain—they’re arbitrary.
Beware the arbitrary
An arbitrary claim is devoid of evidence. It’s any kind of claim where the person’s attitude amounts to, “I can’t prove it, but prove it ain’t so.” The right response to arbitrary claims is to dismiss them without consideration.
Why? Because there’s no logical way to consider them. You can’t integrate them or reduce them because there’s no evidence you can use to relate the idea to reality. They’re not possible, not probable, not certain—they’re not even false. They’re worse than false. To conclude that an idea is false is a cognitive assessment: “This contradicts what I know.” The arbitrary is something you can’t bring into any relation with what you know. You can’t evaluate it because there’s nothing to evaluate.
Imagine a court proceeding where the arbitrary was allowed.
“The phone was purchased with the defendant’s credit card.” “Maybe someone stole his card and bought the phone.”
“The defendant was seen passing out the threatening fliers.” “Maybe it was his twin brother.”
“His twin brother was in China.” “Maybe he bought a plane ticket under a false identity.”
“The defendant was caught on tape talking about how he was the one who committed the crime.” “Maybe the tape was doctored.”
“The defendant confessed to the crime.” “Maybe the confession was coerced.”
Any of these “maybes” could be legitimate if evidence was offered for them. But to the extent there is no evidence, there’s nothing the jury can do to process them. The proper approach: ignore them. Pretend nothing has been said because, as Peikoff explains, “cognitively speaking, nothing has been said.” If you make a claim about reality, it’s your job to support it.
Not every proponent of the arbitrary openly says, “I have no evidence, prove it ain’t so.” Often they’ll give the appearance of giving reasons. Conspiracists, for example, will bombard you with an overwhelming amount of what can superficially appear to be evidence. Religionists will go through the motions of giving arguments for God’s existence. But these aren’t cognitive acts. It can take work to see that they aren’t cognitive acts, and that the arguments and evidence offered up are rationalizations for emotionalism, not part of a quest for truth. But once you see that, then you don’t have to examine the one billionth “news story” claiming that Trump actually won the election or the one billionth argument for God’s existence. You can reject the entire approach as arbitrary.
Now, you might think: Isn’t the fact that we can make mistakes some evidence that Watkins got his jury verdict wrong? Isn’t the fact that elections can be stolen some evidence that Biden stole the election from Trump? No. The evidence for a capacity is not evidence that capacity has been actualized in a particular case. The fact that elections can be stolen is not evidence that this particular election was stolen.
In sum, you need to assess evidence. Once you’ve formulated a standard of proof and met it, then the conclusion is certain. The only kinds of doubts left are arbitrary doubts. But the fact that a conclusion is certain does not mean that you never revisit it, and that you ignore new genuine evidence in order to neurotically protect your conclusion. Logic gives you an ongoing process for knowing—not one that eliminates the possibility of error, but one that minimizes and corrects errors over time. (This is one major difference between reason and faith, mysticism, or emotionalism: reason is self-corrective; other alleged sources of knowledge are not.)
Any concept that demands you be omniscient or infallible in order to achieve knowledge is out. Certainty can’t mean, “Impossible ever to overturn” because it’s only omniscience that would make a conclusion impossible to overturn. The challenge you face in life is not to distinguish conclusions where error is impossible from conclusions where error is possible. It’s to distinguish knowledge from non-knowledge given your lack of omniscience and capacity for error.
Learn from experts
Today it’s popular to urge people to “listen to the science.” Rarely does this mean: dig into the scientific literature and make up your own mind. Instead, it’s taken to mean: accept the conclusions of (some) scientific authorities without question. But blindly listening to scientists is as irrational as blindly listening to the Pope. Scientists, and experts more generally, should be seen as advisors—not infallible authorities.
We need experts. Just as the economic division of labor makes us all far more productive than we would be if we were all self-sufficient farmers, so the intellectual division of labor makes us all far more knowledgeable than we would be if we could only make use of knowledge we ourselves had discovered. But relying on experts doesn’t mean blindly accepting what they say.
So how should you go about rationally assessing claims made by experts—claims that you, as a non-expert, often cannot verify independently?
For starters, you need to develop baseline skills that allow you to make (relatively) independent judgments about expert claims. Above all, this means having a basic understanding of how to assess data-based claims like the ones we hear in discussions of health, scientific, economic, and political issues. “Vaccines cause autism.” “Coffee is good for your health/bad for your health.” “Inequality hurts economic growth.” “Climate change will lead to devastating droughts.”
Data-driven claims involve three components: data, data processing, and interpretation. It turns out that though you can’t assess data-driven claims the way experts can, you can often spot problems at the data level (input errors) and at the interpretation level (output errors) without expert-level knowledge. You might not understand the complex statistics that go into the claim that coffee is unhealthy, for instance, but you might be able to figure out that the conclusion was based on a small sample of elderly people. Or, on the output-error side, you might find that though the media reported a causal connection—“coffee causes cancer”—the actual study reported only a correlation between coffee consumption and elevated cancer rates. This kind of assessment isn’t enough to reliably draw true conclusions without the aid of experts, but it is often enough to protect you from many of the false claims you hear on social media. (The best introduction to data analysis skills I’ve found is Carl Bergstrom and Jevin West’s Calling Bullshit.)
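To make the small-sample point concrete, here is a minimal sketch in Python (my own illustration, not from this book or from Calling Bullshit), assuming only that NumPy is available; the variable names are purely hypothetical. It draws two variables that are unrelated by construction and shows that a tiny sample will routinely produce an impressively strong correlation by chance alone, while a large sample will not.

```python
# Illustrative only: two variables that are independent by construction.
# With a small sample, chance alone routinely yields a "strong" correlation.
import numpy as np

rng = np.random.default_rng(0)

def largest_chance_correlation(sample_size, trials=1000):
    """Draw two unrelated variables `trials` times and return the largest
    correlation coefficient that shows up purely by chance."""
    best = 0.0
    for _ in range(trials):
        coffee_cups = rng.normal(size=sample_size)   # hypothetical variable
        cancer_risk = rng.normal(size=sample_size)   # hypothetical, unrelated variable
        r = np.corrcoef(coffee_cups, cancer_risk)[0, 1]
        best = max(best, abs(r))
    return best

print("n = 10:  ", round(largest_chance_correlation(10), 2))    # often above 0.8
print("n = 1000:", round(largest_chance_correlation(1000), 2))  # typically below 0.15
```

The code isn’t the point; the habit is: before accepting a data-driven claim, ask how big the sample was, how the data were gathered, and whether the reported relationship could plausibly be noise or mere correlation.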
But while improving your B.S. detector is an invaluable foundation, it still doesn’t tell you everything you need to know in order to use experts to help you reach the truth. For that, you need some further steps.
First, you have to judge the state of the field. Some fields have plenty of self-proclaimed experts, but the field itself is illegitimate (think: astrology). In other cases, the field might be at too primitive a state of knowledge to give reliable guidance. I suspect this is true in the field of nutrition, given the complexity of the problem nutrition researchers are trying to solve and the lack of consensus around even seemingly basic questions like what causes weight gain. In still other cases, the field has become politicized. In the field I’m most familiar with, climate science, funding, publication, and hiring decisions tend to encourage catastrophic predictions about CO2’s climate impact. This doesn’t mean that we should totally ignore nutrition and climate experts, but it does mean we have to be wary—especially if they are recommending radical changes like going vegan or rapidly eliminating fossil fuels. (For an outstanding guide to using experts to make sense of energy and environmental issues, see Alex Epstein’s book Fossil Future.)
Second, you have to judge the supposed expert(s). You need to assess whether they understand the field and whether they’re trustworthy. When the COVID-19 pandemic hit, for example, I saw people sending around videos from random general practitioners, based mainly on whether they liked the conclusions the GPs had reached. That’s like asking your dentist how to treat your throat cancer. In my case, I happened to be friends with Amesh Adalja, an infectious disease expert at Johns Hopkins who specializes in pandemics.
Amesh not only has expert credentials. He’s an expert explainer. Part of what establishes whether an expert is trustworthy is how clearly he can explain things to a layman. That means not only explaining his conclusion, but his reasoning. It means explaining his degree of certainty, explaining how much consensus there is among experts in his field, and why various experts might disagree with his conclusions. It means being able to answer your questions in ways that are clarifying. As a general rule, a trustworthy expert’s primary goal isn’t to try to convince you that he’s right—it’s to act as a guide to help you understand an area of knowledge that you cannot assess independently.
What should emerge is a reduction—not a full reduction that gives you a complete picture of an idea’s tie to reality, but a reduction based on expert testimony. When you have good reasons to trust the expert, and the expert explains his conclusion and the reasoning behind it in terms you can understand, then you have established an idea’s tie to reality to the extent that’s relevant given your context and purposes.
Note that you don’t need to rely on experts in every field of knowledge. In some fields, you’ll have sufficient expertise to reach independent judgments. In particular, philosophy does not require experts. Or, rather, we rely on professional philosophers to develop philosophical systems, which takes a lifetime (and genius). But philosophical knowledge doesn’t make use of specialized knowledge. It uses only knowledge available to everyone. An ethicist can help you think through a difficult issue by drawing your attention to arguments you had not considered. But there’s no specialized knowledge an ethicist has that you lack. You can judge his arguments independently.
Follow reason. Treasure your mind. Use your mind to gain knowledge about the world, other people, and yourself. That is the path to success—and to joy.

