You can’t know what you don’t know because you don’t know that you don’t know it

[Image: someone texting and driving. Source: https://www.safewise.com/faq/auto-safety/danger-texting-driving/]

In another post, I mentioned that we don’t know what we don’t know.

This is the central problem in rational deliberation, and why so many people (such as anti-vaxxers) sincerely believe their beliefs are rational. They know what they know, but they don’t know what they don’t know. People have strong beliefs about issues, about which they sincerely believe they are fully informed, because all of the places they go for information tell them that they’re right, and those sites provide a lot of data and information (much of which is technically true). But that information is often incomplete: out of context, misleading, outdated, or not logically related to the policy or argument being proposed.

There is a way in which we’re still the little kid who thinks that something that disappears ceases to exist—the world consists of what we can see.

I first became dramatically aware of this when I was commuting to Cedar Park, or Cedar Fucking Park, as I called it. I saw people talking on their cell phones drift into other lanes, and other drivers would prevent an accident, and the driver would continue with their phone call. They didn’t know that they had been saved from an accident by the behavior of people not on the phone. They thought that they were good at talking on the phone and driving because they never saw themselves in near-accidents. They never saw those near accidents because they were distracted by their conversation.

I have had problems with students who think they’re parallel-processing in class—who think they can play a game on their computer and pay attention to class at the same time—but they can’t. We really aren’t as good at parallel-processing as we think. The problem is that the students would miss information and not know that they had, because, like the distracted drivers, they never saw the information they’d missed. They couldn’t—that’s the whole problem.

I eventually found a way to explain it. I took to asking students how many of them have a friend whom they think can safely drive and text at the same time—that, as they’re sitting in the passenger seat, and the driver is texting and driving, they feel perfectly safe. None of them raise their hands. Sometimes I ask why, and students will describe what I saw on the drive to Cedar Park—the driver didn’t see the near-misses. Then I ask, how many of you think you can text and drive safely? Some raise their hands. And I ask, “Do any of your friends who’ve been passengers while you text and drive think you can do it safely?”

That works.

For years, I’ve begun the day by walking the dogs up to a walk-up/drive-through coffee place (in a converted 24-hour photo booth—remember those?), and I used to get there very early, while it was still dark. There was one barista who didn’t notice me (the light was bad, in her defense). I would let her serve two cars before I’d tap on the window. She would say, “Be patient! I’m helping someone!” She sincerely thought that I had arrived at the moment she noticed me and immediately tapped on the window. It never occurred to her that I was there long before she noticed me.

When I talk to people who live in informational enclaves, and mention some piece of information their media didn’t tell them, they’ll far too often say something along the lines of, “That can’t be true—I’ve never heard that.”

That’s like the bad drivers who didn’t notice the near misses and so thought they were good drivers.

That you’ve never heard something is a relevant piece of information if you live in a world in which you should have heard it. If, however, you live in an informational in-group enclave, that you’ve not heard something is to be expected. There’s a lot of stuff you haven’t heard.

What surprises me about that reaction is that it’s generally an exchange on the internet. They’re connected to the internet. I’ve said something they haven’t heard. They could google it. They don’t; they say it isn’t true because they haven’t heard it.

That they haven’t heard it is fine; that they won’t google it is not. And, ideally, they’ll google in such a way that they are getting out of their informational enclave (a different post yet to be written).

In that earlier post apparently about anti-vaxxers, but really about all of us, I mentioned several questions. One of them is: If the out-group was right in an important way, or the in-group wrong in an important way, am I relying on sources of information that would tell me?

And that’s important now.

If you live in informational worlds that are profoundly anti-Trump, and he did something really right in regard to the covid-19 virus, would you know? To answer, “He couldn’t have,” or, “If he did, it was minor,” is to say no.

And the answer is also no if you’re relying on the arguments that Rachel Maddow says his supporters are making, as well as on your dumbass cousin on Facebook. Unless you deliberately try to find pro-Trump arguments made by the smartest available people, you don’t.

If you live in a pro-Trump informational world, and Trump really screwed up in regard to the covid-19 virus, would you know? To answer, “No,” or, “Perhaps there were some minor glitches with his rhetoric,” is to say no.

And the answer is also no if you’re relying on the arguments that Fox, Limbaugh, Savage, and so on say critics of Trump are making, as well as on the dumbass arguments your cousin on Facebook makes. Unless you deliberately try to find arguments critical of Trump made by the smartest available people, you don’t.

People are dying. We need to know what we don’t know, and remaining in an informational enclave will make more people die.

Anti-vaxxers, bad drivers, and other people who reason badly

[Image: banner for the DHMO information site]

You can’t know what you don’t know. You can’t know what you weren’t told. You can’t know what you didn’t notice.

A lot of people outraged about anti-vaxxers think they’re ignoring facts. But they aren’t. I’ve argued with them, and they have a lot of facts, and a lot of those facts are true. The problem isn’t in their facts, but in how they think about what makes a good argument. Anti-vaxxers are a great example of how not to think about having a good argument—one shared by a lot of people.

Their argument is: “We shouldn’t require that people get vaccines because [this vaccine] is bad because [fact].” And so they know that they’re right because they live in a world in which they are continually “shown” that they are right. They are given lots of facts (which might even be true) and lots of information about what their opponents believe (most of which is a straw man). If you don’t drift into the world of anti-vaxxers, you don’t know that.

Just to be clear: I think anti-vaxxers are full of shit. I sincerely believe that anti-vaxxers believe they are truthful. And they also have a lot of facts, many of which are true. But their fullofshitness isn’t about whether they have facts, or whether they are truthful. It’s about their logic: not whether they have facts, but how they reason, and the informational worlds they choose to inhabit.

Here’s an anti-vaxxer argument I’ve come across more than once. It’s something along the lines of, “If you look at the ingredients for this vaccine, you can see it has this ingredient, and, if you look up that ingredient on the internet, you can see that it’s really dangerous.”

That argument is a series of claims, each of which is factually true. It really does have that ingredient, and you really can look it up, and you really can see that it is harmful. The facts are true, but the logic is dumb.

If we step away from whether people have “facts” to how they’re arguing, then you can see that those claims don’t lead to each other.

Dihydrogen Monoxide (DHMO) is a notoriously dangerous chemical. It is responsible for thousands of deaths every year, and it’s in biological and chemical weapons. There’s a list here of its dangers, and they are many. So, if the logic of the argument above is good—this vaccine is dangerous because it has an ingredient that’s dangerous—then the person making that argument has to support the claim that any medications containing DHMO are dangerous.

If it’s a bad way to argue in regard to DHMO, then it’s a bad way to argue about any of the chemicals in vaccines.

DHMO is water.

It’s a bad way to argue.

When I try to point this out to people, they often say something like, “But water is different. Water is okay—this stuff isn’t.” And they can’t understand that they’re arguing in a circle. They have an unfalsifiable belief. They believe what they believe because they believe it and can find supporting evidence. That’s motivated reasoning.

It doesn’t seem like a bad way to argue because people choose to live in worlds in which we only hear how great our beliefs are and how dumb the criticisms of our beliefs are. We don’t know that we’re getting a straw man. And we don’t know it because the most cunning (and damaging) versions of the straw man are something someone really said but edited, taken out of context, or not representative. So, for instance, a pro-vaccine article might point out that early vaccines were dangerous, and an anti-vaxxer could quote only that part, not making it clear that the comment was about the cowpox vaccine. Or, and I’ve had this argument, they quote someone associated with pharmaceuticals (such as Shkreli) and use that as proof that everyone involved with pharmaceuticals is a greedy villain who doesn’t really care about anyone’s health.

Once again, the claim (everyone involved with pharmaceuticals is a greedy villain who doesn’t really care about anyone’s health) is supported by facts I believe are true (I think most reasonable people would)—Shkreli really is a greedy villain, and he really was associated with pharmaceuticals. The facts are fine, but the logic is bad. If one person associated with pharmaceuticals can be taken to stand for everyone who advocates vaccines, then one person associated with anti-vax can be taken to stand for everyone opposed to vaccines.

And that should be the moment the person realizes it’s a bad way to argue, but they often don’t because their informational world is filled with dumb, hateful, and horrible things that “pro-vaxxers” have said. A person in the anti-vax world thinks it’s fair to take Shkreli to stand for everyone promoting pharmaceuticals because he is so much like all the other examples that slither through the anti-vax informational world. What that person wouldn’t know is that they are only seeing the most awful examples of the out-group, and they rarely (perhaps never) hear about bad behavior of in-group members.

They don’t know that they don’t know enough to have accurate stereotypes about the in- and out-groups. Because we can’t know what we don’t know (but that’s a different post).

Here I just want to point out that these two related problems (thinking we have a “good argument” just because it has true claims, and thinking it’s true because it confirms everything else we choose to hear) aren’t solved by looking for facts, or by asking ourselves if we’re reasoning rationally. And both of those ways of thinking about beliefs suck.

We can ask these questions:
• Am I open to persuasion on this issue, and, if so, what evidence would persuade me?
• If the out-group was right in an important way, or the in-group wrong in an important way, am I relying on sources of information that would tell me?
• Would I consider this argument a good one if I flipped the identities in it? In other words, if the argument is “This thing [that I already believe is bad] is bad because [other claim],” would I be persuaded if the argument were “This thing [that I believe is good] is bad because [that same kind of claim]”?

That last one is hard for some people, so I’ll give some examples:

Let’s say that I, a fan of Hubert Sumlin, say, “Chester Burnette is a terrible President because he issues a lot of Executive Orders.” Would I be persuaded that “Hubert Sumlin is a terrible President because he issues a lot of Executive Orders”? If not, then I don’t really believe the logic of the argument I’m making.

If the answer to each of these questions is no, then regardless of how many facts I have, I have bad arguments.