Teaching about racism from a position of privilege

I’ve taught a course on rhetoric and racism multiple times (I think this is the third, but maybe fourth). It came out of a couple of other courses—one on the rhetoric of free speech, and the other on demagoguery, but also from my complete inability to get smart and well-intentioned people to engage in productive discussions about racism.

I never wanted to teach a class on racism because I thought that there wasn’t really a need for a person who almost always has all the privileges of whiteness to tell people about racism. But I had a few experiences that changed my mind. And so I decided to do it, but it is the most emotionally difficult class I teach, and it is really a set of minefields, and there is no way to teach it that doesn’t offend someone. And yet I think it’s important, and I think other white people should teach about racism, but with a few caveats.

Like many people, I was trained to create the seminar classroom, in which students are supposed to “learn to think for themselves” by arguing with other students. The teacher was supposed to act as referee if things got too out of hand, but, on the whole, to treat all opinions as equally valid. I was teaching a class on the rhetoric of free speech—with the chairs in a circle, like a good teacher—when a white student said, “Why can black people tell jokes about white people, but white people can’t tell jokes about black people?”

And all the African-American students in the class shoved their chairs out of the circle, and one of them looked directly at me.

That’s when I realized how outrageously the “good teaching” method—in which every opinion expressed by a student should be treated as just as valid as the opinion of every other student—was institutionalized privilege.

What I hadn’t realized till that moment was that the apparently “neutral” classroom I had been taught to create wasn’t neutral at all. I was trained at a university and a department at which nonwhites and women were in the minority, and so every classroom discussion in which all values were treated as equal necessarily meant that straight male whiteness dominated, just in terms of sheer numbers. Then I went to a university that was predominantly women, and white males still dominated. White males dominate discussion, while white fragility ensures that treating all views as though they’re equal does nothing of the kind. The “neutral” classroom treats a white student’s hurt feelings at being called racist as precisely equivalent to anything racist that student might say. And they aren’t the same.

That “liberal” model of class discussion is so vexed, and so specifically vexed in terms of race, gender, and sexuality. As often one of few women in a class, and not uncommonly one of few who openly identified as feminist, I was regularly asked to represent what “feminists” thought about an issue, and I’ve unhappily observed (or been in) classes where the teacher asked a student to speak for an entire group (“Chester, what do gay people think about this?”). It’s interesting that not all identities get that request to speak for their entire group. While I have seen teachers call on a veteran to ask what “veterans” as a class think, I have never been in a class where anyone said, “Chester, what do working-class people think about this issue?” I’ve also never been in a class, even one where het white Christian males were in the minority, where anyone asked a het white Christian male to speak for all het white males.

The most important privilege that het white Christian males have is the privilege of toggling between individualism and universalism on the basis of which position is most rhetorically useful in the moment. In situations in which het male whiteness is the dominant epistemology, someone with that identity can speak as an individual, about his experience. When he generalizes from his experience, it’s to position himself as the universal experience. Het white males are simultaneously entirely individual and perfectly universal.

The “liberal” classroom presumes that the people speaking to one another are equals, but what if they aren’t? The “liberal” classroom puts tremendous work on the students who walk into that room as not equal—they have to be the whisperers for the homophobes, racists, and sexists. That isn’t their job. That’s my job. I realized I was making students do my work.

That faux neutrality also guarantees other unhappy classroom practices. For instance, students who disagree with that falsely neutral position do so from a position of particularity. The “normal” undergrad has asserted a position that seems to come from universal vision, and so any student who refutes his experience is not only identifying with a stigmatized identity but also self-identifying as a speaker who is simultaneously particular and a representative of an entire group. When your identity is normalized, you claim to speak for Americans; when your identity is marked as other, you speak for all the others in that category.

There’s a weird paradox here. Both the het white Christian male and the [other] are taken as speaking for a much larger group, but in the case of the het white male it’s that he is speaking for humanity as a whole. If he isn’t, if his identity as het white male isn’t taken as universal in a classroom, then some number of people in that category will be enraged, will genuinely feel victimized, and will dismiss as “political correctness” the expectation that they honor the experience of others as much as they honor their own.

What the white panic media characterizes as “political correctness” is rarely about suppression of free speech (they’re actually the ones engaged in political correctness)—it’s about holding all identities to the same standards of expression. Strategically misnaming the attempt to honor people’s understanding of themselves as “political correctness” ignores the actual history of the term, which was about pivoting on a dime in order to spin facts in a way that supported the faction. In other words, flinging the term “political correctness” at people asking for equality is strategic misnaming and projection.

The second experience was in a class about the history of conceptions of citizenship, in which I was trying to make the point that identification is often racial, and that the notion of “universal” is often racist. I gave the class the statistics about Congress—that it was about 90% male and at least 90% white. I asked the white males in the class whether they would feel that they were represented if Congress were around 90% nonwhite and nonmale. Normally, this set off light bulbs for students. But, this time, one student raised his hand and said, “Well, yes, because white males aren’t angry.”

Of course, that isn’t true, and I’d bet they’d be pretty angry about not being represented, but, even were it true, it would be irrelevant. That student was assuming that being angry makes people less capable of political deliberation—that anger has no place in political argument. That’s an assumption often made in the “liberal” classroom, in which people get very, very uncomfortable with feelings being expressed. And it naturally privileges the privileged because, if being emotional (especially angry) means that a person shouldn’t be participating (or their participation is somehow impaired) then we either can’t talk about things that bother any students (which would leave a small number of topics appropriate for discussion), or people who are angry about aspects of our world (likely to be the less privileged) are silenced before they speak—they’re silenced on the grounds of the feelings they might legitimately have.

So, if we’re going to have a class about racism, we’re going to have a class in which people get angry, and not everyone’s anger is the same. Racist discourse is (and long has been) much more complicated than a lot of people want it to be—we want to think that it’s easy to identify, that it’s marked by hostility, that it’s open in its attacks on another race. But there has always been what we now call “modern racism”—racism that pretends to be grounded in objective science, that says “nice” things about the denigrated group, that purports to be acting out of concern and even affection. That is the kind of reading that angers students the most, and I think it’s important we read it because it’s the most effective at promoting and legitimating racist practices. But it will offend students to read it.

And so the class is really hard to teach, and even risky. And that was the other point I realized. If we have institutions in which only people of color are teaching classes about racism, we’re making them take on the politically riskier courses. That’s racist.

I remain uncomfortable being a white person teaching about racism, and I think my privilege probably means I do it pretty badly. But I think it needs to be done.


III. Trying to solve the problems of factionalized politics by creating a more unified faction

[This is part of a longer piece, but I really want this part to be separate–it’s about Democrats trying to relitigate the 2016 election. And my basic argument is that we’re engaged in demagoguery about that election.]

In a healthy deliberative situation, people will consider the policy first and faction second. In a culture of demagoguery, people frame every issue as “us vs. them.” We’re in such a culture now, and the US was in such a culture in the antebellum era. And I think that culture meant that the people who wanted to deliberate—who wanted to consider various policy options, listen to various sides, think about the long-term consequences for all of us, and who had a broader vision of “us” (one that included everyone affected by policy decisions)—were demonized. And they are now.

And, unhappily, there are within the Democratic Party the two factionalized narratives about 2016 mentioned at the beginning. My basic argument about them is that they’re both wrong, as are a lot of narratives about 2016, insofar as they say that progressives’ winning more elections just requires… anything, or that it’s obvious that progressives need to do… anything. What makes those narratives wrong is that they are monocausal (one thing caused our problems and/or one thing will solve them), and they rely on naive realism (the notion that the truth is obvious).

Factionalized narratives say “there are two choices, and every right-thinking person chooses this one.” Deliberative narratives say, “there are many choices, and each has to be assessed in the circumstance, and each one has to be considered in terms of the past and future.” Factionalized narratives say the right answer is obvious; deliberative narratives say it isn’t. People committed to factionalized narratives say “everyone does it.” I don’t think that’s true.

And I think the comparison to the very similar antebellum situation explains why I don’t think everyone does it. I’m not convinced that this simultaneous, entirely factionalized reasoning and condemnation of faction was “true of both sides.” I haven’t read a lot of Northern newspapers from the 1830s, so I can’t say whether they were just as engaged in doublethink regarding factionalism (it’s great, and every member of the faction should do it, and every member of the faction should condemn factionalism), but my reading of the Congressional Record suggests they weren’t. The book I never wrote was about how proslavery rhetors tended toward deductive reasoning (the facts on the ground must be these because that’s what my principles say they should be) on every political issue before them. The rhetors who were antislavery (or just nonproslavery) tended to reason inductively, saying that a principle must be wrong because the facts on the ground suggested so. I think that’s a research project that could be useful for thinking about our current political situation—to what extent are people holding their premises safe from disproof?

For instance, William Lloyd Garrison had a journal, The Liberator, and he also had a very specific stance on abolition. Within the community of people who believed that slavery should be abolished immediately, there were profound and passionate disagreements about whether: slaves’ engaging in self-defense violence was justified, the Constitution was neutral on slavery or actively proslavery, abolitionists should insist on immediate and full citizenship for all slaves, abolishing slavery necessarily meant full citizenship for women. Garrison had his views on those issues, which he held passionately and argued for vehemently; he was no saint (Frederick Douglass noted that Garrison was not free of racist notions), and he may not even have been right in his arguments, but his paper published full and fair arguments against his positions. He believed in his arguments so thoroughly that he was willing to read and publish arguments he thought wrong.

How much current media could withstand that test? How many citizens could be like Garrison, and read and publish arguments with which they disagree? And this isn’t even setting a high bar: Garrison was far from perfect—in fact, he was deeply flawed. It wouldn’t be that hard to be Garrison, and yet most of us fail to meet that low bar.

Antebellum proslavery media never published anything critical of slavery, and the factionalized southern media never published anything critical of their faction. What they did instead is what’s called “inoculation.” The goal of this media was to become the only source of information for its faction’s members, and it did that by reprinting articles about the evil behavior of outgroups (even about completely fabricated non-events). The main thrust was: 1) deliberation is unnecessary because all you need to know is that we’re good and they’re bad; 2) DON’T LISTEN TO THEM—here’s what they’re going to say, and it’s obviously stupid and evil; 3) there is a war on us, and anyone who doesn’t recognize that is either knowingly or unknowingly on the side of our enemies.

So, in a democracy, a lot of public discourse was about how political deliberation was not only unnecessary, but actively bad (and unmanly). And they condemned the other side by presenting bastardized versions of “the other side’s” argument, as though they knew that their position of “it’s absolutely clear” would be weakened by showing the other side in a reasonably accurate way. And this fascinates me about authoritarian discourse: there is an odd admission that authoritarian discourse relies on single-party rhetoric, that it can’t withstand argumentation. So, perhaps, what it’s claiming isn’t so obvious?

The goal of much political discourse in the antebellum era, as it was in Thucydides’ era, and as it is now, was the establishment of a single-party state. Thus, much democratic discourse was oriented toward the destruction of democracy in the name of allowing only one faction to participate in the setting of policy. Unhappily, that is the argument happening on the left. The argument—whether centrists or progressives should set the policy agenda—is profoundly and irrationally anti-democratic because it makes the assumption that the Democratic Party must be a single-faction party. Why make that assumption?

Arguments for policy only seem sensible when the policy seems to arise naturally from a narrative about our current situation. The two dominant purity policy solutions arise naturally from two different narratives about why we are in our current situation. So, in order to argue for a non-purity policy, I have to show what’s wrong with both purity narratives about 2016.

And, really, there are a lot of plausible explanations about the 2016 election. There are, loosely, two purity narratives: first, that Clinton lost because too many of Sanders’ supporters were fanatics who refused to be pragmatic and vote for a less than pure candidate (let’s call that fanatical group Sandersistas, and let’s call the people who promote this narrative the Clintonistas);[3] second, that Trump is President because the DNC foisted a weak milquetoast candidate on the Dems instead of an energizing progressive with a clearly populist policy agenda. But it’s worth looking at all the other narratives as well (I’ll list eight here and mention a few others along the way).

But before even going into them, it’s important to remember that Clinton won the popular vote by a large amount (that’s important for every explanation). And she was predicted as having a 95% chance of winning; the most dire polls put her chances at around 70%.

One factor to keep in mind is that a lot of Obama voters went for Trump, and the first explanation is that many of them were motivated by sheer sexism. Second, the Right Wing Propaganda Machine had been attacking Clinton for 25 years, and if you throw enough mud, some of it sticks. Third, voter turnout. Fourth, her campaign blew it because, out of arrogance, Clinton focused on meetings with big money donors toward the end rather than hand-clasping in battleground states. Fifth, voter suppression. The sixth explanation is millennial sexism. Seventh, there is the argument that Sanders poisoned the millennial vote. Eighth, the DNC was wrong to go for a third-way neoliberal instead of Sanders, who would have won (a surprisingly complicated narrative, explained below).[4]

1 and 2. The first and second can be combined in that they simply represent the problems that come with a candidate who has spent a lot of time committing the crime of being a woman in public. And there is an argument that her faults in those regards are reasons she shouldn’t have gotten the Dem nomination. I sometimes hear those arguments made by people who like Clinton and her policies, and I understand the impulse behind them. I certainly met young people who had what even they admitted was an irrational aversion to her—the research is pretty clear that it’s harder to remember that every attack on a person has been debunked than it is to have a vague cumulative semi-memory that the person is guilty. For some people, that Clinton had these liabilities was a reason that she shouldn’t get the nomination, and I think there are two versions of that argument—one seems to me reasonable (even if, ultimately, I disagreed with it) and the other is disturbingly anti-democratic.

The first is that, even if through no fault of her own, Clinton was carrying insurmountable liabilities, and therefore Democrats voting in the primaries shouldn’t have voted for her. Women who have also committed Clinton’s crime often bristle at this argument, since they’ve heard it as the reason they can’t be promoted (“unfortunately, sexist men just don’t work as well with women, so you’ll never be a good manager”), be given certain jobs (“juries just don’t like women lawyers”), or pursue certain careers (“people just don’t trust the financial acuity of women money managers”). Their argument is that you don’t reduce sexism by pandering to it. And that’s a good argument.

But I also think it’s not unwise to think strategically about the likelihood of a candidate winning. So, while I wasn’t persuaded to vote against Clinton in the primaries on the basis of the argument that sexism and propaganda made her a bad candidate, I don’t think people who put it forward are spit from the bowels of Satan. They’re just people with whom I disagree.

The second version of this argument is more disturbing. That argument is that the DNC should have put forward a “better” candidate. I find this disturbing because I don’t think the DNC should “put forward” any candidate. I realize that is, at least to some extent, what all organizations do—the elite in the organization try to position for election the people they think will make the best candidates—so I’m not naïve enough to think the DNC will remain absolutely neutral (and, in fact, I ranted at a lot of DNC fundraisers during the primaries because I was outraged that there were DNC-funded ads attacking Sanders). But the absolute most the DNC should do is put its finger on the scale (and even that is problematic, as discussed below)—Democrats need to elect candidates, not have them selected for us. Because Dems haven’t been doing well at the level of Governor or Senator, there weren’t a lot of possible candidates. Warren, Biden, and Booker all had reasons not to run, and other possibilities weren’t experienced enough. Thus, I reject the basic premise that the DNC should have selected any candidate for the Dems.

Third, voter turnout. Although there is some debate as to whether voter turnout cost Clinton the election, there remains a strong argument that it did. Or, at least, there’s a consensus that better turnout among nonwhite voters would have helped Clinton. But even people who agree that better voter turnout would have led to a Clinton victory disagree as to what that factor means. Some people connect it to the argument below—that voter suppression was crucial in the election. Others argue that it’s yet another reason that Dems (or the DNC) shouldn’t have gone for Clinton—she didn’t have the charisma to get people to put up with the (probably deliberate) long lines in heavily Dem polling places. Some people argue that the low voter turnout was due to Sandersistas who refused to vote for Clinton (part of the narrative that they cost Dems the election), but I’ve never seen good evidence for that claim—it’s belied by the demographics of Sandersistas versus the low turnout. My impression, admittedly just from listening to (or reading) people who didn’t vote or didn’t vote for Clinton but might have, was that they believed the polls; they were certain she was going to win, and so didn’t think it was necessary for them to vote. They either didn’t vote, or engaged in a protest vote (to show the DNC that there are progressive voters). I’ll admit that, especially for people for whom voting would have required considerable sacrifice (such as taking unpaid time off work), this seems to me a reasonable attitude—95% is pretty much a sure thing for most people.

Fourth, the argument that Clinton’s campaign blew it because it focused on meetings with big money donors toward the end rather than hand-clasping in battleground states is unfortunately often connected to presenting Clinton as arrogant. And I have to say that I get twitchy when anyone uses the word “arrogant” in regard to a powerful woman (or a powerful nonwhite person).

It is not actually clear that Clinton did make a mistake with serious consequences in her strategies. More important, when we engage in hindsight, and consider counterfactuals (something I do in my scholarship frequently) we have to think about whether our sense that the outcome was obvious is the consequence of knowing the outcome. If you know of the dotcom crash of 2001, you can look back to various factors in 2000 and see all the evidence that it was coming, and then you can think to yourself what idiots people were for not seeing it. (You might even find quotes from some people who predicted it, and think what idiots everyone was for not listening to those geniuses). But that’s just intellectual shoulder-patting. Certainly, there was evidence of coming disaster, but there was also evidence that this was a new model of economic growth—you have to look at all the evidence people had in front of them in the moment and understand what reasons they gave for the choices they made.

To make considering counterfactual anything other than 20/20 hindsight, you have to ask: Were the choices reasonable within the context of that evidence, regardless of outcome?

Even if Clinton made the wrong decision, and there were people at the time who said that, the question should be whether she was making a decision that was obviously unreasonable in the moment, and I don’t think it was. For instance, her believing polls doesn’t make her arrogant—I think it’s reasonable for someone with her background to think she might know what she is doing. And what she was doing was believing the polls, and spending her energy getting money to throw downticket.

Had Clinton decided not to meet with big money donors and had instead worked on ensuring she won a supposedly unlosable election by on-the-ground campaigning, and had she won, I think the same people who are lambasting her now would be lambasting her as arrogant for just trying to get herself elected instead of raising more money for Dems generally.

I think this criticism amounts to lambasting her for having believed the polls. Since it’s a criticism I’ve heard repeated by people who themselves cited the polls as authoritative in October, I don’t find it a very interesting argument.

Fifth, voter suppression. This is an interesting argument. There are lots of arguments that there was voter suppression, and that it was enough to flip the election. But it’s also disputed, and there are major sources that are silent on the issue (such as 538). There are two reasons I think it probably did happen—or at least that there was a determined effort to make it happen. The GOP Noise Machine works by deflection and projection (or, more accurately, projection as deflection), and the ginned-up fear-mongering about voter fraud quacks and walks like a projection/deflection move. If it is projection/deflection, then either there was actual voter fraud—that is, interference with voting machines—or voter suppression. But that’s sheer speculation on my part.

The more plausible reason to think there was voter suppression and it was effective is that the GOP has spent so much money, time, and effort trying to make it harder for nonwhites to vote. They must think it’s effective.

The sixth and seventh are generally connected—that millennials are sexist, or Sanders otherwise ruined the election for Clinton (every once in a while someone makes the claim about Stein, but that’s rare).

Let’s start with the Clintonista explanation that Sanders is entirely to blame (and keep in mind that isn’t Clinton’s explanation). It doesn’t hold up to empirical testing. It’s generally made on the basis of several leaps of inference. The best empirical support (and it isn’t very good) for blaming Sanders’ supporters relies on equating Sanders’ supporters and millennials, and that’s a false equation.  Clinton won the popular vote, and lost by small amounts in key states. So, a good argument for Sandersistas having cost Clinton the election would show that there were enough of them in the very close states who didn’t vote for Clinton to have shifted the election. And I’ve looked for that data, and I can’t find it.

The closest is some numbers run by Brian Schaffner, who estimates that 12% of Sanders voters voted for Trump (but the number might be 6%).  In a tweet, Schaffner estimated the state levels. If those estimates are correct, then, had all of those people voted for Clinton, she would have won. (All of this is explained in John Sides’ August 24, 2017 Washington Post article, “Did Enough Bernie Sanders supporters vote for Trump to cost Clinton the election?”)

So, does that mean that Sanders supporters cost Clinton the election, or, as another article terms them, Sanders “defectors”? Note the loaded language.

This whole narrative makes me nervous, especially since it’s taking Schaffner’s work as more definitive than even he says it is. And it seems to be getting used as a weapon in the purity war rumbling around the left—Sanders voters are unreliable, likely to defect, were too self-righteous to vote sensibly, or too unwilling to compromise. It’s also getting used by people who want to argue that Dems should have gone for Sanders, since it’s proof that he would have won. (It isn’t, since Clinton picked up more than that number in GOP voters who “defected.”)

First of all, we need to stop with the language of “defecting” and even “costing.” Even Schaffner points out that the people who did that weren’t typically Democrats, and they were racist. Sanders always did worse than Clinton among nonwhite voters, but his defenders argue that he was changing his message and would have attracted more of them. Had he courted nonwhite voters as forthrightly as Clinton did, he would probably have lost this 12%. Schaffner’s speculation is important to note: “I think what this starts to suggest to me is that these are old holdovers from the Democratic Party that are conservative on race issues. And while Bernie wasn’t campaigning on that kind of thing, Clinton was much more forthright about courting the votes of minorities — and maybe that offended them, and then eventually pushed them out and toward Trump.”

So, these weren’t Sanders supporters, I’d say—just people who voted for him in the primaries. And they certainly don’t represent anything important about Bernie-bros, or the young progressives who want the Dems to become more progressive—this isn’t that category. In fact, Schaffner’s evidence suggests that group did vote for Clinton, or, at least, didn’t cost her the election.

It might be that Sanders supporters’ repeating a lot of fake news reports and pro-Trump talking points on social media convinced others in their feeds to vote Trump or third party, but I haven’t found a study to suggest that’s the case. My admittedly anecdotal impression is that the people who voted for Sanders in the primaries and refused to vote for Clinton were the kind who had never voted for a Dem anyway (and didn’t vote for Obama, on purity grounds), or they lived in Texas, so they don’t really count as game-changers. I know that there were people who voted for Obama and then voted for Trump, but the research doesn’t suggest that many of them were Sanders supporters who refused to vote for Clinton.

So, the notion that Clinton lost just because of Sandersistas doesn’t really make the grade of a falsifiable claim. It’s just a guess, and not even a very good one.

And why would we make that guess? There is much better evidence about other factors, such as voter suppression and overconfidence among Clinton supporters (who thought she had it in the bag and so didn’t need to vote). 538 persuasively argues it was the Comey scandal and its impact on undecided voters (most of whom weren’t millennials). Why make a guess that blames fellow lefties? That seems to me unnecessary and strategically unwise.

People tend to blame the outgroup for anything bad that happens, and, unhappily, it’s not unheard of for people to be more concerned about heretics than heathens. That is, we can be more concerned about cleansing our group of people who aren’t like-minded enough than about people who are openly opposed to us. It’s an irrational act to which people are drawn when the ingroup is shamed, and that’s what I think we’re doing. It seems to me a skirmish in a purity war.

It’s also incredibly patronizing, and it delegitimates a point of view—that Sanders was the better candidate—held by people with whom we share goals.

I think this kind of move (like all skirmishes in a purity war) sets up a nasty dynamic—like two people fighting over who is at fault for burning the Thanksgiving turkey. Once a person says, “It’s your fault,” it’s incredibly difficult to get the conversation back into a useful realm in which people are problem-solving—it’s all about defending yourself.

I mentioned that I do know Sanders supporters who refused to vote for Clinton, some of whom never vote in Presidential elections (basically, any candidate popular enough to get a nomination isn’t pure enough for them—they liked that candidate when you had to buy the speech on vinyl at the show; it’s just hipster politics), but some of whom probably would have. And they live in Texas. In Texas, we are accustomed to being systematically disenfranchised, and every vote other than GOP is a symbolic action, so, although I disagree with that choice, I don’t think it’s evil or ridiculous or illegitimate or even unreasonable.

Eighth, many people for whom I care deeply make the argument that the DNC was wrong to go for a third-way neoliberal instead of Sanders, who would definitely have won. In some versions, the argument is that the DNC pushed a lousy candidate onto the Dems and is therefore responsible.

I find it really weird that so many reasonable people make that argument without seeing how odd it is. It’s either false or nonfalsifiable (like the Clintonista narrative that blames Sandersistas). It’s also really patronizing since it delegitimates anyone who voted for Clinton.

I see this argument a lot. It necessarily has two sub-points: that Clinton only won because of DNC support, and that Sanders would have won the general election.  That first argument, although repeated a lot in certain circles, has some implications that, I think (I hope), the people making it would reject if made explicit.

Clinton won the open primaries, and Sanders won the caucuses. So, by any reckoning, Clinton got more votes than Sanders. This argument says that she did so only because the DNC supported her. That’s a really offensive argument. If Clinton only won because of the DNC support, then the underlying assumption is that all those people who voted for Clinton would have voted for Sanders if the DNC had supported him—that they would do whatever the DNC told them to do.

I want to leave that out there because I really think that people haven’t thought that one through. Is that really an argument they believe?

That argument is saying that Clinton supporters were mindless sheeple who would do whatever the DNC told them to. The narrative is that Sanders supporters really knew how to vote and how to solve our problems, while Clinton supporters were just mindless followers who didn’t really know what we need or how we should vote.

That’s patronizing, just as patronizing as Clinton’s saying that Sanders supporters were young and misled. I think it’s wrong—factually, morally, and strategically—in both cases. Clinton supporters, like Sanders supporters, had good reasons and good arguments for their point of view; neither group should be delegitimated. And the second someone argues for delegitimating the other major group in a community, they’re engaged in a purity war.

Since Sanders never did as well with nonwhites and women as Clinton, and Clinton never did as well as Sanders with young people, any narrative that says THEY didn’t have legitimate reasons for supporting their candidate is just appallingly patronizing. It has to stop.

But, let’s take it a step further. Is it clear that Sanders would have won? The poll that Sandersistas cite also shows that Clinton would have won. So, either it’s a bad poll, or Clinton might have been a less good choice, but not a bad one.

Sanders might have done better because he has the dangly bits, and so might not have been hurt by sexism, but Clinton lost white evangelical women, and there’s no reason to think Sanders would have gotten them (especially since he would have had anti-Semitism against him—a mirror image argument of the “don’t vote for Clinton because other people are sexist”), and there’s even less reason to think he would have gotten nonwhites. He still doesn’t get issues about race, after all. He still talks about “working class people” when he means “white working class.”

Antisemitism in the US is a non-trivial issue, and there has never been a candidate who wasn’t a practicing something, so there isn’t any good reason to think that he could have won over any bigots that Clinton lost. Unhappily, I think arguing that we shouldn’t have nominated Clinton because of sexism logically implies that we shouldn’t have nominated Sanders because of anti-Semitism. If you’re arguing that Dems need to pander to prejudices, then you need to be consistent about it (and there are still huge swaths of American public opinion that equate “liberal Jew” and “communist”). And that’s why I think they’re both troubling arguments.

At the time of the poll that showed that Sanders was the better candidate, there was a counter-argument that the GOP wanted Sanders to be the candidate, as they knew they could win against a Jewish socialist, and so they were holding fire. I was extremely dubious about that argument, so I spent a few hours looking at my normal Right Wing Propaganda Machine sources, and I ended up deciding it was true. It was striking that there weren’t any negative articles about Sanders after October or so of 2015. For instance, Sanders’ wife had some complicated financial dealings (personally, I don’t think they were even on the same radar as Trump’s), but there was no mention of them in the Noise Machine. The few articles about him were about how Clinton was victimizing him. That doesn’t mean that supporting Sanders was definitely a bad idea and anyone who did was an idiot. It just means that it’s reasonable to have supported Sanders but unreasonable to think he would definitely have won.

And here I have to emphasize the point I’m making—I think politics is very rarely capable of definitively right judgments; it’s almost always a question of probabilities. Thus, there are a lot of positions on an issue that are reasonable, but they don’t all necessarily turn out to be right. Being reasonable doesn’t guarantee that one is right, and turning out to be wrong doesn’t mean that one’s position was unreasonable. So, I don’t think it’s obvious that Sanders would have won, but that doesn’t mean I’m certain he wouldn’t have. I do think his situation was wobblier than many people realize. Therefore, people who voted for Clinton aren’t (and weren’t) obviously wrong, and people who voted for Sanders aren’t (and weren’t) obviously wrong—the right answer is not certain.

What most of my lefty friends don’t know (since, unlike me, they are sensible enough not to wander around in the GOP Noise Machine) is that Clinton was slammed for being socialist. I saw this a lot on friends’ social media too (and still do)—even in the National Review, which is not an especially extreme site (not as rabidly factional as Fox, let alone hate radio). I think it would have been an issue for Sanders as a candidate—perhaps not fatal (Obama got past it)—but an issue.

And here’s another point for which I have no data other than listening to people. The evangelical right has thoroughly politicized their churches, as they did during segregation, and it’s all about abortion. Unless Sanders was going to change the Dem stance on reproductive rights (which would have lost him huge numbers of people), he would have faced opposition from them. So, again, I think it was reasonable to support Sanders in the primary on the grounds that he was most likely to win; I think it was reasonable to support Clinton on those same grounds. I think it was reasonable to be unhappy there wasn’t a third Dem candidate.

I think we’re reasonable people. The premise of democracy is that no individual or group knows what is best for the community as a whole, that a community benefits from having people passionately committed to different political agendas, that pure agreement is never possible but respectful and grudging compromise is good enough, that listening to people with whom you disagree is useful, that important political change happens slowly, and that being certain and being right aren’t the same thing. I think Democrats should value democracy. I think we agree to have at least that much democracy within our party, and that means acknowledging that disagreement about which candidate is (or was) best is perfectly fine—people might have good reasons for disagreeing.

If the Dems are going to win elections (rather than replay what happened in the 80s) we need to agree to disagree together.

The Principled Position on Pussy-Grabbing

I crawl around the internet and argue with people. And there is a recurrent argument that, for me, is what’s wrong with our current political deliberation in a nutshell.

A person (often a woman) says she couldn’t vote for Hillary (note that Clinton is identified by her first name) because Clinton called the women her husband assaulted sluts and whores. So they voted for a man who bragged that he assaulted women, or they voted in a way that enabled a self-proclaimed sexual predator to become President because they wouldn’t vote for a woman who might have enabled a sexual predator. They wouldn’t vote for someone who did what they are doing by how they are voting. That’s interesting.

It’s interesting that the serious logical problems of that argument don’t occur to them. So, why don’t those problems occur to them?

It’s interesting that they’re trying to argue that their opposition to Clinton is principled, when the principle (don’t vote for someone who supports sexual predation) is violated by their voting for a self-confessed perpetrator (not just a possible enabler) of sexual predation. Why vote for a self-confessed sexual predator (and thereby enable sexual predation) on the grounds that the other candidate might have enabled sexual predation? It’s also interesting how often these women claim that their stance is Christian, while cognitively reconciling their belief that they are promoting Christianity with voting for a self-confessed sexual predator—a man whose wife posed for porn photos (which conservative Christians claim to abhor, and yet neither he nor his wife has said they think those photos were a bad choice), who has a history of adultery, and whose “Christianity” appeared only when it was useful.

Okay, let’s take their argument at face value. They are saying that their position is not sheer factionalism—it isn’t that they would vote for roadkill were it the Republican nominee—they have principles for voting this way. Let’s call this argument the “sexual predation principle” argument.

And, obviously, it’s an argument that trips over its own tongue. Voting for a self-confessed sexual predator because you can’t vote for someone who is doing what you’re doing by voting for Trump (enabling a sexual predator) isn’t an argument from principle about abhorrence of sexual predation.

It’s something else entirely. So, what is it?

And here is something that makes it all more interesting. We have, on tape, Trump bragging about sexually assaulting women. There is no good evidence that Clinton said the accusers were whores or sluts. The sites that claim Clinton did that (and you can google it, because I don’t want to give them the clicks—they’re clickbaity sites) refer to an unsourced anonymous claim that someone said to someone that she had said it to them. There are no sites that quote Clinton directly, let alone show video of her calling the accusers sluts or whores.

I’ve argued with people who claim they saw a video of Clinton saying that. There is no video. There never was. (If there were, you would have seen it all through 2016.) That’s the known phenomenon of people creating an image of a claim they’ve heard over and over (for more on that, see Age of Propaganda). So, why do people have a clear image of a video that never existed?

Because their hatred of Clinton is so visceral as to be visual.

Well, okay, they hate Clinton, and they can list reasons. But are those reasons grounded in principle?

Here’s why that matters. There are, loosely, two ways to reason: one is grounded in ethical principles—that, regardless of who is doing something, you condemn or approve of that thing. Christ endorsed that method of thinking about ethics when he said “Do unto others as you would have them do unto you.” It’s also the point of the Good Samaritan story—an act is right or wrong on its own merits, and not on the basis of who does it.

The other method of thinking about whether something is right or wrong is the one Christ continually rejected—that a thing done by this kind of person is right (if you think that kind of person is right) and it’s wrong if it’s done by a kind of person you think is wrong. That kind of reasoning is purely factional (or tribal, if you prefer that term): people like you are good, and people not like you are bad.

It’s hard for people to see when we’re engaged in factional ethics because we can always come up with instances of bad behavior on the part of the other faction, and so we can sincerely believe our perception of our faction as always better is proven by evidence (aka, confirmation bias). But here’s what factional reasoning can’t do: hold all the factions to the same standards.

If Clinton was wrong to enable sexual predation, then Trump was worse.

That conclusion comes from holding principles the same regardless of faction, and people often don’t reason that way about ethics. People think that they’re behaving in a principled way when they’re reasoning on the basis, not of a logical principle, but of a generalization about their group versus the other group—it seems like reasoning from a principle, but the actual principle is “my group is good.”

And too much American political discourse operates on those grounds, and that people reason factionally is shown most obviously when someone points out the inconsistency. For instance, if you say to me, “Well, you say that Your Candidate is good because she cares about the environment, but she took $10 million from an oil company to hide their oil spill,” a factional (and not principled) response is for me to say, “Well, Your Candidate did it too.” It doesn’t matter if Your Candidate did—that doesn’t mean mine didn’t.

Where that argument should go, if it’s a good one, is an acknowledgement on the part of everyone that both candidates did it, and then we can argue about which is worse.

If you believe that your faction is always right, you might mistake reasoning from that premise (My faction is right; this person is a member of my faction; therefore, this person is right) as operating from a principle because you believe your faction to be more principled than any other.

Unhappily, a lot of the people who voted for a sexual predator did so because they believe that only the Republicans support Christ’s political agenda.

Let’s set aside the most obvious problems with that (Christ didn’t say “except for these people”), and just try to understand that these are people who believe that their political agenda is so Christian that they are justified in treating their political opponents in ways that violate what Christ said about how we should treat others.

What that means is that their political agenda is more important than a pretty clear commandment from Christ.

That’s political factionalism. Whether their political agenda is the same as what Christ would want is up for argument. Whether they’re violating what Christ said about doing unto others is not. They are, and they’re trying to come up with reasons as to why it’s okay.

So, it’s taking a particular and factional political agenda and insisting that only that agenda is good. That’s anti-democratic.

And here’s another way that it’s what’s wrong with American political discourse in a nutshell: it’s ignorant of history. American Christians have a long list of sins on our plate (especially conservative Christians)—policies that were, actually, sheer factionalism, in-group preference, or sheer prejudice. Advocating slavery, defending segregation, opposing unions or any protection for workers’ safety, refusing to allow Jewish refugees from Nazi Germany to come here—all of those things were presented by conservative Christians as the obvious political agenda of Jesus. Oddly enough, a lot of conservative Christians now want to claim movements like abolition and civil rights as proof that they are right, but those movements are evidence that they’re probably wrong: they were progressive and liberal Christian movements, demonized by conservative Christianity. [1] Conservative, and even moderate, Christians opposed Martin Luther King, Jr., and condemned him.

There is a second problem with trying to cite those movements as proof that what politically conservative Christians are doing now is right: all of those movements insisted on the “do unto others” test, the very one rejected by conservative Christians now.

Support of Trump fails that test.

So, let’s stop pretending that “I voted for Trump because Clinton supported her husband” is some sort of principled stance. It isn’t. Let’s stop pretending that people who make that claim are feminists, or allies, or anything other than people who wanted Trump to get elected, and needed a reason that made them feel comfortable.

It’s what’s wrong with American political discourse in a nutshell because it looks as though the person is taking a principled stance, when, in fact, there is neither a logical nor ethical principle consistently applied. It’s a rabidly factional defense of a logically indefensible position. It’s just a way of managing the cognitive dissonance of voting for Trump only because he’s in their faction. But, let’s admit it isn’t principled, and it violates what Christ said about doing unto others.

 

[1] The appalling crime on the part of progressive Christianity—eugenics (also supported by many conservative Christians)—also violated the “do unto others” rule.

 

Handout for Denver talk

“Democracy and the Rhetoric of Demagoguery”

Here’s my argument: I think we can distinguish demagoguery from other forms of persuasive discourse on the basis of the presence of certain rhetorical moves, not the identity of the rhetors. I think, also, we should talk about the effectiveness of demagoguery in terms of how it plays into the informational worlds that people inhabit. Demagoguery isn’t an identity; it’s a relationship.

There are six methodological problems to consider with the “infer from rhetors I hate” project:

1. Looking for the commonalities among successful and hated rhetors assumes what is at stake—that it was something about their rhetoric or identity that enabled them to succeed, rather than there being a tremendous amount of luck, or their being in the right place at the right time. If we want to know what does enable that success, we need to look at unsuccessful demagoguery.
2. That method doesn’t enable us to see demagoguery we like—by beginning with rhetors we hate, we exclude consideration of our attraction to potentially damaging rhetoric.
3. It also prohibits empirical research on demagoguery. And here I’m advocating a kind of research I don’t do, but that I think is valuable. If we could come up with a fairly rigorous definition of demagoguery, then we could use strategies like corpus analysis in order to be more precise in our claims of causality and consequences.
4. Oddly enough, the standard criteria—motive, emotionality, populism—don’t even capture the most famous demagogues, or they end up capturing all political figures, so those criteria are both over- and under-determining.
5. These criteria are demophobic and elitist, as though rich and intellectual people never fall for demagoguery, and that just isn’t true.
6. Finally, by focusing on identities as the problem—bad things happen because we have powerful individuals who are demagogues—we necessarily imply a policy solution of purification. If the presence of these bad people is the problem, then we should purify our community of them. Since I’ll argue that policies of purification are, in fact, one of the consistent characteristics of demagoguery, that would mean, in the scholarly project of criticizing demagogues, we’re engaged in demagoguery.

Odd characteristics of demagoguery:
1. It’s obvious to us that their rhetor is a demagogue, but not to them. If the identity of demagogue is so obvious, why does it ever work?
2. If demagogues are magicians with word wands, why is it so hard to describe their impact/effect accurately?

“Time after time, Hitler set the barbaric tone, whether in hate-filled public speeches giving him a green light to discriminatory action against Jews and other ‘enemies of the state’, or in closed addresses to Nazi functionaries or military leaders where he laid down, for example, the brutal guidelines for the occupation of Poland and for ‘Operation Barbarossa’. But there was never any shortage of willing helpers, far from being confined to party activists, ready to ‘work towards the Fuhrer’ to put the mandate into operation” (Ian Kershaw, Hitler, the Germans 43)

“Nazi propaganda was not, and could not, be crudely forced on the German people. On the contrary, it was meant to appeal to them, and to match up with everyday German understandings […] Thus, far from forcing unwanted or repellant messages down the throats of the population, Hitler and the Nazis carefully tailored what they said, wrote, and especially what they did, in order to win and hold the support of the people.” (Robert Gellately, Backing Hitler 259)

Characteristics of public discourse in train wreck moments:

• Policy questions are reduced to questions of identity, with need reframed as threat to the ingroup, and with identity bifurcated into “us” and “them”;
• The community or nation-state is reduced to the ingroup who are seen as the “real” Americans/Christians/Republicans/Progressives (so that, even if “they” are legally or historically part of the community, they are never considered “real” members);
• An outgroup is scapegoated for all the ingroup’s problems;
• Public discourse is predominantly performance of ingroup loyalty;
• Ingroup loyalty is demonstrated by insisting that policy discussions are unnecessary because the correct course of action is obvious to all people of goodwill (disagreement is fake—either the person disagreeing doesn’t really disagree, or is fooled by the outgroup);
• The community is described as threatened by the mere presence, let alone political power, of that outgroup, and so the solution is some version of purifying us of them;
• Because we are threatened with extinction, concerns like due process, human rights, and fairness are luxuries we can’t afford;
• The discourse is heavily fallacious, but not necessarily emotional, and can involve appeals to authority and expertise, and can look as though there is a lot of “evidence;”
• Nuance, uncertainty, deliberation, and skepticism are rejected as unmanly and disloyal (except for skepticism about claims made against ingroup members);
• Finally, while there are overlaps with fascism (especially as Robert Paxton describes it), it isn’t necessarily fascist, or even political—it is an attack on Enlightenment notions of reason, universal rights, and inclusive deliberation.

Damaging assumptions that people commonly make about political decisions:

• When it comes down to it, the solutions to our political problems are straightforward. Our political issues are the consequence of not having enough good people in office—instead, we have professional politicians who aren’t really trying to solve things. (Stealth Democracy)
• Good people do good things, and it’s easy to recognize when someone is a good person, or when a plan of action is good. So, we don’t need to argue about policy—we just need to vote for the good people who are above (or outside of) professional politics.
• Good people speak the truth, and they don’t try to alter it through rhetoric—they are transparent. You’re better off with someone who doesn’t filter—even if what they say is offensive or not politically correct—because you can know that person. S/he won’t mislead you.
• A “rational” argument is a claim that is true (and that you can recognize easily to be true) supported by evidence, and presented in an unemotional way.

The definition I’m proposing:

Demagoguery is a discourse that promises stability, certainty, and escape from the responsibilities of rhetoric through framing public policy in terms of the degree to which and means by which (not whether) the outgroup should be punished for the current problems of the ingroup. Public debate largely concerns three stases: group identity (who is in the ingroup, what signifies outgroup membership, and how loyal rhetors are to the ingroup); need (usually framed in terms of how evil the outgroup is); and what level of punishment to enact against the outgroup (from restriction of rights to extermination).

(Some) Citations:
Berlet, Chip, and Matthew N. Lyons. Right-Wing Populism in America: Too Close for Comfort. New York: Guilford, 2000.

Burke, Kenneth. “The Rhetoric of Hitler’s ‘Battle.’” The Philosophy of Literary Form: Studies in Symbolic Action. 3rd ed. Berkeley: U of California P, 1978.

Gellately, Robert. Backing Hitler: Consent and Coercion in Nazi Germany. Oxford: Oxford University Press, 2001.

Hibbing, John R. and Elizabeth Theiss-Morse. Stealth Democracy: Americans’ Beliefs about How Government Should Work. New York: Cambridge U P, 2002.

Kershaw, Ian. Hitler: 1889-1936: Hubris. New York: Norton, 1998. Print.
—. Hitler, the Germans, and the Final Solution. New Haven: Yale University Press, 2008.

Lakoff, George. Moral Politics: How Conservatives and Liberals Think, 2nd ed. Chicago: U of Chicago P, 1996.

Mann, Michael. The Dark Side of Democracy: Explaining Ethnic Cleansing. Cambridge: Cambridge UP, 2005.

Miller, Thomas P. The Formation of College English: Rhetoric and Belles Lettres in the British Cultural Provinces. Pittsburgh: U of Pittsburgh P, 1997.

Taleb, Nassim Nicholas. Fooled by Randomness. 2nd ed. New York: Random House, 2005.

Ward, Jason Morgan. Defending White Democracy: The Making of a Segregationist Movement and the Remaking of Racial Politics, 1936-1965. Chapel Hill: U of NC P, 2014.

The Holocaust and Christianity

“Hitler attracted Christians by criticizing the liberalism of democratic government and by advocating a tougher, law-and-order approach to German society. He opposed pornography, prostitution, abortion, homosexuality, and the ‘obscenity’ of modern art, and he awarded bronze, silver, and gold medals to women who produced four, six, and eight children, thus encouraging them to remain in their traditional role in the home. This appeal to traditional values, coupled with the militaristic nationalism that Hitler offered in response to the national humiliation of the Versailles Treaty, made National Socialism an attractive option to many, even most Christians in Germany.” (11, _Betrayal: German Churches and the Holocaust_)

Sciencing in public

As someone really worried about how badly Americans argue about public policies, I’ve been especially worried about highly politicized attacks on science, and about how hard it is for scientists to get pretty basic concepts understood. As a historian of public argumentation, I’m unhappily aware that the tendency to attack scientific discoveries on purely political grounds isn’t new. A lot of people have written about how science is attacked, and have bemoaned our inability to get scientific findings to have real impact on public policy, but I think those writings haven’t had much impact because of their own rhetoric.

Lots of people have said that scientists’ rhetoric is flawed because it’s too technical and academic, but, honestly, I don’t think that’s the problem. I think the two major problems that vex public uses of science in public policy are these: first, culturally, we have a vague definition of what a “science” is; and second, we have a thoroughly muddled notion of what “objectivity” is.

And scientists themselves don’t help. In public, too many scientists conflate “science” and “what I think is good science” and appeal to an inconsistent epistemology.

What people engaged in research about climate change, vaccines, evolution, and gender need to understand is that the people who attack what some of us think of as science do so by citing what they think of as science.

Behind the arguments that we think of as “science” arguments are, it seems to me, two deep misunderstandings: first, what a “science” is; second, what epistemology (model of knowledge) is right. The first one is relatively straightforward, but the second, more complicated one, is the really crucial one.

Part of the problem is that the cultural understanding of what it means to be a “science” is muddled, and, for a large number of people, simply outdated. Until well into the 20th century, various disciplines were called “sciences” that had nothing to do with what we now think of as the scientific method, insofar as they relied on non-falsifiable claims (eugenics, for instance). But they called themselves sciences and they were accepted as such because they had numbers, they had experts, and they had peer-reviewed journals. For many people, that older notion of a “science” prevails: a science is something that is done by people with degrees in fields that seem kind of science-y and have a lot of math. (Look at the oft-shared list of “scientists” who say global warming is a hoax.)

There are various organizations out there (and long have been) with very clear political agendas that call themselves “sciences” or “scientific” and manage to mimic the rhetorical moves of sciences. This, too, is nothing new. When mainstream scientific organizations abandoned race as a useful concept, racists formed their own organizations and journals that only published “studies” that fit their political agenda (John P. Jackson’s Science for Segregation describes this process elegantly). Meanwhile, they railed at the mainstream journals for being politicized. They managed to look like “science” to many people because they had authors who had degrees in science, some of whom worked as “scientists.” That notion of science is an identity argument: science is the work done by people we think of as scientists.

The same thing happened when psychologists decided that homosexuality was not a mental illness—organizations formed with the political agenda of only supporting research that pathologized homosexuality (and, once again, that condemned other research as “politicized”). And they call themselves scientific organizations, with “research” prominent in their titles. There are similar organizations and webpages (and some journals) for organizations that promote Young Earth Creationism, anti-vaccine rhetoric, attacks on climate change, and all sorts of other ideologically charged issues. And, as with the pro-segregationist rhetoric, they are explicitly politicized while projecting that condemnation onto their critics. Because they are explicit that they are looking for “science” that supports beliefs they already have, one of the very straightforward ways that they are not sciences is that their claims are non-falsifiable.

They are scientific, they say, because they can generate studies and data that support their beliefs. In the case of creationism and homophobia, the groups often insist that they are proving that Scripture and “science” say the same thing. They can support their readings with data or quotes from people with degrees in science, and with scientific-sounding explanations. That’s cherry-picking, of course, but it means that they can invoke the authority of “science” to support their claims.

(And here I should probably come clean: I self-identify as Christian, and I think they cherry-pick Scripture just as much as they cherry-pick “science.”)

When I first wandered into these places, where people at odds with the scientific consensus insisted that they were doing science, I just assumed that they were being deliberately disingenuous, but I no longer think so. For me, as for many people, there is “normal science,” which is the data being produced by people publishing falsifiable studies in peer-reviewed journals. Science, furthermore, has the quality that scholars in rhetoric call “good faith argumentation,” meaning that the people putting forward a claim can imagine being presented with data that would cause them to abandon it (there are some other characteristics, but that one is the important one here). But that isn’t how everyone thinks about science—for many people, it isn’t about method, but about the identity of the person doing the work.

Young Earth Creationists, for instance, fail at every point mentioned above (except posture). They can cite data to support their claims (some of which, but not much, is true), but they can’t articulate the conditions under which they would abandon their narrative about the creation of the earth.

So, why do they continue to think of themselves as doing science?

It’s the identity argument. As I said earlier, for many people, “science” is the activity done by people who have degrees in a science field, regardless of the institution, and regardless of the discipline. So, how do they distinguish between good and bad science? Good science is true.

For them, science is a relationship to reality—if you’re a “scientist,” then you have a direct connection to the logos that God breathed into the fabric of the universe. Thus, the claim that 700 scientists say global warming is false matters because people with that kind of unmediated knowledge are the ones making it. That faith in unmediated knowledge is often called the “naïve realist” epistemology.

That “unmediated knowledge” is crucial to all this, and it’s where scientists trip themselves up. It’s important to understand that the people arguing for young earth creation believe that they can simply look and see the truth–so any argument that says “You’re wrong, because you can simply look and see a different answer” isn’t going to work rhetorically. They are looking, and they can find evidence to support their position.

And that raises the second, fairly complicated, problem about epistemology. Scientists have trouble with this, I think, because in public they’re naive realists, insisting that you’re either a naive realist or a postmodern relativist (really? do they think creationists are postmodernists? they’re pre-modernists), but at home they’re skeptics. Science itself rejects naive realism, so scientists need to stop talking as though the only options are naive realism and post-modernism. (In fact, that’s how creationists talk, which is a different post.)

A non-trivial complication in how the public argues about “science” is that what I earlier called “normal science” is often advocated by people who simultaneously do and don’t claim to have unmediated knowledge of the world. That’s a rhetorical problem. Scientists, like young earth creationists (and all the other advocates of bad science out there), both appeal to and reject naïve realism.

Briefly, many defenders of science in public debates make two claims simultaneously: that science is indisputably true, and that science is better than religion because scientists change their minds when presented with new evidence—science is falsifiable. In other words, science looks true to people AND the results of scientific studies are contingent claims that could be proven false. So, as I said, in public discourse, too many scientists appeal to naive realism, but the scientific method itself rejects naive realism.

To many people, that looks as though scientists are saying that, although we’ve changed our minds a lot in the past (meaning “science” can be wrong), we are absolutely right now. Or, more bluntly: science is true, but it’s been false.

And, let’s be blunt: it has been false. Eugenics was mainstream science. It had bad methods, but it was mainstream science, and it was taught in science classes. It didn’t look bad at the time. Medicine claims to be a science, as does nutrition, and both fields have made a lot of claims that scientists in those fields now believe to be false.

Scientists need to reject the false binary of “you believe that science tells us things that are obviously true” or “you are a postmodernist literary critic who believes that all claims are equally true.” That binary is not only a falsifiable claim, but a false one. Young earth creationists are cheerfully unaffected by anything postmodernist, and they say that they believe things that are obviously true. Also, there are very few “postmodernists” who say that “all claims are equally true”–Feyerabend comes to mind, and very few others–and, no, that isn’t actually what Foucault or Derrida said. (And I don’t even really like Foucault or Derrida, and I still think that’s an outrageously ignorant way to characterize what they’re saying.)

Keep in mind, Popper said that objectivity isn’t about what an individual does. A claim is objective, he said, because it’s an object in the world, and he said an objective claim isn’t necessarily true. So, since Popper said that an individual scientist isn’t necessarily objective, is he a postmodern relativist?

Good science isn’t about the cognitive processes of individuals engaged in science; it’s about the arguments people in science have. When people claim that you either believe what “science” says right now or you’re a postmodernist relativist hippy, they’re rejecting the scientific method.

The whole premise of the scientific method, especially concepts like a control group, falsifiability, and double-blind studies, is that people are prone to confirmation bias (a good study doesn’t set out to confirm a hypothesis: it sets out to falsify one). The scientific method presumes that human perception is clouded. Acknowledging that individuals can’t simply see the truth doesn’t make the underlying epistemology either solipsistic or relativist (both of which are, oddly enough, often misnamed as postmodernism—they long predate modernism, let alone postmodernism). It means that science generally exists in the realm of skepticism, sometimes radical, sometimes the mild version that Karl Popper called fallibilism. For Popper, there is a truth out there, and it can be perceived by individuals, but individuals are fallible judges of when they have and have not reached it.

Science isn’t about binaries. It’s about continua. There are some claims that could, in principle, be falsified, but have so thoroughly withstood testing that it isn’t even interesting to consider the possibility—such as evolution. There are aspects of evolution about which there is disagreement, and about which new consensuses continue to form (such as the direct ancestor of homo sapiens), but all of those disagreements are subject to proof and disproof through further research. And that is the difference between evolution and creationism: religious faith, by its very nature, cannot be subject to disproof. Science is, fundamentally, a rejection of naive realism and of binaries about certainty: it says we should be skeptical about all claims, and we should think about claims in terms of how certain we are of them.

It’s no coincidence that science and skepticism arose at the same time, and, in fact, that’s the argument that scientists make about how science is different from religion: a true scientist will abandon her beliefs if the data disconfirm them, but religion is about rejecting the data if it disconfirms the beliefs.

Let me rephrase my original statement of the problem: scientists make a rhetorical claim (that their claims should be granted more credence because of how they are supported) and an epistemological one (that their arguments are true). I sincerely believe that science is in such a bad way right now because too many advocates of science reject what they know: that science isn’t about whether you’re certain, but about how certain you are, and about the conditions under which you should change your mind.

The epistemology underlying science is a skeptical one, and scientists know that. When they’re arguing in public, they need to stop acting as though there is either naive realism or postmodern relativism. Scientists are skeptics who argue passionately for their point of view.

Right now, our political world is demagogic, and that means that our political world is dominated by the notion that there are good people who perceive the obviously correct way to do things, and then there are those assholes. We disagree about who the assholes are, but we all agree that it’s a binary.

What science could and should do for us is show a different way of thinking about thinking–that the right course of action depends on a correct understanding of the world as it is, and there is no correct understanding immediately available to us, but there are understandings that look pretty damn good, given all the research that’s been done.


I’m not saying that scientists need to argue better in public; while I think the whole project of sciencing in public is wonderful, I also think, ultimately, scientists aren’t obligated to be rhetoricians. (Some of them are wonderful rhetoricians, such as Steven Weinberg, but that shouldn’t be a requirement.) Instead, I think we need, as a culture, a better understanding of how knowledge isn’t a binary between certain and uncertain, but a continuum. I think, oddly enough, that the solution to our current problem of fake science isn’t really in science, but in the study of knowledge.

Among Democrats (Compromise, Purity, and Lefty Politics)

Among Democrats, there are a lot of narratives about the 2016 election, and two of them are highly factional (that is, they assume an us or them, with us being the faction of truth and beauty and them being the people who are leading us astray). One is that Clinton’s election was tanked by Bernie-bros who were all young white males too obsessed with purity to take the mature view and vote for Clinton. The other is that the DNC, an aged and moribund institution, foisted Clinton onto Dems when she was obviously the wrong candidate.

Both of those narratives are implicit calls for purity, for a Democratic Party (or left) that is unified on one policy agenda—maybe the policy agenda is a centrist one, and maybe it’s one much further left—but the agreement is that we need to become more purely something. Both narratives are empirically false (or else non-falsifiable), patronizing, and just plain offensive. In other words, both of those narratives are driven by the desire to prove that “us” is the group of truth and goodness and “them” is the group of muddled, fuddled, and probably corrupt idjits.

And, as long as the discourse on the left is which “us” is the right us, progressive politics will lose.

There isn’t actually a divide in the left—there’s a continuum. People who can be persuaded to vote Dem range from authoritarians drawn to charismatic leadership (anyone who persuades them that s/he is decisive enough to enact the obviously correct simple policies the US needs) all the way through various kinds of neoliberalism to some versions of democratic socialism. And there are all those people who can vote Dem on the basis of a single issue—abortion or gun control, for instance. When Dems insist that only one point (or small range) on that continuum is the right one, Dems lose because none of those points on the continuum has enough voters to win an election. That’s why purity wars among the Dems are devastating.

While voting Dem is actually a continuum, there are many who insist it is a binary—those whose political agenda the DNC should represent (theirs) and those whose agenda is actually destructive, whose motives are bad, and who cause Dems to lose elections (everyone else—who are compressed into one group).

Here’s what’s interesting to me. It seems to me that everyone who wants Dem candidates to win recognizes that a purity war on the left is bad, and everyone condemns it. Unhappily, being opposed to a purity war in principle and engaging in one in effect are not mutually exclusive. There is a really nasty move that a lot of people make in a rhetoric of compromise—we should compromise by your taking my position—and that is what a lot of the “let’s not have a purity war” on the left seems to me to be doing. Let’s not do that. Let’s do something else.

This is about the something else that we might do.

And it’s complicated, and I might be wrong, but I think that Dems will always lose in an “us vs. them” culture because, at its heart, the Dem political agenda is about diversity and fairness, and people drawn to Dem politics tend to value fairness across groups more than loyalty to the ingroup, so any demagogic construction of ingroups and outgroups is going to alienate a lot of potential Dem voters. Sometimes voting Dem is a short-term way of looking out for your own group, but an awful lot of Dem voters are motivated by the hope of creating a world that includes them. I don’t think Dems will succeed if we grant the very premise that Dem politics is supposed to resist: that only the ingroup is entitled to good things.

But we’re in a culture of demagoguery, in which politics is framed as a battle between Good and Evil, and deliberation (in which people of different points of view come together to work toward a better solution) is dismissed. If we’re in a world of us vs. them, how can Dems create a politics of us and them? That is our challenge.

And I want to make a suggestion about how to meet that challenge that is grounded in my understanding of what has happened in the past: not just in 2016 (although that is part of it), but also in ancient Athens, to opponents of Andrew Jackson, to opponents of Reagan, and in our era of highly factionalized media. I want to argue that what seem to be obviously right answers are not obvious, and possibly not even right.


  1. In which I watch lefties tear each other to shreds and lose an election we should have won

When I first began to pay attention to politics, and saw how murky, slow, and corrupt it all was, it seemed to me that the problem was clear: people started out with good principles, and then compromised them for short-term gains, and so, Q effing D, we should never compromise. (I saw The Candidate as a young and impressionable person.)

I could look at political issues and see the obvious course of action. And I could see that political figures weren’t taking it. Obviously, there was something wrong with them. Perhaps they were once idealistic, perhaps they had good ideas, but they were compromising, and, obviously, they shouldn’t; they should do the right thing, not the sort-of right thing.

Another obvious point was how significant political change happens: someone sets out a plan that will solve our problems, and refuses to be moved. ML King, Rosa Parks, FDR, Woodrow Wilson, John Muir, Andrew Jackson (no kidding—more about his being presented as a lefty hero below) were all people who achieved what they did because they stood by their principles.

That history was completely, totally, and thoroughly wrong, in that neither Wilson nor Jackson was the progressive hero I thought, and all of those figures compromised a lot. But, if that’s the history you’re given, then you will believe that to compromise necessarily means moving from that obviously right plan (about which you shouldn’t have compromised) to one that is much less right, and that the only reason to do so would be pragmatic (aka Machiavellian). Therefore, substantial social change and compromise are at odds, and if you want substantial social change, you have to refuse to compromise. (Again, tah fucking dah—there’s a lot of that in easy politics.)

My basic premise was that the correct course of action was obvious, and, therefore, I had to explain why political figures didn’t adopt it. Why would people compromise a policy that is obviously right? And, obviously, they had to deviate from the right course of action in order to get political buy-in from people who value things I don’t value. Or they were bad politicians in the pocket of corporate interests. (Notice how often things seemed obvious to me.)

And then Reagan got elected. Reagan lied like a rug, and yet one of the first things his fans said about him was that he was authentic. He announced his run for the Presidency by saying he would support states’ rights at the site of one of the most notorious civil rights murders. And yet his fans would get enraged if you suggested he appealed to racism.

People loved him, regardless of his policies, his actual history, his lies. They loved his image. (It’s still the case that people admire him for things he never did.)

When he was elected, lefties went to the streets. We protested. The people protesting were ideologically diverse—New Deal Dems, people who had said that there was no difference between him and Carter, radical lefties, moderate lefties; I even saw people who told me they had intended to vote for Reagan because it would make the people’s revolution more likely, and who were now protesting that the candidate they had supported had won.

There were more than enough people out protesting Reagan’s election to prevent his getting reelected. And, in 1980, we all agreed that he shouldn’t be reelected. Unhappily, we also all agreed that he had been elected because there was too much compromising in the Dem party, that Carter was a warmongering tool of the elite, and that the mistake we made was not having a candidate who was pure enough. And so, we agreed, the solution was for the Dems to put forward a Presidential candidate who was more purely committed to the obviously right values and less willing to compromise on them. We didn’t get that candidate; in fact, we didn’t get a very good candidate (he was pretty boring), but his policies would have been good. And a lot of lefties refused to vote for him.

Unhappily, it turns out we disagreed as to what those obviously right values were.

In 1980, the Democratic Party was the party of unions, immigrants, non-whites, people who believe in a strong safety net, isolationists, humanitarian interventionists, pro-democracy interventionists, people who believed a strong safety net was only possible in a strong economy (what would later be called third-way neoliberals), environmentalists, people who were critical of environmentalists, and all sorts of other ideologically diverse people.

There wasn’t a party platform on which we could all agree. To support the unions more purely would have, union reps argued, meant virulently opposing looser standards about citizenship and immigration. The anti-racist folks argued for being more inclusive about citizenship and immigration. Environmentalists wanted regulations that could cause manufacturing to move to countries with lower standards, something that would hurt unions. People who wanted no war couldn’t find common ground with people who wanted humanitarian intervention. (And so it’s interesting how conservative the 1980 platform now looks.)

Dems, at that point, had five choices: reject the notion that there was a single political agenda that would unify all of its groups (that is, move to a notion of ideological and policy diversity within a party); decide that one group’s agenda was the single right choice; try to find someone who pleased everyone; try to find candidates who wouldn’t offend anyone; or engage in unification through division (get people to unify around how much they hated some other group).

Mondale was the fourth; most lefties went for the second or fifth. I think we should consider the first.

At the time I was a firm believer in the second, for both good and bad reasons. And lots of other people were too. What we believed is what I have come to think of as the P Funk fallacy: if you free your mind, your ass will follow. I believed that there were principles on which all right-thinking people agree, and that those principles necessarily involve a single policy agenda. Thus, we should first agree on principles, and then our asses will follow.

Lefty politics is the grandchild of the Enlightenment. We believe in universal rights, the possibilities of argument, diversity as a positive good, the hope of a world without revenge as the basis of justice. And, perhaps, we have in our ideological DNA a gene that is not helping us—the Enlightenment is also a set of authors who shared the belief (hope?) that, as Isaiah Berlin said, all difficult questions have a single true answer. I think the hope is that, if we get our theories right—if we really understand the situation—then the correct policy will emerge.

But there might not be a correct policy, at least not in the sense of a course of action that serves everyone equally well. An economic policy that helps creditors will hurt borrowers, and vice versa.[1] In trying to figure out, then, what kind of economic policy we will have, we can decide we’re the party of lenders, or the party of borrowers, and only support policies that help one or the other. Or we could be the centrist party, and try to have policies that kinda sorta help everyone a little but not a lot, and therefore kinda sorta hurt everyone a little but not a lot, thereby promoting policies that everyone dislikes. I think Dems have been trying that for a while, and it isn’t working. But neither is deciding that we’ll only be the party of borrowers, since borrowers require lenders who are succeeding enough to lend.

The problem with the whole model of politics being a contest between us and them is that it makes all policy discussions questions of bargaining and compromise. What’s left out is deliberation. But that’s hard to imagine in our current world of, not just identity politics, but of a submission/domination contest between two identities. And, really, that has to stop.

Blaming the left for identity politics is just another example of the right’s tendency toward projection. The Federalist Papers imagines a world in which elections are identity-based (which the Constitution’s defenders saw as preferable to faction-based voting). Since most voters could not possibly personally know any candidate for President or Senate, they should instead vote for someone they could know, and whose judgment they trusted (see, for instance, what #64 says about the electors and the Senate). That person could then know the various candidates and make an informed decision as to which of them had better judgment. So, at each step, people are voting for a person with good judgment, to whom they were delegating their own deliberative powers.

That vision quickly evaporated and was replaced by exactly what the authors of the Constitution had tried to prevent: party politics. And then, by the time of Andrew Jackson, we got a new kind of identity politics: voting for a candidate because he seems to share your identity and will therefore look out for people like you. His good judgment comes not from expertise, the ability to deliberate thoughtfully, or deep knowledge of history, but from his being an anti-intellectual, successful, and decisive person who cares about people like you. Through the nineteenth century, the notion of the ideal political figure shifted from someone much smarter than you are to someone not threatening to you.


  2. Factionalism, Andrew Jackson, and the rise of identification

The problem that everyone to the left of the hard right has is the same: we are in a culture in which rabid factionalism on the part of various right-wing major media is normalized, and anything not rabidly right-wing is condemned as communist. Lefties should be deeply concerned about factionalism (including our own), and careful about how we try to act in such a world. There are several clear historical lessons for Americans as to what that kind of rabid factionalism does (I’ll just talk about Athens), and a clear lesson from American history as to how we should not try to manage it (the case of Andrew Jackson).

Here’s the short version. The US, when it was founded, was an extraordinary achievement on the part of people well-versed in the histories of democracies, republics, and demagoguery. Their major concern was to make sure that the US would not be like the various republics and democracies with which they were familiar. That included the UK (which was, at that point, immersed in a binary factionalism), various Italian Republics (especially Florence and Venice), the Roman Republic, and Athens.

And Athens is an interesting case, and something about which current Americans should know more. Knowing their Thucydides (via Thomas Hobbes, a post I might write someday), the authors and defenders of the constitution knew that Athens had shot itself in the face because at a certain point (just after the Mytilenean Debate, for those of you who care), everyone in Athens thought about politics in two ways: 1) what is in it (in the short-term) for me; 2) what will enable my political party to succeed?

No one worried about “what is best for Athens” with a vision of “Athens” that included members of the other political party. So, because Athens was in a situation of rabid factionalism, you would cheerfully commit troops to a political action if you thought it would do down the other party. Military decisions were made almost entirely on factional bases.

Thucydides describes the situation. He says that city-state after city-state broke into hyper-factional politics that was almost civil war. All anyone cared about was whether their party succeeded—no one listened to the proposals of the other side with an ear to whether they were suggesting something that might actually help. In fact, being willing to listen to the other side, being able to deliberate with them, looking at an issue from various sides—all of those things were condemned as unmanly dithering. Refusing to call for the most extreme policies or suggesting moderation wasn’t a legitimate position—anyone doing that was just trying to hide that he was a coward. Only people who advocated the most extreme policies were trustworthy; anyone else wasn’t really loyal to the party and so shouldn’t be trusted.

Plotting on behalf of the party was admirable, and it didn’t matter how many morals were shattered in those plots—success of the party justified any means. But people weren’t open about their willingness to violate every ethical value they claimed to have in order to have their party triumph; they cloaked their rabid factionalism in ethical and religious language while actually honoring neither. So, Thucydides says, there was a situation in which every good value was associated with your party triumphing, and every bad value with its failing to triumph.

People worried about their party, and not their country.

We can think, why would anyone do that? And yet, we might do it. No one thought to themselves, “I wish to hurt Athens, and so I will only look out for my political party.” Instead, what they probably never consciously thought, but what was the basis for every decision, was that only their group was really Athenian. So, they thought (and sincerely believed) that anything that promotes the interests of my group is good for Athens, because only my group is really Athenian.

Michael Mann, a scholar of genocides, calls this the confusion of ethos and ethnos. The “ethos” of a country is its general culture, and the “ethnos” is one particular ethnic group. What can happen is that a specific group decides that it is the real ethos, and therefore any action against other groups is protecting “the people.” They are the only “people” who count. Seeing only your class, political party, ethnic group, or religion as the real identity of the group hammers any possibility of inclusive deliberation. It is also the first step toward the restriction, disempowerment, expulsion, and sometimes extermination of the non-you. While not every instance of “only us counts” ends in mass killing, every kind of mass killing—genocide, politicide, classicide, religiocide—begins with that move.

Even ignoring the ethics of that way of thinking, it’s a bad way for a community to deliberate. But what they did think, as Thucydides says, is that anything that helped you and your party was a good thing to do, even if it was something you would condemn in the other party. You might cheerfully use appeals to religion to try to justify your policies, but if other policies better helped your party, then you’d use religion to justify those policies instead. No principle other than party mattered.

If the other side proposed a policy, you didn’t assess whether it was a good policy, you were against it. You were especially likely to be against it if it was a good policy, since then they would gain more supporters. You would gleefully gin up a reason that troops should be sent to a losing battle and put an opposition political figure in charge—losing troops (and a battle) was great if it hurt the party.

And so Athens crashed. Hardly a surprise.

In fact, the people of Athens were dependent on each other, and no group could thrive if other groups lost battles. Us vs. Them thinking forgets that we are all us.

At the time of the American Revolution, the British political situation was completely factionalized. We might like to admire Edmund Burke, who so eloquently defended the American colonies, but even I (an admirer of his) know that, had his party been in good with George III (they weren’t) he probably would have written just as eloquent an argument for crushing the American Revolution. The authors of the Constitution were also well aware of other historical examples that showed the fragility of republics, especially Venice (one of the longest lasting republics), Florence, and Rome.

And those were the conditions the authors of the Constitution tried to solve through the procedure of people voting for someone whose authority came from intelligence and judgment. That is, the constitution worked by having people vote, not for the President directly (since you couldn’t possibly know the President personally) but for someone you could know—a state legislator, an elector—whose judgment you could assess directly. But factions arose anyway.

The factions were somewhat different from those in either Athens or Britain. In Athens it was (more or less) the rich who wanted an oligarchy, or really a plutocracy, with the wealthy having more power than the poor, and with very little redistribution of wealth. On the other side were the non-leisured (not necessarily poor, but not very wealthy either) who wanted at least some redistribution of wealth and a lot of power-sharing. But an individual’s decision to join a particular faction was also influenced by family alliances and personal ambition. In Britain, factions were described as country versus city (wealth that came from land ownership versus industry and finance) which may or may not be accurate. As in Athens, there were other factors than just economics, and that city-country distinction might itself have been nothing more than good rhetoric to explain factions that weren’t really all that different from each other.

In the US, by the time of Andrew Jackson’s rise (the 1820s), there was some division along economic lines (agriculture vs. shipping, for instance), and some along ideological ones (Federalist vs. Antifederalist), but they didn’t give a very clean binary. There were more than two parties, and even the major parties were coalitions of people with nearly incompatible political agendas (Whigs and Democrats were both strong in the North and South, for instance). Given both the youth of the country and the large number of immigrants, there weren’t necessarily family traditions of having been in one faction or another, and there wasn’t a clean regional distinction (the North was still predominantly agricultural, and some “Northern” states had slaves until the 1830s, so neither the agricultural/industrial nor the slave/free distinction provided any kind of mobilizing policy identity). Nor was there the odd role that the monarchy played in British political factions (for years, one faction attached itself to the monarch, and another to the son whom the monarch hated). US factions were muddled and shapeshifting.

A disparate coalition is particularly given to intrafactional fighting, splitting, and purity wars, and so there is generally a strong desire to find what is usually called a “unification device.” The classic strategy to unify a profoundly disparate coalition is two-part: unification through finding a common enemy; cracking the other side’s coalition with a wedge issue. If a party is especially lucky, that two-part strategy is made available through one issue. And that’s what US parties did in the antebellum era, and, after trying various ones, they ended up on fear-mongering about abolitionism, with some anti-Catholicism thrown into the mix.

Antebellum media was extremely factionalized. Newspapers were simultaneously openly allied with a particular party, rabidly factional, and passionate in their condemnations of faction.

“The bitterness, the virulence, the vulgarity, and perfidy of factious warfare pervade every corner of our country;–the sanctity of the domestic hearth is still invaded;–the modesty of womanhood is still assailed…” (“Party” U.S. Telegraph, June 24, reprinted from the Sunday Morning News). The anti-Jackson Raleigh Register had the motto “Ours are the plans of fair delightful peace, unwarp’d by party rage, to live like brothers” but spent the spring and early summer of 1835 in vitriolic exchanges with the Jacksonian Standard. One letter in the exchange, for instance, begins, “The writhing, twisting and screwing–the protestation, subterfuge and unfairness and the lamentation, complaint and outcry displayed in this famous production” (Raleigh Register February 10, 1835). (From Fanatical Schemes).

For instance, a newspaper’s criticism of a political party inspired a member of that party to threaten a duel, and, once the various rituals that enabled a duel to be avoided had been enacted, the man who had threatened a duel over criticism of his political faction said, “I regard the introduction of party politics as little less than absolute treason to the South.”

When, from about 2003 to 2009, I was working on a book about proslavery rhetoric, this characteristic—that people operating on purely factional motives condemned factionalism—was one of the things that made me begin to worry about current US political discourse, since it was so true of what I was seeing in American media. The most passionately factional media have mottos like “Fair and Balanced.” I have an acquaintance who consumes nothing but hyper-factionalized media, and he has several times told me I shouldn’t believe something from outside that media because it’s “biased.” Clearly, he doesn’t object to biased media, since that’s all he consumes. And then I noticed that this is a talking point in various ideological enclaves—you refuse to look at anything that disagrees with the information you’ve gotten from your entirely biased sources on the grounds that it is biased.

If you push them on that issue, I’ve found that consumers of extremely factional media respond to criticisms of their factionalism (and bias) with “But the other faction does it too”—a response that only makes sense in a world in which every question is “which faction is better?” rather than “what behavior is right?” So, even their defense of their factionalism shows that, at base, they think political discourse is a contest between factions, and not a place in which we should—regardless of faction—try to consider various policy options. They live and breathe within faction.

Andrew Jackson was tremendously successful in that world, partially because of his conscience-free use of the “spoils system”—in which all governmental and civil service positions were given to supporters. And Jackson didn’t particularly worry about his policies; one of his major “policy” goals was abolishing the National Bank. Scholars still argue about whether he had a coherent political or economic policy in regard to the bank; what is clear is that he didn’t articulate one, nor did his supporters. Hostility to the bank was what might be called a “mobilizing passion,” not a rationally-defended set of claims. But that passion was shared with many who had almost gut-level suspicions of big banks, monetary controls, and a strong Federal Government.

Hostility to the Bank was such a widely-shared view that Jackson’s destruction of it, and that destruction’s direct consequence, the Panic of 1837, couldn’t serve as a rallying point for his opposition. And Jackson’s combination of popularity, use of the spoils system (including his appointment of judges—one of whom is an ancestor of mine), and strong political party worried many reasonable people that he was trying to create a one-party state. So, even as his second term was ending, people were trying to figure out how to reduce his power, and yet they couldn’t use what were quite clearly unsound economic policies to do so.

There were more opponents of Jackson than there were supporters, but to call them disparate is an understatement. Some were pro-Bank, but too many were anti-Bank for that issue to be useful. There were a large number of anti-Catholics (some of whom might have been Masons), and also a few anti-Masons. Jackson’s bellicose (albeit effective) handling of the Nullification Crisis had alienated many of the South Carolina politicians whom he had trounced, but their stance on the tariffs (which had catalyzed the Nullification Crisis—they were trying to nullify tariffs) was incompatible with the interests of manufacturers in other areas.

Jacksonian Democrats played two (related) cards quite effectively—they played to racism about African Americans by supporting disenfranchisement of African-American voters and engaging in fear-mongering about free African Americans at the same time that they openly embraced Irish-Catholic voters (whose right to vote was still an issue in some places). They thereby drove a wedge between two groups that might have allied (poor Irish and freed African Americans), essentially offering the gift of “whiteness” to the Irish for their political support (this story is elegantly and persuasively told in How the Irish Became White). Because politics naturally works by opposites, this made Catholicism an issue on which other parties had to take a stand, and they stood to lose large numbers of voters no matter which way they jumped. The only thing that the various anti-Jackson parties shared was that they were anti-Jackson, and it’s hard to raise a lot of ire against a white guy who does a good job of coming across as a regular guy who really cares about “normal” people. In rhetoric, that’s called “identification”—a rhetor persuades an audience that s/he and they share an identity, and persuades them that the shared identity is all the information the audience needs.[2]

Elsewhere I’ve argued that John Calhoun tried to use fear-mongering about abolitionists (who were a harmless fringe group at that point) in order to unify proslavery forces behind him. It’s a great kind of strategy—you find some kind of hobgoblin that is politically powerless but that frightens a politically powerful group, and you present yourself as the only one who can save them from that hobgoblin. Unfortunately for everyone, Calhoun’s opponents simply picked up his method and American politics began an alarmism race to see who could out-fearmonger the others and call for increasingly extreme (and irrational) gestures of loyalty to slavery. Eventually, those gestures (such as the Fugitive Slave Law, the “gag rule,” the attempt to expand slavery past the Mason-Dixon Line, and, finally, the Dred Scott decision) generated as much fear and anger about The Slave Power as proslavery rhetors were generating about abolitionists.

Reagan was much like Jackson, in that his economic policies were vague but seemed populist, and he persuaded people that he really cared about them and understood them. He was normal, and he wanted normal Americans to be at the center of America.

Trump’s situation is different in that he has never had very high approval outside of his faction, but the rabidly factionalized media ensures that he has a deliberately and wickedly misinformed faction willing to pivot quickly to a new posture on any political issue.

What makes Reagan and Trump similar, and like Jackson, is that they have far more opponents than allies, and a highly mobilized base. As long as the opposition remains internally factionalized, they win. But, at this point, all that is shared among Trump’s opponents is opposition to Trump. The impulse might be to try to do what Jackson’s opponents did, and find some issue about which to fear-monger, or to do what Reagan’s opponents did, and remain factionalized. Right now, we seem headed toward the second, and in a somewhat complicated (and genuinely well-intentioned) way.

The advice seems to be that we need to have a unified and coherent policy agenda in order to mobilize voters. And, while I agree that simply being anti-Trump isn’t enough, I don’t think the unified and coherent policy agenda strategy will work either, for several reasons. The first reason is that it is trying to solve the problem of faction through faction. The second (discussed much later) is that it is grounded in a misunderstanding of how Americans vote.

 

III. Trying to solve the problems of factionalized politics by creating a more unified faction

[Most of this section was pulled out and posted separately here.]

 

 

  1. The mobilizing passion/policy argument

Speaking of reasonable arguments and thinking about probabilities, what are reasonable ways to go on from here and not repeat the errors of the past? The two most common arguments as to what we should do now are both, I’ll argue, reasonable. I’ll also argue that they’re probably wrong. But they aren’t obviously wrong, and I doubt they’re entirely wrong. One is that we’re losing elections because we aren’t putting forward a charismatic enough leader who inspires passionate commitment to a clear identity (what I always think of as “the Mondale problem”). The second is that the problem with the Dems in 2016 is that they didn’t have a sufficiently progressive platform of policies, and so there wasn’t a mobilizing political agenda. Therefore, we should have a clearer mobilizing identity or political agenda.

I think these are reasonable arguments, but I don’t think either of them will work—I’m not sure they’re plausible (they certainly aren’t sufficient), and I’ll explain why in reverse order.

First, as to the “we just need someone with a clear progressive policy agenda” argument, I have to say that a lot of lefties in my rhetorical world who make that argument turn out to have no clue what policies Clinton advocated. They lived in a world of hating on Clinton throughout the election, and so remain actively misinformed about her policy agenda (and the number of them who shared links from fake news sites in October was really depressing).

A lot of lefties are political wonks, and so we assume that everyone else is equally motivated by policy issues. Unhappily, a lot of research suggests that isn’t the case. The next section relies heavily on three books: Hibbing and Theiss-Morse’s Stealth Democracy (2002), Achen and Bartels’ Democracy for Realists (2017), and Parker and Barreto’s Change They Can’t Believe In (2014). I should say, before going through the research on the issue, that I’m not as hopeless about the prospects for more policy argumentation in American public discourse as I think these authors are, and I do think that improving our politics through improving our political discourse is the most sensible long-term plan. For the short-term, however, I think it makes sense to be pragmatic about how large numbers of people make decisions about voting, and they don’t do it on the basis of deep considerations of policy—or on the basis of policy at all.

John Hibbing and Elizabeth Theiss-Morse summarize their research: people care more about process than they do about policy, and they “think about process in relatively simple terms: the influence of special interests, the cushy lifestyle of members of Congress, the bickering and selling out on principles” (13). According to Hibbing and Theiss-Morse, people believe that the right course of action on issues is obvious to people of goodwill and common sense who care about “normal” Americans: people believe that there is consensus as far as the big picture and that “a properly functioning government would just select the best way of bringing about these end goals without wasting time and needlessly exposing the people to politics” (133). Hibbing and Theiss-Morse refer to “people’s notion that any specific plan for achieving a desired goal is about as good as any other plan” (224).

A disturbing number of people believe that the correct course of action is obvious, because it looks obviously correct from their particular perspective. And I should emphasize that it isn’t just those stupid people who do it. Even lefties—even academic lefties—who emphasize the importance of perspective, teach about viewpoint epistemology, and reject naïve realism can regularly be heard at faculty meetings bemoaning the benighted administration for its obviously wrong-headed policy. In my experience, there is always a perspective from which the administration’s response is sensible. Most commonly, something that puts a great burden on my department (and my kind of department) is a policy that works tremendously well for most of the university, or for the parts of the university that the administration values more. Sometimes the bad policies are mandated by the state or federal government, or sometimes they are, I think, a misguided attempt to improve the budget situation. From my perspective, their policies look bad; from their perspective, my preferred policy looks bad.

I’m not saying that both policies are equally good, or all perspectives are equally valid, or that there is no way out of the apparent conundrum of a lot of people who all sincerely care for the university disagreeing as to what we should do. I’m saying that it’s a mistake for any of us to think that the correct course of action is obviously right to every reasonable person. I’m saying we really disagree, and that determining the best policy is complicated.

Most important, I’m saying that the tendency to dismiss disagreement and assume that complicated problems have simple solutions is widespread.

Since this depoliticizing of politics is widespread, how do people explain all the disagreement about policies? Hibbing and Theiss-Morse argue that people believe that most politicians are self-interested, and bicker so much because they are submissive to the “special interests” that donate money to them: “The people would most prefer decisions to be made by what [Hibbing and Theiss-Morse] call empathetic, non-self-interested decision-makers” (86). They quote one of the participants in their research who “said he had voted for Ross Perot in 1996 because he felt Perot’s wealth would allow him to be relatively impervious to the money that special interests dangle in front of politicians” (123).

Hibbing and Theiss-Morse are persuasive on the profoundly anti-democratic way that people perceive “special interests.” They say, “Our claim is that the people see special interests as anybody with an interest. Since government is filled with people who have interests, the people naturally come to the conclusion that it is filled with special interests.” (226)

People use the term “special interest,” according to Hibbing and Theiss-Morse, “to refer to anybody discussing an issue about which they do not care” (222).

We see ourselves as “normal” Americans, whose needs should be central to American policy, and whose problems should be solved quickly and sensibly. Were government functioning well, that’s what would happen, but it isn’t happening because the people in office put “special interests” above people like us, so we want someone who conveys compassion and care for us.[5]

That claim—that voters care more about caring and quick solutions to their problems and are neither interested in nor moved by policy deliberation—is supported by Achen and Bartels’ Democracy for Realists, which reviews years of studies in order to refute what they call the “folk theory of democracy.” That theory assumes that democracy is “rule by the people, democracy is unambiguously good, and the only possible cure for the ills of democracy is more democracy” (53).

Achen and Bartels conclude that elections don’t represent some kind of wisdom of the people, but “that election outcomes are mostly just erratic reflections of the current balance of partisan loyalties in a given political system” (16). Achen and Bartels argue that voters’ perceptions of policies—even basic facts—are largely determined by motivated reasoning (people use their powers of reason to rationalize a decision they have made for partisan reasons) or simply out of a desire “to kick the government,” even for natural disasters over which the government had no control (118). People aren’t motivated to join a party because they like the policies: “The primary sources of partisan loyalties and voting behavior, in our account, are social identities, group attachments, and myopic retrospections, not policy preferences or ideological principles” (267). By “myopic retrospections,” they mean events that happened in a very short period just before the election, for which they are punishing the incumbents.

Achen and Bartels refer to Hibbing and Theiss-Morse, and other scholars, in their conclusion that “many citizens in well-functioning democracies” don’t understand the value of opposition parties and the necessary disagreement that comes with different points of view.

They dislike the compromises that result when many different groups are free to propose alternative policies, leaving politicians to adjust their differences. Voters want ‘a real leader, not a politician,’ by which they generally mean that their ideas should be adopted and other people’s opinions disregarded, because views different from their own are obviously self-interested and erroneous. (318)

There is a right way, in other words, and it’s the way that looks right to normal people, and it’s the one that should be followed.

Michele Lamont’s The Dignity of Working Men (2000) emphasizes that many men (especially white men) gain dignity from seeing themselves as disciplined, and explain their success as completely their own individual achievement—they actively resent goods (such as support of various kinds) being given to people who don’t work (see especially 132-135; this was less true of the African Americans whom Lamont interviewed, who tended to emphasize the “caring” self). And, especially for white men, wealth isn’t necessarily good or bad; they don’t necessarily resent people who are wealthier, but they do resent people with higher status who look down on them (108-109). They want to feel respected and cared about (which may explain Trump’s success with precisely the kind of voter whom many people thought would resent his problematic record with small businesses).

What all of this means is that thinking that the issue for the Dems in 2016, or the issue at the state and Congressional level, is that we haven’t articulated a compelling and thorough policy argument is almost certainly wrong. People who voted for Obama and then voted for Trump weren’t drawn by his policies, but his identity. As Achen and Bartels remind us, voters often get wrong the policies of their favorite political figures or their own party. And voters are easily maneuvered by mild shifts in wording (asking people about ACA versus asking them about Obamacare, for instance). Large numbers of voters don’t care about policies.

They care about slogans—they care about being told that the party or politician cares about them, and will throw out the bastards, drain the swamp, clean house. Large numbers of people want to be reassured that their needs and desires for themselves are the only ones that matter and will be the first priority of the party/rhetor.

And a lot of voters vote on the basis of promises the candidate can’t possibly fulfill—and that isn’t just something that ignorant supporters do. Certainly, Trump promised to do things the President can’t do without thoroughly violating the Constitution (since he was proposing to dictate Congressional and judicial policies), but both Sanders and Clinton proposed policies there was no reason to think they could get through a GOP Congress. I’m repeatedly surprised at the reactions of large numbers of people to SCOTUS decisions—many people (including smart and sensible friends) don’t seem to understand that it isn’t the job of SCOTUS to make sure that laws are “just”; it’s their job to make sure they’re constitutional.

In the early spring of 2016, I was in a hotel in Louisiana eating the fairly crummy free breakfast, and two men behind me were discussing Trump (they liked him). When they talked about how he was going to do something about all those poor people who lived off of the government, one of them said, “Well, what are you going to do? You can’t kill ‘em.” Then they got onto the subject of his plan for ISIS. One of them said, “They’re complaining that he won’t say what his plan is. But of course he can’t say what it is.” The other said, “Right, then ISIS would know it!” Trump’s promise was to develop a plan to crush and destroy ISIS within 30 days of taking office. His plan, as it turned out, was to tell the Pentagon to come up with a plan—as though that had never occurred to Obama?

What they needed was to believe he was the kind of person who could solve problems. He told them political issues are simple, and that he was a straightforward person who, like Perot, couldn’t be bought—he would genuinely represent them and their interests. And now he is saying that it turns out every single issue is complicated.

I often wonder about those two guys, and what they make of all this. If the research on people drawn to simple solutions is accurate, then they’re doing one of three things: 1) rewriting history, so that they never voted for him on the grounds that he could solve things quickly and easily; 2) making an exception for his finding things complicated, and using his new admission that he was entirely and completely wrong in everything he said about politics as additional evidence of his “authenticity” and sincerity (and, since all they care about is that he sincerely cares about them, they’re good); 3) regretting voting for him, but not rethinking why they voted for him or what their assumptions were about how to think about politics.

That’s what happened with the Iraq invasion, after all. People who had supported it denied they’d ever supported it, denied it was a mistake, or blamed Bush for lying to them. They didn’t decide that their process of making a decision about the war was a mistake—they didn’t stop watching the channels that had worked them into a frenzy about Saddam Hussein’s (non) participation in 9/11 or the (non)existence of weapons of mass destruction. They didn’t stop making political decisions on the basis of hating Dems, or trusting a political figure because he seemed like someone who cared about them.

So, no, we can’t reach that sort of person with a more populist political agenda because it isn’t about the political agenda.

I think it’s also a mistake to think that, since they’re engaged in demagoguery, and it’s winning elections for them, that’s what we should do. Demagoguery—a way of approaching public discourse that makes all political issues a question of us (angels) versus them (devils)—works for reactionary politics because reactionary politics is attractive to “people who fear change of any kind—especially if it threatens to undermine their way of life” (Parker and Barreto 6). Reactionary politics, according to Parker and Barreto and also Michael Mann, arises when a group is losing privileges (such as whites losing the privilege of being able to see their group as inherently superior to non-whites). Democrats played that card for years, and it worked, but now it would alienate as many people as it would win (or more). The research on “moral foundations” is pretty clear that, while loyalty to the ingroup is important for people who self-identify as conservative, fairness across groups is important for people who tend to self-identify as liberal. Any rhetoric that says “this group is entitled to more than any other group” will alienate potential liberal voters.

While there is a lot of lefty demagoguery, it’s internally alienating. That is, the presence of internal demagoguery is what makes some people very hesitant to support the Democratic Party. And now we’re back to the two narratives of 2016—both are demagoguery, and both alienate people. We need to imagine a way to move forward that doesn’t involve any one kind of lefty becoming the only legitimate lefty.

And demagoguery won’t get us there.

And that brings us to the second option: find a charismatic leader. That’s a great idea, and we should always hope that our candidates can come across as people who really care about “normal” people (with, I would hope, a broader version of “normal” than reactionary politicians present), but 1) that is only an option if there is a deep bench of Democratic governors and Senators, and 2) that still doesn’t get a reasonable balance in Congress, state legislatures, or among governors.

So, what went wrong in 2016? We had a shallow bench. There are lots of reasons for progressives’ poor showing at the state and Congressional level—low progressive voter turnout in 2010 that enabled gerrymandering, a tendency for progressive voters only to come out for the Presidency, and various other complicated things (including the success of factionalized hate media). What won’t work is something I hear a lot of progressives say: “We just need to run more progressives.” People have been saying that for a long time, and trying it for a long time, and sometimes running progressives works and sometimes it doesn’t, so there is no “just” about it.

The first thing lefty voters need to do is get out the vote at the state level. And I think we need to be very clear that we care about all kinds of voters, and lefty rhetoric about hillbillies and toothless white guys doesn’t help, so we also need to shut down classism as fast as we shut down any other kind of bigotry.

And we can’t win within the parameters of demagoguery, so we need to stop trying to play within them.

 

  2. On the Democratic Party as a strategic coalition

At the beginning, I talked about my initial perception of politics as a contest between what is obviously the right course of action and various things that other people want—because they’re selfish, wrong-headed, corrupt, misguided. Compromise made a good thing worse because it was a question of how much bad had to be accepted in order to get some good done, and it should only be done for Machiavellian purposes. I think too many lefties operate within that model.

When the refusal to compromise goes wrong, it ends up landing people in purity wars, and those are never good for people who are trying to argue in favor of diversity and fairness. Purity wars can work well for authoritarians, racists, and people with what social psychologists call a “social dominance orientation,” but they don’t work well for the left.

So, simply refusing to compromise isn’t going to ensure better policies; it can ensure worse ones if, as happened under Reagan (or in Weimar Germany in 1932), the refusal to compromise means that the left is entirely excluded. Saying that refusing to compromise can be harmful isn’t to say that all compromises are good. I’m saying compromise isn’t necessarily and always good, but neither is it necessarily and always wrong. I’m saying that we should stop assuming it’s always evil, and we should stop falsely narrating effective lefty leaders as people who refused to compromise—they compromised. In fact, every effective leader on the left was excoriated in their time for having compromised too much.

The refusal to compromise comes from thinking about politics as a negotiation between right and wrong. We might instead think of politics 1) as the consequence of deliberation, not bargaining, 2) as an acknowledgement of the limitations of our own perspective, and/or 3) as a sharing of power with those people who share our goals. I think lefties would do well to think of at least some compromises as coming out of one of those three factors.

Here’s what I now think: thinking about compromise as always and necessarily wrong is bad, but neither is every compromise right. There are times when you say there is some shit you will not eat, and I am known as a difficult woman because I have refused to go along with various motions, statements, policies, and actions. I have nailed more than a few theses to a door. But I think lefties’ failure to think about compromise as anything other than distasteful realpolitik comes from, oddly enough, a less than useful way of thinking about diversity.

I think too often lefties accept the normal political discourse of thinking in terms of identity (even though we, of all people, should understand that intersectionality means that there aren’t necessary connections between a person and their politics), so we imagine that we have achieved diversity when we have a party that looks diverse—as though that’s all the diversity we need. So, we aspire to a political party that is diverse in terms of identity and univocal in terms of policy agenda. And I don’t think that’s going to work.

Instead of striving for a group that is univocal in terms of policy but diverse in terms of bodies, we need to imagine a party that is diverse in terms of what the Quakers call “concern.”

Early in the history of the Society of Friends, meetings struggled with what we would now recognize as burnout—people at meetings would speak of the need for everyone to be concerned about this and that issue, and everyone couldn’t be concerned about everything. So, there arose the notion that the Light makes itself known in different people in different ways, and that each person has a concern which is not shared with everyone. I think that’s what we on the left should do—we should be people concerned with inclusion, fairness, and reparative justice, and who are open to different visions of how those goals might manifest in moments of concern (and policy).

There are, of course, problems with calling for more diversity of ideology on the Left, including that it means cooperating with people whose views we think wrong. And so we have to figure out how much wrong we’re willing to allow. LBJ allowed Great Society money to go to corrupt Democratic machines, believing it was a necessary first step; Margaret Sanger cooperated with eugenicists, since it got her money and support; FDR compromised with segregationists in regard to the US military; Lincoln was willing to talk like a colonizationist to get elected and compromised with racists about pay for black troops. I don’t think they should have made those compromises.

There are some compromises that shouldn’t be made, and so we shouldn’t—but we should argue about what those limits are. And there may be times that we decide to compromise on purely Machiavellian grounds; I’m not ruling that out. But I am saying that lefties shouldn’t treat every disagreement as something that must be resolved with pure agreement on the outcome—that’s just a fear of difference. Lefties disagree. We really, really, really disagree. Lefties need to imagine that disagreement is useful, productive, and doesn’t always need to be resolved. We need to imagine a politics in which each of us gets something important for our well-being and none of us gets everything. And we need to stop hoping and working for a party of purity.

 

 

 

[1] If it helps one side too much, of course, then both end up losing—if interest rates are too high, no one takes out loans, and then lenders are hurt; or high interest rates might tank the economy, which can make it hard for lenders to find money to loan.

[2] It’s generally done through division—you and I are alike because we both hate them. Salespeople will often do it on big ticket sales, and con artists always use it.

[3] One sign of how factionalized a situation is: how often, when I’m talking about this, I have to keep saying that not all Sanders supporters are Sandersistas and not all Clinton supporters are Clintonistas. As scholars of group identity say, the more that membership in a group is important to you, the more that any criticism of any member of that group will feel like a personal attack.

[4] One of the odder arguments I sometimes hear people make is that Clinton was at fault for not motivating them to vote—it’s the Presidency, not a hamburger; you’re responsible for making choices, not a passive consumer of marketing. (Talk about a neoliberal model of democracy.) That argument irritates me so much I won’t even list it as a reason.

[5] While Hibbing and Theiss-Morse maintain this is not authoritarianism, because people want a direct connection to the halls of power when the government is not being appropriately responsive, I would argue that neither is it democratic (little d) in that there is no value given to deliberation or difference. And, of course, it’s how authoritarian governments arise—people give over all their power of deliberation to someone who will do it for them. When they want it back, they can’t always have it.

Privilege and perspective-shifting

It’s interesting that there is such resistance to the notion of privilege. Every human knows that privilege is a thing. I grew up in a very wealthy area, and we all knew whose parents could pull strings, get their kid a part-time job from which s/he couldn’t be fired, intimidate the principal, get rules bent. Let’s call that kid That Guy (although he wasn’t always a guy). People who grew up around rich people (even if they were rich) should be the first to acknowledge the power of privilege, since they must have had direct experience of it, but often they’re the last. And it isn’t because they secretly put hoods on at night and attend white supremacist marches.

I think there are several reasons: the stories that privileged people tell themselves about That Guy, a tendency to think in binaries, a commitment to naïve realism (and the often-connected notion that good people have good judgment), imagining self-worth and achievement in a zero-sum relation, and the impulse to hear “check your privilege” as something other than “time to listen.”

As to the first, That Guy got away with everything—he was completely tanked, totaled his car, and yet didn’t get arrested—and that obviously doesn’t apply to us. He never earned anything, and never faced consequences. And he was an asshole. People hear the observation of privilege as an accusation that we are That Guy. People think they’re being called an asshole. Self-identity is comparative—rich people can feel “poor” if they hang out with richer people, attractive people can feel unattractive, and so on. As long as there is someone with more privilege than we have, we can feel that we aren’t That Guy, and therefore don’t have privilege (or none worth considering).

That impulse to consider our privilege trivial because of how it compares to someone else’s is connected to the tendency to think in binaries, especially a binary central to American political discourse: makers or takers (producers or parasites). You either work hard and make/produce wealth, or else you are a lazy person who takes from those who make wealth. William Jennings Bryan’s rhetoric described bankers (and people in the city) as parasitical on the real wealth production of the farmers; Father Coughlin positioned “international finance” (his dog whistle for “Jews”) against the real producers of wealth; Paul Ryan and current toxic populist rhetoric cast public servants and anyone on assistance (unless they are Republican) as takers, with the top 1% as the makers.

People who think that you are either a maker or a taker can point to the ways they make wealth and therefore are enraged at being accused of being a taker. That Guy is a taker, but we aren’t him, so we are makers. The mistake here is the maker/taker binary. Privilege has nothing to do with whether you’re a maker or a taker, and it isn’t an accusation of anything. It certainly isn’t an accusation that the person hasn’t worked at all, nor is it an accusation of being an asshole.

The maker/taker binary is attractive because of the dominance in American culture of the “just world model” (or “just world hypothesis”): the notion that good people get good things and bad people get bad things. That model means that we can reason backwards from outcomes to identities: a person who has good outcomes (makes a lot of money, is healthy, is successful) has caused those outcomes to happen by their good choices, good faith, and good identity; a person who has bad outcomes (is financially struggling, unhealthy, unsuccessful, or has been the object of crime) has caused those outcomes through their poor choices, bad attitude, or lack of faith.

To tell someone that outcomes might be influenced by conditions outside a person’s choice (such as accidents of birth) is tremendously threatening to someone who believes strongly in the just world model. It threatens their sense of justice and belief in a controllable universe. And research suggests that being faced with uncertainty means that people will resort more firmly to their sense that their group is inherently good, so a privileged person, faced with evidence that the world is unjust, is likely to want to cling more fiercely to the notion that they are part of a good group.

And, if that person has a tendency to think in binaries, then to say that outcomes might be influenced by conditions of privilege will be heard as saying that outcomes are purely the consequence of privilege—no choices involved. Thinking in binaries means that a person will tend to believe “monocausal” narratives (any outcome has one and only one cause). If the milk spilled, there was one action that caused it, and we can argue about whether it was yours or mine, but it can’t have been both, let alone the consequence of various factors.[1] So, privilege either determines everything or nothing; if a person who believes in monocausal narratives can find a single thing they achieved through their own agency, then their life wasn’t purely the consequence of privilege, and therefore it wasn’t a consequence of privilege at all. For someone like that, individual agency is the single cause or has no impact at all.

When people ask that we consider privilege, they aren’t substituting one monocausal narrative (everything I have achieved is purely the consequence of things I have done) for another (everything you have achieved is purely the consequence of your privilege). It’s an observation about relative advantages. A person raised speaking a language has an advantage over someone who had to learn the language as an adult. Because of our tendency to assume that fluency with language necessarily means fluency of thought, we tend to think of people who come across as native speakers as more intelligent. So, a person who learned a language as an adult has to work harder than the native speaker to get taken seriously and be heard. That isn’t to say that the native speaker didn’t work at all—it isn’t a binary. It’s about relative advantage or disadvantage.

John Scalzi has an article I like a lot for explaining privilege, and it’s interesting to see how people in the comments misunderstand his point. His argument is that being a straight white male is like rolling high at the character-creation stage of a role-playing game. You have an advantage over someone who rolled low, in every situation, all other things being equal.

What that means is that a person who has no disabilities and grows up in a wealthy family in a stable environment and is a straight white male necessarily has advantages over a gay black female in exactly the same situation. That’s a comparison that keeps everything other than gender, sexuality, and race the same. But a large number of the critical comments changed other variables, insisting that Scalzi was wrong because a rich (variable of wealth) gay black female would have advantages over a poor (changed variable of wealth) het white male.

That’s clearly not engaging Scalzi’s argument.

He says “all other things being equal,” and a large number of examples ignore that part of his argument. And, really, two of the three most common ways I see arguments about privilege go wrong are that they introduce other variables (especially class) or that they take the observation of privilege to be a claim that the privileged person has done nothing at all (the maker/taker binary).

Since so much cultural and political discourse has the maker/taker binary, it’s understandable that people would force the observation about relative advantage into the maker/taker binary, but let’s be clear: that’s a misunderstanding that’s on the hearer. Saying you have privilege isn’t saying you’re That Guy. It’s saying that, in this situation, you have relative advantage.

One of my favorite studies is one you can do in any classroom. Ask students to write the letter ‘E’ on a small piece of paper in such a way that, when they put it on their forehead, it will be correct for someone looking at them. In one version of this study, half the group was given a small amount of money, and they promptly did worse at imagining anyone else’s perspective. Thus, giving relatively small signals of privilege to some students can make perspective-shifting harder for them.

That task, perspective-shifting, is crucial to democracy. Communities in which people only look out for their group (or for themselves) inevitably end up in highly-factional squabbling, in which people will cheerfully hurt the overall community just in order to make sure the other side doesn’t win. Democracies thrive when everyone involved believes that our best world is the best world for people whom we dislike. Democracy depends on people looking beyond what is best for them or their group to whether we are establishing processes by which we’re all willing to live. And that requires not just looking at whether this policy benefits me, as the person I am, but whether I would believe it was a good policy were I a completely different kind of person.

Privilege makes perspective-shifting less necessary, and makes it easier for us to think of our perspective as the “normal” one. If we are naïve realists (that is, if we believe that reality is absolutely apparent to us and we just have to ask ourselves if something is true in order to determine it is) then we are likely to think there is never any other perspective, or, if there is, that there is never any benefit to looking at things from that perspective since our perspective is right.

And our perspective is likely to be that we worked hard for what we have, that we earned every inch of our way, so it is likely to seem ridiculous to have someone say that we have privilege.

It’s a natural human tendency to attribute our successes to our work (and worth) and our failures to externalities. Even That Guy thinks he worked hard, and so doesn’t recognize his own privilege. Privilege isn’t a binary—it’s on a continuum; it isn’t an accusation of being a worthless taker, but an observation about relative advantage. It shouldn’t be the end of a conversation, but the beginning of one.


[1] It’s striking to me that people who tend toward monocausal narratives also tend to think of cause purely in terms of blame, but they aren’t the same. Perhaps, just as I was getting a glass of milk my husband requested, I was startled by the mayor having chosen to sound the tornado siren. The causes of the spilled milk might include my having an active startle reflex, the tornado, the mayor, my husband requesting a glass of milk, my decision to get him one while I’m up, perhaps whatever it is (genetics? experience?) that caused my startle reflex, but none of those factors is one it makes any sense to blame.

Political Correctness

The term “politically correct” has a pretty straightforward origin, one that makes its current usage unintentionally ironic.

It was used by Stalinists, who had a fairly complicated time keeping up with what the Politburo had determined was the correct thing to think or say. From the time that Stalin took over until he died, the Communist Party changed positions on a lot of things, and that’s a major problem for their kind of Marxism, since that kind of Marxism says that the truth is obvious to anyone not corrupted by capitalist ideology.

At the same time that Stalinist/Marxist ideology said that the true course of action was obvious to everyone, the Politburo flipped on the true course of action. The Kulaks were great; they were awful; Nazis were allies; they were enemies; this person was great; he was a villain. Thus, being a supporter of the USSR meant that you had to believe, at the same time, that the truth was always absolutely obvious to everyone who was objective AND now you had to contradict yourself in regard to what you said yesterday.

Thus, if you were loyal to your in-group, you needed continual updates as to what the latest “politically correct” stance was. The notion of political correctness started with Stalinists, and it had two sub-points:

  • First, being “politically correct” meant that you turned on a dime in order to support whatever was now seen as what you should say and believe—you were repeating the talking points that showed loyalty to your ingroup.
  • Second, that the talking points contradicted what you said yesterday, contradicted each other, or didn’t make sense in terms of other things your group was doing—all of that was actually a virtue. As Orwell pointed out, the true sign of loyalty is committing to a claim that you know is false and yet that you will insist is true. Sometimes people will misquote Tertullian on this: “I believe because it is absurd.” Publicly supporting rational and reasonable stances doesn’t show group loyalty, but insisting on the truth of obviously false claims? That shows true loyalty to the group.

Later, “politically correct” came to be the term used to police uninclusive language. In other words, to be “politically correct” was to try to use language that didn’t actively offend someone—it meant trying to be respectful of others and politically thoughtful in your actions.

In some groups, however, being part of the group meant that you agreed with them in everything, including where you bought your clothes, what terms you used, what you read. Any deviation from what was obviously the politically correct action was a reason for someone to shame you. And so people who were tired of callout culture started using “politically correct” in an ironic way to express our discomfort with the assumption that being lefty meant pure agreement on all actions.

And then it got picked up on the right by people who used it as snark for anyone on the left, and for any kind of care for how we describe one another. To be “politically correct” in this world is to give any thought to others’ feelings.

And so “politically correct” went from an unironic term used to shame people in a hierarchical system that could determine what were the correct talking points (or how partisans should spin things—the Stalinist usage), to a game of purity one-upmanship (what some people call “callout culture”), to snarking about callout culture, to a term used to dismiss any kind of kindness or even politeness about what terms we use.

And there is an unintentional irony in that last usage. The pundits and their followers who throw the phrase at others the most (and in the most dismissive way) are the ones most likely to be politically correct in the original sense: they can turn on a dime in their political beliefs, all the while claiming to be absolutely truthful. And they love political figures and pundits who are honest and authentic and, as they say, “unbiased,” but who flop like a goldfish getting pawed by a cat. Hillary should be jailed, she shouldn’t; Obama wasn’t born in the US, his birthplace isn’t an issue; everyone should have healthcare, healthcare should be restricted to people with certain jobs; regime change is great, regime change is a disaster, regime change is great.

There is a strategy sometimes called “projection,” and sometimes called “strategic misnaming,” in which you simply accuse the opposition of doing what you’re doing. (“You’re the puppet!”) A lot of the accusation of “political correctness,” it seems to me is on the part of people who are themselves obsessed with being politically correct.

How to argue about whether something is racist

This class is about how to argue whether a text or action of some kind is racist; this isn’t about whether a person is racist. On the whole, that isn’t a productive argument (although you sometimes have to have it). The first step in a useful argument about whether something is racist is to try to figure out why we’re having the argument in the first place—what the determination of racist/not racist will do for us is what enables us to decide which definition of racism is the most relevant.

All of this may seem confusing to you, since you might be accustomed to thinking of “is this racist” as a straightforward question of right and wrong—if it’s racist, it’s morally wrong, and if it isn’t, then it’s morally right. And while I do think racism is morally wrong, I also think there is a continuum, with some things being more racist than others (as you’ll find later, it’s even possible for something to be racist and anti-racist at the same time). Even if something is morally wrong, you’re likely to respond to it in different ways. For instance, your 90-year-old not-quite-all-there grandpa might use a term we now consider racist but which was considered the polite term when he was young. You’d react to that differently than if someone your own age (who knows perfectly well it isn’t an okay term) used it. You might not do anything at all about your grandpa, but drop the person your age like a hot rock.

If a person making hiring decisions for your company said something racist, you’d react differently than you would if some random person in line at the grocery store said the same thing. If you were HR, you might fire them—whether or not they intended to be racist, on the grounds that their mere presence on the hiring team jeopardized your company.

In this class, we’ll spend a lot of time talking about different definitions of “racist,” which ones are more useful than others, and under what circumstances.

On the whole, definitions of racism tend to emphasize one of several points on the rhetorical triangle: text, intent, consequence, relationship to context, or impact on audience.

For instance, for some people, as long as a text does not have racist epithets it isn’t racist (although those same people don’t usually immediately decide that a text with racist epithets is racist—more on that below). Others do decide that the use of a racist epithet (by any character or in any context) makes a text racist. Both those decisions rely purely on text.

This criterion—presence or absence of racist epithets—seems to me the least useful, in that there are the fewest instances in which it seems to me especially relevant. A person can, after all, argue for the expulsion or even extermination of another race without using racist epithets (in fact, that’s most commonly how it’s done). And some anti-racist texts can use racist epithets to persuade the audience that racism is harmful (the argument often made about Django Unchained).

Many people believe that the main problem with racism is that it is hostility against members of another “race”—that is, it is an issue of individuals’ feelings. If you define “racism” as “hostility toward members of another race,” then you will tend to look at texts for evidence of hostility—affective markers, boosters, and other linguistic signs of anger. That method also doesn’t work particularly well, as some of the most racist policies have been invoked in the name of kindness, with apparently calm tones, or by appealing to “facts” and “reality.” (White supremacists often call themselves “racial realists.”)

Emphasizing the feelings that individuals have is one example of how people imagine racism to be a problem of individual agency (rather than systems). In this model, racism exists because too many people choose to be racist, or allow themselves to slip into racist and ingrained habits. If enough individuals chose to stop responding in racist ways, then racism goes away. (That is a problematic assumption.)

Intent seems to me a slightly more usable criterion, but only for limited circumstances. It is important in social situations in which we’re trying to determine if a person should be forgiven. If a person says something racist, but didn’t mean to (didn’t realize it was a racist term, was thoughtlessly repeating a meme they didn’t understand to be racist), then you’re more likely to be willing to forgive them. If they keep saying that thing, although it’s been explained to them that they’re saying something racist, then we might conclude that they really do intend to be racist.

Intent matters in some legal situations (e.g., hate crimes) but not others (e.g., the question of disparate impact). “Disparate impact” is a kind of racism that doesn’t require any intent—if you have a policy with no intent of hurting a particular race (or religion), but that’s exactly what it will do, then you’ve got “disparate impact,” which has been ruled discrimination. If you ban hairstyles that you consider too casual, and they’re precisely and exclusively the ones worn by people of a particular race, then—whether or not it was your conscious intent—your policy has racist consequences. (A lot of school dress codes get challenged on exactly these grounds.)

Intent, like the question of feelings, assumes that racism is the consequence of individuals choosing or allowing themselves to be racist—that there is individual agency in racism. I think the reasoning works something like this: evil in the world is the consequence of individuals choosing to do evil things; racism is evil; therefore, racism must be the consequence of individuals choosing to be racist. The assumption is that if we had a world in which no one intended to be racist, there would be no racism, but that isn’t the case.

Thus, intent matters for law and social castigation, but it’s of limited importance otherwise. For instance, Google image search “beautiful hair.” You’ll see a very racist outcome—almost exclusively white women (and the nonwhite women usually have very, very high maintenance hair). But there was no one intending to create a racist cultural view of what is beautiful hair. There are people intending to sell products, and doing so within a racist culture.

One of the more straightforward ways to measure whether a text or action is racist is to look at whether it reinforces existing racist practices and structures. The most productive arguments, it seems to me, work within this framework. I think it’s helpful partially because it allows a more nuanced discussion—it’s possible to talk about how much harm something caused, what kind, and to whom (rather than a binary of harmful/not harmful). It’s also possible to talk more intelligently about texts that are both racist and anti-racist (South Pacific, To Kill a Mockingbird) if we think about harm; we can talk about the kind of harm the text or action tried to prevent or ameliorate and what kind of harm it caused.

Thinking about consequence also enables us to talk about the same act or text having different consequences in different eras or with different audiences. Some critics of American Sniper argued that it was not seen as racist in its showings in Iraq because viewers saw it as demonizing a particular political party, but it had racist consequences in the US because viewers saw it as confirming demonized (and racist) views of Iraqis. It could be argued that To Kill a Mockingbird was progressive for its era, but now it’s actually regressive.

That last comment brings up the argument about relationship to context—what do we do about texts that are racist, but less racist than was the norm for their era or culture? If we think of “racist” as an absolute category—something is either racist or it isn’t—then we’re hopelessly entangled by these cases. If we can think of it as on a continuum, then we can talk about them more sensibly.

We have to be careful, however, not to assume that things have been getting steadily less racist as time goes on. Huckleberry Finn (1885) is much more racist than Uncle Tom’s Cabin (1852), and there was a lot of anti-racist work being done in its era. Sometimes we excuse texts by overstating the dominance of racism in an era—even in eras in which it was common, there were people who spoke against it. While I don’t think that racism is subject to pure agency (people could simply choose to be or not to be racist), there are some choices. Being in a particular culture or time doesn’t force someone to be racist, after all. So, while I think it’s useful to put texts in contexts, it should be in service of a nuanced understanding of how the racism works in them, not as a “get out of racism free” card.

The criterion of impact on audience might be subsumed under consequence, but students have found it useful to separate them. The impact on the audience might be cultural (the text problematizes or confirms common racist attitudes), but it might also be more individual (the text makes individuals uncomfortable in a good or bad way). For instance, while the word “niggardly” has nothing to do with the similar-sounding racist term, it’s reasonable for some people to be made really uncomfortable by it, and so it’s reasonable to try to avoid using it, just on the grounds of how it makes people feel. On the other hand, while the phrase “welching on a bet” originally came from a racist stereotype about the Welsh, I don’t think anyone knows that anymore (or has that stereotype), so, at least in the US, it doesn’t seem to me helpful to call that a racist phrase.

My point in giving all these criteria is not to set out some kind of easy decision-tree on “is this racist.” Instead, I’m suggesting that students see these criteria as stases for arguments about racism. It’s hard to have a good argument on anything if interlocutors are on different stases, and a lot of people don’t realize that any argument can have multiple stases—and you can choose among them. Sometimes, for an argument about racism to become more useful, people have to agree first on the stasis, and that might mean that people might have to understand that their notion about how to determine “is this racist” isn’t a useful criterion. In this class, you’ll be looking at that question a lot–what is the stasis for this argument, and is it the most relevant and useful stasis?

A lot of definitions of racism describe it as a problem of individuals who have hostility based on irrational beliefs. Google gives us “prejudice, discrimination, or antagonism directed against someone of a different race based on the belief that one’s own race is superior.” And prejudice is “preconceived opinion that is not based on reason or actual experience.”

So, racism is an individual feeling antagonism against someone of a different race because she believes her race is superior, and that belief is not based on reason or actual experience.

Hubert Sumlin has accused Chester Burnette of being racist. If I agree with those dictionary definitions of racism, then I need to decide if Chester feels antagonism toward another race, if he thinks his race is superior, and if his beliefs have no basis in reason or experience.

One way to test whether that is a good definition would be to take a case we think a definition should catch. I think it’s reasonable to decide that the Nazis, since a major part of their political policy was the extermination of races, were racist. So, if this definition of racism doesn’t apply to them, then it isn’t a good definition.

Well, interestingly enough, if you apply those standards to major current defenders of the Nazis, you’ll find that they maintain they feel no antagonism to the other group but simply want separation. They insist that their support of Nazism is not unreasonable—they can cite experiences of the other races being bad, and one book advocating neo-Nazism has around 1k footnotes, so it gives reasons.

And, of course, the Nazis came to power on a policy of separation, not extermination: the expulsion of the “illegal immigrants” of that group, although the whole group was treated as suspect. So, that isn’t a good definition of racist.

And, let’s go back to Chester and Hubert. How would I know what Chester feels and believes?

This is a surprisingly interesting question.

There is a period in cognitive development when children develop a theory of mind—that is, they understand that other people have ideas, feelings, and commitments that might be different from the ones they have (to put it crudely). Other people have other minds, in other words. (Not everyone develops that ability, and they go through life believing that everyone believes exactly what they do—people who disagree are just pretending to have different ideas.)

In some cases, developing a theory of other minds leads to the skill of perspective shifting—the ability to imagine what things would look like from those other minds, and at least the effort to see things from that perspective. In an unhappy number of cases, however, it leads to the tendency to think that your mind is the one cleaved to reality, and those other minds are just unhinged or wrong. We also have a tendency to believe that our perception of those other minds is unmediated and perfect. This is such a profound problem that social psychologists call it the fundamental attribution error—it’s fundamental to a lot of other errors. We attribute views and feelings to other people, and then we believe we have perceived those views and feelings.

And that attribution is biased, especially by whether someone is in our ingroup or outgroup, what our feelings are toward that person (mad, afraid, attracted, sad). We tend to attribute bad motives to outgroup members and good motives to ingroup members, and can be astonishingly self-serving in our perceptions. For instance, if we’re feeling aggressive toward someone else, we’re tempted to attribute aggression to them, thereby making ourselves feel that our aggression is a justified response. Or, if we’re attracted to someone, we might interpret their inner state as attracted to us (that’s why stalkers never see themselves as stalking—they sincerely believe they know that the victim is or easily could be attracted to them).

So, imagine that we’ll use the standard dictionary definition of racist. It says “one’s own race,” so this is about what an individual does. And now we have to figure out if there is antagonism or prejudice, and if the person believes his/her race is superior.

If you believe that “Lithuanian” is a race, and you are racist about Lithuanians, believing them to be essentially stupid and criminal, you wouldn’t be able to use the common dictionary definition to recognize your racism (or the racism of anyone else in your ingroup who also believed Lithuanians are stupid and criminal). You would be able to ask yourself, “Is my belief about Lithuanians grounded in reason or actual experiences?” and answer, “Yes!” That’s because you would be able to think of a time that someone you thought was Lithuanian did something stupid. You might be able to think of an expert who also said they’re stupid. You’d be surprisingly likely to assume that every stupid person you met was Lithuanian, or to interpret smart things a Lithuanian did as stupid. If you were forced to acknowledge that there was a famous mathematical genius who was Lithuanian, you might decide she was probably adopted, illegitimate, or of mixed heritage. You might decide she wasn’t really a genius, but had just happened to get something right, or had stolen all her ideas from a non-Lithuanian. In fact, an infinite number of counter-examples would not change your mind about Lithuanians—more important, it wouldn’t even get you to see that your stance was unreasonable, because you could explain them all away.

You also might tell yourself that you don’t feel antagonism—you are just realistic about Lithuanians. And you don’t discriminate, you just don’t give them more than they deserve.

If a definition of racism is going to be helpful, it has to be one that enables racists to realize that they’re being racist. And that definition of “prejudice” isn’t helpful.

It isn’t any better if we’re trying to apply that definition to someone else. The inherent problem with believing that racism is a question of unreasonable feelings and bad intent is that we have to figure out the inner state of Chester, and we’re extremely likely to engage in the fundamental attribution error—if we think Chester is good, we’ll attribute good motives; if we don’t like Chester, we’ll attribute bad motives. There is a connected problem with this—we tend to think of racism as an issue of individual morality. Racism is immoral, and so people who are racist are immoral.

Therefore, if a person is not immoral, they can’t be racist. I am not immoral, therefore I can’t be racist. Ta effing dah.

It’s this false assumption that gets us into those weird cases of someone having said or done something racist and various people saying, “She can’t be racist, because she did these good things.”

There’s a second, and more complicated, problem with this way of thinking about racism—that it has to do with the intent, feelings, beliefs, and/or prejudices of individuals. Take, for instance, this clip: https://everysinglewordspoken.tumblr.com/post/141726767183/total-run-time-of-all-nancy-meyers-directed. It’s every single word said by a person of color in every single Nancy Meyers-directed movie (it lasts about five minutes). Does Meyers exclude people of color from her movies because of antagonism? Probably not. And probably not because of any conscious notion that her race is better. She might not intend to discriminate—maybe she just doesn’t think to include non-whites in her movies; or maybe she is trying to get the most “bankable” stars (and audiences tend to be racist). It doesn’t really matter whether she intends to discriminate against non-whites; she does. Having twelve hours of movie with five minutes of non-white speech is racist. But it isn’t just her racism—it’s the racism of audiences, producers, and the systems of movie-making.

So, how should we think about racism?

First, a few terms that will make all this more straightforward:

  • A “taxonomy” is a method of categorizing. If you have your closet organized by shirts/pants/skirts/jackets, that’s your taxonomy. You might instead organize your closet by what’s useful to have for different outside temperatures (warm/cold/moderate), circumstances under which you might wear them (formal/business casual/casual/athletic), or perhaps by color, how often you wear them, or some other taxonomy.
  • socially constructed. Some people think that a belief is either subjective (random, arbitrary, entirely in your head) OR objective (a perception of a brute fact that is entirely external to your brain, and which exists regardless of whether anyone believes it—also called “ontologically grounded”). That isn’t actually a very useful division. Where, for instance, would you put money? It isn’t subjective—I can’t draw Millard Fillmore’s face on green paper and get Starbucks to give me coffee—but it isn’t a brute fact. I can get coffee in the US with a five dollar bill, but not in England. Money is a socially constructed fact. It is a fact—it is an inescapable condition of our culture—but only because we’ve implicitly agreed to give it that kind of power.
  • salience. That word simply means the condition of sticking out. You might have troops in a line, but with a bubble that sticks out—that’s your salient. In any situation, there are all sorts of things you might notice, but some of them stick out. Those things have more salience. Salience is context-dependent, idiosyncratic, and/or socially constructed. If religion is important to you, then you might notice how many people of various religions there are in a group (or class, or room)—you might notice there are a lot of people who mention being Wisconsin Synod Lutheran. You might also notice that the class is quieter than you think normal. We tend to confuse salience with importance, and attribute causality to salient conditions. So, you would be irrationally likely to assume that the two salient things are causally connected—it’s quiet because there are a lot of Wisconsin Synod Lutherans.
  • confirmation bias. We tend to notice things that confirm our beliefs more, and more easily than things that don’t. Even when we try to “test” our beliefs, we generally do so in ways that enable confirmation bias. If, for instance, you believe that Wisconsin Synod Lutherans tend to be quiet, then you’ll decide that the class is quiet because there are so many of them. If you believe that Wisconsin Synod Lutherans tend to intimidate others, you’ll decide the class being quiet is an example of their having intimidated everyone into silence. A person for whom religion isn’t important, but who has a hypothesis that morning classes are more quiet, will, if it’s a morning class, conclude that this class proves that hypothesis.
  • The ingroup is the group you’re in that is important to your sense of identity. You have lots of them, and they become more or less salient depending on context. Being a Texan (or whatever state you’re from) is important if you’re around people from various states, and would be especially important if someone said something nasty about your state, but isn’t something you’d mention introducing yourself to other people in Texas.
  • An ingroup is often defined (or made more salient) by the sense of there being an outgroup which it is not. In terms of race, the concept of “white” only makes any sense if there is a “not white” to define it. (The various groups that are called “white” don’t actually have very much in common, and used to be designated as different races. But they can claim white because they aren’t non-white.)
  • essentializing (or naturalizing). Ingroups and outgroups are socially constructed—they’re both as real and as arbitrary as money. They have tremendous power, but only because a culture decides they do, and individuals can’t suddenly decide, “I don’t see money” and have any impact on how the world works. (And they’ll still use money to buy groceries.) That so many things are socially constructed (such as the boundaries of states and nations, or the boundaries of groups) makes many people extremely nervous—especially people who don’t like uncertainty, ambiguity, or nuance. There are people who get anxious and angry if they are made aware that their taxonomies are socially constructed and might change. Their inability to manage uncertainty and ambiguity cognitively means that they will INSIST that those taxonomies are Real. Race is not, they will insist, a social construct, but a biological one—they will insist those categories appeal to essences of people, to identities grounded in nature. They will take a socially constructed category (such as nation) and insist that the members of that nation are essentially the same in that they all have certain temperaments, political tendencies, or identities.
  • social/cultural goods. This is a vague notion, but useful. Any culture has things that are culturally marked as goods—money, prestige, status symbols (living in that neighborhood, getting treated this way by police, being able to go to those restaurants), political power.
  • zero-sum relationships. There are some relationships in which the more one category gets, the less the other category gets. So, if I have a discretionary budget of $100 per week, the more I spend on coffee, the less I can spend on going to listen to music. There is a zero-sum relationship between those two categories. If you spend less on coffee, you have more to spend on music. Many people grow up in a highly competitive family situation in which there is a sort of zero-sum relationship in parental love or attention—the more that a sibling gets, the less there is for you. So, a person in that world might think, “if my parents spend less love on my sibling, there is more for me.”
  • Moral Foundations research. The research (http://moralfoundations.org/) on this is pretty clear. People who self-identify as liberals tend to value fairness across groups, as opposed to proportionality (the notion that fairness means you get what you deserve), which is what self-identified conservatives value. Self-identified conservatives value loyalty, authority, and sanctity more than liberals do.

Racism essentializes socially constructed taxonomies of ingroup/outgroup by relying on perceptions of salient characteristics of groups that we decide are true because of confirmation bias, and assumes those groups are in a zero-sum relationship as far as social/cultural goods; it therefore rejects any notion of fairness across groups.

Racism is a pernicious and toxic example of a relatively common phenomenon—that people have a tendency to categorize themselves and others in terms of groups, and to engage in “in-group favoritism.” It isn’t the consequence, let alone a necessary consequence, of that way of thinking—but a very nasty version of it. After all, while the tendency to think in terms of ingroups and outgroups is universal, making those groups racial categories isn’t—race is a relatively recent concept, not fully formed until the 18th or even 19th centuries, and different from other ways of thinking about groups in important ways.

In fact, “races” are socially constructed—what has counted as “white” has changed considerably, even in the last 100 years. In 1916, a very influential book was published that argued that there were three white races—not all whites, according to Madison Grant, were the same race (and race-mixing within whites was damaging to civilization). In the 19th century, racialist science sometimes claimed there were four races, sometimes three—that inability to agree is interesting, since racists claim that “race” is eternal and an obviously physiological category. If it’s obvious, why can’t they agree? In fact, even scientists promoting the notion of race as a scientific category couldn’t come up with a consistent definition of “race,” let alone one that fit the evidence they had.

Even in the early twentieth century, there were scientists who pointed out that the way eugenicists talked about race didn’t make any sense. They showed that people advocating racial purity used “race” in two very different ways—there were the socially constructed categories, which were based on linguistic, political, and national boundaries (such as the notion of an “Irish” race); there were also possibilities of biological categories (such as Celtic), but 1) given the long history of human interaction, they were not discrete categories; and 2) those biological categories had nothing to do with the socially constructed categories. Biologically, there was no “Irish” or “Italian” race, just nationalities. So, the notion that immigration quotas were backed by genetic data made no sense.

Racism doesn’t necessitate that people consciously think that their race is superior to others; it only requires ingroup favoritism (the unconscious tendency to perceive a situation as “fair” if our group gets slightly more).

It tends to get more antagonistic if we think our group and another (or several others, or all others) are in a “zero-sum” relationship. If we’re having a pie potluck, and each person who comes will bring more pie than he can eat, then more people who come means more pie for everyone. But, if we have a single pie and no one is bringing more, then the more people who come, the less pie there is for YOU. In that second circumstance, the more someone else gets, the less you get—someone’s gain is necessarily someone else’s loss. (There’s a complicated, but good, explanation of how it works in economics here: https://en.wikipedia.org/wiki/Zero-sum_game).

There are some people who perceive every situation as zero-sum, even when it isn’t. Hyper-competitive people will often feel as though praise for someone else’s accomplishment injures them (as though it takes something away from their accomplishments), and so they can sincerely believe that the best response to someone else achieving something is to try to criticize or undermine that person. One of the characteristics of a toxic relationship is that one or both people believe that they are threatened by the other person being successful, and most of the bad behavior of bridezillas can be attributed to a sense that any attention paid to anyone else is taking something away from the bride to which she is entitled.

While not all racism requires zero-sum thinking, it is interesting that a lot of racism is justified through sloppy social Darwinism (the notion that human interactions are inherently a contest resulting in the survival of the fittest—not actually a Darwinian concept at all). White people (that is, people who think of themselves as “white”) who think about all social interactions as zero-sum competitions have trouble seeing the problem with Nancy Meyers’ movies having so few non-whites, and tend to condemn as “political correctness” any movie that is more diverse. It isn’t necessarily that they think to themselves that whites are superior, but they are likely to believe that it is “normal” to be more interested in white people (white people are “naturally” entitled to more attention, white people problems are more “interesting” or “universal” than the problems of non-whites, which are “particular” to those groups).

People drawn to racist explanations and assumptions tend not to be very good at perspective-shifting (they also get really uncomfortable with ambiguity and complexity). They not only see things only from their perspective, but the more racist they are, the less able they are to acknowledge that there are any other perspectives that might be legitimate. They might believe that everyone who disagrees is just pretending (but secretly agrees), or they might believe everyone who disagrees is an idiot, or they might believe that their way of looking at things is unbiased and universal and other ways are biased and particular. So, someone who looked at things that way would have trouble being interested in literature or films about people not like them—such art requires perspective-shifting, and they’re bad at it. They would be likely to think of their reaction as the one everyone like them has, so, if a white director made movies with a diverse cast, they would see that as deliberately pandering to people who don’t really have a valid perspective (wanting more diverse art).

But, again, they wouldn’t experience themselves as feeling any antagonism (except to “political correctness”), nor would they be aware of some sense of their race being superior—they would just want to focus on “people like them.”

People who tend to be racist tend to be drawn to thinking in black/white terms in general (they have trouble thinking in terms of continua, matrices, or probabilities). In fact, they often get actively angry if you tell them a situation is complicated, as they think you’re being indecisive. They tend to believe that groups are meaningful (you can get most of what you need to know about someone by deductions that you can make from their group membership); they tend to think in terms of identities and motives (they essentialize people); they tend to dislike new music, new food, new genres (of literature, movies, TV), and new places (they get agitated by difference). They tend to be naïve realists. They have trouble admitting error, let alone learning from mistakes.

One more point about outgroups. It seems to me common for people to have two kinds of outgroups—a group that is threatening because it is cunning and scheming, and another that is threatening because it is animalistic. The first group is often represented as controlling the animalistic group, and the second is often thought of as in a binary of either submissive (like a domesticated pet) or in rebellion. When racists talk about that first group as “cunning,” people can think it isn’t racism, since it seems to be a compliment; when racists talk about that second group as childlike and happy, people can also miss the racism (since they seem to be saying “nice” things about the group). Sometimes racists will praise submissive members of the outgroup, as though that shows they aren’t racist—but they’re only praising members who “know their place.” It’s still racism.

As you’ll see in the class, there is a lot of disagreement as to whether racism is a new phenomenon—some people categorize it as a kind of hostility to outgroups that is inherent to the human condition. I’m dubious, since other kinds of hostility to outgroups allowed conversion or assimilation. Because racism “naturalizes” the differences (that is, puts them into nature), there is no possibility of being treated as equal to the ingroup—the Other will always be Other.