The unnecessary incompetence of Tom Cotton’s (mis)understanding of slavery as a “necessary evil”


Tom Cotton has proposed a bill that would prohibit Federal funds from being used to support the teaching of the NY Times’ The 1619 Project. He said, “As the Founding Fathers said, [slavery] was the necessary evil upon which the union was built, but the union was built in a way, as Lincoln said, to put slavery on the course to its ultimate extinction.”

This will shock absolutely no one, but I don’t think Tom Cotton has any idea what he’s talking about. The irony is that he is trying to take the scholarly high ground, as though his objection to The 1619 Project is that it is factually and historically flawed, when, in fact, his argument is factually and historically flawed. I’m not sure I’d call his history revisionist so much as puzzling and uninformed.

“The Founding Fathers” is a vague term, but Cotton seems to be using it to include the authors of the Constitution. (I’m not sure whom else he’s including, so I’ll put “the Founders” and “Founding Fathers” in quotation marks.) The Constitution was a document of compromise agreed to by people with widely divergent views on various topics, especially slavery. Therefore, it doesn’t really make sense to attribute one view about slavery to “the Founders”—there wasn’t one.

Even the same Founder could have different views. Jefferson, at the time of the founding, was what is generally called a “restrictionist”—slavery should be restricted to the existing slave states, as such a restriction would cause it to die out (as might well have happened). By 1819, however, he was describing slavery as a “wolf by the ear” situation. In 1820, he wrote in a letter, “We have the wolf by the ear, and we can neither hold him nor safely let him go. Justice is in one scale and self-preservation in the other.” Justice demands abolition, but Jefferson (like many people) worried that freed slaves would wreak vengeance on their oppressors. By 1820, Jefferson was in favor of expansion of slavery (and he was still a Founder).

It’s fair to characterize Jefferson’s view (in 1820) as an instance of the “necessary evil” topos, a defense of slavery common in the early 19th century. But that wasn’t his view at the time of the founding. And it certainly isn’t accurate to say that the Founders “put the evil institution on a path to extinction”—that certainly wasn’t what most of them were trying to do. Many of them (most?) hoped to preserve it eternally. In fact, as late as 1860, there remained the view that the Constitution guaranteed slavery, and that abolishing slavery would require a new Constitution. In 1854, the abolitionist William Lloyd Garrison burned a copy of the Constitution for exactly that reason, calling it “a covenant with death, and an agreement with hell.”

Cotton ignores several other important points. First, the “necessary evil” argument wasn’t very popular at the time of the Revolution or the writing of the Constitution. In that era, it was more common for defenders of slavery to use the argument that slavery brought Christianity and civilization to slaves, and was therefore a benefit. By the early part of the 19th century, that argument became less and less plausible, as manumission was increasingly prohibited (for more than you probably wanted to know about the 19th century history of arguments for slavery, see here). It’s at that point that one gets the “necessary evil” argument (which was never the only way that slavers talked about slavery). And it was never an argument for abolishing slavery, let alone for slavery being “a necessary evil upon which the union was built.” I have no idea who he thinks said that. I can’t think of anything a Founder said that could be interpreted as saying slavery was some kind of necessary phase through which the US had to pass.

To be blunt, I think Cotton has no clue what the “necessary evil” argument actually was. Robert Walsh’s 1819 An Appeal from the Judgments of Great Britain has a passage perfectly exemplifying the “necessary evil” argument:
We do not deny, in America, that great abuses and evils accompany our negro slavery. The plurality of the leading men of the southern states, are so well aware of its pestilent genius, that they would be glad to see it abolished, if this were feasible with benefit to the slaves, and without inflicting on the country, injury of such magnitude as no community has ever voluntarily incurred. While a really practicable plan of abolition remains undiscovered, or undetermined; and while the general conduct of the Americans is such only as necessarily results from their situation, they are not to be arraigned for this institution. (421)
This is essentially Jefferson’s argument—it’s evil, but there’s nothing we can do about it. Zephaniah Kingsley called slavery an “iniquity [that] has its origin in a great, inherent, universal and immutable law of nature” (14, A Treatise on the Patriarchal, or Co-operative System of Society As It Exists in Some Governments, and Colonies in America, and in the United States, Under the Name of Slavery, with Its Necessity and Advantages, 1829). Alexander Sims, in A View of Slavery (1834) said, “No one will deny that Slavery is a moral evil” (“Preface”). James Trecothick Austin, the Massachusetts Attorney General, wrote a response to Channing’s anti-slavery book called, appropriately enough, Remarks on Dr. Channing’s “Slavery.” Austin argued that slavery could never be abolished, and then said, “I utter the declaration with grief; but the pain of the writer does not diminish the truth of the fact” (25). The necessary evil line of defense is a self-serving fatalism about slavery–while pronouncing it evil (and thereby showing that one has the right feelings), this position precludes any action to end slavery: “Public sentiment in the slave-holding States cannot be altered” (Austin 24). What Cotton doesn’t understand is that the necessary evil argument was an argument for a fatalistic submission to the eternal presence of slavery.

Cotton doesn’t seem to be the sharpest pencil in the drawer, insofar as he doesn’t appear to understand his own argument. The necessary evil argument says slavery is evil. Cotton’s argument is that The 1619 Project is inaccurate because it presents the US as “an irredeemably corrupt, rotten and racist country,” which isn’t my read of the project at all. Oddly enough, were Cotton right that the “Founding Fathers” said slavery “was the necessary evil upon which the union was built” (as he claims), then they would have been endorsing the major point of The 1619 Project: that slavery is woven into US history from the beginning. The Founders didn’t say that, but Cotton seems to think it’s true, so I’m not even sure what his gripe with the project is. He seems to me to be endorsing its argument while thinking he’s disagreeing? (Has he actually looked at it?)

Cotton also seems not to understand Lincoln’s argument(s) on slavery. In 1858, during the Lincoln-Douglas debates, Lincoln argued that “it was the policy of the founders to prohibit the spread of slavery into the new territories of the United States,” but Lincoln wasn’t claiming that “the founders” were opposed to the spread of slavery into all territories. As mentioned above, the “founders” had a lot of different views; Lincoln means specifically the Northwest Ordinance of 1787, which, among other things, prohibited the expansion of slavery above a certain point. (Sometimes people cite Lincoln’s 1854 speech as though it’s about the Founders, but it isn’t—it’s about the policies regarding the expansion of slavery from 1776 to 1849. Lincoln never uses the term “Founders” in that speech because he isn’t talking about them.) In both those speeches, Lincoln was talking about the expansion of slavery into states above the line established by the Northwest Ordinance, in that territory. He wasn’t a fool. He knew that Kentucky (1792), Tennessee (1796), Louisiana (1812), Mississippi (1817) and various other states had been admitted as slave states by the generation that Cotton seems to want to call “Founders.”

So, here are some of the things that Cotton gets wrong. There wasn’t a view that “the Founders” had about slavery; the Constitution didn’t put slavery on a path to extinction (and the “Founding Fathers” certainly didn’t see it that way); the “necessary evil” argument was an argument for fatalistic submission to the possibly eternal presence of slavery, not an argument for its abolition, let alone an argument that it benefited the country; even the necessary evil line of defense admitted that slavery was evil; I don’t know of any Founders who argued that slavery was a necessary phase for the country to go through; that certainly wasn’t Lincoln’s argument; The 1619 Project doesn’t present the US as irredeemable.

But he’s right that slavery was evil.








The salesman’s stance, being nice to opponents, and teaching rhetoric


I mentioned elsewhere that people have a lot of different ideas about what we’re trying to do when we’re disagreeing with someone—trying to learn from them, trying to come to a mutually satisfying agreement, find out the truth through disagreement, have a fun time arguing, and various other options. There are circumstances in which all of these (and many others) are great choices—I think it’s an impoverishment of our understanding of discourse to say that only one of those approaches is the right one under all circumstances.

We also inhibit our ability to use rhetoric to deliberate when we assume that only one approach is right.

I’ll explain this point with two extremes.

At one extreme is the model of discourse that has been called “the salesman’s stance,” the “compliance-gaining” model, rhetorical Machiavellianism, and various other terms. This model says that you are right, and your only goal in discourse is to get others to adopt your position, and any means is justified. So, if I’m trying to convert you to a position I believe is right, then all methods of tricking or even forcing you to agree with me are morally good or morally neutral.

From within this model, we assess the effectiveness of a rhetoric purely on the basis of whether it gains compliance. For instance, in an article about lying, Matthew Hutson ends with advice from a researcher whose work suggests that lying to yourself makes you a more persuasive liar.

“Von Hippel offers two pieces of wisdom regarding self-deception: ‘My Machiavellian advice is this is a tool that works,’ he says. ‘If you need to convince somebody of something, if your career or social success depends on persuasion, then the first person who needs to be [convinced] is yourself.’”

The problem with this model is clear in that example: if you’re wrong, then you aren’t going to hear about it. Alison Green, on her blog askamanager.org, talks about the assumption that a lot of people make about resumes, cover letters, and interviews—that you are selling yourself. People often approach a job search in exactly the way that Von Hippel (and, by implication, Hutson) recommends: going into the process willing to say or do whatever is necessary for you to get the job, being confident that you’ll get the job, lying about whether you have the required skills or experience (and persuading yourself you do).

Green says,

“The stress of job searching – and the financial anxieties that often accompany it – can lead a lot of people to get so focused on impressing their interviewers that they forget to use the time to find out if the job is right for them. If you get so focused on wanting a job offer at the end of the process, you’ll neglect to focus on determining if this is even a job you want and would be good at, which is how people end up in jobs that they’re miserable in or even get fired from.
And counterintuitively, you’ll actually be less impressive if it’s clear that you’re trying to sell yourself for the job. Most interviewers will find you a much more appealing candidate if you show that you’re gathering your own information about the job and thinking rigorously about whether it’s the right match or not.”

Von Hippel’s advice comes from a position of assuming that the liar is trying to get something from the other (compliance), and so only needs to listen enough to achieve that goal. The goal (get the person to give you a job, buy your product, go on a date) is determined prior to the conversation. Green’s advice comes from the position of assuming that a job interview is mutually informative, a situation in which all parties are trying to determine the best course of action.

If we’re trying to make a decision, then I need to hear what other people have to say, I need to be aware of the problems with my own argument, I need to be honest at least with myself and ideally with others. (If I’m trying to deliberate with people who aren’t arguing in good faith, and the stakes are high, then I can imagine using some somewhat Machiavellian approaches, but I need to be honest with myself in case they’re right in important ways.)

At the other extreme, there are people who argue that every conversation should come from a place of kindness, compassion, and gentleness. We shouldn’t directly contradict the other person, but try to empathize, even if we disagree completely. We should use no harsh words (including “but”). We might, kindly and gently, present our experience as a counterpoint. Learning how to have that kind of conversation is life-changing, and it is a great way to work through conflicts under some circumstances.

It (like many other models of disagreement) works on the conviviality model of democratic engagement: if we like each other, everything will be okay. As long as we care for one another, our policies cannot go so far wrong. And there’s something to that. I often praise projects like Hands Across the Hills or Divided We Fall that work on that model—our political discourse would be better if we understood that not all people who disagree with us are spit from the bowels of Satan. The problem is that some of them are.

That sort of project does important work in undermining the notion that our current political situation is a war of extermination between two groups because it reduces the dehumanization of the opposition. I think those sorts of projects should be encouraged and nurtured because they show how much the creation of community can dial down the fear-mongering about the other.

They are models for how genuinely patriotic leaders and media should treat politics—by continually emphasizing that disagreement is legitimate, that we are all Americans, that we should care for one another. But that approach to politics isn’t profitable for media to promote, and therefore isn’t a savvy choice for people who want to get a lot of attention from the media.

It also isn’t a great model for when a group is actually existentially threatened (as opposed to being worked into a panic by media). This model says, if we apply it to all situations, that, if I think genocide is wrong, and you think it’s right, I should try to empathize with you, find common ground, show my compassion for you. And somehow that will make you not support a genocidal set of policies? I do think that a lot of persuasion happens person to person, when it’s also face to face. I’ve seen people change their minds about whether LGBTQ people merit equal treatment by learning that someone they loved would be hurt by the policies they were advocating. I’ve also seen people not change their minds on those grounds. Derek Black described a long period of individuals being kind to him as part of his getting away from his father’s white supremacist belief system, but the guy went to New College; he was open to persuasion.

And I think it’s a mistake to think that kind of person-to-person, face-to-face kindness makes much difference when we are confronting evil. Survivors of the Bosnian genocides describe watching long-time friends rape their sister or kill their family. It isn’t as though Jews being nicer to and about Nazis would have prevented genocide. It wasn’t being nice to segregationists that ended the worst kind of de jure segregation. We have far too many videos that show being nice to police doesn’t guarantee a good outcome. People in abusive relationships can be as compassionate as an angel, and that compassion gets used against them. We will not end Nazism by being nice to Nazis.

That kindness, compassion, and non-conflictual rhetoric is sometimes the best choice doesn’t mean it’s always the only right choice. It can be (and often has been) a choice that enables and confirms extraordinary injustice. It’s often only a choice available to people not really hurt by the injustice. Machiavellian rhetoric is sometimes the best choice; it’s often not.




















Racism, Biden, Trump, and the bad math of whaddaboutism


John Stoehr has a nice piece about what he calls the “malicious nihilism” of Trump-supporting media and pundits. They’ve stopped trying to argue that Trump is not racist, since he explicitly stokes racism, but, they’re saying, since Biden is a Democrat, and Democrats used to be the party of racists, then Biden is racist too: “Fine, the GOP partisans now say, Trump is a racist. The Democrats are just as bad, though. May as well vote for the Republican.”

That’s just plain bad math.

It’s easy to point to so many things Trump and his Administration have said and done that are racist. Critics of Biden point to one thing he said, and what the Democratic Party was like prior to 1970. Those are not comparable. That way of thinking about Biden v. Trump ignores the important questions of degrees, impact, and persistence.

It’s a weirdly common way of arguing about politics, though, and even interpersonal issues. There was a narrative about the Civil War for a long time which was that “both sides were just as bad,” and it was the mutual extremism about the issue of slavery that led to war.[1] The “mutual extremism” was this same bad math. There was one President between John Adams and Abraham Lincoln who didn’t own slaves (JQ Adams); Congress was so proslavery that the House and Senate both banned criticism of slavery for years (the gag rules); the Supreme Court ruled that African Americans could never be citizens. Criticism of slavery in slaver states could be punished by hanging; the Fugitive Slave Laws enabled slavers to kidnap African Americans in “free” states. Pro-slavery rhetoric regularly called for race war should abolition happen, and began calling for secession to protect slavery in the 1820s. Commitment to slavery was so dominant in slaver states that they went to war against the US.

There were pro-slavery Presidents; there was no abolitionist President (JQ Adams would, after his presidency, become anti-slavery, but not clearly abolitionist). No state had a death penalty for advocating slavery; there was no gag rule for advocating slavery; abolitionists didn’t advocate civil war or race war; no one could go into a slaver state and declare an African American to be free and face the same low bar that kidnappers in the “free” states faced.

They weren’t both “just as bad” because they didn’t equally advocate violence, they weren’t equally powerful, advocating civil war was commonplace on only one side, and the laws and practices they advocated weren’t equally extreme.

I wrote a book about proslavery rhetoric, and when I would make this point—“both sides” weren’t “just as bad”—neo-Confederates would say, “What about John Brown?” That’s the bad math. If, on one side, advocating and engaging in violence is commonplace, then one example on the other side doesn’t mean they’re both just as bad. You can even bring in Bloody Kansas, and the violence (and advocacy of violence) on the part of critics of slavery still doesn’t come anywhere close to what was commonplace in support of slavery.

Here is my crank theory about why people reason that way. A lot of people really don’t (perhaps can’t) think in terms of degrees. They think in terms of categories (this is not the crank theory part—it’s a fairly common observation). Thus, you’re racist or not, certain or clueless, proud or ashamed; something is good or bad, right or wrong, correct or incorrect; you’re in-group or out-group, loyal or disloyal. They don’t think about degrees of racism, certainty, pride, goodness, loyalty, and so on.

There’s a funny paradox. Because they don’t think in terms of degrees (or mixtures—something might be loyal in some ways and disloyal in others), they believe that you either have a rigid, black/white ethical system, or you’re what they call a “moral relativist.” They actually mean “nihilist.” So, they hear “right v. wrong might be a question of degrees rather than absolutes” as saying there is no difference between right and wrong—one of their crucial binaries is “rigid ethical system of categories or nihilism.” That binary imbues those other binaries with ethical value—being rigid about loyalty v. disloyalty seems to be part of being a “good” person.

Because people like this think in terms of putting things in a box (something goes in the box of good or bad, racist or not racist, loyal or disloyal), if they can find a single racist thing related to Biden, then he and Trump are in the same box. And, therefore, that box can be ignored when it comes to comparing them, since they’re both in it.

And this brings us back to Stoehr’s point. The attachment to rigidity, the tendency to think in terms of absolutes and not degrees makes these people actually incapable of ethical decision-making. Since wildly different actions are thrown into the box of “bad” or “racist,” people who reason this way can’t tell right from wrong. They can end up allowing, tolerating, encouraging, or even actively supporting wildly unethical actions because of their inability to think in nuanced ways about ethics. It’s moral nihilism.




[1] There weren’t only two sides, so the claim that “both sides” were anything is nonsensical. There were, at least, six sides. Pro-slavery/pro-secession, pro-slavery/anti-secession, anti-slavery/pro-colonization, anti-slavery/pro-full citizenship, anti-anti-slavery, anti-pro-slavery.

When every political issue is a war, shooting first seems like self-defense

[Image: train wreck, from https://middleburgeccentric.com/2016/10/editorial-the-train-wreck-red/]

For some time, we’ve been in a world in which far too much media (and far too many political figures) defenestrate public deliberation in favor of treating every policy decision as a war of extermination between two identities.[1] When a culture moves there, it’s inevitable that some group engages in what might be called “pre-emptive self-defense.” We’re there. It’s a weird argument, and profoundly damaging, but hard to explain.

The first time I ran across the proslavery argument, “We must keep African Americans enslaved and oppressed, because, if they had power, they would treat us as badly as we are treating them,” I thought it was really weird. I’ve since come to understand that it isn’t weird in the sense of being unusual. But it’s weird in the sense of being uncanny—it’s in the uncanny valley of argumentation in two ways. First, it’s turning the Christian value of doing unto others as you would have them do unto you into a justification of vengeance: do unto them as they have done unto you (which is a pretty clear perversion of what Jesus meant). Except, just to make it weirder, it isn’t what they have done unto you, but what they might do in an alternate reality. And that alternate reality requires that they are as violent and vindictive as you.

The argument is something like, “Yes, I am treating other people as I would not want to be treated, and as they have not treated me, but it’s justified because it’s how I imagine they would treat me in a narrative that also is purely imagined.”

This weird line of argument turns up a lot in arguments for starting wars. Obviously, wars start because some group attacks another; someone is the aggressor. So, when you think about pro-war rhetoric, you’d imagine that the side that is the aggressor would justify that aggression. They don’t. Instead, they present themselves as engaging in self-defense. They claim that their aggression isn’t really aggression, but self-defense because the other nation(s) will inevitably attack them. It’s self-defense against something that hasn’t happened (and might never). Pre-emptive self-defense.

For instance, Hitler invaded Poland because he intended to exterminate it as a political entity, exterminate most of its population, use it as a launching spot for a war of extermination against the USSR, and then make it (and other areas) a kind of Rhodesia of Europe, with “Aryans” comfortably watching “non-Aryans” act as serfs. But that isn’t how he justified it in his public rhetoric. In his September 1, 1939 speech announcing an invasion that had already started, he said the invasion was an act forced on him, that he had engaged in superhuman efforts to maintain peace, but Poland was preparing for war. Invading Poland was self-defense because Poland was intending to invade Germany, and had already fired shots (they hadn’t). [2] The various wars against the indigenous peoples of what is now the United States, even when they openly involved massacres, were rhetorically justified as self-defense because the indigenous peoples were, so the argument went, essentially hostile to “American” expansion, and therefore an existential threat.

In other words, pre-emptive self-defense says, we are going to invade this other nation while claiming that it isn’t an invasion but self-defense (although we’re the invaders) because they were going to be invaders or would be invaders if they could. That’s nonsense. That’s saying I’m justified in hitting you because I think that, were I in your situation, I would hit me.

It’s such an unintelligible defense that it isn’t even possible to put it into writing without ending up in some kind of grammatical Möbius strip. Yet it’s obviously persuasive, so the interesting question is: how does that rhetoric work?

As I’ve often said, I teach and write about train wrecks in public deliberation, what are sometimes called “pathologies of public deliberation.” While there is a lot of interesting and important disagreement about specifics regarding the processes, on the whole, there’s a surprising amount of agreement among scholars of cognitive psychology, political science, communication, history of rhetoric, military history, social psychology, history, and several other fields about some generalizations we can make about what ways of reasoning lead people to unjust, unwise, and untimely decisions. And, basically, that agreement is that if the issues are high-stakes and the policy decisions will have long-term consequences, then relying on cognitive biases will fuck you up good. And not just you, but everyone around you, for a long time.

As it happens, deciding about whether to go to war, how to conduct a war, and whether to negotiate an end to a war are decisions that activate all the anti-deliberative cognitive biases. (Daniel Kahneman has a nice article explaining how some cognitive biases are pro-war.) So, there’s an interesting paradox: cognitive biases interfere with effective decision-making, arguments about whether to go to war (and how to conduct it) have the highest stakes, and those decisions are the most likely to trigger the cognitive biases. We reason the worst when we need to reason the best.

And what I’m saying is that we bring in that bad reasoning to every policy decision when we make everything a war. When people declare that a political disagreement is a state of war (the war on terror, war on Christmas, war on drugs, culture war, war on poverty), they are (often deliberately) triggering the cognitive biases associated with war. The most important of those is that our sense of identification with the in-group strengthens, and our tolerance for in-group dissent decreases. Declaring something a war is a deliberate strategy to reduce policy deliberation. It is deliberately anti-deliberative.

And one of the anti-deliberative strategies we bring in is pre-emptive self-defense. In war, that strategy consists of months of accusing the intended victim (the country that will be invaded) of intending to invade. Then, once the public is convinced that the country presents an existential threat, invasion can look like self-defense. In politics, that strategy consists of spending months or years telling a political base that “the other side” intends an act of war, a complete violation of the rule of law, extraordinary breaches of normal political practices (or claims they already have), then “us” engaging in those practices–even if we are actually the aggressor–looks like self-defense. Pre-emptively. Thus, pro-slavery rhetors insisted that the abolitionists intended to use Federal troops to force abolition on slaver states, pro-internment rhetors argued that Japanese Americans intended to engage in sabotage (Earl Warren said that the fact that there had been no sabotage was the strongest proof that sabotage was intended).

I think we’re there with the pro-Trump demagoguery about “voter fraud” (including absentee ballots, the same kind that Trump used–there is no difference between “absentee” and “mail-in” ballots)–it’s setting up a situation in which pro-Trump aggression regarding voting will feel like pre-emptive self-defense.

I asked earlier why it works, and there are a lot of reasons. Some of them have to do with what Kahneman and his co-author said about cognitive biases that favor hawkish foreign policy:

“Several well-known laboratory demonstrations have examined the way people assess their adversary’s intelligence, willingness to negotiate, and hostility, as well as the way they view their own position. The results are sobering. Even when people are aware of the context and possible constraints on another party’s behavior, they often do not factor it in when assessing the other side’s motives. Yet, people still assume that outside observers grasp the constraints on their own behavior.”

In the article, Kahneman and Renshon call these biases “vision problems,” but they’re more commonly known as “the fundamental attribution error” or “asymmetric insight” with a lot of projection mixed in.

The “fundamental attribution error” is that we attribute the behavior of others to internal motivation, but for ourselves we use a mix of internal (for good behavior) and external (for bad behavior) explanations. So, if an out-group member kicks a puppy, we attribute the action to their villainy and aggression; if they pet a puppy, we attribute the action to their wanting to appear good. In both cases, we’re saying that they are essentially bad, and all of their behavior has to be understood through that filter. If we kick a puppy, the act was the consequence of external factors (we didn’t see it, it got in our way); but petting the puppy was something that shows our internal state. In a state of war, even a rhetorical war, we interpret the current and future behavior of the enemy through the lens of their being essentially nefarious.

And we don’t doubt our interpretation of their intentions because of the bias of “asymmetric insight.” We believe that we are complicated and nuanced, but we have perfect insight into the motives and internal processes of others, especially people we believe are below us. Since we tend to look down on “the enemy,” we will not only attribute motives to them, but believe that we are infallible in our projection of motives.

And it is projection. I’m not sure whether the metaphor behind “projection” makes sense to a lot of people now, since they might never have seen a projector. A projector took a slide or movie, and projected the image onto a screen. We tend to project onto the Other (an enemy) aspects of ourselves about which we are uncomfortable. If there is someone we want to harm, then projecting onto them our feelings of aggression helps us resolve any guilt we might feel about our aggression.

These three cognitive processes combine to mean that, quite sincerely, if I intend to exterminate you (or your political group, or your political power), I can feel justified in that extermination because I can persuade myself that you intend to exterminate me, since that’s what I intend to do to you.

Pre-emptive self-defense rationalizes my violence on the weird grounds that I intend to exterminate you and so you must desire to exterminate me. Therefore, all norms of law, constitutionality, Christian ethics are off the table, and I am justified in anything I do. It’s a dangerous argument. It’s an argument that justifies an invasion.



[1] And, no, “both sides” are not equally guilty of it. For one thing, there aren’t two sides. On which “side” is a voter who believes that Black Lives Matter, homosexuality is a sin, gay marriage should be illegal, we need a strong social safety net and should increase taxes to pay for it, abortion should be outlawed, the police should be demilitarized and completely changed? What about someone who believes there shouldn’t be any laws prohibiting any sexual practices or drug use, there shouldn’t be a social safety net, taxes should be greatly reduced, abortion should be legal, we shouldn’t intervene in any foreign wars? Those are positions held by important constituencies (in the first case many Black churches, and in the second Libertarians). Some environmentalists are liberals, some social democrats, some Republican, some racist, some Libertarian, some Third way neoliberal. The false mapping of our political world into two sides makes reporting easier and more profitable, and it enables demagoguery.

In addition, not all media engage in demagoguery to the same degree. Bloomberg, The Economist, Foreign Affairs, Foreign Policy, Nation, New York Times, Reason, Wall Street Journal, Washington Post are all media that sometimes dip a toe into demagoguery, but rarely. Meanwhile, The Blaze, DailyKos, Fox, Jacobin, Limbaugh, Maddow, Savage, WND and pretty much every group named by SPLC are all demagoguery all the time.

[2] Hitler was claiming that “Germans” who lived in Poland were oppressed. But, he said, “I must here state something definitely; […] the minorities who live in Germany are not persecuted.” In 1939.

Some of the highlights from Trump’s interview on Fox


From this interview on Fox.

WALLACE:  But, sir, we have the seventh highest mortality rate in the world. Our mortality rate is higher than Brazil, it’s higher than Russia and the European Union has us on a travel ban.

[….]

TRUMP:  Kayleigh’s right here. I heard we have one of the lowest, maybe the lowest mortality rate anywhere in the world.

TRUMP: Do you have the numbers, please? Because I heard we had the best mortality rate.

TRUMP: Number, number one low mortality rate.

[…] [He’s lying. By some statistics, we have the tenth highest mortality rate. Johns Hopkins has the US as seventh highest mortality rate.]

WALLACE VOICE OVER: The White House went with this chart from the European CDC which shows Italy and Spain doing worse. But countries like Brazil and South Korea doing better. Other countries doing better like Russia aren’t included in the White House chart.

[….]

TRUMP:  [About the prediction that covid would go away in summer.] I don’t know and I don’t think he knows. I don’t think anybody knows with this. This is a very tricky deal. Everybody thought this summer it would go away and it would come back in the fall. Well, when the summer came, they used to say the heat — the heat was good for it and it really knocks it out, remember? And then it might come back in the fall. So they got that one wrong.

[On March 16, 2020, Trump said it would go away. He wasn’t alone in making that prediction, but it was a minority opinion, as covid was thriving in hot places even then.]

[…]

TRUMP: [Fauci’s] a little bit of an alarmist. That’s OK. A little bit of an alarmist.

[….]

TRUMP: I’ll be right eventually. I will be right eventually. You know I said, “It’s going to disappear.” I’ll say it again.

WALLACE: But does that – does that discredit you?

TRUMP: It’s going to disappear and I’ll be right. I don’t think so.

WALLACE: Right.

TRUMP:  I don’t think so. I don’t think so. You know why? Because I’ve been right probably more than anybody else.

[….]

TRUMP: Chris, let the schools open. Do you ever see the statistics on young people below the age of 18? The state of New Jersey had thousands of deaths.

Of all of these thousands, one person below the age of 18 – in the entire state – one person and that was a person that had, I believe he said diabetes.

One person below the age of 18 died in the state of New Jersey during all of this – you know, they had a hard time. And they’re doing very well now, so that’s it.

[So, notice that, not only is he unconcerned about staff, but he doesn’t seem to understand the concept of the children infecting others, let alone the issues related to long-term damage from the disease.]

[….]

TRUMP: And Biden wants to defund the police.

WALLACE: No he, sir, he does not.

TRUMP: Look. He signed a charter with Bernie Sanders; I will get that one just like I was right on the mortality rate. Did you read the charter that he agreed to with…

WALLACE: It says nothing about defunding the police.

TRUMP: Oh really? It says abolish, it says — let’s go. Get me the charter, please.

WALLACE: All right.

TRUMP: Chris, you’ve got to start studying for these.

WALLACE: He says defund the police?

TRUMP: He says defund the police. They talk about abolishing the police.

[It doesn’t.]

[….]

TRUMP: Because I think that Fort Bragg, Fort Robert E. Lee, all of these forts that have been named that way for a long time, decades and decades…

WALLACE: But the military says they’re for this.

TRUMP: …excuse me, excuse me. I don’t care what the military says. I do – I’m supposed to make the decision.

[….]

WALLACE: You said our children are taught in school to hate our country.
Where do you see that?

TRUMP: I just look at – I look at school. I watch, I read, look at the stuff. Now they want to change — 1492, Columbus discovered America. You know, we grew up, you grew up, we all did, that’s what we learned. Now they want to make it the 1619 project. Where did that come from? What does it represent? I don’t even know, so.

WALLACE: It’s slavery.

TRUMP: That’s what they’re saying, but they don’t even know.

[…]

TRUMP:  Biden can’t put two sentences together.

[….]

TRUMP:  I called Michigan, I want to have a big rally in Michigan. Do you know we’re not allowed to have a rally in Michigan? Do you know we’re not allowed to have a rally in Minnesota? Do you know we’re not allowed to have a rally in Nevada? We’re not allowed to have rallies.

WALLACE: Well, some people would say it’s a health…

TRUMP:  In these Democrat-run states…

WALLACE:  But, wait a minute, some people would say that it’s a health risk, sir.

TRUMP: Some people would say fine

WALLACE:  I mean we had some issues after Tulsa.

TRUMP:  But I would guarantee if everything was gone 100 percent, they still wouldn’t allow it. They’re not allowing me to do it. So they’re not — they’re not allowing me to have rallies.

[….]

[About the test of his cognitive abilities—Wallace says it’s an easy test]

TRUMP:  It’s all misrepresentation. Because, yes, the first few questions are easy, but I’ll bet you couldn’t even answer the last five questions. I’ll bet you couldn’t, they get very hard, the last five questions.

WALLACE:  Well, one of them was count back from 100 by seven.

TRUMP:  Let me tell you…

WALLACE:  Ninety-three.

TRUMP: … you couldn’t answer — you couldn’t answer many of the questions.

WALLACE:  Ok, what’s the question?

TRUMP:  I’ll get you the test, I’d like to give it. I’ll guarantee you that Joe Biden could not answer those questions.

WALLACE:  OK.

TRUMP:  OK. And I answered all 35 questions correctly.

[On healthcare]

TRUMP:  Pre-existing conditions will always be taken care of by me and Republicans, 100 percent.

WALLACE:  But you’ve been in office three and a half years, you don’t have a plan.

TRUMP:  Well, we haven’t had. Excuse me. You heard me yesterday. We’re signing a health care plan within two weeks, a full and complete health care plan that the Supreme Court decision on DACA gave me the right to do. So we’re going to solve — we’re going to sign an immigration plan, a health care plan, and various other plans. And nobody will have done what I’m doing in the next four weeks. The Supreme Court gave the president of the United States powers that nobody thought the president had, by approving, by doing what they did — their decision on DACA. And DACA’s going to be taken care of also. But we’re getting rid of it because we’re going to replace it with something much better. What we got rid of already, which was most of Obamacare, the individual mandate. And that I’ve already won on. And we won also on the Supreme Court. But the decision by the Supreme Court on DACA allows me to do things on immigration, on health care, on other things that we’ve never done before. And you’re going to find it to be a very exciting two weeks.

A short list of fallacies

[Image: broken table, from https://www.sportsfreak.co.nz/super-bung-bung/broken-table/]

An argument is always a series of claims; a valid argument is one in which the claims are connected. Think of it like a table—if the legs aren’t connected to the tabletop, then the table will fall over. Fallacious arguments are ones that lack legs entirely, or in which the legs aren’t connected to the tabletop. In most disagreements, we are in the realm of “informal” argumentation; that is, the realm in which formal logic doesn’t necessarily help us. Often, what determines whether an argument is fallacious isn’t simply the “form” of the argument, but how it works in context.

Productive disagreements need the people disagreeing (the “interlocutors”) to argue about the same issue, use compatible definitions, fairly represent one another’s positions, hold one another to the same standards, and allow each other to make arguments.

There are lists of fallacies that make very fine distinctions, and are therefore very long and detailed—this is a list that seems to work reasonably well for most circumstances.

Fallacies of relevance

A lot of fallacies break that first condition: they are claims that aren’t relevant to the disagreement, but they are inflammatory. They either distract people into arguing about irrelevant topics or else shut down the argument altogether.

Red herring. Some people use this term for all the fallacies of irrelevance. Red herrings are claims that distract the interlocutors (or observers) from the trail we should be following. The phrase probably comes from a story in which someone drags a red herring across the trail of a rabbit to fool the pursuers (“red herring”). The claim someone has made is so stinky that people get distracted.

Argumentum ad hominem/ad personam/motivism. Contrary to what many people think, an attack on an interlocutor is not necessarily ad hominem. It’s only ad hominem (or fallacious) if the attack is irrelevant. Attacking someone’s credibility on the grounds that they don’t have relevant authority, accusing someone of committing a fallacy, or pointing out moral failings is not necessarily fallacious, if those factors are relevant. If I say that you shouldn’t be believed because you’re a woman, and your gender is irrelevant to the argument, then it’s ad hominem. Ad hominem often takes the form of accusing someone of being part of a stigmatized group, such as calling all critics of slavery “abolitionists” or any conservative a “fascist.” Sometimes that derails the disagreement, so that we’re now talking about how to define “socialist,” and sometimes it is so inflammatory that we stop having a disagreement at all and are just accusing one another of being Hitler. A somewhat subtle form of ad hominem is what’s often called motivism; i.e., a refusal to engage an interlocutor’s argument on the grounds that you know they’re really making this argument for bad motives. Sometimes people really do have bad motives, but they might still have a good argument. The problem with motivism is that it’s often impossible to prove or disprove someone’s motives.

Argumentum ad misericordiam/appeal to emotions. As with ad hominem, appeal to emotions is not always a fallacy—it’s a fallacious move when it’s an attempt to distract, when the appeal is irrelevant. All political arguments (perhaps all arguments) have an emotional component—otherwise, we wouldn’t bother arguing. If I argue that something is a bad policy because it will cost one million dollars, I’m appealing to the feelings we have about saving or spending money. If you say it’s a bad policy because it will kill ten children, you’re appealing to feelings just as much as I am. Those appeals to emotion are fallacious if they’re irrelevant (e.g., if our current policy already costs a million dollars and kills ten children, then the new policy isn’t a change in either factor, so those arguments are probably irrelevant), or if they’re being used to distract from other issues or end the disagreement. If, for instance, I refuse to discuss any aspect of the policy other than cost, or I engage in hyperbole about what will happen if we spend a million dollars, then my argument is a fallacious appeal to emotions. It’s also fallacious if I say that you should vote for me because I have a really cute dog, I’ve had a hard life, or I’ll cry if you don’t vote for me—those are all fallacious appeals to emotion. Crying to get out of a traffic ticket is a fallacious appeal to emotions. (And that example brings up the problem that fallacies are often effective.)

Tu quoque/whataboutism. This fallacy is the response that, “You did it too!” It’s fallacious when whether the interlocutor did it is irrelevant. The problem with tu quoque is that, if I’ve lied, pointing out that you lied doesn’t mean that what I said was true. We’re now both liars. Sometimes the fallacy involves false equivalency. For instance, if you and I are running for Treasurer, and I say that you’re a bad candidate because you embezzled, and you say that I embezzled too, that might be fallacious. If you’ve been Treasurer of multiple organizations and embezzled substantial amounts every time, and I once took a pen home for personal use, it’s fallacious (it’s also the fallacy of false equivalency—one argument can be multiple fallacies at once). If I say that honesty is the most important thing to me, and I condemn someone else for lying, and I’m lying in that speech, that I’m lying while condemning liars might be a relevant point. At that point, you might talk about my motives and not be involved in motivism—you can point out that I don’t appear to be motivated to engage in rational argument.

Appeals to personal certainty/argumentum ad verecundiam/bandwagon appeal. When we’re arguing, appealing to an authority is inevitable. Appeals to authority are fallacious when they’re irrelevant—the site, source, or person being appealed to is not an authority, is not a relevant authority, or has not made a claim relevant to the argument. For instance, if I say that squirrels are evil, and my proof is that I’m certain of that (appeal to personal certainty), then, unless I’m a zoologist who specializes in squirrels, my opinion is irrelevant. Appealing to a quote from Einstein would also be irrelevant—while he’s an expert, he was never an expert about squirrels. Quoting Einstein’s “God does not play dice with the universe” does not help in an argument about theism, since he isn’t a theologian, he was disputing quantum physics, and he later changed his mind about quantum physics—it isn’t a relevant claim or made by someone with relevant expertise. Saying that something is true because many people believe it (bandwagon appeal) is another form of appeal to irrelevant authority—many people have been wrong about things before. That many people believe something is relevant for showing it’s a popular perception, but probably not for showing that it’s true.

Fallacies of process

In formal logic (if p then q), a process is valid or not regardless of context, but in informal logic, it’s more complicated: we often end up having to ask whether something is a fallacy because the claims are related only weakly, or appear related but aren’t, or don’t necessarily follow from one another. The notion of whether something necessarily follows is important. The claim that “A caused B” might be true (“Being hungry caused me to eat cookies”), but the two terms aren’t necessarily related—I might have eaten something else. When things are necessarily related, then A always causes B. Fallacies of process involve claiming that B follows from A when it doesn’t.

Binary reasoning. Some people argue that this fallacious way of thinking is behind a lot of fallacies of argument. Binary reasoning is the tendency to put everything into all or nothing categories (black or white thinking). So, a person is either a Christian or a Satanist, Republican or Democrat. Since situations are rarely a choice between two and only two options, putting things into binaries is frequently fallacious.

Genus-species fallacy/fallacy of composition/fallacy of division/cherrypicking. Drawing a conclusion about an entire category (genus) from a single example (species), or even from a small set of examples, is a fallacy. We tend to fall for that fallacy because of confirmation bias, a bias that means we notice (and value) data that confirms what we already believe. We’re also prone to let striking examples mean more than they should, simply because they come to mind (called “the availability heuristic”). An example is useful for illustrating a point, but examples rarely prove one. Coming to a conclusion about a large category on the basis of one example is moving from species to genus (fallacy of composition), such as assuming that because the one French person you knew liked tap-dancing, all French people like tap-dancing. The more common fallacy is to move from genus to species (fallacy of division), assuming that, since something is part of a large category, it has the characteristics we attribute to that big category. For instance, it’s fallacious to assume that, since the person is French (genus), they love croissants (species). Even if the characteristic is statistically true of the majority in that category (most Americans are Christian), it’s fallacious to assume that the individual in front of you necessarily fits that generalization. Picking only those examples (studies, quotes, historical incidents) that fit your claim is generally called “cherrypicking.”

False dilemma/poisoning the wells. If there are a variety of options, and one of the interlocutors insists there are only two, or insists that we really only have one (because they have unfairly dismissed all the others), then that person has fallaciously misrepresented the situation. “You’re either with me or against me” is a classic example of the false dilemma, especially since “with me” usually means “agree with everything I say.” You might disagree with something I say because you’re “for” me—you care about me, and think I’m making a bad decision.

Straw man/nutpicking. We engage in straw man when we attribute to the opposition an argument much weaker than the one they’ve actually made. We generally do this in one of three ways. First, if people are drawn to binary thinking, then they’re likely to assume that you’re either with us or against us. For instance, if they think a person is either completely loyal to a political party or a member of the “other” party, then they’ll assume that anyone who disagrees with them is a member of the “other” party. (So, if I’m a binary thinker, and a Republican, and you criticize a Republican policy, I might assume that you’re a Democrat and then attribute to you “the” argument I think Democrats make.) Second, we will often unconsciously make an opposition argument (or even criticism) more extreme than it is—you’ve said something “often” happens, but I represent your argument as being that it “always” happens. Third, we will often take the most extreme member of an opposition group and treat them as representative of the group (or position) as a whole—that’s often called “nutpicking” (a term about which I’m not wild).

Post hoc ergo propter hoc/confusing causation and correlation. This fallacy argues that A preceded B, so it must have caused B. Of course, it isn’t always a fallacy—if A always precedes B, and/or B always follows from A, they must have some kind of relationship. The relationship might be complicated, though. While a fever might always precede illness, reducing the fever won’t necessarily reduce illness. Lightning doesn’t cause thunder—they’re part of the same event.

Circular reasoning. This is a very common fallacy, but surprisingly difficult for people to recognize. It looks like an argument, but it is really just an assertion of the conclusion over and over in different language. For instance, if I argue, “Squirrels are evil because they are villainous,” that’s a circular argument—I’ve just used a synonym. Motivism sometimes comes into play here. For instance, I might say, “Squirrels are evil because they never do anything good. Even when they seem to do something good, like pet puppies, they’re doing so for evil motives.” That’s a circular argument.

Non sequitur. This is a general category for when the claims don’t follow from each other. It’s often the consequence of a gerfucked syllogism. Sometimes people are engaged in associational reasoning.


A few other comments.

An argument might be fallacious in multiple ways at the same time. For instance, arguing that anyone who disagrees with me is a fascist who wants to commit genocide is binary thinking, ad misericordiam, motivism, and almost certainly straw man. And, once again, identifying a claim as a fallacy almost always requires explaining how it is fallacious.

Another way of thinking about fallacies is that they are moves in a conversation that obstruct productive disagreement. If you think about them that way, you get a list with a lot of overlap, but some differences.









Citations.
“red herring, n.” OED Online, Oxford University Press, June 2020, www.oed.com/view/Entry/160314. Accessed 15 July 2020.

In-groups, out-groups, and identity politics

[Image: Mussolini’s headquarters just before an important vote]

I often say that the first step in demagoguery is the reduction of politics to identity. And I’m often understood to be making an argument that is very different from what I’m trying to say. It’s important to understand that I’m talking about in-groups and out-groups from within social group theory. So, the “in-group” is not the “group in power.” It’s the group someone is in.

If you meet a new person, and ask them to describe themselves, they’ll typically do it by listing whatever seem to be the most relevant social groups they’re in (their “in-groups”): Christian, Irish-American, Texan, teacher. If I were at a conference of teachers, it would be weird for me to say that I’m a teacher, since everyone there is (it isn’t information anyone needs), and that I am Irish-American would be irrelevant. I’d list the in-groups most salient for that setting.

We all have a lot of in-groups; our membership in those groups is a source of pride. We also tend to have at least some out-groups. Out-groups are groups against which we define ourselves—we are proud that we aren’t in them. They can get pretty specific. I’ve mentioned elsewhere that my kind of Lutheran (ELCA) often takes pride in not being that kind of Lutheran (e.g., Missouri or Wisconsin synod); college rivalries are in-/out-group; fans of a band often take pride in not being the losers who are fans of that band (or kind of music).

There are two ways I’m often misunderstood when I say that the first step in demagoguery is the reduction of politics to in-group/out-group. The first is that, since I’m saying that social groups are socially and rhetorically constructed, people think I’m saying that social groups have no material reality, and that would be a stupid thing to say. Being a cancer survivor is a very real and material identity. Even categories that are purely socially constructed with no basis in biology (the notion of “Aryans” v. Central or Eastern Europeans) had the very real and material consequences of Hitler’s serial genocides. I’m saying that there aren’t necessary and inevitable connections among social group, material conditions, and how the groups are constructed. What it means to be a “cancer survivor” varies from one culture to another (whether it’s a point of pride or shame, for instance)—that real and material identity doesn’t necessarily or inevitably lead to a specific social group or political agenda.

Second, I’m often understood to be arguing for some Habermasian/Rawlsian identity-free world of policy argumentation in which arguments (and not people), like autonomous mobiles in space, engage with one another. That kind of argumentation is neither possible nor rational.

Of course our identity is relevant to our argument; it’s one of many things we should consider. For instance, that someone is a cyclist means that they can give useful information about what feel like the safest places to ride a bike where they live. That’s relevant information because they’re a cyclist. My opinion about what are the safest places to ride is not relevant because I’m not a cyclist. Unless I’m a traffic engineer who has a stack of studies about accidents in the city. The traffic engineer (who may or may not be a cyclist) and the cyclist have views that should be considered. Neither one is necessarily right.

Thinking about politics in terms of social groups becomes toxic when we think those groups are discrete (you’re either in one group or another), ontologically grounded categories (meaning that we think we know everything we need to know about an individual once we’ve categorized them into a social group). That’s the notion that, once I’ve put you into a social group, I know everything I need to know about your motives, beliefs, politics, and moral worth (you’re a teacher, so you’re a liberal elitist who supports Biden because he’ll increase teacher salaries and you’re greedy). You might really be a cancer survivor, teacher, cyclist, or traffic engineer, but knowing that you belong to any of those groups doesn’t mean I immediately know everything about you.

Identity politics is healthy when it is about acknowledging that we have a system that privileges some social groups over others; that some social groups might be possible to ignore (a person could have a long and happy life without ever understanding the distinction between Missouri and Wisconsin Synod Lutherans), but some are so interwoven into community identity and political rhetoric that you can’t not see them (such as “color” in the US); that there are real material conditions attached to being identified as belonging to some groups versus others; that claims about groups are generalizations that may or may not apply to specific individuals because of overlapping group membership; and that overlapping group identities mean that membership in a specific group doesn’t guarantee identical experiences (intersectionality).

Those approaches aren’t ways of thinking about identity and its relationship to politics that contribute to demagoguery.

While it’s probably cognitively impossible not to be strongly influenced by notions of in-group, not everyone is influenced in the same way. In-group identification seems to require some notion of out-groups (or at least non-in-groups). We’re only aware of the boundaries of the in-group (the line that marks “in,” so to speak) if there are boundaries, and boundaries mean at least the possibility of being outside them. There must be non-in-group members for there to be an in-group, and there must be groups of people who are outside those boundaries—out-groups. We tend to define ourselves by not being out-group.

What varies is how much hostility we feel toward non-in-group members, whether we group them all as one out-group, and whether we narrate ourselves as in a zero-sum battle. I might take pride in being ELCA and believe that that group has better theology than Missouri Synod, but that pride in my in-group doesn’t require that I feel threatened by members of the Missouri Synod; it doesn’t mean I believe that it is bad for me if something good happens to them, or that it is good for me if something bad happens to them (zero-sum).

When we think in terms of zero-sum, we fail to see ways that we might have shared interests, values, or goals with an out-group or some of its members. We will settle for policies that hurt us, as long as they hurt the out-group; we deny goods to the out-group, even if their getting those goods might benefit us.

So, when I say that we shouldn’t reduce politics to questions of identity, I don’t mean that consideration of identity is always a reduction, but it is a reduction when we assume that there are only two identities, that they are internally homogeneous, and that they are inevitably in a zero-sum relationship with each other.


Privilege, ableism, and the just world model

stairs at university of texas

In a footnote on another post, I mentioned that the just world model is ableist. Someone asked that I explain.

Here’s the explanation.

The “just world model” says that good things happen to good people and bad things happen to bad people. It provides a kind of security: you can keep bad things from happening to you. The just world model says that the person who was assaulted shouldn’t have had an open window (or gotten drunk, or worn that dress), that the Black driver should have been more polite, that the person who died of a heart attack shouldn’t have been such an over-achiever, and that the person who got cancer doubted God.

The just world model describes a world in which individuals are in perfect and complete control of their lives. It’s a really comforting narrative. It’s magical thinking. It says that if you do this thing and don’t do that thing, you will be protected from disaster.

I have a crank theory that people look at a homeless person and respond in one of two ways: 1) I would never let that happen to me, and that person should just suck it up and get a job; or 2) There but for the grace of God go I.

My crank theory is that acknowledging our common humanity with a homeless person, that something like a TBI could put us in that situation, is terrifying for some people. Some people find the notion that individuals do not have perfect agency unimaginably threatening. Republicanism has embraced the just world model, especially in its attachment to neoliberalism (which is pure just world model), but also in its commitment to the Strict Father Model (if you exert complete control over your children you will raise them to be good).

Various non-partisan ideologies similarly say that, if a bad thing happened to you, you did something to deserve it (anti-vax, a lot of “healthy lifestyle” rhetoric, the idea that people who get cancer or have heart attacks had personality flaws that brought those conditions on). Thus, what might have its origin in an irrational desire to feel more comfortable about how much control we have in our own life ends up enabling a kind of political hardheartedness regardless of Dem v. GOP affiliation.

Regardless of whatever psychological needs the just world model soothes, the consequence of attachment to it is that it drops a sociopathic curtain between us and victims. One of the ways it does so is by closing off any possibility of talking about systemic discrimination.

I work on a campus much of which was built when the assumption was that anyone in a wheelchair shouldn’t be in public. There are steps everywhere. There are steps that aren’t necessary from an engineering perspective, but are there for aesthetic reasons. The way the campus is built means that there is an extra burden on someone who has even the slightest mobility issue—it’s harder for them to be a successful student, staff member, or faculty member.

At this campus, being able-bodied gives a person a fair amount of privilege—it’s possible to schedule classes back to back in distant buildings, it’s easy to get to office hours regardless of where they are, there’s always a bathroom nearby you can use, and you don’t show up to a class or meeting already exhausted from negotiating the trip there. The just world model says that you earned that privilege by choosing not to have a disability—the people who are encumbered by the building design brought it on themselves. Since they could simply choose not to be encumbered, it isn’t necessary to do the expensive work of ensuring the buildings are accessible. There isn’t a systemic problem—there are just individuals, all of whom are getting what they deserve. So the just world model simultaneously reinforces privilege and denies its existence.

Stop calling Biden a “socialist.” It just makes you look silly.

He’s a Third-Way Neoliberal.

The first thing to explain is that “neoliberalism” is not a lefty political/economic ideology. It’s conservative (I’ll explain why it has the word “liberal” in it below). Reagan was the first neoliberal President, and he did the most to reshape American policy along neoliberal lines. Clinton, Obama, HRC, and Biden are not and were not socialists. They are “third way neoliberals.”

Here’s why it’s called neoliberalism.

In the late 18th and early 19th century, a political ideology arose that is often called “liberalism.” [1] The New Dictionary of the History of Ideas defines “liberalism:”
“It is widely agreed that fundamental to liberalism is a concern to protect and promote individual liberty. This means that individuals can decide for themselves what to do or believe with respect to particular areas of human activity such as religion or economics. The contrast is with a society in which the society decides what the individual is to do or believe. In those areas of a society in which individual liberty prevails, social outcomes will be the result of a myriad of individual decisions taken by individuals for themselves or in voluntary cooperation with some others.” [2]

It’s useful to distinguish between political and economic liberalism—a point that will take a while to explain.

It’s paradoxical, but important, to understand that all the major political parties and movements in the US endorse political liberalism, or claim to. The disagreement is about how to honor individualism, but notice that, in the major policy disagreements, everyone argues from within a frame of promoting individual freedom (gun control is about the freedom to carry a gun or the freedom to speak freely without worrying about being shot, the freedom to be LGBTQ+ or the freedom to condemn them).

In the nineteenth century, economic liberalism advocated no governmental intervention in the “free market,” saying that the “free market” would better determine prices, wages, and working conditions. In Britain, this led to the potato famine among other catastrophes. In the US, it led to a cycle of booms and busts, outrageous working conditions, and environmental degradation that tanked the economy (I have yet to meet a person who advocates this kind of liberalism who knows much of anything about the 19th century economic cycles, working conditions, or the dust bowl). Because liberalism was such a disaster—worldwide—as was shown in 1929, a lot of people started considering other options. There were, loosely, four options that countries chose.

In the early twentieth century, a lot of people argued that liberalism as a political philosophy could be separated from liberalism as an economic philosophy (in other words, economic and political liberalism aren’t necessarily connected). But many people argued (and still do) that the commitment to a political practice (authoritarianism, democracy, monarchy) can’t be separated from an economic practice (mercantilism, autarky, capitalism, and so on). Stalinists and fascists (who have a lot in common, rhetorically) endorsed that (false) notion that political and economic commitments are the same, and insist(ed) that, if you choose this economic system, you are necessarily choosing that political system.[3] They were wrong, and they’re still wrong, but that’s a different post. [4]

In the 19th and early 20th century, there were a lot of kinds of socialism. That’s why the Communist Manifesto spends about a third of the book arguing with other socialists about why they should be its authors’ kind of socialist. That’s also why various activists who were conservative in terms of things like sexuality but radical in terms of economic issues sometimes called themselves socialist (such as Dorothy Day), and were not endorsing Stalinism.

In the early twentieth century, a lot of people believed that “individuals can decide for themselves what to do or believe with respect to particular areas of human activity such as religion,” but the government can “intervene” in regard to issues like food safety, accuracy in advertising, fraud, consciously fatal work conditions, exploitative contracts, deliberate manipulations of the market, and so on.

In other countries, this was called democratic socialism, but FDR (if I have my history correct) called it liberalism. Supposedly, he thought that people would reject the “socialism” term, and his political agenda was liberal (but his economic one wasn’t). And he was right. I can’t even begin to estimate the number of people who say, “SOCIALISM ALWAYS ENDS IN DISASTER” (they do like them some caps lock) when someone wants to reject economic “liberalism.” It simply isn’t true that rejecting economic liberalism ends in disaster, if people maintain political liberalism. On the contrary, if people try to maintain economic liberalism at the expense of political liberalism, disaster ensues.

A society with political, but not economic, liberalism is one that doesn’t require you to have particular religious, ideological, sexual, or even political ideologies, as long as it’s all consenting adults, and there’s no force involved. The basic premise of liberalism is that your right to swing your fist stops at my face, and so a society with political liberalism is always arguing about that point of contact.

Economic liberalism has a different problem, and it’s an empirical one. The contradiction at the heart of economic liberalism is that there is force involved—no market is free. The coercion might be the government coercing businesses into behaving certain ways, businesses coercing each other, businesses coercing employees, or employees coercing businesses. Paradoxically, the only way to maintain the ability of the individual to decide for themselves (the core of liberalism) is if the government intervenes to ensure that the market doesn’t enable some individuals (or corporations) to engage in force.

Economic liberalism as a political program got hammered by the Depression and the needs of a war economy. Post-war, there were people who argued that we’d gone too far in the direction of government intervention in the market, and we needed to go back to economic liberalism. They’re called neoliberals, because it’s a new form of the classical liberalism of the 19th century. They argue that we should let the markets take care of almost everything. As I said, Reagan was a neoliberal.

Some people felt we went too far in the direction of neoliberalism: while we didn’t need the governmental intervention of LBJ’s Great Society, a market completely free of government control ground the faces of the poor, destroyed God’s creation, and landed us in unwise (and endless) wars (it’s important to understand how much of this political agenda is religious). The idea was that these harms could be avoided by the government working with the market to establish incentives. This kind of person is typically called a “Third Way Neoliberal.” They want to preserve as much freedom in the markets as is compatible with legitimate community ends. They support capitalism as the most desirable economic system.

Whether that’s possible is an interesting argument. Whether it leads to Stalin’s kind of socialism isn’t.[5] And that’s what Clinton, Obama, HRC, and Biden are and were. Third Way Neoliberals.






[1] There are never just two political ideologies at play in any given era, so people who think, “If you aren’t this, then you must be that” are always reasoning fallaciously.
[2] Charvet, John. “Liberalism.” New Dictionary of the History of Ideas, edited by Maryanne Cline Horowitz, vol. 3, Charles Scribner’s Sons, 2005, pp. 1262-1269. Accessed 24 June 2020.
[3] Right now, we have this weird situation in which a lot of people who claim to be neoliberal in terms of economic agenda are arguing for fascism in terms of political agenda. David Neiwert has made that argument about Rush Limbaugh, for instance.
[4] If you want a really good book about the Nazi economy, and how it ended up being not what fascists supposedly want, Adam Tooze’s Wages of Destruction is deeply researched and elegantly argued.
[5] While some democracies have slid into authoritarianism, slowly voting in or allowing increasingly authoritarian policies to stand, they haven’t slowly moved into communism. Communism arises from people being in desperate situations, and there’s a violent revolution of some kind. As someone said, probably Orwell, you have to be in a desperate situation to be willing to give up ownership of your last cow.



Are Trump supporters racist? Yes. Are Biden supporters racist? Yes. Are they equally racist? No.

Notice that Japanese Americans must report for internment

Far too many people (mostly white)….

…..think that I just did something racist by saying “mostly white.”

People might think that because, if you stop someone on the street and ask them, “what does it mean to be racist?,” a lot of them would say it means:

1) consciously categorizing people by race;

2) and you can know that someone is doing that by “making race an issue” (that is, mentioning race);

3) “stereotyping” a race (that is, making a generalization about it), especially if the generalization is negative;

4) as a consequence of that conscious negative stereotype about the race, treating everyone of that race with aggression and hostility.

It would seem I’ve violated the first through third rules, so, if you think those are good ways of deciding what racism is, I’m racist.

Those actually aren’t good ways of deciding that something is racist (although it’s true that I’m racist). In the first place, these rules imply useless and cognitively impossible solutions to racism. They suggest that the solution to racism is to: not see race; not mention race; not make generalizations about groups; and never consciously behave badly to someone just because of their race.

In the US, we can’t not see race. Race is so important in our culture that saying you don’t see race is like saying you don’t see gender. Unless you are literally blind, you see race and gender. Those are the things we notice about someone immediately. We’re often wrong about someone’s gender, just as we’re often wrong about someone’s “race,” but we can’t help but categorize people. Individuals can resist, but never completely free themselves of, the culture in which they were raised. Even Gandhi struggled to free himself of thinking in terms of the caste system. What matters about Gandhi is that he recognized and acknowledged (publicly) that he wasn’t free of thinking about people from within the caste system, and he tried to account for it.

Aristotle describes ethical action as much like aiming with a bow and arrow. His argument was that every virtue has extremes on either side. It’s a vice to be reckless, and a vice to be cowardly. It’s a vice to be spendthrift, and a vice to be a miser. We all have a tendency toward one extreme or another, just as we are prone to pull to one side or another when aiming a bow and arrow. [1] We need to acknowledge our tendency, so that we can adjust for it. That’s how racism works. We can’t escape it, but we can try to figure out how much it’s making us miss the mark, and adjust for it.

Aristotle’s point is that none of us is born with perfect aim. We can get to ethical actions by acknowledging our tendency to unethical action. The notion that acknowledging (or naming) race makes the action/statement racist guarantees we will not correct our aim. It’s like saying that your shot must have been good because you don’t see misses.

So, are Trump supporters racist? Yes. Are Biden supporters? Yes. They/we are all racist because we’re all Americans and Americans are racist. But not equally so.

Racism isn’t an either/or. It isn’t that we’re racist or not; it’s how racist we are and what we’re doing about it. It’s the fourth (false) criterion for racism that enables racism most effectively.

Racism is an unconscious bias. No one is unbiased. That isn’t how cognition works. You can’t perceive the world without perceiving it in light of what you already believe. Imagine that you’re a white person trying to find an office in a university building. You can find the door to the building because you have a stereotype about how buildings work. You walk past classrooms because you have a stereotype about classrooms. You walk into a room because you have a stereotype (and prejudices) about what an office looks like. For instance, it might say on the door, “Department of Rhetoric,” and you’re looking for that department. You have a prejudice (you have prejudged) that departments put their name on a door.

That’s why the argument that you shouldn’t stereotype groups is nonsense. We stereotype. That’s how we think. The very statement, “Generalizations are bad” is a generalization. Generalizing isn’t the problem.

You walk into that office. There are several people. Who do you assume is the executive assistant, and who do you assume is the Department Chair?

You see a tall white male with slightly graying hair, a short stout Black woman of the same age as the white male, a younger white woman elegantly dressed, a person whose race and gender you can’t immediately identify. Whom do you treat as the receptionist?

Your decisions about whom to treat as the Chair are just as much questions of prejudging, stereotypes, and expectations as your decisions regarding finding the door (and it’s decisions, not decision—there are a lot of factors). You can rely on your prejudgments, stereotypes, and expectations, or you can decide to treat humans differently from doors. You can’t not have the prejudgments; you can know that you have prejudgments and then act differently.

Racism isn’t getting up in the morning and deciding on whose lawn you’ll burn a cross. Racism is assuming the Black woman isn’t the Chair.

Does that mean that the non-racist thing to do is to walk into the office groveling in shame, filled with guilt, hating your whiteness? If you get your information from the GOP propaganda machine, that’s what you’d think. They say that being anti-racist means being ashamed of being white (something no anti-racist activist has ever said would solve racism). Would walking into that room full of shame for being white change anything about the interaction? If, full of shame, you assume the white guy is the chair, you’re still racist.

A lot of people assume that racism is a sin of commission, and the common notion about sins of commission is that you know you’re doing something that is a sin and you do it anyway. I think that’s pretty rare in racism. In fact, I’m not sure it’s ever the case.

My experience is that racists—even actual Nazis—don’t (or didn’t) see themselves as acting out of racism. Nazis these days call themselves “racial realists”; the real Nazis claimed that they were acting on the basis of objective and realist science. Racists think racism is irrational hostility to a race; racists believe that their stereotypes are grounded in data.

They’re grounded in confirmation bias.

Sometimes, racists say that they aren’t racist because their actions–such as wanting to restrict immigration of some group–are grounded in concerns about politics, not race. Therefore, they aren’t racist!

That’s how race-based genocide is justified. Native Americans had to be exterminated because they were a military threat. Jews were, the Nazis said, a political threat, as were Poles, Czechs, and various other non-Aryan “races” of central and eastern Europe. The people who engaged in lynching didn’t say they were doing something racist; they said they were trying to preserve a social order (that was racist). I’ve spent a lot of time crawling around the nastiest of the nastiest racist writings—both current and historical—and I can’t think of a time when racists called what they were doing “racist.”

In other words, even people engaged in race-based genocide—the most extreme version of racism—have ways of rationalizing those actions so that they don’t see themselves as committing the sin of racism. Racism never seems to the racist to be a sin of commission because there are ways of pretending it isn’t racism—we pretend it’s about upholding “objective” (actually racist) standards (such as standardized tests, or arrest rates), or reducing crime (but really the crime of not being white).

These were exactly the ways that Nazis criminalized being Jewish. Jews were more criminal, they said, and had arrest rates to prove it (because Jews were arrested for things that wouldn’t have resulted in an arrest for non-Jews), science that agreed Jews were essentially criminal, and media that promoted the stereotype of Jews as criminal.

Are Trump supporters racist? Yes, because they support the most openly racist President we’ve had since Wilson. Racism isn’t a binary; it’s a continuum. And Trump is very far on the racist side of the continuum.

Are Biden supporters racist? Yes, because Americans are racist. He isn’t as racist as Trump.

Does it hurt the feelings of Trump supporters to be called racist? Well, then don’t be racist. One way for Trump supporters to show they aren’t racist is for them to condemn Trump’s racism. Until they do, they’re more racist than Biden supporters.

If I’m a shitty driver and regularly run people over, I don’t get to say that I’m just as hurt by being called a shitty driver as the people are hurt by my running them over. If I want to stop being called a shitty driver, I should try to learn to drive better.


[1] If you’re a geek about this kind of thing, and you want a very scholarly, but beautifully written, book about the Athenians of Aristotle’s era and justice, Martha Nussbaum’s The Fragility of Goodness changed my world.