On being nice to Trump supporters

people arguing
From the cover of Wayne Booth’s _Modern Dogma_.

Cicero, in De Inventione, said that, if you are presenting an argument with which your audience already agrees, you land your thesis in the introduction. If you are arguing for something your audience disagrees with, you delay your thesis. Oddly enough, as I’ve taught a lot of workshops on scholarly writing across the disciplines, I’ve found that Cicero is right. When people are making an argument their audience doesn’t want to hear, they delay their thesis, even in scholarly arguments (they have a partition instead, or sometimes a false thesis).

I have always required that my students write to a reasonable and informed opposition, and that means delaying their thesis, delaying their claims till after they’ve given evidence, beginning by fairly representing the opposition, getting evidence from sources their opposition would consider reliable, giving a lot of evidence, and explaining it well. I don’t have those requirements because I think this is what all teachers should teach–we shouldn’t. Since student writing requires announcing a thesis, giving minimal explanation, starting paragraphs with main claims, and various other non-persuasive strategies, it is responsible for people teaching the genre of college writing to teach students how to do that. I’m describing that pedagogy because I want it clear that I understand the value of reaching out to an audience and trying to find common ground.

The hope of rhetoric is that we can avoid violence by talking.

We use violence when we believe that we are in a world of existential threat, when we believe that the out-group is engaging in actions that might exterminate us. Sometimes that belief is an accurate assessment of our situation—Native Americans through the entire nineteenth century, Jews in Nazi Germany, free African Americans in the antebellum era, powerful African Americans in most of the nineteenth and twentieth centuries, Armenians in Turkey, and so on. Whether violence or non-violence is the most strategic choice for the people being threatened with extermination is an interesting argument. For me, whether third-party groups should use violence to stop the extermination is not an interesting argument. The answer is yes.

Sometimes the rhetoric of in-group extermination is simultaneously right and irrational. Antebellum white supremacists correctly understood that abolition would mean the end of their political monopoly if African Americans were allowed to vote. Their sense of existential threat was the consequence of so closely and irrationally identifying with white supremacy–of believing that losing that system was essentially extermination. It wasn’t; it was just losing the monopoly of power. Racist demagoguery enabled them to persuade themselves that, because they were threatened with extermination, they were not bound by ethics, Christianity, or legality.

That’s how demagoguery about existential threat works, and that’s what it’s intended to do. It’s designed to get people to overcome normal notions that we should follow the law, be fair to others, listen to others, treat children well, be compassionate, behave according to the ethical requirements of the religion we claim to follow, and so on, by saying that, while we are totally ethical people, right now we have to set all that aside–because we’re faced with extermination. When, actually, we’re just faced with losing privilege. Equating the loss of privilege with extermination is sheer demagoguery.

Republicans now correctly understand that allowing everyone to vote would end their political monopoly. White evangelicals correctly realized that they were losing the political power they had with Bush and Reagan. Coal miners are faced with a world that doesn’t need a lot of people to have that job. Racists, homophobes, and bigots of various kinds are being told they need to STFU. None of these groups are faced with being actually exterminated, but they are faced with their political power being lessened. And too many people in those groups listen to media that have taken the Two Minutes Hate to 24/7 demagoguery about existential threat.

Trump supporters have spent years drinking deep from the Flavor-Aid of the pro-GOP Outrage Machine, and so they believe a lot of things. They believe they’re the real victims here, that the media is against them, that white people are about to be persecuted, that there is no legitimate criticism of their position, that libruls have nothing but contempt for them and think they’re racist, and that they are so threatened with extermination that anything done on their behalf is justified.

And here I have to stop and say that authoritarians (regardless of where they are on the political spectrum, and authoritarians are all over the place, but at any given time they tend to congregate in a few spots) misunderstand the concept of analogy. If, for instance, I say that supporters of Hitler reasoned the same way that squirrel haters are now reasoning, I am not saying that they are the same people (or dogs) in every way. I am not making an identity argument; I am making an argument about reasoning.

But, all over the political spectrum, people who are, actually, reasoning the way that people who supported the Nazis reasoned, are outraged at the comparison. It isn’t a comparison about identity; it’s a comparison about methods of reasoning.

We aren’t in a crisis of facts. Everyone has facts. We’re in a crisis of meta-cognition. We have a President who is severely cognitively impaired and obviously declining rapidly, who fires people who disagree with him, who can’t make a coherent argument for his policies, and who doesn’t argue from a consistent set of principles. Trump supporters can find ways to defend him, but none of those defenses is consistent with the others, let alone with the standards they apply to out-group members. The debacle about ingesting disinfectants is just the latest.

We are at a point when the defenses of Trump are that he doesn’t have the skills to be President–he is thin-skinned (he was so obsessed with impeachment that he couldn’t pay attention to anything else), lies all the time (his height, weight, the number of people at his inauguration, whether he was talking to Birx), forces other people to lie on his behalf (such as Trump supporters lying that he was so obsessed with impeachment he couldn’t do anything else, although he also said that wasn’t true), refuses to listen to anyone (which his supporters defend by blaming the disloyal people), gives briefings when he doesn’t actually know what he’s talking about (every briefing), and often says things that aren’t what he meant (every defense of Trump).

What I’m saying is that Trump supporters grant all the criticisms of Trump–their argument is that he’s incompetent.

But their defenses of him show something about them–that they can’t put forward a rational defense of him. I mean “rational” in the way that theorists of argumentation use the term. They can’t put forward an argument for Trump without violating most of the rules of rational-critical argumentation. (And, I’d love to be proven wrong on this, so if any Trump supporters want to show me an argument for him that follows those rules, I’d love to see it.)

In other words, support for Trump isn’t about any kind of rational support for his enhancing democratic deliberation, nor even his trying to ground his political decisions and rhetoric in a coherent ideology, but a “fuck libruls, we’re winning” rabid tribal loyalty that eats its own premises.

Trump happens to be the most obvious example right now, but, again, all over the political spectrum are people who can’t defend their positions in a coherent and consistent way. They can defend their positions—but by giving evidence that relies on a major premise they don’t believe, by engaging in kettle logic, or by whaddaboutism.

If we’re paying attention to Cicero, then we should find common ground with them, be fair to their representation of their own argument, and delay our theses. And, as I said, I think that is great advice.

But it isn’t useful advice when we’re arguing with people who, as soon as they sense you are going to criticize them, refuse to listen because they think they know what you are going to argue, and they know they shouldn’t listen. People well-trained in what the rhetoric scholars Chaim Perelman and Lucie Olbrechts-Tyteca called “philosophical paired terms” just assume that, if you’re saying Trump isn’t the best, then you are part of the ruling elite–just as Stalinists used to say that Trotsky must be a capitalist, since he criticized Stalin; Nazis said that anyone who criticized Hitler must be a Jew; anyone who opposed McCarthy was a communist; slavers said that anyone who criticized slavery must want a race war. If you aren’t with us, you are against us.

In the 1830s, the major critics of slavery were predominantly Quakers and free African Americans who described slavery accurately, but that (accurate, it should be emphasized) description hurt the feelings of slavers.

Slavers and pro-slavery rhetors said that any criticism of slavery was an incitement to slave rebellion. Much like pro-Trump rhetoric that inadvertently gives away the game–their argument is that he doesn’t have the skillset to be a good President–this rhetoric gave away that slaves hated being slaves, and that the actual conditions of slavery were indefensible.

Many people tone-policed the anti-slavery rhetors (to the extent of having a gag rule in Congress, which is pretty amazing if you think about it). Oddly enough, some anti-slavery rhetors said that these (accurate) descriptions of individual slavers beating and raping slaves were inflammatory, and so some of them tried to write conciliatory anti-slavery tracts. They, too, were accused of fomenting slave rebellion.

Individuals can be persuaded to change their ways on the basis of individual interactions, and there are a lot of anecdotes saying that can work. That’s how individuals leave cults, for instance. But conciliatory rhetoric to groups of people who are drinking deep from a propaganda well is a waste of time.

If you have a personal connection to someone who is a Trump supporter, then building on that personal connection might work, but it’s worth noting that the notion of being able to change people is why people stay in abusive relationships.

But, when we’re talking about relative strangers–the strange world of social media interlocutors–then I don’t think engaging the claims is as useful as pointing out the inability to follow the basic rules of rational-critical argumentation. When people are fanatically committed to an ideology that is internally incoherent and incapable of being defended in rational-critical argumentation—and that’s where support of Trump is now—no level of “let’s be inviting to them” will persuade them. It’s worth the time to be precise in our criticisms of their position, but not because being precise will be more or less rhetorically effective. It’s worth the time to be right.

People in rhetoric need to understand that some people are engaged in good faith argumentation, and some aren’t, and we behave toward them differently.

It is impossible to defend Trump through rational-critical argumentation.

Shaming Trump supporters on that point is a good rhetorical strategy. Whether you do that through conciliation with individuals or through pointing it out generally is a choice that depends on your audience.

Emma Goldman

cat lying on a hat

When we were living in Cedar Park (or, as I call it, Cedar Fucking Park), there were neighbors who let their dogs out at night. Those dogs killed small dogs and cats, and a malfunctioning garage door meant that two of our cats were out while those dogs were looking for animals to kill.

We got two kittens, Winston Churchill (who ended up being more like Winston Smith) and a torby we named Emma Goldman. We are naming all our cats after anarchists from now on, because that’s what they are. (Although an argument could be made that they’re all believers in absolute monarchy but disagree as to who the absolute monarch is.) Winston brought home a virus, and it got into one of Emma’s eyes (iirc, a variety of herpes), and we were giving her eye drops and pills for I don’t even remember how long. Eventually, the vet recommended we give up and get the eye removed, so that’s what we did.

I still feel bad about this. After we did that, she changed personality. She became a loving and affectionate cat. She had obviously been in a lot of pain.

Being one-eyed had absolutely no impact on her ability to play with bits of string, correctly assess a jump, or various other activities that would seem to require stereoscopic vision. Cats are amazing.

She liked to sit in our laps while we were at our desks, or just sit on our chairs. She was not always gracious about letting us sit in our desk chairs. After a while, I discovered that having her sit in my lap while I worked hurt my back (I still don’t know why), and so I set up a basket on my desk for her. Jim continued to let her sit in his lap. She really got to like the basket, and I attribute my scholarly productivity to having a cat I could scratch while writing.

Once, she urped onto the USB ports of Jim’s CPU, which caused Jim to spend hours diving deep into the Windows registry in order not to get error messages. Personally, I suspect that she felt that the fish he had shared at dinner was over-cooked, and she was teaching him a lesson. I wouldn’t put it past her.

She was never actually a fat cat, but she had a kind of grandeur, and so the joke started about her being a Fat Cat Banker. She would have been a damn good banker. She was sensible, good at assessing choices, and she completely dominated the dogs. She was an early poster on the “cats against feminism” tumblr, arguing that she earned that chair because dogs (iirc).

We live on a busy street, near a creek that has coyotes, and so our cats are indoor cats. Jim built a catio for the cats, so that they could be outside and watch birds at a bird feeder, and Emma liked it. But our house is on clay that’s on limestone, and that means that doors suddenly don’t close the same way if there’s been the right amount of rain. One night, a door didn’t close, and Emma got out. We were frantic. Jim was walking the road, and Jacob and I were searching in the backyard, and she suddenly materialized in front of Jacob. She didn’t come running from another place, or come over the fence. She was just suddenly there.

That was her superpower. You couldn’t find her anywhere, and then, there she was. She did that in the house too.

Another time, she got out, and she got out of the backyard, and I frantically chased her. I had one of those nightmare-like slow motion experiences of watching her run toward the road while a car was coming. She hit a car. She bounced off the wheel.

After that, we would sometimes take her out in the backyard if we were going to sit and read and she was reliable. She would hang out and survey her demesne. We knew it was time when she wasn’t enjoying being in the yard.

She was a badass cat. She did what she wanted to do. She knew what she wanted, and she asked for it. She was clear on her boundaries, and enforced them without anger. I admired her. We had a rule about no cats on the dinner table, but once she hit a certain age, that became more of a guideline than a rule. She would often try to get food from me, but since I’m more vegetarian than not, that didn’t work out well for her (although it shows that she was attentive as to who was the bigger sucker). Jim, however, was a goldmine.

She was a torby, and we learned that torbies have a reputation for not putting up with shit (as Jim can say, since he had to have antibiotics for the time she bit him at the vet). Otoh, we spent the last three weeks giving her 100ml subcutaneously every day, and it was all good.

After Winston died, she would join us in the morning for snuggles. Sometimes—there was no clear pattern—she would come and sleep on someone for a while. When Clarence was in bad shape, and her kidney issues made themselves clear, she would come to the bedroom during the night and paw at the blanket till we adjusted to let her sleep under the covers with us. Friends recommended heating pads, and that helped, but she would still sometimes want to be with us. I don’t know why, but I know that she knew what she wanted, and asked for it. That’s who she always was.

She loved Pearl. She loved having Pearl boop her head. Pearl was completely intimidated by her, and so was always a little cautious about booping, which is so incredibly sweet on both their parts.

Because of social distancing, the vet put her down in the backyard she loved, in the sun. She was so weak that it took less than the vet expected. She is buried in that yard, just outside a window of the room she loved.

She was an oddly present cat. She was just always there. And so going through every room is being aware that she is not there. Even sitting in the backyard is being aware she isn’t there. She was always present in our lives for sixteen years.

A friend once said that, when someone you love dies, you never get over it, and you never stop thinking about them. It’s just that they move to a different place in your life. I admired Emma. I admired her ability to be loving, clear with boundaries, and rarely angry. I will miss her so much, and look for her in rooms for months, and I will think of her for the rest of my life. I will also keep myself from urping into Jim’s CPU, no matter how angry I am with him.

How persuasion happens

train wreck

Some time in the 1980s, my father said that he had always been opposed to the Vietnam War. My brother asked, appropriately enough, “Then who the hell was that man in our house in the 60s?”

That story is a little gem of how persuasion happens, and how people deny it.

I have a friend who was raised in a fundagelical world, who has changed zir mind on the question of religion, and who cites various studies to say that people aren’t persuaded by studies. That’s interesting.

For reasons I can’t explain, far too much research about persuasion involves giving people who are strongly committed to a point of view new information and then concluding that they’re idiots for not changing their minds. They would be idiots for changing their mind because they’re given new information while in a lab. They would be idiots for changing their mind because they get one source that tells them that they’re wrong.

We change our minds, but, at least on big issues, it happens slowly, due to a lot of factors, and we often don’t notice because we forget what we once believed.

Many years ago, I started asking students about times they had changed their minds. Slightly fewer many years ago, I stopped asking because I got the same answers over and over. And what my students told me was much like what I’ve heard from books like Leaving the Fold, from books by and about people who have left cults or changed their minds about Hell or creationism, and from various friends. They rarely described an instance when they changed their mind on an important issue because they were given one fact or one argument. Often, they dug in under those circumstances—temporarily.

But we do change our minds, and there are lots of ways that happens, and the best of them are about a long, slow process of recognition that a belief is unsustainable.[1] Rob Schenck’s Costly Grace reads much like memoirs of people who left cults, or who changed their minds about evolution or Hell. They heard the counterarguments for years, and dismissed them for years, but, at some point, maintaining faith in creationism, the cult, the leader of the cult, just took too much work.

But why that moment? I think that people change their minds in different ways partially because our commitments come from different passions.

In another post I wrote about how some people are Followers. They want to be part of a group that is winning all the time (or, paradoxically, that is victimized). They will stop being part of that group when it fails to satisfy that need for totalized belonging, or when they can no longer maintain the narrative that their group is pounding on Goliath. At that point, they’ll suddenly forget that they were ever part of the group (or claim that, in their hearts, they always dissented, something Arendt noted about many Germans after Hitler was defeated).

Some people are passionate about their ideology, and are relentless at proving everyone else wrong by showing, deductively, that those people are wrong. They do so by arguing from their own premises and then cherry-picking data to support that ideology. They deflect (generally through various attempts at stasis shift) if you point out that their beliefs are non-falsifiable. These are the people that Philip Tetlock described as hedgehogs. Not only are hedgehogs wrong a lot—they don’t do better than a monkey throwing darts—but they don’t remember being wrong because they misremember their original predictions. The consequence is that they can’t learn from their mistakes.

Some people have created a career or public identity out of advocating a particular faction, ideology, or product, and are passionate about defending every step into charlatanism they take in the course of that advocacy. Interestingly enough, it’s often these people who do end up changing their minds, and what they describe is a kind of “straw that breaks the camel’s back” situation. People who leave cults often describe a sudden moment when they say, “I just can’t do this.” And then they see all the things that led up to that moment. A collection of memoirs of people who abandoned creationism has several that specifically mention discovering the large overlap in DNA between humans and other primates as the data that pushed them over the edge. But, again, that data was the final push–it wasn’t the only one.

Some people are passionate about politics, and about various political goals (theocracy, democratic socialism, libertarianism, neoliberalism, anarchy, third-way neoliberalism, originalism) and are willing to compromise to achieve the goals of their political ideology. In my experience, people like this are relatively open to new information about means, and so they look as though they’re much more open to persuasion, but even they won’t abandon a long-time commitment because of one argument or one piece of data—they too shift position only after a lot of data.

At this point, I think that supporting Trump is in the first and third category. There is plenty of evidence that he is mentally unstable, thin-skinned, corrupt, unethical, vindictive, racist, authoritarian, dishonest, and even dangerous. There really isn’t a deductive argument to make for him, since he doesn’t have a consistent commitment to (or expression of) any economic, political, or judicial theory, and he certainly doesn’t have a principled commitment to any particular religious view. It’s all about what helps him in the moment, in terms of his ego and wealth. That’s why defenders of his keep getting their defenses entangled, and end up engaging in kettle logic. (I never borrowed your kettle, it had a hole in it when I borrowed it, and it was fine when I returned it.)

The consequence of Trump’s pure narcissism (and mental instability) and lack of principled commitment to any consistent ideology is that Trump regularly contradicts himself (as well as the talking points his supporters have been loyally repeating), abandons policies they’ve been passionately advocating on his behalf, and leaves them defending statements that are nearly indefensible. What a lot of Trump critics might not realize is that Trump keeps leaving his loyal supporters looking stupid, fanatical, gullible, or some combination of all three. He isn’t even giving them good talking points, and many of the defenses and deflections are embarrassing.

For a long time, I was hesitant to shame them, since an important part of the pro-GOP rhetoric is that “libruls” look down on regular people like them. I was worried that expressing contempt for the embarrassingly bad (internally contradictory, incoherent, counterfactual, revisionist) talking points would reinforce that talking point. And I think that’s a judgment that people have to make on an individual basis, to the extent that they are talking about Trump with people they know well—should they avoid coming across as contemptuous?

But for strangers, I think that shaming can work because it brings to the forefront that Trump is setting his followers up to be embarrassed. That means he is, if not actually failing, at least not fully succeeding at what a leader is supposed to do for his followers. The whole point in being a loyal follower is that the leader rewards that loyalty. The follower gets honor and success by proxy, by being a member of a group that is crushing it. That success by proxy comes from Trump’s continual success, his stigginit to the libs, and his giving them rhetorical tactics that will make “libs” look dumb. Instead, he’s making them look dumb. So, pointing out that their loyal repetition of pro-Trump talking points is making them look foolish is putting more straw on that camel’s back.

Supporting Trump, I’m saying, is at this point largely a question of loyalty. Pointing out that their loyalty is neither returned nor rewarded is the strategy that I think will eventually work. But it will take a lot of repetition.



[1] Conversions to cults, otoh, involve a sudden embrace of the cult’s narrative, one that erases all ambiguity and uncertainty.

Abolitionist conspiracies, leftists as the “ruling class,” and the pleasure of implausible scapegoating

In the mid-1830s, the British writer Harriet Martineau visited the United States, and she found many slavers who were up in arms about the American Anti-Slavery Society having “flooded” the South with an anti-slavery pamphlet. She asked whether any of them had actually seen the pamphlet, and was met with outrage—how could she doubt the word of gentlemen? A lot of people didn’t doubt the word of those “gentlemen,” and the myth of the 1835 massive pamphlet mailing remains in history books (Fanatical Schemes, see especially 149-150, and Gentlemen of Property and Standing). It never happened. Martineau had already met with the people who had sent pamphlets to one post office, and who had agreed to send no more, so she suspected (correctly) that it hadn’t. She didn’t tell the slavers they were wrong, but she did ask what evidence they had, and their “evidence” was their personal certainty, and the certainty of reliable people, all grounded in what their media said.

This mythical event was brought up in the next Congress, and people acted on the basis of a thing that never happened. The antebellum era had a lot of instances of that kind of thing—the fabricated Murrell conspiracy, various non-existent abolitionist plots, Catholic conspiracies against democracy.

People believed those myths for two reasons (which might actually be one): those myths were repeated endlessly by in-group (us) media, and those myths fit the overall narrative of that in-group media.[1] That overall narrative was one common to cultures of demagoguery: yes, we have a lot of problems, and it might look as though those problems are the consequence of slavery. But they aren’t! All of those problems are caused by the actions of Them.

Slavery had an almost endless number of ethical, practical, and rhetorical contradictions. People who claimed to be Christian rejected and deflected Jesus’ very clear commandment to “do unto others as you would have done unto you” (all cultures of demagoguery fail that test); they ignored, denied, and deflected very clear rules in Scripture about how to treat slaves; they reframed the very clear instructions about caring for the poor and weak as the need to enslave them. In short, Scripture is pretty clear: do unto others as you would have done unto you, take care of the poor and marginal. The problem for people who want to enslave, exterminate, or oppress others and yet want to see themselves as Christian is always how to reconcile the cognitive dissonance.

We reconcile that cognitive dissonance through myths. And, oddly enough, the people who are now rationalizing a system that grinds the faces of the poor engage in the same non-falsifiable and extraordinarily self-serving myth in which slavers engaged: that people who are oppressed deserve their oppression.

This is an example of the just world model, the notion that bad things only happen to bad people, that people who succeed earned that success, and that poor people are poor not because of structural inequities or greed on the part of the wealthy, but because our system is too kind to the poor, making them choose to be poor.

From a Judeo-Christian perspective, the notion that we should be crueler to the poor in order to inspire them to be less poor requires a lot of intricate dancing in regard to Scriptural interpretation, with some ignoring or engaging in intricate explanations of anything Jesus said, in favor of open cherry-picking of the Hebrew Bible. It also requires a lot of intricate dancing in terms of data, with some serious cherry-picking. But, really, when people have decided that Jesus’ saying “Do unto others as you would have them do unto you” doesn’t actually mean, well, doing unto others as you would have them do unto you, they can swallow a camel.

And they swallow a camel by swallowing circular arguments. Given that people whom we oppress are inferior, we can conclude they are inferior. Given that people who are poor deserve being poor, we can conclude that they deserve to be poor. Given that POC should be treated differently, we can conclude that they are different. Given that only inferior races are enslaved, we can conclude that those races are inferior. Given that we need to believe that slaves are happy, slaves are happy.

There are similar myths now: the American military is unbeatable, the free market solves all problems, government does everything wrong, cutting taxes boosts the economy, if you have enough faith you will be healthy and wealthy. People who are or were deeply committed to those myths have (or had) to explain slave rebellions, military quagmires, famines, and situations in which even libertarians want the government to intervene (think of the Tea Party political figures who were outraged by what Obama did in 2008 but are now voting for a bigger bailout).

Failure presents people, and a community, with an opportunity to reflect sensibly on what we’ve been doing and thinking. The collapse of a relationship, failing a test, getting fired–these are all opportunities for us to tell stories about ourselves in which we behave differently.

Or not.

I had a friend who kept getting dumped because, his girlfriends said, he was too critical. I tried to suggest that maybe he should be less critical, but he insisted women were wrong not to appreciate how he was trying to help them. I used to have friends who lost money on timeshares multiple times. Maria Konnikova’s fascinating The Confidence Game describes how con artists con the same people multiple times.

Instead of reconsidering our commitment to an ideology, narrative, or sense of ourselves (a path that would mean admitting to people we were wrong, losing face, and reconsidering all sorts of beliefs and relationships), we have the option of treating this situation as an exception. And it’s an exception because of a lack of will—so if we recommit to our problematic ideology with greater will, then it will work. In other words, instead of the failure of a policy or ideology being an indication we should reconsider it, the problem is that we didn’t beleeeeeve in it strongly enough, and the failure is proof that it was the right course of action all along.

(No matter how many times I see people react that way—and it happens in all the communities I’ve studied that ended up in train wrecks—it surprises me.)

Recommitting with greater will is almost always paired with scapegoating some group. They are the reason that our flawless plan keeps failing. And because They are so cunning and nefarious, we are justified in more extreme measures.

Normally, we tell ourselves and anyone who will listen, we would be kind to slaves, take care of the poor, respect the law (and so on), but we are forced to be heartless and suspend laws by Them. And what continually surprises me about the effectiveness of this scapegoating is how completely implausible the scapegoats are. Slavers picked on abolitionists—who, at the time they started getting scapegoated, were a tiny group of mostly Quakers. Hardly very threatening, and extremely unlikely to be fomenting race war.

Mid-19th century fantasies of a Catholic conspiracy to overthrow the United States involved a highly improbable collaboration among Irish, Italian, and German Catholics (the Irish wouldn’t even let the Italians worship with them in New York, let alone share political power) led by the Hapsburg Emperor and the Pope.

The Nazi fantasy about Jews had them as both communists and capitalists, a neat trick, and was persuasive enough that people accused any critic of Nazism of being either a Jew or a stooge of the Jews. As the scholar of rhetoric Kenneth Burke pointed out, the apparent contradiction was taken by true believers as proof of the cleverness of the Jews.

Rush Limbaugh scapegoats liberals, who are “the ruling class.” As with the scapegoating of abolitionists or Jews, this scapegoating relies on an elaborate and contradictory narrative, in which government employees, university professors (especially in the humanities), and environmentalists (hardly people with a lot of economic or political power), funded by George Soros and Bill Gates, are supposedly more powerful than actual billionaires who hold political office.

That this narrative is implausible and incoherent—if libruls were that powerful, they wouldn’t be grading first-year composition papers—just shows the cleverness of the libruls (as the apparent impossibility of an effective conspiracy of abolitionists, Catholics, Jews was evidence of the brilliant plan). Libruls are like the evil villains in old movies, who, instead of just shooting the hero, create Rube Goldberg machines to kill the hero and his sidekick.

The inchoate nature of the conspiracy (what, exactly, is the goal of the librul conspiracy? To work in the Post Office? Surely clever people would come up with a better endgame than that) means that Limbaugh can’t be proven wrong, that anything and everything can be blamed on the ruling elite, and no evidence that the GOP is actually the problem needs to be considered.

The American Anti-Slavery Society never flooded slave states with pamphlets; the problems with slavery weren’t caused by abolitionists.

[1] “In-group” doesn’t mean the group that’s in power, but the group people are in.

Arguing with extremists

My first experience of the digitally connected public sphere was Usenet in the mid-80s, and since then I’ve spent a fair amount of time arguing with people, including arguing with extremists. Here are some notes I recently made about what I’ve learned by arguing on the underbelly of the internet.

Highly-educated people don’t necessarily argue better than people with a lot fewer degrees.

People reason associatively, grounded in the binary that some things are good and some things are bad. If something is associated with a good thing, it can’t be bad in any way. (This explains why people, in response to substantive criticism of a public figure, say, “S/he couldn’t have done that because s/he did this completely unrelated good/bad thing.”)

Some (many?) people think and reason in binaries and extremes (all or none, always or never) when they’re threatened (and some people are easily threatened). Not everyone does this, but the people who don’t are rare; I’ve seen it across all levels of education, ideological commitment, apparently calm demeanor, and discipline. It’s about how people handle threats (hell, I’ve had people who self-identify as skeptics do this, and I’ve caught myself doing it).

Some people argue vehemently because they really want to be right, and that means that they want really good arguments on the other side, and they’re open to good opposition arguments; some people argue vehemently because they are swatting away any disconfirming information. Those two kinds of people can look really similar in terms of tone, vehemence, and even snarkiness. It takes time to figure out whether someone is open to argument.

On the other hand, people who claim to dislike argument and just want everyone to get along can be the most rigid thinkers and least open to new ideas.

Far too many people don’t know how to do research or assess sources, and much teaching on that subject makes this situation worse. Also, having access to good sources is expensive, and doing good research is time-consuming.

Instead of doing research on the basis of the quality of argument of sources, people tend to rely on gut instincts about trustworthiness, and that generally means confirmation bias and in-group favoritism. This, too, is all over the political and educational map.

People completely misunderstand the issue of “bias” and have an incoherent epistemology about perception—highly educated people might just be worse on this than people on the street. They’re certainly no better.

People use bad examples to stereotype out-group and good examples to stereotype in-group.

People confuse “giving an example” (a datum or quote) with proving a point.

People engage in motivism way too fucking much.

Extremists argue the same way, regardless of where they are on the political spectrum, or even if it’s a political question at all.

People have bad stopping rules when it comes to research.

People pay too much attention to tone.

People tone police women and POC way too fucking much.

Charismatic leadership is a drug, and a lot of people are way too high on it.

People value loyalty to the in-group (and especially to the leader) more than truth because they redefine truth as loyalty.

No argument is too ridiculous if it enables you to say that you were right all along.

If a media source is in-group, makes its audience feel connected with it, and makes its audience feel good about their beliefs and choices, then that audience will remain loyal no matter how many times that media source is just completely wrong.

Far too many people reason deductively from non-falsifiable premises, and think they’ve thereby proven a point to be true.

People are desperate to resolve cognitive dissonance, especially the dissonance created by being fanatically committed to a faction (or unwilling to consider any disconfirming information) and wanting to see ourselves as fair, compassionate, and rational.

People reason from identity way too fucking much.

Unification through a common enemy and a failure of leadership

Photo of Americans being sent to concentration camps
https://anchoreditions.com/blog/dorothea-lange-censored-photographs

A sociologist friend and I were talking about how deeply entrenched it is for people to think in terms of in- and out-groups (Us v. Them), and he joked that the only thing that could unite humanity was an attack from outer space. And there’s something to that—in rhetoric, it’s sometimes called “unification through a common enemy.” The rhetoric scholar Kenneth Burke, in 1939, published an article in which he pointed out that that was one of Hitler’s strategies for uniting Germans. It’s how a lot of families function—everyone is mad at each other until they can agree how much they hate Aunt Agnes. I’ve seen fractious departments unify against an upper administrator. Churchill unified a deeply divided country when its existence was threatened by Nazism—his speeches continually spoke to a common, shared identity, and a common effort (FDR was much the same).

Those four examples show that the impulses that cause us to unite in our shared division can range from the trivial (the family dislikes the aunt, the department dislikes the Dean) to somewhat more important (if the Aunt is trying to defraud the family or the dean is trying to defund the department) to the very existence of the group being threatened (as in the case of the UK). But what of the missing fourth example—Germany?

Germany is a strange case, because many Germans felt deeply threatened by various things—a world economic collapse that threatened large numbers of people with poverty and unemployment. Many Germans also felt threatened by the secularization of education, losses of privilege, and modernization of various kinds, and their sense of group esteem was threatened by the disastrous outcome of the Great War. But their existence wasn’t threatened; their prestige as a nation was, but not their existence as a nation.

But they became persuaded it was. The irony, of course, was that this belief in existential threat was a self-fulfilling prophecy. Germans, persuaded that the Reichstag Fire demonstrated an existential threat, put in power a leader and party that would, actually, lead to the extermination of Germany as a nation, and the extermination of between five and eight million Germans (with about 500,000 killed as part of the racial and political purification programs).

Athens, in the fifth century BCE, was facing an existential threat in the form of the Spartans. Instead of uniting as a city-state to fight that threat, Athenians were more concerned with the existential threat to their faction, with the possibility that the other faction might exterminate them, and so focused their energy on exterminating the other. And they lost the war to Sparta.

What I’m saying is that the existential threat doesn’t have to be real for it to be really effective at unifying, and having a real existential threat doesn’t necessarily unify. What makes the difference is the rhetoric of the leadership.

Churchill and FDR responded to existential threat with a rhetoric that tried to unify the entire country, even the political parties that had recently been their worst critics. Both had opposition members in their cabinets. Both listened to people who disagreed (Kershaw’s Fateful Choices elegantly describes their decision-making processes, and how much they relied on thoughtful attention to the opposition). FDR and Churchill used the existential threat to transcend factionalism. Hitler and the demagogues of Athens manufactured or used the existential threat in order to amplify the factionalism, to equate opposition groups and critics with the external threat, and thereby enable elimination of fellow citizens. Instead of trying to unify a people, their goal was purification through extermination of the opposition.

In a way, COVID-19 is the external threat my sociologist friend joked about. It could be the moment of unification, a moment when we transcend factional disagreement in order to unify against this disease. It could be that moment if political leadership decides to make it that.

Promoting unity is hard, and nobody does it perfectly, but some do it better than others. FDR allowed a rhetoric of internal purification to lead to massive race-based imprisonment, and Churchill treated India as only sort of unified with the UK (enemy enough for mass starvation). But they were better than Hitler or the Athenian demagogues, and they resisted even more extreme forms of internal purification.

We’re in a culture of demagoguery, in which every issue is not just us v. them, but treated as a zero-sum war of existential threat between us and them. Someone saying “Happy Holidays” threatens Christians with extermination because it’s part of the “war on Christmas.” Requiring vaccines is a war on liberty. Trying to reduce poverty is a war. Treating every issue as a war means treating people who disagree with our policy agenda as traitors. That’s a bad idea.

We do have a common enemy in the form of COVID-19; we need a leadership that enables us to transcend our differences to work together. The last thing we need is a leader who exacerbates internal animosity, who openly tries to exterminate dissent, who has a fragile ego that has to be stroked, who refuses to listen to anyone who disagrees, and who is now openly toying with exterminating democracy itself. We need someone even better than FDR, not someone even worse than Cleon.

Bad math, belief, and half Nazis

The above are two very popular tweets (as you can see from the likes), and they rely on a way of thinking about political choices that is often popular. The argument is that you shouldn’t vote for this person because s/he is still in a category of evil people.

You see it all over the political spectrum (we need to stop talking about either a binary or a single-line continuum of political positions—it’s false and damaging, and it fuels demagoguery). In 2016, there were informational enclaves that said that people should vote against HRC because she was a socialist, a fascist, or a neoliberal, and therefore no different from Stalin, Hitler, or Thatcher.

It’s a way of arguing that eats its own premises, and yet it’s so often persuasive. For instance, the argument that you shouldn’t vote for Biden because he’s half the nazi that Trump is has the major premise that you should never choose the thing that is twice as good.

Of course you should choose the thing that is twice as good. You should buy the car that is twice as good, rent the apartment that is twice as good, take the job that is twice as good. When we’re deciding about a car, apartment, or job, we can do that math, but, when it comes to politics, suddenly people can’t see that half a fascist is twice as good as a full fascist, let alone whether Biden is half a fascist.

So, why do people who can take an imperfect apartment that is twice as good as their other option refuse, when it comes to politics, to take an option that is twice as good as the other?

There are a lot of reasons. Here, I want to mention two. First, politics is tied up with identity in a way that getting an apartment usually isn’t (although, people I’ve known for whom their apartment is closely attached to their identity have the same bad math—an apartment twice as good as the other is just as bad as the other); second, people who reason deductively often have false narratives about the past, or don’t care about what has happened. A politics of purity is often connected to a belief in belief.

The first move in that argument is to treat everyone who disagrees with us as in the Other category. There are good arguments that Trump is fairly high on the fascism scale (although with some important caveats, particularly about individualism), but Biden is not a fascist. He’s a third-way neoliberal. But, really, when people are making this kind of argument—HRC is basically Stalin, Sanders is Castro, HRC is Trump—they aren’t putting the argument forward as some kind of invitation to a nuanced discussion about political ideologies. It’s a hyperbolic appeal to purity politics.

Like all hyperbole, the main function of the claim is that it is a performance of in-group fanatical commitment, a demonstration of loyalty on the part of the speaker. The point is to demonstrate that they think in terms of us or them, and they are purely opposed to them.

That seems like a responsible political posture because, in cultures of demagoguery, there are a lot of people (who are bad at math) who decide that being purely committed to the in-group is the right course of action, regardless of whether that has ever worked in the past. They believe that we can succeed if we purely commit to a pure commitment to a pure in-group set of pure policies. That way of thinking about politics—the way to win in politics is to refuse to compromise—is all over the political spectrum.

And, I just want to emphasize: the math is bad. A half-nazi is actually better than a full nazi. A leader who would have done half what Hitler did would have been better than Hitler. Unless you are thinking in terms of purity, and so you don’t actually care about how many people are killed, in which case you’ve fallen into what George Orwell, the democratic socialist, called the fallacy of saying that half a loaf is the same as nothing at all. If you’re hungry, half a loaf is still half a loaf.

A friend once compared it to the trolley problem, in which a person refuses to pull the lever that involves being a participant in an action they really dislike in order to prevent a much worse outcome. I’m not a big fan of the trolley problem as an actual test of ethical judgment, but I think the metaphor is good—it’s a question of whether a person who refused to act (pull a lever that would cause one person to die rather than five) feels that this failure to act is more ethical than acting. When I talk to people who are in this kind of ethical dilemma, it’s clear that they are balking at that moment of their grabbing the lever—they want the trolley to shift tracks; they don’t want Trump to get reelected; they just don’t want to pull the lever.

That was complicated, but all I’m saying is that it’s a question of whether people recognize sins of omission. They don’t object to Biden getting elected; they object to voting for him.

So, how has that worked out in the past? I can’t think of a time when refusing to vote because one candidate was half as bad as the other has worked to lead to a better political situation (but I’m open to persuasion on this), but I can think of a lot of times when it hasn’t. I’ll mention one. It happens to be a time that people could vote for half-nazis, and liberals tried to persuade voters to do exactly that.  

It’s important to remember that the Weimar Communists could have prevented Hitler from coming to power by being willing to form a coalition government, but they wouldn’t because, they said, every other political party (including the democratic socialists) was, basically, fascist.

I’m not saying that compromising principles is always a good choice; a lot of people made the mistake of thinking that they could work with Hitler, that they should stay in his administration (or on his military staff) so that they could try to control him or, at least, direct him toward better actions. They couldn’t. Within a couple of years of his being installed as Chancellor, all the people in his administration who were going to try to moderate him were either fired or radicalized. It took longer with the military, and in that case the people who tried to control him were fired, strategically complacent, or radicalized. But it was the same outcome. There was no working with Hitler—there was only working for him.

If we want to prevent another Hitler, then we have to vote against him.

Time management for associate professors

I posted something about time management for graduate students and assistant professors, and so now I should write something about associate professors, and that means writing about imposter syndrome.

The presumption, not always true, is that associate professors are oriented toward promotion to full. The advice I’m giving here is oriented toward finding a manageable and sustainable career–whether it’s to get promoted, or to remain at the associate level.

My crank theory is that people who developed a sustainable set of work practices (that is, ones not driven by panic or binge writing) as a graduate student or assistant professor just need to keep doing what they were doing once they get tenure. They’ll face many of the same decisions—whether to take on a leadership position in the department, college, or discipline, what the next set of scholarly projects should be, how many new courses to develop—but, if they negotiated those shoals well as an assistant professor, things should be okay.

There is a lot of shaming rhetoric about people who remain at the level of associate professor, and that shaming makes me ragey. An awful lot of departments (not my current one, btw—the full profs have heavy service responsibilities) enable full professors to focus on scholarship because the whole department is functioning on the backs of those “stalled” associate professors. There are lots of reasons that people lose the thread of their scholarly life, many of which I’m not talking about here (ranging from bad, such as a family health crisis, to good, such as deciding that promotion isn’t desirable), but one of them is that there are some very toxic narratives about writing and scholarly productivity.

A lot of people say our world is oriented toward extraverts, but it really isn’t; it’s oriented toward narcissists. A lot of narcissists flame out in grad school; a lot flame out as assistant professors. But, in my experience, narcissists who make it to associate make it to full.

So, this leaves us with non-narcissists, and why so many really good and smart people who have produced enough good writing to get where they are have trouble producing enough to get any further. One common explanation is imposter syndrome, but I don’t think that’s the problem; I think the problem is how people try to get past it.

Every reasonably accomplished person I have met has imposter syndrome—feeling that they have gotten more rewards and praise than their work actually merits, that they only got where they were through luck. The only people I have ever met who don’t have imposter syndrome are narcissistic fucks. So, there is no “getting over” imposter syndrome. In fact, we are always pretending to be more sure than we are; we fling ourselves into new projects when we don’t know what we’re doing; we make claims we aren’t entirely sure are accurate; we decide we can make a contribution to a field even when we haven’t actually read everything in that field. And people who succeed haven’t done so entirely on merit—only narcissists think that—hard work is necessary but not sufficient for success. People with imposter syndrome are honest about the intellectual precarity of our work; narcissists don’t know they’re imposters, but they are. They don’t know they’re imposters because narcissists can never really look at themselves from the position of a reasonably skeptical group of people who know things they don’t; they dismiss those people as fools. People with imposter syndrome know there is that group, although we don’t always know who they are.

One way that people manage imposter syndrome is through perfectionism. Some people refuse to submit anything for publication unless it’s perfect—that way, no one will expose them as an imposter. These are people who spend years working on things that they refuse to submit until perfect—that is, beyond criticism—and so they never submit them. Or they don’t write at all, and just imagine the perfect thing they would write if they weren’t so swamped by obligations that they keep taking on.

Another way that people manage imposter syndrome (and fear of failure, and various other related issues) is by letting panic take the wheel. People who have succeeded in writing through high school, college, and coursework often have a truncated writing process: they are faced with an assignment, and they first decide on their argument, and then they decide on the organization for that argument, and then they write it out. (A lot of writing teachers think they’re teaching “the writing process” by teaching this linear method. They aren’t.) If you’re not a narcissist, and you’re trying to follow the “process” you’ve been taught, then, when you sit down to write, you’re trying to write, critique, and revise all at the same time.

And that’s how you get a writing block.

One of my crank theories is that some people have gotten to associate professor through generating enough sheer panic to make it past the crunch points. But that doesn’t mean the solution for either associate professors or people who want to mentor them is to panic them. (I’ve had full professors tell me that the reason that associates can’t publish is that they aren’t panicked enough—a sweet example of how Strict Father Morality is a pond into which supposedly lefty academics dip their toes from time to time). People who let panic take the wheel seem to think that people should spend their entire career in a panic in order to produce enough.

A lot of “stalled” associate professors are people who have been given that advice, and told that narrative, and have said, “Fuck that shit.”

And so they should. So should we all. It makes sense to reject a toxic narrative about productivity.

If you’ve never developed a long-term sustainable work practice—if your only method of motivating yourself to write is to be in a white-hot panic about your situation (and it appears that the only other method is to be an asshole narcissist)—then the decision to remain a permanent associate professor seems not only sensible, but compassionate to the people in your life.

The problem isn’t that associate professors are insufficiently panicked—the problem is that far too many people promote a writing process dependent on panic and valorize a toxic narrative about success.

Once you get tenure, you get more committee assignments. Post-tenure life looks different from the challenges of being an assistant professor, but it really isn’t—you still have to figure out which scholarly projects to pursue, which committee assignments to take, which new classes to develop. The difference is one I have a hard time describing. Despite academics’ reputation for being lefty, far too many academics (including several department chairs I’ve known) have thoroughly embraced the neoliberal narrative of what it means to be a good worker—you throw yourself on the pyre of your own career to meet your institution’s standards of “good work.” You live and breathe in a world of panic, 60-hour work weeks, and self-congratulation for having no boundaries about work.

There is another option. It’s about creating a sustainable relationship to work.

And the first step in that creation of a sustainable relationship to work is stepping away from a writing process that relies on panic. A responsible graduate program would ensure that first step happens in graduate school, but we aren’t in that world (although there are many graduate advisors who are trying to do exactly that).

The best way to respond to imposter syndrome is to stop approaching every step in the writing and publication process as the moment we might be exposed to the world, and instead to get comfortable with writing shitty stuff, with submitting things that someone might slam, and with knowing that we will never reach a point in our careers when no one is telling us that what we wrote is shitty. And they may be right. So?

That response involves a lot of possible moves. Most of them involve abandoning the habit of treating each submission as risking everything, and instead working because you want the outcomes the work will get, because you’re interested in the crafting of the work, because you want others to know about the insights you have. It also involves breaking the writing process into at least three different kinds of work that don’t happen all at once—creating, critiquing, revising. It involves walking away from perfectionism. It involves rejecting (and getting help rejecting) toxic narratives about how much we should be working; it involves finding allies and mentors. It doesn’t necessitate giving up on scholarship, although that might be a viable and joyful choice (some people decide they really love administration, for instance), and it certainly doesn’t necessitate living life in a state of panic.

Time management for assistant professors

weekly work schedule

In an earlier post, about time management for graduate students, I mentioned that there is a limit as to how much a person can write in a day. I also think that a lot of people get burned out working day after day on the same topic, and, if they don’t get burned out, they lose their ability to think critically about what they’re writing. Some people manage that second problem by working on multiple projects at the same time: when they just can’t work on one project any longer, they switch to another, work on that for the next three weeks or so, and then come back. I can’t do that.

In many fields, a graduate student teaches one class (perhaps two), is on very few committees, and has one or two major scholarly obligations (finishing the dissertation and trying to get something published). The kinds of classes that graduate students teach often have fairly established syllabi (or, at least, course requirements).

There’s a post here where I talk some about the challenges. The time management challenges for assistant professors are, I think (and I was an assistant professor for a long time—at three different institutions), very different from those of graduate students or full professors, but they are much like the issues for associate professors (with a big exception I’ll mention).

These challenges are: much more open-ended teaching opportunities, the vagaries of establishing a professional identity, service requirements, multiple scholarly obligations, and (if it wasn’t already a challenge in graduate school) often a family or just very different sorts of living conditions.

Perhaps somewhat paradoxically, one of the challenges of being an assistant professor is the freedom regarding teaching. Often, departments rely on new hires to create new courses, modify the curriculum, or in other ways be the innovators. There are good reasons for that reliance—assistant professors are likely to be trained in ways that are very different from the older faculty, simply because they were recently at a very different program. It can be tempting to create too many new courses—it’s intoxicating to teach entirely new ones, and to have the chance to work in programs (such as honors or mentoring programs) that are often overload. A more strategic choice is to spend the first year creating a repertoire of courses, and then tinkering with them for a while.

There’s a similar problem with service—assistant professors want to make themselves central to the department, and want to be liked. It’s important to make strategic choices about obligations. And, it’s also important to keep in mind that women and POC get a lot more pressure to take on service-heavy responsibilities, for both good (representation) and bad (tokenism) reasons. Learning to say, “I’d love to do that after I have finished my book” (or “enough for tenure” or “have tenure”) in a genuinely enthusiastic way can be very useful.

It’s important to go to conferences, since it’s good to network (find other scholars working on similar projects, find out who might be a good co-panelist, co-author, co-editor of a collection), and also good to get a sense of who people are citing a lot, where the field appears to be going.

But it’s often hard to figure out which conferences, and how many, and it isn’t a good idea to spend a lot of time writing conference papers that aren’t candidates for articles or chapters. Conferences used to be good for chatting with editors (to try to figure out if a project has a market), but presses are attending fewer conferences, so that’s a less reliable benefit than it used to be.

Many students (especially ones who took some time between undergrad and grad school) have children in graduate school; many don’t until they’re assistant professors. Some people get tired of crappy student apartments and really want a house. Those kinds of choices have some odd consequences—I became much more productive when I reduced my commute, something I hadn’t expected. So, choices to live far from campus (because it’s more affordable, schools are better, or other reasons) can have unexpected costs in time and energy.

In short, being an assistant professor is a challenge in terms of time management because, even more than graduate school, it involves making decisions without enough information to make good ones—without knowing what all the options really are, their relative advantages and disadvantages, their potential consequences. It’s just as much uncertainty as graduate school, but with more choices.

The most obvious course of action is to get good mentoring, but even that is choosing among several paths in a forest of unknowns. While I feel comfortable giving advice in the abstract, I don’t think I know enough about conditions now for junior scholars to make a lot of specific recommendations. I think it’s useful to have several mentors—someone just one rank above at a different institution, someone high up at your institution, someone just one rank above at your institution.

Because I am none of those things, the advice I’m about to give should be taken with a grain of salt (or more). Regardless of the publication standards for tenure at your institution, publish. I know that isn’t easy, but publication is the scholarly equivalent of “fuck you” money. It gives you the ability to move (which, paradoxically, makes it easier to stay). If you’re at an institution that requires a book for tenure, you have to have a manuscript ready to submit to a publisher by your third year.

A lot of graduate students spend the year or two (or three) that they’re writing their dissertation in a white-hot panic; they develop back problems; they sleep badly. Sometimes there is a six-month period when they are basically alternating between terror and panic. That happens because very few programs prepare students well for that last marathon of dissertation-writing (and an unhappy number of faculty believe that their job is to make that last stretch a boot camp).

As I’ve tried to write about elsewhere, the unfortunate consequence is that people come to rely on a writing process that is driven by panic. That is not sustainable as an assistant professor. But, for some people, that’s the only way they know to write—they only know how to run sprints, and so they spend some amount of time (perhaps the last two years, when it’s publish or get fired) in that same white-hot panic, making everyone around them miserable, but most of all themselves.

That’s an emergency, not a career. The goal during graduate school should be to find a work process that is sustainable for life. But there really isn’t a lot of incentive to do that. Graduate courses inevitably reward treating paper writing as a sprint, and, despite the best efforts of the best advisors, so many documents leading up to the dissertation are written out of panic—because of fear of failure, imposter syndrome, panic-driven writing processes, decisional ambiguity. Good writers, and anyone who gets into graduate school is a good writer, are people accustomed to sitting down and producing a product. That they might have to draft, revise, and cut can feel like a failure. Graduate students spend a lot of time trying to reproduce the writing processes that got them into graduate school, even though those processes are no longer working. This problem of remaining committed to panic-driven writing processes isn’t helped by the unpleasant fact that there are advisors who actively work to keep students sprinting (they deliberately work their advisees into panics, they delay reading material, they believe their job is to “toughen up” students, they have panic-driven writing processes themselves and can’t imagine any other).

Since it is so very possible to write a dissertation in a year of sheer panic, as a series of exhausting sprints, a lot of assistant professors treat trying to publish enough to get tenure as the same world of panic and sprinting that got them to finish their dissertation. That is a very bad decision.

Here’s what I wish someone had told me when I got my first job: create the work life you want to have for your entire career; stop treating your work responsibilities as a series of crises.




Trump’s border rhetoric/policies and COVID-19

a small concrete ball with an entrance
A four-person bomb shelter in Munich

Right now, I’m seeing a lot of people say that the COVID-19 crisis proves that Trump was right in his controversial policies to shut down the borders. I’m seeing it in enough different places that it’s clearly become a talking point getting repeated as a truism in pro-Trump media and communities. It’s a really interesting argument because many people treat it as a clobber argument—one that should end the debate. But critics of Trump don’t find it all that persuasive. Why not?

There are a lot of reasons, including that some people won’t grant Trump credit for anything (just as there are Trump supporters who won’t acknowledge any criticism of him)—that’s just rabid factionalism.

Another reason has to do with how people think about politics (and lots of other things). Many people reason associatively. There’s a famous quiz for testing thinking processes that has questions like this:

There is a group of women, 30% of whom are librarians, and 70% of whom are nurses. Mary is one of those women, and she is 35. What are the chances that she is a librarian?

A. 10-40%
B. 40-60%
C. 60-80%
D. 80-100%

A fair number of people will pick A, the range that contains the 30% base rate—which is the right answer.

If the example is:

There is a group of women, 30% of whom are librarians, and 70% of whom are nurses. Mary is one of those women, and she is 35 and wears glasses. What are the chances that she is a librarian?

A. 10-40%
B. 40-60%
C. 60-80%
D. 80-100%

Under those circumstances, a fair number of people will pick a higher percentage, as though the added detail “wears glasses” changes the chances of her being a librarian. But that detail doesn’t change the chances—there are, as far as I know, no studies showing that librarians are more likely to wear glasses than nurses. Wearing glasses is something we associate with librarians, largely because of movies and TV. The detail is related associatively, not logically.
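If you want to see the arithmetic behind that claim, here is a minimal sketch using Bayes’ rule. The 40% glasses rate is an assumption I’ve made up purely for illustration; the only thing that matters is that it’s the same for librarians and nurses—that is, that the detail is non-diagnostic.

```python
# A minimal sketch (my numbers, not the quiz's) of why a non-diagnostic
# detail shouldn't move the estimate.
# Bayes' rule: P(librarian | glasses) =
#   P(glasses | librarian) * P(librarian) / P(glasses)

p_librarian = 0.30
p_nurse = 0.70
p_glasses_given_librarian = 0.40   # assumed rate, for illustration
p_glasses_given_nurse = 0.40       # assumed equal: glasses tell you nothing

p_glasses = (p_glasses_given_librarian * p_librarian
             + p_glasses_given_nurse * p_nurse)

p_librarian_given_glasses = (p_glasses_given_librarian * p_librarian) / p_glasses
print(round(p_librarian_given_glasses, 2))  # 0.3 -- still the 30% base rate
```

If glasses really were more common among librarians than nurses, the number would move above 30%; as long as the rates are equal, it can’t.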

Another example of that kind of thinking is to ask one group of people how many calories a meal has, such as a meal consisting of 6 ounces of poached chicken breast and 1 cup of rice, and to ask another group of people about the calories of a meal consisting of 6 ounces of poached chicken breast, 1 cup of rice, and a salad (4 ounces mixed green lettuces, 3 cherry tomatoes, and 1 tablespoon oil and vinegar dressing). A lot of people will estimate fewer calories for the meal with the salad than for the one without. (Sometimes even the same people, asked about both meals, will rate the one with the salad lower.)

Of course, the meal with the salad has more calories, but people think it doesn’t because salads are associated with healthy food, and healthy eating is associated with consuming fewer calories.
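To put rough numbers on it—the figures below are my own ballpark estimates, not data from those studies; the exact values don’t matter, because adding food can only add calories:

```python
# Ballpark, illustrative calorie estimates (assumptions, not study data).
chicken = 280   # ~6 oz poached chicken breast
rice = 200      # ~1 cup cooked rice
salad = 100     # greens, cherry tomatoes, 1 tbsp oil-and-vinegar dressing

meal_without_salad = chicken + rice           # 480
meal_with_salad = chicken + rice + salad      # 580

# Whatever the salad "counts" as, the total can only go up.
print(meal_with_salad > meal_without_salad)   # True
```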

A few years ago, I had a funny conversation with someone about McDonald’s—they said that they got the fried chicken sandwiches rather than any of the hamburgers (even though they liked the hamburgers more) because the chicken sandwiches had fewer calories. Actually, they don’t. Again, it’s a question of association—chicken is associated with healthy food, and so this person was simply assuming that chicken sandwiches had fewer calories. I had a similar conversation with someone who bragged that she didn’t let her children drink milk for health reasons; she gave them fruit juice instead.

I once lived somewhere that, several years before, had had a series of burglaries that took place in the middle of the day, while people were away at work. Several of the neighbors responded by leaving very bright outdoor lights on all night, and that’s an interesting response. It wasn’t going to make any difference as far as preventing the burglaries—they happened during the day. But daytime burglaries are burglaries, and burglaries are associated with danger. And leaving lights on at night is associated with safety: safety against a different kind of burglary, but one that’s associatively in the same category as the daytime ones.

So, did the policy of leaving lights on protect those neighbors against the burglars who were active in the neighborhood? No, but it protected them against something, and so seemed like a good policy.

When we’re frightened, we have a tendency to believe that protecting our borders (physical, biological, ideological) is a good plan, simply because it’s associated with protection—regardless of whether that particular way of protecting our borders will actually prevent the outcome about which we’re frightened. We protect our house against one kind of burglary, but not the one actually threatening us.

Trump’s policies regarding “borders” have as much logical relevance to COVID-19 as leaving lights on all night had for daytime burglaries. Trump’s policies were (and are) about blocking land-based immigration from Mexico and any immigration (or travel) from various Muslim countries. He never did anything about Americans travelling to and from China, and that’s how we got COVID-19. As Jeff Goodell says, “In fact, the travel ban was a failure before it began. ‘You can’t hermetically seal the United States off from the rest of the world,’ Rice says. For one thing, the ban only applied to Chinese citizens, not to Americans coming home from China or other international travelers, or to cargo that was coming into the U.S. from China.”

His rhetoric associated various Others with evil and danger, but never in a way that would have kept the US safe from this virus. And, despite what many of the people repeating the talking point about his policies being right seem to think, Trump got his way with his travel bans. They went into effect.

So, this talking point is simply saying that Trump was right to make Americans fearful about our borders, but he didn’t make Americans fearful about borders. He made Americans fearful about Mexicans and Muslims, and now he’s trying to make us fear the Chinese. Viruses don’t have a race, and they don’t see race. Building the wall wouldn’t have prevented COVID-19. His travel ban (which was instituted) didn’t prevent COVID-19. His second travel ban (about which he bragged) was ineffective.

That Trump’s rhetoric is a rhetoric of fear of Others, and that his policies are associated with that fear, doesn’t mean his policies were effective. That two (or more) things are associated in our minds is not actually proof that they are either causally or logically connected. They’re just associated in our minds, and sometimes in someone’s rhetoric.