Everyone claims that they’re forced into war

Bill O'Reilly claiming there is a war on Christmas

[I’m back to working on a book I started almost ten years ago, that came out of the “Deliberating War” class. I’m hoping for a book that is about 40k words, so twice the length of my two books with The Experiment, but half the length of any of my scholarly books. It starts with “The Debate at Sparta,” goes to this (hence the comment about a previous chapter), moves to wankers in Congress in the 1830s, and then I think the appeasement rhetoric, Hitler’s deliberations with his generals, Falklands, and then metaphorical wars (like the “War on Christmas”). I wanted to post this section for reasons that are probably obvious.]

When I had students read Adolf Hitler’s speech announcing the invasion of Poland, they often expressed surprise—not that he had invaded Poland, but that he bothered to rationalize it as self-defense, presenting Germany as a perpetual victim of aggression. They expected him to warmonger openly, not to claim that Germany was a victim, let alone that he was forced into war by others. He had been quite open in Mein Kampf about his plans for German world domination, and he wasn’t the first leader of Germany to plan to achieve European hegemony through war—why claim victim status now?

And I explained that, regardless of their motives or plans or desires, people generally don’t like to see themselves as exploiting others, or as engaged in unjust behavior. And even Hitler needed to maintain the goodwill of a large number of his people—while the actual motives might have been a mixture of a desire for vengeance, doing-down the French, relitigating the Great War, making Germany great again, racism and ethnocentrism, and German exceptionalism, Germans (just like everyone else) wanted to believe that right and justice were on their side. It’s rare, in my experience, that people explaining why they should go to war (or, as in the case of Hitler and Poland, why they have gone to war) will claim anything other than that they were forced into war, that they tried to negotiate their concerns reasonably, and that their actions are sheer self-defense. One of the functions of rhetoric is legitimating a policy decision; in the case of arguing for immediate maximum military action, the position considered most legitimate is self-defense. So, almost everyone claims self-defense. Even the “closing window of opportunity” line of argument for war (including when used by both sides, as in the Sparta-Athens conflict) is an assertion of a sort of “pre-emptive self-defense”: we are not in immediate danger of extermination, but the enemy will exterminate us some day, and this is our best opportunity to prevent that outcome, so it is self-defense to exterminate them.

There is an interesting exception. According to Arrian of Nicomedia (a Greek historian probably writing in the second century AD), in 326 BCE Alexander the Great faced resistance from his army. He was on the Beas River (known to the Greeks as the Hyphasis), considering conquering the Indian region just past it, but his army was less than enthusiastic. Arrian says, “the sight of their King undertaking an endless succession of dangerous and exhausting enterprises was beginning to depress them,” and they were grumbling. Scholars argue about whether the incident should properly be called a mutiny, but of more interest rhetorically is that the speech Arrian reports is one of the few instances of a genuinely “pro-war” speech, one in which the rhetor doesn’t base the case on self-defense.

Alexander begins his speech by observing that his troops seem less enthusiastic than they had been for his previous adventures, and goes on to remind them of how successful those ventures have been.

“[T]hrough your courage and endurance you have gained possession of Ionia, the Hellespont, both Phrygias, Cappadocia, Paphlagonia, Lydia, Caria, Lycia, Pamphylia, Phoenicia, and Egypt; the Greek part of Libya is now yours, together with much of Arabia, lowland Syria, Mesopotamia, Babylon, and Susia; Persia and Media with all the territories either formerly controlled by them or not are in your hands; you have made yourselves masters of the lands beyond the Caspian Gates, beyond the Caucasus, beyond the Tanais, of Bactria, Hyrcania, and the Hyrcanian sea; we have driven the Scythians back into the desert; and Indus and Hydaspes, Acesines and Hydraotes flow now through country which is ours.”

It is an impressive set of accomplishments, but Alexander goes on to make an odd (and highly fallacious) sort of slippery slope argument—since we’ve accomplished so much, he says, why stop now? Is Alexander really proposing to keep conquering until they start losing? If people have gained territory in war, the cognitive bias of loss aversion (we hate to let go of anything once we’ve had it in our grasp—the toddler rule of ownership) means we will go to irrational lengths to keep from losing it, or to get it back. Since that bias will kick in as soon as he stops winning, he is, in effect, arguing for endless war. It’s one thing to say that we have to fight till we exterminate a specific threatening enemy, but another to argue for world conquest, for an endless supply of enemies. Yet that does seem to be his argument: “to this empire there will be no boundaries but what God Himself has made for the whole world.”

He says that the rest of Asia will be “a small addition to the great sum of your conquests,” easily achieved because “these natives either surrender without a blow or are caught on the run—or leave their country undefended for your taking.” But, if they stop now, “the many warlike peoples” may stir the conquered areas to revolt. In other words, he has the problem of the occupation (it’s always the occupation). That argument is the closest that he gets to a self-defense argument, and he isn’t claiming that Macedonia faces extinction unless they try to conquer India; he’s saying that they might lose what they’ve gained. And it’s a vexed argument. First, are the people in Asia to be feared or not? They seem at once easy to conquer and yet a threat to the Macedonians. Second, and more important, he has established an “ill” (there might be revolt) that isn’t solved by his plan (conquering all of Asia). No matter how much he conquers, unless he conquers the entire world, there will always be a border that has to be defended. And conquering more territory doesn’t make it easier to occupy existing conquered areas.

I mentioned in the previous chapter that the complicated range of options available to one country in regard to provocative action on the part of another tends to get reduced into the false binary of pro- or anti-war. Rhetors engaged in demagoguery do the same thing.

There were rhetors opposed to the Bush plan for invading Iraq who were not opposed to war in general, or even invading Iraq in principle, but they wanted to wait till the action in Afghanistan was completed, or they wanted UN approval, or they wanted to begin with more troops. Yet, they were often portrayed as “anti-war.” Similarly, Alexander’s troops can hardly be called “anti-war”—they’ve spent the last eight years fighting Alexander’s wars. They don’t want this war, at this time.

This tendency to throw people opposed to this war plan into the anti-war bin is ultimately a pro-war move because it makes the issue seem to be war, rather than the specific plan a rhetor is proposing. It isn’t really possible to deliberate about war in the abstract; we can only deliberate about specific wars, and specific plans for those wars. And, since being opposed to war in the abstract is an extreme position, the tendency to describe the problem as pro- v. anti-war puts the harder argument on anyone objecting to this war—they look like they’re pacifists or cowards or they don’t recognize the risks the enemy presents. They can easily be framed as though they are arguing for doing nothing (which is how they’re almost always framed). I’m not saying that the general public should deliberate all the possible options and military strategies—in this chapter I’ll talk about some ways such open deliberation can contribute to unnecessary wars—but that we should remember that it’s rarely (never?) a question of war or not. We have options.

If another country has done something provocative, we can respond with: immediate maximum military response (going to war immediately); careful mobilization of troops, resources, and allies that might delay hostilities (but we fully intend them to happen); limited military response; a show of force intended to improve our negotiating position when we are genuinely willing to go to war; a show of force that we have no intention of escalating into war (a bluff); economic pressures; shaming; nothing. Even the last option isn’t necessarily an anti-war position—it might simply mean that this provocation doesn’t merit war.

But notice that Alexander doesn’t have all those options because the countries he wishes to conquer have done nothing provocative, other than to exist. If there is a legitimate casus belli—that is, if a country has strategic or political goals other than sheer conquest—then negotiation is possible, and the threat of war can add rhetorical weight to one side or another in that negotiation. If conquest is the goal, however, then the “negotiations” are simply determining the conditions of surrender (or, as in the case of the “Melian Dialogue,” allowing the choice between slavery and extermination).

In the case of Hitler, he tried to look like someone who had negotiable strategic and political goals, and he succeeded for quite some time. His rhetoric about the invasion of Poland was part of that strategy: he presented himself as though sheer conquest wasn’t his goal, while actually using negotiation as a way of keeping his window of opportunity open as long as possible. Alexander makes no such move, perhaps because the rhetorical situation meant he wasn’t constrained by the need to establish some kind of legitimacy for his hostilities. His troops didn’t need to be told that this was anything other than a war of conquest. They’d known that for eight years.

Arguing with people who want the US to be a theocracy of their beliefs

Ollie’s bbq, the subject of the SCOTUS case, Katzenbach v. McClung (https://www.oyez.org/cases/1964/543) [image from here: http://joshblackman.com/blog/wp-content/uploads/2010/04/scan0001.bmp]


Someone asked me about arguing with someone who says we should have the death penalty for homosexuality because Leviticus 20, and it turned into my writing a blog post I’ve been thinking about for a while.

How do you argue with someone who says they’re Christian, and who cites Leviticus 20:13 as proof that “conversion therapy” (using the cover of psychology to abuse people) is good, and allowing non-het people full civil rights is bad?

Trying to argue with people who use Leviticus (and other “clobber verses”) to support homophobia is hard because they don’t understand their own argument. They’re just saying something that makes them feel better about the commitments they have for reasons not up for argument. Persuading them to understand the problems with the various claims they’re putting forward isn’t about refuting those claims, but about getting them to notice those claims don’t add up to a coherent position.

Too often, we think that persuasion involves changing what people believe, but, in my experience arguing with extremists all over the internet (and all over the political spectrum), persuasion requires getting people to reconsider how they believe.

Let’s imagine that you have a friend, call him Rando, who has cited Leviticus 20:13 to argue that we should not allow gay marriage, that “conversion therapy” is good, and that overturning Obergefell v. Hodges is only slightly less important than overturning Roe v. Wade or Brown v. Board. Oh, sorry, that last one isn’t supposed to be said out loud (although it too was a Supreme Court decision that prohibited white Christian evangelicals from dragging their religious beliefs into the civic realm).

I have to start by pointing out that Leviticus 20:13 says nothing about whether conversion therapy is effective, nor whether we should allow gay marriage. But it does say that the death penalty is involved. Pretty clearly.

Rando has a serious problem with his citing that text as authoritative unless he wants the death penalty for homosexual acts. If he sincerely believes that Leviticus 20:13 condemns consensual gay sex (it probably doesn’t), and that we must follow it, then he’s insisting on the death penalty for gay sex. If he is citing that Scripture as authoritative, and he isn’t advocating the death penalty for homosexual acts, then he is cherry-picking the bits of the verse he cites as authoritative.

He’s cherry-picking Scripture, while pretending he isn’t. Rando does that a lot.

So, how do you argue with him? The rhetorical problem is that Rando believes four things: 1) his interpretation of Scripture is right because that interpretation makes sense in light of everything else Rando believes; 2) he can find reasons to support his interpretation; 3) if you “just look” at the evidence, and you’re a good and reasonable person, you can see the truth (naïve realism); 4) if you don’t think the truth of any situation—including the true interpretation of Scripture—is immediately and completely clear to people of good will and intelligence, then you’re a hippy relativist who thinks all interpretations are equally valid.

If you’re trying to persuade Rando to change his mind, then it all comes down to the first and fourth. Arguing with Rando about his interpretation of Leviticus 20 is really arguing with him about how he reads Scripture and how he thinks about belief (the binary of certain or clueless). If Rando believes the first and fourth, then he believes that being open to persuasion about his reading of Scripture is a sin–he thinks being less than fully committed to what his church tells him is right amounts to being a hippy smoking dope and saying people can believe whatever they want.

That’s why arguing with Rando is so hard. You aren’t arguing with him about claims; every argument in which he engages is an argument about whether he’s totally right or there is no right and wrong at all. That’s why he digs in so very, very hard.

What follows is drawn from my experience of arguing with Rando over the years when it comes to the Leviticus argument.

Rando might be the kind of person who wants the US to be a theocracy of their beliefs (he’ll call it a “Christian nation” but that isn’t what he means—he has zero intention of including Christian denominations with which he disagrees, let alone that asshole who argues with him in Bible study). He wants the US to enforce his reading of Scripture. What he wants isn’t a “Christian” nation for a couple of reasons. The first is that Christians disagree about a lot of things, so many that Christians benefit from the notion of a separation of church and state. After all, a lot of the crucial rulings about separation of church and state were because Christians were being legally disadvantaged and prohibited from practicing their religion by other Christians. Keep in mind the number of times that Christians have killed one another in the name of religion, from the Albigensian massacres through the death toll in Ireland.

Rando doesn’t want a “Christian” nation—he wants a “nation that makes my way the only way.” In my experience, if you point that out to Rando, he won’t understand the point. When you point out that he wants a nation that would persecute other Christians, and not allow them to practice their religion, he’ll say that those practices aren’t really Christian. He’ll say those people are rejecting the Bible, cherry-picking, or reading it in a biased way. His model of exegesis is (and various Randos over the years have said this to me), “Just read the Bible.”[1]

One interesting strategy is to point out that even figures like Augustine, Luther, Jerome, and Calvin don’t agree on crucial aspects of Scripture, and all of them said that Scripture is unclear in parts. So, is Rando claiming to be smarter than Calvin? A better reader of Scripture than Calvin? (It can also be fun to point out that Calvin didn’t use the King James translation.)

Everyone picks and chooses from Scripture—does Rando’s church ban pearls in church? Or braided hair? Does the altar follow the rules laid out in Deuteronomy?

A lot of times the impulse is to ask if he eats shellfish, but that argument isn’t a great one–Paul explicitly rejects the rules about food (and animal sacrifice).

But there are strategies that sometimes work. One is asking Rando if he follows all the laws in the Hebrew Bible. Does he have to marry his sister-in-law if his brother dies? In my experience, he’ll say that those rules are cultural, and peculiar to the time, and then you can point out that homosexuality is also very much a cultural issue. (That argument can get pretty weird, even unintentionally funny on Rando’s part, and you get extra points if it gets to the point where Rando shows he spends a lot of time thinking about what gay men do in the bedroom.)

Sometimes Rando will admit that Scripture requires a process of interpretation, but he’ll insist that his process is not something he is imposing on Scripture, but something in Scripture. He’ll say that Scripture has two kinds of laws, civic and moral (this is just the cultural argument above, but you don’t end up getting TMI about Rando’s thoughts on gay sex). Civic laws are time and culture-specific, but the moral laws are timeless and endorsed by Jesus. This is not a distinction that appears anywhere in the Hebrew Bible, even implicitly. It’s just a way that Rando can rationalize his cherry-picking.

Leviticus 20, for instance, has a prohibition that is often read as prohibiting same-sex relations (it doesn’t). Rando wants to keep that one as a moral law. But Leviticus 20 also prohibits seeing one’s aunts naked, having sex with the followers of Moloch (how worried should we be about that?), having sex with a menstruating woman, or mixing up clean and unclean beasts. Those prohibitions are interspersed with the rest—it isn’t as though Leviticus 20 is only about what Rando wants to call “moral” laws (unless he’s squeamish about pigs, I guess). And that’s the way all the various prohibitions in the Hebrew Bible are (and quite a few in the Epistles)–if there is a distinction between cultural and moral, it’s a distinction that we, as interpreters, choose to make. There’s no reason to think that the authors saw themselves as creating two different kinds of prescriptions and proscriptions.

Jesus rejected some of the Hebrew Bible laws (such as the imposition of the death penalty), and strengthened others (such as loving our neighbor), but he never did so by saying, “Well, those were just cultural, but these are moral.” He did it on his own authority. And, tbh, if you’re Jesus, you get to do that. Rando isn’t Jesus.

Since Jesus never condemned homosexuality, its inclusion in the moral laws that Jesus strengthened is a bit vexed.

Here’s the final point I’ll make about the cultural/moral distinction being a filter we impose to make sense of Scripture, rather than one Scripture commands us to use: were that distinction in Scripture, and were Rando’s application of that distinction not motivated reasoning, then there would be unanimous (or nearly unanimous) agreement in the Christian tradition as to what rules we should keep and which ones we shouldn’t. Or even agreement on one of those categories. And there isn’t. To pick one example from Leviticus 20, Calvin was very strict about Sabbath keeping, Luther not so much. Major American denominations (*cough* Southern Baptists *cough*) treated the presence of slavery in Scripture as proof that it was God’s will, while rejecting various specific practices (such as jubilee) as cultural.

So, once again, Rando’s position—that his reading of Scripture is Scripture, and anyone who disagrees with him is imposing their prejudices onto Scripture—necessitates that he say he’s better at interpreting Scripture than major theologians in the Christian tradition.[2] No one in the history of Christianity got that distinction right, but Rando has? Once again, he’s smarter than Calvin? Rando’s distinction isn’t in Scripture; it’s in his head.[3]

A variation on the strategy of trying to make Rando take seriously his own reliance on the Hebrew Bible rules is to ask if he wants the US to have as its legal code all of the rules in the Hebrew Bible. Again, his answer is no. If you ask why he wants the US to follow the rules he personally thinks matter, you get one of two answers. Both are dependent on the way he reads Scripture (and thinks about belief) mentioned above—that he (or his church) has the unmediated correct interpretation of Scripture (a belief belied by every adult Sunday school class). After all, if Rando is right that it’s from God’s mouth to his ear, and he’s right that homosexuality sends you to Hell, then he could just not have gay sex. Why prohibit other people from engaging in it? Or keep them from getting the material benefits of marriage? He could just let them go to Hell, or even spend a lot of time thinking about them in Hell, and thinking about the acts that got them there. Whatever floats your boat, Rando.

Why get the nation-state involved? In my experience, the most common answer is that my neighbor’s not behaving the way I want will result in my being punished. And now we are on the topic of Sodom and Gomorrah—the notion that God will destroy the US for allowing sin. Sodom and Gomorrah are stories of God saving the righteous–there is no Scriptural text of which I’m aware that has God destroying righteous people because of the sins of the people around them. Rando is not going to be destroyed because he has gay neighbors who are allowed to marry, and nothing in Scripture says he will.

And Sodom wasn’t destroyed because of what came to be called sodomy. If you’re arguing with Rando, you can point out that even the most hardcore fundagelicals have given up on the argument that God destroyed Sodom for homosexuality—it was for oppressing the poor. Hmmmmm… should the US worry about whether we oppress the poor? Is Rando up in arms about the poor? Or does he spend more time thinking about what gay men do in bed?

As an aside, the whole notion that God will destroy a nation for being sinners is Scripturally vexed, but that’s a long argument and not very productive in the short run because it’s so complicated. If Rando is a follower of the “just world model,” and he thinks it’s endorsed by Scripture (prosperity gospel), then persuading him out of that model is something that takes years. As far as I can tell, people who are strongly attached to the just world model and give it up do so because of lived experiences.

Every once in a while (it’s pretty rare in my experience), you get the argument that it’s for their own good—that you’re saving people from damnation by keeping them from sinning. It’s John Locke who has the best answer to that, in Letter Concerning Toleration. If a person goes to church just because they’re forced to by the law, they’re still going to Hell. If they behave well just to avoid going to Hell, that’s where they’ll end up.[4]

If the disagreement does go in the direction of using the power of the state to force people to behave as you think they should, you might have a good discussion of the principle of liberalism. A lot of people seriously believe (because they’ve been told) that they will be forced to have a gay pastor or something. Their church will not be required to perform gay marriages—we don’t even force churches to perform “mixed” marriages, or second marriages. Churches can allow or prohibit whatever members they want—this is about civil society. This isn’t about what Rando’s church is allowed to do; it’s about what Rando will allow my church to do. In my experience, Rando doesn’t understand that you can believe that what someone is doing is wrong, and not try to use the power of the state to force them to stop.

And the issue of using the power of the state to force others to behave as you think they ought brings up what can be the most productive strategy, when it works. This is only worth pursuing if you have some hope for Rando.

If he is open about wanting a nation that has as its laws the rules he thinks are important in Scripture, rejecting any other Christian readings, then ask if he thinks it’s okay for Iran to have a theocracy of its religion. When he says no, then say something like, “So, you want to be able to force people of other religions (even other kinds of Christianity) to live by your reading of Scripture, but you don’t want anyone to treat you that way?”

When he says yes, as he usually does, then you can say, “So, you want to be able to treat others in a way that you don’t want to be treated. Someday, you will be face to face with someone who said you should do unto others as you would have them do unto you, and you will get to explain why you decided to ignore what he very clearly said. Good luck with that.”[5]


[1] This is why I always end up on the question of epistemology. He thinks his perception is unmediated. Other people are biased, but he isn’t. And he knows he isn’t biased because he knows his beliefs are true. He knows his beliefs are true because 1) he can find evidence to support them, and 2) he can ask himself if his beliefs are true, and he always gets a YES!
[2] I’m tempted to say every theologian, but I’m not sure that’s true. I’m pretty sure that, for every theologian, I could find some belief they identified as central and necessary that Rando wouldn’t, but I’m not certain.
[3] I’m not saying that we are hopelessly lost in our own projections when it comes to reading Scripture, but that we are all humans, and humans are prone to motivated reasoning. Rando’s mistake is thinking that his method of reading is unmediated by his own political and personal commitments. In my experience, Rando is a binary thinker, and so he has the binary of certain/clueless. He believes that, if he isn’t certain about what Scripture means, then he’s clueless, and all interpretations are equally valid. That’s like saying that, if you aren’t certain about what a complicated contract means, then you have no clue, and you can believe whatever you want.
[4] In my experience, Rando believes in Hell—yet another belief not well-supported by Scripture.
[5] Almost all of this also applies to how people often talk about the Constitution, and to their conviction that their reading of it is unmediated.

I got banned from Facebook (again)

dates I got banned from FB

Loosely, here is the chain of events that got me banned. In March, I shared Nazi propaganda about euthanization, in order to make the point that social Darwinism (which is what people were advocating in regard to covid) was exactly the line of argument used by Nazis. Personally, I would encourage everyone to share this image, as it is a very effective way to respond to people who are (still) arguing for “let covid run its course as it will only hurt the weak.” If hundreds of people get banned for it, that would be good.

I got a “you’re banned for 24 hours for this violation,” and then a “you keep violating our standards and so are getting more punished” for the same post. That happened at least three and maybe four times. When a human looked at it, the decision was made (correctly) that I wasn’t promoting Nazism, but using a Nazi image for appropriate purposes of discussion. Therefore, I was let out of Facebook jail for the last violation. To make the whole thing more irritating, I’m still on record as having violated Facebook standards for a post they said was not a violation of their standards.

It happened again on Friday—banned twice, with increasing penalties—for the same post, and then I got a notice that the post was fine. But, since it was reported twice, I’m still unable to post on Facebook till the three days are up. Only one of those violations was removed from my permanent record.

I’ve often posted about how I think we should use good old policy argumentation when trying to solve problems, and this is a great example. It might be tempting to say that my problem is Facebook, and there are lots of things to say about what’s wrong with Facebook. If “Facebook” is the problem, then the solution is to refuse to participate in Facebook, but my refusing to participate in Facebook doesn’t mean they handle issues of crappy censorship any better. If I quit Facebook, I have solved my problem of Facebook banning me, but I’ve solved it by banning myself.

Facebook banned my post because its policies assume: 1) the problem of hate speech can be solved through bots; 2) racism and hate speech are always clear from surface features; 3) sharing is supporting. The first follows neatly from the second and third.

Around the time I was banned last spring, Facebook was being sued by people paid to review posts because their work was so awful that it was giving them serious health issues. Having spent a tiny amount of my time throughout the years trying to engage with the kind of rhetoric those people would have had to read, I can say that their claims were completely legitimate. It would be awful work.

The people who sued argued that the pay should be better, and that there should be more support, and those claims seem reasonable to me. What puzzles me is why Facebook would decide that someone’s job would be to wade into that toxic fecal matter for forty hours a week at $16–18 per hour. I assume that settlement is why, last spring, they started relying heavily on bots.

The bots don’t work very well, and so people can complain and get the posts reviewed by humans, but it’s still gerfucked (as in the multiple reports for the same post). It’s also indicative of how people think about “hate speech.” It’s long been fascinating to me that people use “hate speech” and “offensive speech” as though those terms are interchangeable. They aren’t—they don’t even necessarily overlap.

People assume that the problem with “hate speech” is that it expresses hate, and that’s bad. It’s bad on its own (because you shouldn’t hate anyone), and it’s bad because it hurts someone else’s feelings. So, “hate speech” is bad because of feeeeelings. I’m not sure hate is necessarily bad—I think there are some things we should hate. In addition, you can hurt my feelings without expressing hate—if you tell me that I’ve hurt your feelings, I’ll feel bad, so does that make what you did “hate speech”? It’s approaching the whole issue this way that makes people think that telling someone they’re racist is just as bad as saying something racist. They’re wrong.

“Hate speech” is bad because it encourages, enables, and causes violence against a scapegoated out-group.

And it isn’t necessarily offensive. I’ve known a lot of people who didn’t intervene (or think any intervention should happen) in cases of passive-aggressive hate speech because they didn’t notice that it was hate speech. It didn’t seem “hateful” because they, personally, didn’t find it offensive. If we think hate speech is speech that offends, then we either aspire after a realm of discourse in which no one is ever offended (and that is neither possible nor desirable) or we only care about whether the dominant group is offended.

If we think of hate speech as hateful and offensive, then we’re likely to rely on surface features—that is, whether the speech is vehement and/or has boosters. Vehement speech isn’t necessarily hate speech (although it makes people very uncomfortable, so they’re likely to find it offensive, and mischaracterize it as hate speech), and hate speech isn’t necessarily vehement. It’s hard to notice passive-aggressive attacks on a scapegoat (or scapegoated out-group) because we don’t feel attacked. Thus, the most effective hate speech doesn’t have a lot of what linguists call “boosters” (emphatic words or phrases), but instead seems calm and even hedging. Praeteritio and deflection are useful strategies for maintaining plausible deniability while rousing a base to violence against a scapegoated out-group because people not in the scapegoated out-group won’t be offended by it. (“I don’t know if it’s true, but I’ve heard very smart people say…”)

Thus, surface features aren’t good indicators of whether something is hate speech, nor is whether we are offended by it.
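
To make that concrete, here’s a toy sketch (in Python) of how a filter keyed to surface features gets things backwards. The booster list and the two example sentences are invented for illustration, and real moderation systems are of course far more elaborate than this, but the failure mode is the same:

```python
# A toy filter that flags "hate speech" by surface features alone
# (vehemence, boosters). Illustrative only; the booster list and the
# examples below are made up.

BOOSTERS = {"absolutely", "totally", "disgusting", "always", "never", "!!!"}

def flags_as_hateful(text: str) -> bool:
    """Flag text that 'sounds' vehement, i.e., contains booster words."""
    lowered = text.lower()
    return any(booster in lowered for booster in BOOSTERS)

# Vehement but not hate speech: gets flagged.
print(flags_as_hateful("That policy is absolutely disgusting and wrong!!!"))  # True

# Calm, hedged praeteritio scapegoating a group: sails right through.
print(flags_as_hateful("I don't know if it's true, but I've heard very "
                       "smart people say those people are behind the recent crimes."))  # False
```

The vehement complaint gets flagged; the calm scapegoating doesn’t, which is exactly the problem with judging by surface features.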

The third bad assumption in this whole dumb process is that sharing is supporting. There’s a real question as to whether we should share hate speech, even if we’re criticizing it, since we’re thereby boosting the signal (and there are people, like Ann Coulter, who are, I think, deliberately offensive for publicity purposes). But I’m not really talking about that particular dilemma. It struck me when I was working with graduate students how many of them refused to teach a book or essay with which they disagreed, or which they disliked. We still see teaching as profoundly a matter of inculcation, as presenting students with admirable things they should like. There are a lot of problems with that way of thinking about teaching (it presumes, for one thing, that the teacher has infallible judgment), and one of those problems is shared with the larger culture—the desire to live in a comfortable world of like-minded people and pleasurable things. That is why Facebook is such an informational enclave—because we choose to use it that way.

So, unfortunately, Facebook is probably right that most of the times someone shares an image or post, they’re indicating agreement. I don’t, therefore, object to a post of Mussolini’s headquarters being stuck in timeout for an hour till a human can look and see if it’s approving or disapproving of Mussolini. I do object to the fact that, because of their incompetent system, I’m banned from posting for three days for a post they have decided doesn’t violate their standards. I also object to how difficult it is to get my (not) penalties removed from my permanent record, and I do wish they had smarter bots, and I do wish we were in a world that was smarter about hate speech.





How do you teach SEAE?

marked up draft


I wrote a post about how forcing SEAE on students is racist, and someone asked the reasonable question: “It has been very challenging, especially in FYC classes, to reconcile my obligation to prepare students for academic writing across disciplines with my wish to preserve their own agency and choice. How do you strike that balance?”

And my answer to that is long, complicated, and privileged.

University professors are experts in everything. I had a friend who was a financial advisor who said that financial advisors routinely charge doctors and professors more, because both of those groups of people think they’re experts in everything and so are complete pains in the ass. He thought I’d be mad about that, but I just said, “Yeah.” And, unhappily, at a place where people have to write a lot to succeed, far too many people think they’re experts in writing.

I’ve had far too many faculty and even graduate students (all over the U) who’ve never taken a course in linguistics or read anything about rhetoric or dialect rhetsplain me. They think they’re experts in writing because they write a lot. I walk a lot, but that doesn’t mean I’m a physical therapist. It was irritating, but as a faculty member (especially once I got tenure), I could just shrug and move on.

In other words, I’m starting with the issue that how I handled this in my classes was influenced by my privilege. Even as an Assistant Professor, I was (too often) the Director of Composition, and so I knew that any complaints about my teaching would go to me. When I found myself in situations in which I had to defend my practices, I knew enough linguistics to grammar-shame the racists. (Grammar Nazis are never actually very good at grammar, even prescriptive grammar. Again, the analogy is accurate.) I think I have to start by acknowledging the issue since not everyone has the freedom I did.

So, what did I do?

I was trained in a program that had people write the same kind of paper every two weeks. This was genius. It was at a time when most writing programs had students writing a different kind of paper every two (or three) weeks. That was also a time when research showed that no commenting practice was better than any other, since none seemed to correlate any more than any other with improvement in student writing (Hillocks, Research in Written Composition). But, even as a consultant at the Writing Center, I could see that the writing in Rhetoric classes did get better (that wasn’t true of all first-year writing courses).

Much later, I would read studies about cognitive development and realize that that classic form of a writing class (in which each paper is a new genre) makes no sense cognitively—even the Rhetoric model that I liked was problematic. The worst version is that a student writes an evaluative paper about bunnies, and the teacher makes comments on it. Then the student is supposed to write an argumentative paper about squirrels. A sensible person would infer that the comments about the evaluative paper are useless for their argumentative paper about squirrels (unless they’re points about grammar, and we’ll come back to that). That’s why students read comments simply as justifications of the grade. The cognitive process involved in generalizing from specific comments about a paper on one genre and topic to principles that can be applied to the specific case of a paper about another topic and in another genre is really complicated.

The Rhetoric model was a little better, insofar as it was the same genre, but even that was vexed. A student writes an argument about bunnies, and gets comments about that paper, and then has to abstract the principles of argument to apply to a different argument about squirrels. With any model in which the student is writing new papers every time, the student has to take the specific comments, abstract them to principles, and then reapply them to a specific case. That task requires metacognition.

I’m a member of the church of metacognition. I think (notice what I did there) that all of the train wrecks I’ve studied could have been prevented had people been willing to think about whether they might be wrong—that is, to think about whether their way of thinking was a good way to think.[2] But, I don’t think it makes sense to require (aka, grade on the basis of) something in a class that you don’t teach. So, how do you teach metacognition?

You don’t teach it by requiring that students already be able to do it. You teach it by asking students to reconsider how they thought about an issue. You teach it by having students submit multiple versions of an argument, and by making comments (on paper and in person) that make them think about their argument.

Once again, we’re back on the issue of my privilege. I have only once had a thoroughly unethical workload, and that ended disastrously (I was denied tenure). Otherwise, it’s been in the realm of the neoliberal model of the University, and I’ve done okay. But, were I in the situation of most Assistant Professors (let alone various fragile faculty positions), I would say: use this model for one class at most.

I haven’t gotten around to the question of dialect because the way I strike the balance between being reasonable about how language works and the expectation that first-year composition prepares students for writing in a racist system is to throw some things off the scale. We can’t teach students the conventions of every academic discipline; those disciplines need to do that work.

There was a moment in time (I infer that it’s passed) when people in composition accepted that FYC was supposed to be some kind of “basic” class in which people would learn things they would use in every other class with any writing. The fantasy was (and is, for many people) that you could have a class that would prepare students for all the forms of writing they will encounter in college. Another fantasy was that you could teach students to read for genre, and so you should have students either write in the genre of their major or write in every genre. Both of those methods require students to infer principles in a pretty complicated way.

A friend once compared this kind of class to how PE used to be—two weeks on volleyball, two weeks on tennis, two weeks on swimming. You don’t end up a well-rounded athlete, but someone who sucks at a lot of stuff.

What I did notice was that a lot of disciplines have the same kind of paper assignment: take a concept the professor (and/or readings) have discussed in regard to this case (or these cases), and apply it to a new case (call this the theory application paper). We can teach that, so I did. That kind of paper has several sub-genres:
1) Apply the theory/concept/definition to a new case in order to demonstrate understanding of the theory/concept/definition;
2) Apply the theory/concept/definition to a new case in order to critique the theory/concept/definition;
3) Apply the theory/concept/definition to a new case in order to solve some puzzle about the case (this is what a tremendous number of scholarly articles do).

So, I might assign a reading in which an author describes three kinds of democracy, and ask that students write a paper in which they apply the definitions to the US. I might have an answer for which I’m looking (it’s the third kind), or I might not. I might be looking for a paper that:
1) Shows that the US fits one of those definitions;
2) Shows that the US doesn’t quite fit any of them, and so there is something wrong with the author’s definitions/taxonomy;
3) Shows that applying this taxonomy of democracies explains something puzzling about the US government (why we have plebiscites at the state level, but not federal, or why we haven’t abandoned the Electoral College) or politics (why so few people vote).
Of course, I might be allowing students to do all three (if students think it fits, then they’d write the first or third, but if they don’t they would write the second).

Students typically did three papers, and turned the first one in three times (the third version came late in the semester). They turned in their first version of their first paper within the first three weeks of class; I’d comment on it (I’d rarely give a grade for that first version) and return it within a week. They’d revise it and turn it in again a week after getting it back (we’d have individual conferences in the interim). I’d get that version back to them in a week. They’d turn in the first version of their second paper a week or two after that, and so on. Since the paper would be so thoroughly rewritten, I barely commented on sentence-level issues (correctness, clarity, effectiveness) on that first submission of the first paper (or the second, for that matter). For many students, the most serious issues would disappear once they knew what they wanted to say.

I’ve given this long explanation of how the papers worked because it means that students had the opportunity to focus on their argument before thinking about sentence-level questions.

Obviously, in forty years my teaching evolved a lot, and so all I can say is where I ended up. And here’s the practice on which I landed. In class, we’d go over the topic of “grammar,” with the analogy of etiquette. And then I’d do what pretty much everyone else does. I’d emphasize sentence-level characteristics that interfered with the ability of the reader to understand the paper (e.g., reference errors, predication), only remarking on them once or twice in a paper. If it was a recurrent thing, I might highlight several instances (and I mean literally highlight) of a specific problem. I might ask them to go to the Writing Center or come to office hours, so we could go over it.

But, and this is important, I gave them a specific task on which they should focus. Please don’t send a student to the Writing Center telling them to work on “grammar.” It’s fine to tell them to go to the Writing Center to revise the sentences you’ve marked, or to reduce passive voice (but please make sure it’s passive voice that you mean, and not progressive or passive agency). Telling a student to work on “grammar” is like saying a paper is “good”—what does that mean?

I didn’t insist that students write in SEAE—that is, I didn’t grade them on it. I graded on clarity, and let students know about things that other people might consider errors (e.g., sentence fragments). And that seems to me a reasonable way to handle those things. If a student wants to get better at SEAE (and some students do), then I’d make an effort to comment more about sentence-level characteristics. My department happened to have a really good class in which the prescriptive/descriptive grammar issue was discussed at length, so students who really wanted to geek out on grammar could do it.

I think the important point is that students should retain agency. The criticism that a lot of people make about not teaching SEAE is that we’re in a racist society, and students who speak or write in a stigmatized dialect will be materially hurt. Well, okay, but I don’t see how materially hurting them now (in the form of bad grades) is helping the situation. It’s possible to remark on variations from SEAE without grading a student down for them. It’s also possible to do what the student wants in that regard, such as not remark on them.

Too many people have the fantasy of a class that gets rid of all the things we don’t want to deal with in students. Students should come to our class clean behind the ears, so that…what? So we don’t have to teach?





[1] I love that people share my blog posts, and I know that means people read them who don’t know who I am. Someone criticized my “casual” use of the term Nazi, and that’s a completely legit criticism—people do throw the term around–but it isn’t casual at all for me. Given the work I do, I would obviously never use that term without a lot of thought. People who rant about pronouncing “ask” as “aks,” make a big deal about double negatives, or, in other words, focus on aspects of Black English, aren’t just prescriptivists (we’re all prescriptivists, but that’s a different post)—they’re people who want to believe that racist hierarchies are ontologically grounded, citing pseudo-intellectual and racist bullshit. Kind of like the Nazis. I call them Nazis because I take Nazis very seriously, and I take very seriously the damage done by the pseudo-intellectual framing of SEAE as a better dialect.

[2] My crank theory is that metacognition is ethical. I don’t see how one could think about thinking without perspective-shifting—would I think this was a good way of thinking if someone else thought this way? And, once you’re there, you’re in the realm of ethics.

Three triggers for procrastination: drudgery, decisional ambiguity, and existential threat

dream weekly schedule

The short version of this post is that there are three triggers of procrastination, or three situations in which procrastination is a very tempting choice, and writing a book, grant proposal, article, or dissertation falls into all three areas.

Some tasks involve a lot of uninteresting drudgery, and many people procrastinate those tasks partially because the panic of being up against a deadline makes them slightly more interesting. Some tasks require that we make decisions without adequate information, and so the temptation is to delay making the decision in the hopes that we’ll get more information. Some tasks threaten our sense of self–failure at the task feels as though it would be the end of the world.

One scholar, Baker, takes those three situations and points out personality types prone to one or another (but, again, all three are part of academic writing). Amanda, the procrastinating grant-writer mentioned in a previous post, fits into the category that Baker, following Ferrari, calls “avoiders,” “who seem to have issues of self-esteem that they are confirming by putting off needed tasks and who are also very concerned with the opinions of others (they promote the idea that they did not have time rather than that they were not up to a task)” (Baker, Thief 169). That is, “avoiders” avoid tasks that have existential stakes (“am I an imposter?” “am I good enough?”). Failing to get around to the task, rather than failing at the task, leaves open the possibility that one could have succeeded if one had tried. In my experience, “avoiders” sometimes avoid scholarly tasks by taking on unnecessary service responsibilities, picking up time-consuming hobbies, or getting involved in organizations (procrastiworking). That isn’t to say that everyone engaged in service is procrastinating, or that no one should take up a hobby or get involved in community work, but that those choices might be subtle forms of procrastination.

Rose Fichera McAloon describes her undergraduate writing process: “I was terrified of criticism, of being unmasked as a fraud, of being stripped of my self-esteem, of being irreparably crushed. I wanted to write the papers, fuss over them lovingly, craft them to perfection—but would not and could not. They were always written in a slapdash way, never reread for content, and turned in with the hope that a miracle would happen and that I would beat the odds once again. It mostly worked.” (239)

Amanda’s situation (above), like McAloon’s, is one in which applying for the grant appears to have more pain associated with it than delay: if writing the grant means confronting her sense of personal inadequacy and risking rejection and exposure, why do it? The route of shoving the grants away and hoping that something comes up down the road can seem very, very attractive.

Procrastinating can seem to protect one’s self-image as a talented person. Our talent remains untested, since we didn’t fully apply ourselves. Also sometimes called “fear of failure” or “imposter syndrome,” this strategy of procrastinating the immediate task in order to evade existential challenges is, it seems to me, difficult but not impossible to overcome, particularly with a combination of strategies (discussed in the next section), most of which involve some method of removing, reducing, or even procrastinating the shame and anxiety that writing raises for us. One strategy for managing this kind of anxiety, perhaps paradoxically, is to write through it; as a method of desensitizing, working even when feeling almost paralyzed by self-doubt becomes a foundational experience on which we build future experiences. Having done it once, and survived, we know we can do it again.

The best way to get over the anxiety that you might get a savaging review of a book or article is to have a book or article savaged. After all, it isn’t the savaging we fear; it’s the suspicion that we will be entirely destroyed by the savaging. If our identity is “a good writer” or “a smart person,” then it might seem that we will lose our very identity if an editor, committee member, reader, or reviewer tells us that a piece is badly written, stupid, or wrong. Once you get savaged by a reviewer, and it happens to everyone, you learn that your cells do not cease to adhere, you do not melt into the floor, your friends do not shun you, and you’re okay. Sometimes you decide the piece really was pretty bad, and sometimes you decide it wasn’t that bad, and sometimes you decide the reader’s responses should be entirely ignored, and sometimes you bounce around among various responses. But, whatever response you have, you are still you, maybe a slightly more resilient you, and that might be good.

Baker notes two other kinds of procrastinators (relying on Ferrari’s research). There are “’arousal types’ who experience a ‘euphoric rush’ by putting off their work until it is too late” (Thief 169). Ferrari’s description of this kind of procrastination is similar to Piers Steel’s discussion of people prone to procrastinate boring or tedious tasks, a character he calls “Time-Sensitive Tom.” By introducing the possibility of failure, a dull task becomes more interesting; in addition, living in crisis mode is gratifying—comfortable even—for some people. A colleague once speculated that filling out book order forms a day late or rushing one’s grades in at the last minute can make it seem as though one’s life is so busy (and, by implication, the person is so important) that getting simple tasks done on time is difficult. It struck me as a cynical interpretation, till I caught myself thinking almost exactly that about myself: getting tedious tasks done on time is what drudges do; tossing too many balls in the air is what interesting people do. “Arousal types” tend toward what is described above as “just in time” procrastination. When JIT procrastination goes badly, it is the consequence of a failure to estimate time correctly and/or correctly calculate the costs and risks of failure.

Steel argues, convincingly, that impulsivity strongly correlates to procrastination (see especially 25-26), which is worsened by the fact that “We tend to see tomorrow’s goals and concerns abstractly—that is, in broad and indistinct terms—but to see today’s immediate goals and concerns concretely—that is, with lots of detail on the particulars of who, what, and when” (25). Also called “hyperbolic discounting,” this tendency to value the immediate (the bird in the hand) is one of the fundamental biases, and is implicated in a lot of bad decision-making of various kinds. We know the pleasure we will get from playing another computer game; the pleasure we will get from getting an article published is distant (I will later discuss how I think this tendency to favor immediate reward is one reason that people put too much time into service and teaching, since they both provide immediate rewards).
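
Since hyperbolic discounting has a standard formal statement, a quick sketch may make the bias concrete. This uses the common V = A / (1 + kD) form; the discount rate k and the reward numbers below are invented purely for illustration:

```python
# A minimal sketch of hyperbolic discounting, V = A / (1 + k*D).
# The k value and reward sizes are made-up numbers, chosen only to
# illustrate why the immediate small pleasure wins.

def subjective_value(amount: float, delay_days: float, k: float = 0.1) -> float:
    """Present subjective value of a reward `amount` received after `delay_days`."""
    return amount / (1 + k * delay_days)

# A small pleasure available right now (another computer game)...
game_now = subjective_value(10, delay_days=0)          # 10.0

# ...vs. a much larger but distant one (an article accepted in a year).
article_later = subjective_value(200, delay_days=365)  # about 5.3

# The objectively bigger reward is subjectively worth less today.
print(game_now > article_later)  # True
```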

Whereas it’s useful to reduce drama in order to reduce avoidance-type procrastination (and make the task more routine), boring tasks are more likely to get done if the drama is increased—arousal types may procrastinate precisely to make something less routine. Steel emphasizes the importance of planning, saying, “Proper planning allows you to transform distant deadlines into daily ones, letting your impulsiveness work for you instead of against you” (39). Thus, the method for dealing with “avoidance” procrastination can be very different from the best method for dealing with “arousal” procrastination.

Similarly, the best methods for managing “decisional procrastinators”—people “who procrastinate because they cannot make up their minds” (Thief 169)—are somewhat different from those for avoidance or arousal procrastination. For some academics, grading, serving as an outside reviewer (for journals or presses), or reading dissertations triggers “decisional” procrastination. Afraid that we might assign the wrong grade, or that we might unfairly reject an article, we put off making the decision. Decisional procrastination is not necessarily a bad choice; in fact, David Allen’s very useful Getting Things Done is largely about being deliberate regarding decisional procrastination. If it is a decision about which it is possible to get more information, and plausible that we will, then deliberately delaying it (but not losing track of it) is a rational strategy. The strategies for managing this kind of procrastination are also discussed later, but mostly involve setting reasonable deadlines, not just for completing the task, but for trying to get the information that would make the decision easier to make. People who have an aversion to closure are particularly prone to decisional procrastination, and so can benefit by finding ways to make decisions that are contingent (this can be done in regard to grading in various ways, also discussed later).

It’s because these various kinds of procrastination can happen at different moments during the same project that I’m dubious about the accuracy or utility of identifying them as different kinds of people: a person might be an “avoidance” procrastinator in regard to writing an article, an “arousal” procrastinator in regard to preparing the Works Cited, and a “decisional” procrastinator in regard to where to submit the manuscript. The same task might trigger different kinds of procrastination in different people: some people find that grading triggers “arousal” procrastination (because they find it tedious), and some people find that it triggers “decisional” procrastination (because they are unsure about their grading), and some people find that it triggers “avoidance” procrastination (because reading papers raises insecurities about the job they have done as teachers).

So, I think most of us are prone to all three kinds of procrastination, especially since academic writing presents all three situations. Still and all, knowing about the three can help us figure out which one we’re doing right now, and that will help us decide which strategies might work. I think understanding the different triggers also helps reduce shame. There are so many books of advice out there–Destination Dissertation, Writing Your Dissertation in Fifteen Minutes a Day, Getting Things Done–and they all work for a bit and under some circumstances because they’re useful for one or two of the kinds of procrastination. But they all stop working at some point or in some situations. In my experience, grad students or faculty can then fall into a shame/anxiety spiral, thinking they suck, and they’ll never finish. It’s just that they need some new strategies–not forever, but for this moment.

Ways of thinking about our procrastination: “naifs” v. “sophisticates”

messy office

Procrastination researchers Ted O’Donoghue and Matthew Rabin set up an experiment that had two tasks for the subjects. Subjects who committed to both tasks and completed them got the most rewards, with the second-highest rewards going to subjects who committed to only the first step and completed it. Subjects who committed to both tasks but didn’t complete both received the least reward. Hence, subjects were motivated to be honest with themselves about the likelihood of their really finishing both tasks. O’Donoghue and Rabin argue that some people who procrastinate know that they do so, and make allowances for it. These people, “sophisticates” in O’Donoghue and Rabin’s study, made better decisions about their commitments and thereby mitigated the damage done by their procrastination. “Naifs” are people who procrastinate, but “are fully unaware of their self-control problems and therefore believe they will behave in the future exactly as they currently would like to behave in the future” (“Procrastination on long-term projects”). That is, although they have procrastinated in the past, and may even be aware that this practice has caused them grief, naifs make decisions about future commitments predicated on the assumption that they will not procrastinate in the future. They are not harmed by their procrastination as much as they are harmed by their belief that they will magically stop themselves from procrastinating in the future.
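For those who want the formal version of the naif/sophisticate distinction: O’Donoghue and Rabin’s papers model present bias with what economists call beta-delta (quasi-hyperbolic) discounting. The notation below is the standard textbook form of that model, a sketch rather than a quotation from them:

U(t) = u(t) + β[δ·u(t+1) + δ²·u(t+2) + …], with 0 < β < 1

Here δ is ordinary long-run discounting, and β is an extra penalty applied to everything that isn’t happening right now. A sophisticate knows her own β and plans around it; a naif predicts her future choices as if β = 1, as if her future self will weigh costs and benefits without any present bias, which is precisely the belief that she will behave in the future exactly as she currently wants to behave.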

The short version of this post is that we all procrastinate, and so we should plan for it.

O’Donoghue and Rabin conclude that naifs are more likely to incur the greatest costs from procrastination. They say: “The key intuition that drives many of our results is that a person is most prone to procrastinate on the highest-cost stage, and this intuition clearly generalizes. Hence, for many-stage projects, if the highest-cost stage comes first, naive people will either complete the project or never start, whereas if the highest-cost stage occurs later, they might start the project but never finish. Indeed, if the highest-cost stage comes last, naive people might complete every stage of a many-stage project except the last stage, and as a result may expend nearly all of the total cost required to complete the project without receiving benefits.”
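A worked toy example (my numbers, not theirs) shows how that plays out. Suppose a two-stage project costs 10 units of effort this week and 40 next week, and pays off 70 the week after that; suppose also that β = 0.5 and, to keep the arithmetic clean, δ = 1. In week one, the naif evaluates starting as −10 + 0.5(−40 + 70) = +5, so she starts, and she predicts that her week-two self will reason −40 + 70 = +30 and finish. But her actual week-two self evaluates finishing as −40 + 0.5(70) = −5, so she puts the last stage off, and she faces the same arithmetic every week thereafter. She pays the 10, never pays the 40, and never collects the 70. A sophisticate runs the week-two calculation in advance, sees that she will never finish, and never starts, losing nothing.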

Sometimes procrastinating the highest-cost stage to the end is necessary: the dissertation is the highest-cost stage of graduate school, and it is necessarily the last. Many people advise leaving the introduction to the dissertation or book (or the theoretical chapter) till last because it’s more straightforward to write when we know what we’re introducing–we’ve written the rest–but that also means procrastinating the highest-cost stage. It isn’t necessarily bad to procrastinate the highest-cost stage, but it does mean that people who sincerely believe that 1) they don’t procrastinate, or 2) they can simply will themselves out of procrastinating this time (“I just need to sit my butt down and write”) may be setting themselves up for a painful failure, especially if this kind of procrastination is coupled with having badly estimated how much time writing the dissertation would actually take. It would be interesting to know how many ABDs are “naifs.”

In a sense, the story that “naifs” tell about procrastination is a simple one—they can make themselves behave differently this time the same way one can make oneself get out of bed. But such a view—that willing oneself to write an article is like willing oneself to get out of bed—ignores that “procrastination” in regard to scholarly productivity is not a question of lounging in bed or getting up, of eating cupcakes or writing an article. These posts are from a book project I was thinking about writing, and the first very rough draft wasn’t too hard to write; it went quite quickly, probably because I’d been thinking (and reading) about the issue for years. But when it came time to work on it again—incorporate more research, especially the somewhat grim studies about factors that contribute to scholarly productivity—I instead reprinted my roll sheet, deleting from it the students who had dropped, adding the dates I hadn’t included, composing email to students whose attendance troubled me, and comparing students’ names with the photo roster (in a more or less futile effort to learn all their names). I then printed up the comments I’d written and stapled them to the appropriate student work. I sent some urgent email related to a committee I chair, answered email (related to national service for a scholarly organization) I should have answered yesterday, and sent out extremely important email to students clarifying an assignment I’d made orally in class. None of that was very pleasurable—I’d far prefer to have eaten a cupcake. And yet it was procrastination.

My procrastinating one task by completing others is typical of much procrastination (it’s sometimes called “procrastiworking”); it isn’t a question of choosing between something lazy and self-indulgent and something else that is hard work. Take, for instance, this poignant description of a scholar who keeps procrastinating applying for grants:
“Grant application season has rolled around once again. Amanda, who has in the past regularly failed to submit applications for research grants that many of her colleagues successfully obtain, feels that she really should apply for a grant this year. She prints out the information about what she would need to assemble and notes the main elements thereof (description of research program, CV, and so on) and—of course—the deadline for submission. She puts all of these materials in a freshly labeled file folder and places it at the top of the pile on her desk. But whenever she actually contemplates getting down to work on preparing the application—which she continues to think she should submit—her old anxieties about the adequacy of her research program and productivity flare up again, and she always finds some reason to reject the idea of starting work on the grant submission process now (without adopting an alternative plan about when she will start). In the end the deadline passes without her having prepared the application, and once again Amanda has missed the chance to put in for a grant.” (Stroud, Thief 65)
Whatever complicated things are going on in this story, or in the minds of people who find themselves in Amanda’s situation, it’s absurd to say that she is choosing pleasure over pain.

I find this story heartbreaking, probably because the details are so perfectly apt. Of course she would neatly label the folder, and add it to a pile (I used to keep a section of my file cabinet labeled “Good Intentions”). And I have to add, she needs to get “down” to work on the applications—why is it always “down”? When people are beating themselves up about not doing writing (or grading), they tell me, “I just need to sit down and do it” or “buckle down and do it” or variations on those themes. Why don’t people need to “sit up” and work on the project? Or “get up and go” on it?

That this method of managing grants has never worked doesn’t seem to register, and so there is what Jennifer Baker calls “a cruel cycle”: “Procrastinators are inefficient in doing their work, they make unrealistic plans in regard to work, and they are so cowed by perfectionist pressures that they become incapable of incorporating advice or feedback into their future behavior” (Thief 168). Baker is here describing something much like “naifs”—unwilling or unable to recognize that there is a pattern, they don’t plan to work on getting a little better: they expect to do it completely differently in the future. Applying the same sense of perfectionism to our work habits, we set unrealistic goals for our future selves, virtually ensuring that we fall back into imprudent delays. Because no grant could possibly be as good as we want, we write no grant at all. Instead of setting up fantasies of behaving completely differently in the future, we need to be honest about what we are doing now, and why we are doing it.

Procrastination of academic writing: different kinds and different solutions

marked up draft

I. Some ways of categorizing procrastination: “just in time,” “miscalculation,” “imprudent delay”

When people talk about “procrastinating,” they often mean “putting off a task,” but there are many ways of doing that: putting off paying bills till near the due date, avoiding an unpleasant conversation, rolling back over in bed instead of getting up early to exercise, delaying preparing for class till half an hour before it starts, ignoring the big stack of photos that should be put in albums, answering all of my email rather than proofing an article, writing a conference paper the night before, delaying going to the dentist, intending to save money for retirement but never getting around to it, eating a cupcake and promising to start the diet tomorrow, telling myself I cannot do my taxes until I have set up a complicated filing system, ignoring the stack of papers I need to grade until they must be returned. All of these involve putting off doing something, but they are different kinds of behavior with different consequences:
1) indefinite delaying such that the task may never get done;
2) allocating just barely (or even under) enough time necessary to complete a task (“just in time” procrastination);
3) a mismatch between my short-term behavior and long-term goals (procrastination as miscalculation).
Procrastinating proofreading by answering email is potentially productive (as long as I get to the proofreading in time), delaying going to the dentist might mean later dental work is more expensive and more painful, and putting off grading papers till the last minute might reduce my tendency to spend too long on grading.

It seems to me that many talented students use a “just in time” procrastination writing process for both undergraduate and graduate classes, largely because it works under those circumstances. (In fact, the way a lot of classes are organized, no other process makes sense.) “Just in time” writing processes work less well for a dissertation—they make the whole experience really stressful and very fraught, and they sometimes don’t work at all. It’s an impossible strategy for book projects—it simply doesn’t work because there aren’t enough firm deadlines. Shifting away from a “just in time” writing process to more deliberate choices means being aware of other writing processes, and can often involve some complicated rethinking of identity.

“Just in time” procrastination sometimes goes wrong, as when something unexpected arises and the allotted time turns out not to be nearly enough. Sometimes the consequences are trivial—a dog getting sick means I don’t finish those last few papers and have to apologize to students; my forgetting to bring the necessary texts home means I have to get to campus extremely early to prepare class there; I misunderstand the due date on bills and have to pay a late fee. But the consequences can be tragic: if there is a delay at a press, a reader/reviewer has serious objections, or illness intervenes, then a student may lose funding, a promising scholar may be denied tenure, a press may cancel a contract.

Procrastination as miscalculation, or the inability to make short-term choices fit our long-term goals, is the most vexing, what Christine Tappolet calls “harming our future selves” (Thief 116) or what Chrisoula Andreou calls “imprudent delay”; that is, procrastination as involving “leaving too late or putting off indefinitely what one should, relative to one’s goals and information, have done sooner” (Andreou, Thief 207). This kind of procrastination (imprudent delay) might mean choosing a short-term pleasure over a long-term goal (going back to sleep instead of getting up to exercise), delaying a short-term pain (putting off going to the dentist until one is actually in pain), or simply making a choice that is harmless in each case but harmful in the aggregate (spending time on teaching or service rather than scholarship). Imprudent delay isn’t necessarily weakness of will, as it doesn’t always mean doing something easy instead of something hard; it might mean choosing different kinds of equally hard tasks, and it is only imprudent in retrospect, or in the aggregate.

Many books on time management and productivity focus on this kind of procrastination, and describe effective strategies for keeping long-term goals mentally present in the moment. Ranging from products (such as the Franklin-Covey organizers) to practices (such as David Allen’s “tickler” files), these methods of improving calculation seem to me to work to different degrees with different people under different circumstances. None of them works every time with every person, a fact that doesn’t mean the strategy is useless or the person is helpless, but it does mean that people might need to experiment with different strategies and products.

Imprudent delay, when it comes to academia, is complicated, perhaps because it is so often not a choice between eating a cupcake and exercising. After all, even if a scholar gets to a point in her career at which she comes to believe she has previously spent time on service that should have been spent on scholarship, there is probably, even in retrospect, no single moment at which she made the mistake. I can look back on a period of my career when I spent too much time on service and teaching, but I was asked by my Department Chair to do the service, so I didn’t feel that I could say no. My administrative position often involved meeting with graduate student instructors to discuss their classes, and I can’t think of a single conversation I wish I hadn’t had. I can think of things I wish I had done differently (some are discussed here), but I empathize with junior colleagues who carefully explain why they have taken on this or that task. And, as my husband will tell anyone who wants to listen, I still regularly take on too many tasks. But, I will say in my defense, I’m better.

Imprudent delay—failing to save for retirement, spending too much time on service, engaging in unnecessarily elaborate teaching preparation—never looks irrational in the short run. Phronesis, usually translated as “prudence,” is, for Aristotle, the ability to take general principles and apply them in the particular case. One reason “prudent” versus “imprudent” procrastination seems to me such a powerful set of terms is that the sorts of unhappy situations in which academics often find ourselves are the consequence of the abstract principle (“I want to have a book in hand when I go up for tenure”) not being usefully applied to this specific case (“Should I write another memo about the photocopier?”). This is a failure to apply Aristotle’s phronesis.

Another reason that thinking of procrastination in terms of Aristotle seems to me useful is that he models ethics as a practice of habits, which we can develop through the choices we make. We do not become different people; we develop different habits, sometimes consciously. People with whom I’ve worked sometimes seem to have an ethical resistance to some time or project management strategies or writing processes because they don’t want to become that kind of person (a drudge, an obsessive, someone overly ambitious). Thinking that achieving success requires becoming a different person is not only unproductive, but simply untrue.

Martha Nussbaum points out that Aristotle’s metaphor is aiming: making correct ethical choices is like hitting a target. If one has a tendency to pull to one side, then overcompensating in the aim will increase the chances of hitting the target. Andreou points out that there are things about which people have a lot of willpower, and others about which they have very little: “I may, for example, have very poor self-control when it comes to exercising but a great deal of self-control when it comes to spending money or treats” (Thief 212). The solution, then, is to use the self-control about spending money to leverage self-control in regard to exercise: meeting one’s exercise goal is rewarded by spending money. If, however, one has little self-control in regard to spending money, then trying to use monetary rewards/punishments to encourage exercise won’t work, since a person won’t really enforce whatever rules they’ve set for themselves.

A lot of people respond to procrastination with shaming and self-shit-talking, and my point is that those are both useless strategies. It’s more useful to try to figure out what kind of procrastination it is, and what’s triggering it (the next post).

Procrastination: introduction

weekly work schedule

“A writer who waits for ideal conditions under which to work will die without putting a word on paper.” (E. B. White, “The Art of the Essay No. 1,” Paris Review)

Reason #3 I wanted to retire early was so that I could finish a bunch of projects. One of them is about scholarly writing. Someone asked that I pull out the parts about procrastination–that was about 10k words. Even when I brutally whacked at it, it was 4k, which is just way too much for a blog post. So I’ve broken it into parts. Here’s the first.

I haven’t edited or rewritten it at all, and I wrote this almost six years ago. I tried to move footnotes into the text, but it’s still wonky as far as citations go. I didn’t want to put off posting it till it was perfect (the irony would be too much), so here goes.

Procrastination is conventionally seen as a weakness of will, a bad habit, a failure of self-control–narratives that imply punitive behavior is the solution. Those narratives ignore that procrastination isn’t necessarily pleasurable, and often doesn’t look like a bad decision in the moment. Putting off doing scholarship in favor of spending time and energy on teaching or service is not a lack of willpower, the consequence of laziness, or inadequate panic. But it is putting off tasks that Stephen Covey would call important but not urgent in favor of tasks that are important and urgent. Since it isn’t caused by lack of willpower or inadequate fear, it isn’t always solved by self-trash-talk or upping the panic.

Procrastination isn’t necessarily one thing, and so it doesn’t have one solution. Nor is it always a problem that requires a solution: allotting barely enough time to a task can ensure we don’t spend more time on it than is necessary, can make a dull task more interesting (since it introduces the possibility of failure), and can be efficient. I once tried preparing class before the semester began by doing all the reading and making lecture notes during the summer. I had to reread the material the night before class anyway, so the pre-preparing meant I spent more time on teaching, not less. Grading papers is a task that will expand to fill the time allotted, as I could always read a little more carefully, word my suggestions more thoughtfully, or give more specific feedback. Leaving the most complicated four or five papers till the morning of class meant I had to get up at 4 in the morning, but it also meant I could only spend half an hour on each, and I was forced to be more efficient and decisive with my comments.

Many self-help and time management books promise an end to procrastination, but that is an empty promise. As long as we have more tasks than time, we will procrastinate. The myth that one can become a perfect time manager who doesn’t procrastinate can inhibit the practical steps necessary to become more effective with one’s time. People who procrastinate because they don’t want to be drudges, and who like the drama of panicked writing, resist giving up procrastination, since giving it up seems to suggest they have to become a different person. Some perfectionists procrastinate because they won’t let themselves do mediocre work—hoping to do perfect work, they may spend so much time doing one task perfectly that they get nothing else done, or they may wait till they feel they are capable of great work (and if that moment never comes, they complete nothing), or they ensure that they have good excuses (such as running out of time) for having submitted less than perfect work. Unhappily, the same forces—the desire for a perfect performance—can inhibit the ability to inhabit different practices in regard to procrastination.

The perfectionist desire to end procrastination can cause us to try to find the perfect system, product, or book–a quest that can turn someone into a person who never gets anything done. It’s possible to procrastinate by trying all sorts of new systems that promise to prevent procrastination. We can fantasize about ending procrastination—so that we will, from now on, do all tasks easily, effortlessly, promptly, and without drama—in ways that are just as inhibiting as fantasizing about writing perfect scholarship. The point is not to become perfect, but to become better. The next few posts will describe some concepts and summarize some research that I found very helpful.