Goebbels pt. IV: Argument v. argumentation

building blown up by weathermen

Basically, I’m saying that fyc teaches argument and not argumentation, and that fyc, as currently taught, often rewards demagoguery unintentionally. It does so by encouraging students to assume there are two sides to every issue, and that those two sides are identities (“liberals” v. “conservatives,” or “pro-” or “anti-” whatever). If there is any discussion of fallacies (and most textbooks don’t mention them), it appeals to modernist notions of fallacies,[1] and it encourages students to note the fallacies in out-group rhetoric. That’s useless. That just inflames demagoguery.

Teaching students how to identify what’s wrong with how some out-group of theirs argues doesn’t help our situation.

What’s wrong with our world is not that we have a war between people who are right and people whose arguments are stupid, villainous, fallacious, self-serving, and irrational. What’s wrong with our world is that far too many of us frame the vexed, nuanced, entangled, and uncertain world of policy choices as a choice between the obviously right option (advocated by people who are good, objective, compassionate, rational [aka, Us]) and all other options (advocated by people who are villainous, and the people who are stooges or tools of that villainous group [aka Them]).

What’s wrong with our world is that far too many people believe that our politics is a war of extermination in which “real” people are justified in abrogating all the norms of democratic discourse and constitutional restraints as pre-emptive self-defense against the group that is trying to destroy us. That is the argument of Trump supporters, and that is what makes their rhetorical and political agenda anti-democratic. Like Stalinists, they argue that they are justified in violating all norms because we are in an apocalyptic war of identity (people who are good v. people who are bad). Trump supporters are far from alone in making that argument–people all over the political spectrum do; some more than others.

People out to destroy democracy rarely see (or describe) themselves as doing that. They see themselves as instituting a real democracy, a democracy of the only group that has a legitimate understanding of political issues. They believe that, by destroying all democratic norms and legal procedures, they are purifying the nation of the people who prevent a real democracy. They destroy the village in order to save it.

The problem isn’t that they’re bad people; the problem is that they’re people who believe that no point of view other than theirs, and no policy agenda other than theirs, is worth considering. Thus, getting out of a culture of demagoguery doesn’t mean abrogating the norms and rules of democracy in order to exterminate the group that is threatening democracy. That is exactly what people who destroy democracies argue.

Saving democracy means saving the norms and legal practices of democracy. But how do you do that when a large part of the population is drinking deep of the Flavor-Aid claim that our group is threatened with extermination by Them, and that we are therefore justified in anything we do?

That’s where courses in argumentation can do good work.

One way to get out of that culture is to show that we are not in a zero-sum battle between two groups. This isn’t to say that all positions are equally valid; it is to say that there aren’t just two. We have many potentially reasonable disagreements about policy that are not accurately described as a binary. Of course, there are people and groups who will crush anyone who disagrees with them, who will violate all norms in order to get their way, and those people (and groups) should be condemned and constrained. But that someone disagrees with us is not proof that they are a member (or tool) of those authoritarian groups. Not everyone who disagrees with us is a tool or villain. Some are, but not everyone. There are also people who are mistaken, deluded, gullible, ignorant, or constrained in their understanding, and we are those people.

Making fyc a class in civics doesn’t mean giving students tools that will enable them to argue that their or our out-group(s) is/are irrational and bad. It should be a course in which the teachers are committed to teaching students how to figure out when their in-group is mistaken, deluded, gullible, ignorant (which means modelling acknowledging when our in-group is mistaken and so on). It would mean showing that our policy options are never a binary. Achieving that goal would mean teaching students argumentation, and not argument.

Teaching argument means teaching students to perform the moves we associate with an argument, and it restricts the teaching of logic to the formal fallacies. From the perspective of civics, this approach is useless, since an argument might be formally valid and yet still fallacious. “All bunnies are fluffy. This animal is not fluffy; therefore it is not a bunny.” That argument is formally valid; the problem is not the form, but that the major premise is false.
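To make the point about form explicit, here is the same syllogism laid out schematically (a minimal sketch in plain notation; the letters are just my shorthand for the example above):

All B are F. (major premise: “all bunnies are fluffy,” which is false)
This animal, a, is not F. (minor premise)
Therefore, a is not B. (the inference itself is valid, essentially modus tollens)

The form is impeccable; the argument still fails because it starts from a false generalization, which is exactly the kind of failure that attention to formal fallacies alone can’t catch.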

In formal logic, truth doesn’t matter; in informal logic, it does. Goebbels’ arguments followed logically from his premises, and his major premises are untrue. They also are inconsistent with major premises of many of his other arguments, but that’s a different post (and it’s how we get out of the problem of “logical argument” simply being a synonym for “argument I think is true”).

Goebbels would get an ‘A’ in any class that only relied on the formal fallacies. Where Goebbels would fail is in regard to fallacies relevant to informal argumentation: 1) did he engage the best criticisms of his argument? 2) did he hold his interlocutors to the same standards of logic and evidence to which he held himself? 3) did he represent his opposition fairly?[2] 4) was his overall argument internally consistent? 5) could he cite non-in-group sources to support his claims about “facts”?

If we’re going to talk about fallacies, let’s do it well—in ways grounded in current scholarship in cognitive biases and argumentation. There are a lot of ways that a person could teach a class grounded in either set of scholarship, and I’ll get to them later, but, mostly, they involve students identifying their own tendency to reason fallaciously/rely on cognitive biases.

And there is one hard rule on which I’ll insist: that approach means “open” assignments are off the table if we’re claiming to teach argumentation and not argument. It isn’t ethical for a teacher to claim to teach argumentation and let each student write about whatever issue interests that student because the teacher can’t possibly assess the resulting papers in terms of argumentation. You can teach argument that way, and you can also teach lots of other wonderful things, but not argumentation.

And here we’re back to my claim that fyc doesn’t have to teach argumentation. It really doesn’t.

I think a major problem in our field, and one reason we get into unproductive and uninteresting argybargies, is that there is an underlying assumption that all fyc programs should have the same goal—that there is this thing, an eidos fyc, and we are all trying to achieve it. I think we should walk away from the notion that all fyc programs should have the same goals, and consider fyc to be strategic and local. The goals of any fyc program should be determined, not on the basis of what “the field” says should happen, but on the basis of what is most useful for the first year students of that institution. I think that decision should be informed by scholarship in rhetoric and composition, but I also think that scholarship in rhetoric and composition doesn’t support the claim that all programs should have the same goals.

But, back to assuming that the goal is teaching students to engage responsibly in civic discourse. If an instructor is going to claim to teach argumentation (and not just argument), then we have to know whether a student has accurately represented opposition arguments, is engaging the smartest opposition arguments, and is not relying on a binary. There is no way a person can know that about every issue on which any student might write. We can only think we know the best opposition on every issue if we apply modernist notions of fallacies (and react to things like tone), assume that one source always has the best argument (usually an in-group one), or ourselves think in terms of a binary (and so ask students to engage the “liberal” and “conservative” or “pro-” and “anti-” positions on every issue). As I used to say to my son when I advised him not to do something, “Guess how I know this.”[3]

I’m not saying we have to have “closed” assignments, in which students write only about a text or small set of texts picked by the instructor. Down that road lies not only boredom but actual loathing of the most important part of our job: responding to students’ papers in a way that models how they should respond to arguments they read.

There are a lot of ways that teachers can constrain paper topics so that there are papers on a variety of topics, and yet the teacher can still notice if the opposition has been misrepresented. I’ll explain a representative sample of them later. Here I’ll simply note that many of the teachers who have figured this out (like me) didn’t do so while teaching fyc. (Or even for some time after.) I’m not, just to be clear, saying that the field of rhetoric and composition fails to teach argumentation; there are lots of people, and lots of texts, that do a great job of it. I’m saying fyc doesn’t, but it claims to. And that is the problem.

There are lots of strategies, including not teaching argumentation. But, and this is the important point of this post, if we claim that, as teachers, we can grade something as a good or bad argument without knowing the controversy well enough to tell whether a student has accurately represented the smartest opposition, and without having read the sources about which the student is writing, then we are modelling how disagreement works on the internet, where people believe they can assess the quality of an argument without actually reading it.

We’re thereby making things worse.





[1] I mean “modernist” in almost the technical sense—late nineteenth and early twentieth century Anglo-American rejections of Anglo-American Enlightenment models of the mind. What I’m calling “modernist” is often called “Enlightenment,” but that’s inaccurate. The Anglo-American Enlightenment didn’t accept the Cartesian mind/body rational/irrational split. For the Anglo-American Enlightenment philosophers, there wasn’t a binary. So, for instance, sentiment assisted deliberation, but passion didn’t. So, they didn’t believe that “emotions” were irrational. It seems to me that it wasn’t until the late nineteenth and early twentieth century that Anglo-American philosophy assumed the rational/irrational split (when, by the way, a lot of classical texts were translated into English, so they show that bias).

[2] I’ve come to think this one and the second are the most important. When people are engaged in demagoguery, they homogenize all non-in-group members into one group, and then pick the most useful quote or individual, even a complete outlier, to represent all non-in-group members.

[3] He once asked, “Is there anything you didn’t learn the hard way?”






What grade would Goebbels get in first-year composition (pt. III): rejecting Aristotelian physics

revisionist history books

It is generally very easy for people to rationalize (in both senses of that word) marginalization, disenfranchisement, deliberate oppression, enslavement, expulsion, and extermination of out-groups by having systems and rhetoric that claim to be rational. Nazi Germany had a functioning judicial system throughout its existence, as did the USSR, after all, as well as the US throughout segregation and slavery. People defending these systems and policies argued that they were necessary, just, and realistic, and therefore “rational.” [1]

Thus, many people think that working toward a world without genocide, slavery, deliberate oppression, expulsion, and so on requires that we abandon rationality. And, I think that’s sort of right. We need to abandon several specific ways of defining rationality, but we don’t need to abandon rational argumentation.

If you stop someone on the street, and ask them to explain various physical phenomena, they’ll give you an Aristotelian explanation. They’re wrong. Saying that we need to stop teaching rationality because modernist [2] notions of rationality are oppressive (and they are) is like saying that we need to stop teaching physics because Aristotelian physics is wrong. Physics is fine; Aristotelian physics isn’t. Rationality is fine; modernist notions of rationality aren’t.

The problem isn’t with rationality, but with how argumentation textbooks are grounded in modernist models of the mind that are slightly less defensible than Aristotelian physics.

Imagine that introductory physics courses were staffed by hiring people who were smart and skilled at writing about literature, who might never have taken a physics course since high school, and they were given a one- or two-day workshop (that also included Title IX training, a presentation from the writing center, information about digital resources, information about how to get keys, a presentation from the library, and so on) before being thrown into an autonomously taught course in physics. What would they teach? They’d teach Aristotelian physics.

And imagine that, instead of teaching those people other models of physics, the introductory physics courses and textbooks were designed so that those people could teach “successfully.” Introductory physics textbooks would be Aristotelian physics.

That’s what we do in staffing fyc argumentation courses, and that’s why the most popular textbooks are the way they are.

Just to be clear: I don’t think fyc has to teach argumentation. There are lots of other valuable things it can do. I’m open to the argument that argumentation should be a more advanced course taught (and supervised) by people who actually have some understanding of the scholarship in argumentation. A college course in argumentation would be, after all, a college course. It shouldn’t be a controversial claim for me to say that it should be grounded in recent scholarship and taught by people familiar with that scholarship.

My analogy of Aristotelian physics being like modernist notions of rationality falls apart because, while Aristotelian physics is intuitive, modernist notions of rationality are not. People are taught modernist notions of rationality–they’re counter-intuitive. If we’re going to ignore current scholarship in argumentation, why not rely on intuition? While there are reasons for thinking about all this more systematically (and there are a lot of possible systems), I think even common sense is a good basis. I think we can get to a pretty good standard of argumentation by starting with our intuitions about good disagreements.

If you ask students, “What makes for a really good disagreement?,” you end up with a list like this. Interlocutors:

  • are open to persuasion, or, at least, hearing other positions;
  • stay on topic;
  • accurately represent one another’s positions, claims, and so on;
  • give evidence for their claims;
  • present claims that are consistent with each other;
  • if we’re talking about an argument on social media, then they provide sources;
  • avoid the blazingly obvious fallacies.

The last is where modernist notions again trip us up, and I’ll get to that in the next few posts. But there, too, we can generate a list of particularly irritating fallacies even if we don’t know the names. We don’t like it when people attribute to us an argument we didn’t make, ask us to defend a position we never claimed, say our argument can be dismissed because it makes them feel bad or because we’re emotional or are bad people, or insist that we concede they’re right because they feel certain or can cite some YouTube video by Rando McRando.

There’s a long and somewhat pedantic post about a more complicated way to think about fallacies here. I intend to do a more accessible version in this series, but, really, the fairness rule tends to work pretty well. Would we feel that’s a fair way to argue were someone to use it against us?

Do you think it’s okay if people don’t listen to you, and represent your position on the basis of what a third party who hates you has said? Do you think it’s okay if someone takes quotes out of context to condemn you, or attributes to you the views of the most extreme member of your in-group? Do you think it’s okay when people deflect?

Then don’t do it to others.

A lot of people believe that, because their group is right, anything they do is right, and any claim that supports their position is true and proof that they are right (regardless of whether it’s logically connected to their conclusion, accurate, or sourced in a way they would accept as valid if it made a claim they don’t like). When we ask people to think about the way they’re arguing, and ask them whether they think that’s a good way to argue when others do it to them, we’re asking that they do two things: first, engage in meta-cognition, and second, hold themselves to the same standards to which they hold others. I think those are good things to teach.

[1] There’s an interesting polysemy in the word “rational” that leads to some nasty and politically toxic equivocation. “Rational” is sometimes used as a synonym for “realist,” which is itself used to mean the ruthless pursuit of individual or factional goals. Sometimes it is used to mean a supposedly “amoral” pursuit of the best means to achieve a goal set elsewhere. Thus, as people like Albert Speer and Wernher von Braun argued, they were just technocrats who didn’t think about the ends and just worried about the means. That was a lie. They were fine with the ends.

[2] I’m calling it “modernist,” although there are arguments to be made that it’s more accurately called Cartesian. I think it’s useful to call it “modernist,” though, because various groups that are anti-post-modernism are openly advocating a return to modernist understandings of rationality. They are doing so by positioning themselves against one non-modernist position (which they call post-modernist) which is actually pretty marginal, and which they completely misrepresent. If you have to lie to make your case, you have a bad case. And if you’re lying about your critics in order to go back to an ideology that was explicitly supportive of colonialism and genocide, you have serious problems.

What grade would Goebbels get in fyc? Pt. II

Teacher in front of chalkboard


In an earlier post, I argued that a common way of thinking about first-year composition courses that claim to teach argument means that Goebbels could easily write an essay that would fit the criteria implicit in what remains a tremendously popular prompt. I said that the prompt forces teachers into a false dilemma of either giving Goebbels a good grade, or suddenly introducing a new criterion. The problem is the prompt.

I have a lot of crank theories, but this isn’t one of them.

In fact, what I’m saying is pretty much mainstream for scholars of argumentation, informal logic, cognitive psychology, policy argumentation, or political psychology. Just as what apparently controversial scholars in our field say about “grammar” is old news to anyone familiar with sociolinguistics, so anyone familiar with research in any of those fields would know I’m not saying anything particularly insightful or new.[1]

And, because what I’m saying isn’t particularly controversial to anyone who is reading the relevant research, there are lots of ways of teaching fyc that don’t get teachers into that false dilemma. One solution is not to claim to teach argumentation, and to do any of the many valuable things that non-argumentation fyc can do.[2]

But, if we’re going to claim to teach argumentation, let’s do it. And there are lots of ways of doing it. That’s the next several posts.

Here, though, I need to argue why we should teach argumentation.

The problem is that fyc has long been dominated by a uselessly formalist presentation of argument, strongly connected to self-serving (and incoherent) definitions of rationality, teaching generations of people that having a “good” argument means having a “rational” tone and giving evidence and reasons from “good” sources.

We do so because of staffing. FYC arose in the late-nineteenth and early-twentieth century when the notion was that there was a mental faculty, judgment, which could be trained through study of literature, music, or art. A person taught to have good taste would necessarily have good ethics because both were questions of good judgment. Similarly, writing “correct” English meant that they were thinking correctly, and communicating clearly (thesis first, list reasons) meant having a clear understanding of the situation. Interpretation was a universally valid skill, so teaching someone to read a poem was the same as teaching them to read a scientific study. College was seen as training someone to join a community of like-minded people with good judgment, good taste, and “good English.”[3]

Thus, teaching students to appreciate literature, and to write “well” about that literature made students better citizens. With that model of citizenship, it made sense to assume that graduate students who had been excellent literature undergraduates, highly skilled in meeting standards of “correct” grammar—even with no training in argumentation or linguistics—could teach first-year composition classes that would help students as citizens and students. That’s the staffing model we still have.

And, just to be clear, I think college students should study literature, although not for the reasons above. Reading literature cultivates empathy, can help people become more comfortable with uncertainty, and fosters perspective-shifting. Literature courses can be tremendously important for an inclusive democracy.

But literature courses do not teach argumentation, and people skilled in literature are not magically capable of teaching argumentation.

This whole set of posts began because, in a comment thread about how our problem (meaning why do so many people think Trump’s open refusal to follow legal or cultural norms is okay) is that students don’t have civics classes,[4] I threw out the comment that fyc could be that class, but it would require a different staffing model, and someone asked me to explain. This set of posts is the explanation.

I meant something like: fyc could be a pretty effective civics course, but not a magic wand. And, of course, the very notion of a civics course that would make people reject toxic populist authoritarianism means a course that is grounded in a particular notion of democracy. It assumes seeing the democratic ideal as a community of people who value disagreement, who strive for a pluralistic world that is not about your group triumphing, but one in which we are all fairly represented, included, and accountable, and held to standards of fairness in terms of benefit and burden.

Depending on your model of education, there are lots of courses that could do this work–history, government, sociology, psychology, and first-year composition. Whatever class it is, it is not a course that relies on the transmission model of education; it has to be a course that persuades people to do the hard work of democratic deliberation. Telling students how to think about politics doesn’t work. I’ll come back to this.

Democracy is counter-intuitive. When we are making decisions, we are tempted to rely on what cognitive psychologists call System 1 thinking: we let our cognitive biases (especially in-group favoritism, binary thinking, associational thinking, naïve realism) drive the bus. Democracy requires that we step out of our world and engage in perspective-shifting, value fairness across groups (do unto others), be willing to lose, and make our arguments rationally.[5] Ida Wells-Barnett’s Southern Horrors, Martin Luther King’s “Beyond Vietnam,” and Hans Morgenthau’s criticisms of Vietnam were all rational, offensive (condemned as violating norms of civility in their era), and deeply committed—perhaps even vehement—texts.[6] They are fair to their opposition not in terms of niceness, or attributing good motives to them, but in terms of accurately representing their arguments. Their arguments are internally coherent, applying the same standards across all groups.

In the previous post, I asked what grade Josef would get with a standard paper prompt, and I pointed out that, given that prompt, he would either get a good grade, or we would introduce a new criterion. That’s a dilemma created by how bad that assignment is. It’s also a dilemma created by how bad fyc argument textbooks are on the issue of “logic,” and how gleefully free they are from any influence by the various scholarly fields that should be influencing them: argumentation theory, cognitive psychology, political psychology. And that’s what this post is about.

We are faced with the dilemma about grading Josef because of how fyc textbooks conflate “logic” with Aristotle’s term “logos.” (This recent article does a great job explaining that.) And can we start with: why in the world are fyc textbooks arranged around an anachronistic reading of Aristotle’s ethos/pathos/logos? If we’re going to rely on Aristotle, why not the enthymeme, which is what he actually cared about? Or, clutch your pearls, why not recent scholarship in argumentation, cognitive biases, reasoning, or any actually relevant field?

When we teach that “appeal to logos,” “logical appeal,” and “logical argument” are the same, we are conflating two very different meanings of the word “logic.” One is descriptive, and one is evaluative. The first is simply saying that the move is trying to look as though it’s logical (and maybe it is, and maybe it isn’t), and the second is saying that it is logical (it fits the standards of logic). I don’t think Aristotle meant either of those, but, if anything, something closer to the first.

Whatever Aristotle meant, he did not mean what argument texts say is an appeal to logic, since they emphasize what are surface features of a text (if anything, what he would have put in the ethos category): facts, statistics, and various other concepts that wouldn’t even have been in Aristotle’s world.

So, what I’m saying in this post is that, while teaching students to read literature is a tremendously important task, people who are deeply trained in reading and writing about literature are not a priori any more capable of teaching argumentation in a way that enhances inclusive democratic deliberation than graduate students in any other discipline. But, since that’s who’s teaching fyc courses, textbooks have to be ones that people with no training in argumentation can teach. And that is our problem.

If we want to teach argumentation, then we have to hire people who are trained in argumentation.




[1] At one point, I started trying to write a post that had all those references, and I got overwhelmed. These two articles are good starting points, with good citations.

[2] Notice that this solution is good as far as argumentation goes, but it still means that there are people who are teaching “grammar” without adequate training in sociolinguistics. I’ll come back to that.

[3] As a former Director of a Writing Center, and someone who argues on the internet a lot, I will also say that people who are most rigid about “grammar” are particularly likely to be wrong, even about prescriptive grammar. I have seen papers in which students were wrongly “corrected” for having said something like “The ball was thrown to Chester and me.” The number of faculty who believe in the breath rule for commas leaves me breathless.

[4] This argument is often represented as our needing to go back to some time when we had civics courses and people rejected open abuses of power oriented toward disenfranchising groups and violating democratic norms. Um, when would that be? When disenfranchising black voters was openly advocated? Granted, Trump supporters are very open that they want to go back to the early fifties, except without the taxes, because they believe (correctly) that then they could have political and cultural hegemony. In the fifties, when there were civics courses.

[5] As, I hope, will become clear in these posts, I don’t mean that out-dated, but still popular, understanding of “rationality” promoted by fyc textbooks and popular culture—the one grounded in 19th century logical positivism. All of those false models assume a binary of rational/irrational—a model of the mind falsified by research in cognition for the last thirty years, and also based in myth. Turns out the Phineas Gage story is probably wrong. Since I’ve cited that story more than a few times, my previous scholarship is part of the problem.

I think there are a lot of models of “rationality” that are more useful than the rational/irrational split, and more grounded in recent research on cognition. That research is usefully and cogently summarized in Seven and a Half Lessons About the Brain, Superforecasting, and Thinking, Fast and Slow.

[6] Notice that I’m picking examples that are vehement, upsetting, decorum-violating, and controversial. Also, I’m not being precise about the distinctions among reason, rationality, and logic because I think that’s sort of inside baseball.



How do you teach SEAE?

marked up draft


I wrote a post about how forcing SEAE on students is racist, and someone asked the reasonable question: “It has been very challenging, especially in FYC classes, to reconcile my obligation to prepare students for academic writing across disciplines with my wish to preserve their own agency and choice. How do you strike that balance?”

And my answer to that is long, complicated, and privileged.

University professors are experts in everything. I had a friend who was a financial advisor who said that financial advisors routinely charge doctors and professors more, because both of those groups of people think they’re experts in everything and so are complete pains in the ass. He thought I’d be mad about that, but I just said, “Yeah.” And, unhappily, at a place where people have to write a lot to succeed, far too many people think they’re experts in writing.

I’ve had far too many faculty and even graduate students (all over the U) who’ve never taken a course in linguistics or read anything about rhetoric or dialect rhetsplain me. They think they’re experts in writing because they write a lot. I walk a lot, but that doesn’t mean I’m a physical therapist. It was irritating, but as a faculty member (especially once I got tenure), I could just shrug and move on.

In other words, I’m starting with the issue that how I handled this in my classes was influenced by my privilege. Even as an Assistant Professor, I was (too often) the Director of Composition, and so I knew that any complaints about my teaching would go to me. When I found myself in situations in which I had to defend my practices, I knew enough linguistics to grammar-shame the racists. (Grammar Nazis are never actually very good at grammar, even prescriptive grammar. Again, the analogy is accurate.)[1] I think I have to start by acknowledging the issue since not everyone has the freedom I did.

So, what did I do?

I was trained in a program that had people write the same kind of paper every two weeks. This was genius. It was at a time when most writing programs had students writing a different kind of paper every two (or three) weeks. That was also a time when research showed that no commenting practice was better than any other, since none seemed to correlate any more than any other with improvement in student writing (Hillocks, Research in Written Composition). But, even as a consultant at the Writing Center, I could see that the writing in Rhetoric classes did get better (that wasn’t true of all first-year writing courses).

Much later, I would read studies about cognitive development and realize that that classic form of a writing class (in which each paper is a new genre) makes no sense cognitively—even the Rhetoric model that I liked was problematic. The worst version is that a student writes an evaluative paper about bunnies, and the teacher makes comments on it. Then the student is supposed to write an argumentative paper about squirrels. A sensible person would infer that the comments about the evaluative paper are useless for their argumentative paper about squirrels (unless they’re points about grammar, and we’ll come back to that). That’s why students read comments simply as justifications of the grade. The cognitive process involved in generalizing from specific comments about a paper on one genre and topic to principles that can be applied to the specific case of a paper about another topic and in another genre is really complicated.

The Rhetoric model was a little better, insofar as it was the same genre, but even that was vexed. A student writes an argument about bunnies, and gets comments about that paper, and then has to abstract the principles of argument to apply to a different argument about squirrels. With any model in which the student is writing new papers every time, the student has to take the specific comments, abstract them to principles, and then reapply them to a specific case. That task requires metacognition.

I’m a member of the church of metacognition. I think (notice what I did there) that all of the train wrecks I’ve studied could have been prevented had people been willing to think about whether they might be wrong—that is, to think about whether their way of thinking was a good way to think.[2] But, I don’t think it makes sense to require (aka, grade on the basis of) something in a class that you don’t teach. So, how do you teach metacognition?

You don’t teach it by requiring that students already can do it. You teach it by asking students to reconsider how they thought about an issue. You teach it by having students submit multiple versions of an argument, and you make comments (on paper and in person) that make them think about their argument.

Once again, we’re back on the issue of my privilege. I have only once had a thoroughly unethical workload, and that ended disastrously (I was denied tenure). Otherwise, it’s been in the realm of the neoliberal model of the University, and I’ve done okay. But, were I in the situation of most Assistant Professors (let alone various fragile faculty positions) I would say use this model for one class at most.

I haven’t gotten around to the question of dialect because the way I strike the balance between being reasonable about how language works and the expectation that first-year composition prepares students for writing in a racist system is to throw some things off the scale. We can’t teach students the conventions of every academic discipline; those disciplines need to do that work.

There was a moment in time (I infer that it’s passed) when people in composition accepted that FYC was supposed to be some kind of “basic” class in which people would learn things they would use in every other class with any writing. The fantasy was (and is, for many people) that you could have a class that would prepare students for all forms of writing they will encounter in college. Another was that you could teach students to read for genre, so that you should have students either write in the genre of their major or write in every genre. Both of those methods have students needing to infer principles in a pretty complicated way.

A friend once compared this kind of class to how PE used to be—two weeks on volleyball, two weeks on tennis, two weeks on swimming. You don’t end up a well-rounded athlete, but someone who sucks at a lot of stuff.

What I did notice was that a lot of disciplines have the same kind of paper assignment: take a concept the professor (and/or readings) have discussed in regard to this case (or these cases), and apply it to a new case (call this the theory application paper). We can teach that, so I did. That kind of paper has several sub-genres:
1) Apply the theory/concept/definition to a new case in order to demonstrate understanding of the theory/concept/definition;
2) Apply the theory/concept/definition to a new case in order to critique the theory/concept/definition;
3) Apply the theory/concept/definition to a new case in order to solve some puzzle about the case (this is what a tremendous number of scholarly articles do).

So, I might assign a reading in which an author describes three kinds of democracy, and ask that students write a paper in which they apply the definitions to the US. I might have an answer for which I’m looking (it’s the third kind), or I might not. I might be looking for a paper that:
1) Shows that the US fits one of those definitions;
2) Shows that the US doesn’t quite fit any of them, and so there is something wrong with the author’s definitions/taxonomy;
3) Shows that applying this taxonomy of democracies explains something puzzling about the US government (why we have plebiscites at the state level, but not federal, or why we haven’t abandoned the Electoral College) or politics (why so few people vote).
Of course, I might be allowing students to do all three (if students think it fits, then they’d write the first or third, but if they don’t they would write the second).

Students typically did three papers, and turned the first one in three times (the third version came late in the semester). They turned in their first version of their first paper within the first three weeks of class; I’d comment on it (I’d rarely give a grade for that first version) and return it within a week. They’d revise it and turn it in again a week after getting it back (we’d have individual conferences in the interim). I’d return that version within a week. They’d turn in their first version of their second paper a week or two after that, and so on. Since the paper would be so thoroughly rewritten, I barely commented on sentence-level issues (correctness, clarity, effectiveness) on that first submission of the first paper (or second, for that matter). For many students, the most serious issues would disappear once they knew what they wanted to say.

I’ve given this long explanation of how the papers worked because it means that students had the opportunity to focus on their argument before thinking about sentence-level questions.

Obviously, in forty years my teaching evolved a lot, and so all I can say is where I ended up. And here’s the practice on which I landed. In class, we’d go over the topic of “grammar,” with the analogy of etiquette. And then I’d do what pretty much everyone else does. I’d emphasize sentence-level characteristics that interfered with the ability of the reader to understand the paper (e.g., reference errors, predication), only remarking on them once or twice in a paper. If it was a recurrent thing, I might highlight several instances (and I mean literally highlight) of a specific problem. I might ask them to go to the Writing Center or come to office hours, so we could go over it.

But, and this is important, I gave them a specific task on which they should focus. Please don’t send a student to the Writing Center telling them to work on “grammar.” It’s fine to tell them to go to the Writing Center to revise the sentences you’ve marked, or to reduce passive voice (but please make sure it’s passive voice that you mean, and not progressive or passive agency). Telling a student to work on “grammar” is like saying a paper is “good”—what does that mean?

I didn’t insist that students write in SEAE—that is, I didn’t grade them on it. I graded on clarity, and let students know about things that other people might consider errors (e.g., sentence fragments). And that seems to me a reasonable way to handle those things. If a student wants to get better at SEAE (and some students do), then I’d make an effort to comment more about sentence-level characteristics. My department happened to have a really good class in which the prescriptive/descriptive grammar issue was discussed at length, so students who really wanted to geek out on grammar could do it.

I think the important point is that students should retain agency. The criticism that a lot of people make about not teaching SEAE is that we’re in a racist society, and students who speak or write in a stigmatized dialect will be materially hurt. Well, okay, but I don’t see how materially hurting them now (in the form of bad grades) helps the situation. It’s possible to remark on variations from SEAE without grading a student down for them. It’s also possible to do what the student wants in that regard, such as not remarking on them.

Too many people have the fantasy of a class that gets rid of all the things we don’t want to deal with in students. Students should come to our class clean behind the ears, so that…what? So we don’t have to teach?





[1] I love that people share my blog posts, and I know that means people read them who don’t know who I am. Someone criticized my “casual” use of the term Nazi, and that’s a completely legit criticism—people do throw the term around–but it isn’t casual at all for me. Given the work I do, I would obviously never use that term without a lot of thought. People who rant about pronouncing “ask” as “aks,” make a big deal about double negatives, or, in other words, focus on aspects of Black English, aren’t just prescriptivists (we’re all prescriptivists, but that’s a different post)—they’re just people who want to believe that racist hierarchies are ontologically grounded, citing pseudo-intellectual and racist bullshit. Kind of like the Nazis. I call them Nazis because I take Nazis very seriously, and I take very seriously the damage done by the pseudo-intellectual framing of SEAE as a better dialect.

[2] My crank theory is that metacognition is ethical. I don’t see how one could think about thinking without perspective-shifting—would I think this was a good way of thinking if someone else thought this way? And, once you’re there, you’re in the realm of ethics.

Ways of thinking about our procrastination: “naifs” v. “sophisticates”

messy office

Procrastination researchers Ted O’Donoghue and Matthew Rabin set up an experiment that had two tasks for the subjects. Subjects who committed to both tasks and completed them got the most rewards, with the second-highest rewards going to subjects who committed to the first step and completed it. Subjects who committed to both tasks but didn’t complete both received the least reward. Hence, subjects were motivated to be honest with themselves about the likelihood of their really finishing both tasks. O’Donoghue and Rabin argue that some people who procrastinate know that they do so, and make allowances for it. These people, the “sophisticates” in O’Donoghue and Rabin’s study, make better decisions about their commitments and therefore (or thereby?) mitigate the damage done by their procrastination. “Naifs” are people who procrastinate, but “are fully unaware of their self-control problems and therefore believe they will behave in the future exactly as they currently would like to behave in the future” (“Procrastination on long-term projects”). That is, although they have procrastinated in the past, and may even be aware that this practice has caused them grief, naifs make decisions about future commitments predicated on the assumption that they will not procrastinate in the future. They are not harmed by their procrastination as much as they are harmed by their belief that they will magically stop themselves from procrastinating in the future.

The short version of this post is that we all procrastinate, and so we should plan for it.

O’Donoghue and Rabin conclude that naifs are more likely to incur the greatest costs from procrastination. They say: “The key intuition that drives many of our results is that a person is most prone to procrastinate on the highest-cost stage, and this intuition clearly generalizes. Hence, for many-stage projects, if the highest-cost stage comes first, naive people will either complete the project or never start, whereas if the highest-cost stage occurs later, they might start the project but never finish. Indeed, if the highest-cost stage comes last, naive people might complete every stage of a many-stage project except the last stage, and as a result may expend nearly all of the total cost required to complete the project without receiving benefits.”

Sometimes procrastinating the highest-cost stage to the end is necessary: the dissertation is the highest-cost stage of graduate school, and it is necessarily the last. Many people advise leaving the introduction to the dissertation or book (or the theoretical chapter) till last because it’s more straightforward to write when we know what we’re introducing, since we’ve written the rest; but that also means procrastinating the highest-cost stage. It isn’t necessarily bad to procrastinate the highest-cost stage, but it does mean that people who sincerely believe that 1) they don’t procrastinate, or 2) they can simply will themselves out of procrastinating this time (“I just need to sit my butt down and write”) may be setting themselves up for a painful failure, especially if this kind of procrastination is coupled with having badly estimated how much time writing the dissertation would actually take. It would be interesting to know how many ABDs are “naifs.”

In a sense, the story that “naifs” tell about procrastination is a simple one—they can make themselves behave differently this time the same way one can make oneself get out of bed. But such a view—that willing one’s self to write an article is like willing one’s self to get out of bed—ignores that “procrastination” in regard to scholarly productivity is not a question of lounging in bed or getting up, of eating cupcakes or writing an article. These posts are from a book project I was thinking about writing, and the first very rough draft wasn’t too hard to write; it went quite quickly, probably because I’d been thinking (and reading) about the issue for years. But when it came time to work on it again—incorporate more research, especially the somewhat grim studies about factors that contribute to scholarly productivity—I instead reprinted my roll sheet, deleting from it the students who had dropped, adding to my sheet the dates I hadn’t included, composing and writing email to students whose attendance troubled me, and comparing students’ names with the photo roster (in a more or less futile effort to learn all their names). I then printed up the comments I’d spent time writing and stapled them to the appropriate student work. I sent some urgent email related to a committee I chair, answered email (related to national service for a scholarly organization) I should have answered yesterday, and sent out extremely important email to students clarifying an assignment I’d made orally in class. None of that was very pleasurable—I’d far prefer to have eaten a cupcake. And yet it was procrastination.

My procrastinating one task by completing others is typical of much procrastination (it’s sometimes called “procrastiworking”); it isn’t a question of choosing between something lazy and self-indulgent and something else that is hard work. Take, for instance, this poignant description of a scholar who keeps procrastinating applying for grants:
“Grant application season has rolled around once again. Amanda, who has in the past regularly failed to submit applications for research grants that many of her colleagues successfully obtain, feels that she really should apply for a grant this year. She prints out the information about what she would need to assemble and notes the main elements thereof (description of research program, CV, and so on) and—of course—the deadline for submission. She puts all of these materials in a freshly labeled file folder and places it at the top of the pile on her desk. But whenever she actually contemplates getting down to work on preparing the application—which she continues to think she should submit—her old anxieties about the adequacy of her research program and productivity flare up again, and she always find some reason to reject the idea of starting work on the grant submission process now (without adopting an alternative plan about when she will start). In the end the deadline passes without her having prepared the application, and once again Amanda has missed the chance to put in for a grant.” (Stroud, Thief 65)

Whatever complicated things are going on in this story, or in the minds of people who find themselves in Amanda’s situation, it’s absurd to say that she is choosing pleasure over pain.

I find this story heartbreaking, probably because the details are so perfectly apt. Of course she would neatly label the folder, and add it to a pile (I used to keep a section of my file cabinet labeled “Good Intentions”). And I have to add, she needs to get “down” to work on the applications—why is it always “down”? When people are beating themselves up about not doing writing (or grading), they tell me, “I just need to sit down and do it” or “buckle down and do it” or variations on those themes. Why don’t people need to “sit up” and work on the project? Or “get up and go” on it?

That this method of managing grants has never worked doesn’t seem to register, and so there is what Jennifer Baker calls “a cruel cycle”: “Procrastinators are inefficient in doing their work, they make unrealistic plans in regard to work, and they are so cowed by perfectionist pressures that they become incapable of incorporating advice or feedback into their future behavior” (Thief 168). Baker is here describing something much like the “naifs”: unwilling or unable to recognize that there is a pattern and work on getting a little better, they hope or expect to do it completely differently in the future. Applying the same sense of perfectionism to our work habits, we set unrealistic goals for our future selves, virtually ensuring that we fall back into imprudent delays. Because no grant application could possibly be as good as we want, we write no application at all. Instead of setting up fantasies of behaving completely differently in the future, we need to be honest about what we are doing now, and why we are doing it.

Procrastination of academic writing: different kinds and different solutions

marked up draft

I. Some ways of categorizing procrastination: “just in time,” “miscalculation,” “imprudent delay”

When people talk about “procrastinating,” they often mean “putting off a task,” but there are many ways of doing that: putting off paying bills till near the due date, avoiding an unpleasant conversation, rolling back over in bed instead of getting up early to exercise, delaying preparing for class till half an hour before it starts, ignoring the big stack of photos that should be put in albums, answering all of my email rather than proofing an article, writing a conference paper the night before, delaying going to the dentist, intending to save money for retirement but never getting around to it, eating a cupcake and promising to start the diet tomorrow, telling myself I cannot do my taxes until I have set up a complicated filing system, ignoring the stack of papers I need to grade until they must be returned. All of these involve putting off doing something, but they are different kinds of behavior with different consequences:
1) indefinite delaying such that the task may never get done;
2) allocating just barely (or even under) enough time necessary to complete a task (“just in time” procrastination);
3) a mismatch between my short-term behavior and long-term goals (procrastination as miscalculation).
Procrastinating proofreading by answering email is potentially productive (as long as I get to the proofreading in time), delaying going to the dentist might mean later dental work is more expensive and more painful, and putting off papers till the last minute might reduce my tendency to spend too long on grading.

It seems to me that many talented students use a “just in time” procrastination writing process for both undergraduate and graduate classes, largely because it works under those circumstances. (In fact, the way a lot of classes are organized, no other process makes sense.) “Just in time” writing processes work less well for a dissertation—they make the whole experience really stressful and very fraught, and they sometimes don’t work at all. It’s an impossible strategy for book projects—it simply doesn’t work because there aren’t enough firm deadlines. Shifting away from a “just in time” writing process to more deliberate choices means being aware of other writing processes, and can often involve some complicated rethinking of identity.

“Just in time” procrastination sometimes goes wrong, as when something unexpected arises and the allotted time turns out to be not nearly enough. Sometimes the consequences are trivial—a dog getting sick means I didn’t finish those last few papers and I have to apologize to students; my forgetting to bring the necessary texts home means I have to get to campus extremely early to prepare class there; I misunderstand the due date on bills and have to pay a late fee. But the consequences can be tragic: if there is a delay at a press, a reader/reviewer has serious objections, or illness intervenes, then a student may lose funding, a promising scholar may be denied tenure, a press may cancel a contract.

Procrastination as miscalculation, or the inability to make short-term choices fit our long-term goals, is the most vexing: what Christine Tappolet calls “harming our future selves” (Thief 116) or what Chrisoula Andreou calls “imprudent delay,” that is, procrastination as involving “leaving too late or putting off indefinitely what one should, relative to one’s goals and information, have done sooner” (Andreou, Thief 207). This kind of procrastination (imprudent delay) might mean choosing a short-term pleasure over a long-term goal (going back to sleep instead of getting up to exercise), delaying a short-term pain (putting off going to the dentist until one is actually in pain), or simply making a choice that is harmless in each case but harmful in the aggregate (spending time on teaching or service rather than scholarship). Imprudent delay isn’t necessarily weakness of will, as it doesn’t always mean doing something easy instead of something hard; it might mean choosing among different kinds of equally hard tasks, and it is only imprudent in retrospect, or in the aggregate.

Many books on time management and productivity focus on this kind of procrastination, and describe effective strategies for keeping long-term goals mentally present in the moment. Ranging from products (such as the Franklin-Covey organizers) to practices (such as David Allen’s “tickler” files), these methods of improving calculation seem to me to work to different degrees with different people under different circumstances. None of them works every time with every person, a fact that doesn’t mean the strategy is useless or the person is helpless, but it does mean that people might need to experiment among different strategies and products.

Imprudent delay, when it comes to academia, is complicated, perhaps because it is so often not a choice between eating a cupcake and exercising. After all, even if a scholar gets to a point in her career at which she comes to believe she has previously spent time on service that should have been spent on scholarship, there is probably, even in retrospect, no single moment at which she made the mistake. I can look back on a period of my career when I spent too much time on service and teaching, but I was asked by my Department Chair to do the service, so I didn’t feel that I could say no. My administrative position often involved meeting with graduate student instructors to discuss their classes, and I can’t think of a single conversation I wish I hadn’t had. I can think of things I wish I had done differently (some are discussed here), but I empathize with junior colleagues who carefully explain why they have taken on this or that task. And, as my husband will tell anyone who wants to listen, I still regularly take on too many tasks. But, I will say in my defense, I’m better.

Imprudent delay—failing to save for retirement, spending too much time on service, engaging in unnecessarily elaborate teaching preparation—never looks irrational in the short run. Phronesis, usually translated as “prudence,” is, for Aristotle, the ability to take general principles and apply them in the particular case. One reason “prudent” versus “imprudent” procrastination seems to me such a powerful set of terms is that the sorts of unhappy situations in which academics often find ourselves are the consequence of the abstract principle (“I want to have a book in hand when I go up for tenure”) not being usefully applied to this specific case (“Should I write another memo about the photocopier?”). That is a failure of phronesis.

Another reason that thinking of procrastination in terms of Aristotle seems to me useful is that his model of ethics is a practice of habits, which we can develop through our choices. We do not become different people, but we develop different habits, sometimes consciously. People with whom I’ve worked sometimes seem to have an ethical resistance to some time or project management strategies or writing processes because they don’t want to become that kind of person (a drudge, an obsessive, someone ambitious). Thinking that achieving success requires becoming a different person is not only unproductive but simply untrue.

Martha Nussbaum points out that Aristotle’s metaphor is aiming: making correct ethical choices is like hitting a target. If one has a tendency to pull to one side, then overcompensating in the aim will increase the chances of hitting the target. Andreou points out that there are things about which we have a lot of willpower, and others about which we have very little: “I may, for example, have very poor self-control when it comes to exercising but a great deal of self-control when it comes to spending money or treats” (Thief 212). The solution, then, is to use the self-control about spending money to leverage self-control in regard to exercise: meeting one’s exercise goal is rewarded by spending money. If, however, one has little self-control in regard to spending money, then trying to use monetary rewards/punishments to encourage exercise won’t work, since a person won’t really enforce whatever rules they’ve set for themselves.

A lot of people respond to procrastination with shaming and self-shit-talking, and my point is that those are both useless strategies. It’s more useful to try to figure out what kind of procrastination it is, and what’s triggering it (the next post).

Procrastination: introduction

weekly work schedule

“A writer who waits for ideal conditions under which to work will die without putting a word on paper.” (E.B. White, “E. B. White, The Art of the Essay No. 1” Paris Review)

Reason #3 I wanted to retire early was so that I could finish a bunch of projects. One of them is about scholarly writing. Someone asked that I pull out the parts about procrastination–that was about 10k words. Even when I brutally whacked at it, it was 4k, which is just way too much for a blog post. So I’ve broken it into parts. Here’s the first.

I haven’t edited or rewritten it at all, and I wrote this almost six years ago. I tried to move the footnotes into the text, but it’s still wonky as far as citations go. I didn’t want to put off posting it till it was perfect (the irony would be too much), so here goes.

Procrastination is conventionally seen as a weakness of will, a bad habit, a failure of self-control–narratives that imply punitive behavior is the solution. Those narratives ignore that procrastination isn’t necessarily pleasurable, and often doesn’t look like a bad decision in the moment. Putting off doing scholarship in favor of spending time and energy on teaching or service is not a lack of willpower, the consequence of laziness, or inadequate panic. But it is putting off tasks that Stephen Covey would call important but not urgent in favor of tasks that are important and urgent. Since it isn’t caused by lack of willpower or inadequate fear, it isn’t always solved by self-trash-talk or upping the panic.

Procrastination isn’t necessarily one thing, and so it doesn’t have one solution. Nor is it always a problem that requires a solution; dedicating barely enough time to a task can ensure we don’t spend more time on it than is necessary, can make a dull task more interesting (since it introduces the possibility of failure), and can be efficient. I once tried preparing class before the semester began by doing all the reading and making lecture notes during the summer. I had to reread the material the night before class anyway, so the pre-preparing meant I spent more time on teaching, not less. Grading papers is a task that will expand to fill the time allotted, as I could always read a little more carefully, word my suggestions more thoughtfully, or give more specific feedback. Leaving the most complicated four or five papers till the morning of class meant I had to get up at 4 in the morning, but it also meant I could only spend half an hour on each, and I was forced to be more efficient and decisive with my comments.

Many self-help and time management books promise an end to procrastination, but that is an empty promise. As long as we have more tasks than time, we will procrastinate. The myth that one can become a perfect time manager who doesn’t procrastinate can inhibit the practical steps necessary to become more effective with one’s time. People who procrastinate because they don’t want to be drudges, and who like the drama of panicked writing, resist giving up procrastination, since giving it up seems to suggest they have to become a different person. Some perfectionists procrastinate because they won’t let themselves do mediocre work—hoping to do perfect work, they may spend so much time doing one task perfectly that they get nothing else done, or they may wait till they feel they are capable of great work (if that moment never comes, they complete nothing), or they ensure that they have good excuses (such as running out of time) for having submitted less than perfect work. Unhappily, the same forces—the desire for a perfect performance—can inhibit the ability to adopt different practices in regard to procrastination.

The perfectionist desire to end procrastination can cause us to try to find the perfect system, product, or book–a quest that can turn someone into a person who never gets anything done. It’s possible to procrastinate by trying all sorts of new systems that prevent procrastination. We can fantasize about ending procrastination—so that we will, from now on, do all tasks easily, effortlessly, promptly, and without drama—in ways that are just as inhibiting as fantasizing about writing perfect scholarship. The point is not to become perfect, but to become better. The next few posts will describe some concepts and summarize some research that I found very helpful.

The salesman’s stance, being nice to opponents, and teaching rhetoric

books about demagoguery

I mentioned elsewhere that people have a lot of different ideas about what we’re trying to do when we’re disagreeing with someone—trying to learn from them, trying to come to a mutually satisfying agreement, trying to find out the truth through disagreement, trying to have a fun time arguing, and various other options. There are circumstances in which all of these (and many others) are great choices—I think it’s an impoverishment of our understanding of discourse to say that only one of those approaches is the right one under all circumstances.

We also inhibit our ability to use rhetoric to deliberate when we assume that only one approach is right.

I’ll explain this point with two extremes.

At one extreme is the model of discourse that has been called “the salesman’s stance,” the “compliance-gaining” model, rhetorical Machiavellianism, and various other terms. This model says that you are right, and your only goal in discourse is to get others to adopt your position, and any means is justified. So, if I’m trying to convert you to a position I believe is right, then all methods of tricking or even forcing you to agree with me are morally good or morally neutral.

From within this model, we assess the effectiveness of a rhetoric purely on the basis of whether it gains compliance. For instance, in an article about lying, Matthew Hutson ends with advice from a researcher whose work suggests that lying to yourself makes you a more persuasive liar.

“Von Hippel offers two pieces of wisdom regarding self-deception: “My Machiavellian advice is this is a tool that works,” he says. “If you need to convince somebody of something, if your career or social success depends on persuasion, then the first person who needs to be [convinced] is yourself.””

The problem with this model is clear in that example: if you’re wrong, then you aren’t going to hear about it. Alison Green, on her blog askamanager.org, talks about the assumption that a lot of people make about resumes, cover letters, and interviews—that you are selling yourself. People often approach a job search in exactly the way that Von Hippel (and, by implication, Hutson) recommends: going into the process willing to say or do whatever is necessary to get the job, being confident that you’ll get the job, lying about whether you have the required skills or experience (and persuading yourself you do).

Green says,

“The stress of job searching – and the financial anxieties that often accompany it – can lead a lot of people to get so focused on impressing their interviewers that they forget to use the time to find out if the job is right for them. If you get so focused on wanting a job offer at the end of the process, you’ll neglect to focus on determining if this is even a job you want and would be good at, which is how people end up in jobs that they’re miserable in or even get fired from.
And counterintuitively, you’ll actually be less impressive if it’s clear that you’re trying to sell yourself for the job. Most interviewers will find you a much more appealing candidate if you show that you’re gathering your own information about the job and thinking rigorously about whether it’s the right match or not.”

Von Hippel’s advice comes from a position of assuming that the liar is trying to get something from the other (compliance), and so only needs to listen enough to achieve that goal. The goal (get the person to give you a job, buy your product, go on a date) is determined prior to the conversation. Green’s advice comes from the position of assuming that a job interview is mutually informative, a situation in which all parties are trying to determine the best course of action.

If we’re trying to make a decision, then I need to hear what other people have to say, I need to be aware of the problems with my own argument, I need to be honest with myself at least and ideally with others. (If I’m trying to deliberate with people who aren’t arguing in good faith, and the stakes are high, then I can imagine using some somewhat Machiavellian approaches, but I need to be honest with myself in case they’re right in important ways.)

At the other extreme, there are people who argue that every conversation should come from a place of kindness, compassion, and gentleness. We shouldn’t directly contradict the other person, but try to empathize, even if we disagree completely. We should use no harsh words (including “but”). We might, kindly and gently, present our experience as a counterpoint. Learning how to have that kind of conversation is life-changing, and it is a great way to work through conflicts under some circumstances.

It (like many other models of disagreement) works on the conviviality model of democratic engagement: if we like each other, everything will be okay. As long as we care for one another, our policies cannot go so far wrong. And there’s something to that. I often praise projects like Hands Across the Hills or Divided We Fall that work on that model—our political discourse would be better if we understood that not all people who disagree with us are spit from the bowels of Satan. The problem is that some of them are.

That sort of project does important work in undermining the notion that our current political situation is a war of extermination between two groups because it reduces the dehumanization of the opposition. I think those sorts of projects should be encouraged and nurtured because they show how much the creation of community can dial down the fear-mongering about the other.

They are models for how genuinely patriotic leaders and media should treat politics—by continually emphasizing that disagreement is legitimate, that we are all Americans, that we should care for one another. But that approach to politics isn’t profitable for media to promote, and therefore isn’t a savvy choice for people who want to get a lot of attention from the media.

It also isn’t a great model for when a group is actually existentially threatened (as opposed to being worked into a panic by media). This model says, if we apply it to all situations, that, if I think genocide is wrong, and you think it’s right, I should try to empathize with you, find common ground, show my compassion for you. And somehow that will make you not support a genocidal set of policies? I do think that a lot of persuasion happens person to person, when it’s also face to face. I’ve seen people change their minds about whether LGBTQ people merit equal treatment by learning that someone they loved would be hurt by the policies they were advocating. I’ve also seen people not change their minds on those grounds. Derek Black described a long period of individuals being kind to him as part of his getting away from his father’s white supremacist belief system, but the guy went to New College; he was open to persuasion.

And I think it’s a mistake to think that kind of person-to-person, face-to-face kindness makes much difference when we are confronting evil. Survivors of the Bosnian genocides describe watching long-time friends rape their sister or kill their family. It isn’t as though Jews being nicer to and about Nazis would have prevented genocide. It wasn’t being nice to segregationists that ended the worst kind of de jure segregation. We have far too many videos that show being nice to police doesn’t guarantee a good outcome. People in abusive relationships can be as compassionate as an angel, and that compassion gets used against them. We will not end Nazism by being nice to Nazis.

That kindness, compassion, and non-conflictual rhetoric is sometimes the best choice doesn’t mean it’s always the only right choice. It can be (and often has been) a choice that enables and confirms extraordinary injustice. It’s often only a choice available to people not really hurt by the injustice. Machiavellian rhetoric is sometimes the best choice; it’s often not.


Thesis statements, topic sentences, and “good” writing

marked up draft

In something I’ve written about writing, I have a footnote, and a smart person, noticing that I had packed an awful lot into it, asked me about it. Their question was, more or less, whuh? This is the footnote:

“Here you’re in a bind. American writing instructors, and many textbooks, mis-use the term “thesis statement.” The thesis statement is a summary of the main point of the paper; it is not the same as the topic statement. Empirical research shows that most introductions end with a statement of topic, not the thesis. But, our students are taught to mis-identify the topic sentence as the thesis statement (e.g., so they think that “What are the consequences of small dogs conspiring with squirrels?” is a thesis statement). This is not a trivial problem, and I would suggest is one reason that students have so much trouble with reasoning and critical reading. I’m not kidding when I say that I also think it contributes significantly to how bad public argument is. You can insist on the correct usage (which is pretty nearly spitting into the wind), or you can come up with other terms—proposal statement, main claim, main point.”

I wrote it badly (I had said “most paragraphs end…” rather than “most introductions end…”). It’s now corrected. Still and all, what did I mean? I was saying that we should distinguish between thesis statements and other kinds of contracts, but why does that distinction matter? Before I can persuade anyone that it matters, I have to persuade people there is a distinction to be made.

Many teachers and textbooks tell students that “the introduction has to tell ‘em what you’re gonna tell ‘em, or your reader won’t know what the paper is about.” And they identify the thesis statement (the last sentence in a summary introduction) as the way to do that. Certainly, there is a sense in which that is good advice. You can see that students who have followed that advice get excellent scores on the SAT. Here are two sample “excellent” introductions for the SAT:

“In response to our world’s growing reliance on artificial light, writer Paul Bogard argues that natural darkness should be preserved in his article “Let There be dark”. He effectively builds his argument by using a personal anecdote, allusions to art and history, and rhetorical questions.”

“In the article, “Why Literature Matters” by Dana Gioia, Gioia makes an argument claiming that the levels of interest young Americans have shown in art in recent years have declined and that this trend is a severe problem with broad consequences. Strategies Gioia employs to support his argument include citation of compelling polls, reports made by prominent organizations that have issued studies, and a quotation from a prominent author. Gioia’s overall purpose in writing this article appears to be to draw attention towards shortcomings in American participation in the arts. His primary audience would be the American public in general with a significant focus on millenials.”

Those are summary introductions, with the thesis statement (that is simultaneously a partition) very clearly stated. Thus, as far as helping students get good SAT scores, it’s pretty clear that teachers and textbooks are right to tell students to write summary introductions, and land that thesis hard in the introduction. I would say, based on my experience, that, although college teachers make fun of the “five-paragraph essay,” a non-trivial number of them do still want a summary introduction with that thesis landing hard, and a paper that is a list of reasons. Given that the thesis-driven format for a paper is rewarded, it might seem that I’m being a crank to say there is a difference between a thesis statement and a topic sentence (or, more accurately, a “contract”). So, am I?

Or, to put it the other way, are teachers and textbooks who insist that “good” writing has a summary introduction right? Is the SAT testing “good” writing?

One way to test those hypotheses is to look at essays that are valued in English classes, such as Martin Luther King, Jr.’s “Letter from Birmingham Jail” or George Orwell’s “Politics and the English Language.” Here’s the introduction from King:

“My Dear Fellow Clergymen:
While confined here in the Birmingham city jail, I came across your recent statement calling my present activities “unwise and untimely.” Seldom do I pause to answer criticism of my work and ideas. If I sought to answer all the criticisms that cross my desk, my secretaries would have little time for anything other than such correspondence in the course of the day, and I would have no time for constructive work. But since I feel that you are men of genuine good will and that your criticisms are sincerely set forth, I want to try to answer your statement in what I hope will be patient and reasonable terms.”

Here is the introduction from Orwell:
“Most people who bother with the matter at all would admit that the English language is in a bad way, but it is generally assumed that we cannot by conscious action do anything about it. Our civilization is decadent and our language — so the argument runs — must inevitably share in the general collapse. It follows that any struggle against the abuse of language is a sentimental archaism, like preferring candles to electric light or hansom cabs to aeroplanes. Underneath this lies the half-conscious belief that language is a natural growth and not an instrument which we shape for our own purposes.

“Now, it is clear that the decline of a language must ultimately have political and economic causes: it is not due simply to the bad influence of this or that individual writer. But an effect can become a cause, reinforcing the original cause and producing the same effect in an intensified form, and so on indefinitely. A man may take to drink because he feels himself to be a failure, and then fail all the more completely because he drinks. It is rather the same thing that is happening to the English language. It becomes ugly and inaccurate because our thoughts are foolish, but the slovenliness of our language makes it easier for us to have foolish thoughts. The point is that the process is reversible. Modern English, especially written English, is full of bad habits which spread by imitation and which can be avoided if one is willing to take the necessary trouble. If one gets rid of these habits one can think more clearly, and to think clearly is a necessary first step toward political regeneration: so that the fight against bad English is not frivolous and is not the exclusive concern of professional writers. I will come back to this presently, and I hope that by that time the meaning of what I have said here will have become clearer.

“These five passages have not been picked out because they are especially bad — I could have quoted far worse if I had chosen — but because they illustrate various of the mental vices from which we now suffer.”

Neither of those is a summary introduction, and neither has a thesis statement in it.

When I point this out to people who advocate the “you must have your thesis in your introduction” rule, they say that “I want to try to answer your statement in what I hope will be patient and reasonable terms” and “they illustrate various of the mental vices from which we now suffer” are thesis statements. But they aren’t, or, more accurately, it isn’t useful to use “thesis statement” in such a broad way. A “thesis statement” is (or should be used for) the statement of the thesis—that is, the sentence (or, more often, series of sentences) that clearly states the main argument the author is making.

If we use it that way, then it’s clear that neither King nor Orwell has the thesis in the introduction. King doesn’t have a single sentence that summarizes his argument. It’s a complicated argument, but it is stated most clearly in eleven paragraphs almost at the very end of the piece (from “I have traveled” to “Declaration of Independence”).

Orwell looks as though he’s giving a thesis, but he isn’t—he gives a really clear partition. (“These five passages have not been picked out because they are especially bad — I could have quoted far worse if I had chosen — but because they illustrate various of the mental vices from which we now suffer.”) He gives a kind of hypo-thesis (“Now, it is clear that the decline of a language must ultimately have political and economic causes”), something much less specific than what he actually argues. His thesis is most clearly stated at the end (from “What is above all needed” through his six rules).

I could give other examples (and often do) of scholarly articles, even abstracts, long-form journalism, and discourse oriented toward an opposition audience of various kinds that show that clever rhetors delay their thesis when what they’re saying is controversial. That’s Cicero’s advice—if you have a controversial argument, delay it till after the evidence.

But if “I want to try to answer your statement in what I hope will be patient and reasonable terms” is not a thesis statement, what is it? It’s more accurately called a topic sentence, but some people call it a “contract.” It states, very clearly, what the topic of the letter will be. It establishes expectations with the reader about the rest of the piece.

At this point, it might seem that I’m being a pedant to insist on the distinction, but I think it makes a difference (one I can’t go into here). Here, I’ll just make a couple of other points. This advice—“tell ‘em what you’re gonna tell ‘em”—isn’t just presented as a way to write a particular genre (teachers and test writers like that genre because it is extremely easy to grade); it’s presented as “good” writing. And it isn’t. No one would read the sample student introductions and think, “Oh boy, I want to read this whole paper,” unless they were being paid to read them. But we’d read King or Orwell. So, it isn’t good writing—it’s easy-to-grade writing.

What I’m saying is that there is a genre (“student writing”) that is not the same as writing we actually value. We’re teaching students to write badly.

I have sometimes taught a course on how high school teachers should teach writing. At one point, I had a class of genuinely good people who were nonetheless very focused on enforcing prescriptive grammar and the genre of student writing, regardless of my trying to tell them about the problems with both. I don’t have a problem with people teaching students how to perform the genre of student writing, but I do have a problem with people teaching anyone that that genre is not just student writing but “good” writing. And that’s what this group of students kept doing.

So, I gave them a passage of writing, and asked them to assess it, and they all trashed it. It didn’t have a summary introduction, it didn’t start with a thesis, it didn’t have paragraphs that began with main claims. They agreed that it was badly written. And then I told them that they were the high school teacher who told James Baldwin he was a bad writer.


Teaching with microthemes

Over time, I have evolved to having students submit “microthemes” (the wrong word) before class, and I use them for class prep. I keep getting asked about that practice, so this is my explanation.

Here’s what I tell students in my syllabus.

———

Microthemes. Microthemes are exploratory, informal, short (300-700 words) responses to the reading (they can be longer if you want). They have a profound impact on your overall grade both directly and indirectly; doing all of them (even turning in something that says you didn’t do the reading) can help your grade substantially. Since the microthemes are on the same topics as the papers, they also serve as opportunities to brainstorm paper ideas.
The class calendar gives you prompts for the microthemes, but you should understand that those are questions to pursue in addition to questions you pose yourself. That is, you are always welcome to write simply about your reaction to the reading (if you liked or disliked it, agreed or disagreed, would like to read more things like it). You’ll find the microthemes most productive if you use them to pose any questions you have–whether for me or for the other students. They’re crucial for my class preparation. So, for instance, you might ask what a certain word, phrase, or passage from the reading means, or who some of the names are that the author drops, or what the historical references are. Or, you might pose an abstract question on which you’d like class discussion to focus. I’m using these to try to get a sense of whether students understand the rhetorical concepts, so if you don’t, just say so.

A “minus” (-) is what you get if you send me an email saying you didn’t do the reading; you get some points for that and none for not turning one in at all. So failure to do a bunch of the microthemes will bring your overall grade down. If you do all the microthemes, and do a few of them well, you can bring your overall grade up. (Note that it is mathematically possible to get more than 100% on the microthemes—that’s why I don’t accept late microthemes; you can “make up” a microtheme by doing especially well on another few.)

Microthemes are very useful for letting me know where you stand on the reading–what your thinking is, what is confusing you, and what material might need more explanation in class (that’s why they’re due before class). In addition, students often discover possible paper topics in the course of writing the microthemes. Most important, good microthemes lead to good class discussions. The default “grade” is a √, except for ones in which you say that you didn’t do the reading, or ones that earn check plusses, plusses, or check minuses. (So, if you don’t get email back, and it wasn’t one saying you hadn’t done the reading, assume it got a √.)

If you get a plus or check plus (or a check minus because of lack of effort), I’ll send you email back to that effect. (I won’t send email back if it’s a minus because you said you didn’t do the reading—I assume you know what the microtheme got.) If you’re uncomfortable getting your “grade” back in email, that’s perfectly fine—just let me know. You’ll have to come to office hours to get your microtheme grade. You are responsible for keeping track of your microtheme grade. There are 26 microtheme prompts in the course calendar; up to a 102 will count toward your final grade. There are five possible “grades” for the microthemes [the image at the top of this page].

Please put RHE330D and micro or microtheme in the subject line (it reduces the chances of the email getting eaten by my spam filter). Please, do not send your microthemes to me as email attachments–just cut and paste them into a message. Cutting and pasting them from Word into the email means that they’ll have weird symbols and look pretty messy, but, as long as I can figure out what you’re saying, I don’t really worry about that on the microthemes. (I do worry about it on the major projects, though.) Also, please make sure to keep a copy for yourself. Either ensure that you save outgoing mail, or that you cc yourself any microtheme you send me (but don’t bcc yourself, or your microtheme will end up in my spam folder).

=========

I find that I can’t explain microthemes without explaining how I came around to them.

I have three degrees in Rhetoric from Berkeley, for complicated reasons, none of which ever involved deciding at the beginning of one degree that I would get the next. I always had other plans. And, for equally complicated reasons, I ended up not only tutoring rhetoric but acting as an informal TA (what we now call a Teaching Fellow) for rhetoric classes at some point (perhaps junior or senior year). And then I was the TA (a person who graded something like 3/5 of the papers and taught 1/5 of the course—a great practice) for two years, and then the Master Teacher (graded 2/5 of the papers and taught 4/5 of the course). Berkeley, at that point, was a very agonistic culture, and so “teaching” involved walking into class and asking what students thought of the reading; I was just a kind of ref at a soccer game.

The disadvantage of all that time at one place and in one department was that I was very accustomed to a particular kind of student. Teaching rhetoric at Berkeley at that moment in time (rhetoric was not the only way to fulfill the FYC requirement, and it drew the most argumentative students) meant managing all the students who wanted to argue. And, given my Writing Center training, I spent a lot of time in individual conferences. My teaching load as a graduate student was one class per quarter.

That training prepared me badly in several ways. First, it was a rhetoric program, and the faculty were openly dismissive of research in composition. Second, I was only and always in classrooms in which the challenge was how to ref disagreement. Third, I adopted a teaching practice that relied heavily on individual conferences.

I went from that to teaching a 3/3 (or perhaps 3/2—I was always unclear on my teaching load) in the irenic Southeast. Students would not disagree with each other—if they had to, they would preface their disagreement with, “I don’t really disagree but…” In an irenic culture, people actually disagree just as much as they do in an agonistic one, but they aren’t allowed to say so.

Granted, we can never get students to give us some weird kind of audience-free reaction to the reading (if there is such a thing), but I had lost the ability to get a kind of almost visceral reaction to the reading, a sense of the various disagreements that people might have. I also didn’t have the time to meet with students individually as much.

I tried various strategies, such as having students keep a “sketchbook” (I can’t remember who suggested that) in which they responded to the reading, but I couldn’t read the book (since, in those days, it was a physical book) till after class, by which time it was too late for me to respond to what they’d said. But I did notice that students’ responses to the reading were more diverse than whatever happened in class. For one thing, students writing to me would say things they wouldn’t say in front of the class.

Sometimes too much so. There was a problem with students telling me rather too much about how the reading reminded them of very private issues. At some point I tried calling them “reading responses,” but that name flung students too often in the opposite direction, and they just summarized the readings.

I moved on to a place and time with more digital options—discussion boards, blogs—and found that they were great in lots of ways. Introverts who won’t talk in class will post on a blog, but there was an issue of framing. In discussions of any kind, the first couple of speakers frame the debate, and later speakers generally respond from within that frame. So, as opposed to the “sketchbooks,” the blog posts were dialogic rather than diverse (although there weren’t as many plaints about a romantic partner). And even I recognized that a student could easily fake having done the reading, simply by piggybacking on other posts. The discussion board got me no useful information about how my students had reacted to the reading.

“Reading responses” were too private, but blogs were too prone to in-group pressures.

I honestly don’t know where I found the term “microthemes,” and it’s still wrong (although less wrong than it used to be). Were I to do my career over, I would find a different term, but I don’t know what it would be.

The problem is that it has the term “theme” in it, and so students who have been trained to write a “theme” try to write a five-paragraph essay. Since fewer high school teachers ask for themes, this problem seems to be dissipating.

There are a lot of models of what makes for good teaching, and one is that a good teacher has students engage with each other—a good teacher is the teacher I was at Berkeley, just letting students argue with each other and acting as a ref at a soccer game. And, to be honest, that was fine at Berkeley because, while racists and misogynists and homophobes might have whined (and did) that people disagreed with them, people did disagree with them. Their whingeing was just that someone disagreed with them.

It got more complicated in an irenic culture, where students didn’t feel comfortable disagreeing with anything. And, by the time I’d found out about the disagreement, it was hard to figure out how to bring it into the class (I learned that you do it through your reading selections, but that’s a different post). The irenic culture meant that, if a student said something racist, other students didn’t feel comfortable saying anything about it (especially if the racist thing was within the norms of what I always think of as “acceptable racism”).

Behind all of this is that we are at a time when there is a dominant and incoherent model of what makes good teaching: it is about having a powerpoint (meaning you aren’t listening to what these students need, and you’re transmitting knowledge you’ve decided in advance they need) and having discussion in class in which all student views are treated as equally valid.

That model is fine for lots of classes, but it’s guaranteeing a train wreck if you’re teaching about racism, or any issue about which a teacher is willing to admit that racism might have an impact. Since we’re in a racist world, asking that students argue with one another as though their positions are equally valid, when racism ensures they aren’t equally valid, is endorsing racism.

Yet, in a class about racism, it’s important to engage the various forms of racism that are plausibly deniable racism. Most racists don’t burn crosses or use the n word, but they make claims that they sincerely think aren’t racist. As I’ve said, this is rough work, and it really shouldn’t be on the shoulders of POC—white faculty should take on the work of explaining to white racists who think they aren’t racist that they are.

If we think of the discursive space of a class as just the moment of the class, then this is almost impossible to do, and it’s racist to think that non-racist students should have to explain to racist students that they’re racist. It’s racist because the notion that a classroom is some kind of utopic space in which the hierarchies of our culture are somehow escaped lets those hierarchies skid past consideration, and thereby those hierarchies are enabled by “free” discussion.

But, if you’re teaching a class in which you want to persuade people to think about racism, you have to have a class in which people can express attitudes that might be racist. Open discussion won’t work, and blogs still have a lot of discursive normativity, and so you need a way in which students can be open with you and say things they don’t want to say in front of other students.

And so you have microthemes.

Students feel more free to express views that they wouldn’t say in front of other students, and they’ll tell me if they haven’t done the reading, so I walk into class knowing how many students didn’t do it.

There are some disadvantages. You can’t reuse old lecture notes; you can’t prepare a powerpoint. And, since I’m the one presenting views that students have, there is a reduction in student-to-student conversation (it gets me hits on teaching observations but, since student-to-student interaction is deeply problematic in terms of power, I’m okay with that).

And, since undergraduate lives are, well, undergraduate lives, students don’t always remember what they’ve said in microthemes. And there is a tendency for students (especially graduate students) to feel that, since they’ve already told me what they think, they don’t need to say it in class.

Still and all, I wish I’d adopted microthemes years before I did, though with a different name.