On the precious little snowflakes who want to ban _To Kill a Mockingbird_

We have all read about the precious little snowflakes who want great pieces of literature banned because they feel that their group is attacked by some work generally considered by scholars to be great. This is a rallying point for the Right-Wing Outrage Machine (RWOM): that a faculty of political correctness is creating effeminate and sensitive students, who go on to insist that students not be allowed to read a book. That effeminate group is offended by something about the book, perhaps a word, more commonly the representation of a character who might be taken to represent their group. Perhaps the character is the only member of that group represented, or perhaps every member of that group is represented as ignorant, violent, and criminal. The argument, according to the RWOM, is that these people say that you can’t have literature in K-12 classrooms that makes some of the students feel bad about their group, and the RWOM is clear that they think that is a bad thing to do.

This claim—that people object to great pieces of literature on the grounds that they make them feel bad about their group—is an important plank in the platform of the RWOM: that “liberals” are too precious to have their concerns taken seriously. “Liberals” are simultaneously sensitive and authoritarian—they can’t stand criticism of their group, and they will silence anyone who criticizes them. Thus, “liberals’” views on policy issues can be dismissed—they don’t understand that democracy is about being willing to be tough and listen to criticism of our in-group.

So, this issue, as far as the RWOM is concerned, isn’t just about the book—it’s about whether “liberals’” concerns need to be considered at all.

And, for the RWOM, To Kill a Mockingbird (TKAM) is a case in point. There are people who object to this book being taught in K-12 because it portrays their group unfavorably. And the RWOM is univocal that those people are idiots, whose views on politics are so impaired (soft, weak, sensitive) that they shouldn’t even be considered in political discourse.

The argument about TKAM, then, isn’t just an argument about that book—it’s an argument about who should even have a voice in democratic political discourse. Democracy, as the founders said, is about disagreement. The principle of democracy is that a community benefits from different points of view. The RWOM argument about trying to censor TKAM is pretty clear: the people who want it banned from high schools are weak people who don’t understand democracy. It isn’t just that their views are bad, but that they are such weak and fragile people that their entire group should not be considered when we are thinking about policy.

Banning the book is “caving in,” and the people who want it banned are stupid. Banning TKAM is a war on learning. The National Review asserts that the records suggest that all attempts to ban the book come from people who don’t like books with the “n word” in them (that isn’t true, but it is one of the reasons often given).

“But a different sin concerns today’s anti-Mockingbird crowd. In fact, the last time Mockingbird was challenged solely for its depiction of sexual intercourse, rape, or incest was in 2006 in Brentwood, Tenn. Since then, all five challenges — in 2008, 2009, 2010, 2012, and 2016 — have involved parents or children made uncomfortable by the use of the “N-word” or the book’s depiction of racism.”

That National Review article condemns, in no uncertain terms, people who want the book banned because it makes them uncomfortable. So, as far as the National Review is concerned, wanting the book banned is, prima facie, evidence that your entire political group is made up of idiots.

The RWOM is unusually unanimous on this point: people who object to teaching TKAM because it hurts their feelings are fragile little snowflakes whose views can be dismissed from consideration on the grounds that they are…well…too fragile. And they are clear that this isn’t a partisan issue: “But to consider To Kill a Mockingbird racially divisive is exactly backwards. The book is invaluable both for introducing students to the reality of America’s racial past and for exposing its injustices.” As in the above cases (both minor and major media), they were unequivocal that they were operating on a principle of education: that, as the National Review says, “Eliminating the hard stuff eliminates the reality.”

In other words, they aren’t taking this position because of partisan politics: it’s a principle that they hold universally.

For the sake of argument, let’s treat that as a principle. I have often argued that the RWOM makes arguments that present themselves as thoroughly, totally, and deeply principled, but are actually rabid factionalism. They were opposed to pedophilia till a pedophile was the GOP candidate for Senate; they wanted Clinton impeached for groping till they had a groper in chief. The RWOM says that their stance on TKAM is principled. Is it?

And here it’s useful to distinguish tu quoque from an argument from principle. If a person really cares about a principle, they will condemn anyone—in-group or not—for violating that principle. If concern about the principle is just a handy brick to throw at the outgroup, then, when it’s pointed out that they are violating a principle they claim to be sacred, they will say, “The out-group does it too!” That’s tu quoque. It’s a fallacy.

More important, it’s an admission that the principle didn’t matter. If I say, “You are bad because you pet squirrels,” then I am making an argument that has the major premise “people who pet squirrels are bad.” If I later defend someone who pets squirrels, I have violated the logic of my own argument. I am putting faction above principle. I don’t think someone is bad for petting squirrels—I think out-group members are bad for doing that, but not in-group members.

So, is the RWOM flinging itself around about sensitive snowflake lefties on the basis of a principle about democracy and the need to read unpleasant books? Or is this about faction?

Most of the articles I could find on the right were about the Biloxi, Mississippi controversy, when a school board decided that the book would not be required reading in eighth grade English classes, and I couldn’t find any major right-wing media who endorsed banning the book. So, it might look as if the RWOM is acting on principle.

But there is some sneaky partisanship: snowflakes are lefties, and people who want to ban the book are fragile snowflakes—a term that has become a synonym for social justice warriors. So, condemning the specific policy point of wanting TKAM banned isn’t just a condemnation of that policy point—as far as the RWOM is concerned, the stance of various groups about banning TKAM can be used to condemn the entire group.

The RWOM is so drunk on outrage about the fragile lefties who want the book banned that they make objection to the book, on principle, a sign of being partisan: “I wonder if any of the Biloxi school district’s administrators know how to read.” Obviously, anyone who wants it banned is an idiot, regardless of party.

And it’s interesting to me how the metaphors work in this argument—the people who want the book banned from classrooms are girly (weak, fragile, frail, sensitive) while the people who want it taught are masculine (strong enough to see criticism of America), anti-racist (they univocally endorse Atticus Finch’s stance), and, unlike flaccid lefties, not people who demand “to soften education, to remove any pain or discomfort.” They are firm, strong, and standing tall. (The tendency on the part of the RWOM to use metaphors of hardness for their view and softness for the opposition is both sad and hilarious.)

Were this a principled stance—if the people who have worked themselves into outrage about Biloxi are acting on principle and not just partisanship—then the National Review would fling accusations of flaccidity and girliness at anyone who objected to TKAM on the grounds that it criticized their group. Do they?

Nope.

There are two very different ways this book is challenged.

First, there is the argument that it is racist, and that’s complicated. That argument is public because it gets to school boards—the first thing a parent does when objecting to a book is go to the teacher, then the principal, so going to the school board means the teacher and principal are holding their ground.

So, what, exactly, are the arguments that TKAM is racist?

Well, for one thing, it uses the ‘n word’ a lot. And here I will say that I frequently teach material with racist epithets in it, and I make sure students know it on the first day of class. I believe, firmly, in the notion that students should be warned about what they’re getting into, and students who don’t want to read anything with racist epithets shouldn’t take the class. That isn’t because there’s anything wrong with students who would rather not read a lot of appalling racist things, but because they have a right to make choices as to whether they will read them. So I try to be clear about just how awful the reading will be.

My courses are not required; my students are college students. I thoughtfully design my classes so that students can choose to skip a fair number of readings a semester and still get a good grade on the “keeping up with the reading” part of the grade because I know that some of the readings may be unhelpfully provocative, and they can miss up to two weeks of class with no penalty. So, students who are “triggered” by readings can make strategic choices about readings and attendance. High school students don’t have those choices.

The use of the ‘n word’ in TKAM is complicated, as it is in comedy, and high school students aren’t very good at that kind of complexity, and it is used in the book in a way intended to inflict damage. Granted, one can (and, I think, should) read the book as condemning that usage, but reading the book that way involves understanding other minds and perspective-shifting, and not all high school students are there. In other words, as anyone remotely aware of scholarship in rhetoric, reader-response, or, well, basic teacher-training knows, whether a particular class can understand the complicated relationship between the narrator and the events being narrated is something only the teacher of that class could know.

But, let’s set aside the notion that audiences are different from one another and that people receive texts in different ways (really, that only means setting aside sixty years of research, so not that much).

There is another argument, mentioned above. Malcolm Gladwell has made this argument best, and I would simply add that there is a toxic and racist narrative about the Civil Rights movement in our world. That narrative is that people were racist—meaning they irrationally hated everyone who wasn’t “white” and knew that they hated everyone and knew it was irrational. So, a racist person got up in the morning and said, “In every way and every day I will irrationally hate all other races.” As long as you didn’t say that (if, for instance, you said to yourself, “I will only rationally hate all other races”), you weren’t racist.

This is the classic move of feeling good about your decisions because you could imagine someone who was behaving worse. Cheating on this exam by glancing over is okay because you didn’t get the whole exam ahead of time like someone might. Cheating That Race on the rent is okay because you didn’t try to evict them for their race. Adolf Eichmann justified his racism because he wasn’t like Julius Streicher.

What did Atticus Finch do? He, against his will, defended a black man whom he knew to be innocent in a case he knew to be entirely the kind of case Ida B. Wells-Barnett had already named years before. And, throughout the book, he insisted that the racism that would put Tom Robinson to death was one that could (magically?) be cured if people were… what? nicer? less redneck?

Finch acknowledges that the system is SO racist that Robinson telling the truth will tank his case. Robinson mentions that he was nice to the young white woman because he felt sorry for her. And Finch flinches. That moment is why the movie, and the book, are racist.

He knew he lost the case at that moment because he had a racist jury. So, does he try to do anything about their racism? Nope.

Instead, the moral center of the tale says that you need to be nice to racists and hope they’ll be a little bit less racist.

That’s racist.

I love the book. I love that one of my sisters called me “Scout” for a while because I looked like Scout. The movie and book rocked my world, and helped me to see how racist my community and culture were. It was a great book. Now it’s racist.

In its era, it wasn’t. A major issue in 1960 was that “good” people accommodated the KKK, lynchings, Citizens Councils, and that juries couldn’t be counted on to do the sensible thing. So, something that said that the KKK is not actually okay, and that juries that endorsed state-sponsored terrorism were bad was making a useful argument.

We’re way beyond that. There are various problems with TKAM in our era. Atticus Finch is a white savior, his whole stance is the progressive mystique, and the basic message of the story is that racists are rednecks, but we should all submit in a civil fashion to racist justice systems while privately bemoaning that we can’t get a better outcome. (Too bad about Tom!) To be clear, had more people in the South been like Finch in 1960 the world would have been a better place. But, in 2017 we don’t need to make heroes of people who believe that racism is a question of individual intention and feeling, and who think there are good people on both sides. There aren’t. There weren’t. Atticus was wrong about that.

And a text that can make white students feel that racism is over because it isn’t as bad as it was then, and that they would totally have been Atticus Finch (even though they do nothing that involves the same level of risk his actions involved) doesn’t do any kind of anti-racist work. It might even (albeit unintentionally) endorse racist beliefs, insofar as it makes all racism an issue of personal feeling.

This isn’t 1960, and what Finch proposes (and does) isn’t enough for where we are now. That’s another way that people can argue it’s racist—that it can make people feel that we just need to be like Finch and racism will end (or worse yet, that racism did end). So, the argument that the book is racist isn’t a stupid argument, and it certainly isn’t one that assumes some inability to handle difficult or unpleasant material—on the contrary, it’s grounded in the notion that TKAM is simplistic. And, so, as far as the Right Wing Outrage Machine goes, I am a precious and fragile snowflake because anyone who makes the kind of argument I am making is a snowflake.

But, let’s consider fairly the RWOM argument that lefties are weenies who want to silence free speech. Granted, the RWOM never engages the argument I made above—a nuanced and complicated argument about TKAM. Their argument is (as I hope I’ve shown) the false argument that anyone who objects to TKAM being taught in K-12 is a weenie who doesn’t want to hear criticism of their in-group.

If you are intellectually generous, you can find an implied syllogism in the RWOM outrage about TKAM: Lefties are people whose views can be dismissed because they oppose texts like TKAM on the grounds that it offends their feelings about their in-group.

That’s a potentially logical argument, an argument from principle: anyone who objects to TKAM on the grounds that it offends their feelings about their in-group is promoting a political agenda we should dismiss.

Recently, I spent the day with high school teachers from various places in Texas, and the issue of TKAM came up, especially their being told they couldn’t teach it. I was familiar with the cases when it came to school boards, and was willing to defend the case that it wasn’t a useful book for teaching about racism because we’ve moved beyond when aversive racism was the major issue, but that wasn’t the main complaint for any of them.

Every one of them said that the book was pulled because parents of white students complained that it made white Southerners feel bad about their past. They complained to the principal, and the book was pulled.[1] That’s the second reason the book is pulled, and you can see it in the ALA list of reasons the book is challenged.

So, I’m sure, now that I’ve said that racist white Southerners feel hurt about TKAM the RWOM will, because it’s a principle about criticism, insist that TKAM be taught. Who is the snowflake here?

I’m sure, since the Right Wing Outrage Machine is all about principle, they’ll now look into this issue.

I’m also sure I have a unicorn in my garden that poops gold.

[1] Here is the interesting point. Yes, parents who didn’t want TKAM taught because of the n word, and because of complicated issues about its racism, went to school boards. Presumably they didn’t first go to the school boards; they went to the principal and didn’t get anywhere, so they kept taking it up the ladder. Parents who didn’t want their white students to have to confront white racism went to the principal, and got their way. In other words, people who wanted to protect the fragile feelings of white Southerners didn’t need to go to the School Board—they could count on principals protecting the feelings of their precious snowflake white students who didn’t want to hear that segregation might have been bad. Parents with more complicated issues had to go to the School Board.

Teaching about racism from a position of privilege

I’ve taught a course on rhetoric and racism multiple times (I think this is the third, but maybe fourth). It came out of a couple of other courses—one on the rhetoric of free speech, and the other on demagoguery, but also from my complete inability to get smart and well-intentioned people to engage in productive discussions about racism.

I never wanted to teach a class on racism because I thought that there wasn’t really a need for a person who almost always has all the privileges of whiteness to tell people about racism. But I had a few experiences that changed my mind. And so I decided to do it, but it is the most emotionally difficult class I teach, and it is really a set of minefields, and there is no way to teach it that doesn’t offend someone. And yet I think it’s important, and I think other white people should teach about racism, but with a few caveats.

Like many people, I was trained to create the seminar classroom, in which students are supposed to “learn to think for themselves” by arguing with other students. The teacher was supposed to act as referee if things got too out of hand, but, on the whole, to treat all opinions as equally valid. I was teaching a class on the rhetoric of free speech—with the chairs in a circle, like a good teacher—when a white student said, “Why can black people tell jokes about white people, but white people can’t tell jokes about black people?”

And all the African-American students in the class shoved their chairs out of the circle, and one of them looked directly at me.

That’s when I realized how outrageously the “good teaching” method—in which every opinion expressed by a student should be treated as just as valid as the opinion of every other student—was institutionalized privilege.

What I hadn’t realized till that moment was that the apparently “neutral” classroom I had been taught to create wasn’t neutral at all. I was trained at a university and a department at which nonwhites and women were in the minority, and so every discussion in which all views are treated as equal in the classroom necessarily meant that straight male whiteness dominated, just in terms of sheer numbers. Then I went to a university that was predominantly women, and white males still dominated. White males dominate discussion, while white fragility ensures that treating all views as though they’re equal is doing nothing of the kind. The “neutral” classroom treats the white student’s hurt feelings at being called racist as precisely the same as anything racist he might say. And they aren’t the same.

That “liberal” model of class discussion is so vexed, and so specifically vexed in terms of race, gender, and sexuality. Often being one of few women in a class, and not uncommonly one of few who openly identified as feminist, I was often asked to represent what “feminists” thought about an issue, and I’ve unhappily observed (or been in) classes where the teacher asked a student to speak for an entire group (“Chester, what do gay people think about this?”). It’s interesting that not all identities get that request to speak for their entire group. While I have seen teachers call on a veteran to ask what the entire class of “veterans” thinks, I have never been in a class where anyone said, “Chester, what do working-class people think about this issue?” I’ve also never been in a class, even ones where het white Christian males were in the minority, where anyone asked a het white Christian male to speak for all het white males.

The most important privilege that het white Christian males have is the privilege of toggling between individualism and universalism on the basis of which position is most rhetorically useful in the moment. In situations in which het male whiteness is the dominant epistemology, someone with that identity can speak as an individual, about his experience. When he generalizes from his experience, it’s to position himself as the universal experience. Het white males are simultaneously entirely individual and perfectly universal.

The “liberal” classroom presumes people who are speaking to one another as equals, but what if they aren’t? The “liberal” classroom puts tremendous work on identities who walk into that room as not equal—they have to be the homophobic, racist, sexist whisperers. That isn’t their job. That’s my job. I realized I was making students do my work.

That faux neutrality also guarantees other unhappy classroom practices. For instance, students who disagree with that falsely neutral position do so from a position of particularity. The “normal” undergrad has asserted a position which seems to be from a position of universal vision, and so any student who refutes his experience is now not only identifying with a stigmatized identity, but self-identifying as a speaker who is simultaneously particular and a representative of an entire group. When your identity is normalized, you claim to speak for Americans; when your identity is marked as other, you speak for all the others in that category.

There’s a weird paradox here. Both the het white Christian male and the [other] are taken as speaking for a much larger group, but in the case of the het white male it’s that he is speaking for humanity as a whole. If he isn’t, if his identity as het white male isn’t taken as universal in a classroom, then some number of people in that category will be enraged and genuinely feel victimized and dismiss as “political correctness” that they have to honor the experience of others as much as they honor their own experience.

What the white panic media characterizes as “political correctness” is rarely about suppression of free speech (they’re actually the ones engaged in political correctness)—it’s about holding all identities to the same standards of expression. The strategic misnaming of trying to honor peoples’ understanding of themselves as “political correctness” ignores the actual history of the term, which was about pivoting on a dime in order to spin facts in a way that supported faction. In other words, the whole flinging poo of throwing the term “political correctness” at people asking for equality is strategic misnaming and projection.

The second experience was in a class about the history of conceptions of citizenship. I was trying to make the point that identification is often racial, and that the notion of “universal” is often racist. I gave the class the statistics about Congress—that it was about 90% male and also about 90% (or more) white. I asked the white males in the class whether they would feel that they were represented if Congress were around 90% nonwhite and nonmale. Normally, this sets off light bulbs for students. But, this time, one student raised his hand and said, “Well, yes, because white males aren’t angry.”

Of course, that isn’t true, and I’d bet they’d be pretty angry about not being represented, but, even were it true, it would be irrelevant. That student was assuming that being angry makes people less capable of political deliberation—that anger has no place in political argument. That’s an assumption often made in the “liberal” classroom, in which people get very, very uncomfortable with feelings being expressed. And it naturally privileges the privileged because, if being emotional (especially angry) means that a person shouldn’t be participating (or their participation is somehow impaired) then we either can’t talk about things that bother any students (which would leave a small number of topics appropriate for discussion), or people who are angry about aspects of our world (likely to be the less privileged) are silenced before they speak—they’re silenced on the grounds of the feelings they might legitimately have.

So, if we’re going to have a class about racism, we’re going to have a class in which people get angry, and not everyone’s anger is the same. Racist discourse is (and long has been) much more complicated than a lot of people want it to be—we want to think that it’s easy to identify, that it’s marked by hostility, that it’s open in its attacks on another race. But there has always been what we now call “modern racism”—racism that pretends to be grounded in objective science, that says “nice” things about the denigrated group, that purports to be acting out of concern and even affection. That is the kind of reading that angers students the most, and I think it’s important we read it because it’s the most effective at promoting and legitimating racist practices. But it will offend students to read it.

And so the class is really hard to teach, and even risky. And that was the other point I realized. If we have institutions in which only people of color are teaching classes about racism, we’re making them take on the politically riskier courses. That’s racist.

I remain uncomfortable being a white person teaching about racism, and I think my privilege probably means I do it pretty badly. But I think it needs to be done.

On career choices as mingling in Burke’s parlor

On Wednesday, I sent off the scholarly version of the demagoguery argument. It isn’t the book I once planned (that would involve a deeply theoretical argument about identity and the digital world), but it’s the one I really wanted to write, that would (I think) reach more people than that other one.

And it’s the last scholarly book I’ll write. I intend to spend the rest of my career trying to solve the interesting intellectual problem of making scholarly concepts and debates more accessible to non-academics. But that isn’t because I reject highly specialized academic writing as, in any way, a bad thing.

I have no problem with highly theoretical and very specialized books. My books have barely grazed the 1000 sales point, and that’s pretty good for a scholarly book. People have told me that something I’ve written has had an impact on their scholarship, pedagogy, program administration, so I’m really happy with my record as a scholar.

And I’m happy with the record of people who have sold both more and less because measuring impact is so very difficult. Publishing a book with an academic press is an extraordinary achievement, and measuring the impact of such books accurately is nigh impossible—a really powerful book is shared in pirated pdfs, checked out of libraries, passed from one person to another. Sales and impact are orthogonal in academia.

If you study the history of ideas even a little you have to know that what seemed major in the moment was sometimes just a trend (like mullets) and sometimes a sea change (like the synthesizer). No one reads Northrop Frye anymore, but he was a big deal at one moment, while Hannah Arendt, who was also a big deal around the same time, is still in the conversation. And there are all those people who weren’t big deals in their era, but later came to have tremendous impact, such as Mikhail Bakhtin.

Some trade books on scholarly issues have had extraordinary sales (such as Mortimer Adler’s writings), but it’s hard to know what impact they had. Madison Grant’s racist book Passing of the Great Race had poor sales, but appears to have had a lot of impact. And there are lots of trade books that have come and gone without leaving any impact, so there’s no good reason to conclude that trade books necessarily have more impact than scholarly ones. I don’t think there are a lot of (any?) necessary conclusions that one can draw about whether trade or scholarly books have more impact, are more or less important, more or less valuable intellectual activity.

I have always loved Kenneth Burke’s analogy of the parlor for what it means to be interested in major questions. You show up at a party, he says, and it’s been going on for a while, and you find some conversation that seems interesting. You listen for a while, and then you take a side or point out something new. You get attacked and defended, and some people leave the conversation, and others join, and eventually you too leave. And it goes on, with other people taking sides that may or may not have to do with what you were arguing.

What Burke fails to mention is that, if it’s a good party, there are a lot of conversations going on. You might choose to leave that particular conversation, but not leave the party.

I have loved writing scholarly pieces (although I didn’t initially think I would), and my work has placed me in some particular conversations. I’ve moved from one conversation to another, but all on the side of the parlor engaged in very scholarly arguments. I’d like to leave that side of the parlor, not because it’s a bad one—it’s a wonderful one—but because it’s a party with a lot of conversations going on. I’d like to mingle.

I think a lot of discussions of the public intellectual make odd assumptions grounded in binaries—that either specialized or public scholarship is good, for instance. Scholarship that speaks with authority to a small group is neither better nor worse than scholarship that reaches a broad audience—it’s just a different conversation in Burke’s parlor. And I’m going to wander over there for a bit.


“Just Write!” and the Rhetoric of Self-Help

There is a paradox regarding the large number of scholars who get stalled in writing—and a large number do get stalled at some point (50% of graduate students drop out): they got far enough to get stalled because, for some long period of time, they were able to write. People who can’t write a second book, or a first one, or a dissertation, are people who wrote well enough and often enough to get to the point that they needed to write a dissertation, first book, second book, grant, and so on. So, what happened?

The advice they’re likely to be given is, “Just write.” And the reason we give that advice (advice I gave for years) is that we have the sense that they’re overthinking things, that, when they sit down to write, they’re thinking about failure, and success, and shame, and all the things that might go wrong, and all the ways what they’re writing might be inadequate, and all the negative reactions they might get for what they’ve written. So, we say, “Just write,” meaning, “Don’t think about those things right now.”

The project of writing may seem overwhelming because existentially risky, and the fear created by all the anxiety and uncertainty is paralyzing. It can seem impossibly complicated, and so we give simple advice because we believe that persuading them to adopt a simpler view of the task ahead will enable them to write something. Once they’ve written something, once they’re unstuck, then they can write something more, and then revise, and then write more. Seeing that they have written will give them the confidence they need to keep writing.

And I think that advice often works, hence the (deserved) success of books like Writing Your Dissertation in Fifteen Minutes a Day or Destination Dissertation. They simplify the task initially, and present the tasks involved in ways that are more precise than accurate, but with the admirable goal of keeping people moving. Many people find those books useful, and that’s great. But many people don’t, and I think the unhappy consequence of the “you just have to do this” rhetoric is that there is an odd shaming that happens to people for whom that advice doesn’t work. And, while it’s great that it works for a lot of people, there are a lot for whom it doesn’t, and I’m not happy that they feel shamed.

These books have, as Barbara Kamler and Pat Thomson have argued, characteristics typical of the self-help genre (“The Failure of Dissertation Advice Books”), especially in that they present dissertation writing as “a series of linear steps” with “hidden rules” that the author reveals. While I am not as critical of those books, or of the genre of self-help, as Kamler and Thomson are, I think their basic point is worth taking seriously: this advice misleads students because it presents dissertation writing as a set of practices and habits rather than cognitive challenges and developments.

Academic writing is hard because it’s hard. Learning to master the postures, steps, and dances of developing a plausible research question, identifying and mastering appropriate sources, determining necessary kinds of support, managing a potentially sprawling project, and positioning a new or even controversial claim in an existing scholarly conversation—all of that is hard and requires cognitive changes, not just writing practices.

Telling people academic writing “just” requires anything (“just write,” “just write every day,” “just ignore your fears”) is a polite and sometimes useful fiction. And self-help books’ reliance on simple steps and hidden rules is, I’d suggest, not necessarily manipulative, but based in the sense that telling people something hard is actually hard can discourage them. If you lie, and thereby motivate them to try doing it, then they might realize that, while hard, it isn’t impossible.

I think the implicit analogy is to something like telling a person who needs to exercise that they should “just get up off the couch.” Telling people that improving their health will be a long and slow process with many setbacks is unlikely to motivate someone to start the process; it makes the goal seem impossible, and unrewarding. Telling someone that getting healthier is simple, and they “just” need to increase their exercise slightly, or reduce portion size slightly, or do one thing differently will at least get them started. Having gotten a little healthier might inspire them to do more, but, even if it doesn’t, they are getting a little better.

But that’s the wrong analogy.

A scholar who is having difficulty writing is not analogous to someone who needs to get up off the couch: they are a person with a long record of successes as a writer. That is what we (and people who are stuck) so often lose track of when we give the “just write” advice. They are not a person sitting on a couch; they are someone with an exercise practice that has always worked for them in the past but isn’t working now.

The better analogy, I would suggest, is a sprinter who is now trying to run a marathon. Sprinting has worked for them in the past, and many academics have a writing process that is akin to sprinting—chunks of time in which we do nothing but write, and try to get as much done as quickly as we can. Writing a dissertation or book, on the other hand, is more like running a marathon.

It would be unethical to tell a sprinter who is unable to run a marathon that she should “just run.” She has been running; she’s quite good at it. But the way that she has been running is not working for this new distance. And if she does try to run a marathon the way she has always run short races, she will hurt herself.

My intuition is that people who have trouble writing are people who have always used the sprinting method, and have simply managed to develop the motivational strategies to sprint for longer, or to collapse from time to time during the race and pick themselves up. Often, it seems to me, that motivation relies on panic and negative self-talk—they manage to binge write because otherwise, they tell themselves, they are a failure.

So I’m not saying that “Just write” is always bad advice. I am saying that it sometimes is; it is sometimes something that can send people into shame spirals. It only works for some people, for people who do find that polite fiction motivating. For others, though, telling them “just write” is exactly like telling a person in a panic attack “just calm down” or someone depressed “just cheer up.”

The “just write” advice comes from a concern that lack of confidence will paralyze a student. But I think we might be solving the wrong problem.

Part of the problem is the myth of positive thinking, which has taken on an almost magical quality for some people. There is a notion that you should only think positive thoughts, as though thinking negative things brings on bad events. Since thinking clearly about how hard it is to write a book, dissertation, or grant (and, specifically, thinking clearly about how we might have habits or processes that inhibit our success) is thinking about “bad” things, about how things might go wrong or what troubles we might have, the myth of positive thinking says you shouldn’t do it. You should, instead, just imagine success.

This is a myth. It isn’t just a myth, but pernicious, destructive nonsense. A (sometimes secular) descendant of the theology of positive thinking elegantly described by Kate Bowler in Blessed, this is magical thinking pure and simple, and perfectly contrary to what research shows about how positive thinking actually affects motivation.

But here I should be clear. Some people who advocate wishful thinking do so because they believe that the only other possibility is wallowing in self-loathing and a sense that the task is impossible, and they believe that telling students that academic writing is hard will necessarily lead to their believing it is impossible. In other words, there is an assumption that there is a binary between thinking only and entirely about positive outcomes or thinking only and entirely about tragic outcomes. The former is empowering and the latter is paralyzing. That narrative is wrong on all three counts—positive thinking is not necessarily enabling, moments of despair are not necessarily disabling, and our attitude toward our own challenges is not usefully described as a binary between pure optimism and pure despair. Left out of that binary is being hopefully strategic: aware of possible failures, mindful of hurdles, with confidence in our resilience as much as in our talents.

As to the first, studies clearly show that refusing to think negative thoughts about possible outcomes is actively harmful, and frequently impairs achievement. That’s important to remember: telling students they shouldn’t think about their own flaws, the challenges ahead of them, and how things might go wrong is not helping them, and it is making it less likely they will do what they need to do.

Gabriele Oettingen’s considerable research (summarized in the very helpful book Rethinking Positive Thinking) shows that, while wishful thinking can be useful for maintaining hope in a bad situation or identifying long-term goals, it inhibits action. Fantasizing about how wonderful a dissertation or book will be doesn’t inspire us to write it; for many people, it makes the actual, sometimes gritty, work so much more unattractive in comparison that it’s impossible to write. The fantasy is far more fun than writing a crummy first draft. Similarly, Carol Dweck’s research on mindsets shows that success depends on acknowledging what has gone wrong and identifying how one might grow and change to get a different outcome in the future.

A sense that the task is so hard as to be impossible is not inevitably and necessarily disabling. It is, however, inevitable. It is dishonest to tell students that we never feel that what we’re trying to do can’t be done or isn’t worth doing, because so many of us do. And most of us got (and get) through it. Sometimes it took time, therapy, medication, changing things in our personal lives, changing jobs, changing projects, all of the above. But I don’t know any productive scholar free from times of slogging through the slough of despond.

In my experience, academic writing gets easier, but it’s never easy. The hardest writing is probably finishing a dissertation while writing job materials—nothing after that is so hard. But it’s always hard. If we tell students that it’s easy, or that it gets easy, even if we do so with the intention of keeping them moving, we do them a disservice. If they believe us, if they believe that we find it easy, then, when it gets hard, as it necessarily will, they have to conclude that there is something wrong with them. They are unhappily likely to conclude that they have been exposed for the imposter they always worried they were.

The “just write” advice almost certainly works for some people in some situations, as does the “just write every day” or “just freewrite” or “just start with your thesis” or any of the other practices and rules that begin with “just.” They work for someone somewhere and maybe they work for everyone some of the time, and they always strike me as sensible enough to suggest that people experiment with them. But we shouldn’t pretend that they’re magical and can’t possibly fail, or that someone “just” needs to do them. The perhaps well-intentioned fiction that academic writing “just” requires certain practices is magical thinking, and we need to stop saying it.

In my experience, people who find the “just write” advice useless find it too abstract. So, I think we need to be clear that scholarly productivity is, for most people, hard, and that it’s fine that a person finds it hard. And it takes practice, so there are some things a person might “just write”:

    • the methods section;
    • descriptions of an incident, moment in a text, interaction, or some other very, very specific epitome of their problem (Pirsig’s brick in the wall of the opera house);
    • summaries of their secondary materials with a discussion of how each text is and is not sufficient for their research;
    • a collection of data;
    • the threads from one datum to another;
    • a letter to their favorite undergrad teacher about their current research;
    • a description of their anxieties about their project;
    • an imitation of an introduction, abstract, conclusion, or transition paragraph they like, written by a junior scholar.

I’m not presenting that list as a magical solution. It would be odd for me to say that simplistic advice is not helpful and then give a list of the five (or seven, or ten) things we “just” have to do to become (or teach others to become) skilled and productive academic writers. What we have to do is acknowledge that the project requires significant and complicated cognitive changes: that, for most of us, scholarly writing is hard because it’s hard. Let’s be honest about that.

Rationality, demagoguery, and rhetoric

One of my criticisms of conventional definitions of demagoguery is that they enable us to identify when others are getting suckered by demagoguery, but not when we are. They aren’t helpful for seeing our own demagoguery because they emphasize the “irrationality” and bad motives of the demagogues. And both strategies are deeply flawed, and generally circular. Here I’ll discuss a few problems with conventional notions of rationality/irrationality, and later I’ll talk about the problems of motivism.

Definitions of “irrationality” imply a strategy for assessing the rationality of an argument, and many common definitions of “rational” and “irrational” imply methods that are muddled, even actively harmful. Most of our assumptions about what makes an argument “rational” or “irrational” imply strategies that contradict one another. For instance, “rational” is sometimes used interchangeably with “reasonable” and “logical,” and sometimes used as a larger term that incorporates the logical (a stance is rational if the arguments made for it are logical, or a person is rational if s/he uses logical processes to make decisions). That common usage contradicts another common usage, although people don’t necessarily realize it: many people assume that an argument is rational if you can support it with reasons, whether or not the reasons are logically connected to the claims. So, in the first usage, a rational argument has claims that are logically connected, but in the second it just has to have sub-claims that look like reasons. There’s a third usage: many people assume that “rational” and “true” are the same, and/or that “rational” arguments are immediately seen as compellingly true, so to judge whether an argument is rational, you just have to ask yourself if it seems compellingly true. Of course, that conflation of rational and true means that “rational” is another way of saying “I agree.” A fourth usage is the consequence of many people equating “irrational” with “emotional”: it can seem that the way to determine whether an argument is rational is to infer whether the person making the argument is emotional, and that’s usually inferred from the number of emotional markers—how many linguistic “boosters” the rhetor uses (words such as “never” or “absolutely”), or verbs of affect (“love,” “hate,” “feel”). Sometimes it’s determined through sheer projection, or through deduction from stereotypes (that sort of person is always emotional, and therefore their arguments are always emotional).

Unhappily, in many argumentation textbooks, there’s a fifth usage thrown in: it’s not uncommon for a “logical” argument to be characterized as one that appeals to “facts, statistics, and reason”—surface features of a text. Sometimes, though, we use the term “logical” to mean, not an attempt at logic, or a presentation of self as engaged in a logical argument, but a successful attempt—an argument is logical if the claims follow from the premises, the statistics are valid, and the facts are relevant. That usage—how the term is used in argumentation theory—is in direct conflict with the vaguer uses that rely on surface features (“facts, statistics, and reason,” or the linguistic features we associate with emotionality). Much of the demagoguery discussed in this book appeals to statistics, facts, and data, and much of it is presented without linguistic markers of emotionality, but generally in service of claims that don’t follow, or that appeal to inconsistent premises, or that contradict one another. Thus, for the concept of rationality to be useful for identifying demagoguery, it has to be something other than any of the contradictory ones above—surface features; inferred, projected, or deduced emotionality of the rhetor; presence of reasons; audience agreement with claims.

Following scholars of argumentation, I want to argue for using “rationality” in a relatively straightforward way. Frans van Eemeren and Rob Grootendorst identify ten rules for what they call a rational-critical argument. While useful, those rules can, for purposes of assessing informal and lay arguments, be reduced to four:

    1. Whatever are the rules for the argument, they apply equally across interlocutors; so, if a kind of argument is deemed “rational” for the ingroup, then it’s just as “rational” for the outgroup (e.g., if a single personal experience counts as proof for a claim, then a single appeal to personal experience suffices to disprove that claim);
    2. The argument appeals to premises and/or definitions consistently, or, to put it in the negative, the claims of an argument don’t contradict each other or appeal to contradictory premises;
    3. The responsibilities of argumentation apply equally across interlocutors, so that all parties are responsible for representing one another’s arguments fairly, and for striving to provide internally consistent evidence to support their claims;
    4. The issue is up for argument—that is, the people involved are making claims that can be proven wrong, and that they can imagine changing.

Not every discussion has to fit those rules—some topics are not open to disproof, and therefore can’t be discussed this way. And those sorts of discussions can be beneficial, productive, enlightening. But they’re not rational; they’re doing other kinds of work.

In the teaching of writing, it’s not uncommon for “rationality” and “logical” to be compressed into Aristotle’s category of “logos” (with “irrational” and “emotional” getting shoved into his category of “pathos”)—and then very recent notions about logic and emotion are projected onto Aristotle. As is clear even in popular culture, recent ideas assume a binary between logical and emotional, so saying something is an emotional argument is, for us, saying it is not logical. That isn’t what Aristotle meant—for him, appeals to emotion and appeals to reason coexist; he didn’t see them as opposed. Nor did he mean “facts” as we understand them, and he had no interest in statistics. For Aristotle, ethos, pathos, and logos are always operating together—logos is the content, the argument (the enthymemes); pathos incorporates the ways we try to get people to be convinced; ethos is the person speaking. So, were we to use an Aristotelian approach to an argument, we would look at a set of statistics about child poverty, and the logos would be that poverty has gotten worse (or is worse in certain areas, or for some people—whatever the claims are), the pathos would be how it’s presented (what’s in bold, how it’s laid out, and also that it’s about children), and the ethos would be partly situated (what we know about the rhetor prior to the discourse) but also a consequence of the rhetor’s using statistics (she’s well-informed, she’s done research on this) and of the argument’s being about children (she is compassionate). For Aristotle, unlike for post-logical positivists, pathos and logos and ethos can’t operate alone.

I think it’s better just to avoid Aristotle’s terms, since they slide into a binary so quickly. More important, they enable people to conflate “a logical argument” (that is, the evaluative claim, that the argument is logical) with “an appeal to logic” (the descriptive claim, that the argument is purporting to be logical).

What this means for teaching

People generally reason syllogistically (that’s Arie Kruglanski’s finding), and so it’s useful for people to learn to identify major premises. I think either Toulmin’s model or Aristotle’s enthymeme works for that purpose, but it is important that people be able to identify unexpressed premises.

Syllogism:

All men are mortal. [universally valid Major Premise]

Socrates is a man. [application of a universally valid premise to specific case: minor premise]

Therefore, Socrates is mortal. [conclusion]

Enthymeme:

Socrates is mortal [conclusion]

because he is a man. [minor premise]

The Major Premise is implied (all men are mortal).

Or, syllogism:

A = B [Major Premise]

A = C [minor premise]

Therefore, B = C. [conclusion]

Enthymeme:

B = C because A = B. This version of the argument implies that A = C.

Chester hates squirrels because Chester is a dog.  

Major Premise (for the argument to be true): All dogs hate squirrels.

Major Premise (for the argument to be probable): Most dogs hate squirrels.

 

Batman is a good movie because it has a lot of action.

Major Premise: Action movies are good.

 

Preserving wilderness in urban areas benefits communities

            because it gives people access to non-urban wildlife.

Major Premise: Access to non-urban wildlife benefits communities.
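These reconstructions follow a mechanical pattern, which can be sketched in a few lines of code. The sketch below is purely illustrative—the function names and the three-string representation of an enthymeme are my own teaching simplification, not a standard tool:

```python
# Illustrative sketch: recovering the implied major premise of an
# enthymeme of the form "SUBJECT CONCLUSION because SUBJECT is MINOR."
# The three-string representation is a deliberate simplification.

def enthymeme(subject: str, minor: str, conclusion: str) -> str:
    """State the enthymeme itself, with the major premise suppressed."""
    return f"{subject} {conclusion} because {subject} is {minor}."

def implied_major_premise(minor: str, conclusion: str) -> str:
    """Return the universal claim the enthymeme silently relies on."""
    return f"Everything that is {minor} {conclusion}."

if __name__ == "__main__":
    # "Chester hates squirrels because Chester is a dog."
    print(enthymeme("Chester", "a dog", "hates squirrels"))
    # Suppressed premise: "Everything that is a dog hates squirrels."
    print(implied_major_premise("a dog", "hates squirrels"))
```

Weakening “Everything” to “Most things” yields the premise required for the argument to be merely probable rather than true, matching the distinction drawn above.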

Many fallacies come from some glitch in the enthymeme—for instance, non sequitur happens when the conclusion doesn’t follow from the premises.

    • Chester hates squirrels because bunnies are fluffy. (Notice that there are four terms—Chester, hating squirrels, bunnies, and fluffy things.)
    • Squirrels are evil because they aren’t bunnies.

Before going on to describe other fallacies, I should emphasize that identifying a fallacy isn’t the end of a conversation, or it doesn’t have to be. It isn’t like a ref making a call; it’s something that can itself be argued, and that is especially true of the fallacies of relevance. If I make an emotional argument, and you say that’s argumentum ad misericordiam, then a good discussion will probably have us arguing about whether my emotional appeal was relevant.

Appealing to inconsistent premises comes about when you have at least two enthymemes, and their major premises contradict.

For instance, someone might argue: “Dogs are good because they spend all their time trying to gather food” and “Squirrels are evil because they spend all their time trying to gather food.” You’ll rarely see it that explicit—usually the slippage goes unnoticed because you use dyslogistic terms for the outgroup and eulogistic terms for the ingroup: “Dogs are good because they work hard trying to gather food to feed their puppies” and “Squirrels are evil because they spend all their time greedily trying to get to food.”

Another fallacy that comes about because of glitches in the enthymeme is circular reasoning (aka “begging the question”). This is a very common fallacy, but surprisingly difficult for people to recognize. It looks like an argument, but it is really just an assertion of the conclusion over and over in different language. The “evidence” for the conclusion is actually the conclusion in synonyms: “The market is rational because it lets the market determine the value of goods rationally.” “This product is superior because it is the best on the market.”

Genus-species errors (aka over-generalizing, ignoring exceptions, stereotyping) happen when, hidden in the argument (often in the major premise), there is a slip from “one” (or “some”) to “all.” They result from assuming that what is true of a specific thing is true of every member of its genus, or that what is true of the genus is true of every individual member of that genus. “Chester would never do that because he and I are both dogs, and I would never do that.” “Chester hates cats because my dog hates cats.”

Fallacies of relevance

Really, all of the following could be grouped under red herring, which consists of dragging something so stinky across the trail of an argument that people take the wrong track. Also called “shifting the stasis,” it’s trying to shift attention from what is really at stake between two people to something else—usually something inflammatory, but sometimes simply easier ground for the person dragging the red herring. Sometimes it arises because one of the interlocutors sees everything in one set of terms—if you disagree with them, and they take the disagreement personally, they might drag in the red herring of whether they are a good person, simply because that’s what they think all arguments are about.

Ad personam (sometimes distinguished from ad hominem) is an irrelevant attack on the identity of an interlocutor. Not all “attacks” on a person or their character are fallacious. Accusing someone of being dishonest, or making a bad argument, or engaging in fallacies, is not ad hominem because it’s attacking their argument. Even attacking the person (“you are a liar”) is not fallacious if it’s relevant. The fallacy generally involves some kind of name-calling (usually of such an inflammatory nature that the person must respond, such as calling a person an abolitionist in the 1830s, a communist in the 1950s and 60s, or a liberal now). It’s really a kind of red herring, as it’s generally irrelevant to the question at hand, and is an attempt to distract the attention of the audience.

Ad verecundiam is the term for a fallacious appeal to authority. In general, such an appeal is a fallacy because the authority cited isn’t relevant—there’s nothing inherently fallacious about appealing to authority, but having a good conversation might mean that the relevance of the authority/expertise now has to become the stasis. Bandwagon appeal is a kind of fallacious appeal to authority—it isn’t fallacious to appeal to popularity if it is a question in which popular appeal is a relevant kind of authority.

Ad misericordiam is the term for an irrelevant appeal to emotion, such as saying you should vote for me because I have the most adorable dogs (even though I really do). Emotions are always part of reasoning, so merely appealing to emotions is not fallacious; the fallacy lies in the irrelevance of the appeal.

Scare tactics (aka apocalyptic language) is a fallacy if the scary outcome is irrelevant, unlikely, or inevitable regardless of the actions. For instance, if I say you should vote for me and then give you a terrifying description of how our sun will someday go supernova, that’s scare tactics (unless I’m claiming I’m going to prevent that outcome somehow).

Straw man is dumbing down the opposition’s argument; because the rhetor is now responding to arguments their opponent never made, most of what they have to say is irrelevant. People engage in this one unintentionally through not listening, through projection, and through a fairly interesting process: we have a tendency to homogenize the outgroup and assume that they are all the same. So, if you say “Little dogs aren’t so bad,” and I once heard a squirrel lover praise little dogs, I might decide you’re a squirrel lover. Or, more seriously, if I believe that anyone who disagrees with me about gun ownership and sales wants to ban all guns, then I might respond to your argument about requiring gun safes with something about the government kicking through our doors and taking all of our guns (an example of slippery slope).

Tu quoque is usually (but not always) a kind of red herring; sometimes it’s the fallacy of false equivalency (what George Orwell called the notion that half a loaf is no better than none). One argues that “you did it too!” While it’s occasionally relevant, as it can point to a hypocrisy or inconsistency in one’s opposition, and might be the beginning of a conversation about inconsistent appeals to premises, it’s fallacious when it’s irrelevant. For instance, if you ask me not to leave dirty socks on the coffee table, and I say, “But you like squirrels!” I’ve tried to shift the stasis. It can also involve my responding with something that isn’t equivalent, as when I try to defend myself against a charge of embezzling a million dollars by pointing out that my opponent didn’t try to give back extra change from a vending machine.

False dilemma (aka false binary, either/or) occurs when a rhetor sets out an artificially limited number of options, generally forcing the audience to choose the option s/he wants. Were all the options laid out, the situation would be more complicated, and his/her proposal might not look so good. It’s often an instance of scare tactics because the other option is typically a disaster (we either fight in Vietnam, or we’ll be fighting the communists on the beaches of California). It becomes straw man when it’s achieved by dumbing down the opponent’s proposal.

Misuse of statistics is self-explanatory. Statistical analysis is far more complicated than one might guess, given common uses of statistics, and there are certain traps into which people often fall. One common one is the deceptively large number. The number of people killed every year by sharks looks huge, until you consider the number of people who swim in shark-infested waters every year, or compare it to the number of people killed yearly by bee stings. Another common one is to shift the basis of comparison, such as comparing the number of people killed by sharks for the last ten years with the number killed by car crashes in the last five minutes. (With some fallacies, it’s possible to think that there was a mistake involved rather than deliberate misdirection; with this one, that’s a pretty hard claim to make.) People often get brain-freeze when they try to deal with percentages, and make all sorts of mistakes—if the GNP goes from one million to five hundred thousand in one year, that’s a fifty per cent drop; if it goes back up to one million the next year, that is not, however, a fifty per cent increase: it is a one hundred per cent increase.
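The percentage trap in that last example can be made concrete in a few lines of code. This is an illustrative sketch (the function name is my own): percent change is always measured relative to the starting value, which is why a fifty per cent drop is undone only by a one hundred per cent increase.

```python
# Percent change is relative to the *starting* value, which is why a
# drop and the increase that reverses it are not the same percentage.

def percent_change(old: float, new: float) -> float:
    """Return the change from old to new as a percentage of old."""
    return (new - old) / old * 100

if __name__ == "__main__":
    print(percent_change(1_000_000, 500_000))   # the drop: -50.0
    print(percent_change(500_000, 1_000_000))   # the recovery: 100.0
```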

The post hoc ergo propter hoc fallacy (aka confusing causation and correlation) is especially common in the use of social science research in policy arguments. If two things are correlated (that is, exist together), that does not necessarily mean that one can be certain which caused the other, or whether both were caused by something else. The error generally arises in situations when people have failed to include a “control” group in a study. So, for instance, people used to spend huge amounts of money on orthopedic shoes for kids because the shoes correlated with various foot problems’ improving. When a study was finally done that involved a control group, it turned out that time alone was causing the improvement; the shoes were useless.

Some lists of fallacies have hundreds of entries, and subtle distinctions can matter in particular circumstances (for instance, the prosecutor’s fallacy is really useful in classes about statistics), but the above are the ones that seem to be the most useful.