Stasis shifts (distracting people from how bad your argument is)

You can’t get a good answer if you ask a bad question. And one of the best ways to shut out any substantial criticism of your position is to ensure that the questions asked about it are softball questions. If your policy isn’t very good, make sure the debate isn’t on the stasis of “is this a pragmatic and feasible policy that will solve the problem we’ve identified.” Shift the stasis.

In a perfect world, we make arguments for or against policies on the basis of good reasons that can be defended in a rational-critical way (not unemotional—it’s a fallacy to think emotions are inappropriate in argumentation). But, sometimes our argument is so bad it can’t stand the exposure of argumentation, in that we can’t put forward an internally consistent argument. Saying that Louis would be a great President because squirrels are evil is a stasis shift—trying to get people to stop thinking about Louis and just focus on their hatred for squirrels.

Arguments have a stasis, a hinge point. Sometimes they have several. But it’s pretty much common knowledge in various fields that the first step in getting a conflict to be productive (marital, political, business, legal) is to make sure that the stasis (or stases) is correctly identified and people are on it. If we’re housemates, and I haven’t cleaned the litterboxes, and we have an agreement I will, then you might want the stasis to be: my violating our agreement about the litterboxes.

Let’s imagine I don’t want to clean out the litterboxes, but, really, it’s just because I don’t want to. I have made an agreement that I would, and when I made the agreement I knew it was fair and reasonable. So, even I know that I can’t put forward an argument about how tasks are divided, or who wanted a third cat and promised to clean litterboxes in order to get that cat. Were this a deliberative situation, I would be open to your arguments about the litterboxes, but let’s say I’m determined to get out of doing what I said I would do. I don’t want deliberative rhetoric. I want compliance-gaining—I just want you to comply with my end point (I don’t have to clean the litterboxes).

I will never get you to comply as long as we are on the stasis of my violating an agreement I made about the litterboxes, since that’s pretty much a slam dunk for you, so I have to change the stasis.

The easiest one (and this is way too much of current political discourse) is to shift it to the stasis of which of us is a better human. If you say, “Hey, you said if we got a third cat, you’d clean the litterboxes, and we got a third cat, and you aren’t cleaning them,” I might say, “Well, you voted for Clinton in the primaries and that’s why Trump got elected,” and now we aren’t arguing about my failure to clean the litterboxes—we’re engaged in a complicated argument about the Dem primaries. I can’t win the litterbox argument, but I might win that one, and, even if I don’t, I might confuse you enough that you will stop nagging me about the litterboxes.

[I might also train you to believe that talking about the litterboxes will get me on an unproductive rant about something else, and so you just don’t even raise the issue. That’s a different post, about how Hitler deliberated with his generals.]

Or, I might acknowledge that I don’t clean the litterboxes, but put the blame for my failure on you because your support of Clinton is so bad that I just can’t think about the litterboxes—that’s another way of shifting the stasis off of my weak point and onto an argument I might win.

Hitler and Rhetoric

As Nicholas O’Shaughnessy says, anyone looking at the devastation of World War II and the Holocaust is likely to wonder: “How was it possible for a nation as sophisticated as Germany to regress in the way that it did, for Hitler and the Nazis to enlist an entire people, willingly or otherwise, into a crusade of extermination that would kill anonymous millions?” (1) The conventional answer is to attribute tremendous rhetorical power to Adolf Hitler. Kenneth Burke calls Hitler “a man who swung a great deal of people into his wake” (“Rhetoric” 191). William Shirer, who was an American correspondent in Germany in the 30s, describes how, listening to a speech he knew was nonsense, he “was again fascinated by [Hitler’s] oratory, and how by his use of it he was able to impose his outlandish ideas on his audience” (131). Shirer says Hitler “appeared able to swing his German hearers into any mood he wished” (128). Shirer is clear that Hitler owed his power to his rhetoric: “his eloquence, his astonishing ability to move a German audience by speech, that more than anything else had swept him from oblivion to power as dictator and seemed likely to keep him there” (127).

Scholars don’t necessarily agree, however. Ian Kershaw says, “Hitler alone, however important his role, is not enough to explain the extraordinary lurch of a society, relatively non-violent before 1914, into ever more radical brutality and such a frenzy of destruction” (Hitler, the Germans, and the Final Solution 347). While Hitler’s personal views were important, and neither the Holocaust nor war would have happened without his personal fanaticism and charisma, they weren’t all that was necessary: “Concentrating on Hitler’s personal worldview, no matter how fanatically he was inspired and motivated by it, cannot readily serve to explain why a society, which hardly shared the Arcanum of Hitler’s ‘philosophy,’ gave him such growing support from 1929 on—in proportions that rose with astonishing rapidity. Nor can it explain why, from 1933 on, the non-National Socialist élites were prepared to play more and more into his hands in the process of ‘cumulative radicalization’” (Hitler, the Germans, and the Final Solution 57).

In other words, Hitler’s followers were not passive automatons controlled by Hitler’s rhetorical magic. So, how powerful was that rhetoric?

The answer to that question is more complicated than conventional wisdom suggests, for several reasons. First, while Hitler was quick to use new technologies, including new technologies of travel, most of the Nazi rhetoric consumed by converts wasn’t Hitler’s. People like Adolf Eichmann talk about being persuaded by other speakers, by pamphlets, even by books.

Second, no one claims that Hitler was a creative or inventive ideologue: “Hitler was not an originator but a serial plagiarist” (O’Shaughnessy 24). Joachim Fest said Hitler’s beliefs were the “sum of the clichés current in Vienna at the turn of the century” (qtd. in Gregor, 2), and Gregor says, “Neither can one claim that Hitler was an original thinker. There is little in his writings or speeches that we cannot find in the penny pamphlets of pre-1914 Vienna where he began to form his political views. His racial anti-Semitism rehearses the familiar slogans of many on the pre-war right. His visions of German expansion echo the ideas of the more extreme wing of the radical-nationalist Pan German movement [….] And, in essence, his anti-democratic, anti-Socialist sentiments similarly reproduce the conventional thinking of broad sectors of the German right from both before and after the First World War.” (2)

If Hitler wasn’t saying anything new, to what extent can we say he persuaded people? What did he persuade them of?

A closely related problem is that large numbers of Germans supported Hitler politically but rejected central aspects of his ideology—such as his eliminationist racism and his desire for another war. Although he’d long been absolutely clear that those were central to his views, when he began to downplay them (especially in 1932 and ’33), many people believed they were trivial aspects that could be ignored. Many people supported him strategically, especially the Catholic and Lutheran churches, both of which were outraged by the Social Democrats’ (democratic socialists) liberal social policies (e.g., legalizing homosexuality, supporting feminism, and, especially, breaking the religious monopoly on primary schools). Since Hitler and the Nazis were socially conservative, and Hitler promised to allow the churches more power than the Social Democrats would, many Protestants voted for Nazis, and the Reichstag members of the official Catholic party (the Centre Party) voted unanimously for Hitler’s taking on dictatorial power (for more on this background, see Evans; Spicer).

Some scholars refer to “the propaganda of success,” by which they mean that Hitler gained people’s support not because he put forward good arguments, or even because of anything he said, but because they liked what he did: they liked his locking up Marxists and Socialists, industrialists liked his support of big business, people liked the increased order, the improved economy, and his conservative social policies, a lot of Germans liked his persecution of immigrants, and a lot of people either liked or didn’t mind the legitimating and legalizing of discrimination against Jews (even the churches only objected to discrimination against converted Jews).

And large numbers of Germans didn’t particularly like the idea of democracy. The premise of democracy is that political situations are complicated, and that there aren’t obvious solutions—or, more accurately, that there are solutions that appear to be obviously right from one perspective but obviously wrong from another. Democratic processes assume that the various perspectives need to be taken into consideration, and so the best policy for the community as a whole will not be perfect for anyone and will take a lot of time to determine—many people would rather that a powerful leader make all the decisions and leave them out of it. After Hitler had been in power a year, many people felt that their lives were better, and that’s all they really cared about—that they were headed down a road that would make their lives much worse didn’t concern them because they didn’t think about it.

Finally, many people came to support Nazis because they liked that Hitler made them feel proud of being German again. He didn’t make them feel proud of being German by changing their minds about anything, but by insisting publicly and endlessly that they were victims—that nothing about their situation was the consequence of bad decisions they had made. He wasn’t saying anything that was new, but it was new for a political leader—he was simply the first major German political figure in a long time to say, unequivocally, Germany was for Germans, and Germans were entitled to run Europe (if not the world).

All these characteristics of Hitler’s relationship with his supporters—his lack of originality, strategic acquiescence, hostility to democracy, narrow self-interest on the part of many Germans, and the propaganda of success—mean that it’s actually an open question whether Hitler’s rhetoric was unique, let alone how much power we should ascribe to it. And so this course will consider these questions: what were Hitler’s rhetorical strategies? How unique or unusual were (are) they? What kind of impact do they have? To what extent (and under what circumstances) do they work?


Works Cited

Burke, Kenneth. “The Rhetoric of Hitler’s ‘Battle.’” The Philosophy of Literary Form. U of California P, 1974.

Evans, Richard. The Coming of the Third Reich. Penguin, 2005.

Gregor, Neil. How to Read Hitler. Norton, 2005.

Kershaw, Ian. Hitler, the Germans, and the Final Solution. Yale UP, 2009.

O’Shaughnessy, Nicholas. Selling Hitler: Propaganda and the Nazi Brand. Oxford UP, 2016.

Shirer, William. The Nightmare Years: 1930–1940. Little, Brown, 1984.

Spicer, Kevin, ed. Antisemitism, Christian Ambivalence, and the Holocaust. Indiana UP, 2007.

Ethos, pathos, and logos

Since the reintroduction of Aristotle to rhetoric in the 60s, there has been a tendency to read him in a post-positivist light. That is, the logical positivists (building on Cartesian thought) insisted on a new way of thinking about thinking—on an absolute binary between “logic” and “emotion.” This was new—prior to that binary, the dominant models involved multiple faculties (including memory and will) and a distinction within the category we call “emotions.” While it was granted that some emotions inhibited reasoning (such as anger and vengeance), theorists of political and ethical deliberation insisted on the importance of sentiments. The logical positivists (and popular culture), however, created a zero-sum relationship between emotion (bad) and reasoning (“logic,” good). Thus, when we read Aristotle’s comment about the three “modes” of persuasion in a post-positivist world, we tend to assume that he meant “pathos” in the same way we mean “emotion” and “logos” in the same (sloppy) way we use the word “logic.” And we get ourselves into a mess.

For instance, for many people, “logic” is an evaluative term—a “logical” argument is one that follows rules of logic. Yet, textbooks will describe an “appeal to facts” as a logos (logical) argument. That’s incoherent. Appealing to “facts” (let’s ignore how muckled that word is) isn’t necessarily logical—the “facts” might be irrelevant, they might be incorporated into an argument with an inconsistent major premise, or the argument might have too many terms. In rhetoric, we unintentionally equivocate on the term “logical,” using it both to mean any attempt to reason and only logically correct ways of reasoning. (It’s both descriptive and evaluative.)

The second problem with the binary of emotion and reason is that, as is often the case with binaries, we argue for one by showing the other often fails. Since relying entirely on emotion often leads to bad decisions, then it must be bad, and relying on logic must be good. That’s an illogical argument because its major premise is invalid. Were the premise valid, then someone who made that argument would also have to agree that relying on emotion must be good because relying purely on logic sometimes misleads (it’s the same major premise—if relying on x sometimes has a bad outcome, then relying on not-x must be good).
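To make the shared premise visible, here is the schema sketched semi-formally (my reconstruction and notation; nothing like it appears in the positivists or in Aristotle):

\[
\text{Major premise (schema): } \big(\exists\, c \,.\, \mathrm{BadOutcome}(\mathrm{RelyOn}(x),\, c)\big) \;\rightarrow\; \mathrm{Good}(\mathrm{RelyOn}(\lnot x))
\]
\[
x := \text{emotion} \;\Rightarrow\; \mathrm{Good}(\text{relying on logic}), \qquad x := \text{logic} \;\Rightarrow\; \mathrm{Good}(\text{relying on emotion})
\]

The schema licenses both instantiations equally, so anyone who wields it against emotion is committed, by their own premise, to the parallel argument against logic.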

So, even were we to assume that emotion and logic are binaries (they aren’t), then what we would have to conclude is that neither is sufficient for deliberating.

And, in any case, there’s no reason to take a 19th century western notion and try to trap Aristotle into it.

A better way to think about Aristotle’s division is that he is talking about: what the argument of a speech is, who is making the speech, and how they are making it. So, the logos (discourse) in a speech can be summarized in an enthymeme because, he said, that’s how people reason about public affairs. There are better and worse ways of reasoning, and he names a few ways we get misled, but he didn’t hold rhetoric to the same standards he held disputation—that is where he went into details about inference. An appeal to logos, in Aristotle’s terms, isn’t necessarily what we mean by a logical argument.

Aristotle pointed out that who makes the speech has tremendous impact on how persuasive it is (and also how we should judge it)—both the sort of person the rhetor is (young, old, experienced, choleric), and how the person appears in the speech (reasonable, angry). And, finally, how the person makes the speech has a strong impact on the audience, whether it’s highly styled, plain, loud, and so on.

And all of those play together. A vehement speech still has enthymemes, and it’s only credible if we believe the speaker to be angry—if we believe the speaker to be generally angry (or an angry sort of person) that will have a different impact from an angry speech on the part of someone we think of as normally calm. Ethos, pathos, and logos work together, and they don’t map onto our current binary about logic and emotion.

As long as I can think of someone more racist, I’m not racist at all

My *favorite* assignment in the Rhetoric of Racism course is having students look at a text (or practice) about which there is an argument (ideally a text they think is racist) and explain why there is a disagreement.

There are basically eight ways people argue that a text isn’t racist:

1) a text isn’t racist if it doesn’t make a big deal about race;

2) texts are either racist or not racist and so if there is any way in which this text criticizes racism, then it can’t be racist;

3) it’s just a “feel-good” text and you’re over-reading;

4) it isn’t racist because what it says is true (in other words, the person saying the text isn’t racist is racist);

5) racists are people who explicitly and self-consciously hate everyone of every other race, and only racist people say racist things, so if the person who created the text sometimes associates with, or says something “nice” about, members of another race, then the text can’t be racist (also known as the “some of my best friends are…” defense);

6) the author didn’t intend to be racist (so it’s only racist if the individual who created the text engaged in actions s/he knew to be racist);

7) it doesn’t have the marks of hostility toward another race (the tone isn’t over-the-top, it doesn’t use racial epithets);

8) it isn’t racist because there are other texts that are more racist, or it doesn’t endorse the most extreme versions of racism, or the person knows of people who are more racist (what I’ll call the “Eichmann defense”).

This is also a list of how racism is legitimated—these are the ways that people allow racist practices to continue. They’re all complicated to talk someone out of (although there are ways), and here I want to focus on two of them: 4) and 8), which often co-exist. These are the ones that really muckle my students, and they are really interesting.

I think the two of them share the assumption that calling a text racist is a personal attack on, not just the author(s) of the text in question, but anyone who likes it. The underlying logic is: racists are evil, evil people are entirely not-good, people who like something racist are racist, so calling someone racist, or saying something they like is racist, is saying they are entirely evil.

That logic is a good example of what Chaim Perelman and Lucie Olbrechts-Tyteca called “philosophical paired terms.” The logic maps out like a question on a standardized test: “Dogs are to mammals as parakeets are to ____.” Here, the pairs line up as: racist is to evil as not-racist is to good.

And, therefore, since good and evil are binaries (something is entirely good or entirely evil), then, if you can imagine someone more evil than you, you must have some good, and so can’t be entirely evil, and so you can’t be evil at all. Therefore, you must be on the “not racist” side of the equation.
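Laid out semi-formally (again, my reconstruction; Perelman and Olbrechts-Tyteca use no such notation), the slide runs:

\[
\text{Binary premise: for any person } p,\ \mathrm{EntirelyEvil}(p)\ \text{or}\ \mathrm{EntirelyGood}(p)
\]
\[
\exists\, q \,.\, \mathrm{MoreRacist}(q,\, p) \;\Rightarrow\; \lnot\mathrm{EntirelyEvil}(p) \;\Rightarrow\; \lnot\mathrm{Evil}(p) \;\Rightarrow\; \lnot\mathrm{Racist}(p)
\]

The illegitimate step is the middle one: “not entirely evil” does not entail “not evil.” Only the binary premise makes that move look plausible.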

Most of us (perhaps all) engage in judgments comparatively, so that, as long as we are more [whatever] than our peers, we feel good about ourselves. Clearly, 8) relies on that move—as long as you aren’t as racist as someone else, you can feel good about your attitudes.

Interestingly enough, Adolf Eichmann relied on that argument a lot. In the interrogations, he several times condemned people for a Streicher kind of anti-Semitism—part of trying to persuade his Jewish interrogators that he wasn’t anti-Semitic. He also continually tried to represent his job as okay because it wasn’t as directly death-dealing as the work of the people who actually pulled the triggers or applied the gas.

If someone else was more guilty, then he wasn’t guilty at all.

This move is sometimes characterized as “whataboutism” but it’s actually different. Whataboutism is sheer tu quoque—it’s an attempt to shift the stasis of the argument away from what I did to some competition as to which group or individual is better. It’s almost always an admission that the people making the argument are engaged in sheer factionalism (there are complicated exceptions). So, for instance, defenders of Trump said Clinton did it too (a fallacy). But, some critics of Bill Clinton pointed out that he claimed he was a feminist and supporter of women’s rights, so his sexually harassing women was a violation of feminist principles. That’s a legitimate and important argument.

People who claim that the GOP is morally superior to the DNC can’t logically use the “Clinton groped women” argument at all because it shows that they think both parties are just as bad—and they’re claiming theirs is better.

“Whataboutism” works by accusing the out-group of doing the same thing the in-group has recently been outed for doing. But this move doesn’t accuse the out-group of anything—it just points out that there is a worse version (perhaps even a worse in-group version) of this behavior.

Eichmann defended himself as not anti-Semitic because another Nazi was more extreme. During slavery, slaveholders defended their treatment of slaves on the grounds that there were other slaveholders who were worse (they also engaged in tu quoque, but that’s a different story); pro-segregationists posited the KKK and violent segregationists as worse than they were; the people I know who drink the Rush Limbaugh/Fox News flavor-aid all name some in-group pundit too extreme for them.

That someone may be more racist doesn’t mean you aren’t racist. Both you and they might be racist.

Talking about racism means, I think, getting the argument away from whether people are racist, away from whether their intentions are deliberately racist, and away from the assumption that racist/not racist is a binary.


On the precious little snowflakes who want to ban _To Kill a Mockingbird_

We have all read about the precious little snowflakes who want great pieces of literature banned because they feel that their group is attacked by some piece of literature generally considered by scholars to be great. This is a rallying point for the Right-Wing Outrage Machine (RWOM): the faculty of political correctness are creating effeminate and sensitive students who go on to insist that students not be allowed to read a book. That effeminate group is offended by something about the book, perhaps a word, more commonly the representation of a character who might be taken to represent their group. Perhaps the character is the only member of that group represented, or perhaps every member of that group is represented as ignorant, violent, and criminal. The argument, according to the RWOM, is that these people say you can’t have literature in K-12 classrooms that makes some of the students feel bad about their group, and the RWOM is clear that they think that is a bad thing to do.

This claim—that people object to great pieces of literature on the grounds that they make them feel bad about their group—is an important plank in the platform of the RWOM: that “liberals” are too precious to have their concerns taken seriously. “Liberals” are simultaneously sensitive and authoritarian—they can’t stand criticism of their group, and they will silence anyone who criticizes them. Thus, “liberals’” views on policy issues can be dismissed—they don’t understand that democracy is about being willing to be tough and listen to criticism of our in-group.

So, this issue, as far as the RWOM is concerned, isn’t just about the book—it’s about whether “liberals’” concerns need to be considered at all.

And, for the RWOM, To Kill a Mockingbird (TKAM) is a case in point. There are people who object to this book being taught in K-12 because it portrays their group unfavorably. And the RWOM is univocal that those people are idiots, whose views on politics are so impaired (soft, weak, sensitive) that the people who make those arguments shouldn’t even be considered in political discourse.

The argument about TKAM, then, isn’t just an argument about that book—it’s an argument about who should even have a voice in democratic political discourse. Democracy, as the founders said, is about disagreement. The principle of democracy is that a community benefits from different points of view. The RWOM argument about trying to censor TKAM is pretty clear: the people who want it banned from high schools are weak people who don’t understand democracy. It isn’t just that their views are bad, but that they are such weak and fragile people that their entire group should not be considered when we are thinking about policy.

Banning the book is “caving in” to people who want it banned, and caving in is stupid. Banning TKAM is a war on learning. The National Review asserts that the records suggest that all attempts to ban the book come from people who don’t like books with the “n word” in them (that isn’t true, but it is one of the reasons often given).

“But a different sin concerns today’s anti-Mockingbird crowd. In fact, the last time Mockingbird was challenged solely for its depiction of sexual intercourse, rape, or incest was in 2006 in Brentwood, Tenn. Since then, all five challenges — in 2008, 2009, 2010, 2012, and 2016 — have involved parents or children made uncomfortable by the use of the ‘N-word’ or the book’s depiction of racism.”

That National Review article condemns, in no uncertain terms, people who want the book banned because it makes them uncomfortable. So, as far as the National Review is concerned, wanting the book banned is, prima facie, evidence that your entire political group is made up of idiots.

The RWOM is unusually unanimous on this point: people who object to teaching TKAM because it hurts their feelings are fragile little snowflakes whose views can be dismissed from consideration on the grounds that they are…well…too fragile. And they are clear that this isn’t a partisan issue: “But to consider To Kill a Mockingbird racially divisive is exactly backwards. The book is invaluable both for introducing students to the reality of America’s racial past and for exposing its injustices.” As in the above cases (both minor and major media), they were unequivocal that they were operating on a principle of education: that, as the National Review says, “Eliminating the hard stuff eliminates the reality.”

In other words, they aren’t taking this position because of partisan politics: it’s a principle that they hold universally.

For the sake of argument, let’s treat that as a principle. I have often argued that the RWOM makes arguments that present themselves as thoroughly, totally, and deeply principled, but are actually rabid factionalism. They were opposed to pedophilia till a pedophile was the GOP candidate for Senate; they wanted Clinton impeached for groping till they had a groper in chief. The RWOM says that their stance on TKAM is principled. Is it?

And here it’s useful to distinguish tu quoque from an argument from principle. If a person really cares about a principle, they will condemn anyone—in-group or not—for violating that principle. If concern about the principle is just a handy brick to throw at the out-group, then, when it’s pointed out that they are violating a principle they claim to be sacred, they will say, “The out-group does it too!” That’s tu quoque. It’s a fallacy.

More important, it’s an admission that the principle didn’t matter. If I say, “You are bad because you pet squirrels,” then I am making an argument that has the major premise “people who pet squirrels are bad.” If I later defend someone who pets squirrels, I have violated the logic of my own argument. I am putting faction above principle. I don’t think someone is bad for petting squirrels—I think out-group members are bad for doing that, but not in-group members.

So, is the RWOM flinging itself around about sensitive snowflake lefties on the basis of a principle about democracy and the need to read unpleasant books? Or is this about faction?

Most of the articles I could find on the right were about the Biloxi, Mississippi controversy, when a school board decided that the book would not be required reading in eighth grade English classes, and I couldn’t find any major right-wing media outlet that endorsed banning the book. So, it might look as if the RWOM is acting on principle.

But there is some sneaky partisanship: snowflakes are lefties, and people who want to ban the book are fragile snowflakes—a term that has become a synonym for social justice warriors. So, condemning the specific policy point of wanting TKAM banned isn’t just a condemnation of that policy point—as far as the RWOM is concerned, the stance of various groups about banning TKAM can be used to condemn the entire group.

The RWOM is so drunk on outrage about the fragile lefties who want the book banned that they make objection to the book, on principle, a sign of being partisan: “I wonder if any of the Biloxi school district’s administrators know how to read.” Obviously, anyone who wants it banned is an idiot, regardless of party.

And it’s interesting to me how the metaphors work in this argument—the people who want the book banned from classrooms are girly (weak, fragile, frail, sensitive) while the people who want it taught are masculine (strong enough to see criticism of America), anti-racist (they univocally endorse Atticus Finch’s stance), and, unlike flaccid lefties, not people who demand “to soften education, to remove any pain or discomfort.” They are firm, strong, and standing tall. (The tendency on the part of the RWOM to use metaphors of hardness for their view and softness for the opposition is both sad and hilarious.)

Were this a principled stance—if the people who have worked themselves into outrage about Biloxi are acting on principle and not just partisanship—then the National Review would fling accusations of flaccidity and girlyness at anyone who objected to TKAM on the grounds that it criticized their group. Do they?

Nope.

There are two very different ways this book is challenged.

First, there is the argument that it is racist, and that’s complicated. That argument is public because it gets to school boards—the first thing a parent does when objecting to a book is to go to the teacher, then the principal, so a challenge’s reaching the school board means the teacher and principal are holding their ground.

So, what, exactly, are the arguments that TKAM is racist?

Well, for one thing, it uses the ‘n word’ a lot. And here I will say that I frequently teach material with racist epithets in it, and I make sure students know that on the first day of class. I believe, firmly, in the notion that students should be warned about what they’re getting into, and students who don’t want to read anything with racist epithets shouldn’t take the class. That isn’t because there’s anything wrong with students who would rather not read a lot of appalling racist things, but because they have a right to make choices as to whether they will read them. So I try to be clear about just how awful the reading will be.

My courses are not required; my students are college students. I thoughtfully design my classes so that students can choose to skip a fair number of readings a semester and still get a good grade on the “keeping up with the reading” part of the grade, because I know that some of the readings may be unhelpfully provocative, and they can miss up to two weeks of class with no penalty. So, students who are “triggered” by readings can make strategic choices about readings and attendance. High school students don’t have those choices.

The use of the ‘n word’ in TKAM is complicated, as it is in comedy, and high school students aren’t very good at that kind of complexity, and it is used in the book in a way intended to inflict damage. Granted, one can (and, I think, should) read the book as condemning that usage, but reading the book that way involves understanding other minds and perspective-shifting, and not all high school students are there. In other words, as anyone remotely aware of scholarship in rhetoric, reader-response, or, well, basic teacher-training knows, whether a particular class can understand the complicated relationship between the narrator and the events being narrated is something only the teacher of that class could know.

But, let’s set aside the notion that audiences are different from one another and that people receive texts in different ways (really, that only means setting aside sixty years of research, so not that much).

There is another argument, mentioned above. Malcolm Gladwell has made this argument best, and I would simply add that there is a toxic and racist narrative about the Civil Rights movement in our world. That narrative is that people were racist—meaning they irrationally hated everyone who wasn’t “white” and knew that they hated everyone and knew it was irrational. So, a racist person got up in the morning and said, “In every way and every day I will irrationally hate all other races.” As long as you didn’t say that (if, for instance, you said to yourself, “I will only rationally hate all other races”), you weren’t racist.

This is the classic move of feeling good about your decisions because you could imagine someone behaving worse. Cheating on this exam by glancing over is okay because you didn’t get the whole exam ahead of time like someone might. Cheating That Race on the rent is okay because you didn’t try to evict them for their race. Adolf Eichmann justified his racism because he wasn’t like Julius Streicher.

What did Atticus Finch do? He, against his will, defended a black man whom he knew to be innocent in a case he knew to be entirely the kind of case Ida B. Wells-Barnett had already named years before. And, throughout the book, he insisted that the racism that would put Tom Robinson to death was one that could (magically?) be cured if people were… what? nicer? less redneck?

Finch acknowledges that the system is SO racist that Robinson telling the truth will tank his case. Robinson mentions that he was nice to the young white woman because he felt sorry for her. And Finch flinches. That moment is why this movie, and the book, are racist.

He knew he lost the case at that moment because he had a racist jury. So, does he try to do anything about their racism? Nope.

Instead, the moral center of the tale says that you need to be nice to racists and hope they’ll be a little bit less racist.

That’s racist.

I love the book. I love that one of my sisters called me “Scout” for a while because I looked like Scout. The movie and book rocked my world, and helped me to see how racist my community and culture were. It was a great book. Now it’s racist.

In its era, it wasn’t. A major issue in 1960 was that “good” people accommodated the KKK, lynchings, and Citizens Councils, and that juries couldn’t be counted on to do the sensible thing. So, something that said that the KKK was not actually okay, and that juries that endorsed state-sponsored terrorism were bad, was making a useful argument.

We’re way beyond that. There are various problems with TKAM in our era. Atticus Finch is a white savior, his whole stance is the progressive mystique, and the basic message of the story is that racists are rednecks, but we should all submit in a civil fashion to racist justice systems while privately bemoaning that we can’t get a better outcome. (Too bad about Tom!) To be clear, had more people in the South been like Finch in 1960 the world would have been a better place. But, in 2017 we don’t need to make heroes of people who believe that racism is a question of individual intention and feeling, and who think there are good people on both sides. There aren’t. There weren’t. Atticus was wrong about that.

And a text that can make white students feel that racism is over because it isn’t as bad as it was then, and that they would totally have been Atticus Finch (even though they do nothing that involves the same level of risk his actions involved) doesn’t do any kind of anti-racist work. It might even (albeit unintentionally) endorse racist beliefs, insofar as it makes all racism an issue of personal feeling.

This isn’t 1960, and what Finch proposes (and does) isn’t enough for where we are now. That’s another way that people can argue it’s racist—that it can make people feel that we just need to be like Finch and racism will end (or worse yet, that racism did end). So, the argument that the book is racist isn’t a stupid argument, and it certainly isn’t one that assumes some inability to handle difficult or unpleasant material—on the contrary, it’s grounded in the notion that TKAM is simplistic. And, so, as far as the Right Wing Outrage Machine goes, I am a precious and fragile snowflake because anyone who makes the kind of argument I am making is a snowflake.

But, let’s consider fairly the RWOM argument that lefties are weenies who want to silence free speech. Granted, the RWOM never engages the argument I made above—a nuanced and complicated argument about TKAM. Their argument is (as I hope I’ve shown) the false argument that anyone who objects to TKAM being taught in K-12 is a weenie who doesn’t want to hear criticism of their in-group.

If you are intellectually generous, you can find an implied syllogism in the RWOM outrage about TKAM: lefties are people whose views can be dismissed because they oppose texts like TKAM on the grounds that such texts offend their feelings about their in-group.

That’s a potentially logical argument, an argument from principle: anyone who objects to TKAM on the grounds that it offends their feelings about their in-group is promoting a political agenda we should dismiss.

Recently, I spent the day with high school teachers from various places in Texas, and the issue of TKAM came up, especially their being told they couldn’t teach it. I was familiar with the cases in which it came to school boards, and was willing to defend the claim that it wasn’t a useful book for teaching about racism because we’ve moved beyond the era when aversive racism was the major issue, but that wasn’t the main complaint for any of them.

Every one of them said that the book was pulled because parents of white students complained that it made white Southerners feel bad about their past. They complained to the principal, and the book was pulled.[1] That’s the second reason the book is pulled, and you can see it in the ALA list of reasons the book is challenged.

So, I’m sure, now that I’ve said that racist white Southerners feel hurt by TKAM, the RWOM will, because it’s a principle about criticism, insist that TKAM be taught. Who is the snowflake here?

I’m sure, since the Right Wing Outrage Machine is all about principle, they’ll now look into this issue.

I’m also sure I have a unicorn in my garden that poops gold.


[1] Here is the interesting point. Yes, parents who didn’t want TKAM taught because of the n word, and because of complicated issues about its racism, went to school boards. Presumably they didn’t go to the school boards first; they went to the principal, didn’t get anywhere, and so kept taking it up the ladder. Parents who didn’t want their white students to have to confront white racism went to the principal, and got their way. In other words, people who wanted to protect the fragile feelings of white Southerners didn’t need to go to the School Board—they could count on principals protecting the feelings of their precious snowflake white students who didn’t want to hear that segregation might have been bad. Parents with more complicated issues had to go to the School Board.

Teaching about racism from a position of privilege

I’ve taught a course on rhetoric and racism multiple times (I think this is the third, but maybe fourth). It came out of a couple of other courses—one on the rhetoric of free speech, and the other on demagoguery, but also from my complete inability to get smart and well-intentioned people to engage in productive discussions about racism.

I never wanted to teach a class on racism because I thought that there wasn’t really a need for a person who almost always has all the privileges of whiteness to tell people about racism. But I had a few experiences that changed my mind. And so I decided to do it, but it is the most emotionally difficult class I teach, and it is really a set of minefields, and there is no way to teach it that doesn’t offend someone. And yet I think it’s important, and I think other white people should teach about racism, but with a few caveats.

Like many people, I was trained to create the seminar classroom, in which students are supposed to “learn to think for themselves” by arguing with other students. The teacher was supposed to act as referee if things got too out of hand, but, on the whole, to treat all opinions as equally valid. I was teaching a class on the rhetoric of free speech—with the chairs in a circle, like a good teacher—when a white student said, “Why can black people tell jokes about white people, but white people can’t tell jokes about black people?”

And all the African-American students in the class shoved their chairs out of the circle, and one of them looked directly at me.

That’s when I realized how outrageously the “good teaching” method—in which every opinion expressed by a student should be treated as just as valid as the opinion of every other student—was institutionalized privilege.

What I hadn’t realized till that moment was that the apparently “neutral” classroom I had been taught to create wasn’t neutral at all. I was trained at a university and a department at which nonwhites and women were in the minority, and so every discussion in which all values are treated as equal in the classroom necessarily meant that straight male whiteness dominated, just in terms of sheer numbers. Then I went to a university that was predominantly women, and white males still dominated. White males dominate discussion, while white fragility ensures that treating all views as though they’re equal is doing nothing of the kind. The “neutral” classroom treats a white student’s hurt feelings at being called racist as precisely the same as anything racist s/he might say. And they aren’t the same.

That “liberal” model of class discussion is deeply vexed, and specifically vexed in terms of race, gender, and sexuality. Often being one of few women in a class, and not uncommonly being one of few who openly identified as feminist, I was often asked to represent what “feminists” thought about an issue, and I’ve unhappily observed (or been in) classes where the teacher asked a student to speak for an entire group (“Chester, what do gay people think about this?”). It’s interesting that not all identities get that request to speak for their entire group. While I have seen teachers call on a veteran to ask what all “veterans” think, I have never been in a class where anyone said, “Chester, what do ‘working class people’ think about this issue?” I’ve also never been in a class, even ones where het white Christian males were in the minority, where anyone asked a het white Christian male to speak for all het white males.

The most important privilege that het white Christian males have is the privilege of toggling between individualism and universalism on the basis of which position is most rhetorically useful in the moment. In situations in which het male whiteness is the dominant epistemology, someone with that identity can speak as an individual, about his experience. When he generalizes from his experience, it’s to position himself as the universal experience. Het white males are simultaneously entirely individual and perfectly universal.

The “liberal” classroom presumes people who are speaking to one another as equals, but what if they aren’t? The “liberal” classroom puts tremendous work on identities who walk into that room as not equal—they have to be the homophobic, racist, sexist whisperers. That isn’t their job. That’s my job. I realized I was making students do my work.

That faux neutrality also guarantees other unhappy classroom practices. For instance, students who disagree with that falsely neutral position do so from a position of particularity. The “normal” undergrad has asserted a claim that seems to come from a universal vision, and so any student who refutes his experience is now not only identifying with a stigmatized identity, but self-identifying as a speaker who is simultaneously particular and a representative of an entire group. When your identity is normalized, you claim to speak for Americans; when your identity is marked as other, you speak for all the others in that category.

There’s a weird paradox here. Both the het white Christian male and the [other] are taken as speaking for a much larger group, but in the case of the het white male it’s that he is speaking for humanity as a whole. If he isn’t, if his identity as a het white male isn’t taken as universal in a classroom, then some number of people in that category will be enraged, genuinely feel victimized, and dismiss as “political correctness” the demand that they honor the experience of others as much as they honor their own.

What the white panic media characterizes as “political correctness” is rarely about suppression of free speech (they’re actually the ones engaged in political correctness)—it’s about holding all identities to the same standards of expression. The strategic misnaming of trying to honor peoples’ understanding of themselves as “political correctness” ignores the actual history of the term, which was about pivoting on a dime in order to spin facts in a way that supported faction. In other words, the whole flinging poo of throwing the term “political correctness” at people asking for equality is strategic misnaming and projection.

The second experience was in a class about the history of conceptions of citizenship, in which I was trying to make the point that identification is often racial, and that the notion of “universal” is often racist. I gave the class the statistics about Congress—that it was about 90% male and also 90% (or more) white. I asked the white males in the class whether they would feel that they were represented if Congress were around 90% nonwhite and nonmale. Normally, this set off light bulbs for students. But, this time, one student raised his hand and said, “Well, yes, because white males aren’t angry.”

Of course, that isn’t true, and I’d bet they’d be pretty angry about not being represented, but, even were it true, it would be irrelevant. That student was assuming that being angry makes people less capable of political deliberation—that anger has no place in political argument. That’s an assumption often made in the “liberal” classroom, in which people get very, very uncomfortable with feelings being expressed. And it naturally privileges the privileged because, if being emotional (especially angry) means that a person shouldn’t be participating (or their participation is somehow impaired) then we either can’t talk about things that bother any students (which would leave a small number of topics appropriate for discussion), or people who are angry about aspects of our world (likely to be the less privileged) are silenced before they speak—they’re silenced on the grounds of the feelings they might legitimately have.

So, if we’re going to have a class about racism, we’re going to have a class in which people get angry, and not everyone’s anger is the same. Racist discourse is (and long has been) much more complicated than a lot of people want it to be—we want to think that it’s easy to identify, that it’s marked by hostility, that it’s open in its attacks on another race. But there has always been what we now call “modern racism”—racism that pretends to be grounded in objective science, that says “nice” things about the denigrated group, that purports to be acting out of concern and even affection. That is the kind of reading that angers students the most, and I think it’s important we read it because it’s the most effective at promoting and legitimating racist practices. But it will offend students to read it.

And so the class is really hard to teach, and even risky. And that was the other thing I realized: if we have institutions in which only people of color are teaching classes about racism, we’re making them take on the politically riskier courses. That’s racist.

I remain uncomfortable being a white person teaching about racism, and I think my privilege probably means I do it pretty badly. But I think it needs to be done.


On career choices as mingling in Burke’s parlor

On Wednesday, I sent off the scholarly version of the demagoguery argument. It isn’t the book I once planned (that would involve a deeply theoretical argument about identity and the digital world), but it’s the one I really wanted to write, that would (I think) reach more people than that other one.

And it’s the last scholarly book I’ll write. I intend to spend the rest of my career trying to solve the interesting intellectual problem of making scholarly concepts and debates more accessible to non-academics. But that isn’t because I reject highly specialized academic writing as, in any way, a bad thing.

I have no problem with highly theoretical and very specialized books. My books have barely grazed the 1000-sales point, and that’s pretty good for a scholarly book. People have told me that something I’ve written has had an impact on their scholarship, pedagogy, or program administration, so I’m really happy with my record as a scholar.

And I’m happy with the record of people who have sold both more and less because measuring impact is so very difficult. Publishing a book with an academic press is an extraordinary achievement, and measuring the impact of such books accurately is nigh impossible—a really powerful book is shared in pirated pdfs, checked out of libraries, passed from one person to another. Sales and impact are orthogonal in academia.

If you study the history of ideas even a little, you have to know that what seemed major in the moment was sometimes just a trend (like mullets) and sometimes a sea change (like the synthesizer). No one reads Northrop Frye anymore, but he was a big deal at one moment; Hannah Arendt, who was also a big deal around the same time, is still in the conversation. And there are all those people who weren’t big deals in their era, but later came to have tremendous impact, such as Mikhail Bakhtin.

Some trade books on scholarly issues have had extraordinary sales (such as Mortimer Adler’s writings), but it’s hard to know what impact they had. Madison Grant’s racist book The Passing of the Great Race had poor sales, but appears to have had a lot of impact. And there are lots of trade books that have come and gone without leaving any impact, so there’s no good reason to conclude that trade books necessarily have more impact than scholarly ones. I don’t think there are many (any?) necessary conclusions one can draw about whether trade or scholarly books have more impact, are more or less important, or are more or less valuable as intellectual activity.

I have always loved Kenneth Burke’s analogy of the parlor for what it means to be interested in major questions. You show up at a party, he says, and it’s been going on for a while, and you find some conversation that seems interesting. You listen for a while, and then you take a side or point out something new. You get attacked and defended, and some people leave the conversation, and others join, and eventually you too leave. And it goes on, with other people taking sides that may or may not have to do with what you were arguing.

What Burke fails to mention is that, if it’s a good party, there are a lot of conversations going on. You might choose to leave that particular conversation, but not leave the party.

I have loved writing scholarly pieces (although I didn’t initially think I would), and my work has placed me in some particular conversations. I’ve moved from one conversation to another, but all on the side of the parlor engaged in very scholarly arguments. I’d like to leave that side of the parlor, not because it’s a bad one—it’s a wonderful one—but because it’s a party with a lot of conversations going on. I’d like to mingle.

I think a lot of discussions of the public intellectual rest on odd binary assumptions—that either specialized or public scholarship is good, for instance. Scholarship that speaks with authority to a small group is neither better nor worse than scholarship that reaches a broad audience—it’s just a different conversation in Burke’s parlor. And I’m going to wander over there for a bit.


“Just Write!” and the Rhetoric of Self-Help

There is a paradox regarding the large number of scholars who get stalled in writing—and a large number do get stalled at some point (50% of graduate students drop out): they got far enough to get stalled because, for some long period of time, they were able to write. People who can’t write a second book, or a first one, or a dissertation, are people who wrote well enough and often enough to get to the point that they needed to write a dissertation, first book, second book, grant, and so on. So, what happened?

The advice they’re likely to be given is, “Just write.” And the reason we give that advice (advice I gave for years) is that we have the sense that they’re overthinking things, that, when they sit down to write, they’re thinking about failure, and success, and shame, and all the things that might go wrong, and all the ways what they’re writing might be inadequate, and all the negative reactions they might get for what they’ve written. So, we say, “Just write,” meaning, “Don’t think about those things right now.”

The project of writing may seem overwhelming because it is existentially risky, and the fear created by all that anxiety and uncertainty is paralyzing. The task can seem impossibly complicated, and so we give simple advice because we believe that persuading people to adopt a simpler view of the task ahead will enable them to write something. Once they’ve written something, once they’re unstuck, then they can write something more, and then revise, and then write more. Seeing that they have written will give them the confidence they need to keep writing.

And I think that advice often works, hence the (deserved) success of books like Writing Your Dissertation in Fifteen Minutes a Day or Destination Dissertation. They simplify the task initially, and present the tasks involved in ways that are more precise than accurate, but with the admirable goal of keeping people moving. Many people find those books useful, and that’s great. But many people don’t, and I think the unhappy consequence of the “you just have to do this” rhetoric is that there is an odd shaming that happens to people for whom that advice doesn’t work. And, while it’s great that it works for a lot of people, there are a lot for whom it doesn’t, and I’m not happy that they feel shamed.

These books have, as Barbara Kamler and Pat Thomson have argued, characteristics typical of the self-help genre (“The Failure of Dissertation Advice Books”), especially in that they present dissertation writing as “a series of linear steps” with “hidden rules” that the author reveals. While I am not as critical of those books, or of the genre of self-help, as Kamler and Thomson are, I think their basic point is worth taking seriously: this advice misleads students because it presents dissertation writing as a set of practices and habits rather than as cognitive challenges and developments.

Academic writing is hard because it’s hard. Learning to master the postures, steps, and dances of developing a plausible research question, identifying and mastering appropriate sources, determining necessary kinds of support, managing a potentially sprawling project, and positioning a new or even controversial claim in an existing scholarly conversation—all of that is hard and requires cognitive changes, not just writing practices.

Telling people academic writing “just” requires anything (“just write,” “just write every day,” “just ignore your fears”) is a polite and sometimes useful fiction. And self-help books’ reliance on simple steps and hidden rules is, I’d suggest, not necessarily manipulative, but based in the sense that telling people something hard is actually hard can discourage them. If you lie, and thereby motivate them to try doing it, then they might realize that, while hard, it isn’t impossible.

I think the implicit analogy is to something like telling a person who needs to exercise that they should “just get up off the couch.” Telling people that improving their health will be a long and slow process with many setbacks is unlikely to motivate someone to start the process; it makes the goal seem impossible, and unrewarding. Telling someone that getting healthier is simple, and they “just” need to increase their exercise slightly, or reduce portion size slightly, or do one thing differently will at least get them started. Having gotten a little healthier might inspire them to do more, but, even if it doesn’t, they are getting a little better.

But that’s the wrong analogy.

A scholar who is having difficulty writing is not analogous to someone who needs to get up off the couch: this is a person with a long record of successes as a writer. That is what we (and people who are stuck) so often lose track of when we give the “just write” advice. They are not a person sitting on a couch; they are someone with an exercise practice that has always worked for them in the past but isn’t working now.

The better analogy, I would suggest, is a sprinter who is now trying to run a marathon. Sprinting has worked for them in the past, and many academics have a writing process that is akin to sprinting—chunks of time in which we do nothing but write, and try to get as much done as quickly as we can. Writing a dissertation or book, on the other hand, is more like running a marathon.

It would be unethical to tell a sprinter who is unable to run a marathon that she should “just run.” She has been running; she’s quite good at it. But the way that she has been running is not working for this new distance. And if she does try to run a marathon the way she has always run short races, she will hurt herself.

My intuition is that people who have trouble writing are people who have always used the sprinting method, and have simply managed to develop the motivational strategies to sprint for longer, or to collapse from time to time during the race and pick themselves up. Often, it seems to me, that motivation relies on panic and negative self-talk—they manage to binge write because otherwise, they tell themselves, they are failures.

So I’m not saying that “Just write” is always bad advice. I am saying that it sometimes is; it can send people into shame spirals. It works only for some people, the ones who do find that polite fiction motivating. For others, telling them “just write” is exactly like telling a person in a panic attack to “just calm down” or someone depressed to “just cheer up.”

The “just write” comes from a concern that lack of confidence will paralyze a student. But I think we might be solving the wrong problem.

Part of the problem is the myth of positive thinking, which has taken on an almost magical quality for some people. There is a notion that you should only think positive thoughts, as though thinking negative things brings on bad events. Since thinking clearly about how hard it is to write a book, dissertation, or grant (and, specifically, thinking clearly about how we might have habits or processes that inhibit our success) is thinking about “bad” things, about how things might go wrong or what troubles we might have, the myth of positive thinking says you shouldn’t do it. You should, instead, just imagine success.

This is a myth. It isn’t just a myth, but pernicious, destructive nonsense. A (sometimes secular) descendant of the prosperity gospel elegantly described by Kate Bowler in Blessed, it is magical thinking pure and simple, and perfectly contrary to what research shows about how positive thinking actually affects motivation.

But here I should be clear. Some people who advocate wishful thinking do so because they believe that the only other possibility is wallowing in self-loathing and a sense that the task is impossible, and they believe that telling students that academic writing is hard will necessarily lead to their believing it is impossible. In other words, there is an assumption of a binary between thinking only and entirely about positive outcomes and thinking only and entirely about tragic outcomes. The former is empowering and the latter is paralyzing. That narrative is wrong on all three counts: positive thinking is not necessarily enabling, moments of despair are not necessarily disabling, and our attitude toward our own challenges is not usefully described as a binary between pure optimism and pure despair. Left out of that binary is being hopefully strategic: aware of possible failures, mindful of hurdles, with confidence in our resilience as much as in our talents.

As to the first, studies clearly show that refusing to think negative thoughts about possible outcomes is actively harmful, and frequently impairs achievement. That’s important to remember: telling students they shouldn’t think about their own flaws, the challenges ahead of them, and how things might go wrong is not helping them, and it is making it less likely they will do what they need to do.

Gabriele Oettingen’s considerable research (summarized in the very helpful book Rethinking Positive Thinking) shows that, while wishful thinking can be useful for maintaining hope in a bad situation or identifying long-term goals, it inhibits action. Fantasizing about how wonderful a dissertation or book will be doesn’t inspire us to write either; for many people, it makes the actual, sometimes gritty, work so much more unattractive in comparison that it’s impossible to write. The fantasy is far more fun than writing a crummy first draft. Similarly, Carol Dweck’s research on mindsets shows that success depends on acknowledging what has gone wrong and identifying how one might grow and change to get a different outcome in the future.

A sense that the task is so hard as to be impossible is not inevitably and necessarily disabling. It is, however, inevitable. It is dishonest to tell students that we never feel that what we’re trying to do can’t be done or isn’t worth doing, because so many of us do. And most of us got (and get) through it. Sometimes it took time, therapy, medication, changing things in our personal lives, changing jobs, changing projects, all of the above. But I don’t know any productive scholar free from times of slogging through the slough of despond.

In my experience, academic writing gets easier, but it’s never easy. The hardest writing is probably finishing a dissertation while writing job materials—nothing after that is so hard. But it’s always hard. If we tell students that it’s easy, or that it gets easy, even if we do so with the intention of keeping them moving, we do them a disservice. If they believe us, if they believe that we find it easy, then, when it gets hard, as it necessarily will, they have to conclude that there is something wrong with them. They are unhappily likely to conclude that they have been exposed for the imposter they always worried they were.

The “just write” advice almost certainly works for some people in some situations, as does the “just write every day” or “just freewrite” or “just start with your thesis” or any of the other practices and rules that begin with “just.” They work for someone somewhere and maybe they work for everyone some of the time, and they always strike me as sensible enough to suggest that people experiment with them. But we shouldn’t pretend that they’re magical and can’t possibly fail, or that someone “just” needs to do them. The perhaps well-intentioned fiction that academic writing “just” requires certain practices is magical thinking, and we need to stop saying it.

In my experience, people who find the “just write” advice useless find it too abstract. So, I think we need to be clear that scholarly productivity is, for most people, hard, and it’s fine that a person finds it hard. And it takes practice, so there are some things a person might “just write”:

  • the methods section;
  • descriptions of an incident, moment in a text, interaction, or some other very, very specific epitome of their problem (Pirsig’s brick in the wall of the opera house);
  • summaries of their secondary materials with a discussion of how each text is and is not sufficient for their research;
  • a collection of data;
  • the threads from one datum to another;
  • a letter to their favorite undergrad teacher about their current research;
  • a description of their anxieties about their project;
  • an imitation of an introduction, abstract, conclusion, or transition paragraph they like, written by a junior scholar.

I’m not presenting that list as a magical solution. It would be odd for me to say that simplistic advice is not helpful and then give a list of the five (or seven, or ten) things we “just” have to do to become (or teach others to become) skilled and productive academic writers. What we have to do is acknowledge that the project requires significant and complicated cognitive changes: that, for most of us, scholarly writing is hard because it’s hard. Let’s be honest about that.

Rationality, demagoguery, and rhetoric

One of my criticisms of conventional definitions of demagoguery is that they enable us to identify when others are getting suckered by demagoguery, but not when we are. They aren’t helpful for seeing our own demagoguery because they emphasize the “irrationality” and bad motives of the demagogues. And both strategies are deeply flawed, and generally circular. Here I’ll discuss a few problems with conventional notions of rationality/irrationality, and later I’ll talk about the problems of motivism.

Definitions of “irrationality” imply a strategy for assessing the rationality of an argument, and many common definitions of “rational” and “irrational” imply methods that are muddled, even actively harmful. Most of our assumptions about what makes an argument “rational” or “irrational” imply strategies that contradict one another.

1) “Rationality” is sometimes used interchangeably with “reasonable” and “logical,” and sometimes as a larger term that incorporates “logical” (a stance is rational if the arguments made for it are logical, or a person is rational if s/he uses logical processes to make decisions).

2) That common usage contradicts another common usage, although people don’t necessarily realize it: many people assume that an argument is rational if you can support it with reasons, whether or not the reasons are logically connected to the claims. So, in the first usage, a rational argument has claims that are logically connected, but in the second it just has to have sub-claims that look like reasons.

3) Many people assume that “rational” and “true” are the same, and/or that “rational” arguments are immediately seen as compellingly true, so to judge whether an argument is rational, you just have to ask yourself whether it seems compellingly true. Of course, that conflation of rational and true means that “rational” is another way of saying “I agree.”

4) Many people equate “irrational” with “emotional”: it can seem that the way to determine whether an argument is rational is to infer whether the person making the argument is emotional, usually from the number of emotional markers the rhetor uses: linguistic “boosters” (words such as “never” or “absolutely”) or verbs of affect (“love,” “hate,” “feel”). Sometimes emotionality is determined through sheer projection, or through deduction from stereotypes (that sort of person is always emotional, and therefore their arguments are always emotional).

Unhappily, in many argumentation textbooks, there’s a fifth usage thrown in: it’s not uncommon for a “logical” argument to be characterized as one that appeals to “facts, statistics, and reason”—surface features of a text. Sometimes, though, we use the term “logical” to mean, not an attempt at logic, or a presentation of self as engaged in a logical argument, but a successful attempt—an argument is logical if the claims follow from the premises, the statistics are valid, and the facts are relevant. That usage—how the term is used in argumentation theory—is in direct conflict with the vaguer uses that rely on surface features (“facts, statistics, and reason,” or the linguistic features we associate with emotionality). Much of the demagoguery discussed in this book appeals to statistics, facts, and data, and much of it is presented without linguistic markers of emotionality, but generally in service of claims that don’t follow, or that appeal to inconsistent premises, or that contradict one another. Thus, for the concept of rationality to be useful for identifying demagoguery, it has to be something other than any of the contradictory ones above—surface features; inferred, projected, or deduced emotionality of the rhetor; presence of reasons; audience agreement with claims.

Following scholars of argumentation, I want to argue for using “rationality” in a relatively straightforward way. Frans van Eemeren and Rob Grootendorst identify ten rules for what they call a rational-critical argument. While useful, those ten can, for purposes of assessing informal and lay arguments, be reduced to four:

1) Whatever the rules for the argument are, they apply equally across interlocutors; so, if a kind of argument is deemed “rational” for the ingroup, then it’s just as “rational” for the outgroup (e.g., if a single personal experience counts as proof of a claim, then a single appeal to personal experience suffices to disprove that claim);

2) The argument appeals to premises and/or definitions consistently, or, to put it in the negative, the claims of an argument don’t contradict each other or appeal to contradictory premises;

3) The responsibilities of argumentation apply equally across interlocutors, so that all parties are responsible for representing one another’s arguments fairly, and for striving to provide internally consistent evidence to support their claims;

4) The issue is up for argument—that is, the people involved are making claims that can be proven wrong, and that they can imagine changing.

Not every discussion has to fit those rules—there are some topics that aren’t open to disproof, and that therefore can’t be discussed this way. And those sorts of discussions can be beneficial, productive, enlightening. But they’re not rational; they’re doing other kinds of work.

In the teaching of writing, it’s not uncommon for “rationality” and “logical” to be compressed into Aristotle’s category of “logos” (with “irrational” and “emotional” getting shoved into his category of “pathos”)—and then very recent notions about logic and emotion are projected onto Aristotle. As is clear even in popular culture, recent ideas assume a binary between logical and emotional, so saying something is an emotional argument is, for us, saying it is not logical. That isn’t what Aristotle meant—he didn’t merely mean that appeals to emotion and appeals to reason can coexist; he didn’t see them as opposed in the first place. Nor did he mean “facts” as we understand them, and he had no interest in statistics. For Aristotle, ethos, pathos, and logos are always operating together—logos is the content, the argument (the enthymemes); pathos incorporates the ways we try to get people to be convinced; ethos is the person speaking. So, were we to use an Aristotelian approach to an argument, we would look at a set of statistics about child poverty, and the logos would be that poverty has gotten worse (or is worse in certain areas, or for some people—whatever the claims are), the pathos would be how it’s presented (what’s in bold, how it’s laid out, and also that it’s about children), and the ethos would be partly situated (what we know about the rhetor prior to the discourse) but also a consequence of her using statistics (she’s well-informed, she’s done research on this) and of the subject being children (she is compassionate). For Aristotle, unlike post-logical positivists, the pathos and logos and ethos can’t operate alone.

I think it’s better just to avoid Aristotle’s terms, since they slide into a binary so quickly. More important, they enable people to conflate “a logical argument” (that is, the evaluative claim, that the argument is logical) with “an appeal to logic” (the descriptive claim, that the argument is purporting to be logical).

What this means for teaching

People generally reason syllogistically (that’s Arie Kruglanski’s finding), and so it’s useful for people to learn to identify major premises. I think either Toulmin’s model or Aristotle’s enthymeme works for that strategy, but it is important that people be able to identify unexpressed premises.

Syllogism:

All men are mortal. [universally valid Major Premise]

Socrates is a man. [application of a universally valid premise to specific case: minor premise]

Therefore, Socrates is mortal. [conclusion]

Enthymeme:

Socrates is mortal [conclusion]

because he is a man. [minor premise]

The Major Premise is implied (all men are mortal).
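For readers who find a formal skeleton helpful, the same example can be written in standard predicate-logic notation (the notation is my addition, a sketch, not part of Toulmin’s or Aristotle’s models):

\[
\text{Syllogism:}\quad
\frac{\forall x\,\big(\mathrm{Man}(x) \to \mathrm{Mortal}(x)\big) \qquad \mathrm{Man}(\mathrm{Socrates})}{\mathrm{Mortal}(\mathrm{Socrates})}
\qquad\qquad
\text{Enthymeme:}\quad
\frac{\mathrm{Man}(\mathrm{Socrates})}{\mathrm{Mortal}(\mathrm{Socrates})}
\]

In the enthymeme, the inference goes through only if the audience supplies the suppressed universal premise; that suppressed premise is exactly what students need to learn to reconstruct.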

Or, syllogism:

A = B [Major Premise]

A = C [minor premise]

Therefore, B = C. [conclusion]

Enthymeme:

B = C because A = B. This version of the argument implies that A = C.

Chester hates squirrels because Chester is a dog.  

Major Premise (for the argument to be true): All dogs hate squirrels.

Major Premise (for the argument to be probable): Most dogs hate squirrels.

 

Batman is a good movie because it has a lot of action.

Major Premise: Action movies are good.

 

Preserving wilderness in urban areas benefits communities

because it gives people access to non-urban wildlife.

Major Premise: Access to non-urban wildlife benefits communities.

Many fallacies come from some glitch in the enthymeme—for instance, non sequitur happens when the conclusion doesn’t follow from the premises.

  • Chester hates squirrels because bunnies are fluffy. (Notice that there are four terms—Chester, hating squirrels, bunnies, and fluffy things.)
  • Squirrels are evil because they aren’t bunnies.

 

Before going on to describe other fallacies, I should emphasize that identifying a fallacy isn’t the end of a conversation, or it doesn’t have to be. It isn’t like a ref making a call; it’s something that can itself be argued, and that’s especially true with the fallacies of relevance. If I make an emotional argument, and you say that’s argumentum ad misericordiam, then a good discussion will probably have us arguing about whether my emotional appeal was relevant.

Appealing to inconsistent premises comes about when you have at least two enthymemes whose major premises contradict each other.

For instance, someone might argue: “Dogs are good because they spend all their time trying to gather food” and “Squirrels are evil because they spend all their time trying to gather food.” You’ll rarely see it that explicit—usually the slippage goes unnoticed because you use dyslogistic terms for the outgroup and eulogistic terms for the ingroup: “Dogs are good because they work hard trying to gather food to feed their puppies” and “Squirrels are evil because they spend all their time greedily trying to get to food.”
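Spelled out in the notation used above for syllogisms (the predicate names are my own labels for this example), the two hidden major premises are:

\[
\forall x\,\big(\mathrm{GathersFoodAllDay}(x) \to \mathrm{Good}(x)\big)
\qquad\text{and}\qquad
\forall x\,\big(\mathrm{GathersFoodAllDay}(x) \to \mathrm{Evil}(x)\big)
\]

Applied to any creature that spends all its time gathering food, the two premises yield contradictory conclusions; the eulogistic and dyslogistic wording is what keeps the contradiction from being noticed.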

Another fallacy that comes about because of glitches in the enthymeme is circular reasoning (aka “begging the question”). This is a very common fallacy, but surprisingly difficult for people to recognize. It looks like an argument, but it is really just an assertion of the conclusion over and over in different language. The “evidence” for the conclusion is actually the conclusion in synonyms: “The market is rational because it lets the market determine the value of goods rationally.” “This product is superior because it is the best on the market.”

Genus-species errors (aka over-generalizing, ignoring exceptions, stereotyping) happen when, hidden in the argument (often in the major premise), there is a slip from “one” (or “some”) to “all.” They result from assuming that what is true of a specific thing is true of every member of its genus, or that what is true of the genus is true of every individual member of that genus. “Chester would never do that because he and I are both dogs, and I would never do that.” “Chester hates cats because my dog hates cats.”

Fallacies of relevance

Really, all of the following could be grouped under red herring, which consists of dragging something so stinky across the trail of an argument that people take the wrong track. Also called “shifting the stasis,” it’s an attempt to distract attention from what is really at stake between two people and move it to something else—usually something inflammatory, but sometimes simply easier ground for the person dragging the red herring. Sometimes it arises because one of the interlocutors sees everything in one set of terms—if you disagree with them, and they take the disagreement personally, they might drag in the red herring of whether they are a good person, simply because that’s what they think all arguments are about.

Ad personam (sometimes distinguished from ad hominem) is an irrelevant attack on the identity of an interlocutor. It generally involves some kind of name-calling, usually of such an inflammatory nature that the person must respond (calling a person an abolitionist in the 1830s, a communist in the 1950s and 60s, or a liberal now). Not every “attack” on a person or their character is fallacious, though. Accusing someone of being dishonest, or of making a bad argument, or of engaging in fallacies, is not ad hominem because it’s attacking their argument. Even attacking the person (“you are a liar”) is not fallacious if it’s relevant. Ad personam is really a kind of red herring: it’s generally irrelevant to the question at hand, and it’s an attempt to distract the attention of the audience.

Ad verecundiam is the term for a fallacious appeal to authority. It’s a fallacy when the cited authority’s expertise isn’t relevant to the question—there’s nothing inherently fallacious about appealing to authority, but having a good conversation might mean that the relevance of the authority’s expertise now has to become the stasis. Bandwagon appeal is a kind of fallacious appeal to authority—it isn’t fallacious to appeal to popularity if it is a question in which popular appeal is a relevant kind of authority.

Ad misericordiam is the term for an irrelevant appeal to emotion, such as saying you should vote for me because I have the most adorable dogs (even though I really do). Emotions are always part of reasoning, so merely appealing to emotions is not in itself fallacious; the fallacy lies in appealing to an emotion that is irrelevant to the question at hand.

Scare tactics (aka apocalyptic language) is a fallacy if the scary outcome is irrelevant, unlikely, or inevitable regardless of the actions. For instance, if I say you should vote for me and then give you a terrifying description of how our sun will someday go supernova, that’s scare tactics (unless I’m claiming I’m going to prevent that outcome somehow).

Straw man is dumbing down the opposition’s argument; because the rhetor is then responding to arguments their opponent never made, most of what they have to say is irrelevant. People engage in this one unintentionally through not listening, through projection, and through a fairly interesting process: we have a tendency to homogenize the outgroup and assume its members are all the same. So, if you say “Little dogs aren’t so bad,” and I once heard a squirrel lover praise little dogs, I might decide you’re a squirrel lover. Or, more seriously, if I believe that anyone who disagrees with me about gun ownership and sales wants to ban all guns, then I might respond to your argument about requiring gun safes with something about the government kicking through our doors and taking all of our guns (an example of slippery slope).

Tu quoque is usually (but not always) a kind of red herring; sometimes it’s the fallacy of false equivalency (what George Orwell called the notion that half a loaf is no better than none). One argues, “you did it too!” While it’s occasionally relevant, as it can point to a hypocrisy or inconsistency in one’s opposition, and might be the beginning of a conversation about inconsistent appeals to premises, it’s fallacious when it’s irrelevant. For instance, if you ask me not to leave dirty socks on the coffee table, and I say, “But you like squirrels!” I’ve tried to shift the stasis. It can also involve my responding with something that isn’t equivalent, as when I try to defend myself against a charge of embezzling a million dollars by pointing out that my opponent didn’t try to give back extra change from a vending machine.

 

False dilemma (aka poisoning the wells, false binary, either/or) occurs when a rhetor sets out a limited number of options, forcing the audience’s hand by making them choose the option s/he wants. Were all the options laid out, the situation would be more complicated, and his/her proposal might not look so good. It’s often an instance of scare tactics because the other option is typically a disaster (we either fight in Vietnam, or we’ll be fighting the communists on the beaches of California). It becomes straw man when it’s achieved by dumbing down the opponent’s proposal.

Misuse of statistics is self-explanatory. Statistical analysis is far more complicated than one might guess, given common uses of statistics, and there are certain traps into which people often fall. One common one is the deceptively large number. The number of people killed every year by sharks looks huge, until you consider the number of people who swim in shark-infested waters every year, or compare it to the number of people killed yearly by bee stings. Another common one is to shift the basis of comparison, such as comparing the number of people killed by sharks over the last ten years with the number killed by car crashes in the last five minutes. (With some fallacies, it’s possible to think that there was a mistake involved rather than deliberate misdirection; with this one, that’s a pretty hard claim to make.) People often get brain-freeze when they try to deal with percentages, and make all sorts of mistakes—if the GNP goes from one million to five hundred thousand one year, that’s a fifty per cent drop; if it goes back up to one million the next year, that is not, however, a fifty per cent increase (it’s a hundred per cent increase).
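To make the percentage trap concrete, here is the arithmetic on those hypothetical GNP figures (note that the denominator changes between the two calculations):

\[
\frac{1{,}000{,}000 - 500{,}000}{1{,}000{,}000} = 50\%\ \text{(the drop)},
\qquad
\frac{1{,}000{,}000 - 500{,}000}{500{,}000} = 100\%\ \text{(the increase needed to recover)}
\]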

The post hoc ergo propter hoc fallacy (aka confusing causation and correlation) is especially common in the use of social science research in policy arguments. If two things are correlated (that is, they exist together), that does not necessarily mean that one can be certain which one caused the other, or whether they were both caused by something else. It generally arises when a study has failed to include a “control” group. So, for instance, people used to spend huge amounts of money on orthopedic shoes for kids because the shoes correlated with various foot problems’ improving. When a study was finally done that involved a control group, it turned out that it was simply time that was causing the improvement; the shoes were useless.

 

Some lists of fallacies have hundreds of entries, and subtle distinctions can matter in particular circumstances (for instance, the prosecutor’s fallacy is really useful in classes about statistics), but the above are the ones that seem to be the most useful.