Teaching about racism from a position of privilege

I’ve taught a course on rhetoric and racism multiple times (I think this is the third, but maybe fourth). It came out of a couple of other courses—one on the rhetoric of free speech, and the other on demagoguery, but also from my complete inability to get smart and well-intentioned people to engage in productive discussions about racism.

I never wanted to teach a class on racism because I thought that there wasn’t really a need for a person who almost always has all the privileges of whiteness to tell people about racism. But I had a few experiences that changed my mind. And so I decided to do it, but it is the most emotionally difficult class I teach, and it is really a set of minefields, and there is no way to teach it that doesn’t offend someone. And yet I think it’s important, and I think other white people should teach about racism, but with a few caveats.

Like many people, I was trained to create the seminar classroom, in which students are supposed to “learn to think for themselves” by arguing with other students. The teacher was supposed to act as referee if things got too out of hand, but, on the whole, to treat all opinions as equally valid. I was teaching a class on the rhetoric of free speech—with the chairs in a circle, like a good teacher—when a white student said, “Why can black people tell jokes about white people, but white people can’t tell jokes about black people?”

And all the African-American students in the class shoved their chairs out of the circle, and one of them looked directly at me.

That’s when I realized how outrageously the “good teaching” method—in which every opinion expressed by a student should be treated as just as valid as the opinion of every other student—was institutionalized privilege.

What I hadn’t realized till that moment was that the apparently “neutral” classroom I had been taught to create wasn’t neutral at all. I was trained at a university and a department at which nonwhites and women were in the minority, and so every classroom discussion in which all values were treated as equal necessarily meant that straight male whiteness dominated, just in terms of sheer numbers. Then I went to a university that was predominantly women, and white males still dominated. White males dominate discussion, and white fragility ensures that a classroom claiming to treat all views as equal is doing nothing of the kind. The “neutral” classroom treats a white student’s hurt feelings at being called racist as precisely the same as whatever racist thing he or she might have said. And they aren’t the same.

That “liberal” model of class discussion is vexed, and specifically vexed in terms of race, gender, and sexuality. As often one of few women in a class, and not uncommonly one of few who openly identified as feminist, I was regularly asked to represent what “feminists” thought about an issue, and I’ve unhappily observed (or sat in) classes where the teacher asked a student to speak for an entire group (“Chester, what do gay people think about this?”). It’s interesting that not all identities get that request to speak for their entire group. While I have seen teachers call on a veteran to ask what all “veterans” think, I have never been in a class where anyone said, “Chester, what do working-class people think about this issue?” I’ve also never been in a class, even ones where het white Christian males were in the minority, where anyone asked a het white Christian male to speak for all het white Christian males.

The most important privilege that het white Christian males have is the privilege of toggling between individualism and universalism on the basis of which position is most rhetorically useful in the moment. In situations in which het male whiteness is the dominant epistemology, someone with that identity can speak as an individual, about his experience. When he generalizes from his experience, it’s to position his experience as the universal one. Het white males are simultaneously entirely individual and perfectly universal.

The “liberal” classroom presumes that people are speaking to one another as equals, but what if they aren’t? The “liberal” classroom puts tremendous work on the students who walk into that room as not equal—they have to be the homophobe, racist, and sexist whisperers. That isn’t their job. That’s my job. I realized I was making students do my work.

That faux neutrality also guarantees other unhappy classroom practices. For instance, students who disagree with that falsely neutral position do so from a position of particularity. The “normal” undergrad has asserted a claim that seems to come from a position of universal vision, and so any student who refutes his experience is not only identifying with a stigmatized identity, but self-identifying as a speaker who is simultaneously particular and a representative of an entire group. When your identity is normalized, you claim to speak for Americans; when your identity is marked as other, you speak for all the others in that category.

There’s a weird paradox here. Both the het white Christian male and the [other] are taken as speaking for a much larger group, but in the case of the het white male that larger group is humanity as a whole. If his identity as het white male isn’t taken as universal in a classroom, then some number of people in that category will be enraged, will genuinely feel victimized, and will dismiss as “political correctness” the expectation that they honor the experience of others as much as they honor their own.

What the white panic media characterizes as “political correctness” is rarely about suppression of free speech (they’re actually the ones engaged in political correctness)—it’s about holding all identities to the same standards of expression. The strategic misnaming of trying to honor people’s understanding of themselves as “political correctness” ignores the actual history of the term, which was about pivoting on a dime in order to spin facts in a way that supported one’s faction. In other words, the poo-flinging of throwing the term “political correctness” at people asking for equality is strategic misnaming and projection.

The second experience was in a class on the history of conceptions of citizenship, in which I was trying to make the point that identification is often racial, and that the notion of “universal” is often racist. I gave the class the statistics about Congress—that it was about 90% male and 90% (or more) white. I asked the white males in the class whether they would feel represented if Congress were around 90% nonwhite and nonmale. Normally, this set off light bulbs for students. But, this time, one student raised his hand and said, “Well, yes, because white males aren’t angry.”

Of course, that isn’t true, and I’d bet they’d be pretty angry about not being represented; but, even were it true, it would be irrelevant. That student was assuming that being angry makes people less capable of political deliberation—that anger has no place in political argument. That’s an assumption often made in the “liberal” classroom, in which people get very, very uncomfortable with feelings being expressed. And it naturally privileges the privileged because, if being emotional (especially angry) means that a person shouldn’t be participating (or that their participation is somehow impaired), then either we can’t talk about things that bother any students (which would leave a small number of topics appropriate for discussion), or people who are angry about aspects of our world (likely to be the less privileged) are silenced before they speak—silenced on the grounds of the feelings they might legitimately have.

So, if we’re going to have a class about racism, we’re going to have a class in which people get angry, and not everyone’s anger is the same. Racist discourse is (and long has been) much more complicated than a lot of people want it to be—we want to think that it’s easy to identify, that it’s marked by hostility, that it’s open in its attacks on another race. But there has always been what we now call “modern racism”—racism that pretends to be grounded in objective science, that says “nice” things about the denigrated group, that purports to be acting out of concern and even affection. That is the kind of reading that angers students the most, and I think it’s important we read it because it’s the most effective at promoting and legitimating racist practices. But it will offend students to read it.

And so the class is really hard to teach, and even risky. And that was the other point I realized. If we have institutions in which only people of color are teaching classes about racism, we’re making them take on the politically riskier courses. That’s racist.

I remain uncomfortable being a white person teaching about racism, and I think my privilege probably means I do it pretty badly. But I think it needs to be done.


On career choices as mingling in Burke’s parlor

On Wednesday, I sent off the scholarly version of the demagoguery argument. It isn’t the book I once planned (that would involve a deeply theoretical argument about identity and the digital world), but it’s the one I really wanted to write, that would (I think) reach more people than that other one.

And it’s the last scholarly book I’ll write. I intend to spend the rest of my career trying to solve the interesting intellectual problem of making scholarly concepts and debates more accessible to non-academics. But that isn’t because I reject highly specialized academic writing as, in any way, a bad thing.

I have no problem with highly theoretical and very specialized books. My books have barely grazed the 1,000-sales point, and that’s pretty good for a scholarly book. People have told me that something I’ve written has had an impact on their scholarship, pedagogy, or program administration, so I’m really happy with my record as a scholar.

And I’m happy with the record of people who have sold both more and less because measuring impact is so very difficult. Publishing a book with an academic press is an extraordinary achievement, and measuring the impact of such books accurately is nigh impossible—a really powerful book is shared in pirated pdfs, checked out of libraries, passed from one person to another. Sales and impact are orthogonal in academia.

If you study the history of ideas even a little, you have to know that what seemed major in the moment was sometimes just a trend (like mullets) and sometimes a sea change (like the synthesizer). No one reads Northrop Frye anymore, but he was a big deal at one moment; Hannah Arendt, who was also a big deal around the same time, is still in the conversation. And there are all those people who weren’t big deals in their era, but later came to have tremendous impact, such as Mikhail Bakhtin.

Some trade books on scholarly issues have had extraordinary sales (such as Mortimer Adler’s writings), but it’s hard to know what impact they had. Madison Grant’s racist book The Passing of the Great Race had poor sales, but appears to have had a lot of impact. And there are lots of trade books that have come and gone without leaving any impact, so there’s no good reason to conclude that trade books necessarily have more impact than scholarly ones. I don’t think there are a lot of (any?) necessary conclusions one can draw about whether trade or scholarly books have more impact, are more or less important, or are more or less valuable as intellectual activity.

I have always loved Kenneth Burke’s analogy of the parlor for what it means to be interested in major questions. You show up at a party, he says, and it’s been going on for a while, and you find some conversation that seems interesting. You listen for a while, and then you take a side or point out something new. You get attacked and defended, and some people leave the conversation, and others join, and eventually you too leave. And it goes on, with other people taking sides that may or may not have to do with what you were arguing.

What Burke fails to mention is that, if it’s a good party, there are a lot of conversations going on. You might choose to leave that particular conversation, but not leave the party.

I have loved writing scholarly pieces (although I didn’t initially think I would), and my work has placed me in some particular conversations. I’ve moved from one conversation to another, but all on the side of the parlor engaged in very scholarly arguments. I’d like to leave that side of the parlor, not because it’s a bad one—it’s a wonderful one—but because it’s a party with a lot of conversations going on. I’d like to mingle.

I think a lot of discussions of the public intellectual rest on odd binary assumptions—that either specialized or public scholarship is good, for instance. Scholarship that speaks with authority to a small group is neither better nor worse than scholarship that reaches a broad audience—it’s just a different conversation in Burke’s parlor. And I’m going to wander over there for a bit.

 

“Just Write!” and the Rhetoric of Self-Help

There is a paradox regarding the large number of scholars who get stalled in writing—and a large number do get stalled at some point (50% of graduate students drop out): they got far enough to get stalled because, for some long period of time, they were able to write. People who can’t write a second book, or a first one, or a dissertation, are people who wrote well enough and often enough to get to the point that they needed to write a dissertation, first book, second book, grant, and so on. So, what happened?

The advice they’re likely to be given is, “Just write.” And the reason we give that advice (advice I gave for years) is that we have the sense that they’re overthinking things, that, when they sit down to write, they’re thinking about failure, and success, and shame, and all the things that might go wrong, and all the ways what they’re writing might be inadequate, and all the negative reactions they might get for what they’ve written. So, we say, “Just write,” meaning, “Don’t think about those things right now.”

The project of writing may seem overwhelming because it is existentially risky, and the fear created by all the anxiety and uncertainty is paralyzing. It can seem impossibly complicated, and so we give simple advice because we believe that persuading them to adopt a simpler view of the task ahead will enable them to write something. Once they’ve written something, once they’re unstuck, then they can write something more, and then revise, and then write more. Seeing that they have written will give them the confidence they need to keep writing.

And I think that advice often works, hence the (deserved) success of books like Writing Your Dissertation in Fifteen Minutes a Day or Destination Dissertation. They simplify the task initially, and present the tasks involved in ways that are more precise than accurate, but with the admirable goal of keeping people moving. Many people find those books useful, and that’s great. But many people don’t, and I think the unhappy consequence of the “you just have to do this” rhetoric is that there is an odd shaming that happens to people for whom that advice doesn’t work. And, while it’s great that it works for a lot of people, there are a lot for whom it doesn’t, and I’m not happy that they feel shamed.

These books have, as Barbara Kamler and Pat Thomson have argued, characteristics typical of the self-help genre (“The Failure of Dissertation Advice Books”), especially in that they present dissertation writing as “a series of linear steps” with “hidden rules” that the author reveals. While I am not as critical of those books, or of the genre of self-help, as Kamler and Thomson are, I think their basic point is worth taking seriously: this advice misleads students because it presents dissertation writing as a set of practices and habits rather than as cognitive challenges and developments.

Academic writing is hard because it’s hard. Learning to master the postures, steps, and dances of developing a plausible research question, identifying and mastering appropriate sources, determining necessary kinds of support, managing a potentially sprawling project, and positioning a new or even controversial claim in an existing scholarly conversation—all of that is hard and requires cognitive changes, not just writing practices.

Telling people academic writing “just” requires anything (“just write,” “just write every day,” “just ignore your fears”) is a polite and sometimes useful fiction. And self-help books’ reliance on simple steps and hidden rules is, I’d suggest, not necessarily manipulative, but based in the sense that telling people something hard is actually hard can discourage them. If you lie, and thereby motivate them to try doing it, then they might realize that, while hard, it isn’t impossible.

I think the implicit analogy is to something like telling a person who needs to exercise that they should “just get up off the couch.” Telling people that improving their health will be a long and slow process with many setbacks is unlikely to motivate someone to start the process; it makes the goal seem impossible, and unrewarding. Telling someone that getting healthier is simple, and they “just” need to increase their exercise slightly, or reduce portion size slightly, or do one thing differently will at least get them started. Having gotten a little healthier might inspire them to do more, but, even if it doesn’t, they are getting a little better.

But that’s the wrong analogy.

A scholar who is having difficulty writing is not analogous to someone who needs to get up off the couch: she or he is a person with a long record of successes as a writer. That is what we (and people who are stuck) so often lose track of when we give the “just write” advice. They are not a person sitting on a couch; they are someone with an exercise practice that has always worked for them in the past but isn’t working now.

The better analogy, I would suggest, is a sprinter who is now trying to run a marathon. Sprinting has worked for them in the past, and many academics have a writing process that is akin to sprinting—chunks of time in which we do nothing but write, and try to get as much done as quickly as we can. Writing a dissertation or book, on the other hand, is more like running a marathon.

It would be unethical to tell a sprinter who is unable to run a marathon that she should “just run.” She has been running; she’s quite good at it. But the way that she has been running is not working for this new distance. And if she does try to run a marathon the way she has always run short races, she will hurt herself.

My intuition is that people who have trouble writing are people who have always used the sprinting method, and have simply managed to develop the motivational strategies to sprint for longer, or to collapse from time to time during the race and pick themselves up. Often, it seems to me, that motivation relies on panic and negative self-talk—they manage to binge write because otherwise, they tell themselves, they are failures.

So I’m not saying that “just write” is always bad advice. I am saying that it sometimes is; it is sometimes something that can send people into shame spirals. It only works for some people, the ones who do find that polite fiction motivating. For others, though, telling them “just write” is exactly like telling a person having a panic attack to “just calm down” or someone depressed to “just cheer up.”

The “just write” comes from a concern that lack of confidence will paralyze a student. But I think we might be solving the wrong problem.

Part of the problem is the myth of positive thinking, which has taken on an almost magical quality for some people. There is a notion that you should only think positive thoughts, as though thinking negative things brings on bad events. Since thinking clearly about how hard it is to write a book, dissertation, or grant (and, specifically, thinking clearly about how we might have habits or processes that inhibit our success) is thinking about “bad” things, about how things might go wrong or what troubles we might have, the myth of positive thinking says you shouldn’t do it. You should, instead, just imagine success.

This is a myth. It isn’t just a myth, but pernicious, destructive nonsense. A (sometimes secular) descendant of the positive psychology elegantly described by Bowler in Blessed, this is magical thinking pure and simple, and perfectly contrary to what research shows about how positive thinking actually affects motivation.

But here I should be clear. Some people who advocate wishful thinking do so because they believe that the only other possibility is wallowing in self-loathing and a sense that the task is impossible, and they believe that telling students that academic writing is hard will necessarily lead to their believing it is impossible. In other words, there is an assumption that there is a binary between thinking only and entirely about positive outcomes and thinking only and entirely about tragic outcomes. The former is empowering and the latter is paralyzing. That narrative is wrong on all three counts—positive thinking is not necessarily enabling, moments of despair are not necessarily disabling, and our attitude toward our own challenges is not usefully described as a binary between pure optimism and pure despair. Left out of that binary is being hopefully strategic: aware of possible failures, mindful of hurdles, with confidence in our resilience as much as in our talents.

As to the first, studies clearly show that refusing to think negative thoughts about possible outcomes is actively harmful, and frequently impairs achievement. That’s important to remember: telling students they shouldn’t think about their own flaws, the challenges ahead of them, and how things might go wrong is not helping them, and it is making it less likely they will do what they need to do.

Gabriele Oettingen’s considerable research (summarized in the very helpful book Rethinking Positive Thinking) shows that, while wishful thinking can be useful for maintaining hope in a bad situation or identifying long-term goals, it inhibits action. Fantasizing about how wonderful a dissertation or book will be doesn’t inspire us to write either one; for many people, it makes the actual, sometimes gritty work so much more unattractive in comparison that it’s impossible to write. The fantasy is far more fun than writing a crummy first draft. Similarly, Carol Dweck’s research on mindsets shows that success depends on acknowledging what has gone wrong and identifying how one might grow and change to get a different outcome in the future.

A sense that the task is so hard as to be impossible is not inevitably and necessarily disabling. It is, however, inevitable. It is dishonest to tell students that we never feel that what we’re trying to do can’t be done or isn’t worth doing, because so many of us do. And most of us got (and get) through it. Sometimes it took time, therapy, medication, changing things in our personal lives, changing jobs, changing projects, all of the above. But I don’t know any productive scholar free from times of slogging through the slough of despond.

In my experience, academic writing gets easier, but it’s never easy. The hardest writing is probably finishing a dissertation while writing job materials—nothing after that is so hard. But it’s always hard. If we tell students that it’s easy, or that it gets easy, even if we do so with the intention of keeping them moving, we do them a disservice. If they believe us, if they believe that we find it easy, then, when it gets hard, as it necessarily will, they have to conclude that there is something wrong with them. They are unhappily likely to conclude that they have been exposed for the imposter they always worried they were.

The “just write” advice almost certainly works for some people in some situations, as does the “just write every day” or “just freewrite” or “just start with your thesis” or any of the other practices and rules that begin with “just.” They work for someone somewhere, and maybe they work for everyone some of the time, and they always strike me as sensible enough to suggest that people experiment with them. But we shouldn’t pretend that they’re magical and can’t possibly fail, or that someone “just” needs to do them. The perhaps well-intentioned fiction that academic writing “just” requires certain practices is magical thinking, and we need to stop saying it.

In my experience, people who find the “just write” advice useless find it too abstract. So, I think we need to be clear that scholarly productivity is, for most people, hard, and it’s fine that a person finds it hard. And it takes practice, so there are some things a person might “just write”:

  • the methods section;
  • descriptions of an incident, moment in a text, interaction, or some other very, very specific epitome of their problem (Pirsig’s brick in the wall of the opera house);
  • summaries of their secondary materials with a discussion of how each text is and is not sufficient for their research;
  • a collection of data;
  • the threads from one datum to another;
  • a letter to their favorite undergrad teacher about their current research;
  • a description of their anxieties about their project;
  • an imitation of an introduction, abstract, conclusion, or transition paragraph they like, written by a junior scholar.

I’m not presenting that list as a magical solution. It would be odd for me to say that simplistic advice is not helpful and then give a list of the five (or seven, or ten) things we “just” have to do to become (or teach others to become) skilled and productive academic writers. What we have to do is acknowledge that the project requires significant and complicated cognitive changes: that, for most of us, scholarly writing is hard because it’s hard. Let’s be honest about that.

 
 

Rationality, demagoguery, and rhetoric

One of my criticisms of conventional definitions of demagoguery is that they enable us to identify when others are getting suckered by demagoguery, but not when we are. They aren’t useful for seeing our own demagoguery because they emphasize the “irrationality” and bad motives of the demagogues. And both strategies are deeply flawed, and generally circular. Here I’ll discuss a few problems with conventional notions of rationality/irrationality, and later I’ll talk about the problems of motivism.

Definitions of “irrationality” imply a strategy for assessing the rationality of an argument, and many common definitions of “rational” and “irrational” imply methods that are muddled, even actively harmful. Most of our assumptions about what makes an argument “rational” or “irrational” imply strategies that contradict one another. For instance, “rationality” is sometimes used interchangeably with “reasonable” and “logical,” and sometimes used as a larger term that incorporates the logical (a stance is rational if the arguments made for it are logical, or a person is rational if s/he uses logical processes to make decisions). In common usage, many people assume that an argument is rational if you can support it with reasons, whether or not the reasons are logically connected to the claims; thus, to determine if an argument is rational, you look to see if there are reasons. Many people assume that “rational” and “true” are the same, and/or that “rational” arguments are immediately seen as compellingly true, so to judge if an argument is rational, you just have to ask yourself if it seems compellingly true. Of course, that conflation of rational and true means that “rational” is another way of saying “I agree.” Since many people equate “irrational” with “emotional,” it can seem that the way to determine whether an argument is rational is to try to infer whether the person making the argument is emotional, and that’s usually inferred from the number of emotional markers—how many linguistic “boosters” the rhetor uses (words such as “never” or “absolutely”), or verbs of affect (“love,” “hate,” “feel”). Sometimes it’s determined through sheer projection, or through deduction from stereotypes (that sort of person is always emotional, and therefore their arguments are always emotional).

Unhappily, in many argumentation textbooks, it’s not uncommon for a “logical” argument to be characterized as one that appeals to “facts, statistics, and reason”—surface features of a text. Sometimes, though, we use the term “logical” to mean, not an attempt at logic, or a presentation of self as engaged in a logical argument, but a successful attempt—an argument is logical if the claims follow from the premises, the statistics are valid, and the facts are relevant. That usage—how the term is used in argumentation theory—is in direct conflict with the vaguer uses that rely on surface features (“facts, statistics, and reason” or the linguistic features we associate with emotionality). Much of the demagoguery discussed in this book makes appeals to statistics, facts, and data, and much of it is presented without linguistic markers of emotionality, but generally in service of claims that don’t follow, or that appeal to inconsistent premises, or that contradict one another. Thus, for the concept of rationality to be useful for identifying demagoguery, it has to be something other than any of the contradictory ones above—surface features; inferred, projected, or deduced emotionality of the rhetor; presence of reasons; audience agreement with claims.

Following scholars of argumentation, I want to argue for using “rationality” in a relatively straightforward way. Frans van Eemeren and Rob Grootendorst identify ten rules for what they call a rational-critical argument. While useful, those rules can, for purposes of assessing informal and lay arguments, be reduced to four:

1) Whatever the rules for the argument are, they apply equally across interlocutors; so, if a kind of argument is deemed “rational” for the ingroup, then it’s just as “rational” for the outgroup (e.g., if a single personal experience counts as proof of a claim, then a single appeal to personal experience suffices to disprove that claim);

2) The argument appeals to premises and/or definitions consistently, or, to put it in the negative, the claims of an argument don’t contradict each other or appeal to contradictory premises;

3) The responsibilities of argumentation apply equally across interlocutors, so that all parties are responsible for representing one another’s arguments fairly, and for striving to provide internally consistent evidence to support their claims;

4) The issue is up for argument—that is, the people involved are making claims that can be proven wrong, and that they can imagine changing.

Not every discussion has to fit those rules—some topics are not open to disproof, and therefore can’t be discussed this way. And those sorts of discussions can be beneficial, productive, enlightening. But they’re not rational; they’re doing other kinds of work.

In the teaching of writing, it’s not uncommon for “rationality” and “logical” to be compressed into Aristotle’s category of “logos” (with “irrational” and “emotional” getting shoved into his category of “pathos”)—and then very recent notions about logic and emotion are projected onto Aristotle. As is clear even in popular culture, recent ideas assume a binary between logical and emotional, so saying something is an emotional argument is, for us, saying it is not logical. That isn’t what Aristotle meant—for him, it isn’t just that appeals to emotion and appeals to reason can coexist; he didn’t see them as opposed at all. Nor did he mean “facts” as we understand them, and he had no interest in statistics. For Aristotle, ethos, pathos, and logos are always operating together—logos is the content, the argument (the enthymemes); pathos incorporates the ways we try to get people to be convinced; ethos is the person speaking. So, were we to use an Aristotelian approach to an argument, we would look at a set of statistics about child poverty, and the logos would be that poverty has gotten worse (or is worse in certain areas, or for some people—whatever the claims are), the pathos would be how it’s presented (what’s in bold, how it’s laid out, and also that it’s about children), and the ethos is partly situated (what we know about the rhetor prior to the discourse) but also a consequence of the person using statistics (she’s well-informed, she’s done research on this) and of the fact that it’s about children (she is compassionate). For Aristotle, unlike for post-logical positivists, pathos and logos and ethos can’t operate alone.

I think it’s better just to avoid Aristotle’s terms, since they slide into a binary so quickly. More important, they enable people to conflate “a logical argument” (that is, the evaluative claim, that the argument is logical) with “an appeal to logic” (the descriptive claim, that the argument is purporting to be logical).

What this means for teaching

People generally reason syllogistically (that’s Arie Kruglanski’s finding), and so it’s useful for people to learn to identify major premises. I think either Toulmin’s model or Aristotle’s enthymeme works for that purpose, but it is important that people are able to identify unexpressed premises.

Syllogism:

All men are mortal. [universally valid Major Premise]

Socrates is a man. [application of a universally valid premise to specific case: minor premise]

Therefore, Socrates is mortal. [conclusion]

Enthymeme:

Socrates is mortal [conclusion]

because he is a man. [minor premise]

The Major Premise is implied (all men are mortal).

Or, syllogism:

B = C [Major Premise]

A = B [minor premise]

Therefore, A = C. [conclusion]

Enthymeme:

A = C because A = B. This version of the argument implies the Major Premise, B = C.

Chester hates squirrels because Chester is a dog.  

Major Premise (for the argument to be true): All dogs hate squirrels.

Major Premise (for the argument to be probable): Most dogs hate squirrels.

 

Batman is a good movie because it has a lot of action.

Major Premise: Action movies are good.

 

Preserving wilderness in urban areas benefits communities

            because it gives people access to non-urban wildlife.

Major Premise: Access to non-urban wildlife benefits communities.
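Since the pattern above is mechanical, here is a toy sketch (in Python, not part of the original examples, and no substitute for actually thinking through an argument) of what it means to recover the unexpressed Major Premise from an enthymeme of the form “X [conclusion] because X is a [category]”:

```python
def implied_major_premise(category, predicate):
    """Given an enthymeme of the form 'X <predicate> because X is a <category>',
    state the Major Premise the argument leaves unexpressed."""
    return f"All {category}s {predicate}."

# "Chester hates squirrels because Chester is a dog."
print(implied_major_premise("dog", "hate squirrels"))      # All dogs hate squirrels.

# "Batman is a good movie because it has a lot of action."
print(implied_major_premise("action movie", "are good"))   # All action movies are good.
```

The point of the toy is only that the Major Premise is recoverable from the conclusion and the minor premise; whether that premise is true (or even probable) is a separate question, and that is where most of the interesting argument happens.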

Many fallacies come from some glitch in the enthymeme—for instance, non sequitur happens when the conclusion doesn’t follow from the premises.

  • Chester hates squirrels because bunnies are fluffy. (Notice that there are four terms—Chester, hating squirrels, bunnies, and fluffy things.)
  • Squirrels are evil because they aren’t bunnies.

 

Before going on to describe other fallacies, I should emphasize that identifying a fallacy isn’t the end of a conversation, or it doesn’t have to be. It isn’t like a ref making a call—it’s something that can itself be argued about, and that is especially true of the fallacies of relevance. If I make an emotional argument, and you say that’s argumentum ad misericordiam, then a good discussion will probably have us arguing about whether my emotional appeal was relevant.

Appealing to inconsistent premises comes about when you have at least two enthymemes, and their major premises contradict.

For instance, someone might argue: “Dogs are good because they spend all their time trying to gather food” and “Squirrels are evil because they spend all their time trying to gather food.” You’ll rarely see it that explicit—usually the slippage goes unnoticed because you use dyslogistic terms for the outgroup and eulogistic terms for the ingroup: “Dogs are good because they work hard trying to gather food to feed their puppies” and “Squirrels are evil because they spend all their time greedily trying to get to food.”

Another one that comes about because of glitches in the enthymeme is circular reasoning (aka “begging the question”). This is a very common fallacy, but surprisingly difficult for people to recognize. It looks like an argument, but it is really just an assertion of the conclusion over and over in different language. The “evidence” for the conclusion is actually the conclusion in synonyms: “The market is rational because it lets the market determine the value of goods rationally.” “This product is superior because it is the best on the market.”

Genus-species errors (aka over-generalizing, ignoring exceptions, stereotyping) happen when, hidden in the argument (often in the major premise), there is a slip from “one” (or “some”) to “all.” They result from assuming that what is true of a specific thing is true of every member of its genus, or that what is true of the genus is true of every individual member of that genus. “Chester would never do that because he and I are both dogs, and I would never do that.” “Chester hates cats because my dog hates cats.”

Fallacies of relevance

Really, all of the following could be grouped under red herring, which consists of dragging something so stinky across the trail of an argument that people take the wrong track. Also called “shifting the stasis,” it’s trying to distract from what is really at stake between two people to something else—usually inflammatory, but sometimes simply easier ground for the person engaged in red herring. Sometimes it arises because one of the interlocutors sees everything in one set of terms—if you disagree with them, and they take the disagreement personally, they might drag in the red herring of whether they are a good person, simply because that’s what they think all arguments are about.

Ad personam (sometimes distinguished from ad hominem) is an irrelevant attack on the identity of an interlocutor. It generally involves some kind of name-calling, usually of such an inflammatory nature that the person must respond (calling a person an abolitionist in the 1830s, a communist in the 1950s and 60s, or a liberal now). It’s really a kind of red herring, as it’s generally irrelevant to the question at hand, and is an attempt to distract the attention of the audience. Not all “attacks” on a person or their character are fallacious, though. Accusing someone of being dishonest, or of making a bad argument, or of engaging in fallacies, is not ad hominem because it’s attacking their argument. Even attacking the person (“you are a liar”) is not fallacious if it’s relevant.

Ad verecundiam is the term for a fallacious appeal to authority. In general, it’s a fallacy because the authority appealed to isn’t relevant—there’s nothing inherently fallacious about appealing to authority, but having a good conversation might mean that the relevance of the authority or expertise now has to become the stasis. Bandwagon appeal is a kind of fallacious appeal to authority—it isn’t fallacious to appeal to popularity if it is a question in which popular appeal is a relevant kind of authority.

Ad misericordiam is the term for an irrelevant appeal to emotion, such as saying you should vote for me because I have the most adorable dogs (even though I really do). Emotions are always part of reasoning, so merely appealing to emotions is not fallacious; the appeal becomes fallacious when the emotion invoked is irrelevant to the question at hand.

Scare tactics (aka apocalyptic language) is a fallacy if the scary outcome is irrelevant, unlikely, or inevitable regardless of the actions. For instance, if I say you should vote for me and then give you a terrifying description of how our sun will someday go supernova, that’s scare tactics (unless I’m claiming I’m going to prevent that outcome somehow).

Straw man is dumbing down the opposition’s argument; because the rhetor is now responding to arguments their opponent never made, most of what they have to say is irrelevant. People engage in this one unintentionally through not listening, through projection, and through a fairly interesting process: we have a tendency to homogenize the outgroup and assume that its members are all the same. So, if you say “Little dogs aren’t so bad,” and I once heard a squirrel lover praise little dogs, I might decide you’re a squirrel lover. Or, more seriously, if I believe that anyone who disagrees with me about gun ownership and sales wants to ban all guns, then I might respond to your argument about requiring gun safes with something about the government kicking through our doors and taking all of our guns (an example of slippery slope).

Tu quoque is usually (but not always) a kind of red herring; sometimes it’s the fallacy of false equivalency (what George Orwell called the notion that half a loaf is no better than none). One argues, “you did it too!” While it’s occasionally relevant, as it can point to a hypocrisy or inconsistency in one’s opposition, and might be the beginning of a conversation about inconsistent appeals to premises, it’s fallacious when it’s irrelevant. For instance, if you ask me not to leave dirty socks on the coffee table, and I say, “But you like squirrels!” I’ve tried to shift the stasis. It can also involve my responding with something that isn’t equivalent, as when I try to defend myself against a charge of embezzling a million dollars by pointing out that my opponent didn’t try to give back extra change from a vending machine.

 

False dilemma (aka false binary, either/or) occurs when a rhetor sets out a limited number of options, forcing the audience’s hand by leaving them only the option the rhetor wants. Were all the options laid out, the situation would be more complicated, and his or her proposal might not look so good. It’s often an instance of scare tactics because the other option is typically a disaster (we either fight in Vietnam, or we’ll be fighting the communists on the beaches of California). It is straw man when it’s achieved by dumbing down the opponent’s proposal.

Misuse of statistics is self-explanatory. Statistical analysis is far more complicated than one might guess, given common uses of statistics, and there are certain traps into which people often fall. One common one is the deceptively large number. The number of people killed every year by sharks looks huge, until you consider the number of people who swim in shark-infested waters every year, or compare it to the number of people killed yearly by bee stings. Another common one is to shift the basis of comparison, such as comparing the number of people killed by sharks for the last ten years with the number killed by car crashes in the last five minutes. (With some fallacies, it’s possible to think that there was a mistake involved rather than deliberate misdirection; with this one, that’s a pretty hard claim to make.) People often get brain-freeze when they try to deal with percentages, and make all sorts of mistakes—if the GNP goes from one million to five hundred thousand one year, that’s a fifty per cent drop; if it goes back up to one million the next year, that is not, however, a fifty per cent increase.
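To make the base-shift problem concrete, here is a minimal sketch (in Python, using the made-up GNP figures from the example above) of why the drop and the rebound are not the same percentage:

```python
def percent_change(old, new):
    """Percent change measured against the starting (old) value."""
    return (new - old) / old * 100

# Year 1: GNP falls from 1,000,000 to 500,000.
print(percent_change(1_000_000, 500_000))    # -50.0 : a fifty per cent drop

# Year 2: GNP climbs back from 500,000 to 1,000,000.
print(percent_change(500_000, 1_000_000))    # 100.0 : a one hundred per cent increase,
                                             # because the base has shifted to 500,000
```

The same absolute change of 500,000 is fifty per cent of the first base and one hundred per cent of the second, which is exactly the kind of slip the paragraph above describes.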

The post hoc ergo propter hoc fallacy (aka confusing causation and correlation) is especially common in the use of social science research in policy arguments. If two things are correlated (that is, exist together) that does not necessarily mean that one can be certain which one caused the other, or whether they were both caused by something else. It generally arises in situations when people have failed to have a “control” group in a study. So, for instance, people used to spend huge amounts of money on orthopedic shoes for kids because the shoes correlated with various foot problems’ improving. When a study was finally done that involved a control group, it turned out that it was simply time that was causing the improvement; the shoes were useless.
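The orthopedic-shoes example can be made concrete with a small simulation. This is a hypothetical sketch (in Python, with invented numbers, not real data) in which improvement is caused entirely by time; without a control group, the “treatment” looks like it works:

```python
import random

random.seed(0)

def simulate_child(wears_shoes):
    """Severity of foot problems before and after a year.
    In this invented model the shoes have no effect at all;
    improvement comes from time alone."""
    before = random.uniform(5, 10)      # severity at the start
    improvement = random.uniform(2, 4)  # improvement due simply to time
    return before, before - improvement

def average(pairs, index):
    return sum(p[index] for p in pairs) / len(pairs)

treated = [simulate_child(True) for _ in range(1000)]     # kids given the shoes
controls = [simulate_child(False) for _ in range(1000)]   # kids without the shoes

# Looking only at the treated group, the shoes seem to "work":
print("with shoes:   ", average(treated, 0), "->", average(treated, 1))

# The control group improves just as much, so the improvement is
# correlated with the shoes but not caused by them.
print("without shoes:", average(controls, 0), "->", average(controls, 1))
```

The shoes correlate with improvement only because everyone improves; the control group is what exposes the post hoc mistake.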

 

Some lists of fallacies have hundreds of entries, and subtle distinctions can matter in particular circumstances (for instance, the prosecutor’s fallacy is really useful in classes about statistics), but the above are the ones that seem to be the most useful.