John Muir, the Hetch Hetchy Valley, and a bird: or, how I’ve spent the last forty years, and will spend the next as-long-as-I’ve-got

Great Blue Heron

One spring, when I was a child, my family went to Yosemite Valley in Yosemite National Park. My family mostly tried (and failed) to teach one another bridge, and I wandered around the emerald valley. Having grown up in semi-arid southern California, I found the forested walks magical, and I was enchanted. One evening, my mother took me to a campfire, hosted by a ranger, who told the story of John Muir, a California environmentalist crucial in the preservation of Yosemite National Park. The last part of the ranger’s talk was about Muir’s last political endeavor, his unsuccessful attempt to prevent the damming and flooding of the Hetch Hetchy Valley, a valley the ranger said was as beautiful as the one by which I had been entranced. The ranger presented the story as a dramatic tragedy of Good (John Muir) versus Evil (the people who wanted to dam and flood the valley), with Evil winning and Muir dying of a broken heart. I was deeply moved.

I’d like to say this story so moved me that I became active in environmentalism, but that wouldn’t really be true—I could distinguish a pigeon from a seagull, and that was about it. Muir’s story did, however, stick with me as an odd story about rhetoric. How could someone who, according to the ranger, had been so persuasive and moving on so many points—preserving Yosemite Valley, creating the national park system, valuing the High Sierras, starting the Sierra Club—have failed to persuade people on the one point that the ranger presented as so starkly simple? Why do people with the better cases so often lose arguments? And later it came back to me.

I went to Berkeley for my undergraduate degree, and became entranced again; this time by rhetoric.

The Berkeley rhetoric department emphasized the teaching of persuasive argumentation, something which must be distinguished from what many people experience as argument. I don’t want to get into the ways that was both right and wrong, so much as point out that it taught that rhetoric is always relational, and the kind of rhetoric we teach and practice signifies, models, and reinforces the kind of relationship we have with our interlocutors. Thus, a definition of rhetoric—whether we define rhetoric as getting others to do what we want or the ability to understand disagreements—is not just a theory of discourse, how we communicate to someone else, but a theory of community. A limited conception of rhetoric leads to a limited way of interacting with others, and the limited success we get from that interaction confirms our sense that rhetoric is limited.

Until I went to college, whenever I had been taught argumentation I had been required to have a confrontational thesis which was stated in the beginning (usually after a funnel introduction), and which was supported by three reasons (which were themselves stated at the beginning of each paragraph and before any evidence). Each “proof” paragraph had one piece of evidence to support its point. In the penultimate paragraph, I was expected to summarize and then contradict (or concede, but declare as trivial) some opposition argument. The conclusion would restate my thesis, and typically end with some rousing generalizations.

It is difficult to describe how frustrating I found this form. I certainly found it unpersuasive. That isn’t to say I’d never changed my mind; even in high school I was well aware that people did change their minds, but the texts that I’d found persuasive never followed this narrow structure. For one thing, the texts that changed my mind on things were often narratives—whether a fictional narrative like Nathaniel Hawthorne’s The Scarlet Letter that made me think differently about the role of gossip and identity, or a non-fiction narrative like Hannah Arendt’s Eichmann in Jerusalem that made me think differently about loyalty and duty.

It especially bothered me that the writers whom we were taught to admire and told to emulate—such as Martin Luther King, Jr., George Orwell, Virginia Woolf—did not write the way that we were required to write (thus making the recommendations to admire and emulate them more than a little confusing). On the contrary, their conclusions tended to come after their evidence, they tended to summarize their opposition early (often as early as the introduction), and their theses were generally at the end of their texts, assuming they even had a thesis stated explicitly in the text at all. The experience of reading them was completely different from reading something written in the “tell ’em what you’re gonna tell ’em, tell ’em, then tell ’em what you told ’em” form, which always felt to me as though I was sitting in a small chair being yelled at, while reading people like King made me feel that I was walking along with an author who was pointing things out along the way.

Although we were presented with King, Orwell, and Woolf as rhetors to be admired, and told to emulate, we were graded down if we did. In other words, the explicit rules for good rhetoric—what my teachers said I had to do—were wildly at odds with the implicit rules for good rhetoric—what the ideal writing actually did. Thus, the teachers’ explicit instructions—write this way, and write like these authors—were actually in conflict.

This conflict within our explicit instructions for students—that we give them rules that are actually contradictory—is not particular to my teachers; it is a problem throughout the history of teaching writing. The contradiction comes about, I’ll suggest, from universalizing about rhetorical strategies and relations, and from the number of concepts muddled in the term “effective.” This is another one of the themes to which I will return: what do we praise in rhetoric, what is effective in rhetoric, what do we say people should do, and how are those three at odds with each other?

At Berkeley, in the rhetoric classes, there was not as much conflict between the explicit and implicit rules for writing. The papers we wrote were supposed to be written for an intelligent and informed opposition (not at or about them) and were supposed to be structured in a movement from what we had identified as common ground with that opposition through our evidence to the conclusion.

But this kind of writing is hard, and, one day, tired and frustrated with a paper assignment, I found myself walking by a coastal lagoon in an area far to the north of Berkeley. I had driven along the Northern California coast for days, and parked near the marshy water in order to give myself a chance to wander. But as I moved through the high grass, I startled something that took off with a surprising splash and whoof of sweeping wings. It was an elegant blue-grey bird the color of the sky on an overcast day, with a neck that seemed to me as long and majestic as a flamingo’s. What impressed me most was the grace, beauty, and power of the sweeping stroke of its wings as it flew over me and out of sight. I discovered I had been holding my breath.

I had never before been much impressed by birds.

I tried to find some places closer to Berkeley where I might watch birds like these. With high hopes, I went to a place called “Shorebird Park” only to discover that it had a neatly mown lawn, picnic tables, and dogs. While it was a friendly and inviting place for people, even able to accommodate large groups at picnic areas, it was useless for most shorebirds. The carefully tended lawns and rampaging dogs precluded any nesting habitat for birds; the ubiquitous garbage attracted seagulls, who chased away any other species. I didn’t go back there. After some exploring, I discovered a marsh near a freeway, and another at the end of an access road near the airport, each of which provided habitat for egrets, red-winged blackbirds, avocets, and stilts. There was something charming in watching the different birds—the way the avocets skittered, the red-winged blackbirds flashed a ruby spot when they flew, the egrets endlessly looked gracefully ungainly. I was disturbed to discover that both of these marshy habitats were proposed for development.

I decided to try to use my experience seeing the grey bird (called a Great Blue Heron) as the common ground in order to move my audience toward the conclusion that the marshes I had visited and others like them should not all be turned into hotels or industrial parks, nor made into parks as sanitized and bereft of wildlife as the unintentionally ironically named “Shorebird Park.” Instead, at least some should remain wilderness areas in the middle of an urban environment so that everyone could have the breath-taking experience I did of seeing a Great Blue Heron.

I began with a description of seeing the heron, and then moved to bemoaning the tragedy of people in the city not having access to wildlife areas close to home. My instructor characterized the resulting paper as “an impressive effort, but unsuccessful” because it would not persuade an intelligent and informed opposition audience. That is, my common ground was not shared with my opposition (who were unlikely to see the flight of the heron as terribly important), and I had not really effectively incorporated or answered the kinds of concerns they were likely to have (such as the potential economic benefits of developing wetlands). Most important for the instructor was that the logic of my argument (that preserving wilderness in urban areas would benefit people because it would provide them with opportunities to see a wider variety of wildlife) was subtly circular.

There is a lot of disagreement in rhetoric as to what we should call that kind of discourse, and it is often called “epideictic,” from Aristotle’s tripartite division of rhetoric. I have not found Aristotle’s taxonomy very useful, for various reasons. Here I simply want to mention that this kind of rhetoric—rhetoric that looks as though it is persuading an opposition, but is actually confirming those who already agree—can happen anywhere: in political assemblies, schools, public areas, books, movies.

There are advantages to this kind of rhetoric, but one problem with it is that we don’t always recognize it when we see it. That is, we often use the word “persuasive” to mean “I like it” and describe a text as “persuasive” or “effective” when we mean that it confirmed beliefs we already have, rather than that it changed our views. This will be, perhaps, the most consistent theme in these lectures—the tremendous difficulty we have in describing the impact of rhetoric, whether of individual texts, sets of texts, or even a whole realm of texts.
For instance, in regard to the paper about the birds, I had shared the paper with fellow tutors at the Writing Center, with friends, with classmates, and with just about anyone I could persuade to read it, and all had praised it highly. It had seemed persuasive to them.

The teacher was right, of course—I hope that is clear—but none of us could see what she did because we, granting the premise that experiencing non-urban wildlife is valuable, could not imagine anyone not granting it. We could not shift our perspective to someone who disagreed.

Rhetoric, then, is a cognitive process, a way of thinking.

Or, at least, persuasive rhetoric is.

So, at this point, the question for me became whether I could find an enthymeme that would work with people who did not value the environment, and that led me back to John Muir and the Hetch Hetchy debate. Was there something he could or should have done that would have produced a different outcome? Could Muir, a man whose writing many still find persuasive, have found a rhetorical strategy that would have worked with his audience? Was Muir’s failure to prevent the damming and flooding of the Hetch Hetchy Valley a rhetorical failure? Is there something he should have done?

I decided I would write my senior thesis on this topic, thinking I could figure out what he should have done. I didn’t. So I decided I would get an MA, and figure it out. Then I thought I needed a PhD to solve this problem, so I decided to get one. (But I wasn’t going to be a professor.) And what I found was that, when people disagree about the environment, it’s because we disagree about God. So, how do you disagree productively about policies that affect a lot of us when you don’t share premises? I spent 40 years working on that problem.

I intend to spend my retirement working on it. I’ll get it this time.

And it all goes back to John Muir, a ranger who knew how to tell a story, and the way my soul still sings when I see a Great Blue Heron.


Grammar Nazis and deflected/projected racism

marked up draft

My mother, who was very racist but sincerely believed herself to be not racist, said that she was not personally opposed to intermarriage, but she was opposed to it, on the grounds that it was so hard on the children. In other words, she supported a racist practice (social shaming of “intermarriage”) while still feeling herself not racist because she could tell herself that her racist practice was necessitated by the racism of other people.

Teachers—all teachers, at every level—are far too often my mother. We teach in a racist way, all the while claiming that we, personally, aren’t racist, but our racist practice is necessary because of the racism of others. We do it when it comes to teaching “standard Edited American English” (a particular dialect) as though it is better than other dialects.

English has a lot of different dialects, and many of those dialects are grammatically different. Standard Edited American English (SEAE), for instance (a dialect no one speaks), prohibits the comma splice (The cats ran, the dogs barked), but Standard Edited British English doesn’t. In spoken English, sentence fragments are fine, and they are also fine in much published writing (depending on formality), but generally prohibited in very formal writing (except in resumes or CVs, where they are required). It would be inappropriate for someone to use full sentences in a resume, and therefore equally inappropriate for someone to mark a resume as “wrong” for using sentence fragments. Sentence fragments aren’t therefore “worse” than complete sentences—they’re appropriate or not; that’s how language works.

However, in any language there are dialects that are stigmatized for racist, classist, historical, or various other bigoted reasons. They’re stigmatized as “bad” English (or French, or German, or whatever). In American English, one use of the double negative is stigmatized and the other accepted because one is associated with Black English. “She don’t know nuthin’ about nuthin’” is a perfectly clear sentence, but “The argument is not unclear” takes math to understand. Yet, it’s the first that gets called “bad English.” (Which is funny, if you think about it–calling something “bad English” is itself an instance of using the wrong term, so it’s “worse” English than a double negative.)

So, it’s important to separate out two kinds of grammatical errors: violations of a dialect from within that dialect (such as someone trying to write SEAE who violates rules of that dialect, or the muddled Black English of The Help), and usages that are correct within one dialect but not accepted in the dialect a reader is expecting. (A third category would be uses of language that aren’t grammatically incorrect at all, but people think they are: ending a sentence with a preposition, for instance.)

Here’s what I mean by the second kind of error. It would be bizarre for someone to chastise someone speaking German for ending a sentence with a preposition—that’s how German works. (It’s also how English works, but that’s a different post.) It would also be sheer bigotry to say that French is better than German because French doesn’t allow ending sentences with prepositions. Dialects and languages are all equally good at communicating; none is better than another.

I’ll mention something about the first toward the end of this post, but, for the most part, I want to focus on what we do about stigmatized dialects. The problem is this: since, for instance, Black English is stigmatized, and Standard Edited American English is rewarded, should teachers require that their students learn Standard Edited American English?

The advice for years (ever since the National Council of Teachers of English and the Conference on College Composition and Communication issued the “Students’ Right to Their Own Language” statement) has been to advocate code-switching. To say that a student should know SEAE because it’s useful, not because it’s better, is like saying that it’s useful to know French if you intend to live and work in France. From within this model, German is no better than French (nor is French better than German), and a student might be speaking perfect German in a French class. A person shouldn’t give up German, but add on the knowledge of French. Students should learn SEAE as an additional dialect that is useful under some circumstances.[1]

Unfortunately, too many teachers and professors and employers and people in power use the language of code-switching in order to enforce the message that Black English is inferior.

A few years ago I found myself in an argument on the internet with a white teacher in a predominantly African American school who banned Black English in her classes. She was proud that she told her students that Black English would hold them back. She wasn’t racist, she insisted; she was helping them. There’s what might seem like a subtle difference between what she was doing and what “Students’ Right to Their Own Language” advocates, but it’s an important one. She was clear that SEAE was better than Black English, that Black English was something they should be shamed for using. I then noticed that I often had the same problem when training people in the teaching of writing—they made a bigger deal about perfectly clear uses of stigmatized language than they did about grammar problems that interfered with communication. They did so, they said, because other people would be racist.

It’s my mother opposing “intermarriage” because other people would be racist. That’s racist.

Granted, we’re in a racist world, and using a stigmatized dialect will hurt a person in terms of job or housing applications, getting good scores on standardized tests, or dealing with racist teachers who deflect their racism onto others who might be racist. So, I understand, and still support, the idea that we should teach code-switching, but only if we give students the ability to choose whether they want to learn to code-switch, we make it absolutely clear that no dialect is better than another, and we make a bigger deal about violations of grammar and usage within (rather than across) dialects. I don’t know that we can do the second, and if that’s the case, then teaching code-switching is racist.

I mentioned that violations within a dialect are worth looking at carefully, largely because they can signal issues with thinking. For instance, mixing metaphors can indicate that we haven’t decided on the underlying model, or that we’re appealing to troubling models, or that we just aren’t thinking. I once heard a facilitator say, “We’re on a fast train flying out of the box.” She was describing a train wreck, as far as I could tell, but I think she meant it as a good thing. I don’t know. Had she said, “We ain’t done nothin’ about nothin’” I could have understood her perfectly.

Unclear pronoun reference can mean we haven’t really decided how causality works. For instance, if I say, “There are bunnies eating kale in the backyard, which is weird,” it isn’t clear whether the weird part is that there are bunnies, that they’re eating kale, or that they’re doing it in the backyard. In other words, it isn’t clear what “which” is referring to. What’s interesting to me about these sorts of errors (predication error or mixed construction is another one along these lines) is that “correcting” the error means first figuring out what I’m trying to say. These are interesting and significant errors.

Whenever I get into this topic (or when it comes up even on scholarly mailing lists), people advocating my position (the position of most if not all linguists, btw) get accused of thinking that anything goes, and that we shouldn’t care about clarity or correctness of any kind. That isn’t what I’m saying. I’m making four points. First, no dialect is better than any other (though one might be more or less useful, appropriate, or effective under certain circumstances). Second, what grammar Nazis worry about are often not “grammar” issues at all (but style preferences, hypercorrectness, misunderstandings of rules, misapplications of rules), and are almost always not issues of clarity, but are class or race markers (e.g., comma splices, double negatives, subject-verb agreement, ending with a preposition). Third, we should worry about certain issues of usage, but it should be the ones that are violations within a dialect, especially ones that signal muddled thinking. Fourth, the conventional wisdom among experts for years has been that we should teach code-switching (that is, the ability to switch between dialects), but that’s still racist unless we do so in a way that makes it clear that we aren’t privileging one dialect over another, and we offer it as a choice to students.


[1] Another way to put this is to say that prescriptivism is perfectly fine, as long as it’s taught qua prescriptivism.

Holding out for a Hero: The Far-Right Canonization of Kyle Rittenhouse

Guest post by Jim Roberts-Miller

Painting of St. Michael

On Tuesday, August 25, Kyle Rittenhouse drove from his home in Illinois to Kenosha, Wisconsin. Kenosha was roiled by protests over the police shooting of Jacob Blake. That night, Rittenhouse shot three people, killing two.

He is becoming a folk hero on the racist right. And not just on Twitter (which, as is often correctly pointed out, isn’t real life). As of this writing, at least two right wing pundits, Tucker Carlson and Ann Coulter, have come out decidedly in favor of the shooter, claiming he was acting in self-defense and casting his defense of private property as superior to the “lawlessness” of the protests.
The racist right is in desperate need of a hero after a summer of protest in which their usual tricks of attacking the victim, sympathizing with the tough job of police, and exaggerating the usually mild property damage that often comes with angry protests were, for various reasons, simply not working.

Despite their standard calls for “civility” and their so-called support for “peaceful protests, not violence,” support for Black Lives Matter not only held steady, but actually went up. Corporations felt the need to make explicit their support of the BLM movement. Confederate monuments which had survived earlier protests were removed, in some cases overnight. City governments began looking to lower police budgets, shifting that money elsewhere. In at least a few cases, city governments have actually done this. The right’s normal paladin, Donald Trump, seemed not only unable to move the rest of America with his typically harsh rhetoric, but watched as his popularity went down and the electoral lead of his opponent Joe Biden climbed into the double digits, at least partially thanks to Trump’s ham-fisted efforts to violently put down what were seen as legitimate protests, walling off the White House and using tear gas to disperse protestors so he could hold a Bible upside down outside of a church.

A vast, if incomplete and imperfect, reckoning with the structures of white supremacy began to percolate through American society. The hysteria with which this was met on the right is extreme.

On the internet, you are never more than two or three clicks away from a racist right wing alternate universe of (black and brown) wild-eyed leftists bent on burning down the suburbs and replacing the (white) social structures of peaceful law-abiding (white) Americans with their (black and brown) socialist agenda for robbing the productive so they can live off welfare. And in this universe, the fear, confusion, and anger over the failure of the rest of decent (white) society to get angry over the lawlessness and disrespect being shown to the normal (white) power structures was palpable.

But the racist right has only one play. And that is to keep pushing the narrative that the protestors are not only misguided and wrong, but that they are (black and brown) violent and greedy and actively coming for you (decent white person). They pushed that narrative with the Portland protests, but it wasn’t working out. Kenosha was another chance.

And then Rittenhouse, who broke several laws just by being present in Kenosha with an AR-15 (thus proving the racist right’s problem isn’t really lawlessness, but who is breaking the law), shot three people. Consciously or not, the racist right realized that they could not allow Rittenhouse’s crimes to hijack the news the way the murder of Heather Heyer did in Charlottesville. To do so would once again wreck their narrative of leftist (black and brown) violence endangering good hard-working (white) folks. And so he couldn’t be written off as an aberration, or someone who made a mistake.

No. Rittenhouse had to be a hero. A young man who idolized the police and law and order, and who selflessly came to Kenosha to protect the property and society of ordinary (white) people from a ravening (black and brown) mob. That is their story. That is their desperate need. For them Rittenhouse is a hero, a martyr, a (white) man literally pursued by a mob who, in his extremity, was forced to kill to defend himself. And that is the story they will be pushing at all costs, because it is all they have, and they have to get enough (white) people to condone the violence needed to put the mob (black and brown people) back in their place and re-elect Donald Trump.

You must not let them do this. Rittenhouse deliberately chose to break several laws to go to Kenosha Wisconsin, gun in hand, expecting to shoot people. He got his wish. This is not heroism.

So someone said, “Check your privilege”

people arguing
From the cover of Wayne Booth’s _Modern Dogma_

It seems to me that white males get more upset about being told to “check your privilege” than do women or POC. (And, yes, POC do sometimes get told to check their privilege because privilege is complicated—Ijeoma Oluo has a nice chapter on checking her own privilege.) “Check your privilege” is upsetting, I’ve been told, because the people told it understand themselves to have been told that their opinion is irrelevant purely because of who they are.

And I think women and POC have had that—being told our opinion is worthless because of who we are—happen so often that it’s nothing new. If anything, being told that my opinion is invalid because I’m speaking from such a place of privilege that my view is distorted is a much more valid reason than many others I’ve been given over the years. (My favorite remains the time that a man shouted at me that, because I’m a woman, I couldn’t possibly understand logic.) After all, there are ways in which my coming from a place of privilege does make my opinion worth less (and sometimes worthless).

For instance, when I went to graduate school, it wasn’t possible—let alone necessary—to buy a personal computer, tuition was low, and housing close to campus was available and affordable. Therefore, although the stipend was low, it was possible to make it through the program with very little debt. Since I came from the kind of family that paid for my undergraduate education, I started graduate school with no debt at all. That I was so privileged means that any advice I might now give to students considering graduate school is worth less than the advice of someone closer to them in experience.

I give a lot of advice about writing, and, although I try to incorporate advice that others with different experiences have given, ultimately, what I say is going to be from my perspective. And my perspective is shaped by the advantages I have and have had (such as low or nonexistent debt), and therefore it won’t be good advice for some people. They should ignore my advice.

If you tell me to check my privilege, you’re telling me that you think I’ve forgotten my epistemic limitations. You think my privilege means that my advice or judgment isn’t valid, or, at least, much more limited than I seem to realize.

What people who get defensive when told to check their privilege don’t understand is that your saying “Check your privilege” to me isn’t changing our relationship. You’re just naming it. It’s just a verbalized eyeroll. If you hadn’t said it, you would still have thought it.

So, the best response is to ask for clarification. In the days before people said, “Check your privilege,” there were other ways of making the same point: “You’re just saying that because you’re….” “I think you’re forgetting about…” “From my perspective…” “Someone from [this background] would look at it really differently…” and so on. And I think we’ve all had someone point out that our advice or judgment really was seriously limited by not having thought about it from another perspective. And it was useful.

It’s particularly hard to see how our perspective is limited by privilege because power comes into play. When I had people from prestigious and well-funded institutions give me career advice that was seriously limited by their privilege, it was hard for me to say, “Yeah, that won’t work for me” because they were powerful, and I needed their support. I didn’t say anything. But neither did I try to follow their advice because it didn’t make any sense—I didn’t have a TA to do my grading, a research assistant to help with clerical work, an administrative assistant to help with program administration. They hadn’t thought through how their advice was coming from a place of privilege, and was useless for someone like me.

This isn’t to say that someone who says, “Check your privilege” is always right. Sometimes people have a lot less privilege than it might appear, sometimes we’ve misunderstood how power works in a particular setting, sometimes people misunderstand what privilege means. Sometimes when people say, “Check your privilege” they want to talk about it, and they’re willing to explain in more detail. But sometimes they don’t want to, and that’s fine too. Almost always, it will take some time to think about whether and how privilege may have affected our judgment and what we should do about it.

Socially acceptable racism; Or, how “new” racism isn’t new

books about demagoguery

A lot of people make the point that there was a kind of racism—called “old” racism—that was openly biological/genetic, and openly hostile. Then, at a certain point, racist discourse shifted to become more genteel. That distinction between old and new racism isn’t entirely accurate, and the way it’s inaccurate is important. There have always been “genteel” racisms—what might be called “racism with a smile” or “some of my closest friends are…” racism. And those “nice” (that is, socially acceptable) racisms enable the kinds that openly advocate violence, expulsion, and extermination.

In this post, I want to talk about one of them—one that was tremendously popular in the twentieth century. This view accepted that there were “races,” that they were essentially (even genetically) different, that these differences manifest themselves in external characteristics (looks, behavior, cultural practices), but that all of these differences add to the richness of human life. This kind of racism celebrated the essential differences of human races. (Sort of. I’ll get to that.) People advocating this kind of racism often explicitly set themselves off from a similarly biological racism (they weren’t racist) on the grounds that they weren’t that bad.

Take, for instance, Dorothy Sayers, the mystery novelist. In Whose Body? (1923), the villain kills a perfectly nice Jew out of spite, a spite shot through with a non-trivial amount of antisemitism. The hero expresses no antisemitism, not even when his friend indicates a desire to marry into a Jewish family, and the narrator has nothing negative to say about the victim or his family. In fact, everything we hear about the victim and his family appears positive. He is very good at playing the stock market and therefore wealthy, but not showy in his wealth (for instance, because he doesn’t have a chauffeur, he travels alone to the meeting the murderer has set up). He dotes on his wife and daughter, and is a good family man. He is kind to people.

This all appears positive—he’s smart, successful, modest, and a family man. This characterization is, however, simply the “positive” side of the same coin as rabidly antisemitic rhetoric. For rabidly antisemitic groups, Jews are parasitic capitalists, money-grubbing, cheap, and tribal (“clannish” is the word sometimes used), and kindness becomes “pacifism” or “cowardice.”

Antisemitic rhetoric in groups like the Nazis stuck close to the producer/parasite dichotomy that runs back through readings of Paul’s prohibition of usury. Chip Berlet and Matthew Lyons have a useful description of how that dichotomy plays into toxic populism. The short version is that toxic populism presents one group as producers and another as parasites—or, in Paul Ryan’s more recent rhetoric, “makers” and “takers.” The in-group is always the makers. For many populists, people who make money off of money—financiers, people who play the stock market—haven’t really created wealth (which, in this view, comes from things like owning land). They’re parasites.

Nazis were populists (authoritarians almost always are, even though their policies actually screw over most of the populace, and especially the middle and lower classes). The notion that Jews were always financiers and stock market geniuses (and bankers) was one of the most important aspects of Nazi antisemitic propaganda. It’s a theme in Mein Kampf, fercryinoutloud. Real money, so this argument goes, comes from agriculture, or perhaps small manufacturing. Being good at the stock market, for Nazis, is a smear.

Similarly, the negative stereotype of Jews was that they can never really be patriots, because they always favor their family rather than their country (for Hitler, an “Aryan” putting his family first is putting the country first). And the stereotype of Jews as cheap was another piece of antisemitic rhetoric. In other words, Sayers, even if her portrayal of a Jew appeared sympathetic (i.e., she was trying to be “nice”), reinforced exactly the stereotypes that resulted in the Holocaust: Jews are good at finance (capitalist parasites), modest (miserly), family lovers (clannish), non-violent (pacifists and cowards). It was racism with a smile.

She was far from alone. After Wyndham Lewis’s enthusiastic paean to Hitler (1931) didn’t go over as well as he’d expected, and his insistence that Hitler was “a man of peace” showed him to have been very wrong, he tried to get back into the good graces of the public with The Jews, Are They Human? (1939). His answer is that they have their own virtues—they’re very loyal to one another and family-loving (clannish), careful with money (greedy and miserly), and so on. Like Sayers, he put it in positive terms, but he was still endorsing the notion that Jews have an essential set of characteristics.

Lewis took Hitler’s claims of wanting world peace at face value, but it’s interesting that he didn’t take Nazi antisemitism at face value. I think it’s because he didn’t really object to it all that much. Lewis and the Nazis didn’t disagree as to the basic character of Jews; they just disagreed as to what should be done about it. So, for Lewis, Hitler’s antisemitism wasn’t especially notable—it was something he could dismiss as a little bit of an overreaction.

What has been a little surprising to me in working on demagoguery, especially when it leads to extreme policies about the cultural out-group, is the number of people who consider themselves “moderate” who endorse the basic narrative behind the demagoguery about the out-group. They just don’t think it should be taken too far.

Germans who agreed that there should be a quota for Jewish doctors, Americans who agreed that integrated schools were just a little too much, Brits who wouldn’t want their daughter to marry one—they could all see themselves as “not racist” (or, at least, not unreasonable in their attitudes toward Those People) because there was some other group less nuanced, less reasonable in their hostility. And, when push came to shove, they might raise an eyebrow at the people who did go “too far,” or perhaps mutter some criticism, but that’s about it. They were often allies, and rarely enemies, of the people who went “too far.”

Thus, that we now have people who say “I’m not racist, but…” isn’t a sign that there is a new kind of racism. It’s an old form, and a very damaging one.

The weird place of expertise in our culture of demagoguery



While I was working on demagoguery, I was continually puzzled by the problem of anti-intellectualism. The problem matters because, too often, we characterize demagoguery in ways that ensure we would never recognize it when we’re the ones getting suckered by it. We tell ourselves that demagogues are fraudulent, dishonest, and manipulative, but that our leaders and pundits are sincere, truthful, and authentic. Sure, they have to lie sometimes, but they aren’t lying out of dishonesty—it’s out of sincere concern, it’s necessary, and they’re basically truthful. Supporters of even the most notorious demagogues believed that they weren’t supporting demagoguery, because they believed that Hitler, Theodore Bilbo, Fidel Castro, Joseph McCarthy, and Cleon were sincere, truthful, and authentic.

In general, I think it makes more sense to emphasize the culture of demagoguery, since the people we identify as demagogues were only able to come to power because the culture rewards demagoguery.

Demagoguery says that we don’t really face complicated questions of policy deliberation—questions argued in a community of divergent and conflicting values, goals, and needs, about issues that don’t have perfect answers. It says that things just look complicated—they’re actually very simple. We just have to commit to the obvious solution; that is, the solution that is obvious to our side.

That insistence on the solution being obvious, on disagreement and deliberation as unmanly dithering, can look like anti-intellectualism since it means the rejection of the kind of nuance and uncertainty generally considered central to science or research. But I’m not sure it’s useful to call it anti-intellectualism, since people rarely think of themselves as anti-intellectual. Like emphasizing the honesty/dishonesty of demagogues, talking about the anti-intellectualism of demagoguery means we won’t identify our own demagoguery.

It’s true that demagoguery often relies on rejecting experts as “eggheads” or, in Limbaugh’s phrase, “the liberal elite.” That quality of anti-elitism often means that scholars characterize demagoguery as a kind of populism (e.g., Reinhard Luthin). But lots of populism isn’t demagogic, and rhetoric in a democracy is of course going to attack some elite group—the super-rich, the military-industrial complex, Fat Cat Bankers. After all, major changes will be to the disadvantage of someone.

In addition, we don’t like to see ourselves as crushing some weak group; we like the David and Goliath narrative. The narrative of the spunky underdog fighting a massive power is so mobilizing that it’s often used under ridiculous circumstances. To condemn populism, therefore, just condemns rhetoric.[1]

As Aristotle pointed out, the elite can engage in demagoguery. Earl Warren’s demagoguery regarding “the Japanese” was directed toward Congressional representatives, and he was presenting himself as an expert summarizing the expert judgment of others. Harry Laughlin’s demagogic testimony before Congress regarding the supposed criminality and mental incapacity of various “races” was expert testimony–experts can be full of shit, as he was.[2] I think there is a different way of estimating expertise, but I’ll get to that in a bit.

At one point, I started to think that demagoguery simplifies complicated situations, and I still think that’s more or less true, but in a deceptively complicated way. Demagoguery can have very complicated narratives behind it, so complicated that they’re impossible to follow (because they don’t actually make sense). QAnon, 9/11 conspiracies, the Protocols of the Elders of Zion, conspiracy theories about Sandy Hook—they’re the narrative equivalent of an Escher drawing (conclusions are used as evidence for conclusions that are used as evidence for the first conclusions).

They’re often complicated narratives, in that they might have a lot of details and data, but they’re in service of a simple point about which one is supposed to feel certain: the out-group is bad, we are threatened with extermination[3], and any action we take against them is justified because they’re already doing worse or they intend to. So, the overall narrative is simple: we are good; they are evil.

Or, perhaps more accurately, the overall narrative is clear and provides us with certainty. Demagoguery equates certainty with expertise: experts are certain. Demagoguery doesn’t reject expertise, then, or even precision, but it does reject any “expert” opinion that talks in terms of likelihood. Demagoguery relies on the binary of certain/clueless.

Thus, in a demagogic culture, certainty (sometimes framed as “decisiveness”) is seen as real expertise, the kind of expertise that matters.

Demagoguery tends to favor the notion of “universal genius”–the idea that judgment is a skill that applies across disciplines. So, someone with “good judgment” can see the truth in a situation even if they aren’t very knowledgeable. “Good judgment” is (in this model) not discipline specific (so someone with a PhD in mechanical engineering might be cited as an expert about evolution because he’s a “scientist”).

What I’m saying is that there are five qualities that contribute to demagoguery that we’re tempted to call “anti-intellectualism”: 1) the rejection of uncertainty; 2) the related rejection of deliberation; 3) the emphasis on narratives that are, in their end result, simple (we’re good and they’re bad); 4) faith in “universal genius”; and 5) the equation of expertise with decisiveness.

Our impulse when arguing with someone who is promoting a debunked set of claims is to say “It’s been debunked by experts.” But that doesn’t work because it hasn’t been debunked by the people they consider experts. Similarly, it doesn’t help to say that they “reject facts.” They think they don’t–they think we do. (And we do, in a way–we reject data, some of which might be true.) I’m not sure how to persuade someone promoting false information that it’s false, but I’m increasingly coming to think that we’ll be running in place as long as we’re in a culture of demagoguery.

We need a conversation about certainty.



[1] I think there is a kind of populism that is toxic, and it’s the kind that Müller and Weyland each call “populism.” I think it’s more useful to call that kind of populism “populist demagoguery” or, as do Berlet and Lyons, “toxic populism.”

[2] I talk about these cases a lot more here.

[3] When I say this, many people focus on the “extermination” part, as though I’m casting doubt on whether groups sometimes face extermination. I’m not. As a side note, I’ll say that I’ve long noticed that people who live and breathe demagoguery have trouble noticing restrictive modifiers, especially if they’re left-branching or the modifier isn’t immediately obviously meaningful to them. That’s a different post, but the short version is that a person who thinks demagogically will read “Zionist Christianity is not necessarily a friend to Israel” as a claim about Christians, not a very specific kind of Christian.

Yes, unhappily, many groups face(d) extermination, but the situation isn’t zero-sum between only two groups. Something that hurt the Nazis didn’t necessarily help the Jews; Jews had potential allies among groups that were neither Jewish nor Nazi; there were, and had long been, disagreements within the Jewish communities in Europe as to how to respond to antisemitism. Even now, it’s hard to say what would have been “the” right response because there probably wasn’t only one right response.

[4] People not engaged in demagoguery aren’t obligated to argue with every person who disagrees with them, but if we reject every opposition argument on the grounds that simply disagreeing means someone is bad, then it’s demagoguery.