Sciencing in public

As someone deeply worried about how badly Americans argue about public policy, I’ve been especially worried about highly politicized attacks on science, and about how hard it is for scientists to get even basic concepts understood. As a historian of public argumentation, I’m unhappily aware that the tendency to attack scientific findings on purely political grounds isn’t new. And plenty of people have written about how science is attacked, and bemoaned our inability to get scientific findings to have real impact on public policy, but I think those pieces haven’t had much impact because of their own rhetoric.

Lots of people have said that scientists’ rhetoric is flawed because it’s too technical and academic, but, honestly, I don’t think that’s the problem. I think the two major problems that vex public uses of science in public policy are these: first, culturally, we have a vague definition of what a “science” is, and second, we have a thoroughly muddled notion of what “objectivity” is.

And scientists themselves don’t help. In public, too many scientists conflate “science” and “what I think is good science” and appeal to an inconsistent epistemology.

What people engaged in research about climate change, vaccines, evolution, and gender need to understand is that the people who attack what some of us think of as science do so by citing what they think of as science.

Behind the arguments that we think of as “science” arguments are, it seems to me, two deep misunderstandings: first, what a “science” is; second, what epistemology (model of knowledge) is right. The first one is relatively straightforward, but the second, more complicated one, is the really crucial one.

Part of the problem is that the cultural understanding of what it means to be a “science” is muddled, and, for a large number of people, simply outdated. Until well into the 20th century, various disciplines were called “sciences” that had nothing to do with what we now think of as the scientific method, insofar as they relied on non-falsifiable claims (eugenics, for instance). But they called themselves sciences and they were accepted as such because they had numbers, they had experts, and they had peer-reviewed journals. For many people, that older notion of a “science” prevails: a science is something that is done by people with degrees in fields that seem kind of science-y and have a lot of math. (Look at the oft-shared list of “scientists” who say global warming is a hoax.)

There are various organizations out there (and long have been) with very clear political agendas that call themselves “sciences” or “scientific” and manage to mimic the rhetorical moves of sciences. This, too, is nothing new. When mainstream scientific organizations abandoned race as a useful concept, racists formed their own organizations and journals that only published “studies” that fit their political agenda (John P. Jackson’s Science for Segregation describes this process elegantly). Meanwhile, they railed at the mainstream journals for being politicized. They managed to look like “science” to many people because they had authors who had degrees in science, some of whom worked as “scientists.” That notion of science is an identity argument: science is the work done by people we think of as scientists.

The same thing happened when psychologists decided that homosexuality was not a mental illness—organizations formed with the political agenda of only supporting research that pathologized homosexuality (and, once again, they condemned other research as “politicized”). And they call themselves scientific organizations, with “research” prominent in their titles. There are similar organizations and webpages (and some journals) promoting Young Earth Creationism, anti-vaccine rhetoric, climate change denial, and all sorts of other ideologically charged positions. And, as with pro-segregationist rhetoric, they are explicitly politicized while projecting that condemnation onto their critics. Because they are explicit that they are looking for “science” that supports beliefs they already have, one very straightforward way in which they are not sciences is that their claims are non-falsifiable.

They are scientific, they say, because they can generate studies and data that support their beliefs. In the case of creationism and homophobia, the groups often insist that they are proving that Scripture and “science” say the same thing. They can support their readings with data or quotes from people with degrees in science, and with scientific-sounding explanations. That’s cherry-picking, of course, but it means that they can invoke the authority of “science” to support their claims.

(And here I should probably come clean: I self-identify as Christian, and I think they cherry-pick Scripture just as much as they cherry-pick “science.”)

When I first wandered into these places, where people at odds with the scientific consensus insisted that they were doing science, I just assumed that they were being deliberately disingenuous, but I no longer think so. For me, as for many people, there is “normal science,” which is the work being done by people publishing falsifiable studies in peer-reviewed journals. Science, furthermore, has the quality that scholars in rhetoric call “good faith argumentation,” meaning that the people putting forward a claim can imagine being presented with data that would cause them to abandon it (there are other characteristics, but that one is the important one here). But that isn’t how everyone thinks about science–for many people, it isn’t about method, but about the identity of the person doing the work.

Young Earth Creationists, for instance, fail at every point mentioned above (except posture). They can cite data to support their claims (some of which, but not much, is true), but they can’t articulate the conditions under which they would abandon their narrative about the creation of the earth.

So, why do they continue to think of themselves as doing science?

It’s the identity argument. As I said earlier, for many people, “science” is the activity done by people who have degrees in a science field, regardless of the institution, and regardless of the discipline. So, how do they distinguish between good and bad science? Good science is true.

For them, science is a relationship to reality—if you’re a “scientist,” then you have a direct connection to the logos that God breathed into the fabric of the universe. Thus, a list of 700 scientists saying that global warming is false matters, because it shows people with that kind of unmediated knowledge making the claim. That faith in unmediated knowledge is often called the “naïve realist” epistemology.

That “unmediated knowledge” is crucial to all this, and it’s where scientists trip themselves up. It’s important to understand that the people arguing for young earth creation believe that they can simply look and see the truth–so any argument that says “You’re wrong, because you can simply look and see a different answer” isn’t going to work rhetorically. They are looking, and they can find evidence to support their position.

And that raises the second, fairly complicated, problem about epistemology. And scientists have trouble with this, I think, because when they’re in public they’re naive realists, and they insist you’re either a naive realist or a postmodern relativist (really? do they think creationists are postmodernists? they’re pre-modernists), but when they’re at home they’re skeptics. Science itself rejects naive realism, so scientists need to stop talking as though the only options are naive realism and postmodernism. (In fact, that’s how creationists talk, which is a different post.)

A non-trivial complication in how the public argues about “science” is that what I earlier called “normal science” is often advocated by people who simultaneously do and don’t claim that they have unmediated knowledge of the world. That’s a rhetorical problem. Scientists and young earth creationists (and all the other advocates of bad science out there) both appeal to and reject naïve realism.

Briefly, many defenders of science in public debates make two claims simultaneously: science is indisputably true; science is better than religion because scientists change their mind when presented with new evidence—science is falsifiable. In other words, science looks true to people AND the results of scientific studies are contingent claims that could be proven false. So, as I said, in public discourse, too many scientists appeal to naive realism, but the scientific method itself rejects naive realism.

To many people, that looks as though scientists are saying that, although we’ve changed our mind a lot in the past (meaning “science” can be wrong) we are absolutely right now. Or, more bluntly: science is true but it’s been false.

And, let’s be blunt: it’s been false. Eugenics was mainstream science. It had bad methods, but it was mainstream science, and it was taught in science classes. It didn’t look bad at the time. Medicine claims to be a science, as does nutrition, and both have made a lot of claims that scientists in those fields now believe to be false.

Scientists need to reject the false binary of “you either believe that science tells us things that are obviously true” or “you are a postmodernist literary critic who believes that all claims are equally true.” That is not only a falsifiable claim, but a false one. Young earth creationists are cheerfully unaffected by anything postmodern, and they say that they believe things that are obviously true. Also, there are very few “postmodernists” who say that “all claims are equally true”–Feyerabend comes to mind, and very few others, and no, that isn’t actually what Foucault or Derrida said. (And I don’t even really like Foucault or Derrida, and I think that’s just an outrageously ignorant way to characterize what they’re saying.)

Keep in mind, Popper said that objectivity isn’t about what an individual does. A claim is objective, he said, because it’s an object in the world, and he said an objective claim isn’t necessarily true. So, since Popper said that an individual scientist isn’t necessarily objective, is he a postmodern relativist?

Good science isn’t about the cognitive processes of individuals engaged in science; it’s about the arguments people in science have. When people claim that you either believe what “science” says right now or you’re a postmodernist relativist hippy, they’re rejecting the scientific method.

The whole premise of the scientific method, especially concepts like the control group, falsifiability, and the double-blind study, is that people are prone to confirmation bias (a good study doesn’t set out to confirm a hypothesis: it sets out to falsify one). The scientific method presumes that human perception is clouded. Acknowledging that individuals can’t simply see the truth doesn’t make the underlying epistemology either solipsistic or relativist (both of which are, oddly enough, often misnamed as postmodernism—they long predate modernism, let alone postmodernism). It means that science generally exists in the realm of skepticism, sometimes radical, sometimes the mild version that Karl Popper called fallibilism. For Popper, there is a truth out there, and it can be perceived by individuals, but individuals are fallible judges of when they have and have not reached it.

Science isn’t about binaries. It’s about continua. There are some claims that could, in principle, be falsified, but have so thoroughly withstood such tests that it isn’t even interesting to consider the possibility—such as evolution. There are aspects of evolution about which there is disagreement, and about which new consensuses continue to form (such as the direct ancestor of Homo sapiens), but all of those disagreements are subject to proof and disproof through further research. And that is the difference between evolution and creationism: religious faith, by its very nature, cannot be subject to disproof. Science is, fundamentally, a rejection of naive realism and of binaries about certainty: it says we should be skeptical about all claims, and we should think about claims in terms of how certain we are of them.

It’s no coincidence that science and skepticism arose at the same time, and, in fact, that’s the argument that scientists make about how science is different from religion: a true scientist will abandon her beliefs if the data disconfirm them, but religion is about rejecting the data if it disconfirms the beliefs.

Let me rephrase my original statement of the problem: scientists make a rhetorical claim (their claims should be granted more credence because of how they are supported) and an epistemological one (their arguments are true). I sincerely believe that science is in such a bad way right now because too many advocates of science reject what they know: that science isn’t about being certain or not, but about how certain you are, and about the conditions under which you should change your mind.

The epistemology underlying science is a skeptical one, and scientists know that. When they’re arguing in public, they need to stop acting as though there is either naive realism or postmodern relativism. Scientists are skeptics who argue passionately for their point of view.

Right now, our political world is demagogic, and that means that it is dominated by the notion that there are good people who perceive the obviously correct way to do things, and then there are those assholes. We disagree about who the assholes are, but we all agree that it’s a binary.

What science could and should do for us is show a different way of thinking about thinking–that the right course of action depends on a correct understanding of the world as it is, and there is no correct understanding immediately available to us, but there are understandings that look pretty damn good, given all the research that’s been done.

I’m not saying that scientists need to argue better in public; while I think the whole project of sciencing in public is wonderful, I also think, ultimately, scientists aren’t obligated to be rhetoricians. (Some of them are wonderful rhetoricians, such as Steven Weinberg, but that shouldn’t be a requirement.) Instead, I think we need, as a culture, a better understanding of how knowledge isn’t a binary between certain and uncertain, but a continuum. I think, oddly enough, that the solution to our current problem of fake science isn’t really in science, but in the study of knowledge.

Among Democrats (Compromise, Purity, and Lefty Politics)

Among Democrats, there are a lot of narratives about the 2016 election, and two of them are highly factional (that is, they assume an us and a them, with us being the faction of truth and beauty and them being the people who are leading us astray). One is that Clinton’s election was tanked by Bernie-bros who were all young white males too obsessed with purity to take the mature view and vote for Clinton. The other is that the DNC, an aged and moribund institution, foisted Clinton onto Dems when she was obviously the wrong candidate.

Both of those narratives are implicit calls for purity, for a Democratic Party (or left) that is unified on one policy agenda—maybe the policy agenda is a centrist one, and maybe it’s one much further left—but the agreement is that we need to become more purely something. Both narratives are empirically false (or else non-falsifiable), patronizing, and just plain offensive. In other words, both of those narratives are driven by the desire to prove that “us” is the group of truth and goodness and “them” is the group of muddled, fuddled, and probably corrupt idjits.

And, as long as the discourse on the left is about which “us” is the right us, progressive politics will lose.

There isn’t actually a divide in the left—there’s a continuum. People who can be persuaded to vote Dem range from authoritarians drawn to charismatic leadership (anyone who persuades them that s/he is decisive enough to enact the obviously correct simple policies the US needs) all the way through various kinds of neoliberalism to some versions of democratic socialism. And there are all those people who can vote Dem on the basis of a single issue—abortion or gun control, for instance. When Dems insist that only one point (or small range) on that continuum is the right one, Dems lose because none of those points on the continuum has enough voters to win an election. That’s why purity wars among the Dems are devastating.

While voting Dem is actually a continuum, there are many who insist it is a binary—those whose political agenda the DNC should represent (theirs) and those whose agenda is actually destructive, whose motives are bad, and who cause Dems to lose elections (everyone else—who are compressed into one group).

Here’s what’s interesting to me. It seems to me that everyone who wants Dem candidates to win recognizes that a purity war on the left is bad, and everyone condemns it. Unhappily, being opposed to a purity war in principle and engaging in one in effect are not mutually exclusive. There is a really nasty move that a lot of people make in a rhetoric of compromise—we should compromise by your taking my position—and that is what a lot of the “let’s not have a purity war” talk on the left seems to me to be doing. Let’s not do that. Let’s do something else.

This is about the something else that we might do.

And it’s complicated, and I might be wrong, but I think that Dems will always lose in an “us vs. them” culture because, at its heart, the Dem political agenda is about diversity and fairness, and people drawn to Dem politics tend to value fairness across groups more than loyalty to the ingroup, so any demagogic construction of ingroups and outgroups is going to alienate a lot of potential Dem voters. Sometimes voting Dem is a short-term looking out for your own group, but an awful lot of Dem voters are motivated by the hope of creating a world that includes them. I don’t think Dems will succeed if we grant the premise of the politics we’re resisting: that only the ingroup is entitled to good things.

But we’re in a culture of demagoguery, in which politics is framed as a battle between Good and Evil, and deliberation (in which people of different points of view come together to work toward a better solution) is dismissed. If we’re in a world of us vs. them, how can Dems create a politics of us and them? That is our challenge.

And I want to make a suggestion about how to meet that challenge that is grounded in my understanding of what has happened in the past: not just in 2016 (although that is part of it), but also in ancient Athens, to opponents of Andrew Jackson, to opponents of Reagan, and in earlier eras of highly factionalized media. I want to argue that what seem to be obviously right answers are not obvious, and possibly not even right.

1. In which I watch lefties tear each other to shreds and lose an election we should have won

When I first began to pay attention to politics, and saw how murky, slow, and corrupt it all was, it seemed to me that the problem was clear: people started out with good principles, and then compromised them for short-term gains, and so, Q effing D, we should never compromise. (I saw The Candidate as a young and impressionable person.)

I could look at political issues and see the obvious course of action. And I could see that political figures weren’t taking it. Obviously, there was something wrong with them. Perhaps they were once idealistic, perhaps they had good ideas, but they were compromising, and, obviously, they shouldn’t; they should do the right thing, not the sort-of right thing.

Another obvious point was how significant political change happens: someone sets out a plan that will solve our problems, and refuses to be moved. ML King, Rosa Parks, FDR, Woodrow Wilson, John Muir, Andrew Jackson (no kidding—more about his being presented as a lefty hero below) were all people who achieved what they did because they stood by their principles.

That history was completely, totally, and thoroughly wrong, in that neither Wilson nor Jackson was the progressive hero I thought, and all of those figures compromised a lot. But, if that’s the history you’re given, then you will believe that to compromise necessarily means moving from that obviously right plan (about which you shouldn’t have compromised) to one that is much less right, and that the only reason to do that would be pragmatic (aka Machiavellian) purposes. Therefore, substantial social change and compromise are at odds, and if you want substantial social change, you have to refuse to compromise. (Again, tah fucking dah—there’s a lot of that in easy politics.)

My basic premise was that the correct course of action was obvious, and, therefore, I had to explain why political figures didn’t adopt it. Why would people compromise a policy that is obviously right? And, obviously, they had to deviate from the right course of action in order to get political buy-in from people who value things I don’t value. Or they were bad politicians in the pocket of corporate interests. (Notice how often things seemed obvious to me.)

And then Reagan got elected. Reagan lied like a rug, and yet one of the first things his fans said about him was that he was authentic. He announced his run for the Presidency by saying he would support states’ rights at the site of one of the most notorious civil rights murders. And yet his fans would get enraged if you suggested he appealed to racism.

People loved him, regardless of his policies, his actual history, his lies. They loved his image. (It’s still the case that people admire him for things he never did.)

When he was elected, lefties went to the streets. We protested. The people protesting were ideologically diverse—New Deal Dems, people who had said that there was no difference between him and Carter, radical lefties, moderate lefties; I even saw people who had told me they intended to vote for Reagan because it would make the people’s revolution more likely, and who were now protesting that the candidate they had supported had won.

There were more than enough people out protesting Reagan’s election to prevent his getting reelected. And, in 1980, we all agreed that he shouldn’t be reelected. Unhappily, we also all agreed that he had been elected because there was too much compromising in the Dem party, that Carter was a warmongering tool of the elite, and that the mistake we made was not having a candidate who was pure enough. And, so, we agreed, the solution was for the Dems to put forward a Presidential candidate who was more pure to the obviously right values and less willing to compromise on them. We didn’t get that candidate; in fact, we didn’t get a very good candidate at all (he was pretty boring), but his policies would have been good. And a lot of lefties refused to vote for him.

Unhappily, it turns out we disagreed as to what those obviously right values were.

In 1980, the Democratic Party was the party of unions, immigrants, non-whites, people who believed in a strong safety net, isolationists, humanitarian interventionists, pro-democracy interventionists, people who believed a strong safety net was only possible in a strong economy (what would later be called third-way neoliberals), environmentalists, people who were critical of environmentalists, and all sorts of other ideologically diverse people.

There wasn’t a party platform on which we could all agree. To support the unions more purely would have, union reps argued, meant virulently opposing looser standards about citizenship and immigration. The anti-racist folks argued for being more inclusive about citizenship and immigration. Environmentalists wanted regulations that could cause manufacturing to move to countries with lower standards, something that would hurt unions. People who wanted no war couldn’t find common ground with people who wanted humanitarian intervention. (And so it’s interesting how conservative the 1980 platform now looks.)

Dems, at that point, had five choices: reject the notion that there was a single political agenda that would unify all of its groups (that is, move to a notion of ideological and policy diversity in a party); decide that one group was the single right choice; try to find someone who pleased everyone; try to find candidates who wouldn’t offend anyone; or engage in unification through division (get people to unify on how much they hated some other group).

Mondale was the fourth; most lefties went for the second or fifth. I think we should consider the first.

At the time I was a firm believer in the second, for both good and bad reasons. And lots of other people were too. What we believed is what I have come to think of as the P Funk fallacy: if you free your mind, your ass will follow. I believed that there were principles on which all right-thinking people agree, and that those principles necessarily involve a single policy agenda. Thus, we should first agree on principles, and then our asses will follow.

Lefty politics is the grandchild of the Enlightenment. We believe in universal rights, the possibilities of argument, diversity as a positive good, the hope of a world without revenge as the basis of justice. And, perhaps, we have in our ideological DNA a gene that is not helping us—the Enlightenment is also a set of authors who shared the belief (hope?) that, as Isaiah Berlin said, all difficult questions have a single true answer. I think the hope is that, if we get our theories right—if we really understand the situation—then the correct policy will emerge.

But there might not be a correct policy, at least not in the sense of a course of action that serves everyone equally well. An economic policy that helps lenders will hurt borrowers, and vice versa.[1] In trying to figure out what kind of economic policy we will have, we can decide we’re the party of lenders, or we’re the party of borrowers, and only support policies that help one or the other. Or we could be the centrist party, and try to have policies that kinda sorta help everyone a little but not a lot and therefore kinda sorta hurt everyone a little but not a lot. And thereby we’re promoting policies that everyone dislikes—I think Dems have been trying that for a while, and it isn’t working. But neither is deciding that we’ll only be the party of borrowers, since borrowers require lenders who are succeeding enough to lend.

The problem with the whole model of politics being a contest between us and them is that it makes all policy discussions questions of bargaining and compromise. What’s left out is deliberation. But that’s hard to imagine in our current world of, not just identity politics, but of a submission/domination contest between two identities. And, really, that has to stop.

Blaming the left for identity politics is just another example of the right’s tendency toward projection. The Federalist Papers imagines a world in which elections are identity-based (which the Constitution’s defenders saw as preferable to faction-based voting). Since most voters could not possibly personally know any candidate for President or Senate, they should instead vote for someone they could know, and whose judgment they trusted (see, for instance, what #64 says about the electors and the Senate). That person could then know the various candidates and make an informed decision as to which of them had better judgment. So, at each step, people are voting for a person with good judgment, to whom they were delegating their own deliberative powers.

That vision quickly evaporated and was replaced by exactly what the authors of the Constitution had tried to prevent: party politics. And then, by the time of Andrew Jackson, we got a new kind of identity politics: voting for a candidate because he seems to share your identity, and will therefore look out for people like you. His good judgment comes not from expertise, the ability to deliberate thoughtfully, or deep knowledge of history, but from his being an anti-intellectual, successful, and decisive person who cares about people like you. Through the nineteenth century, the notion of an ideal political figure shifted from someone much smarter than you are to someone not threatening to you.

2. Factionalism, Andrew Jackson, and the rise of identification

The problem that everyone to the left of the hard right has is the same: we are in a culture in which rabid factionalism on the part of various right-wing major media is normalized, and anything not rabidly right-wing is condemned as communist. Lefties should be deeply concerned about factionalism (including our own), and careful about how we try to act in such a world. There are several clear historical lessons for Americans as to what that kind of rabid factionalism does (I’ll just talk about Athens), and a clear lesson from American history as to how we should not try to manage it (the case of Andrew Jackson).

Here’s the short version. The US, when it was founded, was an extraordinary achievement on the part of people well-versed in the histories of democracies, republics, and demagoguery. Their major concern was to make sure that the US would not be like the various republics and democracies with which they were familiar. That included the UK (which was, at that point, immersed in a binary factionalism), various Italian Republics (especially Florence and Venice), the Roman Republic, and Athens.

And Athens is an interesting case, and something about which current Americans should know more. Knowing their Thucydides (via Thomas Hobbes, a post I might write someday), the authors and defenders of the constitution knew that Athens had shot itself in the face because at a certain point (just after the Mytilenean Debate, for those of you who care), everyone in Athens thought about politics in two ways: 1) what is in it (in the short-term) for me; 2) what will enable my political party to succeed?

No one worried about “what is best for Athens” with a vision of “Athens” that included members of the other political party. So, because Athens was in a situation of rabid factionalism, you would cheerfully commit troops to a military action if you thought it would do down the other party. Military decisions were made almost entirely on factional bases.

Thucydides describes the situation. He says that city-state after city-state broke into hyper-factional politics that was almost civil war. All anyone cared about was whether their party succeeded—no one listened to the proposals of the other side with an ear to whether they were suggesting something that might actually help. In fact, being willing to listen to the other side, being able to deliberate with them, looking at an issue from various sides—all of those things were condemned as unmanly dithering. Refusing to call for the most extreme policies or suggesting moderation wasn’t a legitimate position—anyone doing that was just trying to hide that he was a coward. Only people who advocated the most extreme policies were trustworthy; anyone else wasn’t really loyal to the party and so shouldn’t be trusted. Plotting on behalf of the party was admirable, and it didn’t matter how many morals were shattered in those plots—success of the party justified any means. But people weren’t open about being willing to violate every ethical value they claimed to have in order to have their party triumph; people cloaked their rabid factionalism in ethical and religious language while actually honoring neither. So, Thucydides says, there was a situation in which every good value was associated with your party triumphing, and every bad value with its failure to triumph.

People worried about their party, and not their country.

We can think, why would anyone do that? And yet, we might do it. No one thought to themselves, “I wish to hurt Athens and so I will only look out for my political party.” Instead, what they probably never thought, consciously, but was the basis for every decision was that only their group was really Athenian. So, they thought (and sincerely believed) anything that promotes the interests of my group is good for Athens because only my group is really Athenian.

Michael Mann, a scholar of genocides, calls this the confusion of ethos and ethnos. The “ethos” of a country is the general culture, and the “ethnos” is one particular ethnic group. What can happen is that a specific group decides that it is the real ethos, and therefore any action against other groups is protecting “the people.” They are the only “people” who count. Seeing only your class, political party, ethnic group, or religion as the real identity of the group hammers any possibility of inclusive deliberation. It is also the first step toward the restriction, disempowerment, expulsion, and sometimes extermination of the non-you. While not every instance of “only us counts” ends in mass killing, every kind of mass killing—genocide, politicide, classicide, religiocide—begins with that move.

Even ignoring the ethics of that way of thinking, it’s a bad way for a community to deliberate. But what they did think, as Thucydides says, is that anything that helped you and your party was a good thing to do, even if it was something you would condemn in the other party. You might cheerfully use appeals to religion to try to justify your policies, but if other policies better helped your party, then you’d use religion to justify those policies. No principle other than party mattered.

If the other side proposed a policy, you didn’t assess whether it was a good policy; you were against it. You were especially likely to be against it if it was a good policy, since then they would gain more supporters. You would gleefully gin up a reason that troops should be sent to a losing battle and put an opposition political figure in charge—losing troops (and a battle) was great if it hurt the other party.

And so Athens crashed. Hardly a surprise.

In fact, the people of Athens were dependent on each other, and no group could thrive if other groups lost battles. Us and Them thinking forgets that we are us.

At the time of the American Revolution, the British political situation was completely factionalized. We might like to admire Edmund Burke, who so eloquently defended the American colonies, but even I (an admirer of his) know that, had his party been in good with George III (they weren’t) he probably would have written just as eloquent an argument for crushing the American Revolution. The authors of the Constitution were also well aware of other historical examples that showed the fragility of republics, especially Venice (one of the longest lasting republics), Florence, and Rome.

And those were the problems the authors of the Constitution tried to solve through the procedure of people voting for someone whose authority came from intelligence and judgment. That is, the constitution worked by having people vote, not for the President directly (since you couldn’t possibly know the President personally) but for someone you could know—a state legislator, an elector—whose judgment you could assess directly. But factions arose anyway.

The factions were somewhat different from those in either Athens or Britain. In Athens it was (more or less) the rich who wanted an oligarchy, or really a plutocracy, with the wealthy having more power than the poor, and with very little redistribution of wealth. On the other side were the non-leisured (not necessarily poor, but not very wealthy either) who wanted at least some redistribution of wealth and a lot of power-sharing. But an individual’s decision to join a particular faction was also influenced by family alliances and personal ambition. In Britain, factions were described as country versus city (wealth that came from land ownership versus industry and finance) which may or may not be accurate. As in Athens, there were other factors than just economics, and that city-country distinction might itself have been nothing more than good rhetoric to explain factions that weren’t really all that different from each other.

In the US, by the time of Andrew Jackson’s rise (the 1820s), there was some division along economic lines (agriculture vs. shipping, for instance), and some along ideological ones (Federalist vs. Antifederalist), but they didn’t give a very clean binary. There were more than two parties, and even the major parties were coalitions of people with nearly incompatible political agendas (Whigs and Democrats were both strong in the North and South, for instance). Given both the youth of the country and the large number of immigrants, there weren’t necessarily family traditions of having been in one faction or another, and there wasn’t a neat regional distinction (the North was still predominantly agricultural, and some “Northern” states had slaves until the 1830s, so neither the agricultural/industrial nor slave/free distinctions provided any kind of mobilizing policy identity). There wasn’t the odd role that the monarchy played in British political factions (for years, one faction attached itself to the monarch, and another to the son whom the monarch hated). US factions were muddled and shapeshifting.

A disparate coalition is particularly given to intrafactional fighting, splitting, and purity wars, and so there is generally a strong desire to find what is usually called a “unification device.” The classic strategy to unify a profoundly disparate coalition is two-part: unification through finding a common enemy; cracking the other side’s coalition with a wedge issue. If a party is especially lucky, that two-part strategy is made available through a single issue. And that’s what US parties did in the antebellum era, and, after trying various options, they settled on fear-mongering about abolitionism, with some anti-Catholicism thrown into the mix.

Antebellum media was extremely factionalized. Newspapers were simultaneously openly allied with a particular party, rabidly factional, and passionate in their condemnations of faction.

“The bitterness, the virulence, the vulgarity, and perfidy of factious warfare pervade every corner of our country;–the sanctity of the domestic hearth is still invaded;–the modesty of womanhood is still assailed…” (“Party” U.S. Telegraph, June 24, reprinted from the Sunday Morning News). The anti-Jackson Raleigh Register had the motto “Ours are the plans of fair delightful peace, unwarp’d by party rage, to live like brothers” but spent the spring and early summer of 1835 in vitriolic exchanges with the Jacksonian Standard. One letter in the exchange, for instance, begins, “The writhing, twisting and screwing–the protestation, subterfuge and unfairness and the lamentation, complaint and outcry displayed in this famous production” (Raleigh Register February 10, 1835). (From Fanatical Schemes).

For instance, a newspaper’s criticism of a political party inspired a member of that party to threaten a duel, and, once the various rituals that enabled a duel to be avoided had been enacted, the man who had threatened a duel over criticism of his political faction said, “I regard the introduction of party politics as little less than absolute treason to the South.”

When, from about 2003 to 2009, I was working on a book about proslavery rhetoric, this characteristic—that people operating on purely factional motives condemned factionalism—was one of the things that made me begin to worry about current US political discourse, since it was so true of what I was seeing in American media. The most passionately factional media have mottos like “Fair and Balanced.” I have an acquaintance who consumes nothing but hyper-factionalized media, and he has several times told me I shouldn’t believe something from outside that media because it’s “biased.” Clearly, he doesn’t object to biased media, since that’s all he consumes. And then I noticed that’s a talking point in various ideological enclaves—you refuse to look at anything that disagrees with the information you’ve gotten from your entirely biased sources on the grounds that it is biased.

If you push them on that issue, I’ve found that consumers of that extremely factional media respond to criticisms of their factionalism (and bias) with “But the other faction does it too”—a response that only makes sense in a world in which every question is “which faction is better,” not “what behavior is right.” So, even their defense of their factionalism shows that, at base, they think political discourse is a contest between factions, and not a place in which we should—regardless of faction—try to consider various policy options. They live and breathe within faction.

Andrew Jackson was tremendously successful in that world, partially because of his conscience-free use of the “spoils system”—in which all governmental and civil service positions were given to supporters. And Jackson didn’t particularly worry about his policies; one of his major “policy” goals was abolishing the National Bank. Scholars still argue about whether he had a coherent political or economic policy in regard to the bank; what is clear is that he didn’t articulate one, nor did his supporters. Hostility to the bank was what might be called a “mobilizing passion,” not a rationally-defended set of claims. But that passion was shared with many who had almost gut-level suspicions of big banks, monetary controls, and a strong Federal Government.

It was such a widely-shared view that Jackson’s destruction of the Bank, and its direct consequence, the Panic of 1837, couldn’t serve as a rallying point for his opposition. And Jackson’s combination of popularity, use of the spoils system (including his appointment of judges—one of whom is an ancestor of mine), and strong political party worried many reasonable people that he was trying to create a one-party state. So, even as his second term was ending, people were trying to figure out how to reduce his power, and yet they couldn’t use what were quite clearly unsound economic policies to do it.

There were more opponents of Jackson than there were supporters, but to call them disparate is an understatement. Some were pro-Bank, but too many were anti-Bank for that issue to be useful. There were a large number of anti-Catholics (some of whom might have been Masons), and also a few anti-Masons. Jackson’s bellicose (albeit effective) handling of the Nullification Crisis had alienated many of the South Carolina politicians whom he had trounced, but their stance on the tariffs (which had catalyzed the Nullification Crisis—they were trying to nullify tariffs) was incompatible with the interests of manufacturers in other areas.

Jacksonian Democrats played two (related) cards quite effectively—they played to racism about African Americans by supporting disenfranchisement of African-American voters and engaging in fear-mongering about free African Americans at the same time that they openly embraced Irish-Catholic voters (whose right to vote was still an issue in some places). They thereby drove a wedge between two groups that might have allied (poor Irish and freed African Americans), essentially offering the gift of “whiteness” to the Irish in exchange for their political support (this story is elegantly and persuasively told in How the Irish Became White). Because politics naturally works by opposites, this made Catholicism an issue on which other parties had to take a stand, and they stood to lose large numbers of voters no matter which way they jumped. The only thing that the various anti-Jackson parties shared was that they were anti-Jackson, and it’s hard to raise a lot of ire against a white guy who does a good job of coming across as a regular guy who really cares about “normal” people. In rhetoric, that’s called “identification”—a rhetor persuades an audience that s/he and they share an identity, and persuades them that the shared identity is all the information the audience needs.[2]

Elsewhere I’ve argued that John Calhoun tried to use fear-mongering about abolitionists (who were a harmless fringe group at that point) in order to unify proslavery forces behind him. It’s a classic strategy—you find a hobgoblin that is politically powerless but that frightens a politically powerful group, and you present yourself as the only one who can save them from that hobgoblin. Unfortunately for everyone, Calhoun’s opponents simply picked up his method, and American politics began an alarmism race to see who could out-fearmonger the others and call for increasingly extreme (and irrational) gestures of loyalty to slavery. Eventually, those gestures (such as the Fugitive Slave Law, the “gag rule,” the attempt to expand slavery past the Mason-Dixon Line, and, finally, the Dred Scott decision) generated as much fear and anger about The Slave Power as proslavery rhetors were generating about abolitionists.

Reagan was much like Jackson, in that his economic policies were vague but seemed populist, and he persuaded people that he really cared about them and understood them. He was normal, and he wanted normal Americans to be at the center of America.

Trump’s situation is different in that he has never had very high approval outside of his faction, but the rabidly factionalized media ensures that he has a deliberately and wickedly misinformed faction who are willing to pivot quickly to a new posture on any political issue.

What makes the two people similar, and like Jackson, is just that they have far more opponents than they have allies, and a highly mobilized base. As long as the opposition remains internally factionalized, they win. But, at this point, all that is shared among Trump’s opponents is opposition to Trump. The impulse might be to try to do what Jackson’s opponents did, and find some issue about which to fear-monger, or to do what Reagan’s opponents did, and remain factionalized. Right now, we seem headed toward the second, and in a somewhat complicated (and genuinely well-intentioned) way.

The advice seems to be that we need to have a unified and coherent policy agenda in order to mobilize voters. And, while I agree that simply being anti-Trump isn’t enough, I don’t think the unified-and-coherent-policy-agenda strategy will work either, for several reasons. The first reason is that it is trying to solve the problem of faction through faction. The second (discussed much later) is that it is grounded in a misunderstanding of how Americans vote.

3. Trying to solve the problems of factionalized politics by creating a more unified faction

[Most of this section was pulled out and posted separately here.]

4. The mobilizing passion/policy argument

Speaking of reasonable arguments and thinking about probabilities, what are reasonable ways to go on from here and not repeat the errors of the past? The two most common arguments as to what we should do now are both, I’ll argue, reasonable. I’ll also argue that they’re probably wrong. But they aren’t obviously wrong, and I doubt they’re entirely wrong. One is that we’re losing elections because we aren’t putting forward a sufficiently charismatic leader who inspires passionate commitment to a clear identity (what I always think of as “the Mondale problem”). The second is that the problem for the Dems in 2016 was that they didn’t have a sufficiently progressive platform of policies, and so there wasn’t a mobilizing political agenda. Therefore, we should have a clearer mobilizing identity or political agenda.

I think these are reasonable arguments, but I don’t think either of them will work—I’m not sure they’re plausible (they certainly aren’t sufficient), and I’ll explain why in reverse order.

First, as to the “we just need someone with a clear progressive policy agenda” argument, I have to say that a lot of lefties who make that argument in my rhetorical world turn out to have no clue what policies Clinton advocated. They lived in a world of hating on Clinton throughout the election, and so remain actively misinformed about her policy agenda (and the number of them who shared links from fake news sites in October was really depressing).

A lot of lefties are political wonks, and so we assume that everyone else is equally motivated by policy issues. Unhappily, a lot of research suggests that isn’t the case. The next section relies heavily on three books: Hibbing and Theiss-Morse’s Stealth Democracy (2002), Achen and Bartels’ Democracy for Realists (2017), and Parker and Barreto’s Change They Can’t Believe In (2014). I should say, before going through the research on the issue, that I’m not as hopeless about the prospects for more policy argumentation in American public discourse as I think these authors are, and I do think that improving our politics through improving our political discourse is the most sensible long-term plan. For the short-term, however, I think it makes sense to be pragmatic about how large numbers of people make decisions about voting, and they don’t do it on the basis of deep considerations of policy—or on the basis of policy at all.

John Hibbing and Elizabeth Theiss-Morse summarize their research: people care more about process than they do about policy, and they “think about process in relatively simple terms: the influence of special interests, the cushy lifestyle of members of Congress, the bickering and selling out on principles” (13). According to Hibbing and Theiss-Morse, people believe that the right course of action on issues is obvious to people of goodwill and common sense who care about “normal” Americans: people believe that there is consensus as far as the big picture and that “a properly functioning government would just select the best way of bringing about these end goals without wasting time and needlessly exposing the people to politics” (133). Hibbing and Theiss-Morse refer to “people’s notion that any specific plan for achieving a desired goal is about as good as any other plan” (224).

A disturbing number of people believe that the correct course of action is obvious, because it looks obviously correct from their particular perspective. And I should emphasize that it isn’t just those stupid people who do it. Even lefties—even academic lefties—who emphasize the importance of perspective, teach about viewpoint epistemology, and reject naïve realism can regularly be heard at faculty meetings bemoaning the benighted administration for its obviously wrong-headed policy. In my experience, there is always a perspective from which the administration’s response is sensible. Most commonly, something that puts a great burden on my department (and my kind of department) is a policy that works tremendously well for most of the university, or for the parts of the university that the administration values more. Sometimes the bad policies are mandated by the state or federal government, or sometimes they are, I think, a misguided attempt to improve the budget situation. From my perspective, their policies look bad; from their perspective, my preferred policy looks bad.

I’m not saying that both policies are equally good, or all perspectives are equally valid, or that there is no way out of the apparent conundrum of a lot of people who all sincerely care for the university disagreeing as to what we should do. I’m saying that it’s a mistake for any of us to think that the correct course of action is obviously right to every reasonable person. I’m saying we really disagree, and that determining the best policy is complicated.

Most important, I’m saying that the tendency to dismiss disagreement and assume that complicated problems have simple solutions is widespread.

Since this depoliticizing of politics is widespread, how do people explain all the disagreement about policies? Hibbing and Theiss-Morse argue that people believe that most politicians are self-interested, and bicker so much because they are submissive to the “special interests” that donate money to them: “The people would most prefer decisions to be made by what [Hibbing and Theiss-Morse] call empathetic, non-self-interested decision-makers” (86). They quote one of the participants in their research who “said he had voted for Ross Perot in 1996 because he felt Perot’s wealth would allow him to be relatively impervious to the money that special interests dangle in front of politicians” (123).

Hibbing and Theiss-Morse are persuasive on the profoundly anti-democratic way that people perceive “special interests.” They say, “Our claim is that the people see special interests as anybody with an interest. Since government is filled with people who have interests, the people naturally come to the conclusion that it is filled with special interests.” (226)

People use the term “special interest,” according to Hibbing and Theiss-Morse, “to refer to anybody discussing an issue about which they do not care” (222).

We see ourselves as “normal” Americans, whose needs should be central to American policy, and whose problems should be solved quickly and sensibly. Were government functioning well, that’s what would happen, but it isn’t happening because the people in office put “special interests” above people like us, so we want someone who conveys compassion and care for us.[5]

That claim—that voters care more about caring and quick solutions to their problems and are neither interested in nor moved by policy deliberation—is supported by Achen and Bartels’ Democracy for Realists, which reviews years of studies in order to refute what they call the “folk theory of democracy.” That theory assumes that democracy is “rule by the people, democracy is unambiguously good, and the only possible cure for the ills of democracy is more democracy” (53).

Achen and Bartels conclude that elections don’t represent some kind of wisdom of the people, but “that election outcomes are mostly just erratic reflections of the current balance of partisan loyalties in a given political system” (16). Achen and Bartels argue that voters’ perceptions of policies—even basic facts—are largely determined by motivated reasoning (people use their powers of reason to rationalize a decision they have made for partisan reasons) or simply out of a desire “to kick the government,” even for natural disasters over which the government had no control (118). People aren’t motivated to join a party because they like the policies: “The primary sources of partisan loyalties and voting behavior, in our account, are social identities, group attachments, and myopic retrospections, not policy preferences or ideological principles” (267). By “myopic retrospections,” they mean events that happened in a very short period just before the election, for which they are punishing the incumbents.

Achen and Bartels refer to Hibbing and Theiss-Morse, and other scholars, in their conclusion that “many citizens in well-functioning democracies” don’t understand the value of opposition parties and the necessary disagreement that comes with different points of view.

They dislike the compromises that result when many different groups are free to propose alternative policies, leaving politicians to adjust their differences. Voters want ‘a real leader, not a politician,’ by which they generally mean that their ideas should be adopted and other people’s opinions disregarded, because views different from their own are obviously self-interested and erroneous. (318)

There is a right way, in other words, and it’s the way that looks right to normal people, and it’s the one that should be followed.

Michele Lamont’s The Dignity of Working Men (2000) emphasizes that many men (especially white) gain dignity from seeing themselves as disciplined, and explain their success as completely their own individual achievement—they actively resent goods (such as support of various kinds) being given to people who don’t work (see especially 132-135; this was less true of African Americans whom Lamont interviewed, who tended to emphasize the “caring” self). And, especially for white men, wealth isn’t necessarily good or bad; they don’t necessarily resent people who are more wealthy, but they do resent people with higher status who look down on them (108-109). They want to feel respected and cared about (which may explain Trump’s success with precisely the kind of voter whom many people thought would resent his problematic record with small businesses).

What all of this means is that thinking that the issue for the Dems in 2016, or the issue at the state and Congressional level, is that we haven’t articulated a compelling and thorough policy argument is almost certainly wrong. People who voted for Obama and then voted for Trump weren’t drawn by his policies, but by his identity. As Achen and Bartels remind us, voters often get wrong the policies of their favorite political figures or their own party. And voters are easily maneuvered by mild shifts in wording (asking people about the ACA versus asking them about Obamacare, for instance). Large numbers of voters don’t care about policies.

They care about slogans—they care about being told that the party or politician cares about them, and will throw out the bastards, drain the swamp, clean house. Large numbers of people want to be reassured that their needs and desires for themselves are the only ones that matter and will be the first priority of the party/rhetor.

And a lot of voters vote on the basis of promises the candidate can’t possibly fulfill. This isn’t just something that Trump’s supposedly ignorant supporters do. Certainly, Trump promised to do things the President can’t do without thoroughly violating the Constitution (since he was proposing to dictate Congressional and judicial policies), but both Sanders and Clinton proposed policies that there was no reason to think they could get through a GOP Congress. I’m repeatedly surprised at the reactions of large numbers of people to SCOTUS decisions–many people (including smart and sensible friends) don’t seem to understand that it isn’t the job of SCOTUS to make sure that laws are “just”–it’s their job to make sure they’re constitutional.

In the early spring of 2016, I was in a hotel in Louisiana eating the fairly crummy free breakfast, and two men behind me were discussing Trump (they liked him). When they talked about how he was going to do something about all those poor people who lived off of the government, one of them said, “Well, what are you going to do? You can’t kill ‘em.” Then they got onto the subject of his plan for ISIS. One of them said, “They’re complaining that he won’t say what his plan is. But of course he can’t say what it is.” The other said, “Right, then ISIS would know it!” Trump’s promise was to develop a plan to crush and destroy ISIS within 30 days of taking office. His plan, as it turned out, was to tell the Pentagon to come up with a plan—as though that had never occurred to Obama?

What they needed was to believe he was the kind of person who could solve problems. He told them political issues are simple, and that he was a straightforward person who, like Perot, couldn’t be bought—he would genuinely represent them and their interests. And now he is saying that it turns out every single issue is complicated.

I often wonder about those two guys, and what they make of all this. If research on people drawn to simple solutions is accurate, then they’re doing one of three things: 1) rewriting history, so that they believe they never voted for him on the grounds that he could solve things quickly and easily; 2) making an exception for his finding things complicated, and taking his new admission that he was entirely and completely wrong in everything he said about politics as additional evidence of his “authenticity” and sincerity (and, since all they care about is that he sincerely cares about them, they’re good); or 3) regretting voting for him, but not rethinking why they voted for him, or what their assumptions were about how to think about politics.

That’s what happened with the Iraq invasion, after all. People who had supported it denied they’d ever supported it, denied it was a mistake, or blamed Bush for lying to them. They didn’t decide that their process of making a decision about the war was a mistake—they didn’t stop watching the channels that had worked them into a frenzy about Saddam Hussein’s (non) participation in 9/11 or the (non)existence of weapons of mass destruction. They didn’t stop making political decisions on the basis of hating Dems, or trusting a political figure because he seemed like someone who cared about them.

So, no, we can’t reach that sort of person with a more populist political agenda because it isn’t about the political agenda.

I think it’s also a mistake to think that, since they’re engaged in demagoguery, and it’s winning elections for them, that’s what we should do. Demagoguery, a way of approaching public discourse that makes all political issues a question of us (angels) versus them (devils), works for reactionary politics because reactionary politics is attractive to “people who fear change of any kind—especially if it threatens to undermine their way of life” (Parker and Barreto 6). Reactionary politics, according to Parker and Barreto and also Michael Mann, arises when a group is losing privileges (such as whites losing the privilege of being able to see their group as inherently superior to non-whites). Democrats played that card for years, and it worked, but now it would alienate as many people as it would win (or more). The research on “moral foundations” is pretty clear that, while loyalty to the ingroup is important for people who self-identify as conservative, fairness across groups is important for people who tend to self-identify as liberal. Any rhetoric that says “this group is entitled to more than any other group” will alienate potential liberal voters.

While there is a lot of lefty demagoguery, it’s internally alienating. That is, the presence of internal demagoguery is what makes some people very hesitant to support the Democratic Party. And now we’re back to the two narratives of 2016—both are demagoguery, and both alienate people. We need to imagine a way to move forward that doesn’t involve any one kind of lefty becoming the only legitimate lefty.

And demagoguery won’t get us there.

And that brings us to the second option: find a charismatic leader. That’s a great idea, and we should always hope that our candidates can come across as people who really care about “normal” people (with, I would hope, a broader version of “normal” than reactionary politicians present), but 1) that is only an option if there is a deep bench of Democratic governors and Senators, and 2) that still doesn’t get a reasonable balance in Congress, state legislatures, or among governors.

So, what went wrong in 2016? We had a shallow bench. There are lots of reasons for progressives’ poor showing at the state and Congressional level—low progressive voter turnout in 2010 that enabled gerrymandering, a tendency for progressive voters only to come out for the Presidency, and various other complicated things (including the success of factionalized hate media). What won’t work is something I hear a lot of progressives say: “We just need to run more progressives.” People have been saying that for a long time, and trying it for a long time, and sometimes running progressives works and sometimes it doesn’t, so there is no “just” about it.

The first thing lefty voters need to do is get out the vote at the state level. And I think we need to be very clear that we care about all kinds of voters, and lefty rhetoric about hillbillies and toothless white guys doesn’t help, so we also need to shut down classism as fast as we shut down any other kind of bigotry.

And we can’t win within the parameters of demagoguery, so we need to stop trying to play within them.

5. On the Democratic Party as a strategic coalition

At the beginning, I talked about my initial perception of politics as a contest between what is obviously the right course of action and various things that other people want—because they’re selfish, wrong-headed, corrupt, misguided. Compromise made a good thing worse because it was a question of how much bad had to be accepted in order to get some good done, and it should only be done for Machiavellian purposes. I think too many lefties operate within that model.

When the refusal to compromise goes wrong, it ends up landing people in purity wars, and those are never good for people who are trying to argue in favor of diversity and fairness. Purity wars can work well for authoritarians, racists, and people with what social psychologists call a “social dominance orientation,” but they don’t work well for the left.

So, simply refusing to compromise isn’t going to ensure better policies; it can ensure worse ones if, as happened under Reagan (or in Weimar Germany in 1932), the refusal to compromise means that the left is entirely excluded. Saying that refusing to compromise can be harmful isn’t to say that all compromises are good. I’m saying compromise isn’t necessarily and always good, but neither is it necessarily and always wrong. I’m saying that we should stop assuming it’s always evil, and we should stop falsely narrating effective lefty leaders as people who refused to compromise—they compromised. In fact, every effective leader on the left was excoriated in their time for having compromised too much.

The refusal to compromise comes from thinking about politics as a negotiation between right and wrong. We might instead think of politics 1) as the consequence of deliberation, not bargaining, 2) as an acknowledgement of the limitations of our own perspective, and/or 3) as a sharing of power with those people who share our goals. I think lefties would do well to think of at least some compromises as coming out of one of those three framings.

Here’s what I now think: thinking about compromise as always and necessarily wrong is bad, but neither is every compromise right. There are times when you say there is some shit you will not eat, and I am known as a difficult woman because I have refused to go along with various motions, statements, policies, and actions. I have nailed more than a few theses to a door. But I think lefties’ failure to think about compromise as anything other than distasteful realpolitik comes from, oddly enough, a less than useful way of thinking about diversity.

I think too often lefties accept the normal political discourse of thinking in terms of identity (even though we, of all people, should understand that intersectionality means that there aren’t necessary connections between a person and their politics), so we imagine that we have achieved diversity when we have a party that looks diverse—as though that’s all the diversity we need. So, we aspire to a political party that is diverse in terms of identity and univocal in terms of policy agenda. And I don’t think that’s going to work.

Instead of striving for a group that is univocal in terms of policy but diverse in terms of bodies, we need to imagine a party that is diverse in terms of what the Quakers call “concern.”

Early in the history of the Society of Friends, meetings struggled with what we would now recognize as burnout—people at meetings would speak of the need for everyone to be concerned about this and that issue, and everyone couldn’t be concerned about everything. So, there arose the notion that the Light makes itself known in different people in different ways, and that each person has a concern which is not shared with everyone. I think that’s what we on the left should do—we should be people concerned with inclusion, fairness, and reparative justice, and who are open to different visions of how those goals might manifest in moments of concern (and policy).

There are, of course, problems with calling for more diversity of ideology on the Left, including that it means cooperating with people whose views we think wrong. And so we have to figure out how much wrong we’re willing to allow. LBJ allowed Great Society money to go to corrupt Democratic machines, believing it was a necessary first step; Margaret Sanger cooperated with eugenicists, since it got her money and support; FDR compromised with segregationists in regard to the US military; Lincoln was willing to talk like a colonizationist to get elected and compromised with racists about pay for black troops. I don’t think they should have made those compromises.

There are some compromises that shouldn’t be made, and so we shouldn’t—but we should argue about what those limits are. And there may be times that we decide to compromise on purely Machiavellian grounds; I’m not ruling that out. But I am saying that lefties shouldn’t treat every disagreement as something that must be resolved with pure agreement on the outcome—that’s just a fear of difference. Lefties disagree. We really, really, really disagree. Lefties need to imagine that disagreement is useful, productive, and doesn’t always need to be resolved. We need to imagine a politics in which each of us gets something important for our well-being and none of us gets everything. And we need to stop hoping and working for a party of purity.

[1] If it helps one side too much, of course, then both end up losing—if interest rates are too high, no one takes out loans, and then lenders are hurt; or high interest rates might tank the economy, which can make it hard for lenders to find money to loan.

[2] It’s generally done through division—you and I are alike because we both hate them. Salespeople will often do it on big ticket sales, and con artists always use it.

[3] One sign of how factionalized a situation is is how often when I’m talking about this I have to keep saying that not all Sanders supporters are Sandersistas and not all Clinton supporters are Clintonistas. As scholars of group identity say, the more that membership in a group is important to you, the more that any criticism of any member of that group will feel like a personal attack.

[4] One of the odder arguments I sometimes hear people make is that Clinton was at fault for not motivating them—it’s the Presidency, not a hamburger; you’re responsible for making choices, and not a passive consumer of marketing. (Talk about a neoliberal model of democracy.) That argument irritates me so much I won’t even list it as a reason.

[5] While Hibbing and Theiss-Morse maintain this is not authoritarianism, because people want a direct connection to the halls of power when the government is not being appropriately responsive, I would argue that neither is it democratic (little d) in that there is no value given to deliberation or difference. And, of course, it’s how authoritarian governments arise—people give over all their power of deliberation to someone who will do it for them. When they want it back, they can’t always have it.

IV. “Decide for Peace or War:” How Hitler was normalized

This is the fourth in a series:
Introduction
Pt. I: "This collapse is due to internal infirmities in our national body corporate:" Popular science, their conspiracies, and agreement is all we need
Pt. II: "A source of unshakeable authority:" Authoritarian rhetoric
Pt. III: Immediate rhetorical background

From a September 3, 1944 tapped conversation between two Nazi generals who were British POWs, discussing when the German military should have refused to follow Hitler’s orders:

Hennecke: It should have been done in 1933 or in 1934 when things started.

Müller-Romer: No, the running of the state was still all right at that time. (From Tapping Hitler’s Generals 98)

The argument goes on for a while. Müller-Romer’s argument is that the political outcomes were just fine in 1933, and that the military should have waited until the political outcomes were worse. Müller-Romer says that “it wasn’t so bad before the war,” and Hennecke points out that it was: 1933 saw the jailing of Hitler’s political opponents. Hennecke’s most important argument was that political processes were set in place in 1933 that virtually guaranteed horrific political outcomes eventually. Hennecke was right.

In 1933, Hitler set in place the criminalization of dissent, a propaganda machine, and a single-party state—those are the governmental processes of authoritarianism. Hennecke was trying to argue that, once those processes are in place, dissent is impossible when the policy outcomes turn out to be bad. People have to protect, even in times when they like the policy outcomes, the processes they will need when they don’t like the policies.

Basically, anyone who took until after 1933 to realize Hitler was an authoritarian nightmare is someone who supported Hitler when it mattered. Realizing in 1939 that supporting Hitler was a mistake means that you’re thinking in terms of outcome and not process. Realizing in 1944 (as many of his generals did) that they had been backing the wrong horse is craven ambition—obviously, it was only the losing that hurt.

So, let’s assume that Hennecke was right, and 1933-34 was when the military should have tried to lead a revolt against Hitler’s dictatorship. Why didn’t they see that at the time? Why didn’t most people?

They didn’t because Hitler, in March 1933 (and in 1933 generally), was normalized. People who had fought against him now actively supported him, rationalized the violence of his supporters, insisted that he was at least better than the opposition, and believed that he was sincere in his professions of Christian faith (despite all appearances). The only group to vote against the act that enabled his dictatorship was the Social Democrats (democratic socialists; the communists would have voted against it, but they were banned or arrested). A rabidly factionalized press spun the situation as his being in control and decisive and finally doing the things that liberals had been too weak to do–such as cleansing the community of criminal elements. And those talking points were repeated by people who normalized behavior they had been condemning just months before.

People who think they would never have supported Hitler believe that they would never have supported a leader who pounded on the podium, screaming for the extermination of various races and an unwinnable war against every other industrialized country. And that’s what they think he did because, prior to 1933 (one might even argue late 1932), that was what he did. So, one way to think about the rest of this post is whether that test—I would never have supported Hitler because I would never have supported someone who advocated genocide and world war—is a good one for thinking about his March 23, 1933 speech. And the answer is that it isn’t.

The speech was part of the Nazis’ goal of establishing a one-party dictatorship, something that would be achieved in what was called “The Enabling Act.” They needed a 2/3 vote of the Reichstag, and a special election had been called for that purpose. They didn’t get 2/3, so they banned and arrested the communist leaders and declared they only needed 2/3 of the non-communist votes. That was a violation of the constitution. But, by the time Hitler spoke, they had done the math and knew the outcome.

Hitler’s speech was in the context of what Aristotle called deliberative rhetoric. There was a policy on the table, and so it would be expected that Hitler would engage in policy argumentation to support it (short version: he didn’t, and that’s important).

This was the Reichstag—the major deliberative body of Germany—and it was considering a major policy change; thus, in a healthy rhetorical community, Hitler’s speech on March 23, 1933 would have been deliberative rhetoric. He would have had to argue why the “Enabling Act” was an effective and feasible solution to real problems that would not go away on their own, and that the act would not involve solutions worse than the problem. He would have had to make that argument acknowledging the multiple policy options available, and to a community that was familiar with multiple sides and who insisted that he be fair to all those sides.

But Germany wasn’t a healthy rhetorical community. That isn’t what he did. He gave an epideictic speech, with bits of judicial. He didn’t engage in policy argumentation. Hitler’s speech has the overall structure of need/plan, but not in a policy argumentation way—it’s more like a skeezy sales pitch. Skeezy sales pitches have a rough need/plan organization, but the need is that you’re kind of a bad person and the plan part of the argument is that my product/company/election will solve that need thoroughly and completely. That rhetoric always begins by making the consumer slightly uncomfortable (insecure, ashamed, or worried), but with an implicit promise that they could be better. Pickup artists call it “negging” (“You would be pretty if you smiled”). And then the product is offered that will solve the problem; with pickup artists—and Hitler—the solution is the person. He didn’t engage any of the other parts of deliberative argument (consideration of multiple options, solvency, feasibility, unintended consequences).

Overall, Hitler’s argument was: things have been bad in so many ways, and real Germans have been consistently screwed over and ignored in our political system. The major decision-making body has been paralyzed by political infighting by professional politicians who haven’t been paying attention to the kind of people (in terms of race and religion) who are the real heart of this nation. Our relations with other countries have been completely lopsided, and we’ve been giving way more than we’ve been getting. We aren’t a warlike people, and we don’t want war, but we insist on the right to defend our interests. Liberals and communists are basically the same, in that liberalism necessarily ends up in communism. Situations are never actually complex, but people who benefit from pretending they’re complicated will say they are (teachers, experts, governmental employees, lawyers). The correct policies we should be pursuing are absolutely obvious to a person of decisive judgment—being able to figure out the right course of action doesn’t require expert knowledge or listening to people who disagree. The ideal political leader has a history of being decisive. And that person cares about normal people like you who are the real heart of Germany, and it’s easy for someone like you to know whether the leader has good judgment and cares about you—you can just tell. There is one party that supports the obviously correct course of action, and we should try to ensure that party has control of every aspect of government, and that there will be no brakes on what that party decides to do.

So, how does he do that? And why does it work?

He begins the speech with a vague reference to the proposal. It’s a proposal for shifting from a parliamentary system to a dictatorship, but he doesn’t say that. He says it’s “a law for the removal of the distress of the people and the Reich” (15). He grants that the procedure is “extraordinary” (a state of exception, so to speak), and gives “the reasons” for it, and his “reasons” are a purely need/blame argument (more appropriate for a judicial speech) that goes from the beginning till about fifteen paragraphs in (in the English—in the German, it’s about twelve), until he says, “It will be the supreme task of the National Government…”

I mentioned that Hitler’s policy solution was himself, and he sets up that solution by how he describes the problem. His argument is that Germany is undeniably in the worst imaginable situation possible (hyperbole that makes him seem to be completely on their side—his commitment to the ingroup is extreme), and that it is so for three reasons: first, the country has been led by Marxist politicians who are incompetent, deluded, just looking out for themselves, and/or actively villainous; second, the moral, political, and economic collapse of Germany “is due to internal infirmities in our national body corporate;” third, the “infirmities” of our life mean that nothing is getting done because we’re in a deadlock: “the completely irreconcilable views of different individuals with regard to the terms state, society, religion, morals, family and economy give rise to differences that lead to internecine war” (16). Those last two are especially significant, in that they signal what kind of policies Hitler would enact. His argument in those two is that there are “defects” in our national life, especially views “starting from the liberalism of the last century,” that have inevitably led to this “communistic chaos” (16). There are political views, he says, that enable the “mobilization of the most primitive instincts” and end up in actual criminality. He’s equating disagreement with violent political conflict, and blaming all of it on the presence in the community of a defect that will necessarily end in Soviet communism.

This whole argument of Hitler’s simultaneously promises stability—an end to disagreement and political paralysis–while ignoring that his own party was one of the major causes of the political paralysis, violence, and criminality of Weimar politics. Thus, this whole part of his argument is projection and scapegoating.

For instance, one of those “reasons” that his dictatorship is necessary is that it was the 1918 Marxist organizations that committed “a breach of the constitution” by putting in place a revolution that “protected the guilty parties from the hands of the law.” These Marxists, according to Hitler, tried to justify what they did on the grounds that Germany was guilty of starting WWI.

Let’s assume, for the sake of argument, that all of his claims are true (they aren’t).

Why in the world is he even arguing about who is to blame for the loss of WWI? Even if the Weimar democracy was created by evil witches who mistreated bunnies and shoved little old ladies out of the way in crosswalks, that wouldn’t make his dictatorship a good plan. The Weimar government might have been Marxist (it wasn’t), it might have been disastrous (its major problems were Nazis and Stalinists), it might have lied about WWI (it didn’t), but even were all of those things true, it still wouldn’t necessarily mean that Hitler’s becoming a dictator was the right solution. It isn’t even clear that the people who put a democracy in place at the end of WWI were acting in an unconstitutional way. But it was absolutely clear that Hitler was.

He needed 2/3 of the Reichstag vote to get the Enabling Act passed, and he didn’t have that number. So, he had Marxists arrested and prevented from entering the chamber, and he decided on an interpretation of the constitution that said that, because he had prohibited their entry, their numbers didn’t count toward what amounts to quorum. (That isn’t what the constitution said.) So, Hitler’s hissy fit about what “the Marxists” did in 1918 isn’t a very accurate description of what they did, but it’s a perfectly accurate description of what he was, at that moment, doing. That accusation of unconstitutional action was projection.

His whole argument about violence and paralysis was also projection, since the violence and refusal to compromise (the cause of the paralysis) came from both the Stalinists and Nazis. Hitler’s argument is the pretty standard argument for people who think they’re totally and always right (that is, authoritarians): our problem is that you are disagreeing with me. The conflict would stop if you just agreed with me.

Hitler’s argument can be summarized in what, following Aristotle, people call an enthymeme. “My dictatorship is necessary because the Marxists are just awful.” Hitler was relying on the tendency a lot of people have to decide that a conclusion must be true if they believe the evidence is true. (It’s how most, maybe all, scams work.)

Hitler’s kind of argument takes it one step further than even skanky associational arguments go. He’s saying that, if the economic disaster of post-war Germany can be associated with Leninist-Marxists in any way, then they caused it, and therefore Hitler’s dictatorship. His argument is “My dictatorship because MARXISM!!!” (Notice the slip between Leninist-Marxism and Marxism.) That isn’t a logical argument, but associational. Even were it true that the “Marxists” were responsible for Germany’s post-war plight (as opposed to the war itself being the problem), then the “solution” isn’t necessarily Nazism. There were lots of other economic and political systems opposed to Marxism.

After all, liberal democracy is opposed to Marxism (liberal democrats are the first people up against the wall, as Marxists so charmingly say), as are democratic socialists (who accept some aspects of Marx’s critiques of capitalism, but oppose—unhappily often with their lives since Soviet Marxists call them liberals—Soviet Marxism and generally any kind of violent revolution), non-Soviet Marxism (Trotskyites, for instance), non-Marxist kinds of communism, the odd monetary model long promoted by the Catholic church, mercantilism, and even various other kinds of volkisch and reactionary groups. Nazism had a lot of opponents; it wasn’t the only choice other than Soviet Marxism.

So, what Hitler did was to scapegoat Marxists for Germany’s post-war situation, and associate every political party opposed to him with Marxists. [1]

Calling the people who instituted the Weimar Constitution “Marxist” is a deliberate smear—it’s just insisting that everyone to his left (and most were) is Marxist (a not unheard of tactic in our own era). It’s an equation he makes later in the speech, and made consistently in his rhetoric—he characterizes all forms of non-authoritarian governments as Marxist.

That’s a kind of argument that appeals to people who can’t manage uncertainty, ambiguity, or nuance, and who see all members of any outgroup as essentially the same. When we are in fight or flight mode, we are drawn to binaries. Something is good, or it is bad. Something is right, or it is wrong. And, since they think in binaries, people drawn to that way of thinking believe that either you believe everything is absolutely right or wrong, or you believe anything goes. [2]

Such people would really like Hitler’s speech, since he presents the situation as absolutely black and white. I said that he presents himself—not a set of policies—as the solution to their problems. He says, it is obvious what needs to be done; it is obvious that our bad situation is the consequence of politicians who were either “intentionally misleading from the start” or subject to “damnable illusions.” They were just looking out for themselves, giving people “a thousand palliatives and excuses.” They just made promises they never kept.

He doesn’t argue that his (vague) policy is the best policy choice; he’s arguing that “Marxists” caused all of Germany’s problems and concludes from that claim that his dictatorship is necessary. That’s a fallacious argument in many ways. The logical form of Hitler’s argument is, as I mentioned, “My dictatorship is necessary because the Marxists are just awful.” Hitler’s dictatorship is in opposition to Marxism, and Marxism is bad, so his dictatorship is good. If you put that in logical terms, you have “A is necessary because not-A is bad.”

There are a lot of “not-A” out there. Were Hitler’s argument one that appealed to premises consistently, then he would also have to endorse this argument as equally logical: “Making my dog Louis a dictator is necessary because Marxism is bad.” After all, my dog Louis is also not a Marxist—he is not-A. Therefore, he would be just as great a leader as Hitler.

He wouldn’t be a great leader at all. He would mostly eat things, and demand a lot of walks. Whether he would have been a better leader than Hitler is an interesting question—he probably wouldn’t have been worse—but that wouldn’t make him a good leader. Yet, Hitler’s argument would apply as logically to Louis as it did to Hitler: after all, “Louis would be a great leader because Marxists are bad” doesn’t have any worse a major premise than “Hitler’s policies are good because Marxists are bad.”

And, let’s be clear: Louis is VERY opposed to any kind of Marxism.

And, really, that was Hitler’s argument, and that’s all it was. His argument wasn’t logical—he never put forward a major premise to which he held consistently. His argument was always “What I propose is good because I am good (decisive, caring about you, looking out for real Germans/Americans, not a professional politician, successful), and they are bad,” and as long as he could rely on his audience not to think too hard about that major premise (“anyone who is decisive, caring about you, looking out for real Germans/Americans, not a professional politician, and successful is proposing good policies”), then he was fine. And, I’ll point out again that Louis is very decisive, he cares about everyone, he is protective of his pack, he is not a professional politician, and he is very good at his job.

Simply looking at whether a claim has support is cognition, and I’m saying that good deliberation requires meta-cognition: that people look at how they are arguing, and that they don’t just ask themselves whether an argument seems true to them, but whether the way it’s being made is one they would consider good regardless of ingroup/outgroup membership.

Meta-cognition requires stepping back from an argument that justifies what you want to believe (what is called “motivated cognition”) to thinking about whether you would think your way of thinking was wrong if someone else used it. And that is the problem with the “I don’t care if it’s logical, I just know it’s true” line of argument. Do you endorse that kind of argument when other people make it? Only when they get to your conclusions. So, that method of making decisions (Hitler’s, by the way, and most authoritarians’) is about ingroup loyalty, and it’s okay if your ingroup is magically always right, but there is always something mildly narcissistic about it, since it assumes your intuitions are perfect.

People who reason that way tend to favor people to whom they feel close, while, the whole time, they think they are being fair. Since they are unwilling to consider whether their method of reasoning is bad, they never notice when they’ve made mistakes. They sincerely believe their method of reasoning is good because it’s always worked for them. The question is: would they know if it was a bad method? Do they have a system for checking if their intuitions and feelings are bad? Yes, their method is to ask their intuitions and feelings whether their method is bad.

Albert Heim reported that Hitler had told him, “I don’t give a damn for intellect[–] intuition, instinct is the thing” (Tapping Hitler’s Generals 165). That fits with what Hitler said throughout his rhetoric—he insisted people trust him because his intuitions were so good that he could reject any expert advice that contradicted him. (Like most authoritarians, he endorsed expert advice that confirmed his views.) I like the term epistemological populism for this way of thinking: the notion that something “everyone” believes is true, even if it’s empirically false, because experts are just eggheads (unless they agree with you), and so you can simply appeal to the popular notion.

What the people who make that argument don’t notice is that their “common sense” is only “common” to their ingroup. Their “popular” notion (that this group is lazy, that that group is greedy) never includes all the groups who might have an opinion on the issue—when they say “everyone,” they don’t include the outgroup. It’s one of the subtle ways we delegitimate (and even dehumanize) the outgroup. When we do this, we aren’t trying to delegitimate or dehumanize them. It’s just that we take our ingroup associations and universalize them—since I think squirrels are evil, and I only hang out with people who think they are, then it will come to seem to me obviously true that “everyone” agrees that squirrels are evil. If Louis, who CLEARLY thinks squirrels are evil, runs for office, I will feel that he represents “everyone.” I can ignore the squirrels’ opinion on the issue.

If you like Louis (and, really, who doesn’t? he’s adorable) and he makes you feel good about yourself, then you will not hold him to the same standards that you hold other political figures. You will look for reasons to support him, and you will find them (you are motivated to use your cognitive powers to justify his actions), and so you will think your support of him is rational, since you can find examples and arguments to support your claims about him and his claims about himself.

But what you can’t find will be major premises that you will consistently endorse. Louis is great because he says he’s nice to you. The other candidate tries to be nice to you, but that’s just cynical manipulation on their part. Louis said something untrue, and so did that candidate. Louis was mistaken, but that candidate was lying.

Hitler played on that tendency brilliantly in this speech. Hitler made a set of claims his audience would like hearing: there is disorder, decay, uncertainty, and weakness. We don’t want to listen to any argument that Germany was to blame for WWI, or that we lost it, or that the Versailles Treaty wasn’t much worse than the treaty imposed on the French after the Franco-Prussian War of 1870.

What he said was, “You’re humiliated right now, but you could be awesome with me as dictator. Germans are humiliated right now, but will be great once you put all power in me.” (Or: you would be pretty if you smiled.) Marxists are bad, and I am the kind of person who will impose order, end decay, never admit uncertainty, and will always be strong.

That claim involves the rhetorical strategy of projection. Whether Germany was at fault for the war is an interesting question (most scholars say yes, but very few say that only Germany was at fault), and whether the installation of the new constitution in 1918 was done in a constitutional way is an interesting question, but there is no doubt that Hitler’s pushing through of the Enabling Act violated the terms of the constitution. That move is called projection because it’s taking something you are doing and projecting it onto someone else—like a movie projector.

And it tends to work because it’s a particularly effective instance of the large category of fallacies involving a stasis shift (generally called fallacies of relevance).

Hitler’s argument shifts the stasis off of his weak points (whether he has pragmatic plans and just what they are) to ones he thinks he can win—that Marxists are bad, and that “real Germans” (the “volk”) are beleaguered victims of a political system that rewards professional politicians for their dithering.

All that people know about Hitler’s policy is that he is abandoning democracy in favor of a single-party state that explicitly favors his party over others—the judicial system, educational system, arts, parliament, churches, science, and military will all be purified of anyone who isn’t fanatically committed to his political party.

Hitler is working on the basis of what Chaim Perelman and Lucie Olbrechts-Tyteca called “philosophical paired terms.” People who think in binaries also tend to assume that the binaries are necessarily logically chained to each other (which is why Laclau called them equivalential chains). So, for Hitler, there is a binary between “order” and “disorder” and that pair is necessarily connected to “his dictatorship” and “democracy.” Think of these terms as like the logic sections of some standardized tests that have questions like: “Tabby is to cat as pinto is to [what].” The answer is supposed to be “horse.”

Hitler’s argument is: his dictatorship is to democracy as order is to disorder, as peace is to violence, and as real Germans are to Marxists.

That chain of paired terms is what enables Hitler to get to what is actually an amazing argument for a purportedly Christian nation: that valuing fairness across groups is suicide, and part of a plot to weaken Germany.

And there’s a really interesting characteristic of this kind of argument. It’s normal for people to assume that an authoritarian state provides more order than a democratic one, and that it therefore is peaceful, but that’s an associational argument [strong father model], not an empirical or logical one. Authoritarian states take the conflict, violence, and chaos, and put them out of sight of “normal” people (a category that tends to get defined in increasingly small ways as time goes on). Empirically, and this was especially true in Hitler’s regime, authoritarian single-party governments have extraordinarily disorderly policies (they follow the whim of the person or people in charge), completely arbitrary applications of coercion, and systemic violence (think about how segregation operated in the Southern US).

But Hitler tries to equate his party with order, when the Nazis were the source of much (most?) of the disorder. The Freikorps engaged in random violence against Jews and lefties of various stripes. The Stalinist communists also engaged in violence, but there is no indication that democratic socialists, let alone liberals, relied on violence. So, the notion that Hitler’s party was opposed to violence just didn’t fit the situation, but his supporters appear to have accepted it.

And they did so, I’d suggest, to the extent that they followed his associational chain. He chained various things together through association—order, authority, control, honor, true German identity, purification, peace, trust in him. He also throws victim/villain into the chain.

Logically, Nazis were not pure victims of violence. They were, in fact, murderers, thugs, and extortionists, but they were tolerated because the police and judges generally liked them (since their violence was against Jews and liberals). They got caught out in sheer murder (of Konrad Piezuch), and Hitler’s stance was that Nazi violence was always already self-defense. And Hitler’s chain of connections enabled him to connect Nazis to victims of violence. A reasonable description of the situation would have made Nazis mostly villains but also victims. Once you have a culture (or argument) that is only going to reason through paired terms, then Nazis are either victims or villains (in that world, you can’t be both). Since Nazis are connected to order, and order is opposed to violence (assertions Hitler made elsewhere in his argument), then, by the time he gets to Nazi murderers, it would seem “logical” to see them as opposed to the villains (communists), so they MUST be victims.

And Hitler did sound more reasonable than he had in his beerhall speeches. He never said the word “Jew,” and only mentioned race twice. He didn’t say anything about Aryans, and talked a lot about the “volk.” For many people, the term simply meant “the people,” but for people steeped in the long and racist “volkish” literature, it meant the racial group that constituted true Germans. So, it was a dog whistle, unheard by many, but whistling up racism in others. Hitler used other racist dog whistles–he talked about decay, infirmities, the need to detoxify our public life, the “moral purging of the body corporate.” He called for greater spiritual unanimity, and ensuring that all art and culture would “regard our great past with thankful admiration” (19, emphasis added), so “blood and race will once more become the source of artistic intuition.” Someone who wanted to see him as a person who had changed (or who had never meant the racism) could point to the apparent absence of racism; someone who wanted to see him as the beerhall demagogue who would purify Germany of unwanted races could see him as someone who hadn’t changed.

But, or perhaps and, Hitler’s speech made a lot of promises that a lot of people who really wanted an end to the uncertainty of Weimar Germany politics would like to hear. The bulk of Hitler’s speech (where the plan should be laid out) is a series of vague assurances regarding the churches, the judiciary, economics (including his policies toward agriculture, the unemployed, and the middle classes, self-sufficiency), and foreign policy.

Those promises are:

    • Church. He calls for a “really profound revival of religious life,” implies he will not compromise with “atheistic organizations,” and suggests that he believes religion is the basis of “general moral basic values.” He says his government “regard[s] the two Christian confessions [Catholic and Lutheran] as the weightiest factors for the maintenance of our nationality” and promises “their rights are not to be infringed” (20). He says the government will have “an attitude of objective justice” toward other religions, something Catholics and Lutherans would like hearing: he connects the nation and their religion and doesn’t intend to put “other religions” on an equal footing with them (his audience would probably think immediately of Judaism, and possibly Jehovah’s Witnesses). Since Hitler was not himself a particularly religious person, and his organization had a lot of people in it openly hostile to Christianity, this alliance of his party with the two most powerful religious organizations would be reassuring, and it did seem to be persuasive (the Catholic Centre Party voted for the Enabling Act).
    • Judiciary. Hitler was clear that he wanted a factionalized judiciary that didn’t respect the rights of all individuals equally (an Enlightenment value). The judicial system should, he said, make “not the individual but the nation as a whole alone the centre.” For him, the nation is the “volk” (discussed above), and judges should always put the concerns of the volk first—not abstract principles of due process.
    • Economics. Here Hitler was especially vague (which is saying something, considering how vague the whole speech is). He said the government would protect the economic interests of “the German people” not by “an economic bureaucracy to be organized by the state, but by the utmost furtherance of private initiative and by the recognition of the rights of property.” This was a clever apparent disavowal of the socialism that was central to Nazism in its beginnings, but one that wouldn’t alienate those people in the party who thought Hitler was still a socialist (he would later have them killed).

He insisted on the importance of German agriculture, promised to use the unemployed to help production, told the middle classes that “I feel myself allied with them” (a classic scam-artist claim, since he was actually a millionaire who didn’t pay taxes, and his policies wouldn’t help the middle class—it’s one of only two times he used the first person in the speech, which is rhetorically interesting), admitted that pure self-sufficiency was not possible, and then slowly moved into the more bellicose aspect of his speech.

When talking about the debt, he presented his stance as reasonable, in that he was simply insisting on fairness, a theme he drew into discussions of foreign policy. In the English translation, this section and the next (pages 22-23) have italicized text, in which he takes a strong stand toward other countries, claiming that Germany’s policies were forced on it by the unreasonable behavior of other countries. And that theme leads him to what appears to be an absolutely clear statement of his policy.

For the Overcoming of the Economic Catastrophe

three things are necessary:–

  1. absolutely authoritative leadership in internal affairs, in order to create confidence in the stability of conditions;

  2. the securing of peace by the great nations for a long time to come, with a view to restoring the confidence of the nations in each other;

  3. the final victory of the principles of commonsense in the organization and conduct of business, and also a general release from reparations and impossible liabilities for debts and interest. (24)

People often mistake a set of assertions presented in what rhetoricians call “the plain style” for “a clear argument.” They aren’t the same thing at all, or even necessarily connected. A statement of Hitler’s policies would explain how authoritative leadership will create confidence—he’s got an associational argument, not a logical one. An incompetent authoritative leadership (one that starts a war, for instance, or engages in kleptocracy) won’t necessarily stabilize conditions, and stable conditions won’t solve the worldwide depression. That’s a clear statement of a vague policy.

The second is simply a lie, but a comforting one, since Hitler’s previous rhetoric had been so war-mongering—that clear statement of a vague policy would make gullible people feel that Hitler’s previous rhetoric had just been to mobilize his base, or perhaps that the responsibilities of leadership had sobered him. And, even had he actually meant it (he didn’t), Germany’s economic situation wasn’t the consequence of concern about war.

People love to hear that leaders will now act on common sense. We like to believe that our views are shared by all reasonable people, that the solutions to our problems are obvious, and that experts and eggheads should just be ignored in favor of what regular people believe. Appealing to his audience’s “common sense” also enables Hitler to sneak past the rhetorical obligation of saying what policies exactly he’ll pursue—a sympathetic person will believe he has, since they will now offer their own notions of common sense in the place of the policies he hasn’t mentioned.

Hitler promises he can achieve all these things, but not if “doubt were to arise among the people as to the stability of the new regime”—one of the ways he tugs on that set of chained terms. Stability and peace are linked, and in opposition to democratic deliberation. So, he says, he will continue to respect the Reichstag, but they won’t meet.

There is a jaw-dropping instance of strategic misnaming in his penultimate paragraph. He says (and it’s in italics in the English): “Hardly ever has a revolution on such a large scale been carried out in so disciplined and bloodless a fashion as the renaissance of the German people in the last few weeks” (26). In fact, the violence of the previous weeks was unparalleled. As Richard Evans says, after January 30, when the Interior Ministry ordered that police no longer provide protection for opposition meetings, “Nazi stormtroopers could now beat up and murder Communists and Social Democrats with impunity” (320). As Evans says, in January the Nazis “unleashed a campaign of political violence and terror that dwarfed anything seen so far” (317). Hitler is simply insisting on his version of truth—that his audience would know it to be inaccurate wouldn’t change their perception of it as “true” (that is, truly loyal to the group—what is called a “blue lie”), and it would make them see him as strong. And then we get the second time he uses the first person—having just uttered a blazing lie, he says, “It is my will and firm intention to see to it that this peaceful development continues in future” (26).

That sentence is so rhetorically brilliant that it is chilling. He is simultaneously threatening violence, renaming violence “peaceful,” and, because he’s claimed there wasn’t violence, giving himself plausible deniability. The dogs all perk up their ears at that very loud whistle, and the members of the Reichstag know that he is telling them: either support the Enabling Act, or there will be civil war.

And he ends his speech by saying, “It is for you, Gentlemen, now to decide for peace or war.” And they did. They decided for war—one that would claim to be a war bringing world peace by exterminating difference.

In 1933, Hitler gained enough legitimacy to put in place authoritarian processes because 1) he managed to look enough less demagogic when arguing for the Enabling Act than he had during the previous years to make people think he had changed (or that the demagoguery had all been an act); 2) in the speech defending the act, he promised a political agenda a lot of conservatives and reactionaries supported (ending the chaos of Weimar Germany, getting better deals in terms of treaties and agreements than the weak previous governments had gotten, protecting Catholicism and Lutheranism, protecting normal people, preserving peace, building the German economy, and just generally being decisive; he also promised, in dog whistles, to purify Germany of immigrants and Jews); 3) he appeared to be a better choice than Soviet communism (since all liberalism is communism); 4) the Catholics and Lutherans decided their political agenda was more likely to get enacted with him, and he promised to support them, although he’d never been a particularly good Christian prior to his election; and 5) the political situation seemed to be simultaneously chaotic and paralyzed, and many people said it was because people like them had made bad choices, but Hitler said people like them were awesome and had never made bad choices, and that it was just evil politicians, and he wasn’t one, so they should trust him. (This point ignored that Hitler and his party had been crucial in making sure that democracy didn’t work.)

The whole “this person isn’t Hitler because I’d know Hitler” argument assumes that the Hitler of 1933 was a strikingly abnormal rhetor, and, certainly, Hitler’s rhetoric could be abnormal. When my students read Mein Kampf, they complain that he manages to be boring, enraging, and incoherent at the same time, and it’s an odd achievement for a text to do all three simultaneously—you’d think something enraging would at least manage not to be boring. Once we were using an online version that had skipped a page, and it took us a while to notice, because the page jump made his argument only slightly more disconnected than usual. As mentioned earlier, the basic themes in Hitler’s rhetoric weren’t unique to him, and many Germans would have been consuming the same racist and militaristic rhetoric (even the lebensraum notion), but it was at least somewhat abnormal for a rhetor with major political ambitions to be so explicit and frothing at the mouth about them. But he was only that open until he became Chancellor.

So, the question of “Is this person just like Hitler?” generally appeals to a cartoon understanding of who “Hitler” was. It’s the wrong question. The question is whether they would have supported a leader who said: things have been bad in so many ways, and real Americans have been consistently screwed over and ignored in our political system. The major decision-making body has been paralyzed by political infighting by professional politicians who haven’t been paying attention to the kind of people (in terms of race and religion) who are the real heart of this nation. Our relations with other countries have been completely lopsided, and we’ve been giving way more than we’ve been getting. We aren’t a warlike people, and we don’t want war, but we insist on the right to defend our interests. Liberals and communists are basically the same, in that liberalism necessarily ends up in communism. Situations are never actually complex, but people who benefit from pretending they’re complicated will say they are (teachers, experts, governmental employees, lawyers). The correct policies we should be pursuing are absolutely obvious to a person of decisive judgment—being able to figure out the right course of action doesn’t require expert knowledge or listening to people who disagree. The ideal political leader has a history of being decisive. And that person cares about normal people like you who are the real heart of America, and it’s easy for someone like you to know whether the leader has good judgment and cares about you—you can just tell. There is one party that supports the obviously correct course of action, and we should try to ensure that party has control of every aspect of government, and that there will be no brakes on what that party decides to do.

If you would support someone making that argument, then Congratulations! You just endorsed Hitler’s argument in his March 23, 1933 speech!

[1] Again, not unheard of in our own time, and it’s done by people who get their panties in a bunch if anyone connects reactionary politics with other instances of reactionary politics—such as pointing out a possible connection between the SBC stance on gay marriage and its stance on segregation, or, perhaps, its formation and the connection to proslavery rhetoric. And, no, I’m not saying that everyone who now supports the SBC supports slavery. What I am saying is that the SBC has consistently gotten it wrong in regard to issues of race, and so maybe their exegetical method is flawed. If they keep getting an outcome that they later regret, maybe there is a process problem.

[2] They don’t live their lives that way, a point pursued elsewhere at greater length, but here I’ll just say that they will say something like “murder is wrong” and then have all sorts of exceptions and complicated cases. They manage to get dressed for work without being certain what the weather will be, and to pick a new show to watch without being certain they will like it (often, they just refuse to acknowledge the uncertainty).

“I cannot explain why it does not affect me:” How to make a Hitler comparison (Introduction)

Godwin’s Law is a reasonably good statement about internet arguments–that the argument is over when someone accuses the other side of being just like Hitler–because “Hitler” is what rhetoricians call an “ultimate term;” that is, all connotation and no denotation. It’s a word that powerfully evokes a set of closely associated ideas, the precise connections among which are surprisingly vague (“freedom,” “terrorist,” “political correctness”). People think they’re making a clear reference, but they aren’t (as you can tell if you ask them to define the term precisely; they just get mad). Since the invocation of Hitler is simultaneously powerful, apparently clear, but actually unclear, comparing an opponent to Hitler ends a conversation because there appears to be no useful way to refute or support the comparison.

So, what would it mean to try to have a reasonable conversation about Hitler: who he was, what he did, and how he got a fairly normal country to hand all power over to him and support him both in a policy of “cleansing” Europe of entire groups defined by religion, ethnicity, or behavior, and in taking on almost every other European power and every other major industrialized nation?

If we want to know whether a current leader is like Hitler in some significant way, then we need to look at how Hitler looked in the moment, and not just through the lens of what was revealed about him later. Knowing how things played out, and what we now know, is useful, but it’s just as useful to understand why people didn’t predict those things, or didn’t know what we know. And I think a good place to start for thinking about why people didn’t worry about him as much as we think they should have is his March 23, 1933 speech to the Reichstag. Talking about that speech requires some background on Hitler and his context, and talking about comparing a current leader to Hitler requires at least a little bit of an explanation about Hitler analogies.

Everyone is like Hitler in some way–they have a two-syllable name, they’re charismatic, they like dogs, they eat pasta. An argument about a historical comparison needs to be about whether the analogy is apt: whether the similarities are causally important to the outcome we want to avoid (Hitler didn’t destroy Germany because he liked dogs).

After all, Hitler did a lot of things–he was vegetarian, a dog lover, a shitty painter, a racist, a lame architect, an authoritarian who was cozy with the industrial class, a poseur art critic, a millionaire who dodged his taxes, a traditionalist when it came to gender roles, a charismatic leader. We worry about whether a current leader is just like Hitler because we’re worried about whether that leader will drag a country into authoritarian government, unnecessary war, an ultimately disastrous economic policy, the jailing of all political opponents, and genocide.

And so we need to figure out which of his characteristics are causally related to those outcomes. Being a dog owner wasn’t one of them. Being authoritarian, racist, and a charismatic leader (not a leader who is charismatic) was causally related to those outcomes, but the relationship isn’t a necessary one in the logical sense: not all racists engage in genocide. Genocide is always racist, but not all racism ends in genocide.

So, how did he do it? Hitler didn’t take a nation of tolerant and peaceful supporters of democracy and wave a word wand that magically transformed them into racist warmongers. He did four things. First, he rode various very powerful cultural and political waves in Weimar German culture to power. Second, when in power, he transformed Germany into a one-party state. Third, between 1933 and 1939 (by which time it was incredibly dangerous to oppose him), he made things better for a lot of Germans. Granted, he did so in ways that would only work for the short term, but people tend not to ask about the long term. Fourth, and the one I want to talk about here, he made his authoritarianism look like not authoritarianism by reframing it as decisiveness, a stance that was helped by his carefully controlling his public image and public rhetoric, looking more reasonable than anyone expected–he had set a low bar–and saying that he just wanted peace and prosperity. He had a rhetoric that made people feel they could trust him.

And so what was that rhetoric?

Pt. I: “This collapse is due to internal infirmities in our national body corporate:” Popular science, their conspiracies, and agreement is all we Need

Pt. II: “A source of unshakeable authority:” Authoritarian rhetoric

Pt. III: Immediate rhetorical background

Pt. IV: “Decide for Peace or War:” Hitler’s March 23, 1933 speech before the Reichstag

Let’s reinvigorate the charge of religious bigotry

In the US, the term “bigot” is used interchangeably with “racist,” but for a long time it referred to religious, not racial, intolerance. At a certain point, it came to be used more broadly for someone who could not be persuaded out of a belief, religious or political. The OED gives the first three definitions as:

A religious hypocrite; (also) a superstitious adherent of religion; A person considered to adhere unreasonably or obstinately to a particular religious belief, practice, etc.; In extended use: a fanatical adherent or believer; a person characterized by obstinate, intolerant, or strongly partisan beliefs. (OED, Third Edition, December 2008)

The OED notes that Smollett in 1751 condemned the political discourse of his era by referring to “The crazed tory, the bigot whig.”

And that’s what’s wrong with our political discourse. It isn’t whether people are “civil” or “hostile” or even “racist.” Our problem is that our political discourse is dominated by bigoted discourse. And a lot of those bigots pretend that their views are reasonable ones related to Scripture.

Democracy works when most people are open to persuasion, and it doesn’t work when too many of us are bigots. Being open to persuasion doesn’t mean that you’ll change your mind every time someone gives you new information (the test apparently used by some studies about persuasion), but it does mean that you can imagine changing your mind, and, ideally, you can identify the conditions under which you would change your mind.

A.J. Ayer famously argued that some beliefs are falsifiable (which he described as scientific) and some aren’t (which he defined as religious). I think he was wrong in the notion that science is always falsifiable and religion never is, and there are other quibbles with his claim, but, having spent a lot of time arguing with people in academic, nonacademic, fringe, and just fucking loony realms, I have come to think that, while there are lots of good criticisms of the specifics of his argument, his general point–that we have beliefs we are open to changing and beliefs we will not change–is a useful and accurate description. (In fact, a lot of discussions about whether an argument is worth having begin with exactly that determination–are you open to changing your mind about the argument? Are you arguing with someone who is?)

A bigot is someone who cannot imagine circumstances under which she might change her mind. Or, more aptly, a bigot is someone who imagines herself as never wrong, and always able to summon evidence to support her position. What she can’t imagine (and this is what makes her irrational) is the evidence that would prove her wrong, and she condemns everyone who disagrees as so completely and obviously wrong that they should be silenced, without her ever having carefully listened to their arguments.

I do believe that Jesus is my savior, and in a God who is omniscient and omnipotent. That belief is not open to disproof. And I am comfortable with calling that a religious belief. And, so, in that regard, I am a bigot. On the other hand, I’ve read the arguments for atheism, and various other religions, and I don’t think advocates of those beliefs should be silenced.

In addition, I don’t believe that those two claims necessarily commit me to beliefs about slavery or segregation—and it’s important to remember that, for much of American history, there were entire regions in which it was insisted that being Christian necessarily meant supporting slavery and segregation. When Christian scholars of Scripture pointed out that the Scripture-based defenses of slavery and segregation were problematic, they were condemned as having a prejudiced and politicized reading of Scripture by people who insisted that Scripture endorsed US slavery practices. The notion that Scripture justified slavery as practiced in the US South, especially after 1830 or so, was a bigoted reading of Scripture—not because I think it was wrong, but because its proponents refused to think carefully or critically about their own reasons and positions. They could “defend” slavery in that they could come up with (cherry-picked) proof texts, but they couldn’t (or wouldn’t) argue fairly with their critics, and they couldn’t (or wouldn’t) articulate the conditions under which they would change their minds. There were none. It isn’t what they argued, but how they argued, that earns them the title of bigot.

Furthermore, they banned criticisms of slavery, enforcing that ban with violence. So, they had both parts of the bigot definition—their views weren’t open to disproof, and they advocated refusing to listen to criticism of their views. They were bigots on steroids, in that they advocated violence against their critics.

Right now, we’re in a situation in which a lot of very powerful people are insisting you shouldn’t listen to criticisms of the current GOP political agenda, and they’re claiming that their views are grounded in Scripture, and they are implicitly and explicitly advocating violence against their critics. You should read them. (You can start with American Family Association, or Family Research Institute, or any expert cited on Fox News. Really—go read them.)

They call themselves conservative Christians. But being theologically conservative in Christianity does not necessarily involve the current GOP political agenda. For instance, there are conservative Christian arguments for gay marriage, for women working outside the home, against patriarchy, against the argument that charity should be entirely voluntary, and even the connection between conservative Christianity and abortion is fairly new. I’m not saying that true conservative Christians have this or that view–I’m saying that being conservative theologically doesn’t necessarily lead you to the GOP political agenda. After all, it was, for a long time, argued that being a conservative Christian necessarily led to endorsing slavery and segregation, and conservative Christians don’t make those connections anymore–why assume that current “necessary” connections (made with the same exegetical method as the “necessary” connections to slavery and segregation) are any better than those? And even many conservative Christians who argue for positions more or less in line with the current GOP political agenda don’t do so in a bigoted way. So, there’s nothing about being a conservative Christian that requires religious bigotry.

So, let’s stop using the term “conservative Christian” for people who insist that being a true Christian so necessarily means believing that the GOP agenda is right that everyone who disagrees should be threatened with violence till they shut up. Using “conservative Christian” for what is actually authoritarian bigotry is strategic misnaming. Whether the Founders imagined a Christian nation is open to argument; whether they imagined a nation without disagreement is not. They valued disagreement; they valued reconsideration, deliberation, and pluralist argument.

People who pant for a one-party state, who tell their audience not to listen to anyone who disagrees, and who threaten (or justify threatening) their critics with violence are violating what the Founders said our country means. They may or may not be Christian (since they’re explicitly violating the “do unto others” rule, I think that’s open to argument), but they are showing themselves to be anti-democratic authoritarian bigots.

And here is one last odd point about people like this (since I spend a lot of time arguing with them). They have a tendency to equate calling them authoritarian bigots with calling for silencing them, and that’s an interesting and important instance of projection. Because they believe that people who disagree with them should be silenced, they hear all criticism of their views as an argument for silencing them.

We shouldn’t silence them. We should ask them to argue, not just engage in sloppy jeremiads. I think our country is better if there are people who are participating in public discourse from the perspective of conservative Christianity. I think that’s a view that should be heard, and it can be heard without insisting that all other views be threatened into silence.

[The image at the top of the post is from a series of stained-glass images celebrating the massacre of Jews.]

Neoliberalism, liberalism, neopurconliberalism and why some people hate the ACA (pt II)

This was originally part of another post, but I cut it from that one. There’s a bunch of stuff floating around these days, though, about how we shouldn’t use the term neoliberalism, as well as a lot of flinging of the term at fellow lefties with whom we disagree, so I thought I’d go ahead and post it.

Elsewhere, I argued that the GOP objection to the ACA is grounded in the just world hypothesis—the notion that good things happen to good people and bad things happen to bad people–and so good things (money, healthcare, food) should only be given to good people. If people want healthcare, for instance, they should get a job. If they don’t have a job, they aren’t a good person, after all.

There’s also the argument that many in the GOP objected to the ACA only because it was Obama who supported it. And that’s a reasonable argument. The ACA was based, after all, on the recommendations of a very conservative think tank and on Mitt Romney’s healthcare plan in Massachusetts. The argument is that they didn’t want any Democratic plan to succeed because our political landscape is so rabidly factionalized that parties are willing to do harm to the country as a whole rather than let the other side succeed.

And that rabid factionalism certainly mattered, but I think there is also a sincere ideological objection, having to do with hostility to third-way neoliberalism (explained below) and the rise of what might be called neopurconliberalism (so named because it’s a muddle of various philosophies).

Loosely, Obama’s healthcare plan was a classic example of his tendency toward what political theory folks call “third-way neoliberalism.” Although in popular usage, “liberal” means people who believe in a social safety net (and tend to vote Dem or Green), in political theory, “liberal” means people who accept the Enlightenment principles of universal rights (especially property, due process, and fair trial), a separation of church and state, minimal interference in the market, and a separation of public and private. Until very recently (the 2000s, really), most GOP and Dem voters were liberal, and it was the dominant lay political theory (meaning how non-specialists explained how a government should work). There were lots of arguments as to what “minimal interference” meant, and what is private (for instance, for years, wife-beating was considered a private act, outside the realm of government “interference”). So, most people agreed on the principle but disagreed as to how the principle plays out in specific cases.

The other category that matters for thinking about hostility to Obamacare is democratic socialism, which is often used to describe systems in which the government is democratic (little d) and the government provides an extensive safety net. Democratic socialist countries tend to have high taxes and excellent infrastructures.

In the 1970s or so, a lot of economic theorists began arguing for what is often called “neoliberalism,” which is not “liberal” in the common sense–in fact, it’s deeply and profoundly opposed to the principles of someone like LBJ, JFK, or FDR. Neoliberalism says that the market is purely rational, and we should take as much as we can away from the government and put it into the private sector. Neoliberals don’t vote Dem, and they don’t fit the common usage of liberal–they tend to vote GOP or Libertarian. Supporting neoliberalism requires ignoring the whole field of behavioral economics and all the empirical critiques of the fantasy of the rational market, but neoliberalism and neoconservatism both got coopted by people whose political and economic theories are purely ideological (in the sense that their claims are deduced from their premises, and their premises are non-falsifiable–that is, there is no evidence they would accept to get them to reconsider their premises).

On the far right, there emerged an ideology that might be called neopurconliberal: a reemergence of one very specific aspect of early American Puritanism (that wealth is a sign of saintliness), entangled with the neocon assumption that the US is entitled to dictate to all other countries how they should do things–an entitlement that should be enforced through a domination-oriented “diplomacy” and the continual threat of intervention (so, shout a lot and carry a big stick)–and the neoliberal notion that as many social practices as possible should be thrown into the market (so there are no public goods that should not be sold). Or, more accurately, the far right thoroughly and completely endorsed the “just world hypothesis” (that everyone in this world gets exactly what we deserve).

Neoliberals (who aren’t necessarily religious at all) and neopurconliberals found common ground on public policies like deliberately underfunding public schools, universities, the arts, the USPS, Social Security, and Medicare–the neoliberals because they believed (in a non-falsifiable way) that the market is always better, and the neopurconliberals because they don’t want a secular government that provides goods, and want the goods of the world (healthcare, education, retirement benefits) connected to being what they consider a Christian.

Third-way neoliberalism has two defenses. One is that, given that we are in a post-Citizens United world, no one can win without a lot of money, because low-information voters are persuaded by ads, no matter how misleading or how thoroughly rebutted. And while it might be nice to imagine that a political figure could get elected by getting all the necessary money from the 85% and from members of the 15% who happen to be committed to democratic socialism (probably not a large number), the pragmatic solution is to make sure that the Dem candidate can make large numbers of very wealthy people believe that they will thrive under Dem policies. So, the pragmatic version of third-way neoliberalism says it is a compromise we need to make.

The other version says that the information economy changes everything, and that the Democratic values of honoring workers, having a strong social safety net, being inclusive, having a bright line between religion (private) and secular activities (public), investing in infrastructure, creating stable and productive relationships with other countries, and enabling social mobility can be achieved in partnership with the kinds of industries that would also benefit economically from such values being common.

If you think about it in terms of healthcare, you can see how these ideologies play out. Democratic socialism would have single-payer health care in place, with most healthcare provided by the state and paid for by taxes of some kind. Neoliberalism would leave it all up to the market, with little or no governmental control of insurance companies or healthcare providers. Third-way neoliberalism would try to develop a system that created profit incentives for insurance and healthcare providers to serve everyone—more governmental control (such as mandates) than neoliberalism, but not by providing the insurance or healthcare directly (as would happen in democratic socialism).

I really like Bertrand Russell’s argument for socialized medicine. Here’s the problem every healthcare plan faces: it’s the problem of a gambling establishment, because insurance is just legalized gambling. If you are running a casino, you need to make a prediction as to how much you will pay out, and you need to ensure that you take in more than you have to pay out. So, you have to have a system that collects enough from the losers to pay out the winners.

Russell’s argument was exactly right: casinos work because losers pay into the system more than the winners take out. And that’s how insurance works. You have a lot of people who pay to play on the grounds that they might be someone who later gets a lot. You pay a dollar for a lottery ticket, not because you’re certain you’ll win the lottery, but because you’re willing to pay for the chance that you might win. You pay into a benefits pool, not because you’re certain you’ll win, but because you think you might.
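
To make Russell’s point concrete, here’s a minimal sketch of the pool arithmetic (the numbers are invented for illustration, not real actuarial figures): everyone pays in, a few draw large amounts out, and the pool only works if what comes in at least covers what goes out.

```python
# A toy insurance pool with invented numbers, just to show the "casino" logic:
# the pool works only if what everyone pays in covers what the few draw out.

def pool_is_solvent(members, premium, claim_rate, avg_claim):
    """True if expected premiums cover expected payouts."""
    money_in = members * premium
    expected_payouts = members * claim_rate * avg_claim
    return money_in >= expected_payouts

# Hypothetical pool: 100,000 members, 5% of whom file a claim averaging $20,000.
members, claim_rate, avg_claim = 100_000, 0.05, 20_000

# Expected payout per member is 0.05 * 20,000 = $1,000, so a $1,000 premium
# just breaks even; anything above that is the house's (the insurer's) take.
print(pool_is_solvent(members, 1_000, claim_rate, avg_claim))  # True (breaks even)
print(pool_is_solvent(members, 900, claim_rate, avg_claim))    # False (pool loses money)
```

The point isn’t the particular numbers; it’s that every plan, single-payer, private, or some hybrid, has to make that inequality come out true somehow.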

The argument about healthcare is an argument about how to gamble. Russell saw that.

What Russell didn’t predict is how ingroup/outgroup preferences would impact healthcare decisions. We always see outrageous expenses as justified when they’re spent on beings with whom we identify. The GOP made a big deal about death panels at the same time that it was the party that had put such panels in place (http://www.nationalreview.com/corner/428426/death-panel-futile-care-law-texas), reframing the issue as “Obama will kill your grandmother,” and many in their audience believed it because Obama is outgroup. They either never mentioned the GOP-supported death panels, denied they existed, or characterized them as just fiscal responsibility.

Looking at the issue the way Russell describes means just doing the math, and not worrying about whether the people winning at the tables are good or bad people, whether we think they “deserve” to win. Neoliberals hate the ACA because it doesn’t leave things to the market, and neopurconliberals hate it because it is not grounded in an obsession with whether healthcare is only going to people who “deserve” it.

So, this is also an argument about what we think the government should do, and how we should think about policies—in pragmatic terms, or in terms of punish/reward. Whether third-way neoliberalism is inherently bad or good from the perspective of social democrats is an interesting question, if not engaged in a purely ideological way. Can it be a bridge? Can it lead to social democratic policies? The right certainly thinks it can, and that’s why they oppose it. And we should engage the argument in pragmatic ways.

Why not having insurance can be framed as a freedom

Over at The Resurgent, Senator Mike Lee (R-Utah) explains why he would not support the compromise health care plan, even with the (amended) amendment he and Cruz proposed. And I think that Lee is perfectly sincere in his argument, and I think that his argument shows why a lot of lefty critiques of Trumpcare just don’t quite work, but I’ll explain that after I try to be really, really fair to his argument.

Lee’s objection to the ACA is that “Millions of middle-class families are being forced to pay billions in higher health insurance premiums to help those with pre-existing conditions.” He calls it a “hidden tax,” since it’s “paid every month to insurance companies instead of to the government,” and he maintains that hidden tax is “one of the most crushing financial burdens middle-class families deal with today.”

Lee’s proposal is not, as many say, that people with pre-existing conditions and expensive medical costs would get thrown off insurance entirely. Instead, his plan would split insurees into two groups: people who already have high medical costs, and are bad risks for insurers, and people who have not yet developed expensive medical costs (whom Lee consistently identifies as “the middle class”–that’s an important point, since it implies that he thinks the middle class and people with serious medical costs are different groups). The people with high medical costs, Lee argues, shouldn’t be protected through price-fixing: “We don’t have to use price controls to force middle-class families to bear the brunt of the cost of helping those who need more medical care. We could just give those with pre-existing conditions more help to get the care they need.” So, insurers are “free” to charge whatever they want, consumers are “free” to get insurance or not (hence the name “Consumer Freedom Amendment”), and this plan will not put the financial burden of other people’s healthcare on “middle-class families.”

There are a few points about Lee’s plan that are interesting. The first is that my social media has had a lot of criticisms of Trumpcare and this amendment, and none of them explained it correctly. The main criticism has been that this will throw large numbers of people with serious medical issues to the wolves–that millions will be unable to get insurance. The impression I had gotten from various articles was that Cruz, Lee, and others were cheerfully and knowingly ensuring that millions of people would lose access to their healthcare. And that isn’t quite right, and I think it’s important to get opposition arguments right (both because doing so is more rhetorically effective, and because it’s better for policy deliberation).

Jordan Weissmann has a nice article at Slate that does an unusually good job of explaining the various proposals, especially Lee’s argument: “Lee doesn’t believe that healthy Americans should help pay for sick ones through their insurance premiums, and he doesn’t want to put his name on a bill that might—in theory, depending on regulatory decisions, maybe, one day—allow that to happen.”

So, what’s at stake for Lee (and many others) is the notion that paying for healthcare is paying for someone else–for a different group. The really tragic failure here is the failure to imagine an “us” that includes all Americans.

Lee’s argument is a little inconsistent on that point, though. He admits that the subsidies will be paid for in taxes, so the healthy will, in fact, still be paying for the unhealthy. Even if it’s done through tax breaks rather than subsidies, we all pay, since we will pay in the form of less infrastructure and lower funding of all public “goods.” While I do think I understand (but don’t agree with) the reasoning behind the insistence that people who don’t have jobs don’t “deserve” healthcare, I’m not sure I understand this theme that comes up a lot in current conservative talk about public goods–it’s as though they don’t understand that publicly-owned things aren’t owned by no one; they’re owned by everyone. And public goods aren’t given to them; they’re given to us.

The math on how healthcare expenses work out is not complicated. It might be worrisome (e.g., how can we pay for an aging citizenry), but it isn’t really complicated: for every person who takes a dollar out, there must be someone who puts slightly more than a dollar in (so the insurance company can make some money, and let’s all start with the fact that they’re all doing pretty damn well). That dollar in/out might be direct (it’s a thing on your paystub, and you put it in) or it might be indirect–sales taxes, user fees, sin taxes–but (and this is important) if health care happens, someone pays for it.
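
And if you want to see just how uncomplicated that arithmetic is, here it is as a few lines of code (every number is an invented placeholder; the structure, not the figures, is the point):

```python
# "If health care happens, someone pays for it," with invented numbers.
# Whether the channel is premiums, payroll deductions, or sales taxes,
# total dollars in must at least equal the total cost of care paid out.

total_care_delivered = 3_000_000_000_000  # hypothetical total spent on care
middleman_margin = 0.03                   # hypothetical 3% kept by insurers

dollars_that_must_come_in = total_care_delivered * (1 + middleman_margin)
people_paying_in = 200_000_000            # hypothetical number of payers

# The average share each payer covers, whether it shows up on a paystub or not.
print(f"${dollars_that_must_come_in / people_paying_in:,.0f} per payer")
```

Direct or indirect, visible or hidden, that per-payer share gets paid by somebody.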

A US Senator recently told this story. He was mowing his lawn, and a constituent came up to talk to him (because he is the kind of guy who sees every resident in his state as a constituent, unlike, say, Ted Cruz). That guy said he shouldn’t be forced to pay for health care because he never got sick. “Oh really,” said the Senator. “You’ve never been to the ER?” “Oh, sure,” the constituent said, “but that’s free.”

That’s an important story–that you are not charged in the moment does not make a service free. Lee hasn’t learned that lesson. (And here I’ll make a generalization and say that I’ve yet to argue with a neoconservative who understands that point–you can see it in the twitterfluffle over Grover Norquist’s failure to explain taxes to his daughter.)

It doesn’t matter if someone (even a middle class person) has medical costs that are paid for by the state or by an insurance company. Ask not for whom the bell tolls. All costs in a nation will end up being shared by everyone in the nation; the only interesting question is whether that sharing is “fair,” and that’s the whole issue with someone like Lee. Fairness might mean “everyone gets the same treatment” or it might mean “people get what they deserve.” People who self-identify as “liberal” tend toward the former, and people who self-identify as “conservative” tend toward the latter–they think it is “unfair” for people like them (their ingroup) to pay, in any way, for people unlike them (their outgroups). It’s unfair because they believe “people like us” have worked hard for what we have, and they haven’t. And, so, what they want are policies that presume an absolute and easy distinction between good and bad people and that magically restrict the goods to good people.

I don’t think that’s practical, as I don’t think it’s possible for public policy to make such clear distinctions between good and bad people, and I certainly don’t think that Lee’s “middle class” versus “people with pre-existing conditions” distinction is sensible. But, it’s an attractive argument to a lot of people because it’s simple, satisfying, and has just enough punitive spice in it to be pleasing. And, as in all us v. them rhetoric, it’s flattering. If we’re going to try to argue against these sorts of policies, and I think we should, we need to do it while understanding what their argument is, and it’s more complicated (and attractive) than is being acknowledged in a lot of lefty rhetoric.

How not to make a Hitler analogy

Americans love the Hitler analogy, the claim that their political leader is just like Hitler. And it’s almost always very badly done—their leader (let’s call him Chester) is just like Hitler because… and then you get trivial characteristics, ones that don’t distinguish either Hitler or Chester from most political leaders (they were both charismatic, they used Executive Orders), or that flatten the characteristics that made Hitler extraordinary (Hitler was conservative). That process all starts with deciding that Chester is evil, and Hitler is evil, and then looking for any ways that Chester is like Hitler. So, for instance, in the Obama is Hitler analogy, the argument was that Obama was charismatic, he had followers who loved him, he was clearly evil (to the person making the comparison–I’ll come back to that), and he maneuvered to get his way.

Bush was Hitler because he was charismatic, he had followers who loved him, he was clearly evil (to the people making the comparison), and he used his political powers to get his way. And, in fact, every effective political figure fits those criteria in that someone thought they were clearly evil: Lincoln, Washington, Jefferson, FDR, Reagan, Bush, and Trump, for instance.

He was clearly evil. In the case of Hitler, it means he killed six million Jews; in the case of Obama, it means he tried to reduce abortions in a way that some people didn’t like (he didn’t support simply outlawing them); in the case of Bush, it was that he invaded Iraq; for Lincoln, it was that he tried to end slavery; and so on. In other words, in the case of Hitler, every reasonable person agrees that the policies he adopted six or seven years into his time as Chancellor were evil. But not everyone who wants to reduce abortions to the medically necessary agrees that Obama’s policies were evil, and not everyone who wants peace in the Middle East agrees that Bush was evil.

So, what does it mean to decide a political leader is evil?

For instance, people who condemned Obama as evil often did so on grounds that would make Eisenhower and Nixon evil (support for the EPA, heavy funding for infrastructure, high corporate taxes, a social safety net that included some version of Medicare, secular public education), many of which would also make Eisenhower, Nixon, Reagan, and the first Bush evil (faith in social mobility, protection of public lands, promoting accurate science education, support for the arts, an independent judiciary, funding for infrastructure, good relations with other countries, the virtues of compromise). So, were the people condemning Obama as evil doing so on grounds that would cause them to condemn GOP figures as evil? No—their standards didn’t apply to figures they liked. It was just a way of saying he wasn’t GOP.

Every political figure has some group of people who sincerely believe that leader is obviously evil. And every political figure who gets to be President has mastered the arts of being charismatic (not every one gets power from charismatic leadership, but that’s a different post), compromising, manipulating, engaging followers. So, is every political leader just like Hitler?

Unhappily, we’re in a situation in which people make the Hitler analogy to everyone else in their informational cave, and the people in that cave think it’s obviously a great analogy. Since we’re in a culture of demagoguery in which every disagreement is a question of good (our political party) or evil (their political party), any effective political figure of theirs is Hitler.

We’re in a culture in which a lot of media says, relentlessly, that all political choices are between a policy agenda that is obviously good and a policy agenda that is obviously evil, and, therefore, nothing other than the complete triumph of our political agenda is good. That’s demagoguery.

The claim that “he was clearly evil” is important because it raises the question of how we decide whether something is true or not. And that is the question in a democracy. The basic principle of a democracy is that there is a kind of common sense, that most people make decisions about politics in a reasonable manner, and that we all benefit because we get policies that are the result of the input of different points of view. Democracy is a politics of disagreement. But, if some people are supporting a profoundly anti-democratic leader, who will use the power of government to silence and oppress, then we need to be very worried. So the question of whether we are democratically electing someone who will, in fact, make our government an authoritarian one-party state is important. But how do you know that your perception that this leader is just like Hitler is reasonable? What is your “truth test” for that claim?

1. Truth tests, certainty, and knowledge as a binary

Talking about better and worse Hitler analogies requires a long digression into truth tests and certainty, for two reasons. First, the tendency to perceive the other side’s effective political leaders as completely evil is based on and reinforces the tendency to think of political questions as choices between obvious good and obvious evil, and that perception is reinforced by and reinforces what I’ll explain as the two-part simple truth test (does this fit with what I already believe, and do reliable authorities say this claim is true). Second, believing that all beliefs and claims can be divided into obvious binaries (you are certain or clueless, something is right or wrong, a claim is true or false, there is order or chaos) correlates strongly with authoritarianism, and one of the most important qualities of Hitler was that he was authoritarian (and that’s where a lot of these analogies fail—neither Obama nor Bush was an authoritarian).

And so, ultimately, as the ancient Greeks realized, any discussion about democracy quickly gets to the question of how common people make decisions as to whether various claims are true or false. Democracies fail or thrive on the single point of how people assess truth. If people believe that only their political faction has the truth and every other political faction is evil, then democracies collapse and we have an authoritarian leader. Hitlers arise when people abandon democratic deliberation.

That’s the most important point about Hitler: leaders like Hitler come about because we decide that diversity of opinion weakens our country and is unnecessary.

The notion that authoritarian governments arise from assumptions about how people argue might seem counterintuitive, since that seems like some kind of pedantic question only interesting to eggheads (not what you believe but how you believe beliefs work) and therefore off the point. But, actually, it is the point—democracies turn into authoritarian systems under some circumstances and thrive under others, and it all depends on what is seen as the most sensible way to assess whether a claim is true or not. The difference between democracy and authoritarianism lies in that practice of testing claims—truth tests.

For instance, some sources say that Chester is just like Hitler, and other sources say that Hubert is just like Hitler. How do you decide which claim is true?

One truth test is simple, and it has two parts: does perceiving Chester as just like Hitler fit with what you already believe? And do sources you think are authorities tell you that Chester is just like Hitler? Let’s call this the simple two-part truth test, and the people who use it simple truth-testers.

Sometimes it looks as though there is a third part (but it’s really just the first reworded): can I find evidence to show that Chester is just like Hitler?

For many people, if they can confirm a claim through those three tests (does it fit what I believe, do authorities I trust say that, can I find confirming evidence), then they believe the claim is rational.

(Spoiler alert: it isn’t.)

That third question is really just the same as the first two. If you believe something—anything, in fact—then you can always find evidence to support it. If you are really interested in knowing whether your beliefs are valid, then you shouldn’t look to see whether there is evidence to support what you believe; you should look to see whether there is evidence that you’re wrong. If you believe that someone is mad at you, you can find a lot of evidence to support that belief—if they’re being nice, they’re being too nice; if they’re quiet, they’re thinking about how angry they are with you. You need to think about what evidence would persuade you that they aren’t mad. (If there is none, then it isn’t a rational belief.) So, those three questions are really two: does a claim (or political figure) confirm what I believe? Do the authorities I trust confirm this claim (or political figure)?

Behind those two questions is a background issue of what decisions look like. Imagine that you’re getting your hair cut, and the stylist says you have to choose between shaving your head or not cutting your hair at all—how do you decide whether that person is giving you good advice?

And behind that is the question of whether it’s a binary decision—how many choices do you have? Is the stylist open to other options? Do you have other options? Once the stylist has persuaded you that you either do nothing to your hair or shave it, then all he has to do is explain what’s wrong with doing nothing. And you’re trapped by a logical fallacy, because leaving your hair alone might be a mistake, but that doesn’t actually mean that shaving your head is a good choice. People who can’t argue for their policy like the fallacy of the false division (the either/or fallacy) because it hides the fact that they can’t persuade you of the virtues of their policy.

The more that you believe every choice is between two absolutely different extremes, the more likely it is that you’ll be drawn to political leaders, parties, and media outlets that divide everything into absolutely good and absolutely bad.

It’s no coincidence that people who believe that the simple truth test is all you need also insist (sometimes in all caps) that anyone who says otherwise is a hippy dippy postmodernist. For many people, there is an absolute binary in everything, including how to look at the world—you can look and make a judgment easily and clearly or else you’re saying that any kind of knowledge at all is impossible. And what you see is true, obviously, so anyone who says that judgment is vexed, flawed, and complicated is a dithering weeny. They say that, for a person of clear judgment, the right course of action in all cases is obvious and clear. It’s always black (bad) or white (good, and what they see). Truth tests are simple, they say.

In fact, even the people who insist that the truth is always obvious and it’s all black or white go through their day in shades of grey. Imagine that you’re a simple truth tester. You’re sitting at your computer and you want an ‘e’ to appear on your screen, so you hit the ‘e’ key. And the ‘e’ doesn’t appear. Since you believe in certainty, and you did not get the certain answer you predicted, are you now a hippy-dippy relativist postmodernist (had I worlds enough and time I’d explain why that term is incredibly sloppy and just plain wrong) who is clueless? Are you paralyzed by indecision? Do you now believe that all keys can do whatever they want and there is no right or wrong when it comes to keys?

No, you decide you didn’t really hit the ‘e’ or your key is gummed up or autocorrect did something weird. When you hit the ‘e’ key, you can’t be absolutely and perfectly certain that the ‘e’ will appear, but that’s probably what will happen, and if it doesn’t you aren’t in some swamp of postmodern relativism and lack of judgment.

Your experience typing shows that the binary promoted by a lot of media between absolute certainty and hippy dippy relativism is a sloppy social construct. They want you to believe it, but your experience of typing, or making any other decision, shows it’s a false binary. You hit the ‘e’ key, and you’re pretty near certain that an ‘e’ will appear. But you also know it might not, and you won’t collapse into some pile of cold sweat of clueless relativism if it doesn’t. You’ll clean your keyboard.

It’s the same situation with voting for someone, marrying someone, buying a new car, making dinner, painting a room. You can feel certain in the moment that you’re making the right decision, but any honest person has to admit that there are lots of times we felt totally and absolutely certain and turned out to have been mistaken. Feeling certain and being right aren’t the same thing.

That isn’t to say that the hippy-dippy relativists are right and all views are equally valid and there is no right or wrong—it’s to say that the binary between “the right answer is always obviously clear” and hippy-dippy relativism is wrong. For instance, in terms of the assertion that many people make that the distinction between right and wrong is absolutely obvious: is killing someone else right or wrong? Everyone answers that it depends. So, does that mean we’re all people with no moral compass? No, it means the moral compass is complicated, and takes thought, but it isn’t hopeless.

Our world is not divided into being absolutely certain and being lost in clueless hippy dippy relativism. But, and this is important, that is the black and white world described by a lot of media—if you don’t accept their truth, then you’re advocating clueless postmodern relativism. What those media say is that what you already believe is absolutely true, and, they say, if it turns out to be false, you never believed it, and they never said it. (The number of pundits who advocated the Iraq invasion and then claimed they were opposed to it all along is stunning. Trump’s claiming he never supported the invasion fits perfectly with what Philip Tetlock says about people who believe in their own expertise.)

And the message that you have been and always will be right is a lovely, comforting, pleasurable message to consume. It is the delicate whipped cream of citizenship—that you, and people like you, are always right, never wrong, and can just rely on your gut judgment. Of course, the same media that says it’s all clear has insisted that something is absolutely true that turned out not to be (Saddam Hussein has weapons of mass destruction, voting for Reagan will lead to the people’s revolution, Trump will jail Clinton, Brad Pitt is getting back together with Angelina Jolie, studies show that vaccines cause autism, the world will end in 1987). The paradox is that people continue to consume and believe media that have been wrong over and over, and yet are accepted as trusted authorities because they have sometimes been right, or, more often, because, even if wrong, what they say is comforting and assuring.

But, what happens when media say that Trump has a plan to end ISIS and then it turns out his plan is to tell the Pentagon to come up with a plan? What happens when the study that people cite to say autism is caused by vaccines turns out to be fake? Or, as Leon Festinger famously studied, what happens when a religion says the world will end, and it doesn’t? What happens when something you believe that fits with everything else you believe and is endorsed by authorities you believe turns out to be false? You could decide that maybe things aren’t simple choices between obviously true and obviously false, but that isn’t generally what people do. Instead, we recommit to the media because now we don’t want to look stupid.

Maybe it would be better if we all just decided that complicated issues are complicated, and that’s okay.

There are famous examples that show the simple truth test—you can just trust your perception—is wrong.

If you’re looking at paint swatches, and you want a darker color, you can look at two colors and decide which is darker. You might be wrong. Here’s a famous example of our tendency to interpret color by context.

Those examples look like special cases, and they (sort of) are: if you know that you have a dark grey car, and there’s a grey car and a dark grey car in the parking lot, you don’t stand there paralyzed, unsure which car is yours, because you saw something on the internet showing that your perception of darkness might be wrong. That experiment shows you might be entirely wrong, but you will not go on in your life worrying about it.

But you have been wrong about colors. And we’ve all tried to get into the wrong car, but in those cases we get instant feedback that we were wrong. With politics it’s more complicated, since media that promoted what turns out to have been a disastrous decision can insist they never promoted it (when Y2K turned out not to be a thing, various radio stations that had been fearmongering about it just never mentioned it again), claim it was the right decision, or blame it on someone else. They can continue to insist that their “truth” is always the absolutely obvious decision and that there is a binary between being certain and being clueless. But, in fact, our operative truth test in the normal daily decisions we make is one that involves skepticism and probability. Sensible people don’t go through life with a yes/no binary. We operate on the basis of a yes/various degrees of maybe/no continuum.

What’s important about optical illusions is that they show that the notion central to a lot of argutainment—that our truth tests for politics should involve being absolutely certain that our group is right or else you’re in the muck of relativistic postmodernism—isn’t how we get through our days. And that’s important. Any medium, any pundit, any program, that says that decisions are always between us and them is lying to us. We know, from decisions about where to park, what stylist to use, what to make for dinner, how to get home, that it isn’t about us vs. them: it’s about making the best guesses we can. And we’re always wrong eventually, and that’s okay.

We tend to rely on what social psychologists call heuristics—meaning mental shortcuts—because you can’t thoroughly and completely think through every decision. For instance, if you need a haircut, you can’t possibly thoroughly investigate every single option you have. You’re likely to have a method for reducing the uncertainty of the decision—you rely on reviews, you go where a friend goes, you just pick the closest place. If a stylist says you have to shave your head or do nothing, you’ll walk away.

You might tend to have the same thing for breakfast, or generally take the same route to work, campus, the gym. Your route will not be the best choice some percentage of the time because traffic, accidents, or some random event will make your normal route slower than others from time to time (if you live in Austin, it will be wrong a lot). Even though you know that you can’t be certain you’re taking the best route to your destination, you don’t stand in your apartment doorway paralyzed by indecision. You aren’t clueless about your choices—you have a lot of information about what tends to work, and what conditions (weather, a football game, time of day, local music festivals, roadwork) are likely to introduce variables in your understanding of what is the best route. You are neither certain nor clueless.

And there are dozens of other decisions we make every day that are in that realm of neither clueless nor certain: whether you’ll like this movie, if the next episode of a TV program/date/game version/book in a series/cd by an artist/meal at a restaurant will be as good as the last, whether your boss/teacher will like this paper/presentation as much as the previous, if you’ll enjoy this trip, if this shirt will work out, if this chainsaw will really be that much better, if this mechanic will do a good job on your car, if this landlord will not be a jerk, if this class/job will be a good one.

We all spend all of our time in a world in which we must manage uncertainty and ambiguity, but some people get anxious when presented with ambiguity and uncertainty, and so they talk (and think) as though there is an absolute binary between certain and clueless, and every single decision falls into one or the other.

And here things get complicated. The people who don’t like uncertainty and ambiguity (they are, as social psychologists say, “drawn to closure”) will insist that everything is this or that, black or white even though, in fact, they continually manage shades of grey. They get in the car or walk to the bus feeling certain that they have made the right choice, when their choice is just habit, or the best guess, or somewhere on that range of more or less ambiguous.

So, there is a confusion between certainty as a feeling (you feel certain that you are right) and certainty as a reasonable assessment of the evidence (all of the relevant evidence has been assessed and alternative explanations disproven)—as a statement about the process of decision-making. Most people use it in the former way, but think they’re using it in the latter, as though the feeling of certainty is correlated to the quality of evidence. In fact, how certain people feel is largely a consequence of their personality type (On Being Certain has a great explanation of that, but Tetlock’s Expert Political Judgment is also useful). There’s also good evidence that the people who know the most about a subject tend to express themselves with less certainty than people who are un- or misinformed (the “Dunning-Kruger effect”).

What all that means is that people who get anxious in the face of ambiguity and uncertainty resolve that anxiety by feeling certain, and using a rigid truth test. So, the world isn’t rigidly black or white, but their truth test is. For instance, it might have been ambiguous whether they actually took the best route to work, but they will insist that they did, and that they obviously did. They managed uncertainty and ambiguity by denying it exists. This sort of person will get actively angry if you try to show them the situation is complicated.

They manage the actual uncertainty of situations by, retroactively, saying that the right answer was absolutely clear.[1] That sort of person will say that a “truth test” is simply asking yourself whether something is true or not. Let’s call that the simple truth test, and the people who use it simple truth testers.

The simple truth test has two parts: first, does this claim fit with what I already believe? and, second, do authorities I consider reliable promote this claim?

People who rely on this simple truth test say it works because, they believe, the true course of action is always absolutely clear, and, therefore, it should be obvious to them, and it should be obvious to people they consider good. (It shouldn’t be surprising that they deny having made mistakes in the past, simply refashioning their own history of decisions—try to find someone who supported the Iraq invasion or was panicked about Y2K.)

The simple truth test is comfortable. Each new claim is assessed in terms of whether it makes us feel good about things we already believe. Every time we reject or accept a claim on the basis of whether it confirms our previous beliefs, we confirm our sense of ourselves as people who easily and immediately perceive the truth. Thus, this truth test isn’t just about whether the new claim is true, but about whether we and people like us are certainly right.

The more certain we feel about a claim, the less likely we are to doublecheck whether we were right, and the more likely we are to find ways to make ourselves have been right. Once we get to work, or the gym, or campus, we don’t generally try to figure out whether we really did take the fastest route unless we have reason to believe we might have been mistaken and we’re the sort of person willing to consider that we might have been mistaken.

There’s a circle here, in other words: the sort of person who believes that there is a binary between being certain and being clueless, and who is certain about all of her beliefs, is less likely to do the kind of work that would cause her to reconsider her sense of self and her truth tests. Her sense of herself as always right appears to be confirmed because she can’t think of any time she has been wrong. Because she never looked for such a time.

Here I need to make an important clarification: I’m not claiming there is a binary between people who believe you’re either certain or clueless and people who believe that mistakes in perception happen frequently. It’s more of a continuum, but a pretty messy one. We’re all drawn to black or white thinking when we’re stressed, frightened, threatened, or trying to make decisions with inadequate information. Most people have some realms or sets of claims they think are certain (this world is not a dream, evolution is a fact, gravity happens). Some people need to feel certain about everything, and some people don’t need to feel certain much at all, and a lot of people feel certain about many things but not everything.

Someone who believes that her truth tests enable certainty on all or most things will be at one end of the continuum, and someone who managed to live in a constant state of uncertainty would be at the other. Let’s call the person at the “it’s easy to be certain about almost everything important” end an authoritarian (I’ll explain the connection better later).

Authoritarians have trouble with the concept of probabilities. For instance, if the weather report says there will be rain, that’s a yes/no. And it’s proven wrong if the weather report says yes and there is no rain. But if the weather report says there is a 90% chance of rain and it doesn’t rain, the report has not been proven wrong.

Authoritarians believe that saying there is a 90% chance is just a skeezy way to avoid making a decision—that the world really is divided into yes or no, and some people just don’t want to commit. And they consume media that says exactly that.
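
Just to illustrate how a probabilistic claim actually gets tested, here’s a minimal sketch with simulated weather (not real forecast data): a forecaster who says “90% chance of rain” isn’t refuted by one dry day; she’s doing her job if it rains on roughly 90% of the days she gives that forecast.

```python
import random

# Simulated check of a "90% chance of rain" forecaster.
# One dry day doesn't prove the forecast wrong; what matters is whether it
# rains on about 90% of the days that get that forecast (calibration).

random.seed(0)
days_forecast_at_90 = 1000
rainy_days = sum(random.random() < 0.9 for _ in range(days_forecast_at_90))

print(f"Rained on {rainy_days} of {days_forecast_at_90} days forecast at 90%.")
print(f"Observed frequency: {rainy_days / days_forecast_at_90:.2f}")  # roughly 0.90
# Roughly 100 of those days stayed dry, and the forecast was still well calibrated.
```

A single miss tells you nothing; the pattern over many forecasts tells you a lot.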

This is another really important point: many people spend their days consuming media that says that every decision is divided into two categories: the obviously right decision, and the obviously wrong one. And that media says that anyone who says that the right decision might be ambiguous, unclear, or a compromise is promoting relativism or postmodernism. So, as those media say, you’re either absolutely clear or you’re deep in the muck of clueless relativism. Authoritarians who consume that media are like the example above of the woman who believes that her certainty is always justified because she never checks to see whether she was wrong. They live in a world in which their “us” is always right, has always been right, and will always be right, and the people who disagree are wrong-headed ditherers who pretend that it’s complicated because they aren’t man enough to just take a damn stand.

(And, before I go on, I should say that, yes, authoritarianism isn’t limited to one political position—there are authoritarians all over the map. But, that isn’t to say that “both sides are just as bad” or authoritarianism is equally distributed. The distribution of authoritarianism is neither a binary nor a constant; it isn’t all on one side, but it isn’t evenly distributed.)

I want to emphasize that the authoritarian view—that you’re certain or clueless—is often connected to a claim that people are either authoritarians or relativists (or postmodernists or hippies), and there are two odd things about that insistence. First, a point I can’t pursue here, authoritarians rarely stick to principles across situations, and so they end up fitting their own definition of relativist/postmodern. (Briefly, what I mean is that authoritarians put their group first, and say their group is always right, so they condemn behavior in them that they praise or justify in us. In other words, whether an act is good or bad is relative to whether it’s done by us or them—that’s moral relativism. So, oddly enough, you end up with moral relativism attacked by people who engage in it.) Second, even authoritarians actually make decisions in a world of uncertainty and ambiguity, and don’t use the same truth test for all situations. When their us turns out to be wrong, they will claim the situation was ambiguous, there was bad information, everyone makes mistakes, and then go on to insist that all decisions are unambiguous.

So, authoritarians say that all decisions are clear, except when they aren’t, and that we are always right, except when we aren’t. But those unclear situations and mistakes should never be taken as reasons to be more skeptical in the future.

2. Back to Hitler

Okay, so how do most people decide whether their leader is like Hitler? (And notice that it is never about whether our leader is like Hitler.) If you believe in the simple two-part truth test, then you ask yourself whether their leader seems to you to be like Hitler, and whether authorities you trust say he is. And you’re done.

But what does it mean to be like Hitler? What was Hitler like?

There is the historical Hitler who was, I think, evil, but didn’t appear so to many people, and who had tremendous support from a lot of authoritarians, and there is the cartoon Hitler. Hitler was evil because he tried to exterminate entire peoples (and he started an unnecessary war, but that’s often left out). The cartoon version assumes that his ultimate goals were obvious to everyone from the beginning—that he came on the scene saying “Let’s try to conquer the entire world and exterminate icky people” and always stuck to that message, so that everyone who supported him knew they were supporting someone who would start a world war and engage in genocide.

But that isn’t how Hitler looked to people at the time. Hitler didn’t come across as evil, even to his opponents (except to the international socialists), until the Holocaust was well under way. Had he come across as evil, he would never have gotten into power. While Mein Kampf and his “beerhall” speeches were clearly eliminationist and warmongering, once he took power his recorded and broadcast speeches never mentioned extermination and were about peace. (According to Letters to Hitler, his supporters were unhappy when he started the war.) Hitler had a lot of support, of various kinds, and his actions between 1933 and 1939 actually won over a lot of people, especially conservatives and various kinds of nationalists, who had been skeptical or even hostile to him before 1933. His supporters ranged from the fans (the true believers), through conservative nationalists who wanted to stop Bolshevism and reinstate what they saw as “traditional” values, and conservative Christians who objected to some of his policies but also liked a lot of them (such as his promotion of traditional roles for women, his opposition to abortion and birth control, his demonizing of homosexuality), to people of various political ideologies who liked that (they thought) he was making Germany respected again, had improved the economy, had ended the bickering and instability they associated with democratic deliberation, and was undoing a lot of the shame associated with the Versailles Treaty.

Until 1939, to his fans, Hitler came across as a truth-teller, willing to say politically incorrect things (that “everyone” knew were true), cut through all the bullshit, and be decisive. He would bring honor back to Germany and make it the military powerhouse it had been in recent memory; he would sideline the feckless and dithering liberals, crush the communists, and deal with the internal terrorism of the large number of immigrants in Germany who were stealing jobs, living off the state, and trying to destroy Germany from within; he would clean out the government of corrupt industrialists and financiers who were benefitting from the too-long deliberations and innumerable regulations. He would be a strong leader who would take action and not just argue and compromise like everyone else. He didn’t begin by imprisoning Jews; he began by making Germany a one-party state, and that involved jailing his political opponents.

Even to many people willing to work with him, Hitler came across as crude, as someone pandering to popular racism and xenophobia, a rabble-rouser who made absurd claims, who didn’t always make sense, and whose understanding of the complexities of politics appeared minimal. But conservatives thought he would enable them to put together a coalition that would dominate the Reichstag (the German Congress, essentially) and they could thereby get through their policy agenda. They thought they could handle him. While they granted that he had said some pretty racist and extreme things (especially his hostility to immigrants and non-Christians, although his own record on Christian behavior wasn’t exactly great), they thought that was rabble-rousing he didn’t mean, a rhetoric he could continue to use to mobilize his base for their purposes, or that he could be their pit bull whom they could keep on a short chain. He instantly imposed a politically conservative social agenda that made a lot of conservative Christians very happy—he was relentless in his support for the notion that men earn money and women work in the home, that homosexuality and abortion are evil [2], and that sexual immorality weakens the state, and his rhetoric was always framed in “Christian terms” (as Kenneth Burke famously argued, his rhetoric was a bastardization of Christian rhetoric, but it still relied on Christian tropes).

Conservative Christians (Christians in general, to be blunt) had a complicated reaction to him. Most Christian churches of the era were anti-Semitic, and that took various forms. There were the extreme forms—the passion plays that showed Jews as Christ-killers, the claims that Jews killed Christians for their blood at Passover, even religious festivals about how Jews stabbed consecrated hosts (some of which only ended in the 1960s).

There were also the “I’m not racist but” versions of Christian anti-Semitism promoted by Catholic and Protestant organizations (all of this is elegantly described in Antisemitism, Christian Ambivalence, and the Holocaust). Mainstream Catholic and Lutheran thought promoted the notion that Jews were, at best, failed Christians, and that the only reason not to exterminate them was so that they could be converted. There was, in that world, no explicit repudiation of the sometimes pornographic fantasies of greedy Jews involved in worldwide conspiracies, stabbing the host, drinking the blood of Christian boys at Passover, and plotting the downfall of Germany. And there was certainly no sense that Christians should tolerate Jews in the sense of treating them as we would want to be treated; tolerance simply meant that they shouldn’t be killed. As Ian Kershaw has shown, a lot of German Christians didn’t bother themselves about oppression (even killing) of Jews, as long as it happened out of their ken; they weren’t in favor of killing Jews, but, as long as they could ignore that it was happening, they weren’t going to do much to protest (Hitler, The Germans, and the Final Solution).

Many of his skeptics (even international ones) were won over by his rhetoric. His broadcast speeches emphasized his desire for peace and prosperity; they liked that he talked tough about Germany’s relations with other countries (but didn’t think he’d lead them into war), they loved that he spent so much of his own money doing good things for the country (in fact, he got far more money out of Germany than he put into it, and he didn’t pay taxes—for more on this, see Hitler at Home), and they loved that he had the common touch, and didn’t seem to be some inaccessible snob or aristocrat, but a person who really understood them (Letters to Hitler is fascinating for showing the range of support he had). They believed that he would take a strong stance, be decisive, look out for regular people, clear the government of corrupt relationships with financiers, silence the kind of people who were trying to drag the nation down, and cleanse the nation of that religious/racial group that was essentially ideologically committed to destroying Germany.

There were a lot of people who thought Hitler could be controlled and used by conservative forces (von Papen, for instance) or that he was a joke. In middle school, I had a teacher who had been in the Berlin intelligentsia before and during the war, and when asked why people like her didn’t do more about Hitler, she said, “We thought he was a fool.” Many of his opponents thought he would never get elected, never be given a position of power.

But still, some students say, you can see in his early rhetoric that there was a logic of extermination. And, yes, I think that’s true, but, and this is important, what makes you think you would see it? Smart people at the time didn’t see it, especially since, once he got a certain level of attention, he only engaged in dog whistle racism. Look, for instance, at Triumph of the Will—the brilliant film of the 1934 Nazi rally in Nuremberg—in which anti-Semitism appears absent. The award-winning movie convinced many that Hitler wasn’t really as anti-Semitic as Mein Kampf might have suggested. But, by 1934, true believers had learned their whistles—everything about bathing, cleansing, purity, and health was a long blow on the dog whistle of “Jews are a disease on the body politic.” Hitler’s first speech on the dissolution of the Reichstag (March 1933) never used the word Jew, and looked reasonable (he couldn’t control himself, however, and went back to his non-dog whistle demagoguery in what amounted to the question and answer period—Kershaw’s Hubris describes the whole event).

We focus on Hitler’s policy of extermination, but we don’t always focus enough on his foreign policy, especially between 1933 and 1939. Just as we think of Hitler as a raging antisemite (because of his actions), so we think of him as a warmonger, and he was both, at heart from the beginning and in action eventually, but he managed not to look that way for years. That’s really, really important to remember. He took power in 1933, and didn’t show his warmongering card till 1939. He didn’t show his exterminationist card till even later.

Hitler’s foreign policy was initially tremendously popular because he insisted that Germany was being ill-treated by other nations, was carrying a disproportionate burden, and was entitled to things it was being denied. Hitler said that Germany needed to be strong, more nationalist, more dominating, more manly in its relations with other nations. Germany didn’t want war, but it would, he said, insist upon respect.

Prior to being handed power, Hitler talked like an irresponsible warmonger and raging antisemite (especially in Mein Kampf), but his speeches right up until the invasion of Poland were about peace, stability, and domestic issues like helping the common working man. Even in 1933-4, the Nazi Party could release a pamphlet of his speeches with the title Germany Desires Work and Peace.

What that means is that from 1933 to 1939 Hitler managed a neat rhetorical trick, and he did it by dog whistles: he persuaded his extremist supporters that he was still the warmongering raging antisemite they had loved in the beerhalls and for whom Streicher was a reliable spokesman, and he persuaded the people frightened by his extremism that he wasn’t that guy, and that he would enable them to get their policy agenda through. (His March 1933 speech is a perfect example of this nasty strategy, and some day I intend to write a long close analysis of it.)

And even many of the conservatives who were initially deeply opposed to him came around because he really did seem to be effective at getting real results. He got those results by mortgaging the German economy, and by setting up a foreign policy and an economic policy that couldn’t possibly be maintained without massive conquest; they delivered short-term benefits, but they were not sustainable.

Hitler benefitted from the culture of demagoguery of Weimar Germany. After Germany lost WWI, the monarchy was ended, and a democracy was imposed. Imposing democracy is always vexed, and it doesn’t always work because democracy depends on certain cultural values (a different post). One of those values is seeing pluralism—that is, diversity of perspective, experience, and identity—as a good thing. If you value pluralism, then you’ll tend to value compromise. If you believe that a strong community has people with different legitimate interests, points of view, and beliefs, then you will see compromise as a success. If, however, you’re an authoritarian, and you believe that you and only you have the obvious truth and everyone else is either a knave or a fool, then you will see refusing to compromise as a virtue.

And then democracy stalls. It doesn’t stall because it’s a flawed system; it stalls when people reject the basic premises of democracy, when, despite how they make decisions about how to get to work in the morning, or whether to take an umbrella, they insist that all decisions are binaries between what is obviously right (us) and what is obviously wrong (them).

And, in the era after WWI, Germany was a country with a democratic constitution but a rabidly factionalized set of informational caves. People could (and did) spend all their time getting information from media that said that all political questions are questions of good (us) and evil (them). Those media promoted conspiracy theories—the Protocols of the Elders of Zion, for instance—insisted on the factuality of non-events, framed all issues as apocalyptic, and demonized compromise and deliberation. They said it’s a binary. The International Socialists said the same thing, that anything other than a workers’ revolution now was fascism, that the collapse of democracy was great because it would enable the revolution. Monarchists wanted the collapse of the democracy because they hoped to get a monarchy back, and a non-trivial number of industrialists wanted democracy to collapse because they were afraid people would vote for a social safety net that would raise their taxes.

It was a culture of demagoguery.

But, in the moment, large numbers of people didn’t see it that way because, if you were in a factional cave, and you used the two-step test, everything you heard in your cave would seem to be true. Everything you heard about Hitler would fit with what you already believed, and it was being repeated by people you trusted.

Maybe what you heard confirmed that he would save Germany, that he was a no-bullshit decisive leader who really cared about people like you and was going to get shit done, or maybe what you heard was that he was a tool of the capitalists and liberals and that you should refuse to compromise with them to keep him out of power. Whether what you heard was that Hitler was awesome or that he was completely wrong, what you heard was that he was obviously one or the other, and that anyone who disagreed with you was evil. What you heard was that disagreement itself was proof that evil was present. And that democracy was a failure.

And that helped Hitler, even the attacks on him. As long as everyone agreed that the truth is obvious, that disagreement is a sign of weakness, and that compromise is evil, an authoritarian like Hitler could come along and win.

There were a lot of people who more or less supported the aims he said he had—getting Germany to have a more prosperous economy, fighting Bolshevism, supporting the German church, avoiding war, renegotiating the Versailles Treaty, purifying Germany of anti-German elements, making German politics more efficient and stable—but who thought Hitler was a loose cannon and a demagogue. Many of those were conservatives and centrists.

And, once Hitler was in power, they watched him carefully. And, really, all his public speeches, especially the ones that might get international coverage, weren’t that bad. They weren’t as bad as his earlier rhetoric. There wasn’t as much explicit anti-Semitism, for instance, and, unlike in Mein Kampf, he didn’t advocate aggressive war. He said, over and over, he wanted peace. He immediately took over the press, but, still and all, every reader of his propaganda could believe that Hitler was a tremendously effective leader, and, really, by any standard he was: he effected change.

There wasn’t, however, much deliberation as to whether the changes he effected were good. He took a more aggressive stance toward other countries (a welcome change from the loser stance adopted at the end of WWI, which, technically, Germany did lose), he openly violated the deliberately shaming aspects of the Versailles Treaty, he appeared to reject the new terms of the capitalism of the era (he met with major industrial leaders and claimed to have reached agreements that would help workers), he reduced disagreement, he imprisoned people who seemed to many people to be dangerous, he enacted laws that promoted the cultural “us” and disenfranchised “them.” And he said all the right things. At the end of his first year, Germany published a pamphlet of his speeches, with the title “The New Germany Desires Work and Peace.” So, by the simple two-part truth test (do the claims support what you already believe? do authorities you trust confirm these claims?), Hitler’s rhetoric would look good to a normal person in the 30s. Granted, his rhetoric was always authoritarian—disagreement is bad, pluralism is bad, the right course of action is always obvious to a person of good judgment, you should just trust Hitler—but it would have looked pretty good through the 30s. A person using that third test—can I find evidence to support these claims?—would have felt that Hitler was pretty good.

3. So, would you recognize Hitler if you liked what he was saying?

What I’m trying to say is that asking the question “Is their political leader just like Hitler?” will go just about as wrong as it can go as long as you’re relying on simple truth tests.

If you get all your information from sources you trust, and you trust them because what they say fits in with your other beliefs, then you’re living in a world of propaganda.

If you think that you could tell if you were following a Hitler because you’d know he was evil, and you are in an informational cave that says all the issues are simple, good and evil are binaries and easy to tell one from another, there is either certainty or dithering, disagreement and deliberation are what weak people do, compromise is weakening the good, and the truth in any situation is obvious, then, congratulations, you’d support Hitler! Would you support the guy who turned out to start a disastrous war, bankrupt his nation, commit genocide? Maybe—it would just be random chance. Maybe you would have supported Stalin instead. But you would definitely have supported one or the other.

Democracy isn’t about what you believe; it’s about how you believe. Democracy thrives when people believe that they might be wrong, that the world is complicated, that the best policies are compromises, that disagreement can be passionate, nasty, vehement, and compassionate–that the best deliberation comes when people learn to perspective shift. Democracy requires that we lose gracefully, and it requires, above all else, that we don’t assess policies purely on whether they benefit people like us, but that we think about fairness across groups. It requires that we do unto others as we would have them do unto us, that we pass no policy that we would consider unfair if we were in all the possible subject positions of the policy. Democracy requires imagining that we are wrong.

[1] That sort of person often subscribes to the “just world model” or “just world hypothesis,” which is the assumption that we are all rewarded in this world for our efforts. If something bad happens to you, you deserved it. People who claim that is Scriptural will cherry-pick quotes from Proverbs, ignoring what Jesus said about rewards in this world, as well as various other important parts of Scripture (Ecclesiastes, Job, Paul).

[2] There is a meme circulating that Hitler was pro-abortion. His public stance was opposition to abortion at least through the thirties. Once the genocides were in full swing, Nazism supported abortion for “lesser races.”

Terrorist Peanuts and Immigration

When I teach about the Holocaust, one of the first questions students ask is: why didn’t the Jews leave? The answer is complicated, but one part isn’t: where would they go? Countries like the US had such restrictive immigration quotas for the parts of Europe from which the Jews were likely to come that we infamously turned back ships. And, so, students ask, why did we do that?

We did it because of that era’s version of the peanut argument.

The peanut argument (more recently presented with a candy brand name attached to it, but among neo-Nazis the analogy used is a bowl of peanuts) has been shared by many, including by members of our administration, as a mic-drop defense of a travel ban on people from regions and of religions considered dangerous because, as the analogy goes, would you eat from a bowl of peanuts if you knew that one was poisoned?

People who make that argument insist that they are not being racist, because their objection is, they say, not based in an irrational stereotype about this group. They say it is a rational reaction to what members of this group have really done. And, they say, for the same reason, that they are not being hypocritical: as descendants of immigrants, they are open to safe immigrant groups. These immigrants, unlike their forebears, have dangerous elements.

What they don’t know is that every ethnicity and religion that has come to America has had members that struck large numbers of existing citizens as dangerous—the peanut argument has always been around. And it’s exactly the argument that was used for sending Jews back to death. The tragedies of US immigration policy during the Nazi exterminations were the consequence of the 1924 Immigration Act, a bill that set race-based immigration quotas grounded in arguments that this set of immigrants (at that point, Italians and eastern and central Europeans) was too fundamentally and dangerously antagonistic to American traditions and institutions to admit. Architects of that act (and defenders of maintaining the quotas, in the face of people escaping genocide) insisted that they weren’t opposed to immigration, just this set of immigrants.

At least since Letters from an American Farmer (first published in 1782), Americans have taken pride in being a nation of immigrants. And, since around the same time, large numbers of Americans who took pride in being descended from immigrants have stoked fear about this set of immigrants.

Arguments about whether Catholics were a threat to democracy raged throughout the nineteenth century, for instance. Samuel Morse (of the Morse code) wrote a tremendously popular book arguing that German and Irish Catholics were conspiring to overthrow American democracy, which appealed to popular notions about Catholics’ religion being essentially incompatible with democracy. Hostility towards the Japanese and Chinese (grounded in stereotypes that their political and religious beliefs necessarily made them dangerous citizens) resulted in laws prohibiting their naturalization, property ownership, and repatriation, and, ultimately, their immigration (and, in the case of the Japanese, it led to race-based imprisonment). After the revolutions of 1848, and especially with the rise of violent political movements in the late nineteenth century (anarchism, Sinn Fein, various anti-colonial and independence movements), large numbers of politicians began to focus on the possibility that allowing this group would mean that we were allowing violent terrorists bent on overthrowing our government.

And that’s exactly what it did mean. Every one of those groups did have individuals who advocated violent change.

A large number of the defendants in the Haymarket Trial (concerning a fatal bomb-throwing incident at a rally of anarchists) were immigrants or children of immigrants; by the early 20th century, people arguing that this group had dangerous individuals could (and did) cite examples like Emma Goldman (a Jewish anarchist imprisoned for inciting to riot), Nicola Sacco and Bartolomeo Vanzetti (Italian anarchists executed for a murder committed during a robbery), Jacob Abrams and Charles Schenck (Jews convicted of sedition), and Leon Czolgosz (the son of Polish immigrants, who shot McKinley). Even an expert like Harry Laughlin, of the Eugenics Record Office, would testify that the more recent set of immigrants were genetically dangerous (they weren’t—his math was bad).

History has shown that the fearmongers were wrong. While those groups did all have advocates of violence, and individuals who advocated or committed terrorism, the peanut analogy was fallacious, unjust, and unwise. Those groups also contributed to America, and they were not inherently or essentially un-American.

Looking back, we should have let the people on those ships disembark. Looking forward, we should do the same.

[image: By Internet Archive Book Images – https://www.flickr.com/photos/internetarchivebookimages/14782377875/Source book page: https://archive.org/stream/christianheralds09unse/christianheralds09unse#page/n328/mode/1up, No restrictions, https://commons.wikimedia.org/w/index.php?curid=42730228]

Demagoguery and Democracy

John Muir and environmental demagoguery

One of the most controversial claims I make about demagoguery is that it isn’t necessarily harmful. When I make that argument, it’s common for someone to disagree with me by pointing out that some specific instance of demagoguery is harmful. But that isn’t refuting my argument because I’m not arguing for a binary of demagoguery being always or never harmful. I’m saying that not every instance of demagoguery is necessarily harmful. Whether demagoguery is harmful depends, I think, on where it lies on multiple axes: how demagogic the text is; how powerful the media promoting the demagoguery are; how widespread that kind of demagoguery is.

(Yeah, yeah, I know, that means a 3d map, but I honestly think you need all three axes.)

And the best way to talk about the harmless demagoguery is to talk more about one of the first examples of a failed deliberative process that haunted me. One spring, when I was a child, my family went to Yosemite Valley in Yosemite National Park. My family mostly tried (and failed) to teach one another bridge, and I wandered around the emerald valley. Having grown up in semi-arid southern California, the forested walks seemed to me magical, and I was enchanted. One evening, my mother took me to a campfire, hosted by a ranger, who told the story of John Muir, a California environmentalist crucial in the preservation of Yosemite National Park. The last part of the ranger’s talk was about Muir’s final political endeavor, his unsuccessful attempt to prevent the damming and flooding of the Hetch Hetchy Valley, a valley the ranger said was as beautiful as the one by which I had been entranced. The ranger presented the story as a dramatic tragedy of Good (John Muir) versus Evil (the people who wanted to dam and flood the valley), with Evil winning and Muir dying of a broken heart. I was deeply moved, and fascinated. And years later, I would come back to the story when trying to think about whether and how people can argue together on issues with profound disagreement.

The ranger had told the story of Good versus Evil, but that isn’t quite right, in several ways. For one thing, it wasn’t a debate with only two sides (something I have since discovered to be true of most political issues). In this case, it is more accurate to say that there were three sides: the corrupt water company currently supplying San Francisco that wanted to prevent San Francisco getting any publicly-owned water supply; the progressive preservationists like John Muir, who wanted San Francisco to get an outside publicly-owned water supply, but not the Hetch Hetchy; and the progressive conservationists like Gifford Pinchot or Marsden Manson, who wanted an outside publicly-owned water supply that included the Hetch Hetchy.

And a little background on each of the major figures in this issue. Gifford Pinchot was head of the Forest Service, with close political ties to Theodore Roosevelt. Born in 1865, he was a strong advocate of conservation—that is, keeping large parts of land in public ownership, sustainable foresting practices, and what is called “multiple use.” The principle of conservation (as opposed to preservation) is that public lands should be available to as many different uses as possible, such as foresting, hunting, camping, and fishing. The consensus among scholars is that Pinchot’s support for the Hetch Hetchy dam was crucial to its success.

Marsden Manson was far less famous than Pinchot. Born in 1850, he was an engineer (trained at Berkeley), a member of the Sierra Club who had camped in Yosemite, and, from 1897 till 1912, an engineer for the City of San Francisco, first serving on the San Francisco Drainage Committee, then in the Public Works Department, and finally as City Engineer. It was in that capacity that he wrote the pamphlet I’ll talk about in a bit. He was an avid conservationist.

John Muir is probably the most famous of the people heavily involved in the controversy, and still a hero among environmentalists. Born in Scotland in 1838, he emigrated with his family to Wisconsin when he was around ten. He arrived in California in 1868, and promptly went to Yosemite Valley (which was not yet a national park). He stayed there for several years, writing about the Sierras in what would become articles in popular magazines. His elegant descriptions of the beauties of the Sierra Nevada mountains were influential in persuading people to preserve the area, creating Yosemite National Park. He was the first President of the Sierra Club (formed in the early 1890s), which is still a powerful force in environmentalism. Muir was a preservationist, believing that some public lands should be preserved in as close to a wilderness state as possible.

Perhaps the most important character in the controversy is the Hetch Hetchy Valley. Part of the Yosemite National Park, it was less accessible than Yosemite Valley, and hence far less famous. Like many other valleys in the Sierra Nevada mountains, it was formed by glaciers. Two of its waterfalls are among the tallest waterfalls in North America.

The story the ranger told was one of right versus wrong, good versus evil, and, even though I disagree with the stance Pinchot and Manson took, and believe that the Hetch Hetchy Valley should not have been dammed (and I believe they used some pretty sleazy rhetorical and political tactics to make it happen), I don’t think they were bad people. I don’t think they were selfish or greedy, or even that they didn’t appreciate nature. I think they believed that what they were doing was right, and they had some good arguments and good reasons, and they felt justified in some troubling rhetorical means because they believed their ends were good. I don’t think they were Evil.

After all, San Francisco had long been victimized by a corrupt water company, the Spring Valley Company, with a demonstrated record of exploiting users (particularly during the aftermath of the 1906 earthquake). San Francisco had a legitimate need for a new water supply, and the argument that such public goods should not be subject to the profit motive is a sensible argument. The proponents of the dam argued that turning the valley into a reservoir would increase the public’s access to it, and the ability of the public to benefit. The dam, it was promised, would provide electric power that would be a public utility (that is, not privately owned), thereby benefiting the public directly. Thus, both the preservationists and conservationists were concerned about public good, but they proposed different ways of benefitting the public.

Although John Muir was President and one of the founders of the Sierra Club, not everyone in the organization was certain the dam was a mistake, and so the issue was put to a vote—the Sierra Club at that point had both conservationists and preservationists. Muir wrote the case against, a pamphlet called “The Hetch Hetchy Valley,” which, along with Manson’s argument, “Statements of San Francisco’s Side of the Hetch Hetchy Reservoir Matter,” was distributed to members of the Sierra Club, and they were asked to vote.

For Muir’s pamphlet, he reused much of an 1873 article about Hetch Hetchy, originally written to persuade people to visit the Sierras. He kept much (but not all) of his highly poetical description of the Hetch Hetchy Valley, especially its two falls. His argument throughout the pamphlet is that the valley is beautiful, unique and sacred; it isn’t until the end of the pamphlet that he added a section specifically written for the dam controversy, and in that part he resorted to demagoguery, painting his opponents as motivated by greed and an active desire to destroy beauty, in the same category as the Merchants in the Temple of Jerusalem and Satan in the Garden of Eden: “despoiling gainseekers, — mischief-makers of every degree from Satan to supervisors, lumbermen, cattlemen, farmers, etc., eagerly trying to make everything dollarable […] Thus long ago a lot of enterprising merchants made part of the Jerusalem temple into a place of business instead of a place of prayer, changing money, buying and selling cattle and sheep and doves. And earlier still, the Lord’s garden in Eden, and the first forest reservation, including only one tree, was spoiled.” Muir presented the conflict as “part of the universal battle between right and wrong,” and characterized his opponents’ arguments as “curiously like those of the devil devised for the destruction of the first garden — so much of the very best Eden fruit going to waste; so much of the best Tuolumne water.” Muir called his opponents “Temple destroyers, devotees of ravaging commercialism,” saying, they “seem to have a perfect contempt for Nature, and, instead of lifting their eyes to the mountains, lift them to dams and town skyscrapers.” And he ended the pamphlet with the rousing peroration:

Dam Hetch-Hetchy! As well dam for water-tanks the people’s cathedrals and churches, for no holier temple has ever been consecrated by the heart of man. (John Muir Sierra Club Bulletin, Vol. VI, No. 4, January, 1908)

Muir’s argument is demagoguery—he takes a complicated situation (with at least three different positions) and divides it into a binary of good versus evil people. The bad people don’t have arguments; they have bad motives.

But this, too, is a controversial claim on my part, and it actually makes some people really angry with me when I “criticize” Muir. The common response is that I shouldn’t criticize him because he was a good man and he was fighting for a good cause. In other words, the world is divided into good and bad people, and we shouldn’t criticize good people on our side. And I reject every part of that argument. I think we should criticize people on our side, especially if we agree with their ends (and especially if we’re looking critically at an argument in the past), because that’s how we learn to make better arguments. And I’m not even criticizing Muir in the sense those people mean—they mean I’m saying negative things about him, and that I believe he should have done things differently. The assumption is that demagoguery is bad, so by saying he engaged in demagoguery I am saying he was a bad person.

Like Muir’s argument, that presumes a binary (or even a continuum) between good and bad people. Whether there really is such a binary I don’t know, but I’m certain that it isn’t relevant. The debate wasn’t split into good and bad people, and we don’t have to make our heroes untouchable.

And, besides, I’m not criticizing Muir in the sense of saying he did the wrong thing. I’m not sure he did. His demagoguery did no particular harm. While his text (especially the last part) is demagoguery, and he was a powerful rhetor at the time, the kind of demagoguery in which he was engaged (against conservationists) wasn’t very widespread, so he wasn’t contributing to a broad cultural demonizing of some group. And I’m not even sure that his demagoguery did any harm (or benefit) to the effectiveness of his argument.

Muir was trying to get the majority of people in the Sierra Club—perhaps even all of them—to condemn the Hetch Hetchy scheme on preservationist grounds, so he already had the votes of preservationists like himself. What he had to do rhetorically was to move conservationists (or, at least, people drawn to that position) over to the preservationist side, at least in regard to the Hetch Hetchy Valley.

A useful step in an argument is identifying what, exactly, is the issue (or are the issues): why are we disagreeing? Called the “stasis” in classical rhetorical theory, the “hinge” of an argument points to the paradox that a productive disagreement requires agreement on several points—including on the geography of the argument: what is at the center, how broad an area can/should the argument cover, what areas are out of bounds? The stasis is the main issue in the argument, and arguments often go wrong because people disagree about what it is. In the case of the Hetch Hetchy, an ideal argument about the topic would be about whether damming and flooding that valley was the best long-term option for everyone who uses the valley—such a debate would require that people talk honestly and accurately about the actual costs, the various options, and as usefully as possible about the benefits (of all sorts) to be had from preserving the valley for camping (this is a big issue in California, in which camping is very popular).

It’s conventional in rhetoric to say that you have to argue from your opposition’s premises to persuade your opposition, and that would have necessitated Muir arguing on the premises that informed conservation.

Muir’s rhetorical options included:

    1. condemning conservationism in the abstract, and trying to persuade his conservationist audience to abandon an important value;
    2. arguing that conservationism is not a useful value in this particular case, and that this is a time when preservationism is a better route;
    3. arguing that damming and flooding the valley does not really enact conservationist values (e.g., it’s actually expensive).

But, to pursue any of those strategies effectively, he’d have to make the case on the conservationist premise that it’s appropriate to think about natural resources in terms of costs and benefits. And Muir’s stance about nature—his whole career—was grounded in the perception that such a way of looking at nature is unethical.

Muir paraphrases (in quotes) the conservationist mantra: “Utilization of beneficent natural resources, that man and beast may be fed and the dear Nation grow great.” While I’ve never found any conservationist text that has that precise wording, it’s a fair representation of the basic principle of conservation; i.e., “greatest good for the greatest number.” And, certainly, conservationists did (and do) believe that there is no point in preserving any wilderness areas—all forests should be harvested, all lakes should be used, all areas should be open to hunting. But they didn’t hold that view out of a desire for financial gain so much as from a different (and I would say wrong-headed) perception of how to define “the public.”

The conservationist argument in this case was made pretty much in bad faith, in that they claimed that they would improve the beauty of the valley by making it a lake. Muir argued they would destroy it. I agree with Muir, as it happens, and so my argument is not that Muir is factually wrong; the valley was destroyed by the damming. I also think some of the dam proponents—specifically Manson—knew that it would be destroyed, and Manson was lying when he described a road, increased camping, and other features that, as an engineer, he must have known were impossible. But many of the people drawn to the conservationist plan didn’t know that Manson was describing technologically impossible conditions, and they believed the proponents’ argument that the resulting reservoir would not only benefit San Franciscans (by providing safe cheap water and electric power) but would have no impact on camping; it would, the conservationists claimed, increase the accessibility of the area without interfering with the beauty of the valley at all. Again, that isn’t true, but it’s what people believed. And part of Aristotle’s point about rhetoric, and its reliance on the enthymeme, is that rhetoric begins with what people believe.

Manson’s response was fairly straightforward, and grounded, he insisted repeatedly, on facts. He argued:

    • San Francisco owned the valley floor.
    • Construction would not begin on the Hetch Hetchy dam until and unless San Francisco first developed Lake Eleanor (a water source not disputed by the preservationists) and then found that water source inadequate.
    • A photo he presented showed what the valley would look like when dammed and flooded—very little of the valley flooded, with no obstruction of the falls that Muir praised so heavily, and a road around the edge enabling visitors to see more of the valley—so, he said, the valley would be more beautiful, reflecting the magnificent granite walls.
    • Keeping the reservoir water pure will not inhibit camping in any way.
    • The Hetch Hetchy plan is the least expensive option, and it will provide energy, thereby breaking the current energy monopoly.

Muir’s arguments, he says, “are not in reality based upon true and correct facts” (435).

Marsden Manson was City Engineer for San Francisco, and had done thorough reports on the issue. And so he had to know that almost all of what he was saying was “not in reality based upon true and correct facts.” San Francisco had bought the land, but, since it was within a national park, the seller had no right to sell it. Construction would begin immediately on the dam, flooding the entire valley, making the entire valley inaccessible, including the famous falls. It was not possible to build the roads that Manson drew on the photo and, being an engineer, he must have known that. The reservoir inhibited camping, and, most important, the Hetch Hetchy plan was the most expensive option available to San Francisco. Manson had muddled the numbers to make it appear less expensive.

In other words, either Manson lied, or he was muddled, uninformed, bad at arithmetic, and not a very good engineer.

Manson’s motives in all this are complicated, and ultimately irrelevant. He may have expected to benefit personally from the approval of the dam project, as he may have thought he would build it. But it would have been a benefit of glory, not money; I’ve never read anything to suggest that he was motivated by anything other than a sense that dominating nature is glorious, and that public projects providing water and power are better than preserving valleys. (He is reputed to have suggested damming and flooding Yosemite Valley.)

In other words, what presented itself as the pragmatic option was just as ideologically driven as what was rejected as the emotional one (I think the same thing happens now with arguments about the death penalty, welfare “reform,” the war on drugs, foreign policy, the deficit—there is a side that manages to be taken as more practical, but it might actually be the most ideologically driven).

Muir’s rhetorical options were limited by his opponent, an engineer, making claims about engineering issues that neither Muir nor his supporters had the expertise to refute. It took years for someone to look at the San Francisco reports and determine that the numbers were bad; preservationists didn’t know (and, presumably, many supporters of the dam didn’t know) that the numbers were misleading, and it was the most expensive option.

But would Muir have argued on such grounds anyway? To argue on the grounds of cost would have confirmed the major premise that public projects should be determined by cost—to say that the Hetch Hetchy dam should not be built because it is the most expensive option would seem to confirm the perception that you can make natural cathedrals “dollarable,” in Muir’s words. In other words, Muir rejected the very terms by which the conservationist argument was made—he rejected the premises. To argue on an opponent’s premises (except in rare circumstances) seems to confirm them, and so, in order to win the Hetch Hetchy argument, he would have had to argue against what he had spent a lifetime arguing for: that we should not look at nature in terms of money. Wilderness areas are, he insisted, sacred. And so he railed against his opposition.

As I mentioned above, I’m often attacked by people who think I’m attacking Muir. And I think that misunderstanding arises because of a particular perception of what the discipline of rhetoric is for: rhetorical analysis is often seen as implicitly normative; we do an analysis to say what a person should do or should have done. So, to say that Muir’s rhetorical strategies didn’t work is to say his rhetoric was bad, and it should have been different. Coupled with the notion that good people promote good things, if I say that Muir’s rhetoric was “demagoguery,” then I am saying he cannot have been a good person. There is, here, a theory of identity: that people are either good or bad; that good people say good things, and that bad people say bad things; that demagoguery is something only bad people do. That whole model of discourse and identity is wrong in too many ways to count, and I am not endorsing it.

I think Muir was a good man—he is a personal hero of mine—but that doesn’t mean he was perfect, and it certainly doesn’t mean we can’t learn from him. Muir did well within the Sierra Club (the vote was about 80% on Muir’s side and 20% in favor of the dam), but he ultimately lost the argument. And I think what we learn from his failure to persuade all conservationists to vote against the Hetch Hetchy project is not about Muir’s personal qualities or failings, but about rhetorical constraints and models of persuasion.

I’m arguing that, for Muir to have persuaded his opposition, he would have had to rely on premises that he rejected. This is sometimes called the “sincerity problem” in rhetoric: to what extent, and under what circumstances, should we make arguments we don’t believe in order to achieve an end in which we do believe? Muir didn’t argue from insincere premises; that may have weakened his effectiveness in the moment. But it definitely strengthened his effectiveness in the long run. His Hetch Hetchy pamphlet continues to be powerfully motivating for people, perhaps more motivating than it would have been had he compromised his rhetoric in order to be effective in the short term. Muir’s demagoguery did no harm, and it may have even done some good. Demagoguery isn’t necessarily harmful.


[image source: https://en.wikipedia.org/wiki/Hetch_Hetchy#/media/File:Hetch_Hetchy_Valley.jpg]