The Principled Position on Pussy-Grabbing

I crawl around the internet and argue with people. And there is a recurrent argument that, for me, is what’s wrong with our current political deliberation in a nutshell.

A person (often a woman) says she couldn’t vote for Hillary (note that Clinton is identified by her first name) because Clinton called the women her husband assaulted sluts and whores. So they voted for a man who bragged that he assaulted women, or they voted in a way that enabled a self-proclaimed sexual predator to become President because they wouldn’t vote for a woman who might have enabled a sexual predator. They wouldn’t vote for someone who did what they are doing by how they are voting. That’s interesting.

It’s interesting that the serious logical problems of that argument don’t occur to them. So, why don’t they?

It’s interesting that they’re trying to argue that their opposition to Clinton is principled, when the principle (don’t vote for someone who supports sexual predation) is violated by their arguing for a self-confessed perpetrator (not just a possible enabler) of sexual predation. Why vote for a self-confessed sexual predator (and thereby enable sexual predation) on the grounds that the other candidate might have enabled sexual predation? It’s also interesting how often these women claim that their stance is Christian, while they are cognitively reconciling their belief that they are promoting Christianity with voting for a self-confessed sexual predator, a man whose wife posed for porno photos (which conservative Christians claim to abhor, and yet neither he nor his wife has said they think those photos were a bad choice), who has a history of adultery, and whose “Christianity” appeared only when it was useful.

Okay, let’s take their argument at face value. They are saying that their position is not sheer factionalism—it isn’t that they would vote for roadkill were it the Republican nominee—they have principles for voting this way. Let’s call this argument the “sexual predation principle” argument.

And, obviously, it’s an argument that trips over its own tongue. Voting for a self-confessed sexual predator because you can’t vote for someone who is doing what you’re doing by voting for Trump (enabling a sexual predator) isn’t an argument from principle about abhorrence of sexual predation.

It’s something else entirely. So, what is it?

And here is something that makes it all more interesting. We have, on tape, Trump bragging about sexually assaulting women. There is no good evidence that Clinton said the accusers were whores or sluts. The sites that claim Clinton did that (and you can google it, because I don’t want to give them the clicks—they’re clickbaity sites) refer to an unsourced anonymous claim that someone said to someone that she had said it to them. There are no sites that quote Clinton directly, let alone show video of her calling the accusers sluts or whores.

I’ve argued with people who claim they saw a video of Clinton saying that. There is no video. There never was. (If there were, you would have seen it throughout all of 2016.) That’s the known phenomenon of people creating an image of a claim they’ve heard over and over (for more on that, see Age of Propaganda). So, why do people have a clear image of a video that never existed?

Because their hatred of Clinton is so visceral as to be visual.

Well, okay, they hate Clinton, and they can list reasons. But are those reasons grounded in principle?

Here’s why that matters. There are, loosely, two ways to reason: one is grounded in ethical principles—that, regardless of who is doing something, you condemn or approve of that thing. Christ endorsed that method of thinking about ethics when he said “Do unto others as you would have them do unto you.” It’s also the good Samaritan story—an act is right or wrong on its own merits, and not on the basis of who does it.

The other method of thinking about whether something is right or wrong is the one Christ continually rejected—that a thing done by this kind of person is right (if you think that kind of person is right) and it’s wrong if it’s done by a kind of person you think is wrong. That kind of reasoning is purely factional (or tribal, if you prefer that term): people like you are good, and people not like you are bad.

It’s hard for people to see when we’re engaged in factional ethics because we can always come up with instances of bad behavior on the part of the other faction, and so we can sincerely believe our perception of our faction as always better is proven by evidence (aka, confirmation bias). But here’s what factional reasoning can’t do: hold all the factions to the same standards.

If Clinton was wrong to enable sexual predation, then Trump was worse.

That conclusion comes from holding principles the same regardless of faction, and people often don’t reason that way about ethics. People think that they’re behaving in a principled way when they’re reasoning on the basis, not of a logical principle, but of a generalization about their group versus the other group. It seems like reasoning from a principle, but the operative principle is that “my group is good.”

And too much American political discourse is on those grounds. That people reason factionally is shown most obviously when someone points out the inconsistency. For instance, if you say to me, “Well, you say that Your Candidate is good because she cares about the environment, but she took $10 million from an oil company to hide their oil spill,” a factional (and not principled) response is for me to say, “Well, Your Candidate did it too.” It doesn’t matter if Your Candidate did–that doesn’t mean mine didn’t.

Where that argument should go, if it’s a good one, is an acknowledgement on the part of everyone that both candidates did it, and then we can argue about which is worse.

If you believe that your faction is always right, you might mistake reasoning from that premise (My faction is right; this person is a member of my faction; therefore, this person is right) as operating from a principle because you believe your faction to be more principled than any other.

Unhappily, a lot of the people who voted for a sexual predator did so because they believe that only the Republicans support Christ’s political agenda.

Let’s set aside the most obvious problems with that (Christ didn’t say “except for these people”), and just try to understand that these are people who believe that their political agenda is so Christian that they are justified in treating their political opponents in ways that violate what Christ said about how we should treat others.

What that means is that their political agenda is more important than a pretty clear commandment from Christ.

That’s political factionalism. Whether their political agenda is the same as what Christ would want is up for argument. Whether they’re violating what Christ said about doing unto others is not. They are, and they’re trying to come up with reasons as to why it’s okay.

So, it’s taking a particular and factional political agenda and insisting that only that agenda is good. That’s anti-democratic.

And here’s another way that it’s what’s wrong with American political discourse in a nutshell. It’s ignorant of history. American Christians have a long list of sins on our plate (especially conservative Christians)—policies that were, actually, sheer factionalism, in-group preference, or sheer prejudice. Advocating slavery, defending segregation, opposing unions or any protection for workers’ safety, refusing to allow Jewish refugees from Nazi Germany to come here—all of those things were presented by conservative Christians as the obvious political agenda of Jesus. Oddly enough, a lot of conservative Christians now want to claim the movements against those policies as proof that they are right, but those movements are evidence they’re probably wrong: they were all progressive and liberal Christian movements, demonized by conservative Christianity. [1] Conservative, even moderate, Christians were opposed to Martin Luther King, Jr., and condemned him.

There is a second problem with trying to cite those movements as proof that what politically conservative Christians are doing now is right: all of those movements insisted on the “do unto others” test, the very one rejected by conservative Christians now.

Support of Trump fails that test.

So, let’s stop pretending that “I voted for Trump because Clinton supported her husband” is some sort of principled stance. It isn’t. Let’s stop pretending that people who make that claim are feminists, or allies, or anything other than people who wanted Trump to get elected, and needed a reason that made them feel comfortable.

It’s what’s wrong with American political discourse in a nutshell because it looks as though the person is taking a principled stance, when, in fact, there is neither a logical nor ethical principle consistently applied. It’s a rabidly factional defense of a logically indefensible position. It’s just a way of managing the cognitive dissonance of voting for Trump only because he’s in their faction. But, let’s admit it isn’t principled, and it violates what Christ said about doing unto others.


[1] The appalling crime on the part of progressive Christianity, eugenics (also supported by many conservative Christians), also violated the “do unto others” rule.


The Holocaust and Christianity

“Hitler attracted Christians by criticizing the liberalism of democratic government and by advocating a tougher, law-and-order approach to German society. He opposed pornography, prostitution, abortion, homosexuality, and the ‘obscenity’ of modern art, and he awarded bronze, silver, and gold medals to women who produced four, six, and eight children, thus encouraging them to remain in their traditional role in the home. This appeal to traditional values, coupled with the militaristic nationalism that Hitler offered in response to the national humiliation of the Versailles Treaty, made National Socialism an attractive option to many, even most Christians in Germany.” (11, _Betrayal: German Churches and the Holocaust_)

The alt-rechts, or, in English, the old right

The old right, I mean alt-right, is clear what they’re doing.

Here’s what’s interesting to me about the current old-right. I am a huge fan of Raphael Ezekiel’s ethnographic study of neo-Nazis (The Racist Mind). But I’ve often wondered whether the internet changed the dynamics. Years after the book came out, he wrote an article, “The Racist Mind Revisited,” that I found really useful. I’ve been reminded of it this weekend, and I’m thinking that the internet changed how the message is disseminated, but not what the message is, nor why it’s persuasive.

These quotes are from this article: “An Ethnographer Looks at Neo-Nazi and Klan Groups: The Racist Mind Revisited.” American Behavioral Scientist, 09/2002, Volume 46, Issue 1.

I’ll mention that he also talks about the macho ideology and the marginalized participation of women.

“Americans today often learn about Nazis and the Ku Klux Klan through television clips of rallies or marches by men uniformed in camouflage garb with swastika armbands or in robes. These images often carry commentary implying that the racist people are particularly dangerous because they are so different from the viewer, being consumed by irrationality. The racists and their leaders are driven by hatred, it is suggested, and one can scarcely imagine where they come from or how to impede them.” (51)

“The movement’s ideology emerged as one interviewed leaders, listened to their speeches, and read movement newspapers and pamphlets. Two thoughts are the core of this movement: That “race” is real, and those in the movement are God’s elect. Race is seen in 19th-century terms: race as a biological category with absolute boundaries, each race having a different essence—just as a rock is a rock and a tree is a tree, a White is a White and a Black is a Black.” (53; he’s avoiding the first person.)

“At the ideological level—in the writings and speeches of leaders—the contemporary Klan has joined the neo-Nazis in identifying the Jews as the prime source of evil. Leadership speeches throughout the movement present “the Jew” as the central enemy, with African Americans, Latinos, and Asians as the rather dumb members of “the mud races” who are pawns of the Jews, as are many brainwashed Whites. The leadership ranks gay men and lesbian women with Jews in the enemies list.

“Among the rank and file, the picture is more traditional. Most followers whom I have met exhibited intense prejudice against African Americans that tended to reflect the general prejudice of their families and neighborhoods. Followers could repeat the party line about the Jews, but my strong impression from interviews and from watching socialization into the Detroit group was that new members arrived with strong antipathy toward Blacks but little interest in Jews. They came in hating Blacks and liking the idea that the movement represented Whites in a struggle against Blacks; after entry, they had to be taught who the Jews are and why they should hate them.” (55)

“The youths I met had first become involved in racist activity in junior high school. Their prior (and subsequent) schooling had not led them to harbor a concept of community. The classroom had seldom been shaped as a community in which class members had felt mutual responsibility for one another. On the contrary, the classroom probably had reflected the desperation and the atomization of the society outside the school.

“Equally, the schools had left no feel for democracy. The youths had no positive association to the word, which seemed to them a meaningless term used by adults for hypocritical purposes. School had afforded little chance for real impact on decisions that mattered, opportunities to learn in action the meaning of the word democracy. Both community and democracy can be taught through experience in the classroom, when schools consider these goals part of the curriculum and invest energy in building related skills.

“For the neo-Nazi youths, the teaching in school of multiculturalism had been another adult exercise in hypocrisy. Black History Month was an annual annoyance. It is easy for an adult-led discussion to seem like sermonizing.” (65)

When giving a reason isn’t reasonable: Associational and logical reasoning

If I tell you that you should do something that you pretty much already want to do anyway, and my reason is something you think is true, you might sincerely believe that I’ve given you a logical and reasonable argument, even if there is no logical relationship between my conclusion and my reason.

If I say, “You should vote for [the candidate of the party you always support] because [the candidate of the party you hate] did/said this bad thing,” you might feel you’ve been given a logical and reasonable argument. But that isn’t a logical argument at all—it’s just an appeal to rabid factionalism. It feels logical because you’re likely to believe that your commitment to your party is rational, and that supporting the other party is irrational.

But whether you feel your position is rational isn’t actually a good measure. Is there a major premise that you would accept in the abstract, even if it didn’t get you the conclusion you want? This argument doesn’t have one. This bad thing the other candidate did—is it something that would cause you to refuse to vote for your party’s candidate? If not, then you don’t have a logical argument; you have rabid factionalism.

If I told you, “You should clean my litterboxes because 2 + 2 = 4,” you would probably catch the logical problem. But it’s no less logical than “You should vote for [the candidate of the party you always support] because [the candidate of the party you hate] did/said this bad thing.” Neither has a defensible major premise.

We don’t tend to catch the logical problems (unless we deliberately work at it) when we like the conclusion and the minor premise. If my evidence is associationally related to my conclusion (if you believe my evidence and you’re sorta open to my conclusion) you won’t notice that problem. If you’re feeling a little guilty about not doing enough around the house, or you feel you owe me a huge favor, then, suddenly, “You should clean my litterboxes because 2 + 2 = 4” might seem like a “good” argument to you. It’s only good insofar as it seems to give you the justification to do something you were pretty open to doing anyway. But it’s no more logical than it was when you didn’t want to do it.

Associational rhetoric works particularly well when we’re talking about an outgroup. Since we generally consider an outgroup icky, then an argument that says, essentially, “My policies are good because the outgroup is icky,” will genuinely seem to be a logical or reasonable argument, but it’s all association. (The outgroup might actually be icky and the policies disastrous.)

If you really like Chester Burnette as a candidate, and you loathe squirrels, and I say, “Chester Burnette is a great candidate because squirrels are evil!” the argument might seem “logical” to you. I’ve given you a claim, and I’ve given you a reason. I would probably follow up with lots of evidence about evil things squirrels have done. So, you could easily believe that your attitude about Burnette was totally logical.

But, what if Hubert Sumlin also advocated policies that would restrict squirrels? In logical terms, the “major premise” of the “Chester Burnette is a great candidate because squirrels are evil” enthymeme is… well, what is it? An enthymeme is supposed to be a compressed syllogism.

A compressed syllogism would have the major premise of “Everyone who hates squirrels is a great candidate,” a minor premise (the evidence) of “Chester hates squirrels,” and the conclusion that “Chester is a great candidate.”
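For readers who like the logic spelled out, that compressed syllogism can be sketched in first-order terms (the predicate names here are just illustrative labels, not anything standard):

```latex
\begin{align*}
&\text{Major premise:} && \forall x\,\bigl(\mathrm{HatesSquirrels}(x) \rightarrow \mathrm{GreatCandidate}(x)\bigr)\\
&\text{Minor premise:} && \mathrm{HatesSquirrels}(\mathrm{Chester})\\
&\text{Conclusion:} && \mathrm{GreatCandidate}(\mathrm{Chester})
\end{align*}
```

The catch is that the major premise quantifies over everyone: grant $\mathrm{HatesSquirrels}(\mathrm{Hubert})$ and the same premise forces $\mathrm{GreatCandidate}(\mathrm{Hubert})$. Anyone who accepts the Chester conclusion but rejects the Hubert one was never actually reasoning from that premise.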

Look at it this way. Imagine that I said, “Hubert Sumlin is a Nazi because he wears a brown shirt,” and you really don’t like Sumlin. You might notice he really does sometimes wear a brown shirt, and, of course Nazis did too! That’s associational thinking, because it ignores the major premise. That’s only a logical argument if you are willing to say that everyone who wears a brown shirt is a Nazi. You aren’t (or shouldn’t be, anyway), and you also either need to say that Hubert is just as good a candidate as Chester or else your argument isn’t logical.[1]

Associational reasoning isn’t necessarily bad. I happen to think it’s really helpful when you’re brainstorming, and it’s clear from the history of science that associational reasoning has had some tremendous benefits. But, like arguments from identity, it’s just one data-point. It is useful, but not sufficient, for democratic deliberation. It isn’t policy argumentation.

What’s useful about thinking in terms of logic and not association is that it helps us step back from what social psychologists call motivated reasoning. We can always find a reason to do something we want to do anyway—we are motivated to find reasons to support our ingroup, justify what we want to do, rationalize away something awful we’ve done. But being able to attach a reason to a belief doesn’t make a belief reasonable.

[1] Sometimes when I make this argument, people will say, “I don’t care if it’s logical to support Chester because he hates squirrels–I just do.” Well, that’s fine, but then you don’t support Chester because he hates squirrels; you just support Chester. And just admit that your opposition to Chester’s critics is just ingroup loyalty–you don’t value fairness across groups.



Reagan and Trump


A few years ago, I was talking to one of those young people raised to believe that Reagan was pretty nearly God, and he said to me, “At first I was mad when people said Reagan has Alzheimer’s but then I decided that it didn’t matter.”

I thought that was interesting. He wasn’t mad because it was false; he was mad because it seemed mean. He didn’t change his mind about it because he went from thinking it was false to thinking it was true; he changed his mind because he found a way to treat it as not mattering.

This was in 2005 or so, but it perfectly reproduced my experience of arguing with Reagan supporters in 1980. Reagan said a lot of things about himself and his record that were untrue. He might have sincerely believed them or not–I think he did–but if you pointed out he was saying things that were untrue, his fans said you were mean. He declared his candidacy literally on the site of one of the most appalling pro-segregation murders of the 1960s, and said he was in favor of states’ rights, and his supporters were apoplectic if you said he was appealing to racism. “He isn’t racist,” they’d say. “He’s a good man.”

If you tried to point out that the economic model on which he was going to base US policy was thoroughly irrational in that it was completely unfalsifiable, you were rejected as some kind of egghead.

When I asked a few more questions (such as, if your policies are best advocated by a person with Alzheimer’s, maybe there are problems with the policies), it became clear that he saw Reagan’s failure to be able to grasp complicated things as a virtue. That’s what made Reagan go for simple solutions, he thought, and he thought that meant that Reagan cut through the bullshit.

That, too, was my experience of Reagan supporters in the 80s (except the Marxists I knew who voted for him because they said he would bring about the people’s revolution faster, and the Dems who voted for him as a protest vote against Carter and then Mondale). They liked that he didn’t seem to understand the complexities of political situations. They sincerely believed that political issues aren’t really complicated, but are made so by professional politicians and eggheads just trying to keep their jobs, and so a person who looked at things in black and white terms would get ’er done.

I think we have the same situation now. Clearly, the WH is made up of people who don’t understand the law about any of the things they’re trying to enact or the things they’re doing (whose defense is that they don’t and never did), who never had clear plans for any of the things they said they would achieve, who don’t understand how government actually works, who don’t understand what it means to be President, who are mad that they’re being treated the way they treated the previous President, and who are just engaged in rabid infighting.

People with even a moderate understanding of history are worried because this never works out well (for anyone, including his own party). People with a cherrypicked version of history don’t think it matters because they think he’ll enact the GOP agenda (and they think that’s great). And his base thinks it’s great because they think that a person who doesn’t think anything is complicated and isn’t deeply informed is exactly what we need.

What happens in an era of a rabidly factionalized media

In May, where I work, a young man with mental health issues stabbed several people (including a person of color). He was immediately subdued by some police officers who arrived quickly because they were on bikes. The politically useful narratives of this event arrived just about as fast as the police officers.

In July of 1835, some gamblers were lynched in Vicksburg, Mississippi, after a typical pre-lynching “trial.” An early account says it was because they behaved badly at a fourth of July celebration. There were later other versions.

The incident at my university quickly became a datapoint about the victimization of white males, the inherent violence of black males, and the failure of the liberal media to be sufficiently alarmist about the black/liberal conspiracy to exterminate white males. That it was such a datapoint was “proven” by several claims that turned out to be false: that the attacker went after “Greek” white males (he stabbed a non-white student and wasn’t targeting “Greeks”), and that there was another stabbing of a “Greek” white male at the same time (there wasn’t). In the incident of the gamblers, the narrative morphed: the gamblers became abolitionists, and then got glommed onto a non-existent conspiracy led by a man named Murrell.

These two incidents seem to me extremely similar, and the similarity between them is why I’ve been worried for some time about American political discourse—the way the public “knows” things is worryingly similar to the rhetoric that got us into a war.

That’s obviously a strange argument, but not deliberately perverse. I mean it.

The short version is that how both incidents were quickly renarrated and used signifies larger problems with the normal political discourse of the day. I could have picked another pair—the Charleston pamphlet mailing and Benghazi, for instance—and the similarity would remain. The similarity isn’t about the incidents, but about how they were transformed into an entirely and obviously false narrative that resisted all attempts at refutation. It’s about the easy demagoguery of everyday politics.

This is a sort of complicated argument in that there is so much demagoguery about demagoguery that I have to do a lot of clearing before I can make the argument I want to make. And I worry about starting with a statement of my argument, since my whole point is that what caused the Civil War was that a large number of people refused to listen to anything that might contradict their central beliefs. The Civil War is, unhappily, a great example of how the narration of historical events gets glued on to current issues of ingroup identification (so that whether a particular narration is “true” is determined by whether it is loyal to the ingroup).

In a culture of demagoguery, all issues are reduced to a competition between the ingroup and outgroup. A claim is “true” if it shows that the ingroup is better than the outgroup.

Popular understandings of the Civil War (really, a failed revolution) are dominated by ingroup/outgroup thinking. For many people, admitting that secession and the firing on Fort Sumter were bad ideas would entail admitting that ingroup members behaved badly. There isn’t a way to look clearly at primary documents about slavery, the declarations of secession, the proslavery provocation of war, segregation, and “the South” that doesn’t involve acknowledging that “the South” (an instance of strategic misnaming, explained below) judged things badly.

The South—that is, the entirety of people in the southern regions of the US—never supported the fairly bizarre system that was US slavery. While Native American tribes in the southern regions had slaves, the system was nothing like the dominant version (which was primarily lucrative because of selling slaves); it’s reasonable to think the enslaved themselves didn’t support slavery; Quakers and others were sometimes opposed to slavery, and often opposed to the dominant system (which, by the 1830s, largely prohibited teaching slaves to read the Bible, and which violated property rights by the amount of state control over what slaveholders could do with what law said was their property). The equation of “the South” and “proslavery” is an example of the “no true Scotsman” fallacy, in which disconfirming examples are simply not counted.

[This is NOT to say that the large number of white politicians who criticized slavery opposed it, by the way, since many of them were making either the “necessary evil” or “wolf by the ears” argument. The first of those was that slavery was bad, but it was necessary for some vague greater good, and the second (most famously promoted by Jefferson) was that slavery was a crime against Africans, and they were so justified in being angry about slavery that we couldn’t free them. Slavery enabled us to hold them down, and any release of that hold would result in their killing whites in an act of justified rage. So, we must maintain slavery.]

When we talk about “the South” we generally mean the white proslavery political leaders, and their motives in secession were absolutely clear: they were protecting and promoting slavery. And that is what they said, over and over, every time the issue came up. Speeches in Congress, speeches for secession, declarations of secession, speeches at fourth of July celebrations, sermons, judicial decisions—the South was about slavery.

So, anyone who wants to argue that the Civil War was about “states’ rights” and not slavery has to argue that the people who wrote the declarations of secession and the people arguing in favor of secession were lying. [They also have to explain how the Dred Scott decision and the Fugitive Slave Laws respected the principle of states’ rights—I’ve always found it entertaining how a CSA apologist will, when presented with that argument, either go silent or threaten violence—both responses are admissions that they have no rational response.]

There is a more complicated argument about secession not really being about slavery per se, but about how Southern political and intellectual leaders wove slavery into Southern culture. That argument is that proslavery rhetoric had become a staple of American politics, with one-upmanship about loyalty to slavery requiring that Southern politicians (and their non-Southern allies, called “doughfaces” because proslavery politicians bragged they could make them have any emotion they wanted) get increasingly extreme in arguments about what should be done to ensure the expansion of slavery. So, it wasn’t slavery, but rhetoric about slavery that caused many slave states to engage in the extraordinarily unwise and unnecessary act of secession.

What people often don’t realize is that slavery was safe, even under Lincoln. Slavery was well-ensconced in US politics, with a majority of the Supreme Court, Congress, and the Presidency. Lincoln’s election was a glitch, in that he was only able to win the Presidency because the proslavery forces split. And he was willing to support a constitutional amendment to protect slavery in the existing slave states. The rational choice on the part of slave states would have been to sit tight until the next election, resolve their internal divisions, and elect another proslavery President.

Thus, were secession really about slavery as an economic institution, it wouldn’t have happened—slavery as an economic institution was safe, unless you believe the evidence (which is pretty compelling) that slavery was not an economically efficient way to grow sugar or cotton. There are some who argue that slavery was deliberately uneconomic in that owning slaves wasn’t about making money, but was a marker of success. So, just as driving an unnecessarily large car with poor gas mileage is a marker of masculine success in our culture, and not a rational economic choice, so owning slaves was a marker of masculine success in the antebellum South. A different argument is that slavery wasn’t profitable as an economic system, but it was profitable as a sales system—the profit in slavery came from selling slaves, so slavery was only profitable as an economic institution if there were expanding markets for slaves. If you put both these arguments together, then the otherwise irrational behavior of proslavery rhetors makes more sense, in that, while Lincoln was willing to allow slavery to exist eternally in slave states, he wouldn’t let it expand. Certainly, a lot of primary documents of the era insist on the importance of opening new markets to slavery. (If you want to see a longer review of scholarship on this argument, and my own take, see Fanatical Schemes.)

Whatever the motivations—and perhaps all three arguments are right about some set of people—from the 1820s until the Civil War, proslavery rhetoric was consistent: every single political issue was about ingroup (proslavery) and outgroup (not proslavery), and any success on the part of the outgroup meant the extermination of the ingroup. And that is our situation now. And while all parties engage in it too much, not all sides do so to the same degree.

And, no, “both sides” aren’t equally guilty, because saying there are only two sides is part of the problem.

I’m saying that the Civil War wasn’t about slavery, per se, but was the consequence of proslavery rhetoric. Slavery can’t cause a war, but how people value it, what they connect it to, what it means to them, how central it is to their sense of identity, how they think they would look if they were seen as not supporting slavery—all those things can cause people to go to war, because those things cause people to believe that their identity is threatened with extermination if this policy passes. And that’s what pro-secession rhetoric said (rhetoric that went back into the 1820s): if we don’t get this policy passed, then the Federal Government will send troops into the South and force abolition on us, and then we’ll have race war (it’s disturbingly similar to NRA rhetoric about the Federal Government knocking down doors, taking guns, and the riot of criminals that will ensue).

In a world in which you’re hearing the same claims, and the same kinds of claims, repeated everywhere, the fact that none of them is true doesn’t matter as far as the impact those rumors can have. There is a Chesterton story in which Father Brown says that people think that 0 + 0 + 0 + 0 equals more than 0, and I’ve always thought it a sweet description of how antebellum proslavery rhetoric worked (and how much rhetoric works now): a long series of non-events is taken as proof of something by many people simply because the series is so long, and they forget it’s a series of false predictions. If the media we’re consuming is repeatedly wrong, the rational choice is to abandon it as unreliable. But, if the media keeps making predictions we want to be true, then the fact that those predictions are always false doesn’t make us mistrust the media—we trust them more because we perceive them as media that want the same things we do. [I’ll mention two examples: Charles and Camilla breaking up, and the world ending this year. The fact that those predictions are always wrong doesn’t destroy the credibility of the predicting media for many people, because those media keep making the prediction—the sheer repetition triggers the cognitive bias of no smoke without fire.]

Tremendous numbers of people who didn’t financially benefit from slavery personally identified with slavery, and so they sincerely believed that an end to slavery meant an extermination of their identity.

And none of that was true: Lincoln wouldn’t end slavery; the end of slavery wouldn’t mean race war; as was demonstrated in the non-slave states, it was quite easy to maintain white supremacy without slavery. But the proslavery claim that it was either support slavery in the most extreme ways possible or there will be race war would have seemed true to someone reading southern newspapers because those papers were full of reports of events that never happened. And that argument signified what was, to me, the most striking characteristic of antebellum Southern newspaper rhetoric—it was rabidly factional.

It wasn’t a binary. In the 1830s (the era in which I dredged deep), there were multiple parties. And each party had its newspaper system, and each system reprinted articles from others in the system. Some reports were shared (fabricated reports about abolitionist conspiracies would be reported in all the factions hoping to benefit from anti-abolitionist fear-mongering, for instance), and some weren’t, but an article was printed or not on the basis of whether it helped the faction. And all those papers had mottos like “free of faction.”

In rhetoric, that’s called strategic misnaming. You simply declare that you’re doing the opposite of what you’re doing. It works to a disturbing degree, mostly with people who make political decisions on the basis of political faction (or ingroup favoritism).

Someone reading southern newspapers could list all sorts of times that abolitionists engaged in conspiracies of extermination against them. The very real incidents of mass killings of “them”—Native Americans, African Americans, anyone accused of abolitionism—were not mentioned, or were not framed as incidents of ingroup violence. They were self-defense, even if the incidents that supposedly justified the revenge hadn’t actually happened (and that was common). Consumers of that media couldn’t have a reasonably accurate understanding of who was committing violence against whom. There were in the antebellum era (and in the postbellum) communally insane acts of violence against the bodies of Others (mostly African or Native American, but with other kinds of Other thrown in), all of which were rhetorically rationalized as self-defense, and none of which were. Some, like the gambler incident, had nothing to do with politics, and some were political only in the sense that the people enacting the rhetorically-framed “revenge” violence were motivated by racist or proslavery politics. So, in the antebellum era, everything was politicized. And even when, as in the case of the gamblers, the correct version of the incident was available to the media, the false version lumbered around the public sphere, crushing any accurate version.

And here we return to the tragedy of my campus. The incident on my campus was not racially motivated, and it was not part of some massive conspiracy against privileged white males. The notion that it was part of a May Day revolution, that an antiracist group had anything to do with it, or that there were other attacks has been thoroughly and completely refuted in any media open to reason. But we live in a world so rabidly factionalized that many of the media that promoted the false version either continue to repeat the false one, or have never repudiated the false one. And so the fear-mongering one lumbers around the internet confirming people in that informational cave that black people and liberals are conspiring against them, that whites are the real victims here, and that the “liberal media” won’t report the truth about the war on whites. And so, as in the antebellum public sphere, there are people roused to violent levels of self-defense over incidents that never actually happened.

In other words, those two incidents worry me because they indicate eras with similar ways of arguing about politics. Then, as now, many people believe that you should get all your information from people who are like you, who share your values, and who remain in a state of permanently charged outrage about them. You only trust people who, like you, insist that we are inherently and essentially good and they are inherently and essentially bad.

Since the dominant method of political argument didn’t play out well in the antebellum era—it ended in a war that was unnecessary—maybe we should rethink our doing it now.


The easy demagoguery of explaining their violence

When James Hodgkinson engaged in both eliminationist and terroristic violence against Republicans in 2017, factionalized media outlets blamed his radicalization on their outgroup (“liberals”). Yet in 2008, when James Adkisson committed eliminationist and terroristic violence against liberals, actually citing in his manifesto things said by “conservative” talk show hosts (namechecking some of the very ones who would later blame liberals for Hodgkinson), those media outlets and pundits neither acknowledged responsibility nor altered their rhetoric.[1]

That’s fairly typical of rabidly factional media: if the violence is on the part of someone who can be characterized as them (the outgroup), then outgroup rhetoric obviously and necessarily led to that violence. That individual can be taken as typical of them. If, however, the assailant was ingroup, then factionalized media either simply claimed that the person was outgroup (as when various media tried to claim that a neo-Nazi was a socialist and therefore lefty), or they insisted this person be treated as an exception.

That’s how ingroup/outgroup thinking works. The example I always use with my classes is what happens if you get cut off by a car with bumper stickers on a particularly nasty highway in Austin (you can’t drive it without getting cut off by someone). If the bumper stickers show ingroup membership, you might think to yourself that the driver didn’t see you, or was in a rush, or is new to driving. If the bumper stickers show outgroup membership, you’ll think, “Typical.” Bad behavior is proof of the essentially bad nature of the outgroup, but bad behavior on the part of ingroup members is not. That’s how factionalized media work.

So, it’s the same thing with ingroup/outgroup violence and factionalized media (and not all media is factionalized). For highly factionalized right-wing media, Hodgkinson’s actions were caused by and the responsibility of “liberal” rhetoric, but Adkisson’s were not the responsibility of “conservative” rhetoric. For highly factionalized lefty media, it was reversed.

That factionalizing of responsibility is an unhappy characteristic of our public discourse; it’s part of our culture of demagoguery, in which the same actions are praised or condemned not on the basis of the actions, but on whether it’s the ingroup or outgroup that does them. If a white male conservative Christian commits an act of terrorism, the conservative media won’t call it terrorism, will never mention his religion or politics, and will generally talk about mental illness; if someone even nominally Muslim commits the same act, they call it terrorism and blame Islam. In some media enclaves, the narrative is flipped, and only conservatives are acting on political beliefs. In all factional media outlets, they will condemn the other side for “politicizing” the incident.

While I agree that violent rhetoric makes violence more likely, the cause and effect is complicated, and the current calls for a more civil tone in our public discourse are precisely the wrong solution. We are in a situation in which public discourse is entirely oriented toward strengthening our ingroup loyalty and our loathing of the outgroup. And that is why there is so much violence now. It isn’t because of tone. It isn’t because of how people are arguing; it’s because of what people are arguing.

To make our world less violent, we need to make different kinds of arguments, not make those arguments in different ways.

Our world is so factionalized that I can’t even make this argument with a real-world example, so I’ll make it with a hypothetical one. Imagine that we are in a world in which some media insist that all of our problems are caused by squirrels. Let’s call them the Anti-Squirrel Propaganda Machine (ASPM). They persistently connect the threat of squirrels to end-times prophecies in religious texts, and they relentlessly connect squirrels to every bad thing that happens. Any time a squirrel (or anything that kind of looks like a squirrel to some people, like a chipmunk) does something harmful, it’s reported in these media; any good action is met with silence. These media never report any time that an anti-squirrel person does anything bad. They declare that the squirrels are engaged in a war on every aspect of their group’s identity. They regularly talk about the squirrels’ war on THIS! and THAT! Trivial incidents (some of which never happened) are piled up so that consumers of that media have the vague impression of being relentlessly victimized by a mass conspiracy of squirrels.

Any anti-squirrel political figure is praised; every political or cultural figure who criticizes the attack on squirrels is characterized as pro-squirrel. After a while, even simply refusing to say that squirrels are the most evil thing in the world and that we must engage in the most extreme policies to cleanse ourselves of them is showing that you are really a pro-squirrel person. So, in these media, there is anti-squirrel (which means the group that endorses the most extreme policies) and pro-squirrel. This situation isn’t just ingroup versus outgroup, because the ingroup must be fanatically ingroup, so the ingroup rhetoric demands constant performance of fanatical commitment to ingroup policy agendas and political candidates.

If you firmly believe that squirrels are evil (and chipmunks are probably in on it too), but you doubt whether the policy being promoted by the ASPM is really the most effective one, you will get demonized as someone who is trying to slow things down, who is insufficiently loyal, and who is basically pro-squirrel. Even trying to question whether the most extreme measures are reasonable gets you marked as pro-squirrel. Trying to engage in policy deliberation makes you pro-squirrel.

We cannot have a reasonable argument about what policy we should adopt in regard to squirrels because even asking for an argument about policy means that you are pro-squirrel. That is profoundly anti-democratic. It is un-American insofar as the very principles of how the Constitution is supposed to work show a valuing of disagreement and difference of opinion.

(It’s also easy to show that it’s a disaster, but that’s a different post.)

ASPM media will, in addition, insist on the victimization narrative, and also the “massive conspiracy against us” argument, but that isn’t really all that motivating. As George Orwell noted in 1984, hatred is more motivating when it’s against an individual, and so these narratives end up fixating on a scapegoat. (Right now, for the right it’s George Soros, and for the left it’s Trump.) There can be institutional scapegoats—Adkisson tried to kill everyone in a Unitarian Church because he’d believed demagoguery that said Unitarianism is evil.

Inevitably, the more that someone lives in an informational world in which they are presented as waging a war of extermination against us, the more that person will feel justified in using violence against them. If it’s someone who typically uses violence to settle disagreements, and there is easy access to weapons, it will end in violence against whatever institution, group, or individual that person has been persuaded is the evil incubus behind all of our problems.

At this point, I’m sure most readers are thinking that my squirrel example was unnecessarily coy, and that it’s painfully clear that I’m not talking about some hypothetical example about squirrels but the very real examples of the antebellum argument for slavery and the Stalinist defenses of mass killings of kulaks, most of the military officer class, and people who got on the wrong side of someone slightly more powerful.

And, yes, I am.

The extraordinary level of violence used to protect slavery as an institution (or that Stalin used, or Pol Pot, or various other authoritarians) was made to seem ordinary through rhetoric. People were persuaded that violence was not only justified, but necessary, and so this is a question of rhetoric—how people were persuaded. But, notice that none of these defenses of violence have to do with tone. James Henry Hammond, who managed to enact the “gag rule” (that prohibited criticism of slavery in Congress) didn’t have a different “tone” from John Quincy Adams, who resisted slavery. They had different arguments.

Demagoguery—rhetoric that says that all questions should be reduced to us (good) versus them (evil)—if given time, necessarily ends up in purifying this community of them. How else could it end? And it doesn’t end there because of the tone of dominant rhetoric. It ends there because of the logic of the argument. If they are at war with us, and trying to exterminate us, then we shouldn’t reason with them.

It isn’t a tone problem. It’s an argument problem. It doesn’t matter if the argument for exterminating the outgroup is done with compliments toward them (L. Frank Baum’s arguments for exterminating Native Americans), bad numbers and the stance of a scientist (Harry Laughlin’s arguments for racist immigration quotas), or religious bigotry masked as rational argument (Samuel Huntington’s appalling argument that Mexicans don’t get democracy).

In fact, the most effective calls for violence allow the caller plausible deniability—will no one rid me of this turbulent priest?

Lots of rhetors call for violence in a way that enables them to claim they weren’t literally calling for violence, and I think the question of whether they really mean to call for violence isn’t interesting. People who rise to power are often really good at compartmentalizing their own intentions, or saying things when they have no particular intention other than garnering attention, deflecting criticism, or saying something clever. Sociopaths are very skilled at perfectly authentically saying something they cannot remember having said the next day. Major public figures get a limited number of “that wasn’t my intention” cards for the same kind of rhetoric—after that, it’s the consequences and not the intentions that matter.

What matters is that whether it’s individual or group violence, the people engaged in it feel justified, not because of tone, but because they have been living in a world in which every argument says that they are responsible for all our problems, that we are on the edge of extermination, that they are completely evil, and therefore any compromise with them is evil, that disagreement weakens a community, and that we would be a better and stronger group were we to purify ourselves of them.

It’s about the argument, not the tone.

[A note about the image at the beginning: this is one of the stained glass windows in a major church in Brussels celebrating the massacre of Jews. The entire incident was enabled by deliberately inflammatory us/them rhetoric, but was celebrated until the 1960s as a wonderful event.]

[1] For more on Adkisson’s rhetoric, and its sources, see Neiwert’s Eliminationists.


Making sure the poor don’t get any food they don’t deserve

“But when thou makest a feast, call the poor, the maimed, the lame, the blind”

In a recent interview, Kellyanne Conway said that “able-bodied” people who will lose Medicaid under the GOP health plan should “go find employment” and then get “employer-sponsored benefits.” Critics of Conway presented evidence that large numbers of adults on Medicaid do have jobs, as though that would prove her wrong. But that argument won’t work with the people who like the GOP plan, because their answer is that those people should get better jobs. The current GOP plan regarding health care is based on the assumption that benefits like health care should be restricted to working people.

For many, this looks like hardheartedness toward the poor and disadvantaged—exactly the kind of people embraced and protected by Jesus, so many people on the left have been throwing out the accusation of hypocrisy. That the same people who are, in effect, denying healthcare to so many people have protected it for themselves seems, to many, to be the merciless icing on the hateful cake.

And so progressives are attacking this bill (and the many in the state legislatures that have the same intent and impact) as heartless, badly-intentioned, cynical, and cruel. And that is exactly the wrong way to go about this argument. The category often called “white evangelical” tends to be drawn to the just world hypothesis and prosperity gospel, and those two (closely intertwined) beliefs provide the basis for the belief that public goods should not be equally accessible (let alone evenly distributed) because, they believe, those goods should be distributed on the basis of who deserves (not needs) them more. And they believe that Scripture endorses that view, so they are not hypocrites—they are not pretending to have beliefs they don’t really have. This isn’t an argument about intention; this is an argument about Scriptural exegesis.

Progressives will keep losing the argument about public policy until we engage that Scriptural argument. People who argue that the jobless, underemployed, and government-dependent should lose health care will never be persuaded by being called hypocrites because they believe they are enacting Scripture better than those who argue that healthcare is a right.

  1. The Just World Hypothesis and Prosperity Gospel

There are various versions of the prosperity gospel (and Kate Bowler’s Prosperity Gospel elegantly lays them out), but they are all versions of what social psychologists call “the just world hypothesis.” That hypothesis is a premise that we live in a world in which people get what they deserve within their lifetimes—people who work hard and have faith in Jesus are rewarded. In some versions, it’s well within what Jesus says, that God will give us what we need. In others, however, it’s the ghost of Puritanism (as Max Weber called it) that haunts America: that wealth and success are perfect signs of membership in the elect. And it’s that second one that matters for understanding current GOP policies.

In that version, in this life, people get what they deserve, so that good people get and deserve good things, and bad people don’t deserve them—it is an abrogation of God’s intended order to allow bad people to get good things, especially if they get those good things for free. For people who believe that God perfectly and visibly rewards the truly faithful, there is a perfect match between faith and goods such as health and wealth. People with sufficient faith are healthy and wealthy, and, because they have achieved those things by being closer to God, they deserve more of the other goods, such as access to political power. Rich people are just better, and their being rich is proof of their goodness. So, it’s a circular argument—good people get the good things, and that must mean that people with good things are good.

I would say that’s an odd reading of Scripture, but no odder than the defenses of slavery grounded in Scripture, nor of segregation, nor of homophobia. All of those defenders had their proof-texts, after all. And, in each case, the people who cited those texts and defended those practices had a conservative (sometimes reactionary) ideology. They positioned themselves as conserving a social order and set of practices they sincerely believed intended by God as against liberal, progressive, or “new” ways of reading Scripture.

[And here a brief note—they often didn’t know that their own readings were very new, but that’s a different post.]

Because they were reacting against the arguments they identified as liberal (or atheist), I’ll call them reactionary Christians for most of this post, and then in another post explain what’s wrong with that term.

In some cultures, political ideology and identity are identical, so that a person with a particular political belief automatically identifies everyone with that belief as in the category of “good person,” and anyone who doesn’t share that belief is a “bad person.” We’re in that kind of culture.

That easy equation of “believes what I do” and “good person” is enhanced by living within an informational enclave. In informational enclaves, a person only hears information that confirms their beliefs—antebellum Southern newspapers were filled with (false) reports of abolitionist plots, for instance—so it would sincerely seem to their readers as though “everyone” agrees that abolitionists are trying to sow insurrection. In an informational enclave, “everyone” agrees that the Jews stab the host for no particular reason (the subject of the stained glass above—a consensus that resulted in massacre).

Informational enclaves are self-regulating in that anyone who tries to disrupt the consensus is shamed, expelled, perhaps even killed. By the 1830s, it was common for slave states to require the death penalty for anyone advocating abolition, and “advocating abolition” might be understood as “criticizing slavery.” American Protestant churches split so that Southern churches could guarantee they would not have a pastor that might condemn slavery (the founding of the SBC, for instance), and proslavery pastors could rain down on their congregations proof-texts to defend the actually fairly bizarre set of practices that constituted American slavery.

As Stephen Haynes has shown, the reliance of those pastors on an odd reading of Genesis IX became a Scriptural touchstone for defending segregation.

Southern newspapers were rabidly factional in the antebellum era, and (with a few exceptions) pro-segregation (or silent on segregation) in the Civil Rights eras. (This was not, by the way, “true of both sides,” in that the major abolitionist newspaper, The Liberator, often published the full text of proslavery arguments.) Because those proof-texts were piled up as defenses, and reactionary Christianity was hegemonic in various areas, many people simply knew that there were three kings who visited the baby Jesus, that those three kings related to the three races, with the “black” race condemned to slavery due to Noah’s curse.

If you’d like to see how hegemonic that (problematic) reading of Scripture was, look at older nativity scenes, and you will see that there is always a white man, someone vaguely Semitic, and an African. Ask yourself: how many wise men visited Jesus? Try to prove that number through Scripture.

That whole history of reactionary Christianity is ignored, and even the SBC has tried to rewrite its own history, not acknowledging the role of slavery in their founding. My point is simply that, when a method of interpreting Scripture becomes ubiquitous in a community, then people don’t realize that they’re interpreting Scripture through a particular lens—they think they’re just reading what is there.

For years, the story of Sodom was taken as a condemnation of homosexuality, but there is really nothing about homosexuality in it—the Sodomites were more commonly condemned for oppressing the poor. There are rapes in it, and one of them would have been homosexual, but there is no indication that homosexuality was accepted as a natural practice in the community. Yet, for years, the story of Sodom was flipped on the podium as though it obviously condemned all same-sex relationships.

For readers of The New York Times, The Nation, or other progressive outlets, the Scriptural argument over homosexuality was under the radar, but it was crucial to how far we’ve gotten for the civil rights of people with sexualities stigmatized by reactionary Christians. The Scriptural argument about queer sexuality was always muddled—Sodom wasn’t really about gay sex, the word “homosexuality” is nowhere in Scripture, people who cite Leviticus about men lying with each other get that sentiment tattooed on themselves while wearing mixed fibers, Paul was opposed to sex in general.

Reactionary Christians managed to promote their muddled view as long as no one raised questions about exegesis, and the Christian Left raised those questions over and over. And now even mainstream reactionary churches who argue that Scripture condemns homosexuality have abandoned the story of Sodom as a proof text. That success can be laid at the feet of progressive Christians.

One thing that turned large numbers of people, I think, was the number of bloggers, popular Christian authors, and pastors making the more sensible Scriptural argument: there isn’t a coherent method of reading Scripture that demonizes queer sexuality and allows the practices reactionary Christians want to allow (such as non-procreative sex, divorce, wildflower mixes, corduroy, oppressing the poor).

Similarly, an important realm of the Civil Rights movement was the one in which progressive Christians debated the Scriptural argument. One of the more appalling “down the memory hole” moments in American history is the role of reactionary Christians in civil rights. Segregation was a religious issue, supported by Genesis IX and various other texts (about God putting peoples where they belong, and all the texts about mixing). Even “moderate” Christians, like those who opposed King, and to whom he responded in his letter, opposed integration.

That’s important. The major white churches in the South supported segregation, and all of the reactionary ones did. The opponents of segregation (like the opponents of slavery) were progressive Christians, sometimes part of organizations (like the black churches) and sometimes on the edge of being disavowed by their organizations. And that is obscured, sometimes deliberately, as when reactionary Christians try to claim that “Christianity” was on the side of King—no, in fact, reactionary Christianity was on the side of segregation.

Right now, there is a complicated genus-species fallacy among many reactionary Christians, in that they are trying to claim the accomplishments of people like Jesse Jackson, Martin Luther King, Jr., and Stokely Carmichael on the grounds that King was Christian, while ignoring that their churches and leaders disavowed and demonized those people (and, in the case of Jackson and Carmichael, still do).

Reactionary Christianity has two major problems: one is a historical record problem, and the second, related, is an exegesis problem. They continually deny or rewrite their own participation in oppression, and they have thereby enabled the occlusion of the problems their method of exegesis presents. If their method of reading got them to support slavery and segregation, practices they now condemn, then their method is flawed. Denying the problems with their history enables them to deny the problems with their method.

Reactionary Christianity’s method of reading Scripture begins by assuming that the current cultural hierarchy is intended by God, that this world is just, and that everything they believe is right, and then goes in search of texts that will support those premises. And there is also a hidden premise that the world is easily interpretable, that uncertainty and ambiguity are unnecessary because they are signs of weak faith, and that the world is divided into the good and the bad.

  2. The Scriptural Argument

The proof-text for the notion that poor people don’t deserve health care or other benefits is 2 Thessalonians 3:10, “For even when we were with you, this we commanded you: that if any would not work, neither should he eat.”

2 Thessalonians may or may not have been written by Paul (probably not), but it certainly contradicts what both Paul and Jesus said about how to treat the poor. There are far more texts that insist on giving without question, caring for the poor, tending to people without judging, and on humans not presuming to be God (that is, we are not perfect judges of good and evil, and our fall was precisely on the grounds of thinking we should be).

That so much public policy rests on the single wobbly text of 2 Thessalonians 3:10 is concerning, but it isn’t new—the Scriptural arguments for slavery, segregation, and homophobia were and are similarly wobbly. Prosperity gospel has a very shaky Scriptural foundation, and the whole notion that Scripture supports an easy division into makers and takers isn’t any easier to argue than the readings that supported antebellum US practices regarding slavery.

Their reading of Scripture says that they should feel good about health insurance being restricted to people who have jobs (which is why Congress is cheerfully giving themselves benefits they’re denying to others—they see themselves as having earned those benefits by having the job of being in Congress). They can feel justified (in the religious sense) in cutting off people on Medicaid, those who are un- or underemployed, and those with pre-existing conditions because they believe that Scripture tells them that those people could simply stop being un- or underemployed, or have made different choices that wouldn’t have landed them on Medicaid, or could have prayed enough not to have those pre-existing conditions. They believe that they are, in this life, sitting by Jesus’ side and handing out judgments.

I think they’re wrong. But calling them hypocrites won’t work.

This is an argument about Scripture, and progressives need to understand that, as with other policy debates, progressive Christians will do some of the heavy lifting. And progressive Christians need to understand that it is our calling: to point, over and over, to Jesus’ passion for the poor and outcast, and to his insistence that the rewards of this world should never be taken as proof of much of anything.


“Just Write!” and the Rhetoric of Self-Help

There is a paradox regarding the large number of scholars who get stalled in writing—and a large number do get stalled at some point (50% of graduate students drop out): they got far enough to get stalled because, for some long period of time, they were able to write. People who can’t write a second book, or a first one, or a dissertation, are people who wrote well enough and often enough to get to the point that they needed to write a dissertation, first book, second book, grant, and so on. So, what happened?

The advice they’re likely to be given is, “Just write.” And the reason we give that advice (advice I gave for years) is that we have the sense that they’re overthinking things, that, when they sit down to write, they’re thinking about failure, and success, and shame, and all the things that might go wrong, and all the ways what they’re writing might be inadequate, and all the negative reactions they might get for what they’ve written. So, we say, “Just write,” meaning, “Don’t think about those things right now.”

The project of writing may seem overwhelming because existentially risky, and the fear created by all the anxiety and uncertainty is paralyzing. It can seem impossibly complicated, and so we give simple advice because we believe that persuading them to adopt a simpler view of the task ahead will enable them to write something. Once they’ve written something, once they’re unstuck, then they can write something more, and then revise, and then write more. Seeing that they have written will give them the confidence they need to keep writing.

And I think that advice often works, hence the (deserved) success of books like Writing Your Dissertation in Fifteen Minutes a Day or Destination Dissertation. They simplify the task initially, and present the tasks involved in ways that are more precise than accurate, but with the admirable goal of keeping people moving. Many people find those books useful, and that’s great. But many people don’t, and I think the unhappy consequence of the “you just have to do this” rhetoric is that there is an odd shaming that happens to people for whom that advice doesn’t work. And, while it’s great that it works for a lot of people, there are a lot for whom it doesn’t, and I’m not happy that they feel shamed.

These books have, as Barbara Kamler and Pat Thomson have argued, characteristics typical of the self-help genre (“The Failure of Dissertation Advice Books”), especially in that they present dissertation writing as “a series of linear steps” with “hidden rules” that the author reveals. While I am not as critical of those books, or of the genre of self-help, as Kamler and Thomson are, I think their basic point is worth taking seriously: this advice misleads students because it presents dissertation writing as a set of practices and habits rather than as cognitive challenges and developments.

Academic writing is hard because it’s hard. Learning to master the postures, steps, and dances of developing a plausible research question, identifying and mastering appropriate sources, determining necessary kinds of support, managing a potentially sprawling project, and positioning a new or even controversial claim in an existing scholarly conversation—all of that is hard and requires cognitive changes, not just writing practices.

Telling people academic writing “just” requires anything (“just write,” “just write every day,” “just ignore your fears”) is a polite and sometimes useful fiction. And self-help books’ reliance on simple steps and hidden rules is, I’d suggest, not necessarily cynical or manipulative, but based in the sense that telling people that something hard is actually hard can discourage them. If you lie, and thereby motivate them to try doing it, then they might realize that, while hard, it isn’t impossible.

I think the implicit analogy is to something like telling a person who needs to exercise that they should “just get up off the couch.” Telling people that improving their health will be a long and slow process with many setbacks is unlikely to motivate someone to start the process; it makes the goal seem impossible, and unrewarding. Telling someone that getting healthier is simple, and they “just” need to increase their exercise slightly, or reduce portion size slightly, or do one thing differently will at least get them started. Having gotten a little healthier might inspire them to do more, but, even if it doesn’t, they are getting a little better.

But that’s the wrong analogy.

A scholar who is having difficulty writing is not analogous to someone who needs to get up off the couch: this is a person with a long record of successes as a writer. That is what we (and people who are stuck) so often lose track of when we give the “just write” advice. They are not a person sitting on a couch; they are someone with an exercise practice that has always worked for them in the past but isn’t working now.

The better analogy, I would suggest, is a sprinter who is now trying to run a marathon. Sprinting has worked for them in the past, and many academics have a writing process that is akin to sprinting—chunks of time in which we do nothing but write, and try to get as much done as quickly as we can. Writing a dissertation or book, on the other hand, is more like running a marathon.

It would be unethical to tell a sprinter who is unable to run a marathon that she should “just run.” She has been running; she’s quite good at it. But the way that she has been running is not working for this new distance. And if she does try to run a marathon the way she has always run short races, she will hurt herself.

My intuition is that people who have trouble writing are people who have always used the sprinting method, and have simply managed to develop the motivational strategies to sprint for longer, or to collapse from time to time during the race and pick themselves up. Often, it seems to me, that motivation relies on panic and negative self-talk—they manage to binge write because otherwise, they tell themselves, they are a failure.

So I’m not saying that “Just write” is always bad advice. I am saying that it sometimes is; it is sometimes something that can send people into shame spirals. It only works for some people, for people who do find that polite fiction motivating. For others, though, telling them “just write” is exactly like telling a person in a panic attack “just calm down” or someone depressed “just cheer up.”

The “just write” comes from a concern that lack of confidence will paralyze a student. But I think we might be solving the wrong problem.

Part of the problem is the myth of positive thinking, which has taken on an almost magical quality for some people. There is a notion that you should only think positive thoughts, as though thinking negative things brings on bad events. Since thinking clearly about how hard it is to write a book, dissertation, or grant (and, specifically, thinking clearly about how we might have habits or processes that inhibit our success) is thinking about “bad” things, about how things might go wrong or what troubles we might have, the myth of positive thinking says you shouldn’t do it. You should, instead, just imagine success.

This is a myth. It isn’t just a myth, but pernicious, destructive nonsense. A (sometimes secular) descendant of the positive psychology elegantly described by Bowler in Blessed, this is magical thinking pure and simple, and perfectly contrary to what research shows about how positive thinking actually affects motivation.

But here I should be clear. Some people who advocate wishful thinking do so because they believe that the only other possibility is wallowing in self-loathing and a sense that the task is impossible, and they believe that telling students that academic writing is hard will necessarily lead to their believing it is impossible. In other words, there is an assumption that there is a binary between thinking only and entirely about positive outcomes or thinking only and entirely about tragic outcomes. The former is empowering and the latter is paralyzing. That narrative is wrong on all three counts—positive thinking is not necessarily enabling, moments of despair are not necessarily disabling, and our attitude toward our own challenges is not usefully described as a binary between pure optimism and pure despair. Left out of that binary is being hopefully strategic: aware of possible failures, mindful of hurdles, with confidence in our resilience as much as in our talents.

As to the first, studies clearly show that refusing to think negative thoughts about possible outcomes is actively harmful, and frequently impairs achievement. That’s important to remember: telling students they shouldn’t think about their own flaws, the challenges ahead of them, and how things might go wrong is not helping them, and it is making it less likely they will do what they need to do.

Gabriele Oettingen’s considerable research (summarized in the very helpful book Rethinking Positive Thinking) shows that, while wishful thinking can be useful for maintaining hope in a bad situation or identifying long-term goals, it inhibits action. Fantasizing about how wonderful a dissertation or book will be doesn’t inspire us to write it; for many people, it makes the actual, sometimes gritty, work so much more unattractive in comparison that it’s impossible to write. The fantasy is far more fun than writing a crummy first draft. Similarly, Carol Dweck’s research on mindsets shows that success depends on acknowledging what has gone wrong and identifying how one might grow and change to get a different outcome in the future.

A sense that the task is so hard as to be impossible is not inevitably and necessarily disabling. It is, however, inevitable. It is dishonest to tell students that we never feel that what we’re trying to do can’t be done or isn’t worth doing, because so many of us do. And most of us got (and get) through it. Sometimes it took time, therapy, medication, changing things in our personal lives, changing jobs, changing projects, all of the above. But I don’t know any productive scholar free from times of slogging through the slough of despond.

In my experience, academic writing gets easier, but it’s never easy. The hardest writing is probably finishing a dissertation while writing job materials—nothing after that is so hard. But it’s always hard. If we tell students that it’s easy, or that it gets easy, even if we do so with the intention of keeping them moving, we do them a disservice. If they believe us, if they believe that we find it easy, then, when it gets hard, as it necessarily will, they have to conclude that there is something wrong with them. They are unhappily likely to conclude that they have been exposed for the imposter they always worried they were.

The “just write” advice almost certainly works for some people in some situations, as does the “just write every day” or “just freewrite” or “just start with your thesis” or any of the other practices and rules that begin with “just.” They work for someone somewhere and maybe they work for everyone some of the time, and they always strike me as sensible enough to suggest that people experiment with them. But we shouldn’t pretend that they’re magical and can’t possibly fail, or that someone “just” needs to do them. The perhaps well-intentioned fiction that academic writing “just” requires certain practices is magical thinking, and we need to stop saying it.

In my experience, people who find the “just write” advice useless find it too abstract. So, I think we need to be clear that scholarly productivity is, for most people, hard, and it’s fine that a person finds it hard. And it takes practice, so there are some things a person might “just write”:

  • the methods section;
  • descriptions of an incident, moment in a text, interaction, or some other very, very specific epitome of their problem (Pirsig’s brick in the wall of the opera house);
  • summaries of their secondary materials with a discussion of how each text is and is not sufficient for their research;
  • a collection of data;
  • the threads from one datum to another;
  • a letter to their favorite undergrad teacher about their current research;
  • a description of their anxieties about their project;
  • an imitation of an introduction, abstract, conclusion, or transition paragraph they like, written by a junior scholar.

I’m not presenting that list as a magical solution. It would be odd for me to say that simplistic advice is not helpful and then give a list of the five (or seven, or ten) things we “just” have to do to become (or teach others to become) skilled and productive academic writers. What we have to do is acknowledge that the project requires significant and complicated cognitive changes: that, for most of us, scholarly writing is hard because it’s hard. Let’s be honest about that.


Arguments from identity and the easy demagoguery of everyday commenting

I recently had a piece published on Salon, and it was thrilling. But the comments quickly veered off into an argument about whether “liberals” or “republicans” are better people. That was frustrating.

My argument about demagoguery has several parts:

  1. demagoguery shifts the stasis (as rhetoricians say) from policy arguments to identity arguments, relying on the assumption that all that matters is whether advocates/critics of a policy are ingroup or outgroup.
  2. therefore, in a culture of demagoguery all arguments about policy end up relying on two points: which group is better, and what group an advocate is in—in other words, it’s all identity politics.
  3. so, all arguments end up being deductive arguments from identity.
  4. this part is barely mentioned in either book I’ve written on the issue, but reasoning from identity is done by homogenizing the outgroup, so that if a person seems to be a member of a group, you can attribute to them everything any other member of that group has said or done.

There are other characteristics, but these are the ones that seemed especially important in the comment section on the article.

And here I have to go back to some really old work, and say that I think we remain muddled on how public discourse operates—we flop around among models of expression, deliberation, and purchasing.

Lay theories of public deliberation aren’t expected to be entirely consistent—as social psychologists have noted, we all toggle between naïve realism and skepticism in our everyday lives. But I think there are important consequences of our failing to realize that we flop around among various models of arguing and various models of knowing.

There is a basic premise: major policy decisions shouldn’t be made on the basis of some kind of model of us versus them when we’re talking about a culture that includes us and them. The idea that only one group is entitled to determine policy isn’t democratic, sensible, or Christian.

If we want a thriving community (or nation state or world or even club) then we want enough disagreement that we can prevent the problems associated with what is often called groupthink—when a bunch of like-minded and ingroup people agree that what they think and who they are is, obviously, the best.

It’s clearly demonstrated that people have trouble admitting error, and therefore, if we want to make good decisions, we need people who will tell us we’re wrong. Good decisions rely on people contributing from various perspectives—not just people like us.

That’s the deliberative model of public argument: the point of Congress and state legislatures is that they consider various points of view and the impacts on all communities, and then come to a decision. If we look at public decision-making from that perspective, then we would ensure that there is diverse representation in deliberative assemblies, such as the state legislature or Congress. (The notion that the best decisions involve various perspectives is a given in successful business decision-making models.)

There is another model: the expressive model. For many people, there is no such thing as persuasion, and public discourse is all about people expressing their opinions (usually their statements of commitment to their group). Public discourse isn’t about deliberation or communal reasoning—it’s a bunch of people shouting in a stadium, and the group that has the people who shout the loudest wins. You don’t go into that stadium intending to listen carefully to what other people are shouting in order to come to a new understanding of your own views: you come to shout out the others.

I can’t think of a time when this model of public discourse led to a community coming to a good decision.

The third model is that ideas/policies are products sold just like shampoo. The hope is that the market is rational, and so if a particular shampoo sells the best, it is the best product. This is a problematic model in many ways, not the least of which is that it’s circular: the market is assumed to be rational because it represents what people value, and people’s values are assumed to be rational. This is an almost religious belief in that it can’t be supported empirically, and has often been falsified (bubbles). The problem with the market model is three-fold: first, people buy products on the basis of short-term benefits and inadequate information, whereas policy decisions should be made in light of long-term consequences; second, it makes voters passive, able to whinge about a candidate not being adequately sold to them (instead of seeing it as our responsibility to inform ourselves about candidates); finally, if I buy the wrong shampoo, my hair falls out, but if I buy the wrong candidate, my community is harmed.

The activity of the market always represents short-term choices, and assessments of “marketability” tend to be about short-term gains. Unless you make a circular argument (the market choice is rational because the market choice is defined as rational—which a surprising number of people on this issue assume), the market does not represent the long-term best interest of the people (think bubbles). In addition, the market, by definition, cannot represent the values of those without the resources to participate (future generations, for instance). The market is always the tragedy of the commons.

(You never get a defense of the inherent rationality of the market that isn’t logically circular, doesn’t assume the just-world hypothesis, and doesn’t appeal to prosperity gospel.)

While I believe that the deliberative model is best for community decision-making, I think a healthy public sphere has places where each of these models is practiced. It’s fine if someone’s Facebook page (or Twitter feed) is entirely expressive. But, on the whole, there should be a place where people try to deliberate with one another, or, at least, acknowledge in the abstract that the inclusion of people with whom they disagree is valuable. The problem is that people are spending all of their time in expressive public spheres, and making decisions on the basis of group identity.

I was definitely one of the people who thought that the digitally-connected world would be the Habermasian public sphere, and that isn’t how it played out. I think there were moments (in the 80s) when it seemed to be something like what Habermas described—a realm in which argument and not identity mattered. But, what became clear is that identity does matter.

And so here is what I came to believe: in good arguments there are a lot of data. And identity is a datum. But that’s all it is. It isn’t a premise: it’s a datum.

[As an aside, I have to say that sometimes I think that public deliberation could be wonderful were we to understand five points: 1) a premise and a datum are not the same thing; 2) don’t put “always” or “never” or “necessarily” into someone else’s argument; 3) treat others as you want to be treated; 4) there isn’t a binary between certainty and sloppy relativism; 5) a claim can be false and/or illogical even if the evidence for the claim is true.]

But, what happens in a lot of public discourse is that people assume that you can deduce the goodness of an argument from the goodness of the person making the argument, and that you can make that determination on the basis of cues. That is, if a person says something that, for you, cues that they are a member of a particular group, you can assume that they believe all the things you think members of that group believe. If that particular group is one you share, then you’ll attribute all sorts of wonderful qualities and beliefs to them; if it’s an outgroup for you, then you’ll attribute all sorts of stupid beliefs, bad motives, and bad behavior to them.

That last point is simultaneously simple and complicated. We tend to homogenize the outgroup, and so if an outgroup member says that squirrels are awesome, and another outgroup member says that little dogs are the best, we’ll assume that second person thinks squirrels are awesome. People who are particularly drawn to thinking in terms of us versus them will take mere criticism of the ingroup as sufficient proof that the critic is a member of the outgroup, and will then attribute to that person all the things that are supposed to be true of outgroup members.

This is deductive reasoning—inferring the beliefs of individuals from our assumptions about what members of their group believe. It’s pervasive in toxic publics.

And, no, it isn’t particular to any one “side” of the political spectrum. But, the fact that that question even comes up—who does this more?—is a sign of how uselessly committed to group loyalty our political world has become.

Democracy presumes that there is no single person, or single group, that knows all that is necessary to make good policy decisions. And that means that, while it isn’t necessary that people in a democracy believe that all views are equally valid (or even that all views are valid), it is necessary that we believe that we have something to learn from people with whom we disagree—we cannot delegitimate everyone who disagrees with us and continue to claim that we believe in democracy. (For me, this tendency to dismiss every other point of view as corrupt, servile, or in other ways illegitimate is especially troubling in people who self-identify as democratic socialists—c’mon, folks, it isn’t democratic if it’s a one-party system.) The tendency to insist that only one point of view is legitimate is profoundly anti-democratic—it assumes that the ideal situation is a one-party system. And that’s authoritarianism. And it has never ended well.