On career choices as mingling in Burke’s parlor

On Wednesday, I sent off the scholarly version of the demagoguery argument. It isn’t the book I once planned (that would involve a deeply theoretical argument about identity and the digital world), but it’s the one I really wanted to write, and the one that will (I think) reach more people than that other one would have.

And it’s the last scholarly book I’ll write. I intend to spend the rest of my career trying to solve the interesting intellectual problem of making scholarly concepts and debates more accessible to non-academics. But that isn’t because I think highly specialized academic writing is, in any way, a bad thing.

I have no problem with highly theoretical and very specialized books. My books have barely grazed the 1000 sales point, and that’s pretty good for a scholarly book. People have told me that something I’ve written has had an impact on their scholarship, pedagogy, or program administration, so I’m really happy with my record as a scholar.

And I’m happy with the record of people who have sold both more and less because measuring impact is so very difficult. Publishing a book with an academic press is an extraordinary achievement, and measuring the impact of such books accurately is nigh impossible—a really powerful book is shared in pirated pdfs, checked out of libraries, passed from one person to another. Sales and impact are orthogonal in academia.

If you study the history of ideas even a little, you have to know that what seemed major in the moment was sometimes just a trend (like mullets) and sometimes a sea change (like the synthesizer). No one reads Northrop Frye anymore, but he was a big deal at one moment, while Hannah Arendt, who was also a big deal around the same time, is still in the conversation. And there are all those people who weren’t big deals in their era but later came to have tremendous impact, such as Mikhail Bakhtin.

Some trade books on scholarly issues have had extraordinary sales (such as Mortimer Adler’s writings), but it’s hard to know what impact they had. Madison Grant’s racist book The Passing of the Great Race had poor sales, but appears to have had a lot of impact. And there are lots of trade books that have come and gone without leaving any impact, so there’s no good reason to conclude that trade books necessarily have more impact than scholarly ones. I don’t think there are a lot of (any?) necessary conclusions one can draw about whether trade or scholarly books have more impact, are more or less important, or are more or less valuable intellectual activity.

I have always loved Kenneth Burke’s analogy of the parlor for what it means to be interested in major questions. You show up at a party, he says, and it’s been going on for a while, and you find some conversation that seems interesting. You listen for a while, and then you take a side or point out something new. You get attacked and defended, and some people leave the conversation, and others join, and eventually you too leave. And it goes on, with other people taking sides that may or may not have to do with what you were arguing.

What Burke fails to mention is that, if it’s a good party, there are a lot of conversations going on. You might choose to leave that particular conversation, but not leave the party.

I have loved writing scholarly pieces (although I didn’t initially think I would), and my work has placed me in some particular conversations. I’ve moved from one conversation to another, but all on the side of the parlor engaged in very scholarly arguments. I’d like to leave that side of the parlor, not because it’s a bad one—it’s a wonderful one—but because it’s a party with a lot of conversations going on. I’d like to mingle.

I think a lot of discussions of the public intellectual rest on odd binary assumptions—that either specialized or public scholarship is good, for instance. Scholarship that speaks with authority to a small group is neither better nor worse than scholarship that reaches a broad audience—it’s just a different conversation in Burke’s parlor. And I’m going to wander over there for a bit.


The easy demagoguery of explaining their violence

When James Hodgkinson engaged in both eliminationist and terroristic violence against Republicans, factionalized media outlets blamed his radicalization on their outgroup (“liberals”). In 2008, when James Adkisson committed eliminationist and terroristic violence against liberals, actually citing in his manifesto things said by “conservative” talk show hosts (namechecking some of the same ones who would later blame liberals for Hodgkinson), those media outlets and pundits neither acknowledged responsibility nor altered their rhetoric.[1]

That’s fairly typical of rabidly factional media: if the violence is on the part of someone who can be characterized as them (the outgroup), then outgroup rhetoric obviously and necessarily led to that violence. That individual can be taken as typical of them. If, however, the assailant was ingroup, then factionalized media either simply claimed that the person was outgroup (as when various media tried to claim that a neo-Nazi was a socialist and therefore lefty), or they insisted this person be treated as an exception.

That’s how ingroup/outgroup thinking works. The example I always use with my classes is what happens if you get cut off by a car with bumper stickers on a particularly nasty highway in Austin (you can’t drive it without getting cut off by someone). If the bumper stickers show ingroup membership, you might think to yourself that the driver didn’t see you, or was in a rush, or is new to driving. If the bumper stickers show outgroup membership, you’ll think, “Typical.” Bad behavior is proof of the essentially bad nature of the outgroup, but bad behavior on the part of an ingroup member is not. That’s how factionalized media works.

So, it’s the same thing with ingroup/outgroup violence and factionalized media (and not all media is factionalized). For highly factionalized right-wing media, Hodgkinson’s actions were caused by and the responsibility of “liberal” rhetoric, but Adkisson’s were not the responsibility of “conservative” rhetoric. For highly factionalized lefty media, it was reversed.

That factionalizing of responsibility is an unhappy characteristic of our public discourse; it’s part of our culture of demagoguery, in which the same actions are praised or condemned not on the basis of the actions themselves, but on whether it’s the ingroup or outgroup that does them. If a white male conservative Christian commits an act of terrorism, the conservative media won’t call it terrorism, won’t mention his religion or politics, and will generally talk about mental illness; if someone even nominally Muslim commits the same act, they call it terrorism and blame Islam. In some media enclaves, the narrative is flipped, and only conservatives are acting on political beliefs. In all factional media outlets, they will condemn the other side for “politicizing” the incident.

While I agree that violent rhetoric makes violence more likely, the cause and effect is complicated, and the current calls for a more civil tone in our public discourse are precisely the wrong solution. We are in a situation in which public discourse is entirely oriented toward strengthening our ingroup loyalty and our loathing of the outgroup. And that is why there is so much violence now. It isn’t because of tone. It isn’t because of how people are arguing; it’s because of what people are arguing.

To make our world less violent, we need to make different kinds of arguments, not make those arguments in different ways.

Our world is so factionalized that I can’t even make this argument with a real-world example, so I’ll make it with a hypothetical one. Imagine that we are in a world in which some media insist that all of our problems are caused by squirrels. Let’s call them the Anti-Squirrel Propaganda Machine (ASPM). They persistently connect the threat of squirrels to end-times prophecies in religious texts, and they relentlessly connect squirrels to every bad thing that happens. Any time a squirrel (or anything that kind of looks like a squirrel to some people, like chipmunks) does something harmful, it’s reported in these media; any good action is met with silence. These media never report any time that an anti-squirrel person does anything bad. They declare that the squirrels are engaged in a war on every aspect of their group’s identity. They regularly talk about the squirrels’ war on THIS! and THAT! Trivial incidents (some of which never happened) are piled up so that consumers of that media have the vague impression of being relentlessly victimized by a mass conspiracy of squirrels.

Any anti-squirrel political figure is praised; every political or cultural figure who criticizes the attack on squirrels is characterized as pro-squirrel. After a while, even simply refusing to say that squirrels are the most evil thing in the world, and that we must engage in the most extreme policies to cleanse ourselves of them, shows that you are really a pro-squirrel person. So, in these media, there is anti-squirrel (which means the group that endorses the most extreme policies) and pro-squirrel. This situation isn’t just ingroup versus outgroup, because the ingroup must be fanatically ingroup, so the ingroup rhetoric demands constant performance of fanatical commitment to ingroup policy agendas and political candidates.

If you firmly believe that squirrels are evil (and chipmunks are probably part of it too), but you doubt whether the policy being promoted by the ASPM is really the most effective one, you will get demonized as someone trying to slow things down, as not sufficiently loyal, and as basically pro-squirrel. Even trying to question whether the most extreme measures are reasonable gets you marked as pro-squirrel. Trying to engage in policy deliberation makes you pro-squirrel.

We cannot have a reasonable argument about what policy we should adopt in regard to squirrels, because even asking for an argument about policy means that you are pro-squirrel. That is profoundly anti-democratic. It is un-American insofar as the very principles of how the Constitution is supposed to work show a valuing of disagreement and difference of opinion.

(It’s also easy to show that it’s a disaster, but that’s a different post.)

ASPM media will, in addition, insist on the victimization narrative and the “massive conspiracy against us” argument, but a faceless conspiracy isn’t really all that motivating. As George Orwell noted in 1984, hatred is more motivating when it’s directed against an individual, and so these narratives end up fixating on a scapegoat. (Right now, for the right it’s George Soros, and for the left it’s Trump.) There can be institutional scapegoats—Adkisson tried to kill everyone in a Unitarian church because he’d believed demagoguery that said Unitarianism is evil.

Inevitably, the more that someone lives in an informational world in which we are presented as being in a war of extermination with them, the more that person will feel justified in using violence against them. If it’s someone who typically uses violence to settle disagreements, and there is easy access to weapons, it will end in violence against whatever institution, group, or individual that person has been persuaded is the evil incubus behind all of our problems.

At this point, I’m sure most readers are thinking that my squirrel example was unnecessarily coy, and that it’s painfully clear that I’m not talking about some hypothetical example about squirrels but the very real examples of the antebellum argument for slavery and the Stalinist defenses of mass killings of kulaks, most of the military officer class, and people who got on the wrong side of someone slightly more powerful.

And, yes, I am.

The extraordinary level of violence used to protect slavery as an institution (or that Stalin used, or Pol Pot, or various other authoritarians) was made to seem ordinary through rhetoric. People were persuaded that violence was not only justified, but necessary, and so this is a question of rhetoric—how people were persuaded. But, notice that none of these defenses of violence have to do with tone. James Henry Hammond, who managed to enact the “gag rule” (that prohibited criticism of slavery in Congress) didn’t have a different “tone” from John Quincy Adams, who resisted slavery. They had different arguments.

Demagoguery—rhetoric that says that all questions should be reduced to us (good) versus them (evil)—if given time, necessarily ends up in purifying this community of them. How else could it end? And it doesn’t end there because of the tone of dominant rhetoric. It ends there because of the logic of the argument. If they are at war with us, and trying to exterminate us, then we shouldn’t reason with them.

It isn’t a tone problem. It’s an argument problem. It doesn’t matter if the argument for exterminating the outgroup is made with compliments toward them (L. Frank Baum’s arguments for exterminating Native Americans), bad numbers and the stance of a scientist (Harry Laughlin’s arguments for racist immigration quotas), or religious bigotry masked as rational argument (Samuel Huntington’s appalling argument that Mexicans don’t get democracy).

In fact, the most effective calls for violence allow the caller plausible deniability—will no one rid me of this turbulent priest?

Lots of rhetors call for violence in a way that enables them to claim they weren’t literally calling for violence, and I think the question of whether they really mean to call for violence isn’t interesting. People who rise to power are often really good at compartmentalizing their own intentions, or saying things when they have no particular intention other than garnering attention, deflecting criticism, or saying something clever. Sociopaths are very skilled at perfectly authentically saying something they cannot remember having said the next day. Major public figures get a limited number of “that wasn’t my intention” cards for the same kind of rhetoric—after that, it’s the consequences and not the intentions that matter.

What matters is that whether it’s individual or group violence, the people engaged in it feel justified, not because of tone, but because they have been living in a world in which every argument says that they are responsible for all our problems, that we are on the edge of extermination, that they are completely evil, and therefore any compromise with them is evil, that disagreement weakens a community, and that we would be a better and stronger group were we to purify ourselves of them.

It’s about the argument, not the tone.

[A note about the image at the beginning: this is one of the stained glass windows in a major church in Brussels celebrating the massacre of Jews. The entire incident was enabled by deliberately inflammatory us/them rhetoric, but was celebrated until the 1960s as a wonderful event.]

[1] For more on Adkisson’s rhetoric, and its sources, see Neiwert’s Eliminationists (https://www.amazon.com/Eliminationists-Hate-Radicalized-American-Right/dp/0981576982)

For more about demagoguery: https://theexperimentpublishing.com/catalogs/fall-2017/demagoguery-and-democracy/

Making sure the poor don’t get any food they don’t deserve

“But when thou makest a feast, call the poor, the maimed, the lame, the blind”

In a recent interview, Kellyanne Conway said that “able-bodied” people who will lose Medicaid under the GOP health plan should “go find employment” and then get “employer-sponsored benefits.” Critics of Conway presented evidence that large numbers of adults on Medicaid do have jobs, as though that would prove her wrong. But that argument won’t work with the people who like the GOP plan, because their answer is that those people should get better jobs. The current GOP health care plan is based on the assumption that benefits like health care should be restricted to working people.

For many, this looks like hardheartedness toward the poor and disadvantaged—exactly the kind of people embraced and protected by Jesus, so many people on the left have been throwing out the accusation of hypocrisy. That the same people who are, in effect, denying healthcare to so many people have protected it for themselves seems, to many, to be the merciless icing on the hateful cake.

And so progressives are attacking this bill (and the many in the state legislatures that have the same intent and impact) as heartless, badly-intentioned, cynical, and cruel. And that is exactly the wrong way to go about this argument. The category often called “white evangelical” tends to be drawn to the just world hypothesis and prosperity gospel, and those two (closely intertwined) beliefs provide the basis for the belief that public goods should not be equally accessible (let alone evenly distributed) because, they believe, those goods should be distributed on the basis of who deserves (not needs) them more. And they believe that Scripture endorses that view, so they are not hypocrites—they are not pretending to have beliefs they don’t really have. This isn’t an argument about intention; this is an argument about Scriptural exegesis.

Progressives will keep losing the argument about public policy until we engage that Scriptural argument. People who argue that the jobless, underemployed, and government-dependent should lose health care will never be persuaded by being called hypocrites because they believe they are enacting Scripture better than those who argue that healthcare is a right.

  1. The Just World Hypothesis and Prosperity Gospel

There are various versions of the prosperity gospel (and Kate Bowler’s Prosperity Gospel elegantly lays them out), but they are all versions of what social psychologists call “the just world hypothesis.” That hypothesis is a premise that we live in a world in which people get what they deserve within their lifetimes—people who work hard and have faith in Jesus are rewarded. In some versions, it’s well within what Jesus says, that God will give us what we need. In others, however, it’s the ghost of Puritanism (as Max Weber called it) that haunts America: that wealth and success are perfect signs of membership in the elect. And it’s that second one that matters for understanding current GOP policies.

In that version, in this life, people get what they deserve, so that good people get and deserve good things, and bad people don’t deserve them—it is an abrogation of God’s intended order to allow bad people to get good things, especially if they get those good things for free. For people who believe that God perfectly and visibly rewards the truly faithful, there is a perfect match between faith and goods such as health and wealth. People with sufficient faith are healthy and wealthy, and, because they have achieved those things by being closer to God, they deserve more of the other goods, such as access to political power. Rich people are just better, and their being rich is proof of their goodness. So, it’s a circular argument—good people get the good things, and that must mean that people with good things are good.

I would say that’s an odd reading of Scripture, but no odder than the defenses of slavery grounded in Scripture, nor of segregation, nor of homophobia. All of those defenders had their proof-texts, after all. And, in each case, the people who cited those texts and defended those practices had a conservative (sometimes reactionary) ideology. They positioned themselves as conserving a social order and set of practices they sincerely believed was intended by God, as against liberal, progressive, or “new” ways of reading Scripture.

[And here a brief note—they often didn’t know that their own readings were very new, but that’s a different post.]

Because they were reacting against the arguments they identified as liberal (or atheist), I’ll call them reactionary Christians for most of this post, and then in another post explain what’s wrong with that term.

In some cultures, political ideology and identity are identical, so that a person with a particular political belief automatically identifies everyone with that belief as in the category of “good person,” and anyone who doesn’t share that belief is a “bad person.” We’re in that kind of culture.

That easy equation of “believes what I do” and “good person” is enhanced by living within an informational enclave. In informational enclaves, a person only hears information that confirms their beliefs—antebellum Southern newspapers were filled with (false) reports of abolitionist plots, for instance—so it would sincerely seem to their readers as though “everyone” agrees that abolitionists are trying to sow insurrection. In an informational enclave, “everyone” agrees that the Jews stab the host for no particular reason (the subject of the stained glass above—a consensus that resulted in massacre).

Informational enclaves are self-regulating in that anyone who tries to disrupt the consensus is shamed, expelled, perhaps even killed. By the 1830s, it was common for slave states to require the death penalty for anyone advocating abolition, and “advocating abolition” might be understood as “criticizing slavery.” American Protestant churches split so that Southern churches could guarantee they would not have a pastor that might condemn slavery (the founding of the SBC, for instance), and proslavery pastors could rain down on their congregations proof-texts to defend the actually fairly bizarre set of practices that constituted American slavery.

As Stephen Haynes has shown, the reliance of those pastors on an odd reading of Genesis IX became a Scriptural touchstone for defending segregation.

Southern newspapers were rabidly factional in the antebellum era, and (with a few exceptions) pro-segregation (or silent on segregation) in the Civil Rights era. (This was not, by the way, “true of both sides,” in that the major abolitionist newspaper, The Liberator, often published the full text of proslavery arguments.) Because those proof-texts were piled up as defenses, and reactionary Christianity was hegemonic in various areas, many people simply knew that there were three kings who visited the baby Jesus, that those three kings related to the three races, with the “black” race condemned to slavery due to Noah’s curse.

If you’d like to see how hegemonic that (problematic) reading of Scripture was, look at older nativity scenes, and you will see that there is always a white king, someone vaguely Semitic, and an African. Ask yourself, how many wise men visited Jesus? Try to prove that number through Scripture.

That whole history of reactionary Christianity is ignored, and even the SBC has tried to rewrite its own history, not acknowledging the role of slavery in their founding. My point is simply that, when a method of interpreting Scripture becomes ubiquitous in a community, then people don’t realize that they’re interpreting Scripture through a particular lens—they think they’re just reading what is there.

For years, the story of Sodom was taken as a condemnation of homosexuality, but there is really nothing about homosexuality in it—the Sodomites were more commonly condemned for oppressing the poor. There are rapes in it, and one of them would have been homosexual, but there is no indication that homosexuality was accepted as a natural practice in the community. Yet, for years, the story of Sodom was cited from the pulpit as though it obviously condemned all same-sex relationships.

For readers of The New York Times, The Nation, or other progressive outlets, the Scriptural argument over homosexuality was under the radar, but it was crucial to the progress we’ve made on the civil rights of people with sexualities stigmatized by reactionary Christians. The Scriptural argument about queer sexuality was always muddled—Sodom wasn’t really about gay sex, the word “homosexuality” is nowhere in Scripture, people who cite Leviticus about men lying with each other get that sentiment tattooed on themselves while wearing mixed fibers, Paul was opposed to sex in general.

Reactionary Christians managed to promote their muddled view as long as no one raised questions about exegesis, and the Christian Left raised those questions over and over. And now even mainstream reactionary churches who argue that Scripture condemns homosexuality have abandoned the story of Sodom as a proof text. That success can be laid at the feet of progressive Christians.

One thing that turned large numbers of people, I think, was the number of bloggers, popular Christian authors, and pastors making the more sensible Scriptural argument: there isn’t a coherent method of reading Scripture that demonizes queer sexuality and allows the practices reactionary Christians want to allow (such as non-procreative sex, divorce, wildflower mixes, corduroy, oppressing the poor).

Similarly, an important arena of the Civil Rights movement was the one in which progressive Christians debated the Scriptural argument. One of the more appalling “down the memory hole” moments in American history is the role of reactionary Christians in civil rights. Segregation was a religious issue, supported by Genesis IX and various other texts (about God putting peoples where they belong, and all the texts about mixing). Even “moderate” Christians, like those who opposed King, and to whom he responded in his letter, opposed integration.

That’s important. The major white churches in the South supported segregation, and all of the reactionary ones did. The opponents of segregation (like the opponents of slavery) were progressive Christians, sometimes part of organizations (like the black churches) and sometimes on the edge of getting disavowed by their organizations. And that is obscured, sometimes deliberately, as when reactionary Christians try to claim that “Christianity” was on the side of King—no, in fact, reactionary Christianity was on the side of segregation.

Right now, there is a complicated fallacy of genus-species among many reactionary Christians, in that they are trying to claim the accomplishments of people like Jesse Jackson, Martin Luther King, Jr., and Stokely Carmichael on the grounds that King was Christian, while ignoring that their churches and leaders disavowed and demonized those people (and, in the case of Jackson and Carmichael, still do).

Reactionary Christianity has two major problems: one is a historical record problem, and the second, related, is an exegesis problem. They continually deny or rewrite their own participation in oppression, and they have thereby enabled the occlusion of the problems their method of exegesis presents. If their method of reading got them to support slavery and segregation, practices they now condemn, then their method is flawed. Denying the problems with their history enables them to deny the problems with their method.

Reactionary Christianity’s method of reading Scripture begins by assuming that the current cultural hierarchy is intended by God, that this world is just, that everything they believe is right, and then goes in search of texts that will support that premise. And there is also a hidden premise that the world is easily interpretable, that uncertainty and ambiguity are unnecessary because they are the signs of a weak faith, and that the world is divided into the good and the bad.

  2. The Scriptural argument

The proof-text for the notion that poor people don’t deserve health care or other benefits is 2 Thessalonians 3:10, “For even when we were with you, this we commanded you: that if any would not work, neither should he eat.”

2 Thessalonians may or may not have been written by Paul (probably not), but it certainly contradicts what both Paul and Jesus said about how to treat the poor. There are far more texts that insist on giving without question, caring for the poor, tending to people without judging, and on humans not presuming to be God (that is, we are not perfect judges of good and evil, and our fall was precisely on the grounds of thinking we should be).

That we have a large amount of public policy resting on that single wobbly text of 2 Thessalonians 3:10 is concerning, but it isn’t new—the Scriptural arguments for slavery, segregation, and homophobia were and are similarly wobbly. Prosperity gospel has a very shaky Scriptural foundation, and the whole notion that Scripture supports an easy division into makers and takers isn’t any easier to argue than the readings that supported antebellum US practices regarding slavery.

Their reading of Scripture says that they should feel good about health insurance being restricted to people who have jobs (which is why Congress is cheerfully giving themselves benefits they’re denying to others—they see themselves as having earned those benefits by having the job of being in Congress). They can feel justified (in the religious sense) in cutting off people on Medicaid, those who are un- or underemployed, and those with pre-existing conditions because they believe that Scripture tells them that those people could simply stop being un- or underemployed, or have made different choices that wouldn’t have landed them on Medicaid, or could have prayed enough not to have those pre-existing conditions. They believe that they are, in this life, sitting by Jesus’ side and handing out judgments.

I think they’re wrong. But calling them hypocrites won’t work.

This is an argument about Scripture, and progressives need to understand that, as with other policy debates, progressive Christians will do some of the heavy lifting. And progressive Christians need to understand that it is our calling: to point, over and over, to Jesus’ passion for the poor and outcast, and to his insistence that the rewards of this world should never be taken as proof of much of anything.




King Lear and charismatic leadership

Recently, various highly factionalized media worked their audience into a froth by reporting that New York’s “Shakespeare in the Park” had Julius Caesar represented as Trump. That these media were successful shows that people are willing to get outraged on the basis of no information or misinformation. Shakespeare’s Caesar is neither a villain nor a tyrant.

And it’s the wrong Shakespeare anyway for a Trump comparison. Shakespeare was deeply ambivalent about what we would now consider democratic discourse (look at how quickly Marc Antony turns the crowd, or Coriolanus’ inability to maintain popularity). But he wasn’t ambivalent about leaders who insist on hyperbolic displays of personal loyalty. They are the source of tragedy.

The truly Shakespearean moment recently was Trump’s cabinet meeting, which he seemed to think would gain him popularity with his base, since it was his entire cabinet expressing perfect loyalty to him. And anyone even a little familiar with Shakespeare immediately thought of the scene in King Lear when Lear demands professions of loyalty. Trump isn’t Caesar; he’s Lear.

Lear’s insistence on loyalty meant that he rejected the person who was speaking the truth to him, and the consequence was tragedy. It isn’t exactly news, at least among people familiar with the history of persuasion and leadership, that leaders who surround themselves with people who make the leader feel great (or who worship the leader) make bad decisions. Ian Kershaw’s elegant Fateful Choices makes the point vividly, showing how leaders like Mussolini, Hitler, or Hirohito skidded into increasingly bad decisions because they treated dissent as disloyalty.

In business schools, this kind of leadership is called “charismatic,” and it is often presented as an unequivocal good—something that is surely making Max Weber (who initially described it in 1916) turn in his grave. Weber identified three sources of power for leaders: traditional, legal, and charismatic, and Hannah Arendt (the scholar of totalitarianism) added a fourth: someone whose authority comes from having demonstrated context-specific knowledge. Weber argued that charismatic leadership is the most volatile.

In business schools, charismatic leadership is praised because it motivates followers to go above and beyond; followers who believe in the leader are less likely to resist. And, while that might seem like an unequivocal good, it’s only good if the leader is leading the institution in a good direction. If the direction is bad, then disaster just happens faster.

Charismatic leadership is a relationship that requires complete acquiescence and submission on the part of the followers. It assumes that there is a limited amount of power available (thus, the more power that others have, the less there is for the leader). And so the charismatic leader is threatened by others taking leadership roles, pointing out her errors, or having expertise to which she should submit. It is a relationship of pure hierarchy, simultaneously robust and fragile: it can withstand an extraordinary amount of disconfirming evidence (that the leader is not actually all that good, does not have the requisite traits, is out of her depth, is making bad decisions) by simply rejecting it; it is fragile, however, insofar as the admission of a serious flaw on the part of the leader destroys the relationship entirely. A leader who relies on legitimacy isn't weakened by disagreement (and might even be strengthened by it), but a charismatic leader is.

Hence, leaders who rely on legitimacy encourage disagreement and dissent because that leader’s authority is strengthened by the expertise, contributions, and criticism of others, but charismatic leaders insist on loyalty.

Charismatic leadership is praised in many areas because it leads to blind loyalty, and blind loyalty certainly does make an organization in which people work feverishly toward the leader's ends. But what if those ends aren't good?

Whether charismatic leadership is the best model for business is more disputed than best sellers on leadership might lead one to believe. There is no dispute, however, that it’s a model of leadership profoundly at odds with a democratic society. It is deeply authoritarian, since the authority of the leader is the basis of decision-making, and dissent is disloyalty.

Lear demanded oaths of blind loyalty, and, as often happens under those circumstances, the person who was committed to the truth wouldn’t take such an oath. And that person was the hero.

“Just Write!” and the Rhetoric of Self-Help

There is a paradox regarding the large number of scholars who get stalled in writing—and a large number do get stalled at some point (around 50% of graduate students drop out): they got far enough to get stalled because, for some long period of time, they were able to write. People who can't write a second book, or a first one, or a dissertation, are people who wrote well enough and often enough to get to the point that they needed to write a dissertation, first book, second book, grant, and so on. So, what happened?

The advice they’re likely to be given is, “Just write.” And the reason we give that advice (advice I gave for years) is that we have the sense that they’re overthinking things, that, when they sit down to write, they’re thinking about failure, and success, and shame, and all the things that might go wrong, and all the ways what they’re writing might be inadequate, and all the negative reactions they might get for what they’ve written. So, we say, “Just write,” meaning, “Don’t think about those things right now.”

The project of writing may seem overwhelming because existentially risky, and the fear created by all the anxiety and uncertainty is paralyzing. It can seem impossibly complicated, and so we give simple advice because we believe that persuading them to adopt a simpler view of the task ahead will enable them to write something. Once they’ve written something, once they’re unstuck, then they can write something more, and then revise, and then write more. Seeing that they have written will give them the confidence they need to keep writing.

And I think that advice often works, hence the (deserved) success of books like Writing Your Dissertation in Fifteen Minutes a Day or Destination Dissertation. They simplify the task initially, and present the tasks involved in ways that are more precise than accurate, but with the admirable goal of keeping people moving. Many people find those books useful, and that’s great. But many people don’t, and I think the unhappy consequence of the “you just have to do this” rhetoric is that there is an odd shaming that happens to people for whom that advice doesn’t work. And, while it’s great that it works for a lot of people, there are a lot for whom it doesn’t, and I’m not happy that they feel shamed.

These books have, as Barbara Kamler and Pat Thomson have argued, characteristics typical of the self-help genre ("The Failure of Dissertation Advice Books"), especially in that they present dissertation writing as "a series of linear steps" with "hidden rules" that the author reveals. While I am not as critical of those books, or of the genre of self-help, as Kamler and Thomson are, I think their basic point is worth taking seriously: this advice misleads students because it presents dissertation writing as a set of practices and habits rather than as cognitive challenges and developments.

Academic writing is hard because it’s hard. Learning to master the postures, steps, and dances of developing a plausible research question, identifying and mastering appropriate sources, determining necessary kinds of support, managing a potentially sprawling project, and positioning a new or even controversial claim in an existing scholarly conversation—all of that is hard and requires cognitive changes, not just writing practices.

Telling people academic writing "just" requires anything ("just write," "just write every day," "just ignore your fears") is a polite and sometimes useful fiction. And self-help books' reliance on simple steps and hidden rules is, I'd suggest, not necessarily manipulative, but based in the sense that telling people something hard is actually hard can discourage them. If you lie, and thereby motivate them to try doing it, then they might realize that, while hard, it isn't impossible.

I think the implicit analogy is to something like telling a person who needs to exercise that they should “just get up off the couch.” Telling people that improving their health will be a long and slow process with many setbacks is unlikely to motivate someone to start the process; it makes the goal seem impossible, and unrewarding. Telling someone that getting healthier is simple, and they “just” need to increase their exercise slightly, or reduce portion size slightly, or do one thing differently will at least get them started. Having gotten a little healthier might inspire them to do more, but, even if it doesn’t, they are getting a little better.

But that’s the wrong analogy.

A scholar who is having difficulty writing is not analogous to someone who needs to get up off the couch: this is a person with a long record of successes as a writer. That is what we (and people who are stuck) so often lose track of when we give the "just write" advice. They are not a person sitting on a couch; they are someone with an exercise practice that has always worked for them in the past but isn't working now.

The better analogy, I would suggest, is a sprinter who is now trying to run a marathon. Sprinting has worked for them in the past, and many academics have a writing process that is akin to sprinting—chunks of time in which we do nothing but write, and try to get as much done as quickly as we can. Writing a dissertation or book, on the other hand, is more like running a marathon.

It would be unethical to tell a sprinter who is unable to run a marathon that she should “just run.” She has been running; she’s quite good at it. But the way that she has been running is not working for this new distance. And if she does try to run a marathon the way she has always run short races, she will hurt herself.

My intuition is that people who have trouble writing are people who have always used the sprinting method, and have simply managed to develop the motivational strategies to sprint for longer, or to collapse from time to time during the race and pick themselves up. Often, it seems to me, that motivation relies on panic and negative self-talk—they manage to binge write because otherwise, they tell themselves, they are a failure.

So I’m not saying that “Just write” is always bad advice. I am saying that it sometimes is; it is sometimes something that can send people into shame spirals. It only works for some people, for people who do find that polite fiction motivating. For others, though, telling them “just write” is exactly like telling a person in a panic attack “just calm down” or someone depressed “just cheer up.”

The “just write” comes from a concern that lack of confidence will paralyze a student. But I think we might be solving the wrong problem.

Part of the problem is the myth of positive thinking, which has taken on an almost magical quality for some people. There is a notion that you should only think positive thoughts, as though thinking negative things brings on bad events. Since thinking clearly about how hard it is to write a book, dissertation, or grant (and, specifically, thinking clearly about how we might have habits or processes that inhibit our success) is thinking about “bad” things, about how things might go wrong or what troubles we might have, the myth of positive thinking says you shouldn’t do it. You should, instead, just imagine success.

This is a myth. It isn't just a myth, but pernicious, destructive nonsense. A (sometimes secular) descendant of the prosperity gospel elegantly described by Kate Bowler in Blessed, this is magical thinking pure and simple, and perfectly contrary to what research shows about how positive thinking actually affects motivation.

But here I should be clear. Some people who advocate wishful thinking do so because they believe that the only other possibility is wallowing in self-loathing and a sense that the task is impossible, and they believe that telling students that academic writing is hard will necessarily lead to their believing it is impossible. In other words, there is an assumption that there is a binary between thinking only and entirely about positive outcomes and thinking only and entirely about tragic outcomes. The former is empowering and the latter is paralyzing. That narrative is wrong on all three counts—positive thinking is not necessarily enabling, moments of despair are not necessarily disabling, and our attitude toward our own challenges is not usefully described as a binary between pure optimism and pure despair. Left out of that binary is being hopefully strategic: aware of possible failures, mindful of hurdles, with confidence in our resilience as much as in our talents.

As to the first, studies clearly show that refusing to think negative thoughts about possible outcomes is actively harmful, and frequently impairs achievement. That’s important to remember: telling students they shouldn’t think about their own flaws, the challenges ahead of them, and how things might go wrong is not helping them, and it is making it less likely they will do what they need to do.

Gabriele Oettingen's considerable research (summarized in her very helpful book Rethinking Positive Thinking) shows that, while wishful thinking can be useful for maintaining hope in a bad situation or identifying long-term goals, it inhibits action. Fantasizing about how wonderful a dissertation or book will be doesn't inspire us to write it; for many people, it makes the actual, sometimes gritty, work so much more unattractive in comparison that it's impossible to write. The fantasy is far more fun than writing a crummy first draft. Similarly, Carol Dweck's research on mindsets shows that success depends on acknowledging what has gone wrong and identifying how one might grow and change to get a different outcome in the future.

A sense that the task is so hard as to be impossible is not inevitably and necessarily disabling. It is, however, inevitable. It is dishonest to tell students that we never feel that what we’re trying to do can’t be done or isn’t worth doing, because so many of us do. And most of us got (and get) through it. Sometimes it took time, therapy, medication, changing things in our personal lives, changing jobs, changing projects, all of the above. But I don’t know any productive scholar free from times of slogging through the slough of despond.

In my experience, academic writing gets easier, but it’s never easy. The hardest writing is probably finishing a dissertation while writing job materials—nothing after that is so hard. But it’s always hard. If we tell students that it’s easy, or that it gets easy, even if we do so with the intention of keeping them moving, we do them a disservice. If they believe us, if they believe that we find it easy, then, when it gets hard, as it necessarily will, they have to conclude that there is something wrong with them. They are unhappily likely to conclude that they have been exposed for the imposter they always worried they were.

The "just write" advice almost certainly works for some people in some situations, as does the "just write every day" or "just freewrite" or "just start with your thesis" or any of the other practices and rules that begin with "just." They work for someone somewhere and maybe they work for everyone some of the time, and they always strike me as sensible enough to suggest that people experiment with them. But we shouldn't pretend that they're magical and can't possibly fail, or that someone "just" needs to do them. The perhaps well-intentioned fiction that academic writing "just" requires certain practices is magical thinking, and we need to stop saying it.

In my experience, people who find the "just write" advice useless find it too abstract. So, I think we need to be clear that scholarly productivity is, for most people, hard, and it's fine that a person finds it hard. And it takes practice, so there are some things a person might "just write":

  • the methods section;
  • descriptions of an incident, moment in a text, interaction, or some other very, very specific epitome of their problem (Pirsig’s brick in the wall of the opera house);
  • summaries of their secondary materials with a discussion of how each text is and is not sufficient for their research;
  • a collection of data;
  • the threads from one datum to another;
  • a letter to their favorite undergrad teacher about their current research;
  • a description of their anxieties about their project;
  • an imitation of an introduction, abstract, conclusion, or transition paragraph they like written by a junior scholar.

I’m not presenting that list as a magical solution. It would be odd for me to say that simplistic advice is not helpful and then give a list of the five (or seven, or ten) things we “just” have to do to become (or teach others to become) skilled and productive academic writers. What we have to do is acknowledge that the project requires significant and complicated cognitive changes: that, for most of us, scholarly writing is hard because it’s hard. Let’s be honest about that.






Arguments from identity and the easy demagoguery of everyday commenting

I recently had a piece published on Salon, and it was thrilling: http://www.salon.com/2017/06/10/demagoguery-vs-democracy-how-us-vs-them-can-lead-to-state-led-violence/ And the comments quickly skeeved off in the direction of whether "liberals" or "republicans" are better people. That was frustrating.

My argument about demagoguery has several parts:

  1. demagoguery shifts the stasis (as rhetoricians say) from policy arguments to identity arguments, relying on the assumption that all that matters is whether advocates/critics of a policy are ingroup or outgroup.
  2. therefore, in a culture of demagoguery all arguments about policy end up relying on two points: which group is better, and what group an advocate is in—in other words, it’s all identity politics.
  3. so, all arguments end up being deductive arguments from identity.
  4. this part is barely mentioned in either book I’ve done on the issue: reasoning from identity is done by homogenizing the outgroup, so if a person seems to be a member of a group, you can attribute to them everything any other member of that group has said or done.

There are other characteristics, but these are the ones that seemed especially important in the comment section on the article.

And here I have to go back to some really old work, and say that I think we remain muddled on how public discourse operates—we flop around among models of expression, deliberation, and purchasing.

Lay theories of public deliberation aren’t expected to be entirely consistent—as social psychologists have noted, we all toggle between naïve realism and skepticism in our everyday lives. But I think there are important consequences of our failing to realize that we flop around among various models of arguing and various models of knowing.

There is a basic premise: major policy decisions shouldn’t be made on the basis of some kind of model of us versus them when we’re talking about a culture that includes us and them. The idea that only one group is entitled to determine policy isn’t democratic, sensible, or Christian.

If we want a thriving community (or nation state or world or even club) then we want enough disagreement that we can prevent the problems associated with what is often called groupthink—when a bunch of like-minded and ingroup people agree that what they think and who they are is, obviously, the best.

It’s clearly demonstrated that people have trouble admitting error, and therefore, if we want to make good decisions, we need people who will tell us we’re wrong. Good decisions rely on people contributing from various perspectives—not just people like us.

That’s the deliberative model of public argument: the point of Congress and state legislatures is that they consider various points of view and the impacts on all communities, and then come to a decision. If we look at public decision-making from that perspective, then we would ensure that there is diverse representation in deliberative assemblies, such as the state legislature or Congress. (The notion that the best decisions involve various perspectives is a given in successful business decision-making models.)

There is another model: the expressive model. For many people, there is no such thing as persuasion, and public discourse is all about people expressing their opinions (usually their statements of commitment to their group). Public discourse isn’t about deliberation or communal reasoning—it’s a bunch of people shouting in a stadium, and the group that has the people who shout the loudest wins. You don’t go into that stadium intending to listen carefully to what other people are shouting in order to come to a new understanding of your own views: you come to shout out the others.

I can’t think of a time when this model of public discourse led to a community coming to a good decision.

The third model is that ideas/policies are products sold just like shampoo. The hope is that the market is rational, and so if a particular shampoo sells the best, it is the best product. This is a problematic model in many ways, not the least of which is that it’s circular. The market is assumed to be rational because it represents what people value, and it’s assumed that people’s values are rational. This is an almost religious belief in that it can’t be supported empirically, and it has often been falsified (bubbles). The problem with the market model is three-fold: first, people buy products on the basis of short-term benefits and inadequate information, whereas policy decisions should be made in light of long-term consequences; second, it makes voters passive consumers who can whinge about a candidate not being adequately sold to them (instead of seeing it as our responsibility to inform ourselves about candidates); finally, if I buy the wrong shampoo, my hair falls out, but if I buy the wrong candidate, my community is harmed.

The activity of the market always represents short-term choices, and assessments of “marketability” tend to be about short-term gains. Unless you have a circular argument (the market choice is rational because the market choice is defined as rational—which a surprising number of people on this issue assume), then the market does not represent the long-term best interest of the people (think bubbles). In addition, the market, by definition, cannot represent the values of those without the resources to participate (future generations, for instance). The market is always the tragedy of the commons.

(You never get a defense of the inherent rationality of the market that isn’t logically circular, doesn’t assume the just world hypothesis, or doesn’t appeal to prosperity gospel.)

While I believe that the deliberative model is best for community decision-making, I think a healthy public sphere has places where each of these models is practiced. It’s fine if someone’s Facebook page (or Twitter feed) is entirely expressive. But, on the whole, there should be a place where people try to deliberate with one another, or, at least, acknowledge in the abstract that the inclusion of people with whom they disagree is valuable. The problem is that people are spending all of their time in expressive public spheres, and making decisions on the basis of group identity.

I was definitely one of the people who thought that the digitally-connected world would be the Habermasian public sphere, and that isn’t how it played out. I think there were moments (in the 80s) when it seemed to be something like what Habermas described—a realm in which argument and not identity mattered. But, what became clear is that identity does matter.

And so here is what I came to believe: in good arguments there are a lot of data. And identity is a datum. But that’s all it is. It isn’t a premise: it’s a datum.

[As an aside, I have to say that sometimes I think that public deliberation could be wonderful were we to understand five points: 1) a premise and a datum are not the same thing; 2) don’t put always or never or necessarily into someone else’s argument; 3) treat others as you want to be treated; 4) there isn’t a binary between certainty and sloppy relativism; 5) a claim can be false and/or illogical even if the evidence for the claim is true.]

But, what happens in a lot of public discourse is that people assume that you can deduce the goodness of an argument from the goodness of the person making the argument, and that you can make that determination on the basis of cues. That is, if a person says something that, for you, cues that they are a member of a particular group, you can assume that they believe all the things you think members of that group believe. If that particular group is one you share, then you’ll attribute all sorts of wonderful qualities and beliefs to them; if it’s an outgroup for you, then you’ll attribute all sorts of stupid beliefs, bad motives, and bad behavior to them.

That last point is simultaneously simple and complicated. We tend to homogenize the outgroup, and so if an outgroup member says that squirrels are awesome, and another outgroup member says that little dogs are the best, we’ll assume that the second person thinks squirrels are awesome. People who are particularly drawn to thinking in terms of us versus them will take mere criticism of the ingroup as sufficient proof that the critic is a member of the outgroup, and will then attribute to that person all the things that are supposed to be true of outgroup members.

This is deductive reasoning—inferring the beliefs of individuals from our assumptions about what members of their group believe. It’s pervasive in toxic publics.

And, no, it isn’t particular to any one “side” of the political spectrum. But, the fact that that question even comes up—who does this more?—is a sign of how uselessly committed to group loyalty our political world has become.

Democracy presumes that there is no single person, or single group, that knows all that is necessary to make good policy decisions. And that means that, while it isn’t necessary that people in a democracy believe that all views are equally valid (or even that all views are valid), it is necessary that we believe that we have something to learn from people with whom we disagree—we cannot delegitimate everyone who disagrees with us and continue to claim that we believe in democracy. (For me, this tendency to dismiss every other point of view as corrupt, servile, or in other ways illegitimate is especially troubling in people who self-identify as democratic socialists—c’mon, folks, it isn’t democratic if it’s a one-party system.) The tendency to insist that only one point of view is legitimate is profoundly anti-democratic—it assumes that the ideal situation is a one-party system. And that’s authoritarianism. And it has never ended well.



Comey’s testimony and identity politics

Comey, being a careful person, documented his deeply problematic meetings with Trump in the moment, and he’s released a statement with all anyone needs to know—Trump used his power to fire Comey in order to try to coerce him into closing down an investigation.

But that isn’t how it will play out in the hearing tomorrow.

For many years now (at least since the rise of Fox News), the GOP Political Correctness Machine has so consistently engaged in projection that you can tell the weakest point of a GOP candidate by noticing what accusations the Fox media (and other water carriers, as Limbaugh called himself) make about their opponents (think about their attack on Kerry for his war record).

For years, they’ve been flinging the accusation of political correctness at their opposition, and it’s a great example of projection.

Originally, the term came from the way that the Stalinist propaganda machine would decide what was the correct line to take on some event: Nazis are evil, Nazis are okay, Nazis are evil. To be politically correct meant that you were in line with what the higher-ups said was the right line to take on a political issue. And it was even better if you could pivot quickly.

To be politically correct means that you don’t have principles that operate across groups (adultery is bad whether it’s committed by a Republican, Libertarian, Democrat, or Green), but that you know what your beliefs are supposed to be. And the GOP is all about political correctness in that sense—that’s why they accuse others of it so often. Michelle Obama dishonored the office of First Lady by wearing a sleeveless dress—that was presented as a principle. But the fact that they had objected neither to Nancy Reagan’s sleeveless dresses nor to the current First Lady’s earlier problematic sartorial choices shows it was never about the principle. They pivoted to condemn Obama and then pivoted again not to condemn Trump.

So, what will be the politically correct thing to say about Comey?

While large numbers of people across the political spectrum make policy judgments on the basis of their perceptions of identity (if “good” people support a policy, it must be a “good” policy), loyalty to the group is more a value among people who self-identify as conservative (see Jonathan Haidt’s The Righteous Mind). Authoritarians also tend to reason from ingroup membership, and authoritarians are more likely to self-identify as conservative (Hetherington and Weiler’s Authoritarianism and Polarization in American Politics has a good summary of the research on this; so does John Jost’s work in political psychology).

In other words, the GOP Political Correctness Machine has also been engaged in projection in making “lefties engage in identity politics” one of the politically correct things to say. They’re all about identity politics.

So, what we can expect is that the politically correct Congresscritters will attack Comey’s identity. They’ll dodge any of his claims of what happened in favor of questions that enable them to present him as a bad person, especially as one disloyal to GOP values.

Of course the head of the FBI should not be a loyal Republican. The very same people who will condemn him for that disloyalty would fling themselves around in outrage were a Congress with a Dem majority and/or President to insist that he be loyal to Dems.

So, let’s be clear: this isn’t about a principle that operates across groups. This is purely and simply about factional politics. This is about loyalty only being a value when it’s a loyalty to their group.

It will be identity politics.