King Lear and charismatic leadership

Recently, various highly factionalized media outlets worked their audiences into a froth by reporting that New York’s “Shakespeare in the Park” had represented Julius Caesar as Trump. That these outlets were successful shows people are willing to get outraged on the basis of no information or misinformation. Shakespeare’s Caesar is neither a villain nor a tyrant.

And it’s the wrong Shakespeare anyway for a Trump comparison. Shakespeare was deeply ambivalent about what we would now consider democratic discourse (look at how quickly Mark Antony turns the crowd, or at Coriolanus’ inability to maintain popularity). But he wasn’t ambivalent about leaders who insist on hyperbolic displays of personal loyalty. They are the source of tragedy.

The truly Shakespearean moment recently was Trump’s cabinet meeting, in which his entire cabinet expressed perfect loyalty to him, a display he seemed to think would gain him popularity with his base. And anyone even a little familiar with Shakespeare immediately thought of the scene in King Lear when Lear demands professions of loyalty. Trump isn’t Caesar; he’s Lear.

Lear’s insistence on loyalty meant that he rejected the person who was speaking the truth to him, and the consequence was tragedy. It isn’t exactly news, at least among people familiar with the history of persuasion and leadership, that leaders who surround themselves with people who make them feel great (or who worship them) make bad decisions. Ian Kershaw’s elegant Fateful Choices makes the point vividly, showing how leaders like Mussolini, Hitler, and Hirohito skidded into increasingly bad decisions because they treated dissent as disloyalty.

In business schools, this kind of leadership is called “charismatic,” and it is often presented as an unequivocal good—something that is surely making Max Weber (who initially described it in 1916) turn in his grave. Weber identified three sources of authority for leaders: traditional, legal, and charismatic; Hannah Arendt (the scholar of totalitarianism) added a fourth: authority that comes from having demonstrated context-specific knowledge. Weber argued that charismatic leadership is the most volatile.

In business schools, charismatic leadership is praised because it motivates followers to go above and beyond; followers who believe in the leader are less likely to resist. And, while that might seem like an unequivocal good, it’s only good if the leader is leading the institution in a good direction. If the direction is bad, then disaster just happens faster.

Charismatic leadership is a relationship that requires complete acquiescence and submission on the part of the followers. It assumes that there is a limited amount of power available (thus, the more power that others have, the less there is for the leader). And so the charismatic leader is threatened by others taking leadership roles, pointing out her errors, or having expertise to which she should submit. It is a relationship of pure hierarchy, simultaneously robust and fragile. It is robust because it can withstand an extraordinary amount of disconfirming evidence (that the leader is not actually all that good, does not have the requisite traits, is out of her depth, is making bad decisions) simply by rejecting it; it is fragile because the admission of a serious flaw on the part of the leader destroys the relationship entirely. A leader who relies on legitimacy isn’t weakened by disagreement (and might even be strengthened by it), but a charismatic leader is.

Hence, leaders who rely on legitimacy encourage disagreement and dissent, because their authority is strengthened by the expertise, contributions, and criticism of others; charismatic leaders, by contrast, insist on loyalty.

Charismatic leadership is praised in many areas because it leads to blind loyalty, and blind loyalty certainly does produce an organization in which people work feverishly toward the leader’s ends. But what if those ends aren’t good?

Whether charismatic leadership is the best model for business is more disputed than best sellers on leadership might lead one to believe. There is no dispute, however, that it’s a model of leadership profoundly at odds with a democratic society. It is deeply authoritarian, since the authority of the leader is the basis of decision-making, and dissent is disloyalty.

Lear demanded oaths of blind loyalty, and, as often happens under those circumstances, the person who was committed to the truth wouldn’t take such an oath. And that person was the hero.

A crank theory about individualism as an epistemology

It’s striking to me that a certain sort of person will blissfully reject disconfirming scholarship or expertise on the grounds that it appears to be contradicted by a single experience of theirs. That same sort of person will, if you make an explicit generalization (“most people in Europe are multilingual”), consider your point refuted if they give you a single example (“my cousin Terry only speaks English”). I say disconfirming because these same people don’t do this if the scholarship or generalization confirms what they believe. These people tend to make decisions entirely on the basis of their personal experience and the experiences of their friends. And, it seems to me, they’re singularly prone to getting scammed, following harmful health fads (such as ephedra), misunderstanding the argument about vaccines, and denying climate change. I’ve watched people (and sometimes myself) try to persuade them with studies, citations, and expert opinion, and it doesn’t work. And we aren’t trying (as they often think) to persuade them that they didn’t have the experience they did, or that what they’re claiming happened never happened, but just that their experience isn’t the end of the argument. Yet we get nowhere.

I’m not opposed to arguing from personal experience, or bringing in personal experience when assessing other kinds of data—this whole piece is based in personal experience. I don’t think experts are always right, nor that common sense is necessarily wrong. I think we shouldn’t be thinking in terms of which is right and which is wrong, as though it’s a binary. I think one of the reasons we have problems with arguments about vaccination and climate change is that these aren’t arguments about claims (is this claim good or bad?) but about epistemology. I think that people who value a certain model of identity (that an authentic individual is a person of certainty and clarity) tend to value a highly individualized version of naïve realism (the notion that the truth about any situation is always obvious to a person of good sense and few prejudices).

If that’s right, then we need to stop arguing about what studies say, and we need to argue about epistemology, and the way a lot of scientists argue (a binary of naïve realism or rampant subjectivism) is just making it all worse.

Why Christians should not endorse the “sincerely held religious belief” standard…

….unless they’re racists who wish we hadn’t ended segregation.

It has become a talking point in certain circles that there should not be restrictions on what people with “sincerely held religious beliefs” can do, even if they’re governmental employees. If it’s your sincerely held religious belief that, for instance, homosexuality is wrong, you should not be “forced” to bake a cake for a gay marriage, or, as a government employee, sign a marriage certificate for such a marriage. This is presented as a fairness and tolerance argument.

It seems to be tolerant because you’re allowing people to act on “sincerely held” religious beliefs. I think the major political figures know what they’re doing (they don’t mean to allow all people to act on those beliefs), but I think a lot of reasonable people look at this as a way to be respectful and tolerant. What those people don’t know is that this is an argument for segregation. It’s also an argument for sharia law.

What people don’t understand is that the most appalling things in our history, such as slavery, the genocide of Native Americans, and segregation, were all enacted by people who sincerely believed they were commanded by Scripture to do those things. People who think “sincerely held” religious beliefs won’t lead to awful things don’t know about groups like Christian Identity, which argue for appalling racist policies on the grounds of sincerely held religious beliefs.

I think it’s important to look carefully at just how bad that “sincerely held” standard is.

Here’s why it seems to be reasonable: it looks like it’s fair. It isn’t saying “my religion is good and yours is bad” (it actually is, but more on that below); it seems to be tolerant of all religions, so it’s tolerant.

But let’s stop here for a second.

This argument is assuming that people who act on “non-religious” values don’t deserve the same consideration as people who claim a religious belief. So, the very premise of this argument is that people who are religious should be treated better than non-religious people. It’s an explicit rejection of fairness across groups—religious people are saying that, because we’re religious, we should treat nonreligious people in a way we wouldn’t want to be treated.

Or, in other words, although we’re claiming to be religious, we aren’t claiming to follow Christ. I’ll come back to that.

The fairness issue gets even uglier when you look at how its advocates behave when confronted with religions other than theirs.

This policy is being sold as a tolerant and respectful thing to do, and it’s framed entirely in terms of liberty. And, therefore, perfectly reasonable people, who don’t happen to pay a lot of attention to the history of religious discrimination in our country, and who are wickedly (sometimes, I think, deliberately) misinformed about the history of segregation, think it’s tolerant, respectful, reasonable, and fair.

It isn’t tolerant, respectful, reasonable, or about liberty. And it is nowhere near fair. It’s about the government giving members of one religion the ability to treat others in a way they would never tolerate. It’s about privileging one political/religious agenda.

Here’s simply one point. I work in a state where I cannot ban guns from my classroom, even were I a Quaker or Amish. The “sincerely held religious beliefs” of Quakers and Jehovah’s Witnesses and other pacifists never come into play here. They have to pay taxes for war, after all. I’m religiously opposed to the death penalty, but I have to pay for it, and I’m struck from juries because I don’t believe in it. If that last thing isn’t religious discrimination, I don’t know what is: I am banned from being on a jury for murder trials because of my religion. My religion says that homosexual marriages are marriages; the people claiming religious freedom haven’t been staying up nights worrying about the fact that they’ve denied me that religious freedom for years. That isn’t snark—that’s an important point. If something is a principle, as opposed to a useful argument for getting your way, then you stand by that principle even when it makes something happen that you don’t want to happen.

So, when was the last time that the people now claiming to support religious freedom supported the freedom of a religion with which they disagreed? How hard did they argue for Quakers?

“Conservative Christians” want Kim Davis, as a government employee, to be able to do only those things in her job that fit with her interpretation of her religion, but they don’t want pacifists to be able to ban guns from their classrooms. Were the defenders of Kim Davis acting on the principle of “government employees should not be required to act against their sincerely held religious beliefs,” then they would include all religious beliefs in their legislation. In fact, if you look, they specify gay marriage. So, this isn’t about religious freedom, this is about gay marriage.

That means that this isn’t about the principle of religious freedom, but about one kind of person of faith getting privileged treatment. This is not even a little about fairness.

I think that a lot of the people I see (and read) repeating the “religious freedom” point just don’t know a lot of people of different religions, and so they don’t imagine things from those points of view. They don’t even know much about Christianity. They don’t know, for instance, that my commitment to marriage equality is a religious belief.

Allowing someone like Kim Davis to refuse to allow certain kinds of marriages means my government is violating my sincerely held religious beliefs. Passing a law that requires guns in classrooms violates the sincerely held religious beliefs of many teachers. Ending segregation violated the sincerely held religious beliefs of many Christians.

Many political figures support the “freedom” of a teacher to lead prayer until the moment they imagine that teacher being Muslim. It’s fine if someone on the street fails to think that way, but when political figures with considerable power think that way, then they are either failing in the major job responsibility they have (to think from various perspectives about policies they support), or they’re engaged in strategic misnaming. They never meant religious freedom—they meant the freedom for people like them to force their religion on others; they meant theocracy.

And I think it’s the second because, so often, when people point out that the “right” they are promoting would have to be extended to Muslims, Quakers, Jehovah’s Witnesses, major figures suddenly argue that the US is and must always be a “Christian” country. There’s a longer argument there, but here I’ll just mention that the argument they make for that case is internally inconsistent (they don’t use terms like “founders” or “Christian” consistently) and contradicted by the historical record.

Here’s simply one example. People with access to Google will sometimes argue that the government should promote the celebration of Christmas because the Founders were Christian. And those same people sometimes include the seventeenth-century New England Puritans in their definition of “founder.” But the New England Puritans weren’t the first people to settle what would later become the US, they weren’t the first Europeans to do so, they weren’t the first Europeans to settle what would later become the thirteen colonies, they weren’t even the first English to settle what would later become the thirteen colonies, and they prohibited the celebration of Christmas.

So, really, it’s a group of people arguing (badly) that the government should promote their political agenda.

Well, okay, that’s what everyone does. The difference is that this group is pretending that their political agenda is the only sincerely held religious one. They aren’t arguing for fairness across religious beliefs; they’re pretending only their religion counts. And they don’t even know the history of their religion.

There are two problems with that argument. One I’ll mention now, and the other I’ll get to later. The one I’ll mention now is simply this: let your yea be yea and your nay be nay. Don’t lie. If you want to argue for theocracy, go for it. But don’t argue for theocracy under the cover of religious freedom. The two are opposites.

It is a hobby horse of mine that we teach the history of civil rights movements in the US so badly, and this is an example of why it matters. Everyone loves the people who engaged in the Greensboro sit-in, but they don’t realize that it took place on private property (a Woolworth’s). If you think “sincerely held religious belief” should be sufficient grounds for a private business refusing service, then you endorse segregation. If SCOTUS had thought the way you think they should, we would still have race-based segregation.

That’s what segregation was—it was a practice defended by appeals to religion. You can see this in the major arguments for segregation, such as Theodore Bilbo’s Take Your Choice; in texts going back to defenses of slavery (it was rare for someone to defend segregation and not slavery); and in the numerous pro-segregation sermons and doctrinal statements (Haynes’s Noah’s Curse traces out the importance of Genesis IX in both slavery and segregation).

Take, for example, Newman v. Piggie Park Enterprises, a SCOTUS case in which the owner of a drive-in barbeque place argued that it was his right to refuse to serve nonwhites. He claimed he had that right on three grounds: that the federal law didn’t apply to him (a technical issue easily solved—it did), his property rights (another easily solved issue), and his religious freedom.

In that era, the religious freedom issue was also easily solved. The tendency of SCOTUS was to say that religious freedom was a private issue, and so it could be relatively easily outweighed in public by other concerns, especially fairness (more on that below). Also, courts tended to rule on the basis of mainstream religious beliefs. If you read the transcript of the testimony, you’ll notice that the judge refuses to take Bessinger’s reading of the Bible as a basis of authority. When Bessinger tries to support his claim with a newspaper clipping, the judge cuts it short. And the judge never worries about Bessinger’s personal reading of Scripture.

And so the judge shut down Bessinger, the head of the National Association for the Advancement of White People, along with all the other bigots who wanted to refuse to serve African Americans. He did so because he rejected Bessinger’s claim to religious expertise.

But, had he used the standard of “sincerely held religious belief,” then he would have had to rule in favor of Bessinger, because all Bessinger would have had to do was to show that his reading of Scripture was sincere, not reasonable.

Notice this exchange:

Q: And is it—in your treatment with every individual everyday, do you follow this?

Bessinger: Well, I certainly think I try to. I mean I do as much as I possibly can. What I mean by that, I certainly hope I am living that life, that is what your question is.

Q: Is it your belief to that effect?

Bessinger: Absolutely.

Q: Do you have any beliefs concerning segregation of the races, is that intwined or intermingled with or part of your beliefs as a Christian?

Bessinger: Yes, sir, that is very much part of my belief as a Christian, mixing of the races certainly is.

Q: By races you refer to what, sir?

Bessinger: By races, I refer to the race as the black race, the white race, and the yellow race.

Q: What is the Biblical basis, if any, for such a belief?

Bessinger: Well in the Old Testament God commanded the Hebrews not to mix with other peoples and races.

Anyone even a little bit familiar with the history of racism in the US is, at this point, saying, Oh, really, not this shit again, because Bessinger is mentioning one of the racist proof texts. But people who only know the triumphalist version want to read Bessinger as some crank.

Nope. He was mainstream. Segregation was a religious issue, with many proof texts, and he mentioned one. He could have mentioned Genesis IX, or various passages about not planting certain seeds in with others, or God having placed peoples in different parts of the world. There were a lot of proof texts people had for segregation (more than current bigots have about homosexuality, in fact, since some of those texts are about pederasty).

The court rejected his religious freedom argument because he didn’t cite external authorities (the testimony goes into an argument about a newspaper clipping he presented). And, I’d like to think, all the people now supporting the “sincerely held religious belief” argument would be appalled at the sorts of proof texts people like him provided.

But law is always an issue of principle.

And, if the principle is sincerely held religious belief, he met that standard.

So, people who want to say that Kim Davis can do what she wants are saying that Bessinger should have been able to refuse to serve African Americans. They are (unintentionally, I think) endorsing the principle that segregation was right. That’s worth taking some time to consider. If Davis is right, then so was Bessinger.

If we should allow Davis to refuse to allow some people to marry because she thinks that kind of marriage is a violation of Scripture, and our only standard is personal belief, then we have to say that the courts should have ruled in favor of the people who believed that states could refuse to allow whites and nonwhites to marry, that businesses could refuse to serve nonwhites, and that school districts could insist on segregated schools—those were all sincerely held religious beliefs. Arguing for Kim Davis is arguing for Bessinger; it’s arguing for segregation. It’s also arguing for county clerks refusing to allow bi-racial marriages, marriage after divorce, marriage of anyone wearing mixed fibers, or dealing with anyone with a tattoo or who eats shellfish.

Bessinger sincerely believed that serving nonwhites in the same place he served whites would violate Scripture. And he believed that because a tremendous amount of southern religion promoted that view. He wasn’t a crank; he was acting on what was a commonplace in southern religious discourse.

I said earlier that the “sincerely held religious belief” principle matters in two ways: if it’s a principle for us, then we really do hold all religions to it; if we aren’t going to do that (which would mean allowing communities to enact segregation, sharia law, gay marriage, Satan worship), then this is an argument pretending to be about fairness that is actually an argument for theocracy.

The “sincerely held religious belief” principle either means that communities imposing sharia law is okay, as are segregation, pacifists not allowing guns in classrooms, my serving on death penalty juries despite what prosecutors want, a teacher insisting the class pray to Satan, and all sorts of other practices, or we only mean “sincerely held religious beliefs with which we agree.” In that case, we’re violating the notion that we should treat others as we want to be treated.

So, in service of what is supposed to be a religious argument, Christians have to violate one of the basic precepts of our religion.

That is, it seems to me, an important problem, since, if we reject the notion of “do unto others,” we are also rejecting the person who said that we should act on that principle. Either we allow segregation or we reject Christ.

Or maybe it means that “sincerely held religious belief” is a disastrously bad standard on which to base public policy.

Compromise and Purity (Pt. 1)

When I first began to pay attention to politics, it seemed to me that the problem was clear: people started out with good principles, and then compromised them for short-term gains, and so we should never compromise. Change happens because someone sets a far goal and refuses to be moved.

Then I got more involved in various kinds of change—not just what we think of as “political,” but institutional and even personal changes. And that complicated my notion that change was best achieved by someone setting a far goal and refusing to compromise. I came to think I had misunderstood the role that compromise plays in progressive politics.

I can partially blame my misunderstanding on how history is taught in American high schools—Rosa Parks is presented as a lone, spontaneous actor, as opposed to someone who was part of a very savvy and deliberate campaign; King was actually a moderate; the most effective abolitionists were savvy about their compromises. Of course, one can also create a long list of appalling compromises (I think it’s plausible that LBJ decided to escalate in Vietnam because he thought it was a compromise that would get him what he wanted in terms of domestic policy; FDR may have gone along with Japanese internment as part of a nasty political compromise).

A long swim in the murky waters of the history of progressive (and reactionary) politics has persuaded me that compromise is sometimes a great move and sometimes a disastrous one. And, while, in the abstract, I can repeat what other scholars have said about the conditions under which compromise is savvy, I’m still not very good at knowing the right move in specific moments.

Part of my uncertainty involves what it means to compromise. It can mean that you’ve listened carefully to what everyone involved has to say, and you really think you’ve made all the compromises that can be made. You believe the deliberative possibilities are exhausted because you haven’t been treated as a part of the conversation.

It can also mean that you’re certain that you’re right, that your position is the best one, and that everyone who disagrees with you is spit from the bowels of Satan—you don’t need to listen to anyone else because you’re right.

Here’s the short version: it depends on whether you’re in a bargaining or deliberative situation (refusal to compromise in an expressive situation is just wanking). In a deliberative situation, the refusal to compromise can be very persuasive, if it’s grounded in good evidence that all the compromises have been made, that the compromise being requested is unreasonable, and that the power situation is imbalanced—you’ve listened, but not been listened to (listening doesn’t mean agreeing with—it means the ability to summarize someone else’s argument in a way they would say is accurate, even if you disagree with it).

If it’s a faux deliberative situation (people are claiming it’s deliberative and it isn’t), then shifting to strategies appropriate for bargaining is what a sensible person does.

Bargaining situations aren’t as simple as I used to think they were. Basically, bargaining situations are all about power. When you’re in a bargaining situation, it doesn’t matter if you’re right—that only matters in deliberation—your threats or promises only matter to the extent that they’re strategically useful, and that’s determined by:

    1. whether it’s plausible that you can enact your threat/promise,
    2. whether your interlocutor cares very much about your threat/promise (they really fear your threats and really desire your promises),
    3. whether you can offer more than they can get without you or cost them less than they can get with you,
    4. whether they can thwart your ability to enact them.

So, if you threaten to take your ball and go home, and it isn’t your ball, and you aren’t big enough to take it away from anyone else, no one is going to care (this is also known as the “I’m going to hold my breath till I turn blue” threat). If it is your ball, and you could take it and leave, and no one there wants you to stay, and they have another ball, you aren’t bringing a lot of power to the bargaining situation. If people really want you to continue to play, but you tell them you’ll leave unless they let you win, then keeping you there will cost them at least as much as letting you go, and they’ll let you go. If you threaten to take your ball and go home and people think they can get another ball, they’ll let you go.

It isn’t always obvious prior to a bargaining situation (and often not even while in it) what threats or promises are strategic winners. Were it obvious, there wouldn’t be bargaining—it would be like playing poker with all the cards dealt at once and face up. The only one that can be obvious ahead of time is the third: if the cost of the bargain you’re offering is the same as not bargaining at all, then there is no incentive for someone to bargain with you.

It took me a long time to see that, largely because I was confusing deliberative and bargaining situations. My entrance into politics was environmentalism, and I thought (and still think) that, as David Brower said, all the compromises have been made. We shouldn’t compromise anymore because what we were asking for was the right thing. And it seemed to me so obviously right that we need to protect the earth for future generations, that we have a sacred obligation to steward the earth’s resources in ways responsible to all the present and future inhabitants, that I thought simply insisting on our rightness was the only possible strategy.

What I was not seeing was that many members of my opposition sincerely believed not just that they could get what they wanted, but that what they were doing was right. They weren’t just motivated by greed or a desire to destroy—they believed their arguments were better than mine. This isn’t some kind of hippy-dippy woah man have you ever looked at your hand all sides are equally right argument. I still sincerely believe that the arguments for drill here, drill now are internally inconsistent and irrational, but I now know they aren’t obviously so, and showing what’s wrong with them involves long discussions about Scriptural exegesis, Millerism, the prosperity gospel, the just world hypothesis, and short- versus long-term economic gain/stability.

What I’m saying is that, in a deliberative situation, my simply insisting on how right I was wasn’t going to work—regardless of whether it was true. In a bargaining situation, it was a waste of time. And refusing to compromise would mean (as I came to see) that, unless my side had some kind of plausible threat—we’ll sue, boycott, protest, cost you an election—we would end up with nothing at all. Compromising felt physically painful to me, and it felt as though it cost me in dignity (I also bought into all sorts of slippery slope narratives, about how you compromise once and then pretty soon you’re hunting endangered species while drinking heavy-metal water).

The way more experienced lefty activists in favor of compromise tried to argue against my insistence on being right as the only possible strategy was to say that I was being selfish. And that, to me, seemed another obviously wrong argument: my position came from a genuine concern for beings other than me, so it couldn’t be selfish. What they were saying, I later came to understand, was that, once I’d realized my strategy wasn’t going to work, my refusal to compromise came from concerns about my dignity, my aversion to the mucky, murky work of compromise, my desire for clean hands.

What I had to think about, though, was what my refusal to compromise was costing, and who was paying that cost. The cost to my dignity had to be weighed against the costs paid by people who lived in neighborhoods with poisoned water, or who had to breathe unsafe air.

Being right wasn’t enough to get the right outcomes. And I had to think strategically about those outcomes.

Once I got to that point, I discovered every experienced lefty activist responded to my insight with a “No fucking shit, Sherlock.” They had figured it out long ago.

Again, this isn’t to say that compromise is always necessary. There are times we all say, “There is some shit I will not eat.” But, when we decide this is where we go and we go no further, we have to think about who will pay the cost.

A conversation about conspiracy theory web design

[context: I posted a link that had embedded a link to a conspiracy theory site]

Original post : Do NOT click on the link toward the top of the page. It will send you to the kind of site that has epilepsy-inducing web design. Someday (and I’m perfectly serious) I want someone to do a study as to why conspiracy sites all have the same kind of awful web design. The correlation is too strong for it to be a coincidence.

[Cody] I remember you mentioning that correlation when I was in your class. I wish I had the kind of time and insight to do it myself.

[Fred] Hmm. Interesting question. I’m sure all of those sites are made from templates (e.g. WordPress or some canned Drupal crap), so if you are used to a certain kind of “design” (using the term very loosely) it’s a simple thing to reproduce it.

More complexly, I think there is a class or segment of American society that is suspicious of and even actively hostile towards beauty and design. So much of our landscape has been blighted and made ugly by what we build. Examples are endless: billboards in Death Valley, an outlet mall at the gateway to the Columbia Gorge, etc., ad nauseam. But so many people seem not to see it, much less care about it. It’s almost a badge of honor. (The visual equivalent of “smells like money to me.”) I imagine it must be related to the grand old tradition of American anti-intellectualism. An appreciation for beauty and design is effete, Continental, liberal, a weakness compared to the muscular disdain for anything that is not competitive capitalism.

That’s where I’d start my inquiry if I were still researching stuff like this.

[me] That’s an interesting point. Also, they’re VERY busy, and their basic strategy of argumentation is accumulatio. So, they don’t argue by one thing leading to another, or being logically connected to another, but by the sheer accumulation of data (that are, usually, disconnected).

It’s funny–isn’t that what a mall is? Maybe it’s some version of consumerism? You just want a lot of shit, and it doesn’t really matter what any individual piece of shit means–it’s that you’ve got a lot of it. So that’s what you do with the site? It’s a lot of shit?

[Fred] I bet a big part of the Right’s hatred of Apple is related to this. How else would you explain so-called free-market Conservatives despising one of the most successful private companies in history?

[me] Huh. That would be REALLY interesting. So it would mean something like simplicity is threatening to reactionary politics?

[Cody] I would jump off of Fred’s statement and say it also represents “plainness”. The evidence speaks for itself, so why do I need to pretty it up? Also, the idea that they have better things to do, like doing REAL American work or researching these coverups than to worry about how pretty their website is.

I’d also tie it into the idea that they’re not spending money on their website’s design. Because they’re “simple folk”. That’s why a lot of them are using free sites like Blogger and WordPress and tumblr.

[me] Wellllll… they aren’t plain sites, though. They’re very busy. But not complicated, and certainly not pretty. So it’s a weird aesthetic.

[Cody] Right, but it’s surprisingly easy to make a busy website. It’s a lot more work to try and make it all flow correctly. And, like Fred said, it’s effete.

I’d also think it’s an anti-intellectual statement. They’re not well-versed in internet design because they aren’t “those people”. To me, the interest here would be using a platform you’re not wholly familiar with to try and deliver your message. I imagine there might be a correlation with early film? Now I’m sort of just throwing darts.

[Fred] But yes, beauty and design are seen as forms of obfuscation, things that impede and obscure common sense and exchange. Beauty and design are indulgences that get in the way of the business of consumption.

[Cody] By plain, I didn’t mean to imply “simple”. My old Angelfire website back in the 90s was just pictures of Austin Powers and dancing hamsters. It was the easiest thing in the world to make, but would blind a person.

Right, it’s like fast food is American because we don’t have time to research our food. We’re too busy working and being American. Only the intellectuals and the artists have time to sit around and think.

[me] Ooooooooohhhhh….interesting. So a certain kind of simplicity is deliberate and thoughtful, as opposed to a kind of expressive get to the point that means you’re flinging data out there.

Yeah, I think you’re both right. It has something to do with being thoughtful and deliberate (bad) rather than authentic and … what? messy?

[Fred] I also think Trish is onto something with the idea of accumulation. I think there’s a common trope that associates expertise with possession. Owning all the tools for X makes you an expert (as an amateur woodworker, I can tell you plain, this is not the case), and something like that is at work in these websites that are like Hoarders but with “facts” instead of cats.

 

Why I thought Trump might win


Why did I think it was likely that Trump would win?

This will be really quick, and, if folks are interested, I’ll try to expand later, but I thought a Trump win was likely (by which I meant 45/55, but in the last week it became 50/50). Here’s why:

As I think y’all know, I periodically crawl around the dark side of the interwebz, and I’ve been spending a lot of time lately wandering through pro-Trump FB pages and various evangelical websites. Folks in those places aren’t low information—they’re high misinformation.

What you see there is an entire world of information in which Clinton:

    1. personally murdered or had murdered a lot of people;
    2. is prone to seizures;
    3. laughed at a rape victim;
    4. told the families of Benghazi victims that they should “move on and get over it;”
    5. supports “abortion” up until five minutes before birth, so she thinks a woman should be able to kill a perfectly healthy fetus until the moment of birth;
    6. subverted the constitution because the DNC tried to get her the nomination;
    7. and a bunch of other stuff.

Just to be clear, I wouldn’t have voted for Clinton had I thought any of that was true. None of it was, of course, but they didn’t know that.

Meanwhile, the two kinds of pages—pro-Trump and more or less secular, on the one hand, and fundagelical, on the other—diagnose the main problem with American politics slightly differently. For the pro-Trump secular pages, it’s what’s described by the Stealth Democracy folks. Commenters on those pages believe that the solutions to our problems are simple. Politicians don’t go for those simple solutions because they want to keep their jobs by making things complicated and because they’re corrupted by “special interests.”

“Special interests” are any interests other than those of the person making the criticism. So, since I’m a normal person, and I am a ferret farmer, the government should do lots of things to promote ferret farming. I’m not a special interest; I’m American. But, my neighbor, the lynx farmer, gets things—THAT is special interests.

So, there is a profound rejection of the pluralism of our world, and a normalizing of experience.

Why, then, don’t politicians do that obviously rational thing and support ferret farming? Because they are professional politicians, who get a lot of money from lobbyists to promote special interests like lynx farming.

Here’s what those folks believe about Trump:

    1. He’s an amazingly successful businessman;
    2. He has incredibly good judgment (thank Celebrity Apprentice for that);
    3. He isn’t beholden to anyone;
    4. He isn’t smart or subtle or well-educated: he doesn’t bullshit;
    5. He never lies; he engages in hyperbole, but he never deliberately manipulates anyone else;
    6. He’s like them. He isn’t a member of the cultural or intellectual elite.

On the evangelical side, it’s more complicated. To be fair, they resisted him much longer than the secular GOP did.

But, still and all, they accepted all the claims about Clinton, and they have made a nasty deal with their consciences about being so oriented toward killing. The fundagelical right thoroughly supported segregation, has never complained about police brutality, never met a GOP-supported war it didn’t like, loves it some death penalty (despite what Christ said), is all in favor of indiscriminate killing because some bad people might die, and supports social services policies that kill people.

There are lots of studies out there showing that doing a single good thing gets unconsciously interpreted as a “get out of guilt free” card for a far larger number of douchey behaviors. For instance, people who buy organic in a grocery store are less likely to be nice to the people collecting money for a good cause just outside the door. (This explains why drivers in Whole Foods parking lots are unmitigated shitheads.)

Fundagelical Christianity in the US has been damaged by an attachment to sloppy Calvinism in the form of prosperity gospel. Unhappily, fundagelical Christianity has come to preach that we should not treat outgroups as we insist on being treated. (Making Christ’s golden rule a non-starter.) We can help them, but only if help is associated with trying to make them part of our ingroup.

Government assistance is bad, not because it’s assistance, but because it’s secular.

All assistance should be connected to conversion. (Hence, people say that slut-shaming “abortion information centers” are more appropriate than giving women birth control.)

Basically, a lot of fundagelicals believe that the government is the problem, not the solution. And they believe they should contribute a lot to their church and not to the government.

Therefore, they’re drawn to cheap stances. Wanting to prohibit abortion costs nothing.

Actually reducing the number of abortions would cost a lot and it would involve giving women autonomy over our bodies. But claiming to be opposed to abortion, while also opposing the policies that would actually reduce abortion, reduces the cognitive dissonance created by the very death-oriented policies of the fundagelical right.

It’s a “get out of guilt free” card.

Finally, fundagelical Christianity has bought into imparted justification—the belief that a saved person is a good person, with good judgment. So, for them, all arguments are identity arguments: is this person saved? And, unhappily, that comes down to: does this person claim to support the positions I think are necessarily associated with my view of being Christian?

So, there’s an analogy to the ferret farmer. The ferret farmer sees her interests as universal, and the basis of Americanism, and the lynx farmer as a corrupt special interest. Similarly, fundagelicals see their (quite specific, and even problematic) notions about religion as “Christian” and will not admit that people who don’t share their agenda on homosexuality or abortion are Christians. They’re special interest lynx farmers.

Anyway, I started to worry when I realized that the National Enquirer effect was in place for Trump supporters.

The National Enquirer is always wrong, in that it spends all its time saying this celebrity couple is breaking up. When, as sometimes happens, the couple does break up, the audience takes that outcome as proof that the paper was on to something, as opposed to admitting it was wrong far more often than it was right. Paradoxically, that fundagelicals have been predicting the end of the world for over a hundred years and have always been wrong has strengthened, not weakened, many people’s beliefs that the end is nigh.

All of the “scandals” about Clinton turned out to be wrong (and far less important than Trump’s) but they got better play. The moment I thought Trump would win was when, for the third week in a short period of time (maybe four weeks?) the National Enquirer had a headline about Clinton being ill or corrupt or whatever. Wandering in Trump pages, I learned that people were operating on a kind of “no smoke without fire” premise.

In other words, Trump’s appeal was to people who are living in a world of excessive (and thoroughly false) information and a denial of difference as a value. They also hate complexity. And there is an odd kind of epistemic narcissism—their beliefs are the basis of all truth. But that’s a different post.

Rhetoric and Demagoguery


What I want to do today is begin by talking a little bit about the place of “demagoguery” in contemporary rhetorical scholarship, then offer my own definition, and then show its application in regard to someone I admire. And, in a way, that’s the whole project in a nutshell: scholarship in rhetoric can’t serve a useful critical purpose if it just comes down to scholars praising people we like and blaming people we don’t—rhetorical scholarship should be deliberative, and neither epideictic nor judicial.

Jurgen Habermas’ “What Is Universal Pragmatics?” has a wonderful footnote with a diagram of kinds of discourse—communicative versus strategic action.


As is common among scholars of public discourse, Habermas’ focus is on communicative action, and the rest of his career has been spent trying to identify the ontological bases, precise criteria, and most promising models for public deliberation as opposed to instrumental reason, a.k.a., strategic action. As I said, that’s fairly common for scholars of public discourse, who tend to focus on what deliberative discourse is, how to teach it, how to foster it, how to balance inclusion and civility. Since I spent a chunk of my career working on that problem, I’m not critical of that kind of scholarship—it’s important.

But we have tended to ignore the other side of the chart: why are people so drawn to instrumental reason even in cases where deliberative approaches would be more helpful? It isn’t particularly controversial in business, management, counseling, interpersonal mediation, and a variety of other fields that businesses, communities, and relationships benefit more from approaches to decision-making that are closer to the deliberative side than to the strategic-action side.

And by “toward deliberative side” I don’t mean anything particularly high-minded or complicated – in fact, I have fairly low standards about what constitutes deliberative discourse. I’m not necessarily talking about a community in which people are nice to each other, or unemotional, or in which no one is offended, or everyone feels safe — I just mean one in which it’s considered necessary to listen to, and therefore fairly represent, the other side.

The notion that you should listen to your opposition seems to me to be a no-brainer–you can’t even be sure that you actually disagree unless you’ve listened enough to know what your interlocutor is arguing. And, if you and that person aren’t just vehemently agreeing, and you want to change the mind of the person with whom you’re disagreeing, it’s going to be very hard to persuade them to change their mind unless they feel you’re engaging the arguments they’re really making. Yet early on in my teaching I discovered that a fair number of my students thought it was actively dangerous to listen to the opposition, let alone restate their argument in a way the opposition would consider fair.

So I became interested in how to try to persuade people to listen to the opposition, and that task necessarily led me to think about two questions: first, what makes listening to the opposition dangerous; second, what makes living in a world of demonizing, dehumanizing, and irrationalizing the opposition attractive, even pleasurable.

Once you’ve posed that second question you may well find yourself, as I have, studying what I’ve ended up thinking of as train wrecks in public deliberation– times that communities took a lot of time and a lot of talk to come to a decision they later regretted, and concerning which they had all the evidence they needed in the moment to come to different conclusions.

Thus, I’m not talking about times when communities had no choice, or inadequate information, or when they made decisions I think they shouldn’t have made. I mean things like the Sicilian expedition of 415 BCE, the Salem witch trials, the US commitment to slavery and then segregation, anti-immigration fear mongering of the 1920s and the related forced sterilization of around 65,000 people in the United States, the Holocaust, Japanese internment, LBJ’s decision to escalate in Vietnam, the Iraq invasion, and other more specific incidents, such as Hitler’s refusal to order a retreat from Stalingrad or Haig’s insistence on the direct approach in various World War I battles.

And these incidents share a few common characteristics in terms of what had become “normal” political discourse: heightened factionalism, so that politics becomes a performance of ingroup loyalty; the reduction of the ethos of the nation to one faction—that is, the demonization of pluralism; and the elevation of thwarting the opposition into a goal just as laudable as enacting policies, because there is no sense that multiple factions might be legitimate or that the community benefits from disagreement. The nation is the party, and failure to support the party is treason. Obviously, in such a world, compromise and bargaining, let alone inclusive and pluralist deliberation, are disloyal, cowardly, and evil.

So it began to look to me as though there was a strong correlation between bad decisions and bad decision-making processes — not that they are necessarily and inevitably related, but that they often are, and so I started trying to identify the specific characteristics of those bad decision-making processes.

Largely because I don’t like neologisms, I started using the term demagoguery for that approach to public discourse. That may have been a mistake. Often people engaged in this kind of work do come up with a new term: Chip Berlet and Matthew Lyons use the term right-wing populism; David Neiwert calls it eliminationist; Kenneth Burke, who describes the same phenomenon in regard to Hitler, doesn’t use any term in particular. That’s something worth discussing—whether I should simply use a different term, rather than try to salvage a deeply troubled one.

Early in this project I, like most other scholars of this kind of rhetoric, focused on individual rhetors, on demagogues; I’m now certain that was a mistake.

Scholarship on demagogues went out of fashion in rhetoric in the seventies, largely because that scholarship consistently appealed to premises that were rationalist, elitist, and anti-democratic. Most definitions of “demagogues” emphasized the emotionalism of their arguments, the populism of their policies, and the selfishness of their motives. The conventional criticism of such scholarship was four-part:

    1. The criticism looked rhetorical, but it was really political—”demagogue” was simply a term for an effective rhetor in service of a political agenda that the scholar didn’t like;
    2. By condemning demagogues for emotionalism, scholars were idealizing a public sphere of technocratic and instrumental argument, and necessarily banishing individuals and groups who were passionate about their cause—since victims of injustice tend to feel pretty strongly about their situation, prohibiting demagogues would have a disproportionate impact on marginalized and oppressed groups;
    3. Demagogues are always “men of the people,” so the scholarship on demagogues is anti-populist—why assume that only the masses are misled?
    4. Scholarship condemning demagogues is demophobic—the (false) assumption is often that the rise of a demagogue is the consequence of “too much democracy,” once again implying that the elite don’t make mistakes.

I agree with all of those criticisms—I do think that much existing scholarship on demagogues is not particularly helpful for doing much other than saying, “I don’t like that rhetor.” But I don’t think that makes the project hopeless—I think the problem comes from focusing on demagogues, rather than demagoguery.

Take, for instance, Kenneth Burke’s brilliant 1939 analysis of Hitler’s Mein Kampf, and the odd logical problem it falls into by trying to explain Nazism through Hitler’s individual psychology. According to Burke, Hitler became obsessed with the Jews because he lost arguments to them in Vienna (to be honest, I think that might have been a factor, but his own anxiety that he might be Jewish was probably more important). But, if that’s what made him anti-Semitic, why did his anti-Semitic rhetoric work? Did every member of his audience go to Vienna and lose an argument to a Jew? Of course not. Whatever Hitler’s personal motivations were—he was anxious about his heredity, he was out-argued by Jews—they don’t explain why he was effective with people who didn’t have those anxieties or experiences.

Burke set out to analyze Hitler’s rhetoric because of Hitler’s political power—the scholarly method was to select the rhetor and then look at the rhetoric. Similarly, scholars of demagogues, ranging from James Fenimore Cooper to Michael Signer, generally begin with political figures they consider demagogues, and then look to see what those figures have in common.

That method guarantees that what they will have in common is that the scholar doesn’t like them. The “demagogues” are always in the scholars’ outgroup, and that may be why there is so much motivism. Scholarship on demagogues generally focuses on the motives of the demagogue—demagogues, unlike statesmen (thank Plutarch for that fallacious distinction), look out for themselves. They want power, but statesmen (and I use the gendered term deliberately) want what’s best for the country or community.

Since people attribute bad motives to members of the outgroup and good motives to members of the ingroup for exactly the same behavior—an ingroup member who makes a lot of money is a hard worker, and an outgroup member who makes a lot of money is greedy–, this criterion of motive means that it will never help us identify ingroup demagogues. After all, the basic premise of this approach to finding demagogues is that they are bad people—if we admire someone, we won’t admit they’re bad, so they can’t be demagogues.

In addition to the problem that it prevents us from seeing when we’re being persuaded by demagoguery, this criterion doesn’t even capture the most notorious demagogues, who almost certainly sincerely believed that they were doing the right thing for their communities and countries. Hitler thought the Holocaust was necessary and justified and right. He meant well.

So focusing on identity and “bad motives” doesn’t help us identify the kind of rhetoric we want to identify.

The emphasis on demagogues presumes that, as Burke said of Hitler, they can lead a great country in their wake—they are masters in control of the masses. But, if you look at the leadup to the train wrecks, that isn’t what you see at all. You don’t see an individual who magically changed what the masses thought—Hitler would never have succeeded without considerable help from the elite, and, famously, Hitler wasn’t saying anything new. People who moved to support Nazism weren’t all moved by Hitler (Adolf Eichmann doesn’t mention Hitler’s rhetoric), and Hitler’s rhetoric wouldn’t have been effective if it had been entirely new. It was commonplace.

Demagogues don’t create a wake—they ride a wave.

Probably more important, if you go about it by looking at the leadup to train wrecks, you sometimes don’t see a demagogue at all, but you do see demagoguery. Proslavery forces didn’t have a single rhetor who led everyone along—the antiabolitionist alarmism, scapegoating, and general demagoguery wasn’t emanating from one rhetor, but was almost ubiquitous. It was in newspapers—even of opposing parties—in speeches in Congress on all sorts of topics (including the question of the Sunday mails), in novels, poetry, and plays; it was used by major and minor figures alike. Prosegregation rhetoric was similarly demagogic, ubiquitous, and headless—there wasn’t a figure from whom it emanated. There wasn’t an individual who led the US in his wake; there were a lot of figures who decided to ride a wave.

If we look at decision-making processes rather than demagogues, I think we’d end up with a definition like this:

Demagoguery is a discourse that promises stability, certainty, and escape from the responsibilities of rhetoric through framing public policy in terms of the degree to which and means by which (not whether) the outgroup should be punished for the current problems of the ingroup. Public debate largely concerns three stases: group identity (who is in the ingroup, what signifies outgroup membership, and how loyal rhetors are to the ingroup); need (usually framed in terms of how evil the outgroup is); and what level of punishment to enact against the outgroup (ranging from restriction of rights to extermination).

There are certain recurrent characteristics. It

    • reduces all policy discussions to questions of identity and motive, so there is never any need to argue policies qua policies;
    • polarizes a complicated political situation into us (good) and them (some of whom are deliberately evil and the rest of whom are dupes);
    • insists that the Truth is easy to perceive and convey, so that complexity, nuance, uncertainty, and deliberation are cowardice, dithering, or deliberate moves to prevent action (naïve realism);
    • is heavily fallacious, relying particularly on straw man, projection, appeal to inconsistent premises, and argument from conviction;
    • is not necessarily emotional or vehement, but there is considerable emphasis on the “need” portion of policy argumentation (which is generally an “ill” caused by the presence or actions of “them”) often with implicit or explicit threats that “we” (the ingroup) are faced with extermination, emasculation, and/or rape;
    • draws on certain “motivational passions” (in Robert Paxton’s terms) shared with fascism, although it can be used in favor of non-fascist political agendas, and even in non-political circumstances.

One of the advantages of this approach—demagoguery rather than demagogues, or rhetoric rather than identity—is that it can enable us to see ingroup demagoguery.

For instance, take a personal hero of mine: Earl Warren.

In the spring of 1942, California Attorney General Earl Warren testified before the Tolan Committee regarding the mass imprisonment of Japanese Americans. A typical passage of his testimony concerns a map he gave the Committee showing Japanese land ownership. He explains what the map shows:

Notwithstanding the fact that the county maps showing the location of Japanese lands have omitted most coastal defenses and war industries, still it is plain from them that in our coastal counties, from Point Reyes south, virtually every feasible landing beach, air field, railroad, highway, powerhouse, power line, gas storage tank, gas pipe line, oil field, water reservoir or pumping plant, water conduit, telephone transmission line, radio station, and other points of strategic importance have several — and usually a considerable number — of Japanese in their immediate vicinity. The same situation prevails in all of the interior counties that have any considerable Japanese population.

I do not mean to suggest that it should be thought that all of these Japanese who are adjacent to strategic points are knowing parties to some vast conspiracy to destroy our State by sudden and mass sabotage. Undoubtedly, the presence of many of these persons in their present locations is mere coincidence, but it would seem equally beyond doubt that the presence of others is not coincidence. It would seem difficult, for example, to explain the situation in Santa Barbara County by coincidence alone. (National defense migration. Hearings before the Select Committee Investigating National Defense Migration, House of Representatives, Seventy-seventh Congress, first[-second] session, pursuant to H. Res. 113, a resolution to inquire further into the interstate migration of citizens, emphasizing the present and potential consequences of the migration caused by the national defense program. pt. 11; 10974)

Notice that this argument, in favor of mass race-based imprisonment without trial, is neither emotional nor populist. And Warren was not motivated by political or personal gain—he sincerely believed that he was doing the right thing. He doesn’t fit common, or even many scholarly, definitions of a demagogue.

And, too, notice all the hedging—”Notwithstanding” and “It would seem difficult.” And notice his adopting the posture of a reasonable person—he isn’t saying all Japanese are knowingly part of the conspiracy. He isn’t unreasonable; he acknowledges some coincidence. So, his assertion that this can’t be coincidence seems more reasonable because of his having established himself as a person not prone to conspiracy theories.

In this passage, as throughout his testimony, there is a rhetoric of realism, facticity, and submission to the data. Warren’s motives were good, in that he sincerely believed California was in danger—he didn’t gain any political power from taking this stance. It isn’t very emotional—as I said, there’s a matter-of-fact tone, with really only one brief exhortation—and it isn’t populist. He doesn’t fit the common definitions of demagogue.

But it is sheer demagoguery.

Warren is redirecting the complicated policy question—what, if anything, should we do about enemy nationals—into an identity question about “the Japanese.” Even the need question (should we fear sabotage?) is reframed as an identity question: the Japanese can’t be trusted. His evidence, such as the maps, assumes what’s at stake—it’s a circular argument.

The question he’s answering is whether the Japanese are trustworthy, and he’s answering it with an enthymeme whose major premise is that “the Japanese” are nefarious: the Japanese are nefarious because they own land near important war resources. This isn’t an argument he makes about Germans, Austrians, Italians, or French—he didn’t even bother to look into their land-owning patterns. And, of course, there are much more obvious and innocent explanations for those land-owning practices—areas with a “considerable” Japanese population would have people engaged in fishing, farming, canning, and other activities that would make owning land near beaches, water, and power quite desirable.

Warren’s argument is unanswerable because it’s unfalsifiable.

But, Warren wasn’t a magician with a word-wand who swept citizens of the western states into a panic. There isn’t really even any good evidence that his testimony was widely reported—it probably had little impact on the juggernaut of mass imprisonment. It probably legitimated the racist panic of other people listening to him, by making them feel that their perceptions were reasonable and fact-based, but I doubt it changed anyone. He was appealing to perceptions about “the Japanese” that had been promoted by thousands of rhetors in the previous forty years—especially the Hearst papers, but also the Japanese Exclusion League, the FBI, the Los Angeles Times, major and minor politicians, and scholars of race. He was repeating what “everyone” knew.

Warren was refuted during the hearings—an expert on Norway pointed out that the notion that sabotage had had any impact on Nazi success was a myth, others noted that there hadn’t been sabotage at Pearl Harbor, and one person said of Warren’s argument (that the lack of sabotage was proof that sabotage was planned), “I don’t think that’s real logic.” But he didn’t stick around to listen.

Warren later regretted his involvement in the mass imprisonment. He said, “Whenever I thought of the innocent little children who were torn from home, school friends, and congenial surroundings, I was conscience-stricken” (The Memoirs 149). But why didn’t he think of that in the first place? Because he never imagined what his plan would actually look like. He had a great imagination when it came to the need—the horrors of Japanese sabotage—but none when it came to the consequences of his own proposal.

Nor did he listen enough. He listened to police officers, sheriffs, and other law enforcement, but he didn’t listen to any of the people who testified against imprisonment. He didn’t listen to the opposition.

Warren was a good man, a progressive who helped clean up California politics, a compassionate man, whose leadership of the US Supreme Court gave us Brown v. Board, but a man drinking deep from demagoguery.

It isn’t clear that his demagoguery had much impact—the juggernaut was already started, and the really important demagoguery was all the anti-Japanese fear-mongering of various California media (especially the Hearst papers), organizations like the very powerful Japanese Exclusion League, even thrillers and their conventional representation of the Japanese. Had Warren been the only one making the kind of argument he did, it wouldn’t have mattered. He didn’t matter—his demagoguery did.

And that raises an important point. Demagoguery isn’t necessarily harmful. I mentioned it’s not always political—there is demagoguery in movie or music criticism that is actually pretty hilarious. In small amounts, it’s fine. I generally say it’s like eating chocolate-covered caramels or sitting on the couch watching a bad movie: if that were all you ate, or all you did, you’d get sick, but a little does no harm.

So our problem now isn’t whether this or that political figure is a demagogue—that is itself accepting the major premise of demagoguery: that we can and should decide all political questions in terms of identity. We shouldn’t, as scholars, teachers, or citizens, be worrying about who is or is not a demagogue: we should be worrying about whether we are encouraging, rewarding, and deciding on the basis of demagoguery.

“Political Eschatology, Imparted Justification, and Sloppy Calvinism: The Religious Basis of Neoliberalism”


This is a complicated argument, so I’ll do something I don’t normally do: I’ll start with my thesis. What I’m saying is that the problems with our polity right now—our difficulties arguing politics—aren’t just because of the hegemonic dominance of neoliberalism (Wendy Brown’s argument) but because of the resonance between neoliberalism and a particular religious culture—one that premises an ontological shift at the moment of belief, a shift that turns a person into a warrior in the inevitable war between Good and Evil.

Neoliberalism has been described as hegemonic discourse and a political rationality. As Wendy Brown points out, the political rationality of neoliberalism pervades educational policy, Supreme Court decisions, and what we think of as conventionally political discourse, and she (and others) have persuasively argued that one of the consequences is to depoliticize political deliberation insofar as it turns all interactions into market interactions. I’m interested in why it has such power as a cultural rationality.

I’ve been intrigued by this phenomenon in the relationship between religion and politics since, oddly enough, the spread of neoliberalism (a profoundly nonreligious ethos) has coincided with the sacralization of politics. Thus, religion has become monetized and politics sacralized at precisely the same time. That’s kind of weird.

The relationship between religion and politics has long been vexed in American public discourse. For instance, in postbellum areas that promoted segregation, religious discourse that supported segregation was considered “normal” and was therefore both common and allowed. It appeared unpolitical. Religious entities that criticized segregation were considered “political” (because “political” and “nonhegemonic” are pretty much synonymous for a lot of people), so major religious organizations were silent about segregation whether they thought it was bad or good. Segregation was explicitly a religious issue, and, because various religious entities agreed to silence their criticism, dominant white religious defenses of segregation were normalized and therefore considered neutral.

That’s a mouthful. To be clearer: in areas with segregation (not just “the south”), white churches either never mentioned segregation or actively promoted it. And it’s hard for people now to understand the extent to which the major southern Protestant denominations actively supported segregation as Christian. It was central—that’s important to understand. And, because it was central, it was normal.

In other words, American fundagelical Christianity was always already (as they say) deeply implicated in segregation. But, in a weird way: segregation was so religiously normalized that to support it was seen as nonpolitical, and to oppose it was political. (This is a not uncommon misperception about what it means to “politicize” something—people use it when they’re talking about something political being brought into the realm of argument. In this model, “normal” behavior, even oppressive policies, isn’t “political” until there is an argument about it, so the people who object to “normal” policies are the ones seen as “politicizing” an issue. It’s a bad model.)

Thus, and this is important, American religious institutions that decided not to be “political” were, in fact, politicized to their core in regard to segregation, whether they supported it or (in theory) opposed it.

Paradoxically, then, segregation was protected by the notion that religious organizations should stay out of politics (since supporting segregation wasn’t “political”).

The shit hit the fan with Brown v. Board for southern Protestantism, since segregation was at the core of “southern culture” and southern religion. When Brown v. Board happened, there were multiple pro-segregation responses.

    • Resort to terror. This wasn’t a surprising response, since it had worked for almost 100 years—just lynch, or threaten to lynch, anyone who criticized white supremacy. North Carolina, for instance, had over 100 reported lynchings, meaning ones that made it into the news. Who knows how many black males (a few Jews might have been in there too) were lynched for being disrespectful or successful that didn’t make it into that tally? Every scholar of southern history notes the reliance on state-sponsored terrorism—that the black population would be kept in control by the government allowing terrorism against them. That isn’t to say that every southerner was actively bad, but every white southerner allowed that terrorism to happen.

Everyone knows about this response, and everyone (now) condemns it. But it wasn’t the most common pro-segregation response.

    • Support segregation but not through terror. The idea was that Brown v. Board was the consequence of Marxist infiltration of SCOTUS (you think I’m kidding, but I’m not). So, if we could get a non-Marxist SCOTUS, we’d be good. Let’s just delay as much as we can till we get that SCOTUS. This was considered a respectable and moderate position, and supported by people like Boutwell (who managed a discourse of “civility”).

Since segregation was not a winning argument (Wallace’s bid showed that), fundagelicals decided they couldn’t win on segregation, so they’d go for something else. They went for abortion. The hope was that “abortion” could be used to motivate people to get religiously conservative justices who would then undermine the decisions regarding segregation.

If you think I’m wrong, go to the google and find a fundagelical prior to Roe v. Wade up in arms about abortion. You might actually find a surprising number of fundagelicals advocating abortion (email me, and I’ll send some refs). Short version: every single scholar of birth control issues says this is true. Fundagelicals were not opposed to abortion till after Roe v. Wade.

There was also creationism, and I think that the two forces happened to converge—a desire to maintain creationism, and a desire to maintain segregation by getting “conservative” SCOTUS. That’s how to understand Reagan’s dog whistles about states rights, and Nixon’s Southern Strategy. (It’s important to note that “preventing abortions” did not become a political issue; instead, “outlawing abortions” was the issue.)

In any case, it’s simply clear that, after Roe, fundagelicals became more active at the local level, particularly on school boards. American political discourse has long had an evangelical flavor—think of the controversies about a Catholic president, and the evangelizing narrative behind Wilsonian foreign policy—but it has seemed to me that there is something different about the kind of religion we’re seeing, in two ways: first, the insistence, on the part of a large number of voters, that all candidates be fundagelical (not just Christian); second, the open embrace of apocalyptic visions among major political figures and policies.

I think both are explained by some late nineteenth and early twentieth century shifts in American religion. Part of it has to do with seeing American foreign policy in triumphalist and missionary terms. There is a triumphalist narrative about American imperialism: they engage in imperialism in order to oppress others, but we are benevolent.

Oddly enough, instead of the triumphalist narrative of Wilsonian imperialism—we come as missionaries of democratic liberalism, who will free the oppressed from the chains of superstition and bad colonialism—there is now a narrative I find even more troubling, namely that America is taking its place in the world-ending battle between good and evil. When policy debates are framed in that context, then pragmatic discussions of long-term consequences become moot, as do questions of fairness or ethics across ingroup/outgroup boundaries.

For instance, if you look at fundagelical discussions of US Middle East policies, you can see an open rejection of such pragmatic discussions in favor of unalloyed support for whatever policy current Israeli leaders pursue. And such support is framed, not as savvy or pragmatic, but as most in line with a belief in Armageddon.

That some people would feel that way doesn’t interest me; that it’s a compelling way for a large number of people to think is interesting.

The evasion of politics, and the reframing of politics as Good v. Evil, doesn’t just trouble Middle East policy. You can see it elsewhere as well—look at how much this election is a question of identity and not policies. Is Hillary a crook? Is Trump a liar? (And notice the first v. last name.) For years I’ve been wondering why we’re so averse to arguing policy. And why all policy arguments end up as identity ones. Why do we think identity is enough?

I want to toss out an explanation: that neoliberalism is a return to the prereformation formulation of what it means to be “good” (aka, “justified”), coupled with the reformation model of individualism and political action. Basically, we are now in a world in which many people assume that people who are saved have been ontologically changed. That ontological change guarantees that their works are justified, and that they are part of the elect who will lead the chosen people to salvation. My argument is that that version, a kind of sloppy Calvinism, displaces political deliberation with expressions of identity.

It’s not uncommon to argue that liberalism has its roots in reformation notions of justification. Instead of imparted justification—Christ’s righteousness is given to believers—reformers like Luther and Calvin argued for imputed justification—we will act as though it has been given. There is not an ontological shift at the moment of justification; the person, even a believer, remains a sinner.

It’s often argued that this formulation of justification was connected to (caused? was caused by?) Enlightenment and/or humanist notions about the fallibility of human perception and belief. You can’t know that you’re saved, nor that anyone else is, but you will act as though good-standing members of your church are. Similarly, participation in civic life doesn’t require an ontological shift, and decision-making power can be given to people as though they have the abilities necessary to make good political decisions.

In such a moment, policy arguments would have to be about policy, and not identity (something you see, interestingly enough, in the Putney Debates, where Cromwell of all people argues that everyone has good motives, even though they disagree, and that the true course of action is hard to perceive). After all, that someone is a believer does NOT guarantee that what she is saying is true.

The Reformation didn’t question eschatology—the study of Christ’s church on earth, and the sense that human history is intensely teleological. If anything, it heightened the notion that we can interpret all human history in eschatological terms. Hence, at the same moment that there is an introduction of skepticism about goodness and identity, there is the sacralizing of political history—the creation of a community of believers, or the refounding of the state of Israel, is part of the history of Christianity itself, headed toward Christ’s Second Coming. Eschatology—the history of “the church” on earth—is universalized and politicized; and political history becomes eschatology. The troubling consequence of this humanizing of eschatology is that politics is taken out of the realm of argument, compromise, and deliberation, and turned into a battle of good and evil.

It can be argued that this formulation of identity—imputed justification—implies a certain amount of skepticism; we don’t know who is saved, and being justified and being sanctified aren’t the same thing. Thus, we might be wrong to think we’re saved, or that someone else is. I think it’s harder to maintain a culture of skepticism within a political eschatology. If we’re inevitably headed toward a battle between good and evil, it’s hard to imagine any culture saying, “Hmmm…. are we good? or evil?” as something about which they would be skeptical and value hearing multiple sides.

In a culture of political eschatology all leaders can be divided into the Good (those who are leading us toward the good side of the inevitable battle) and the Bad (those who are deliberately leading us toward evil and the dupes who don’t realize what they’re doing). So, how do we know that a policy is good in this frame? We can look to see whether the people advocating a policy are good…. or evil. We look to their identity.

In the late nineteenth century, American evangelicalism began to slip back toward imparted justification, conflating the moment of belief with the moment of sanctification—to become a “believer” is to experience an ontological shift from sinner to saint. Imputed justification was no longer a part of American fundagelical religion, and with it any skepticism about whether a person who claimed to be saved would do good or bad things.

Thus, speaking as though one is “saved” (as long as it is coincident with endorsing the political agenda fundagelicals now argue is the necessary consequence of being saved) means an endless stack of “get out of jail” cards.

And there was one more factor, famously described by Weber—the equation of success with salvation. This was a kind of sloppy Calvinism, one that accepts the notion of an absolute ontological divide between saints and sinners, but with the assumption that saints prosper, and that their saintly identity is known to them and others. And, since the saints are, well, saints, they deserve all the good—there is no point in insisting on fairness in such a culture; you don’t treat saints and sinners the same way. You give power to saints and take it away from sinners.

What I’m saying is that the problems with our polity right now—our difficulties arguing politics—aren’t just because of the hegemonic dominance of neoliberalism (Wendy Brown’s argument) but because of the resonance between neoliberalism and a particular religious culture—one that premises an ontological shift at the moment of belief, a shift that turns a person into a warrior in the inevitable war between Good and Evil.

We are all preppers now.

Ingroups, outgroups, groupiness, and bias

 

 


If you spend as much time as I do crawling around the internet arguing with extremists, you quickly learn the “that source is biased” move. You present a piece of evidence, and the person won’t even look at it because, they say, that source is biased.

Let’s start with this: that isn’t what you do with a biased source. You don’t reject it; you look at it skeptically–you check its sources. Right now, a lot of people are refusing to look at claims that Trump hasn’t been as successful as he claims because, they say, that argument is from the Hillary camp. That’s called the genetic fallacy–it doesn’t matter where the claim originated; it matters whether it’s true. Whether it originated with the Hillary camp or not, it’s possible to check whether they are using Trump’s numbers about his own wealth. If they are, it’s a claim to take seriously.

But, for a lot of people, that isn’t how it works. They believe that you can reject anything said by what social psychologists call the “outgroup.” The basic premise is that the “ingroup” is “objective” and the “outgroup” is “biased,” so, to determine if someone is “objective,” you just ask yourself if they’re in the in or outgroup.

Let me explain a little about in- and outgroups. An ingroup isn’t necessarily powerful—it’s the group you’re in. So, if someone asked you to talk about yourself, you would describe yourself in terms of various group memberships—you’re a Pastafarian, Sooner, essentialist feminist, neoliberal, knitter. Social psychologists call that group (the one you’re in) the “ingroup” and various groups you’re not in (it’s important to your sense of identity that you’re not like Them) “outgroups.”

We all have a lot of ingroups, and we have a lot of outgroups, and the importance of any given one can rise or fall depending on the situation. We are made aware of those many (even contradictory) group memberships when they’re under threat, unusual, or interesting. If you are an American, and you find yourself in a space where American is an outgroup, you’ll likely bond with other Americans. Sitting in a group of people in a classroom in Tilden, Texas, if asked to say something about yourself, you wouldn’t say, “I’m an American.” Sitting in a group of people in a classroom of mixed national origin in Belgium, you’d be pretty likely to say, “I’m an American.” If, in Tilden, another American said something critical about Americans, you’d be more likely to listen than if a non-American said it in Belgium. If you’re already feeling a little marginalized for your group membership, you’re more likely to be at least a little defensive.

And here is a funny thing about ingroup membership. There is a kind of circular relationship between your sense of your self and your sense of the ingroup—you think of yourself as good partially because you see yourself as a member of a group you think is good, and you think that group is good partially because you think it’s made up of people like you, and you think you’re good. Your group is good because you’re a good person, and you’re a good person because your group is good.

Because you’re good, and because your group is good, then you and other ingroup members necessarily have good motives. Duh.

Thus, we have a tendency to attribute good motives to members of the ingroup, and bad motives to members of the outgroup, for exactly the same behavior. An ingroup member who works long hours in order to make a lot of money is a hard worker; an outgroup member who does that is greedy. An ingroup member who gives a lot of advantages to family members is loyal; an outgroup member who does that is motivated by prejudice against outsiders. An ingroup member who says something untrue is mistaken; an outgroup member is deliberately lying. Politicians we like are motivated by a desire to benefit their community or country; politicians we don’t like are driven by a lust for power.

An example I use in teaching a lot is how we respond to a driver in a car with a lot of bumper stickers who cuts us off on the road. If the bumper stickers suggest the person is a member of an ingroup important to us—we like the politician they endorse, for instance—we’re likely to find excuses for what they’re doing. We might think to ourselves that they’re running late, or didn’t see us, or perhaps (as I once thought to myself) it’s actually an outgroup member who borrowed a car. If the bumper stickers show it’s someone we think of as an outgroup, we’ll think, “Typical.”

In other words, we rationalize or explain away bad behavior on the part of ingroup members as a temporary aberration, an accident, or something caused by external circumstances. But, bad behavior on the part of someone in an outgroup is proof that they are all like that—it’s an example of how they are essentially bad people.

If a member of the ingroup behaves well, then we say it’s the consequence of internal qualities—their essence. If the driver with all those bumper stickers we like does something really nice, we’ll think, “Typical.” It’s proof that ingroup members are essentially good people. If a member of the outgroup behaves well, then we say it was done for bad reasons, or done by accident.

So, if an ingroup political figure kicks a puppy, she was mistaken, or meant well, or the puppy deserved it, or we might even try to find ways to say it wasn’t really kicking. If an outgroup political figure kicks a puppy, it’s proof that he is evil and hateful and that’s what they’re all like.

If an outgroup political figure saves a drowning puppy, she just did it to get votes. If an ingroup political figure does it, that incident is proof that people like us are just plain better.

We are more likely to empathize (or, in rhetorical terms, identify) with people who persuade us they’re members of an ingroup important to us. We’re more likely to be persuaded by them—that’s why salespeople immediately try to find some point of shared identification. They’ll also often try to bond by claiming to share an outgroup—rhetoricians call this “identification through division.” That is, the salesperson tries to get you to identify with her by sharing your dislike of “them.”

In Texas, where I live, there is a notorious rivalry between Texas A&M (the “Aggies”) and University of Texas (the “Longhorns”). My husband went to A&M, and wears an “Aggie” ring. When we go shopping for big-ticket items, the salespeople will often notice his ring and start talking trash about Longhorns, and how awful the University of Texas is. They’re trying to bond with us by showing that they share the Longhorns as an outgroup. Since I’m a professor at UT, it doesn’t generally go over very well.

If I get you to identify with me, to see yourself as like me in some important way, I’ve persuaded you that you and I are in the same ingroup. If I’m really successful, I get you to identify with me so much that you will perceive an attack on me as an attack on you. I now have your ego attached to my success. That’s an important part of demagoguery (but not every time someone does that is demagoguery—it’s a part, but not the whole).

One of the main goals of demagoguery is to persuade people not to listen to the outgroup (basically because the claims of the ingroup would fall apart if people looked at them critically). And it does so by saying, “They are biased; we are objective.”

But, again, you don’t reject a biased source; you look at it more carefully. Whether claims about his wife’s immigration status, his wealth, the lawsuits against him, his hiring of illegal immigrants, his screwing over little people, his poor financial record, his lying originated with the Clinton camp doesn’t matter–what matters is whether they’re true. And that can be determined by drilling deep into the sources of those “biased” sources. That is how you assess evidence.

How the teaching of rhetoric has made Trump possible


People who support Trump do so because they believe that

    • politics is inherently corrupt, and politicians favor special interests because they depend on those “special interests” for campaign donations—Trump doesn’t owe anyone anything, and he is his own man;
    • Trump is authentic; normal politicians say what they’re supposed to say, and normal politics has gotten us into a state where normal people (aka, het white males) aren’t getting the things to which they’re entitled; therefore, we need an abnormal politician who will say that “normal” people are getting screwed;
    • all the criticisms of Trump come from biased sources;
    • Trump’s motives are good because he expresses kind thoughts about non-whites and is concerned about them; he has good motives and is, therefore, not racist;
    • Trump’s arguments are rational because he can give evidence to support them—he makes a claim, and he gives an example or single piece of evidence that would look good to someone not especially informed on the issue;
    • Trump’s arguments are rational because his claims are endorsed by experts;
    • Trump’s arguments are rational because he gives specific data, they support what people believe, and he doesn’t have an irrational affect;
    • Trump’s arguments are “objective” because he is speaking the Truth;
    • Trump’s arguments are good because someone uninformed about the topic on which he’s speaking can assess a good argument;
    • Trump has really good judgment, since he is a billionaire;
    • Trump, despite his problematic history regarding fidelity, child molesting, fraud, and lying, is a good person because he is one of “us.”

These aren’t just claims about Trump; these are grounded in premises in what it means to make a good argument. And where did people learn what it means to make a good argument?

In their rhetoric classes. And, although I loathe putting my thesis first (another way rhet/comp is gerfucked), I will say that these seem to be good arguments because we, as a field, have said they are. We fucked up. We taught them that a person with literally no expertise in the subject can tell you whether you’ve made a good argument.

This has made me ragey for my entire career, and it’s the basis of every single fucking program. We take students, usually literature students, and we tell them they are appropriate judges of whether someone has made a good argument on topics about which they know nothing. We tell them they can assess the credibility of a source on the basis of several rules that are pretty wonky (is it a peer-reviewed journal, is it a recent source, does the author have an advanced degree). We tell them either not to worry about the logic of the argument, or we encourage them to apply the rational/irrational split, a notion that muddles the argument someone is making with the posture they appear to be taking while making it. We tell them to teach their own students that “bias” is easy to assess and comes from motives, and, finally, we encourage them to infer bias/motive from identity. We tell them an argument can be judged on formal qualities. Teachers who have, literally, never taken a single course in linguistics, logic, argumentation, or rhetoric can tell students that their language, logic, or argumentation is bad.

Well, that’s what Trump tells his followers—you don’t know anything about this, but you can decide, without any knowledge, what’s true and what isn’t. You can tell them I am speaking the truth without looking at my sources. You can judge my argument by judging me.

And on the basis of what?

Their own sense.

Trump appeals to his voters’ “sense” about what is right and wrong. We have teachers—we have textbook authors—who are relying on their own “sense” about right and wrong in regard to topics on which there is actual research. So, who are we to say, “Well, I have no actual expertise on these issues, but you should rely on experts?” We can’t. So, we have spent generations telling students that “good” arguments are… ….well, really, what are they? Arguments that please the teacher?

We have spent many, many years telling people the wrong things about argument and argumentation, and all those wrong things are in Trump. (At this point, assuming people got this far, I’ve probably lost a bunch of folks. And that’s the consequence of the thesis-first method of arguing. We expect someone to put their argument at the beginning because our faith in persuasion is so small—and because we want to know whether we should put our guard up. A lot of people in rhetoric cite studies that supposedly show that people aren’t persuaded, but that isn’t what those studies actually show. That’s a different rant.)

There was a time when argumentation textbooks would have a section on fallacies and logic, but that is long past. And why? How many teachers of argumentation (or authors of argumentation textbooks) could pass a simple test on fallacies? What, for instance, is argumentum ad misericordiam (aka, appeal to pity)? Is it an appeal to emotions? Is an appeal to emotions an irrational appeal?

Short version of my argument: no person who says an appeal to emotions is irrational should be teaching argumentation. That is an actively harmful way to approach discourse.

Argumentum ad misericordiam is one of the fallacies of relevance—it is an irrelevant appeal to emotion; it is a kind of red herring. And you can’t judge whether a single argument is engaging in that fallacy without knowing the context of the argument—without knowing the larger debate in which that argument is happening. So, I’m not making the old argument that rhetoric teachers shouldn’t teach political topics because we aren’t political scientists; I’m saying that we shouldn’t assess arguments without knowing the context of that argument—the sources it’s using, the oppositions it’s establishing.

We stopped having lists of fallacies in argumentation textbooks because of a confusion between formal and informal logic (two related, but distinct, fields). Formal logic is, as its name implies, associated with the forms that a “logical” argument can take, so it assumes that you can talk about arguments the way you talk about a math problem, with symbols. Formal logic has little (or nothing) to do with how people need to argue about political, ethical, or aesthetic topics, since those aren’t usefully captured in forms. Informal logic (or argumentation) concerns the ways that we argue, and it emphasizes that an argument needs to be assessed in relation to the context and conversation (something is a false dilemma not because it only presents two options, but because it reduces a variety of options to two—if there are only two options on the table, then it isn’t a fallacy).

Authors of the most popular comp textbooks appear to have known only the former, not the latter. I know many of those authors, and they’re good people, but they spent so much time writing textbooks that they stopped reading scholarship. So, most textbooks in composition and rhetoric are gleefully disconnected from scholarship in relevant fields. Again, if you think I’m being ugly, just look for footnotes or endnotes citing recent research. Not there.

If I had time, I would talk about what it would mean to incorporate actual scholarship about reasoning and persuasion into our comp textbooks. Short version: Aristotle was right—it’s about enthymemes and paying attention to major premises. Arie Kruglanski has argued that people tend to reason syllogistically—this is a dog; dogs hate cats; therefore, this dog must hate cats. And so, if we wanted to think usefully about logic, we would look at major premises—how reasonable is the assumption that dogs hate cats? Does the argument assume that premise consistently, or does it sometimes assume that dogs love cats?

As a culture, we oppose emotion and “rationality,” and that means that, to determine if an argument is “rational,” we try to infer whether the rhetor is “rational.” And we generally do that by trying to infer whether the rhetor is letting his/her emotions “distort” their thinking. Or, relatedly, we rely on a definition of “logic” that is common in textbooks—a “logical” argument is one that appeals to facts, statistics, and data. [Notice that an argument might be logical in that sense—it makes those appeals—but completely illogical in the sense of its reasoning (what Aristotle actually meant by “logos”).] But, if we think of a rational argument as an argument made by a rational person, then we can look at a rhetor and judge whether s/he is the sort of person who speaks the truth, and who has data to support their claims. That’s a terrible definition of logic.

(As an aside, I’ll mention a better way to think about rationality—first, does the argument fairly represent its sources, including oppositions; second, does the argument appeal to consistent major premises; third, are standards of “logic” applied across interlocutors.)

But, let’s set that aside for a bit. Let’s talk about Trump. There are some issues regarding Clinton and the Clinton Foundation, but they pale in comparison to the issues regarding Trump and his “charitable” foundations. So, why do Trump supporters condemn Clinton for “corruption” while happily ignoring that their candidate has done worse?

They do so for three reasons, all of which fyc textbooks have taught them are good ways to argue.

First, they say any source that says the Trump Foundation did a bad thing is “biased.” (Okay, they usually say it’s “bias,” but you know what I mean.) They infer that bias by pointing out that the source is criticizing Trump (in other words, it’s a circular argument—you can reject all criticism of Trump on the grounds that it’s biased, and you can show it’s biased by pointing out it’s critical of Trump).

Second, and closely related, they say that any site with disconfirming evidence is written by someone with a bad motive. This too is inferred from the fact that someone is making a critical argument.

Third, perhaps (though they usually stop at the first two), they show that there is a reason they’re right—data or statistics.

What all of this is assuming is that a good argument is something floating in space, unconnected to any other arguments—it has a certain form.

And Trump’s arguments have those forms—he is sincere, he really believes what he’s saying (even if it contradicts what he said recently), he can give an example to support what he’s saying, he has all the best experts, he is saying things his audience wants to believe. Trump’s arguments are appallingly apt examples of bad faith argumentation. He is a casebook in demagoguery. There is no rhetoric worse than his. And common methods of teaching argument would give him an A. This is our child. We taught generations of students that having a few (more or less random) experts supporting us, starting with your thesis, giving some examples, and leading with main claims, all of that makes a good argument. We taught them that a person with literally no expertise in the subject can tell you whether you’ve made a good argument. Because that’s how we graded them.