Class size and college writing (another version of the same argument)

[Also co-authored by Reinhold Hill, and also from the early 2000s]

Introduction

Any Writing Program Administrator occasionally has the frustrating experience of failing to get administrators, colleagues, parents, and even students to understand the bases of our decisions–why classes must remain small, why instructors need training in rhetoric and composition, and so on. This kind of experience is frustrating because we often find ourselves talking to someone with different assumptions about teaching, writing, and research. For such an audience, position statements are often not helpful, as our interlocutors do not even know the organizations whose position statements we’re likely to cite.

This is not to say that such discussions are necessarily an impasse, nor that the different assumptions interlocutors have are incommensurable. It is simply that, often being from different disciplines, we all bring different assumptions that seem transparently obvious to each of us. We may know from experience that one gets better writing from students if they are required to revise, but an administrator with a different disciplinary background may be sincerely concerned that we are not assessing our classes and students on the extent to which students have retained the information we have given them in lectures and readings. For many people, that is learning. As far as they are concerned, if we are not lecturing and assigning reading, then we are not teaching; if we are not testing our students, then we are not assessing students objectively.

In addition, the articles and books that we are likely to cite to explain our practices look very strange to some people–it’s just argument, a colleague once complained. We can rely on argument because we teach argument, and we are comfortable assessing arguments. We can rely on anecdote and personal experience more than people in many fields because we share an experience–the teaching of writing. Thus, if an author narrates a specific incident, we are likely to find it a reasonable form of proof, if the incident is typical of our own experience. In some other fields, however, quantified, empirical evidence is the only credible sort of proof, or an assertion must be supported by a large number of studies (regardless of how problematic any individual study might be). This is particularly an issue with class size, as changes in enrollment that seem minor from an administrator’s perspective are strongly resisted by Writing Program Administrators. Our intention in this article is to try to help Writing Program Administrators argue for responsible and ethical class sizes in writing courses.

There are few topics about which Writing Program Administrators and upper administrators are likely to disagree quite so unproductively as class size. While Writing Program Administrators typically argue for keeping first year writing courses as small as possible, upper administrators are often focused on the considerable savings that could be effected by even a small change in enrollment. WPAs can cite position statements and recommendations from NCTE and ADE, but upper administrators cite such passages as the following from Pascarella and Terenzini, who summarize the “substantial amount of research over the last sixty years” on class size in college teaching:

The consensus of these reviews–and of our own synthesis of the existing evidence–is that class size is not a particularly important factor when the goal of instruction is the acquisition of subject matter knowledge and academic skills. (87)

With the backing of such an authority, upper administrators are likely to be mystified at WPAs’ resistance to first year writing classes of twenty-five to thirty.

This is not to say that WPAs have no research on the side of smaller classes. Despite what Pascarella and Terenzini say, there is considerable research which identifies benefits in smaller classes. The meta-analysis of Glass and Smith (not mentioned by Pascarella and Terenzini) concludes that reduced class size is beneficial at all grade levels; Slavin found a small positive short-term benefit; and several studies found benefit if (and only if) teachers engaged in teaching strategies that took advantage of the smaller size (Chatman, Tomlinson). On the other hand, there is at least one study too recent to be cited by Pascarella and Terenzini that finds no demonstrable benefit to reducing class size (Williams et al.). Thus, it may seem to be a case of warring research.

On the contrary, we will argue that the apparently disparate results of research can be explained by a comment Pascarella and Terenzini themselves make. After the passage quoted above, they say, “It is probably the case, however, that smaller classes are somewhat more effective than larger ones when the goals of instruction are motivational, attitudinal, or higher-level cognitive processes” (87).

There are two points which we wish to make about Pascarella and Terenzini’s negative conclusion regarding class size. First, it is striking how dated the research is–although Pascarella and Terenzini’s book came out in 1991, the most recent study they cite is from 1985. Of the eighteen studies they mention, three are from the twenties, one from 1945, two from the fifties, two from the sixties, seven from the seventies, and three from the eighties. This is particularly important for the teaching of writing, because writing pedagogy underwent a major reversal in the sixties, turning away from the lecture-based presentation of models that students were expected to imitate and back toward the classical emphasis on the process of inventing and arranging an effective argument.

This issue of teaching model is crucial. The impact of class size on student writing depends heavily on the goals and methods of the writing courses in question. If the courses are lecture courses, in which only the teacher is expected to read the students’ writing, then the only limit on class size comes from the amount of time one expects the teacher to spend grading. While that is not a model we endorse (and we will discuss the reasons below), it can still be the basis of a useful discussion.

At a “Research I” institution, faculty members are usually assessed on the assumption that they spend forty percent of their time teaching two courses; that works out to one day of a five-day work week (eight hours) per course. At schools with more teaching responsibilities, the math works out in similar ways (with a fairly ugly exception for universities with Research I publishing expectations and a three or four course teaching load). Graduate students are usually assumed to have teaching responsibilities that account for half of their half-time appointment, or ten hours per course. With three hours per week in the classroom, and three hours of office hours, graduate student instructors are left with four hours per week of grading and course preparation.
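To make that arithmetic explicit, here is a minimal sketch of the weekly budget just described; the figures are simply the standard assumptions stated above, not data from any particular program.

# Weekly arithmetic for a half-time graduate instructor (assumed figures from the discussion above).
appointment = 20                      # hours per week in a half-time appointment
hours_per_course = appointment * 0.5  # teaching assumed to be half the appointment: 10 hours
classroom = 3                         # weekly contact hours
office_hours = 3                      # weekly office hours
remaining = hours_per_course - classroom - office_hours
print(remaining)                      # 4 hours per week left for grading and course preparation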

One reason that administrators and WPAs often disagree about the amount of work involved in teaching writing courses is that administrators’ experience is with what Hillocks calls the “presentational” mode of teaching. The first year is hellish, but then the instructor has prepared the presentations, and future years involve tinkering with prepared lectures. Hence, course preparation is presumed to be minimal. But, of course, most WPAs are not imagining instructors’ spending class time lecturing because the presentational mode has been demonstrated, conclusively, to be the least effective method of teaching writing.

Still and all, if one assumes that a course is supposed to take 150 hours of an instructor’s time over the course of a semester (not including pre-semester course preparation), and 45 hours of that time is spent in class, and another 45 hours is spent in office hours, there are 60 hours left for individual conferences, grading, and course preparation. If there are twenty students per class, then meeting twice with each student for a half-hour conference uses up 20 hours. Even assuming an efficient teacher who is dusting off lecture notes for course preparation, one should expect an hour per week of course preparation (15 hours for the semester), leaving 25 hours for grading. Advocates of minimal marking (a problematic issue to be discussed below) describe a process that takes only twenty minutes per paper. Obviously, then, the amount of time an instructor spends on grading depends upon the number of papers, but a course with only three papers would use up almost all of the time left. Since most programs require more than three papers (and most instructors spend more than twenty minutes per paper), more than twenty students per course puts instructors into unethical working conditions.[1]
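A rough sketch of that semester-long budget, using only the figures above (a 150-hour course, twenty students, two half-hour conferences each, an hour of preparation per week, and the twenty-minute “minimal marking” estimate):

# Semester budget for one writing course (assumed figures from the discussion above).
total = 150                                   # expected instructor hours per course
in_class = 45                                 # 3 hours/week over a 15-week semester
office_hours = 45                             # 3 hours/week of office hours
students = 20
conferences = students * 2 * 0.5              # two half-hour conferences per student = 20 hours
prep = 15                                     # one hour of preparation per week
grading_budget = total - in_class - office_hours - conferences - prep
print(grading_budget)                         # 25 hours left for grading
papers = 3
grading_needed = students * papers * 20 / 60  # twenty minutes per paper
print(grading_needed)                         # 20 hours: nearly the whole remaining budget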

But, as we said, the deeper issue concerns just what happens in a writing class. The issue is whether one sees writing instruction as the inculcation of subject matter knowledge or as the development of higher-level cognitive processes. To the extent that it is the latter, classes should be small; to the extent that it is the former, class size is limited only by instructor workload. Or, in other words, what do we teach when we teach college writing?

Interestingly enough, this is one of those questions that is not a question for people outside of the field. It seems obvious enough to people unfamiliar with research in Linguistics, Rhetoric, and English Education, who tend to give what appears to be a straightforward answer: we teach the rules of good writing. Behind that apparent consensus is an interesting disagreement. For some people, the “rules of good writing” describe formal characteristics that all educated readers acknowledge as marks of high quality (e.g., the thesis in the first paragraph, an interest-catching first sentence). For others, those rules describe procedures that all good writers follow when writing (e.g., keep notes on three by five cards, write a formal outline with at least two sub-points). As qualitative and quantitative research has shown, however, both of those perceptions regarding the rules of good writing–regardless of how widespread they are–are false.

In the first place, there is less consensus about what constitutes “good” writing than many people think. Writers have fallen in and out of fashion, so that there is not any author who has not had his or her detractors–critical reception of Henry David Thoreau’s Walden was so hostile that it was nearly turned into pulp; Addison and Steele, always included in composition textbooks until the 1960s, are now considered nearly unreadable; even Shakespeare has been severely criticized for his mixed metaphors, complicated language, and drops into purple prose. As research in reader response criticism demonstrated as long ago as the early twentieth century (see I.A. Richards), students (and readers) do not immediately recognize the merits of canonical literature; there is considerable disagreement as to just what the best writing is, so that the “canon” of accepted great writing has constantly been in flux (see Ohmann, Fish, Graff).

To a large degree, this disagreement is disciplinary; that is, different disciplines have different requirements for writing. This divergence is most obvious in regard to format–such as citation methods and order of elements. It is equally present and more important in regard to style: in the experimental and social sciences, for instance, “good” writing uses passive voice, nominalization, long clusters of noun phrases, and various other qualities which are considered “bad” writing in journalism, literature, and various humanistic disciplines. Even the notion of what constitutes error varies–social science writing is rife with what most usage handbooks identify as mixed metaphors, predication errors, reference errors, non-parallel structure, split infinitives, dangling modifiers, and agreement errors. Lab reports, resumes, and much business writing permit, if not require, sentence fragments. Meanwhile, people from some disciplines recoil at the use of first person in ethnographic writing, literary criticism, some journalism, and other humanistic genres.

Disciplines also disagree as to what constitutes good evidence (for more on this issue, see Miller, Bazerman). Some disciplines accept personal observation (e.g., cultural anthropology), while some do not (e.g., economics). There are similarly profound disagreements regarding the validity of textual analysis, quantitative experimentation, qualitative research, interviews, argument from authority, and so on. There is a tendency for people to be so convinced of the epistemological superiority of their form of research that, when confronted with the fact of differing opinions on what constitutes good writing, they dismiss the standards of some other discipline (thus, for instance, Richard Lanham’s popular textbook Revising Prose condemns all use of the passive voice, and Joseph Williams’ even more popular Style: Ten Lessons in Clarity and Grace prohibits clusters of nouns). Our goal is not to take a side in the issue of which discipline promotes the best writing, but to insist upon the important point that there is disagreement. Thus, a writing course cannot teach “the” rules of good writing that will be accepted in all disciplines because no such rules exist (unless the rules are extremely abstract, as discussed below).

In the second place, as several studies have shown, the “rules” of good writing which we give students do not describe published writing. For instance, students are generally told to end their introductions with their “thesis statements,” to begin each paragraph with a topic sentence (which is assumed to be the main claim of the paragraph), and to focus on the use of correct grammar. Published writing, however, does not have those qualities. Thesis statements are usually in conclusions (Trail), and introductions most often end with a clear statement of the problem (Swales), or what classical rhetoricians called the “hypothesis” (meaning a statement that points toward the thesis). Textbook advice regarding topic sentences is simply false (Braddock), and readers are far more oblivious to errors in published writing than much writing instruction would suggest (Williams). In fact, error may not have quite the role that many teachers think–while college instructors say that correctness is an important quality of good writing (Hairston), studies in which they rank actual papers show a privileging of what compositionists call “higher order concerns”–appropriateness to assignment, quality of reasoning, and organization–over format and correctness (see Huot’s review of research on this issue).

Finally, several meta-analyses of research conclude that teaching writing as rules has a harmful effect on student writing (see especially Knoblauch and Brannon, Hillocks, Rose). The common-sense assumption is that students prone to writing blocks lack the knowledge of rules for writing that effective writers have; on the contrary, students prone to writing blocks may know too many rules. In contrast to more fluid writers, who tend to focus on what is called “the rhetorical situation” (explained below), student writers prone to writing blocks focus on rules they have been told (Flower and Hayes, Rose). Students taught these rules of writing try to produce an error-free first draft which they minimally revise (Emig, Sommers). Effective and accomplished writers, in contrast, have rich and recursive writing processes that depend heavily upon revision (Emig, Flower and Hayes, Berkenkotter, Faigley and Witte).

For many people unaware of research in linguistics and English education, the assumption is that the “rules” of good writing are the rules regarding usage (usually described as “grammar rules,” which is itself an instance of an error in usage). It is assumed that there is agreement regarding these rules, and they are to be found in any usage handbook. Further, it is assumed that one can improve students’ “grammar” (another interesting usage error–what people mean is “reduce usage error” or “improve correctness”) by getting them to memorize those universally agreed-upon usage rules.  These assumptions are wrong in almost every way.

Research in linguistics demonstrates that language has considerable variation over time and region. To put it simply, at any given moment, there are numerous dialects within a language, each of which is “correct” within its community of discourse (e.g., “impact” for “influence,” “thinking outside the box”). Some dialects are more privileged than others, and the uninformed often assume that facility with the more privileged dialect signifies greater intelligence; this is patently false (Chomsky, Labov, Smitherman, Baron). All dialects have a grammar, so students (and colleagues) who use a different dialect are not ignorant of “grammar”; they know the grammar of a dialect other than the one considered appropriate in academic discourse, the dialect which linguists sometimes call “standard edited English.” It is easy to overstate agreement regarding “standard edited English,” as that dialect has varied substantially over time; the “shall” versus “will” distinction used to be considered extraordinarily important, “correct” comma usage differs in British and American English and even more from the nineteenth century to now, and usage handbooks disagree on numerous issues (such as agreement). The notion of a correct dialect upon which there is universal agreement is simply a fantasy.

In our experience, people respond to this research by objecting to the pedagogy they assume it necessarily implies. People assume that to note the reality–considerable regional and historical disagreement regarding linguistic correctness–necessarily implies a complete abandonment of attention to error. That is not the necessary conclusion, nor is it our point. Our point here is simply that one central assumption in this view of writing instruction is wrong–there is not universal agreement as to rules regarding “correct” language use.

In addition, this research does not necessarily imply a “whatever goes” pedagogy. While some have drawn that conclusion, others have used this research to argue for teaching grammar and usage as a community of discourse issue (e.g., Labov and Smitherman); that is, rather than denigrate some dialects, teachers should present “standard edited English” as a useful dialect which students should use under some circumstances and with some audiences (see, for instance, “Students’ Right to Their Own Language”). Others have argued that grammar and usage should be taught as a rhetorical issue, as a question of clarity and rhetorical effect (Williams, Kolln, Dawkins).

And this leads us to the second point–the assumption that one can reduce errors in student writing by making students learn the rules of standard edited English. On the contrary, in the nearly one hundred years that this issue has been studied, there has not been a single study which showed improvement in student writing resulting from formal instruction in the rules of grammar, while there are several studies which showed a marked deterioration (see Knoblauch and Brannon, Hartwell, Hillocks for more on the history of this research). That deterioration may be the consequence of increased anxiety leading students to mistrust their implicit knowledge (Hartwell), or of the fact that time spent on grammar instruction is time taken away from more productive forms of writing instruction (Knoblauch and Brannon).

In our experience, this point too is misunderstood. We are not saying that instruction in grammar and usage is pointless, but that certain approaches to it demonstrably are. And those are precisely the pedagogies into which one is forced in large classes–lecturing, drilling, assigning worksheets, and testing students on usage rules.

Indeed, research suggests that there is probably not a pedagogy which can be applied to all students in the same way. Issues of linguistic correctness result from different causes, depending upon the students. Hence, the solution varies. For students whose native dialect is fairly close to standard edited English, for instance, errors in usage sometimes result from lack of clarity about their own argument; students make more usage errors, for instance, when they are writing about something they do not fully understand. For such students, clarifying the concepts will enable the students to correct the errors.

For other students, usage errors are a time management issue–they did not leave themselves time to proofread. What Haswell has somewhat misleadingly called “minimal marking” is generally the best strategy under those circumstances (it is misleading in that it depends upon students’ resubmitting their corrected papers, so it can be fairly time-consuming for the instructor, albeit far less time-consuming and more effective than copy-editing). What he advocates, however, is not a kind of marking that takes minimal time on the part of the instructor.

For students whose dialect is markedly different from standard edited English, there is the possibility of what linguists call “dialect interference”–instances of using their (academically inappropriate) dialect, engaging in hypercorrectness (“between you and I”), or simply being unsure how to apply the rules. There are also students whose experience with written English is minimal, and who may have a tendency toward what are called “errors of transcription” (e.g., errors regarding the placement of commas and periods). For these students, “minimal marking” is ineffective, but neither do they benefit from lectures and quizzes on grammar rules. Instead, they seem to benefit most from individual instruction. Several studies show strong short-term improvement from sentence embedding (Hillocks), but many instructors moved away from it due to its inherently time-consuming nature.

In short, as Mina Shaughnessy pointed out long ago, improving students’ usage is not something one can do in the same way with all students. One must know exactly what specific problems exist with each student, why that student is having that problem, and what method will best work with that problem and that student.  In other words, effective instruction in grammar and usage necessitates classes small enough that the teacher can know students well enough to know the cause of the problem. If the students have major problems, as from dialect interference, then the classes have to be small enough for the teacher to be able to engage in the extremely time-consuming methods necessary for such students.

One might wonder, if writing teachers are not teaching rules of writing, what are we teaching? And the answer seems to be that we are teaching rhetoric. That is, while one cannot present students with rules that apply to all circumstances–never use I, always begin with a personal anecdote, your thesis should have three reasons–there are principles which do seem effective in most circumstances. Those principles are encapsulated in the concept of the “rhetorical situation”–that the quality of a piece of discourse is determined by the extent to which its strategies are appropriate for effecting the author’s (or authors’) particular intention on the specific audience. Thus, were one to examine prize-winning articles in philosophy, economics, literary criticism, engineering, behavioral psychology, and theoretical physics, one would see wide variation in format, style, organization, and nature of evidence, but one would also see that each piece was appropriate for its audience.

One advantage of this approach to the teaching of writing is that it is more effective. Lecturing and drilling are, as several studies have shown, ineffective methods of writing instruction (Hillocks). That presentational method remains tremendously popular, however, especially among teachers whose own instruction followed that method, who are cynical regarding student achievement, and who are generally convinced that the teaching of writing is the transmission of information (Hillocks). This point is important, as it showed up in our own experiment with reducing class size–students in classes with teachers who relied heavily on lecture did not show any benefit from a smaller class. The fact is that lectures are ineffective in writing classes; reducing the class size does not suddenly make lecturing an effective teaching strategy.

When we had the opportunity to look closely at class size at our previous institution, we made some surprising discoveries.  One of the major motivations for undertaking the experiment was a sense of frustration, among faculty and graduate students, with graduate student instructors’ progress toward their degrees. Prior to the change in program emphasis, a large number of our instructors used class time to present advice on writing papers as well as to present writing products which students used as models (what Hillocks calls the “presentational” mode, and which he identifies as the least effective method of writing instruction). Perhaps because this method of instruction did not work particularly well for so many students, instructors also relied heavily on individual conferences with students–conferences which took so much time that they necessitated long blocks of time outside normal office hours. The dominance of this mixing of presentational and individualized modes of instruction had fairly predictable consequences.

The accretion of assignments and expectations for the course meant that it was actually impossible to teach the course in the ten hours per week a graduate student was supposed to spend on it. While such a situation is far from uncommon–many programs pay writing teachers a salary that presumes that the course takes much less time than it actually does–it is unethical. It also means that instructors, especially ones with multiple commitments (e.g., graduate students who are also taking courses, part-time instructors with obligations at several campuses, tenure-track teachers facing publication pressures), are encouraged to adopt pedagogies which feel more efficient but which research strongly indicates are less effective (i.e., the presentational mode of teaching, discussed previously).

Graduate student instructors responded to this situation in various ways. According to a survey, as well as faculty observation, many let their own coursework suffer in favor of their teaching. Others simplified assignments, so that the papers were short and simple enough that they could be graded in ten to fifteen minutes a piece. Several instructors essentially abandoned assessing student work, and graded students purely on attendance. Many instructors reported spending long hours on teaching, something that, not surprisingly, resulted in frustration–the first year composition course was openly discussed as the least desirable teaching assignment. In this context, it should be clear why we were looking for a method that would reduce the amount of time that instructors spent on their first year composition courses, without simply shifting them to quick, but ineffective, methods such as lecturing, drilling, and superficial grading.

When we reduced class size to fifteen for many of the instructors, we found that those instructors did not generally spend less time on the course (instructors in control groups reported spending an average of ten to fourteen hours per week on their courses, while instructors in the sections with reduced class size reported averages of between twelve and fifteen). We also found that many instructors took advantage of the reduced class size to create new assignments, to take more time to comment on papers, to meet more often with students, or to add another project. Such a consequence–instructors taking the opportunity to increase the amount of work in the course–is echoed in at least one other study on class size. The San Juan Unified School District report on the results of the Morgan-Hart Class Size Reduction Act of 1989 concludes that

As a result of smaller classes, students were more actively involved in the instructional process.  This was demonstrated by an increase in the number of student reading and writing assignments, more oral presentations and frequent classroom discussions.  Students also received increased feedback on their English assignments and teachers had time to work with students individually.

One benefit of reducing class size, then, is that instructors appear more willing to experiment with and examine their teaching styles. Whether this is a bug or a feature depends on the program’s goals. Certainly, although they may not have spent less time on their courses, they reported much higher satisfaction. Teachers like smaller classes.

But they did not always use the time well. We found that instructors heavily committed to the presentational mode did not effect much change in their students’ writing processes. Similarly, reduced class size did not increase overall student satisfaction if the instructor relied on the presentational mode.

In conclusion, our experience fits with Sheree Goettler-Sopko’s summary of research on class-size and reading achievement. She concludes that “The central theme which runs through the current research literature is that academic achievement does not necessarily improve with the reduction of student/teacher ratio unless appropriate learning styles and effective teaching styles are utilized” (5).

Class Size and Minimal Teaching

George Hillocks long ago showed the importance and superiority of constructivist approaches to the teaching of writing (Research in Written Composition, Teaching Writing as Reflective Practice, and more recently Ways of Thinking, Ways of Teaching).  This means that effective teaching requires an approach which does not set the task of teaching writing as getting students to memorize and understand certain objects of knowledge (the objectivist approach), but as setting students tasks during which they will learn and giving them appropriate feedback along the way.  The more that one engages in constructivist teaching, the more important is class size; the more that the goals and practices of a program are objectivist, the less class size matters. While reducing class size does not guarantee constructivist teaching, increasing class size does prevent it.

One can see this effect simply by thinking about the amount of time for which writing instructors are paid. The assumption at many universities is that each class is supposed to take 8-10 hours per week of instructor time. Instructors spend three hours each week in class, and it is optimistic, but not necessarily irrationally so, to assume that an efficient and highly experienced teacher can prepare for class on a one-to-one basis (that is, that it takes approximately one hour to prepare for one hour of class). A teacher therefore has two to four hours a week left (almost precisely what is required by most universities for office hours). If an instructor has twenty students per class, s/he has, over the course of the semester, 30-60 hours, which comes, at best, to three hours per student for conferences and grading. This situation necessitates cutting the students short on something–short papers which can be graded quickly, cursory grading of student work generally, discouraging students from using office hours. All in all, it means that one cannot do what Pascarella and Terenzini say “effective teachers do” when “They signal their accessibility in and out of the classroom” (652). Simply put, if instructors have to use office hours to grade student work, they cannot signal accessibility. Pascarella and Terenzini say, “They give students formal and informal feedback on their performance” (652), but, if instructors are restricted to three hours of grading per student per semester, they have to minimize the amount of feedback given. In other words, large classes force instructors away from what “we know” to be good practice.
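As a back-of-the-envelope version of that calculation (assuming a fifteen-week semester and the eight-to-ten-hour weekly expectation described above):

# Back-of-the-envelope budget for a course paid at 8-10 hours per week (assumed figures).
weeks = 15
students = 20
in_class = 3    # weekly contact hours
prep = 3        # optimistic one-to-one preparation ratio
for weekly_budget in (8, 10):
    leftover = weekly_budget - in_class - prep   # 2 to 4 hours per week
    semester = leftover * weeks                  # 30 to 60 hours per semester
    per_student = semester / students            # 1.5 to 3 hours per student
    print(weekly_budget, leftover, semester, per_student)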

The larger the class, the more the teacher is forced into lecturing. Yet, according to Pascarella and Terenzini,

            Our review indicates that individualized instructional approaches that accommodate variations in students’ learning styles and rates consistently appear to produce greater subject matter learning than do more conventional approaches, such as lecturing. These advantages are especially apparent with instructional approaches that rely on small, modularized content units, require a student to master one instructional unit before proceeding to the next, and elicit active student involvement in the learning process. Perhaps even more promising is the evidence suggesting that these learning advantages are the same for students of different aptitudes and different levels of subject area competence. Probably in no other realm is the evidence so clear and consistent. (646, emphasis added)

If we want instructors to be effective writing instructors, then we have to ensure that they are in a situation which will permit good practice. Reducing class size will not necessarily cause such practice, but it is a necessary condition thereof.

Works Cited

ADE.  “ADE Guidelines for Class Size and Workload for College and University Teachers of English: A Statement of Policy.” Online. http://www.ade.org/policy/policy_guidelines.htm. 1998.

Baron, Dennis E. Grammar and Good Taste : Reforming the American Language. New Haven: Yale University Press, 1982.

Bazerman, Charles. Shaping Written Knowledge: The Genre and Activity of the Experimental Article in Science. Madison: University of Wisconsin Press, 1999.

Berkenkotter, Carol. “Decisions and Revisions: The Planning Strategies of a Publishing Writer.” Landmark Essays on Writing Process. Sondra Perl, ed. Davis, CA: Hermagoras Press, 1994. 127-40.

Braddock, Richard.  “The Frequency and Placement of Topic Sentences in Expository Prose.” On Writing Research: The Braddock Essays, 1975-1998.  Ed. Lisa Ede. New York:  Bedford, St. Martin’s, 1999. 29-42.

Chatman, Steve.  “Lower Division Class Size at U.S. Postsecondary Institutions.”  Paper presented at the Annual Forum of the Association for Institutional Research. Albuquerque: 1996.

Chomsky, Noam. Aspects of the Theory of Syntax. Cambridge: MIT P, 1965.

Davis, Barbara Gross, Michael Scriven, and Susan Thomas. The Evaluation of Composition Instruction. 2nd ed. New York: Teachers College Press, 1987.

Dawkins, John. “Teaching Punctuation as a Rhetorical Tool.” CCC (Dec. 1995): 533-548.

Emig, Janet.  The Composing Processes of Twelfth Graders. Urbana: NCTE, 1971.

Faigley, Lester, and Stephen Witte. Evaluating College Writing Programs. Carbondale: Southern Illinois UP, 1983.

Fish, Stanley. Is There a Text in this Class? Cambridge: Harvard UP, 1982.

Flower, Linda, and John R. Hayes. “The Cognition of Discovery: Defining a Rhetorical Problem.” Landmark Essays on Writing Process. Sondra Perl, ed. Davis, CA: Hermagoras Press, 1994. 63-74.

Glass, Gene V., and Mary Lee Smith. “Meta-Analysis of Research on the Relationship of Class-Size and Achievement. The Class Size and Instruction Project.” Washington D.C.: National Institute of Education, 1978.

Goettler-Sopko, Sheree. “The Effect of Class Size on Reading Achievement.” Washington D.C.: U.S. Department of Education, 1990.

Graff, Gerald. Beyond the Culture Wars: How Teaching the Conflicts Can Revitalize American Education. New York: WW Norton, 1993.

Hairston, Maxine. “Working with Advanced Writers.” CCC 35 (1984): 196–208.

Hartwell, Patrick. “Grammar, Grammars, and the Teaching of Grammar.” College English 47 (February 1985): 105–27.

Haswell, Richard H.  “Minimal Marking.” College English 45.6 (1983): 166-70.

Hillocks, George.  Research in Written Composition: New Directions for Teaching.  Urbana: NCTE, 1986.

– – -. Teaching Writing as Reflective Practice: Integrating Theories. New York: Teachers College P., 1995.

– – -. Ways of Thinking, Ways of Teaching. New York: Teachers College P., 1999.

Huot, Brian.  “Toward a New Theory of Writing Assessment.” CCC 47.4 (1996): 549-66.

Knoblauch, C.H., and Lil Brannon. “On Students’ Rights to Their Own Texts: A Model of Teacher Response.” College Composition and Communication 33 (1982): 157-66.

Kolln, Martha. Rhetorical Grammar: Grammatical Choices, Rhetorical Effects. 4th Ed. New York: Pearson, 2002.

Labov, William. The Logic of Non-Standard English. Champaign: National Council of Teachers of English, 1970.

Lanham, Richard. Revising Prose. 4th ed. New York: Pearson Longman, 1999.

Miller, Susan. Textual Carnivals: The Politics of Composition. Carbondale: Southern Illinois UP, 1991.

NCTE College Section Steering Committee. “Guidelines for the Workload of the College English Teacher.” Online. http://www.ncte.org/positions/workload-col.html. 1998.

Ohmann, Richard. English in America: A Radical View of the Profession. New York: Oxford UP, 1976.

Pascarella, E.T., and Terenzini, P.T. How College Affects Students: Findings and Insights from Twenty Years of Research. San Francisco: Jossey-Bass, 1991.

Richards, I.A. The Meaning of Meaning: A Study of the Influence of Language upon Thought and of the Science of Symbolism. 8th ed. New York: Harcourt, Brace & World, 1946.

Rose, Mike. Lives on the Boundary. New York: Penguin, 1990.

San Juan Unified School District. “Class Size Reduction Evaluation: Freshman English, Spring 1991.” Washington D.C.: U.S. Department of Education, 1992.

Shaughnessy, Mina. Errors and Expectations. New York: Oxford UP, 1979.

Slavin, Robert. “Class Size and Student Achievement: Is Smaller Better?” Contemporary Education 62 (Fall 1990): 6-12.

Smitherman, Geneva. “‘Students’ Right to Their Own Language’: A Retrospective.” English Journal 84.1 (1995): 21-27.

Sommers, Nancy. “Revision Strategies of Student Writers and Experienced Adult Writers.” CCC 31 (December 1980): 378–88.

Swales, John, and Hazem Najjar. “The Writing of Research Article Introductions.” Written Communication 4.2 (April 1987): 175-91.

Tomlinson, T. M. “Class Size and Public Policy: Politics and Panaceas.” Educational Policy 3 (1989): 261-273.

Trail, George Y. Rhetorical Terms and Concepts: A Contemporary Glossary. New York: Harcourt, 2000.

Williams, David D., et al. “University Class Size: Is Smaller Better?” Research in Higher Education 23.3 (1985): 307-318.

Williams, Joseph.  “The Phenomenology of Error.”  CCC 32 (May 1981): 152-68.

—. Style: Ten Lessons in Clarity and Grace. Chicago: U. Chicago P., 1997.

[1] Unhappily, in our experience, the expectation is that instructors should spend more than forty hours per week on their jobs, or cut corners in various ways. For instance, it is often assumed that office hours can be used for course preparation or grading, but that amounts to an official policy that office hours are not times when students can expect the full attention of the instructor. Hence, when upper administrators say that office hours should not be counted separately from course preparation, the correct answer is, “Put that in writing.”

Class size in college writing (an old paper)

[This was co-authored with Reinhold Hill in 2007, based on research done in the late 90s at our then-institution. People have sometimes cited it, although it wasn’t published, so I’m posting it.]

The issue of class size in first year college writing courses is of considerable importance to writing program administrators.  While instructors and program administrators generally want to keep classes as small as possible, keeping class size low takes a financial and administrative commitment which administrators are loath to make in the absence of clear research.  While the ADE and NCTE recommendations of fifteen students are persuasive to anyone who has taught first-year writing courses, they often fail to persuade administrators who are looking for research-based recommendations.  And, in actual fact, class sizes at major institutions range from ten to twenty-five students.

Unfortunately, anyone looking to the available research on class size in college writing courses is likely to come away agnostic.  While there is considerable research on class size and college courses in general, there are several important reasons that one should doubt its specific applicability to college writing courses.  First, much of the general research on class size includes students of all ages.  Second, the research often involves the distinction between huge and simply large courses, such as between forty and two hundred students,  whereas most writing program administrators are concerned about the difference between fifteen and twenty-five students. Third, the courses involved in the studies often have very different instructional goals from first year writing courses.  Finally, the assessment mechanisms are often inappropriate for evaluating effectiveness and student satisfaction in writing courses.

In other words, the NCTE recommendations for writing courses are not based on research, and the research on class size in general cannot yield recommendations.

At the University of Missouri, we were given the opportunity to engage in some informal experimentation regarding class size.  While the limitations of our own research mean that we have not resolved the class size question, our results do have thought-provoking implications for class size and program administration.  In brief, our work suggests that reducing class size, while very popular among instructors, appears not to result in marked improvement in student attitudes about writing unless the instructors use that reduction in class size as an opportunity to change their teaching strategies.  In other words, we seem to have confirmed what Daniel Thoren has concluded about class size research: “Reducing class size is important but that alone will not produce the desired results if faculty do not alter their teaching styles.  The idea is not to lecture to 15 students rather than 35” (5).  If, however, instructors are able to take advantage of the smaller class size, then even a small reduction can result in students perceiving considerable improvement in their paper writing abilities.  We do not wish to imply that reducing class size should not be a goal for writing program administrators, but as a goal in and of itself it is not enough – we need to be aware that pedagogical changes must be initiated together with reductions in class size.

1. Institutional Background

Our study, largely funded by the Committee on Undergraduate Education, was the result of recommendations made by a Continuous Quality Improvement team on our first year composition course (English 20).  That team was itself part of increased campus, college, and departmental attention to student writing.  As a result of that attention, the English 20 program underwent philosophical and practical changes.

The most important change was probably the shift in program philosophy. While there remains some variation among sections, the philosophy of the program as a whole is to provide an intellectually challenging course in which students write several versions of researched papers on subjects of scholarly interest about which experts disagree.  Students write and substantially revise at least three papers, each of which is four to five pages long.  There are four separate but connected goals in these changes.  First, for instructors, our goal is to provide a teaching experience which will make the teaching of first-year composition appropriate preparation for teaching writing intensive courses in their area.  Hence, instructors need to develop their own assignments.

Second, for students, one goal of the course is to enable students to master the delicate negotiation of self and community necessary for effective academic writing.  As Brian Huot has noted, research in writing assessment indicates that students tend to be fairly competent at expressive writing, but have greater difficulty with “referential/participant writing” (241).  Our sense was that this assessment is especially true of students entering the University of Missouri.  They are quite competent at many aspects of writing, but they have considerable difficulty enfolding research into an interpretive argument.  Thus, we did not need to teach The Research Paper that Richard Larson has so aptly criticized; nor do students need instruction in personal narrative.  Instead, students needed practice with assignments which called for placing oneself in a community of experts who are themselves disagreeing with one another.  Achieving this goal was nearly indistinguishable from achieving the goal described above for instructors–assisting instructors to write assignments which called for an intelligent interweaving of research and interpretation into a college-level argument would necessarily result in students’ getting experience with that kind of assignment.

Our third goal was to teach students the importance of a rich and recursive writing process, one which involves considerable self-reflection, attention to the course and research material, and substantial revision in the light of audience and discipline expectations.  Research in composition over the last thirty years suggests that such attention to the writing process is the most important component of success in writing, especially academic papers (Flower and Hayes, Berkenkotter, Emig).

It should be briefly explained that this is not to say that the program endorses what is sometimes called a “natural process” mode of instruction–that term is usually used to describe a program which is explicitly non-directional, in which students write almost exclusively for peers and on topics of their own choosing, and which endorses an expressivist view of writing.  In fact, attention to the writing process does not necessarily preclude the instructor taking a “skills” approach to writing instruction (that is, providing exercises or instruction in what are presumed to be separable aptitudes in composition) but it does necessitate course design with careful attention to paper topics.

And this issue of modes of instruction raises our fourth goal–to enable instructors to use what George Hillocks calls the “environmental” mode of instruction.  When we began making changes to the first year composition program, it was our impression that the dominant mode of instruction was what Hillocks calls the “presentational” mode, which

is characterized by (1) relatively clear and specific objectives…(2) lecture and teacher-led discussion dealing with concepts to be learned and applied; (3) the study of models and other material which explain and illustrate the concept; (4) specific assignments or exercises which generally involve imitating a pattern or following rules which have been previously discussed; and (5) feedback following the writing, coming primarily from teachers.  (116-117)

It is important to emphasize that this mode does not depend exclusively on lecture.  A class “discussion” in which the instructor guides students through material by asking questions intended to elicit specific responses is also the presentational mode.  Insofar as we can tell, a large number of instructors used class time to present advice on writing papers as well as to present writing products which students might use as models.  Instructors then used individual conferences in order to discuss strategies for revising papers.

The dominance of this mixing of presentational and individualized modes of instruction in our program had two obvious consequences.  First, it was exhausting for instructors.  An instructor’s time was generally split between the equally demanding tasks of preparing the information to be presented in class and engaging in individual conferences with students. The standardized syllabus recommended four papers; each class had eighteen to twenty students; many of our instructors taught two classes per semester.  Instructors were forced to choose between not providing individual instruction for students on each paper and spending a minimum of eighty hours per semester in conference with students.  If instructors are also spending six hours per week preparing class material, and three hours per week in class, they are spending one hundred and seventy five hours per semester per class on their teaching–not including the time spent grading and commenting on papers.  Standards for good standing and recommendations regarding course load assume that such students are spending only one hundred fifty hours per semester on each course.

It should be emphasized that shifting instructional mode and changing the syllabus to only three papers cannot solve the problem of overworking instructors.  Class preparation and time in class account for one hundred thirty-five hours per semester; if instructors spend forty-five minutes grading a first submission and only fifteen minutes grading a second submission, an enrollment of twenty students writing three papers brings their commitment to one hundred ninety-five hours per semester per course, and this amount of time does not include any conferences.
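The arithmetic of that scenario can be laid out in a short sketch (assumed figures: a fifteen-week semester, twenty students, three papers, and the grading times just given):

# Semester workload under the revised three-paper syllabus (assumed figures from the discussion above).
weeks = 15
in_class = 3 * weeks                           # 45 hours of class time
prep = 6 * weeks                               # 90 hours of class preparation
base = in_class + prep                         # 135 hours before any grading or conferences
students = 20
papers = 3
grading = students * papers * (45 + 15) / 60   # one hour of grading per paper = 60 hours
print(base + grading)                          # 195 hours per course, with no conferences at all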

An informal survey of our instructors indicated the consequences of these conflicting expectations: some instructors did minimal commenting on papers, some let their own graduate coursework suffer, others encouraged students to write inappropriately short papers, and all were overworked.

The second consequence of the programmatic tendency to alternate between presentational and individualized modes of instruction has to do with Hillocks’ own summary of research on modes of instruction.  Hillocks concludes that the presentational mode of instruction is not as effective as what he calls the “environmental mode”: “On pre-to-post measures, the environmental mode is over four times more effective than the traditional presentational mode” (247). In other words, our instructors were working very hard in ways that may not have been the most effective for helping students write better papers.

So, we wanted instructors to use the “environmental” mode of instruction, which

is characterized by (1) clear and specific objectives…(2) materials and problems selected to engage students with each other in specifiable processes important to some particular aspect of writing; and (3) activities, such as small-group problem-centered discussions, conducive to high levels of peer interaction concerning specific tasks….Although principles are taught, they are not simply announced and illustrated as in the presentational mode.  Rather, they are approached through concrete materials and problems, the working through of which not only illustrates the principle but engages students in its use.  (122)

In the environmental mode, one neither lectures to students, nor does one simply let class go wherever the students want.  Instead, the instructor has carefully prepared the tasks for the students–thinking through very carefully exactly what the writing assignments will be and why.

2. Other Research on Class Size and College Writing

The considerable body of research on class size is largely irrelevant to first-year composition.  Glass et al.’s 1979 meta-analysis of 725 previous studies, for instance, remains one of the fundamental studies on the subject.  Yet it includes a large number of studies on primary and secondary students; hence, there is reason to wonder what role age plays in the preference for smaller class size.  A more recent, and frequently quoted, meta-analysis of college courses, which claims that class size has no effect on student achievement as measured by final examination scores, begins with classes as small as 30 to 40 (Williams et al. 1985).  But this study does not appear to have included a writing course.  Considering that the study was restricted to courses with “one or more common tests across sections” (311), it is unlikely that a writing course was included; if one was, then it was one which presumed that improvement in writing results from learning information which can be tested–a problematic assumption.

A more fundamental problem–because it is shared with numerous other studies of class size–is the measurement mechanism.  That is, examinations are not appropriate measures of student achievement in courses whose goal is to teach the writing of research papers (see Huot 1990; CCCC Committee on Assessment 1995; White 1985; White and Polin 1986); hence, any study which relies on examination grades is largely irrelevant in terms of its measurement mechanism.

Finally, there are good reasons to doubt the implicit assumption that course goals and instructional method are universal across a curriculum.  Feldman’s 1984 meta-analysis of 52 studies does not list any study which definitely involved a writing class; most of the studies, on the contrary, definitely did not include any such course.  Smith and Cranton’s 1992 study of variation of student perception of the value of course characteristics (including class size) concludes that those perceptions “differ significantly across levels of instruction, class sizes, and across those variables within departments” (760).  They conclude that the relationships between student evaluations and course characteristics “are not general, but rather specific to the instructional setting” (762).

This skepticism regarding the ability to universalize from research is echoed in Chatman who argues that class size research indicates that “instructional method should probably be the most important variable in determining class size and should exceed disciplinary content, type and size of institution, student level, and all other relevant descriptive information in creating logical, pedagogical ceilings” (8).  And, indeed, common sense would suggest that there is no reason to assume that research on courses whose major goal is the transmission of information applies very effectively to writing courses.

3. Methods and Results of Our Research

We had two main assessment methods.  Because we were concerned about reducing the time commitment of teaching English 20, we asked instructors to keep time logs.  The mainstay of our initial method of assessment was a set of questionnaires given to students at the beginning and end of the semesters.  While questionnaires are a perfectly legitimate method of program assessment, they do not provide as complete a picture of a program as a more thorough method would (for more on the advantages and disadvantages of questionnaires in program assessment, see Davis et al. 100-107).  Given the budget and time constraints, however, we were unable to engage in those methods usually favored by writing program administrators for accuracy, validity, and reliability, such as portfolio assessment.  We are relying to a large degree on self-assessment, which, while not invalid, has obvious limitations.  Nonetheless, the results of the questionnaires were informative.

Because the program goals emphasize the students’ understanding of the writing process, the questionnaires were intended to elicit any changes in student attitude toward the writing process.  We were looking for confirmation of three different hypotheses.

First, there should be a change in students’ writing processes.  Scholarship in composition suggests that we will find that students begin with a linear and very brief composing process (writing one version of the paper which is revised, if at all, at the lexical level).  If English 20 is fulfilling its mission, the second set of answers will indicate that the majority of students end the course with a richer sense of the writing process–they will revise their papers more, their writing processes will lengthen, and they will revise at more levels than the lexical.

Second, their hierarchy of writing concerns should change.  According to Brian Huot, composition research indicates that raters of college-level writing are most concerned with content and organization (1990, 210-254). Reviewing various studies, he concludes that readers, while concerned with mechanics and sentence structure, consider them important only when the organization is strong (1990, 251).  That is, readers of college papers have a hierarchy of concerns: they expect writers to attend to mechanics, correctness, and format (sometimes called “lower order concerns”), but they expect writers to spend less time on those issues than on effectiveness of organization, quality of argument, appropriateness to task, depth and breadth of research, and other “higher order concerns.”

Beginning college students, however, often have that hierarchy exactly reversed: they are often under the impression that mechanics, format, and sentence level correctness are the most important to their readers, and that the argument (or substance of the paper) deserves much less attention.  Hence, if English 20 is succeeding, there should be a shift in student ranking of audience concerns.  That is, their beginning questionnaire answers will indicate that they pay the most attention to lower order concerns and the least attention to higher order considerations (whether or not the paper fulfills the assignment; if the paper is well-researched; if the evidence is well-presented; if the organization is effective).  At the end of the semester, they should demonstrate a more accurate understanding of audience expectations–not that they have dropped lexical or format concerns, but that they understand those concerns to be less important for success than the higher order concerns.

Third, there should be variation in student and teacher satisfaction with the courses.  This shift is more difficult to predict than the other hypotheses, but it does make sense to expect that the sections in which students receive greater personal attention would be more satisfying for both instructors and students.  In this regard, we expected to confirm what a report from the National Center for Higher Education Management Systems has identified as “an overwhelming finding”: that students believe they learn more in smaller classes, and that they are far more satisfied with such courses.

As with many studies, our results are most useful for suggesting further areas of research.  One area should be mentioned here.  The very constraints of the assessment method–a quantitative and easily administered method–meant that we were asking students to use language other than what they might have.  Open-ended interviews with students would almost certainly elicit much richer results.  One advantage of our study of class size was that it was part of experimenting with various changes in our program; thus, a large number of sections participated in the study as a whole.  Each semester, we had about twenty sections participating in the study in some form or another, and each semester at least four were held to an enrollment of 15 students.[i]  We also designated at least four sections “control” groups, meaning that we did not reduce class size, or consciously make any of the other modifications to English 20 we were contemplating.

An important limitation of our experiment should be mentioned before discussing the results. We ran the experiment over three semesters (WS97, FS97, and WS98), but were only able to use the survey results from the second and third semesters (because we changed the survey between the first and second semester).  In the first semester that we did the experiment, we made a conscious attempt to balance each group in terms of instructor experience and subjective judgments regarding the quality of their teaching.  Given the intricacies of scheduling, however, we were unable to maintain the balances over the next two semesters of the experiment.  This imbalance obviously affected the experimental results in ways that will be noted.

In terms of reducing the time that instructors spent on the course, reducing class size did not have markedly good results.  In FS97, instructors teaching the smaller sections averaged just under twelve hours per week, but they averaged just under fifteen hours per week in WS98.  The control groups reported spending an average of ten and fourteen hours respectively.  Thus, reducing class size did not reduce the amount of time that instructors spent on their courses.

The instructor surveys indicate some reasons that their time commitment might not have decreased.  In FS97, for instance, the teachers mentioned that having a smaller class size inspired them to make changes to their teaching–creating new assignments, taking longer to comment on papers, conferring with students for longer periods of time or more often, adding an extra paper.  In other words, the instructors took the opportunity to try something that a class size of twenty had previously dissuaded them from trying.

Obviously, this experimentation on the part of the instructors would have had some kind of impact on our own experiment, but it is impossible to predict what it would have been.  It may well be that we would have had very different results with the same instructors had they continued with a reduced class size for a second semester.  Working with that class size for the second time, they might have made different decisions about how to spend their time.  It’s also possible that this experimentation accounts for some of the unpredicted results in regard to student satisfaction and writing process, but, again, it is impossible to know.  Thus, one conclusion which we can draw from our own experiment is that one is likely to get better results by having the same instructors work with a reduced class size for several semesters in a row.

As was mentioned earlier, students were given a survey at the beginning and the end of the semester, eliciting their views of the relative importance of various aspects of the writing process, the amount (and kind) of revision in which they typically engaged, and their understanding of the expectations of college teachers. Most of the items were comparison questions, asking the same question about the students’ high school experiences at the beginning of the semester that was then asked about their English 20 experience at the end.  For instance, students were asked: “What aspects of a paper were most emphasized in your high school English course?” at the beginning of the course and “What aspects of a paper were most emphasized in your English 20 course?” at the end of the course. Students were asked to select the five aspects of writing a paper most emphasized in high school and the five most emphasized in their English 20 classes.  The results from FS97 are shown in the table below.  The areas of emphasis are listed in order, and the number is the percentage of students who listed that area among their five.  One term which should be explained is “Thesis statement,” which we take to mean, because of the emphasis of our program, revising the central argument, and not simply rewriting the last sentence of the introduction.

FS97

HS                               CONTROL                CLASS SIZE
Organization 71.66               Drafting 67.4          Peer Review 86.5
Grammar 61.92                    Logic 65.3             Revising TS 71.2
Logic and Reasoning 57.38        Peer Review 65.3       Logic 61.5
Format 54.8                      Organization 57.1      Revising Organization 53.9
Revising one's TS 51.78          Revising TS 51         Organization 48.1

WS98

HS                   CTRL                                CLASS SIZE
Grammar 73.7         Peer review 87.5                    Organization 85.7
Organization 67.7    Organization 75                     Peer review 85.7
Logic 60             Logic 65.9                          Logic 66.7
Research 55.9        Revising TS 59.1                    Research 61.9
Format 54.4          Revising one's organization 48.9    Revising TS 59.1

The results only partially confirmed our hypotheses.  We had predicted that the students would indicate that their high school writing courses put the most emphasis on grammar, format, and outlining and the least emphasis on revision.  We discovered, however, that high school instructors, while putting much emphasis on lower order concerns (e.g., format and grammar), do also emphasize some higher order concerns (e.g., organization and reasoning).  We also discovered more variation between semesters than expected.  The WS98 results were much the same, with the areas of most emphasis in high school being (in order) grammar, organization, logic and reasoning, research, format, and outlining, but revising one’s thesis was second from last (with only 33.6% of students noting it as an area of emphasis in high school).

Our hypotheses were partially confirmed in that, in both semesters, the high school courses put the least emphasis on any form of revision: revising one’s grammar, revising one’s organization, or engaging in peer review.  There was consistently a shift from high school in terms of greater emphasis on revision–it is interesting to note, for instance, that students perceive their high school courses as putting considerable emphasis on organization (71.66 and 61.7), but almost none on revising organization (18.9).  Similarly, while students noted that grammar was emphasized in high school (73.7), revising one’s grammar was not (36.5).  In contrast, while English 20 is perceived as putting much less emphasis on grammar and usage (24.9), that number is much closer to the number of students who perceived an emphasis on revising one’s grammar and usage (25).  We infer that there is considerable variation among high schools–more than we had predicted–but that most high schools emphasize grammar and format more than English 20 does, and that English 20 emphasizes revision more than most high schools do.

It is also interesting to note that students tend to report considerable experience with group work in high school courses.  Yet, students consistently reported little high school emphasis on peer review.  This discrepancy suggests that high school groups are not being used for peer review, or that–despite being put in these groups consistently–students do not perceive the peer reviews as important.

Students were also asked what aspects of a paper college teachers think most important by selecting four out of eight possibilities.  We had expected that this question would show a shift from lower order to higher order concerns–that, for instance, the method of library research would be rated high at the beginning of the semester, but would be replaced by the sources and relevance of evidence.  As with the previous table, the results from FS97 are presented in order, with the number representing the percentage of students who selected that aspect among their four.

FS97

HS                                  CONTROL                CLASS SIZE
Clarity of org 65.8                 Clarity 71.4           Method 80.8
Correct grammar and usage 57.28     Logic 65.3             Persuasiveness 80.8
Logic and reasoning 57.38           Persuasiveness 55.1    Clarity 71.2
Persuasiveness of argument 55.12    Grammar 36.7           Logic 61.5
Mastery of subject 54.7             Mastery 36.7           Sources 50

WS98

HS                     CTRL                           CLASS SIZE
Clarity of org 69.5    Clarity of org 78.4            Clarity of org 76.2
Logic 60               Persuasiveness 71.6            Logic 66.7
Persuasiveness 58.5    Logic 65.9                     Persuasiveness 61.9
Mastery 54.8           Grammar/format/sources 34.1    Mastery 50
Grammar 50.5                                          Grammar 47.6

What is possibly most interesting about these charts is what they indicate about high school preparation.  Students are relatively well informed about college instructors’ expectations before they begin the course; what little change there is in the control group in the first semester (and the almost complete lack of change in the second semester) suggests that simply being in college for one semester will inform students’ audience expectations.

The second most interesting result is that the reduced class size was a distinct failure in the first semester by our own program goals.  We did not want instructors emphasizing the method of library research; it was positively dismaying to see that listed as the greatest area of emphasis.  This result is typical of what Faigley and Witte have called unexpected results, and it is one consequence of how instructors were selected for the study.

Because scheduling of graduate students is often a last minute scramble, there were not specific criteria for participating in the reduced class size experiment.  In FS97, one instructor had participated in considerable training (Adams), one was still using a version of the old standardized syllabus and had participated in no training since her entry into the graduate program several years earlier (Chapman), one was taking comprehensive exams and had engaged in only the required training (Brown), and one had participated in some training beyond what was required (Desser).  Adams generally engaged in the environmental mode; Chapman and Brown almost exclusively in the presentational mode; Desser largely in the environmental mode, but with some reliance on the presentational.  Similarly, the instructors had a variety of years of experience–ranging from two to nine years.  As will be discussed below, the number of years of experience had no effect on the results, but the extent to which a person participated in training did.  In regard to the question discussed above, for instance, one can see the range of training reflected in the range of answers: Adams had only 9 percent of students list method of library research as important; Brown had 37.5; Chapman had 41.6; Desser had 30.77.  In other words, the extent to which instructors had participated in departmental training was reflected in the extent to which their courses reflected departmental goals.

As mentioned above, the exigencies of scheduling prevented our being able to balance the study groups.  Thus, what we generally called the control group was not necessarily analogous to the other sections in terms of instructor quality, experience, or preparation.  We have, therefore, also included the average number for each question–that is, the average number for all eighteen sections included in the study.

Students were asked about their perception of any change in the quality of their papers.  In asking this question, we did not assume that students were necessarily accurate judges of the quality of their papers, but we did think that their answer would provide a more specific way of evaluating the course than our course evaluations provided.  That is, whether or not they think their papers are better seems to us a useful way for thinking about student satisfaction.  The number represents the percentage of students who checked that item.  “Average” means the average number for all eighteen sections participating in the study.

FS97

           Substantially better    Somewhat better    Same    Somewhat worse    Substantially worse
control    40.1                    44.9               4.1     0                 0
size       21.2                    55.8               15.4    3.9               0
average    34

WS98

           Substantially better    Somewhat better    Same    Somewhat worse    Substantially worse
ctrl       23.9                    53.4               15.9    2.3               0
size       16.7                    61.9               11.9    4.8               0

Here again one sees the results of how instructors were selected to participate.  If one looks at this same table for FS97 in regard to individual instructors, one sees a wide variation in student reaction.

           Sub better    Some better    Same    Some worse    Sub worse
Adams      0             54.5           36.3    0             0
Brown      0             50             37.5    12.5          0
Chapman    25            50             25      0             0
Desser     38.4          46.1           15.3    0             0

It is striking that the different sections had very nearly the same percentage of students who reported some improvement–where one sees the greatest difference is in the number of students who reported substantial improvement.  At least with these four instructors, the more training the instructor had, the more likely students were to report substantial gains.

Only one of these instructors participated in the study the next semester–Desser.  In WS98, Desser was in the control group, and the results were as follows:

No answer    Sub better    Some better    Same    Some worse    Sub worse
11.1         5.5           55.5           22.2    5.5           0

Another instructor, Ellison, participated both semesters.  He was in another kind of experimental group fall semester (he met regularly with a faculty member and a group of instructors to discuss assignments, teaching videos, and so on) and reduced class size WS98.  One sees a similar pattern in the difference between the two semesters for his students–when he had a reduced class size, more students reported substantial and some improvement:

        Sub better    Some better    Same    Some worse    Sub worse
FS97    15.7          57.8           21      0             0
WS98    20            70             0       10            0

Granted, it is dangerous to speculate on the basis of two instructors, but it is intriguing that each of these instructors got noticeably better results when teaching a reduced class size than when not.  If these instructors are typical, then one can conclude that the same instructor is likely to get better results with a reduced class size.

There was not always a correlation between amount of training and survey results. For instance, students were asked whether their enjoyment of the paper writing process had changed.  This question was intended as a slightly different way to investigate student satisfaction–ideally, the course would both improve students’ ability to write college-level papers and increase their enjoyment of writing. We were unsure whether or not the question would elicit useful information, however, as we predicted it might be nothing more than an indication of the rigor of the instructors’ grading standards–that students might enjoy writing more in courses with higher GPAs.

           Substantially more    Somewhat more    Same    Somewhat less    Substantially less
Adams      0                     27.2             63.6    0                0
Brown      0                     28.7             62.5    12.5             6.25
Chapman    0                     41.6             41.6    16.6             0
Desser     15.3                  46.1             38.4    0                0
average    7.35

There is not quite as close a correlation between training and results as there was in regard to improved ability, but it is interesting that instructors with more training did not have any students reporting a decrease in enjoyment.  Similarly, the instructor with the least training–an instructor who tends to rely on the presentational mode–had no students report that their papers were substantially better after taking English 20, and the lowest number of students reporting that they received substantially more (12.5) or somewhat more (12.5) attention in English 20 than they had thought they would get.

We had assumed that students in the sections with fewer students would report more individual attention, but this was not necessarily the case.  The table below shows the results for FS97 and the results for Desser and Ellison for both semesters.

                      Sub more    Some more    Same    Some less    Sub less
ctrl                  38.8        38.8         14.3    0            0
average
Class size            34.6        19.2         32.7    9.6          1.9
Adams                 27.2        45.4         18.1    0            0
Brown                 12.5        12.5         56.2    18.7         0
Chapman               33.3        16.6         25      16.6         8.3
Desser FS97           69.2        7.6          23      0            0
Desser WS98           22.2        38.8         33.3    5.5          0
Ellison FS97          31.5        40           30      0            0
Ellison WS98 (reduced)    31.5    36.8         26.3    0            0

Here one sees no striking correlation to amount of training, nor to instructional method.  We speculate that the more important factor is how much the instructor engages in individual conferences with students.  While one does see a striking difference for Desser, there is no change for Ellison (the apparent change is simply the result of 5.2% of his WS98 students not answering that question).  The (highly tentative) inference is that reducing class size will not necessarily result in any group of instructors giving students more individual attention than any other group of instructors might do, but it may result in particular instructors doing so.

This range of results in regard to instructors with lower class size indicates our most important result: that reducing class size does not increase overall student satisfaction if the instructor uses the presentational mode.  Reducing class size might, however, increase student satisfaction and confidence on an instructor-by-instructor basis.

The final table with provocative results is in response to the question: “If your writing process has changed, in what areas have you seen the greatest change?” Students were asked to select five.  The table is arranged by descending order of frequency in the control group.  The number represents the percentage of students who selected that area among their five.

                           CTRL    PLA     Class size    Close    Wkshp
Organization               57.1
Library research           51
Revise TS                  44.9
Logic                      42.9
Drafting                   30.6    27.1    28.9          45.6     27.1
Peer review                30.6
Revise org                 30.6
Time management            26.5
Knowledge of format        24.5    18.6    26.9          29.4     20.8
Write elegant sentences    20.4
Computer use               14.3
Internet research          14.3
Knowledge of grammar       12.2
Reading course material    4.1
Reading                    2
Outlining                  2

WS98

ctrl             Close sup         size             wrkshp
Org 48.9         Logic 48.9        Rev org 45.2     Org 41.7
Rev TS 42.1      Org 46.8          Logic 42.9       Peer rev 41.7
Peer rev 40.9    Rev TS            Org 38.1         Revise org 41.7
Rev org 36.4     Rev org           Rev TS 35.7      Logic 40
Lib 28.4         Computers 27.7    Peer rev 28.6    Rev TS 36.7

The survey results as a whole did not indicate important gains in the reduced class size sections.  For instance, on average, the students in FS97 did not feel that they received more individual attention than the students in the control group did.  They showed slightly more shifting from lower order to higher order concerns on the whole than did students in the control sections, but fewer rated their paper writing as “substantially better.” At the beginning and end of the semester, we asked students how much of a paper they typically revised; we expected that students in the smaller class sizes would report engaging in greater revision than students in the control groups.  But that was not the case.  At the beginning of the semester, 22.4% of students in the reduced class size sections reported changing under 10% of a paper between drafts, compared to 16.1% of students in the control groups.  At the end of the semester the results were 9.6 and 4.1 respectively.  The largest gain for the reduced class size group was in the 11-25% range (from 41.4 to 51.9) and, for the control group, in the 26-50% range (28.6-40.8%).  Similarly, the control group had a larger percentage of students who reported that they revised “substantially” than did the reduced class size sections (22.5 compared to 17.3).

Students perceived that the greatest emphasis in the course was on peer review; revising the thesis; logic and reasoning; revising organization; organization; format; and drafting.  They saw the greatest change in their writing processes in regard to peer review; organization; thesis revision; organization revision; and library research.  In other words, the students saw the greatest changes in at least one area that they did not think the instructors had especially emphasized (library research).  Most discouraging, 3.9% of the students thought that the papers they were writing after taking English 20 were somewhat worse, and 15.4% thought they were the same.  (None of the students in the control group thought their papers were somewhat worse, and only 4.1% of students thought their papers had remained the same.)

Looking at the results for individual instructors, however, has very different implications.  Instructors teaching the reduced class size sections did not necessarily have any training, and they were not required (or even encouraged) to change their teaching practices to take advantage of the reduced class size.  Instructors teaching reduced class size sections who did have some kind of previous training had markedly different results. If an instructor engages in the presentational mode, as some of our instructors did, then there is not an obvious improvement for the students in being in a smaller class.

There is, however, some reason to doubt the assumption underlying the presentational mode–that transmitting information about writing improves writing.  For instance, according to Hillocks, research on grammar, usage, and correctness in student writing indicates that knowledge of grammatical rules has little or no effect on correctness in student performance.  That is, the transferring of information about writing does not improve writing itself.

While lecturing has repeatedly been demonstrated to be of little use in teaching writing, there is no reason to conclude that it is useless in other sorts of courses.  Common sense suggests that a good lecturer can lecture equally well to 15 students or 50 students–indeed, the research on class size indicates that the ability to present and communicate material in an interesting way may well be more important than class size for lecture courses (see, for instance, Feldman 1984).  The environmental mode of instruction, on the contrary, is almost certainly affected by class size.  As McKeachie has said, “The larger the class, the less the sense of personal responsibility and activity, and the less the likelihood that the teacher can know each student personally and adapt instruction to the individual student” (1990, 190).

[i]. The other kinds of sections were: ones with an attached peer-learning assistant; ones whose instructors met regularly with a faculty member to discuss the course; ones in which students met exclusively in small groups with fewer required contact hours per semester.

Writing Centers and copy-editing

Faculty and administrators at UT are extraordinarily supportive of the University Writing Center, something I attribute to the previous directors who set in place a good culture and set of processes. We get fan mail, financial support, and faculty who cheerfully run workshops for us. And our end-of-consultation and follow-up surveys show that students appreciate what we do—98% of 13k surveys say they love what we’re doing.

But what about that 2%?[1] And what if I include faculty who grump at me in meetings or email?

One really interesting complaint, which comes from both faculty and students, is that we won’t “edit” student writing. And what they mean by “edit” is go through a paper and write in the “correct” version of every “error” (what is more accurately called “copy-editing”).[2] These people (again, less than two percent of our visitors) want the Writing Center to be, not just directive, but red-pen editors. And they want it because they care about writing, but they care in different ways:

    • They just want someone to edit their writing because editing is hard.
    • Some people believe that editing (or “writing” as they call it) is a specialized skill set they don’t need to acquire—knowing the correct rules of grammar is a kind of knowledge unrelated to (and less important than) content knowledge.
    • They think sentence-level correctness is important, and easy to convey.
    • They think careful attention to sentence-level decisions is important, and they can point to a time when someone harshly editing their writing opened a new world.
    • They want to read error-free writing.

I appreciate that these people want the UWC to do something that they think will make writing better.

What they don’t understand is that there is a field of research on writing center practices and, in fact, on directive vs. non-directive methods of commenting. There is also a long history of practice. People in writing centers want to improve students’ writing—it’s our mission, passion, and reason for going to work. If red-pen copy-editing of consultees’ work resulted in students being better writers, we’d do it. We don’t because experience and research show that, despite it seeming like the obviously right choice, it doesn’t really help most students.

When I was hired at the Berkeley Writing Center, in the late 70s, there was no training. They hired people who wrote good papers with no grammatical errors, and we met once a week for the first year or so to talk about what was happening in our consultations.

I thought my job was telling people how to change their papers, so I did. That’s what most of us did, and no one told us not to. But, quickly, I learned that wasn’t useful. A good teacher who is giving sensible writing assignments gives a lot of information in class about his/her expectations, about the discipline, about the assignment, and I hadn’t heard any of that. I didn’t actually know what the consultee should do.

And that’s what was happening across writing centers in that era—writing centers learned that consultants shouldn’t evaluate because consultants don’t know the criteria by which a faculty member will evaluate. We shouldn’t pretend to have knowledge we don’t have. That’s why writing centers are non-evaluative—because no one who hasn’t been intimately involved with a class should evaluate the papers written for it.

Well, okay, but why not correct all the commas?  Well, first off, because rules about commas aren’t all that clear—these are rhetorical as much as correctness choices. And, oddly enough, that applies to a lot of “rules” that people think are grammatical, but are stylistic, and vary from one discipline to another (passive voice, bundling nouns as though they’re adjectives, comma splice, use of second person, modifying errors that result from passive agency).

And a lot of “errors” aren’t easily corrected errors of “grammar” but signals of muddled thinking. Errors in predication, mixed construction, reference, modifying, parallelism, metaphor use, and even style choices such as whether to use passive voice/agency often can only be corrected by reconsidering an argument. We can’t just “edit” or “correct” a paper because correcting a mixed construction is a cognitive, not grammatical, choice.

In addition to all that, we shouldn’t just rewrite student papers for them because we’re a teaching unit. Except for the rare people who become professors (and even they don’t benefit until the moment they are engaged in a discipline), most writers don’t learn much about writing by having someone else go through a paper and correct errors.

We think that red-penning a paper is a good strategy because we can often look back and remember some very dramatic moment when we benefitted from having a paper red-penned. We got it back, looked it over, and tried to figure out what all the marks meant, and how they made the paper better. We learned. We assume it would help all students (as a colleague said, a certain amount of narcissism is probably necessary for success in academia)—that’s what initially made me mark up consultees’ papers. But we aren’t like most students. That moment was generally one when an expert in the field (thus, someone with considerable expert authority) helped us learn discipline-specific discourse at a moment (such as graduate school) when we wanted to learn that discourse. I appreciate the faculty who red-penned my work, and I applaud others who do that for students who are at a moment when that is useful information.

The writing center is not that moment. You are that moment, and only for some of your students.

Writers who are anxious to learn the conventions of a field are often appreciative of directive advice as to how they’re not meeting those expectations, and faculty are always people who were that kind of student. We forget that we were atypical. So, yes, red-penning the work of a fairly advanced and very promising student who wants to be an academic can be profoundly useful. But, to be blunt, that is not the job of the UWC because we don’t know who is and is not very promising in a field. Our job is to teach. Not direct.

And most students don’t benefit from that kind of red-penning—they don’t look again at the corrections; they just make them.

As I tell students in my class when I explain why I don’t edit their first submissions, I’m not going through life with them editing their papers. I need to teach them to edit their own papers. If I teach them to rely on me to correct their papers, I’ve done them a disservice. The UWC doesn’t help students be better writers if we copy-edit their papers. Our mission isn’t helping students turn in better papers; it’s helping students be better writers.

[1] In UWC exit surveys, this is less than 2%. It’s a higher percentage of faculty who email or call me, since I don’t get 97 calls or emails about how what we do is great, but it’s still a very small number of calls. Still and all, all of the emails or calls are from people who really care about student writing, and I love that.

[2] “Correct” and “error” are in scare quotes because a lot of times it isn’t a grammar error, but a disciplinary or personal preference. People often assume that, if you don’t copy-edit, you don’t care about sentence-level correctness issues at all. We care about them very much, enough that we ensure that our consultants engage in practices that, unlike copy-editing, are likely to have long-term impact on student writing.

Kavanaugh and the GOP and bungling apologia

Rhetoric is an old art, with what amount to textbooks going back, just in the western tradition, to the 4th century BCE. And, one of the very old concepts in rhetoric is the apologia, or defense speech: the genre of speech in which someone is responding to an accusation. It’s an old concept, and there’s a lot of advice out there as to good and bad practices in apologia. More recently, businesses got interested in the topic, and the field of “crisis communication” was born (with the sub-field of reputation repair). And there are people who work with public figures who can advise people facing accusations as to the best ways to respond.

And they all say the same thing: be clear, take responsibility, be honest.

Kavanaugh, the GOP, and the pundits trying to support him have blown this about as badly as possible. They are clearly not talking to anyone who knows anything about how to handle this kind of situation, and that’s concerning.

There are complicated situations in which no apologia is going to work, or in which it might take months or years. And apologia is a rhetorical strategy–in public rhetoric, it might be purely Machiavellian (the person might not really be very sorry at all). But, there are some principles that are so straightforward they can be taught in a first-year college course in rhetoric. (In fact, they were laid out in a 1973 article.)

So, setting aside the question of ethics or sincerity, the savvy move for Kavanaugh and his handlers to have made was to get advice from at least a first-year rhetoric student, if not an actual expert. Kavanaugh had, from the Machiavellian perspective, an easy case.

The accusation, to be clear, is that, as a drunk teen, he tried to rape another teen. No one is claiming that he could not have done it–there is plenty of evidence that Kavanaugh and friends were living a kind of life in which it could have happened. They’re claiming it hasn’t been proven to have happened, and they’re pulling out all the standard misogynist rape culture strategies.

And someone who knew apologia 101 would have told them DO NOT DO THAT. The right response would have been an apologia that engaged in denial of intent, bolstering, and differentiation. That would have been something like, “I am tremendously sorry for anything I did in those days–I never had any intent to rape anyone, but I was young, stupid, irresponsible, and drinking too much. I don’t know what I did, but I’m sure I hurt people, and I have put those days behind me” [and then a move to bolstering].

Regardless of whether it was sincere or not, it would have been rhetorically savvy–it would have put opponents of Kavanaugh in the position of trying to attack him for something he might have done a long time ago, for which he has apologized, and which he can plausibly say is not a reflection of who he is now. Opponents would have been trying to deny someone a SCOTUS seat on the basis of the character he had at 17.

But, because they fired both barrels of rape culture defenses, Kavanaugh and his supporters have made it clear that he probably still is that entitled and irresponsible person, he doesn’t take responsibility, and they still basically endorse the premises of rape culture. They have made it a question of his character now.

And it’s also now a question of his judgment. And theirs. What is striking to me about the current GOP leadership–and this is a new phenomenon–is the extent to which they reject expertise. There are experts out there who could have helped them with this problem, experts whom they either didn’t consult or whose advice they ignored. And that’s the new GOP in a nutshell. It’s all about each of these guys being all the expert he needs.

Sensible crisis communication is a basic concept in business, and it’s one that’s news to the GOP leadership.

Table of Contents for Hitler and Rhetoric coursepack

Table of contents for the Rhetoric and Hitler course.

This coursepack is in addition to the required texts.

Required texts: Hitler, Mein Kampf (required)

Gregor, How to Read Hitler (recommended)

Evans, The Coming of the Third Reich (required)

Evans, The Third Reich in Power (recommended)

Ullrich, Hitler (required)

coursepack at Jenn’s (required)

Jasinski, Sourcebook (available as an e-book through the UT Library)


Syllabus

Rhetoric and Hitler: an introduction

Kenneth Burke, “Rhetoric of Hitler’s ‘Battle’”

O’Shaughnessy, from Selling Hitler

McElligott, from Rethinking Weimar Germany

Hitler, March 23, 1933 speech

Sample papers

“Advice on Writing”

Hitler, speech to the NSDAP, 9/13/37

—. speech, 8/22/39

—. interview with Johst

—. speech, 1/27/32

Tourish and Vatcha, “Charismatic Leadership and Corporate Cultism at Enron: The Elimination of Dissent, the Promotion of Conformity and Organizational Collapse”

Entry on interpellation

Hitler, speech 4/28/39

Selection from Hitler’s Table Talk (480-83)

Kershaw, from The End (386-400)

Hitler, speech 7/13/34

Longerich, selection from Holocaust (Nazi evolution on genocide)

Selection from Hitler’s Table Talk 12-16, 422-426

Entry on inoculation

Selection from Tapping Hitler’s Generals (30-62)

Kershaw, from Hitler, The Germans, and the Final Solution (197-206)

Selection from Mayer, They Thought They Were Free (166-173)

“Dog whistle politics”

Selections from Shirer’s radio broadcasts

Selection from Snyder’s Black Earth

Selection from Hitler’s Table Talk (75-79)

Selection from Spicer’s Antisemitism, Christian Ambivalence, and the Holocaust

Hitler, speech 4/12/22

“Dissociation” from Perelman and Olbrechts-Tyteca’s The New Rhetoric

Selection from Encyclopedia of Rhetoric

Selection from Eichmann in Jerusalem

Selection from Eichmann Interrogated

Selection from Hitler and His Generals

Selection from Ordinary Men

Louis Goldblatt’s testimony before the Committee on National Defense Migration

Letter to Mr. Monk

Thomas Mann, “That Man is My Brother”

“Masculinity and Nationalism”

“Art of Masculine Victimhood”

Hitler, speech 6/22/41

selection from Longerich’s Hitler

selection from Maschmann’s Account Rendered


Stasis shifts (distracting people from how bad your argument is)

You can’t get a good answer if you ask a bad question. And one of the best ways to shut out any substantial criticism of your position is to ensure that the questions asked about it are softball questions. If your policy isn’t very good, make sure the debate isn’t on the stasis of “is this a pragmatic and feasible policy that will solve the problem we’ve identified.” Shift the stasis.

In a perfect world, we make arguments for or against policies on the basis of good reasons that can be defended in a rational-critical way (not unemotional—it’s a fallacy to think emotions are inappropriate in argumentation). But, sometimes our argument is so bad it can’t stand the exposure of argumentation, in that we can’t put forward an internally consistent argument. Saying that Louis would be a great President because squirrels are evil is a stasis shift—trying to get people to stop thinking about Louis and just focus on their hatred for squirrels.

Arguments have a stasis, a hinge point. Sometimes they have several. But it’s pretty much common knowledge in various fields that the first step in getting a conflict to be productive (marital, political, business, legal) is to make sure that the stasis (or stases) is correctly identified and people are on it. If we’re housemates, and I haven’t cleaned the litterboxes, and we have an agreement I will, then you might want the stasis to be: my violating our agreement about the litterboxes.

Let’s imagine I don’t want to clean out the litterboxes, but, really, it’s just because I don’t want to. I have made an agreement that I would, and when I made the agreement I knew it was fair and reasonable. So, even I know that I can’t put forward an argument about how tasks are divided, or who wanted a third cat and promised to clean litterboxes in order to get that cat. Were this a deliberative situation, I would be open to your arguments about the litterboxes, but let’s say I’m determined to get out of doing what I said I would do. I don’t want deliberative rhetoric. I want compliance-gaining—I just want you to comply with my end point (I don’t have to clean the litterboxes).

I will never get you to comply as long as we are on the stasis of my violating an agreement I made about the litterboxes, since that’s pretty much a slam dunk for you, so I have to change the stasis.

The easiest one (and this is way too much of current political discourse) is to shift it to the stasis of which of us is a better human. If you say, “Hey, you said if we got a third cat, you’d clean the litterboxes, and we got a third cat, and you aren’t cleaning them,” I might say, “Well, you voted for Clinton in the primaries and that’s why Trump got elected,” and now we aren’t arguing about my failure to clean the litterboxes—we’re engaged in a complicated argument about the Dem primaries. I can’t win the litterbox argument, but I might win that one, and, even if I don’t, I might confuse you enough that you will stop nagging me about the litterboxes.

[I might also train you to believe that talking about the litterboxes will get me on an unproductive rant about something else, and so you just don’t even raise the issue. That’s a different post, about how Hitler deliberated with his generals.]

Or, I might acknowledge that I don’t clean the litterboxes, but put the blame for my failure on you because your support of Clinton is so bad that I just can’t think about the litterboxes—that’s another way of shifting the stasis off of my weak point and onto an argument I might win.

Hitler and Rhetoric

As Nicholas O’Shaughnessy says, anyone looking at the devastation of World War II and the Holocaust is likely to wonder: “How was it possible for a nation as sophisticated as Germany to regress in the way that it did, for Hitler and the Nazis to enlist an entire people, willingly or otherwise, into a crusade of extermination that would kill anonymous millions?” (1) The conventional answer is to attribute tremendous rhetorical power to Adolf Hitler. Kenneth Burke calls Hitler “a man who swung a great deal of people into his wake” (“Rhetoric” 191). William Shirer, who was an American correspondent in Germany in the 30s, describes how, listening to a speech he knew was nonsense, he “was again fascinated by [Hitler’s] oratory, and how by his use of it he was able to impose his outlandish ideas on his audience” (131). Shirer says Hitler “appeared able to swing his German hearers into any mood he wished” (128). Shirer is clear that Hitler owed his power to his rhetoric: “his eloquence, his astonishing ability to move a German audience by speech, that more than anything else had swept him from oblivion to power as dictator and seemed likely to keep him there” (127).

Scholars don’t necessarily agree, however. Ian Kershaw says, “Hitler alone, however important his role, is not enough to explain the extraordinary lurch of a society, relatively non-violent before 1914, into ever more radical brutality and such a frenzy of destruction” (Hitler, The Germans, and the Final Solution 347). While Hitler’s personal views were important, and neither the Holocaust nor war would have happened without his personal fanaticism and charisma, they weren’t all that was necessary: “Concentrating on Hitler’s personal worldview, no matter how fanatically he was inspired and motivated by it, cannot readily serve to explain why a society, which hardly shared the Arcanum of Hitler’s “philosophy,” gave him such growing support from 1929 on—in proportions that rose with astonishing rapidity. Nor can it explain why, from 1933 on, the non-Nationalist Socialist élites were prepared to play more and more into his hands in the process of “cumulative radicalization.”” (Hitler, the Germans, and the Final Solution 57)

In other words, Hitler’s followers were not passive automatons controlled by Hitler’s rhetorical magic. So, how powerful was that rhetoric?

The answer to that question is more complicated than conventional wisdom suggests for several reasons. First, while Hitler was quick to use new technologies, including ones of travel, most of the Nazi rhetoric consumed by converts wasn’t by Hitler. People like Adolf Eichmann talk about being persuaded by other speakers, pamphlets, even books.

Second, no one claims that Hitler was a creative or inventive ideologue: “Hitler was not an originator but a serial plagiarist” (O’Shaughnessy 24). Joachim Fest said Hitler’s beliefs were the “sum of the clichés current in Vienna at the turn of the century” (qtd. in Gregor, 2), and Gregor says, “Neither can one claim that Hitler was an original thinker. There is little in his writings or speeches that we cannot find in the penny pamphlets of pre-1914 Vienna where he began to form his political views. His racial anti-Semitism rehearses the familiar slogans of many on the pre-war right. His visions of German expansion echo the ideas of the more extreme wing of the radical-nationalist Pan German movement [….] And, in essence, his anti-democratic, anti-Socialist sentiments similarly reproduce the conventional thinking of broad sectors of the German right from both before and after the First World War.” (2)

If Hitler wasn’t saying anything new, to what extent can we say he persuaded people? What did he persuade them of?

A closely related problem is that large numbers of Germans supported Hitler politically but rejected the central aspects of his ideology—such as his eliminationist racism and his desire for another war. Although he’d long been absolutely clear that those were central to his views, when he began to downplay them (especially in 1932 and 33), many people believed those were trivial aspects that could be ignored. Many people supported him strategically, especially the Catholic and Lutheran churches, both of which were outraged by the Social Democrats’ (democratic socialists) liberal social policies (e.g., legalizing homosexuality, supporting feminism, and, especially, breaking the religious monopoly on primary schools). Since Hitler and the Nazis were socially conservative, and Hitler promised to allow the churches more power than the Social Democrats would allow, many Protestants voted for Nazis, and the official Catholic Party (the Centre Party) Reichstag members voted unanimously for Hitler taking on dictatorial power (for more on this background, see Evans; Spicer).

Some scholars refer to “the propaganda of success,” by which they mean that Hitler gained the support of people not because he put forward good arguments, or even because of anything he said, but because they liked his locking up Marxists and Socialists, industrialists liked his support of big business, people liked the increased amount of order, they liked the improved economy, they liked his conservative social policies, a lot of Germans liked his persecution of immigrants, and a lot of people either liked or didn’t mind the legitimating and legalizing of discrimination against Jews (even the churches only objected to discrimination against converted Jews). And large numbers of Germans didn’t particularly like the idea of democracy—the premise of democracy is that political situations are complicated, and that there aren’t obvious solutions. Or, more accurately, there are solutions that appear to be obviously right from one perspective, but are obviously wrong from another perspective. Democratic processes assume that the various perspectives need to be taken into consideration, and so the best policy for the community as a whole will not be perfect for anyone and will take a lot of time to determine—many people would rather that a powerful leader make all the decisions and leave them out of it. After Hitler had been in power a year, many people felt that their lives were better, and that’s all they really cared about—that they were headed down a road that would make their lives much worse didn’t concern them because they didn’t think about it.

Finally, many people came to support Nazis because they liked that Hitler made them feel proud of being German again. He didn’t make them feel proud of being German by changing their minds about anything, but by insisting publicly and endlessly that they were victims—that nothing about their situation was the consequence of bad decisions they had made. He wasn’t saying anything that was new, but it was new for a political leader—he was simply the first major German political figure in a long time to say, unequivocally, Germany was for Germans, and Germans were entitled to run Europe (if not the world).

All these characteristics of Hitler’s relationship with his supporters—his lack of originality, strategic acquiescence, hostility to democracy, narrow self-interest on the part of many Germans, and the propaganda of success—mean that it’s actually an open question as to whether Hitler’s rhetoric was unique, let alone how much power we should ascribe to it. And so this course will consider the questions: what were Hitler’s rhetorical strategies? how unique or unusual was (is) his rhetoric? what kind of impact does it have? to what extent (and under what circumstances) does it work?


Works Cited

Burke, Kenneth. “Rhetoric of Hitler’s ‘Battle.'” Philosophy of Literary Form. U of California P, 1974.

Evans, Richard. The Coming of the Third Reich. Penguin, 2005.

Gregor, Neil. How to Read Hitler. Norton, 2005.

Kershaw, Ian. Hitler, the Germans, and the Final Solution. Yale UP, 2009.

O’Shaughnessy, Nicholas. Selling Hitler: Propaganda and the Nazi Brand. Oxford UP, 2016.

Shirer, William. The Nightmare Years: 1930-1940. Little, Brown, 1984.

Spicer, Kevin, ed. Antisemitism, Christian Ambivalence, and the Holocaust. Indiana UP, 2007.

Ethos, pathos, and logos

Since the reintroduction of Aristotle to rhetoric in the 60s, there has been a tendency to read him in a post-positivist light. That is, the logical positivists (building on Cartesian thought) insisted on a new way of thinking about thinking—on an absolute binary between “logic” and “emotion.” This was new—prior to that binary, the dominant models of thinking involved multiple faculties (including memory and will) and a distinction within the category we call “emotions.” While it was granted that some emotions inhibited reasoning (such as anger and vengeance), theorists of political and ethical deliberation insisted on the importance of sentiments. The logical positivists (and popular culture), however, created a zero-sum relationship between emotion (bad) and reasoning (logic–good). Thus, when we read Aristotle’s comment about the three “modes” of persuasion in a post-positivist world, we tend to assume that he meant “pathos” in the same way we mean “emotion” and “logos” in the same (sloppy) way we use the word “logic.” And we get ourselves into a mess.

For instance, for many people, “logic” is an evaluative term—a “logical” argument is one that follows rules of logic. Yet, textbooks will describe an “appeal to facts” as a logos (logical) argument. That’s incoherent. Appealing to “facts” (let’s ignore how muckled that word is) isn’t necessarily logical—the “facts” might be irrelevant, they might be incorporated into an argument with an inconsistent major premise, the argument might have too many terms. In rhetoric, we unintentionally equivocate on the term “logical,” using it both to mean any attempt to reason and only logically correct ways of reasoning. (It’s both descriptive and evaluative.)

The second problem with the binary of emotion and reason is that, as is often the case with binaries, we argue for one by showing the other often fails. Since relying entirely on emotion often leads to bad decisions, it must be bad, and relying on logic must be good. That’s an illogical argument because it has an invalid major premise. Were it valid, then someone who made that argument would also agree that relying on emotion must be good because relying purely on logic sometimes misleads (it’s the same major premise—if x sometimes has a bad outcome, then not-x must be good).

So, even were we to assume that emotion and logic are binaries (they aren’t), then what we would have to conclude is that neither is sufficient for deliberating.

And, in any case, there’s no reason to take a 19th century western notion and try to trap Aristotle into it.

A better way to think about Aristotle’s division is that he is talking about: what the argument of a speech is, who is making the speech, and how they are making it. So, the logos (discourse) in a speech can be summarized in an enthymeme because, he said, that’s how people reason about public affairs. There are better and worse ways of reasoning, and he names a few ways we get misled, but he didn’t hold rhetoric to the same standards he held disputation—that is where he went into details about inference. An appeal to logos, in Aristotle’s terms, isn’t necessarily what we mean by a logical argument.

Aristotle pointed out that who makes the speech has tremendous impact on how persuasive it is (and also how we should judge it)—both the sort of person the rhetor is (young, old, experienced, choleric), and how the person appears in the speech (reasonable, angry). And, finally, how the person makes the speech has a strong impact on the audience, whether it’s highly styled, plain, loud, and so on.

And all of those play together. A vehement speech still has enthymemes, and it’s only credible if we believe the speaker to be angry—if we believe the speaker to be generally angry (or an angry sort of person) that will have a different impact from an angry speech on the part of someone we think of as normally calm. Ethos, pathos, and logos work together, and they don’t map onto our current binary about logic and emotion.

As long as I can think of someone more racist, I’m not racist at all

My *favorite* assignment in the Rhetoric of Racism course is having students look at a text (or practice) about which there is an argument (ideally a text they think is racist) and explain why there is a disagreement.

There are basically eight ways people argue that a text isn’t racist:

  1. a text isn’t racist if it doesn’t make a big deal about race;
  2. texts are either racist or not racist and so if there is any way in which this text criticizes racism, then it can’t be racist;
  3. it’s just a “feel-good” text and you’re over-reading;
  4. it isn’t racist because what it says is true (in other words, the person saying the text isn’t racist is racist);
  5. racists are people who explicitly and self-consciously hate everyone of every other race, and only racist people say racist things, so if the person who created the text isn’t someone who never ever associates with or never says anything “nice” about any member of any other race, then the text can’t be racist (also known as the “some of my best friends are…” defense);
  6. the author didn’t intend to be racist (so it’s only racist if the individual who created the text engaged in actions s/he knew to be racist);
  7. it doesn’t have the marks of hostility toward another race (the tone isn’t over-the-top, it doesn’t use racial epithets);
  8. it isn’t racist because there are other texts that are more racist, or it doesn’t endorse the most extreme versions of racism, or the person knows of people who are more racist (what I’ll call the “Eichmann defense”).

This is also a list of how racism is legitimated—these are the ways that people allow racist practices to continue. They’re all complicated to talk someone out of (although there are ways), and here I want to focus on two of them: 4) and 8), which often co-exist. These are the ones that really muckle my students, and they are really interesting.

I think the two of them share the assumption that calling a text racist is a personal attack on, not just the author(s) of the text in question, but anyone who likes it. The underlying logic is: racists are evil, evil people are entirely not-good, people who like something racist are racist, so calling someone racist, or saying something they like is racist, is saying they are entirely evil.

That logic is a good example of what Chaim Perelman and Lucie Olbrechts-Tyteca called “philosophical paired terms.” The logic maps out like a question on a standardized test: “Dogs are to mammals as parakeets are to ____.”

And so, since good and evil are binaries (something is entirely good or entirely evil), if you can imagine something more evil, you must have some good, and so can’t be entirely evil, and so you can’t be evil at all. Therefore, you must be on the “not racist” side of the equation.

Most of us (perhaps all) engage in judgments comparatively, so that, as long as we are more [whatever] than our peers, we feel good about ourselves. Clearly, 8) relies on that move—as long as you aren’t as racist as someone else, you can feel good about your attitudes.

Interestingly enough, Adolf Eichmann relied on that argument a lot. In the interrogations, he several times condemned people for a Streicher-kind of anti-Semitism—part of trying to persuade his Jewish interrogators that he wasn’t anti-Semitic. He also continually tried to represent his job as okay because it wasn’t as directly death-dealing as the jobs of the people who actually pulled the triggers or applied the gas.

If someone else was more guilty, then he wasn’t guilty at all.

This move is sometimes characterized as “whataboutism” but it’s actually different. Whataboutism is sheer tu quoque—it’s an attempt to shift the stasis of the argument away from what I did to some competition as to which group or individual is better. It’s almost always an admission that the people making the argument are engaged in sheer factionalism (there are complicated exceptions). So, for instance, defenders of Trump said Clinton did it too (a fallacy). But, some critics of Bill Clinton pointed out that he claimed he was a feminist and supporter of women’s rights, so his sexually harassing women was a violation of feminist principles. That’s a legitimate and important argument.

People who claim that the GOP is morally superior to the DNC can’t logically use the “Clinton groped women” argument at all because it shows that they think both parties are just as bad—and they’re claiming theirs is better.

“Whataboutism” works by accusing the out-group of doing the same thing the in-group has recently been outed for doing. But this move doesn’t accuse the out-group of anything—it just points out that there is a worse version (perhaps even a worse in-group version) of this behavior.

Eichmann defended himself as not anti-Semitic because another Nazi was more extreme. During slavery, slaveholders defended their treatment of slaves on the grounds that there were other slaveholders who were worse (they also engaged in tu quoque, but that’s a different story); pro-segregationists posited the KKK and violent segregationists as worse than they; the people I know who drink the Rush Limbaugh/Fox News flavor-aid all name some in-group pundit too extreme for them.

That someone may be more racist doesn’t mean you aren’t racist. Both you and they might be racist.

Talking about racism means, I think, getting the argument away from whether people are racist, whether their intentions are deliberately racist, and whether racist/not racist is a binary.
