Class size and college writing (another version of the same argument)

[Also co-authored by Reinhold Hill, and also from the early 2000s]

Introduction

Any Writing Program Administrator occasionally has the frustrating experience of failing to get administrators, colleagues, parents, and even students to understand the bases of our decisions–why classes must remain small, why instructors need training in rhetoric and composition, and so on. This kind of experience is frustrating because we often find ourselves talking to someone with different assumptions about teaching, writing, and research. For such an audience, position statements are often not helpful, as our interlocutors do not even know the organizations whose position statements we’re likely to cite.

This is not to say that such discussions are necessarily an impasse, nor that the different assumptions interlocutors have are incommensurable. It is simply that, often being from different disciplines, we all bring different assumptions that seem transparently obvious to each of us. We may know from experience that one gets better writing from students if they are required to revise, but an administrator with a different disciplinary background may be sincerely concerned that we are not assessing our classes and students on the extent to which students have retained the information we have given them in lectures and readings. For many people, that is learning. As far as they are concerned, if we are not lecturing and assigning reading, then we are not teaching; if we are not testing our students, then we are not assessing students objectively.

In addition, the articles and books that we are likely to cite to explain our practices look very strange to some people–“it’s just argument,” a colleague once complained. We can rely on argument because we teach argument, and we are comfortable assessing arguments. We can rely on anecdote and personal experience more than people in many fields because we share an experience–the teaching of writing. Thus, if an author narrates a specific incident, we are likely to find it a reasonable form of proof, if the incident is typical of our own experience. In some other fields, however, quantified, empirical evidence is the only credible sort of proof, or an assertion must be supported by a large number of studies (regardless of how problematic any individual study might be). This is particularly an issue with class size, as minor changes in enrollment (from an administrator’s perspective) are strongly resisted by Writing Program Administrators. Our intention in this article is to help Writing Program Administrators argue for responsible and ethical class sizes in writing courses.

There are few topics about which Writing Program Administrators and upper administrators are likely to disagree quite so unproductively as class size. While Writing Program Administrators typically argue for keeping first year writing courses as small as possible, upper administrators are often focused on the considerable savings that could be effected by even a small change in enrollment. WPAs can cite position statements and recommendations from NCTE and ADE, but upper administrators cite such passages as the following from Pascarella and Terenzini, who summarize the “substantial amount of research over the last sixty years” on class size in college teaching:

The consensus of these reviews–and of our own synthesis of the existing evidence–is that class size is not a particularly important factor when the goal of instruction is the acquisition of subject matter knowledge and academic skills. (87)

With the backing of such an authority, upper administrators are likely to be mystified at WPAs’ resistance to first year writing classes of twenty-five to thirty.

This is not to say that WPAs have no research on the side of smaller classes. Despite what Pascarella and Terenzini say, there is considerable research which identifies benefits in smaller classes. The meta-analysis of Glass and Smith (not mentioned by Pascarella and Terenzini) concludes that reduced class size is beneficial at all grade levels; Slavin found a small positive short-term benefit; and several studies found benefit if (and only if) teachers engaged in teaching strategies that took advantage of the smaller size (Chatman, Tomlinson). On the other hand, at least one study too recent to be cited by Pascarella and Terenzini concludes that there is no demonstrable benefit to reducing class size (David Williams). Thus, it may seem to be a case of warring research.

On the contrary, we will argue that the apparently disparate results of this research can be explained by a comment Pascarella and Terenzini themselves make. After the passage quoted above, they say, “It is probably the case, however, that smaller classes are somewhat more effective than larger ones when the goals of instruction are motivational, attitudinal, or higher-level cognitive processes” (87).

There are two points which we wish to make about Pascarella and Terenzini’s negative conclusion regarding class size. First, it is striking how dated the research is–although Pascarella and Terenzini’s book came out in 1991, the most recent study they cite is from 1985. Of the eighteen studies they mention, three are from the twenties, one from 1945, two from the fifties, two from the sixties, seven from the seventies, and three from the eighties. This is particularly important for the teaching of writing, as there was a major reversal in pedagogy in the sixties: a return from the lecture-based presentation of models which students were expected to imitate to the classical method, which put greater emphasis on the process of inventing and arranging an effective argument.

This issue of teaching model is crucial. The impact that varying class size has on the outcome in terms of student writing depends heavily on the goal and method of the writing courses in question. If the courses are lecture courses, in which only the teacher is expected to read the students’ writing, then the only limit on class size comes from the amount of time one expects the teacher to spend grading. While that is not a model we endorse (and we will discuss the reasons below), it can still be the basis of a useful discussion.

At a “Research I” institution, faculty members are usually assessed on the assumption that they spend forty per cent of their time teaching two courses–that is, eight hours (one day of a five-day work week) per course. At schools with more teaching responsibilities, the math works out in similar ways (with a fairly ugly exception for universities with Research I publishing expectations and a three- or four-course teaching load). Graduate students are usually assumed to have teaching responsibilities that account for half of their half-time appointment, or ten hours per course. With three hours per week in the classroom and three hours of office hours, graduate student instructors are left with four hours per week for grading and course preparation.

One reason that administrators and WPAs often disagree about the amount of work involved in teaching writing courses is that administrators’ experience is with what Hillocks calls the “presentational” mode of teaching. The first year is hellish, but then the instructor has prepared the presentations, and future years involve tinkering with prepared lectures. Hence, course preparation is presumed to be minimal. But, of course, most WPAs are not imagining instructors’ spending class time lecturing because the presentational mode has been demonstrated, conclusively, to be the least effective method of teaching writing.

Still and all, if one assumes that a course is supposed to take 150 hours of an instructor’s time over the course of a semester (not including pre-semester course preparation), and 45 hours of that time is spent in class, and another 45 hours is spent in office hours, there are 60 hours left for individual conferences, grading, and course preparation. If there are twenty students per class, then meeting twice with each student for a half-hour conference uses up 20 hours. Even assuming an efficient teacher who is dusting off lecture notes for course preparation, one should expect an hour per week of course preparation (15 hours), leaving 25 hours for grading. Advocates of minimal marking (a problematic issue to be discussed below) describe a process that takes only twenty minutes per paper. Obviously, then, the amount of time an instructor spends on grading depends upon the number of papers, but a course with only three papers would use up almost all of the time left. Since most programs require more than three papers (and most instructors spend more than twenty minutes per paper), more than twenty students per course puts instructors into unethical working conditions.[1]
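The budget described above works out as a quick back-of-the-envelope calculation. The figures–a 150-hour course, a fifteen-week semester, twenty students, three papers, twenty minutes per paper–are the ones assumed in the paragraph; the Python sketch itself is ours and purely illustrative:

```python
# Semester time budget for one twenty-student writing course,
# using the figures assumed above (15-week semester, 150 total hours).
total_hours = 150            # total instructor time budgeted for the course
in_class = 3 * 15            # three hours per week in class
office_hours = 3 * 15        # three hours per week of office hours
conferences = 20 * 2 * 0.5   # two half-hour conferences per student
prep = 1 * 15                # one hour per week of course preparation

remaining_for_grading = total_hours - in_class - office_hours - conferences - prep
print(remaining_for_grading)  # 25.0 hours left for grading

# "Minimal marking" at twenty minutes per paper, three papers per student:
grading_needed = 20 * 3 * 20 / 60
print(grading_needed)         # 20.0 hours -- nearly the whole remaining budget
```

Even under these optimistic assumptions, three papers consume twenty of the twenty-five remaining hours; a fourth paper, or more careful marking, pushes the course past its budget.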

But, as we said, the deeper issue concerns just what happens in a writing class. The issue is whether one sees writing instruction as the inculcation of subject matter knowledge or as the development of higher-level cognitive processes. To the extent that it is the latter, classes should be small; to the extent that it is the former, class size is limited only by instructor workload. Or, in other words, what do we teach when we teach college writing?

Interestingly enough, this is one of those questions that does not seem like a question at all to people outside the field. To people unfamiliar with research in Linguistics, Rhetoric, and English Education, the answer appears straightforward: we teach the rules of good writing. Behind that apparent consensus, however, is an interesting disagreement. For some people, the “rules of good writing” describe formal characteristics that all educated readers acknowledge as marks of high quality (e.g., the thesis in the first paragraph, an interest-catching first sentence). For others, those rules describe procedures that all good writers follow when writing (e.g., keep notes on three-by-five cards, write a formal outline with at least two sub-points). As qualitative and quantitative research has shown, however, both of those perceptions regarding the rules of good writing–regardless of how widespread they are–are false.

In the first place, there is less consensus about what constitutes “good” writing than many people think. Writers have fallen in and out of fashion, so that there is not any author who has not had his or her detractors–critical reception of Henry David Thoreau’s Walden was so hostile that it was nearly turned into pulp; Addison and Steele, always included in composition textbooks until the 1960s, are now considered nearly unreadable; even Shakespeare has been severely criticized for his mixed metaphors, complicated language, and drops into purple prose. As research in reader response criticism demonstrated as long ago as the early part of the twentieth century (see I.A. Richards), students (and readers) do not immediately recognize the merits of canonical literature; there is considerable disagreement as to just what the best writing is, so that the “canon” of accepted great writing has constantly been in flux (see Ohmann, Fish, Graff).

To a large degree, this disagreement is disciplinary; that is, different disciplines have different requirements for writing. This divergence is most obvious in regard to format–such as citation methods and order of elements. It is equally present and more important in regard to style: in the experimental and social sciences, for instance, “good” writing uses passive voice, nominalization, long clusters of noun phrases, and various other qualities which are considered “bad” writing in journalism, literature, and various humanistic disciplines. Even the notion of what constitutes error varies–social science writing is rife with what most usage handbooks identify as mixed metaphors, predication errors, reference errors, nonparallel structure, split infinitives, dangling modifiers, and agreement errors. Lab reports, resumes, and much business writing permit, if not require, sentence fragments. Meanwhile, people from some disciplines recoil at the use of first person in ethnographic writing, literary criticism, some journalism, and other humanistic genres.

Disciplines also disagree as to what constitutes good evidence (for more on this issue, see Miller, Bazerman). Some disciplines accept personal observation (e.g., cultural anthropology), while some do not (e.g., economics). There are similarly profound disagreements regarding the validity of textual analysis, quantitative experimentation, qualitative research, interviews, argument from authority, and so on. There is a tendency for people to be so convinced of the epistemological superiority of their form of research that, when confronted with the fact of differing opinions on what constitutes good writing, they dismiss the standards of some other discipline (thus, for instance, Richard Lanham’s popular textbook Revising Prose condemns all use of the passive voice, and Joseph Williams’ even more popular Style: Ten Lessons in Clarity and Grace prohibits clusters of nouns). Our goal is not to take a side in the issue of which discipline promotes the best writing, but to insist upon the important point that there is disagreement. Thus, a writing course cannot teach “the” rules of good writing that will be accepted in all disciplines, because no such rules exist (unless the rules are extremely abstract, as discussed below).

In the second place, as several studies have shown, the “rules” of good writing which we give students do not describe published writing. For instance, students are generally told to end their introductions with their “thesis statements,” to begin each paragraph with a topic sentence (which is assumed to be the main claim of the paragraph), and to focus on the use of correct grammar. Published writing, however, does not have those qualities. Thesis statements are usually in conclusions (Trail), and introductions most often end with a clear statement of the problem (Swales), or what classical rhetoricians called the “hypothesis” (meaning a statement that points toward the thesis). Textbook advice regarding topic sentences is simply false (Braddock), and readers are much more oblivious to errors in published writing than much writing instruction would suggest (Williams). In fact, error may not have quite the role that many teachers think–while college instructors say that correctness is an important quality of good writing (Hairston), studies in which they rank actual papers show a privileging of what compositionists call “higher order concerns”–appropriateness to the assignment, quality of reasoning, and organization–over format and correctness (see Huot’s review of research on this issue).

Finally, several meta-analyses of research conclude that teaching writing as rules has a harmful effect on student writing (see especially Knoblauch and Brannon, Hillocks, Rose). The common-sense assumption is that students prone to writing blocks lack the knowledge of rules for writing that effective writers have; on the contrary, students prone to writing blocks may know too many rules. In contrast to more fluid writers, who tend to focus on what is called “the rhetorical situation” (explained below), student writers prone to writing blocks focus on rules they have been told (Flower and Hayes, Rose). Students taught these rules of writing try to produce an error-free first draft which they minimally revise (Emig, Sommers). Effective and accomplished writers, in contrast, have rich and recursive writing processes that depend heavily upon revision (Emig, Flower and Hayes, Berkenkotter, Faigley and Witte).

For many people unaware of research in linguistics and English education, the assumption is that the “rules” of good writing are the rules regarding usage (usually described as “grammar rules,” which is itself an instance of an error in usage). It is assumed that there is agreement regarding these rules, and they are to be found in any usage handbook. Further, it is assumed that one can improve students’ “grammar” (another interesting usage error–what people mean is “reduce usage error” or “improve correctness”) by getting them to memorize those universally agreed-upon usage rules.  These assumptions are wrong in almost every way.

Research in linguistics demonstrates that language has considerable variation over time and region. To put it simply, at any given moment, there are numerous dialects within a language which are each “correct” within their community of discourse (e.g., “impact” for “influence,” “thinking outside the box”). Some dialects are more privileged than others, and the uninformed often assume that facility with the more privileged dialect signifies greater intelligence; this is patently false (Chomsky, Labov and Smitherman, Baron). All dialects have a grammar, so students (and colleagues) who use a different dialect are not ignorant of “grammar”; they know the grammar of a dialect not considered appropriate in academic discourse, the dialect which linguists sometimes call “standard edited English.” It is easy to overstate agreement regarding “standard edited English,” as that dialect has varied substantially over time: the “shall” versus “will” distinction used to be considered extraordinarily important, “correct” comma usage differs between British and American English and even more between the nineteenth century and now, and usage handbooks disagree on numerous issues (such as agreement). The notion of a correct dialect upon which there is universal agreement is simply a fantasy.

In our experience, people respond to this research by objecting to the pedagogy they assume it necessarily implies. People assume that to note the reality–considerable regional and historical disagreement regarding linguistic correctness–necessarily implies a complete abandonment of attention to error. That is not the necessary conclusion, nor is it our point. Our point here is simply that one central assumption in this view of writing instruction is wrong–there is not universal agreement as to rules regarding “correct” language use.

In addition, this research does not necessarily imply an “anything goes” pedagogy. While some have drawn that conclusion, others have used this research to argue for teaching grammar and usage as a community-of-discourse issue (e.g., Labov and Smitherman); that is, rather than denigrate some dialects, teachers should present “standard edited English” as a useful dialect which students should use under some circumstances and with some audiences (see, for instance, “Students’ Right to Their Own Language”). Others have argued that grammar and usage should be taught as a rhetorical issue, as a question of clarity and rhetorical effect (Williams, Kolln, Dawkins).

And this leads us to the second point–the assumption that one can reduce errors in student writing by making students learn the rules of standard edited English. On the contrary, in the nearly one hundred years that this issue has been studied, there has not been a single study which showed improvement in student writing resulting from formal instruction in the rules of grammar, while there are several studies which showed a marked deterioration (see Knoblauch and Brannon, Hartwell, and Hillocks for more on the history of this research). That deterioration may be the consequence of increased anxiety leading students to mistrust their implicit knowledge (Hartwell), or of the time taken for grammar instruction being time taken away from more productive forms of writing instruction (Knoblauch and Brannon).

In our experience, this point too is misunderstood. We are not saying that instruction in grammar and usage is pointless, but that certain approaches to it demonstrably are. And those are precisely the pedagogies into which one is forced in large classes–lecturing, drilling, assigning worksheets, and testing students on usage rules.

Indeed, research suggests that there is probably not a pedagogy which can be applied to all students in the same way. Issues of linguistic correctness result from different causes, depending upon the students. Hence, the solution varies. For students whose native dialect is fairly close to standard edited English, for instance, errors in usage sometimes result from lack of clarity about their own argument; students make more usage errors, for instance, when they are writing about something they do not fully understand. For such students, clarifying the concepts will enable the students to correct the errors.

For other students, usage errors are a time management issue–they did not leave themselves time to proofread. What Haswell has somewhat misleadingly called “minimal marking” is generally the best strategy under those circumstances (it is misleading in that it depends upon students’ resubmitting their corrected papers, so it can be fairly time-consuming for the instructor, albeit far less time-consuming and more effective than copy-editing). What he advocates, however, is not a kind of marking that takes minimal time on the part of the instructor.

For students whose dialect is markedly different from standard edited English, there is the possibility of what linguists call “dialect interference”–instances of using their (academically inappropriate) dialect, engaging in hypercorrectness (“between you and I”), or simply being unsure how to apply the rules. There are also students whose experience with written English is minimal, and who may have a tendency toward what are called “errors of transcription” (e.g., errors regarding the placement of commas and periods). For these students, “minimal marking” is ineffective, but neither do they benefit from lectures and quizzes on grammar rules. Instead, they seem to benefit most from individual instruction. Several studies show strong short-term improvement from sentence embedding (Hillocks), but many instructors moved away from it due to its inherently time-consuming nature.

In short, as Mina Shaughnessy pointed out long ago, improving students’ usage is not something one can do in the same way with all students. One must know exactly what specific problems exist with each student, why that student is having that problem, and what method will best work with that problem and that student.  In other words, effective instruction in grammar and usage necessitates classes small enough that the teacher can know students well enough to know the cause of the problem. If the students have major problems, as from dialect interference, then the classes have to be small enough for the teacher to be able to engage in the extremely time-consuming methods necessary for such students.

One might wonder, if writing teachers are not teaching rules of writing, what are we teaching? And the answer seems to be that we are teaching rhetoric. That is, while one cannot present students with rules that apply to all circumstances–never use “I,” always begin with a personal anecdote, your thesis should have three reasons–there are principles which do seem effective in most circumstances. Those principles are encapsulated in the concept of the “rhetorical situation”–the notion that the quality of a piece of discourse is determined by the extent to which its strategies are appropriate for effecting the author’s particular intention on the specific audience. Thus, were one to examine prize-winning articles in philosophy, economics, literary criticism, engineering, behavioral psychology, and theoretical physics, one would see wide variation in format, style, organization, and nature of evidence, but one would also see that each piece was appropriate for its audience.

One advantage of this approach to the teaching of writing is that it is more effective. Lecturing and drilling are, as several studies have shown, ineffective methods of writing instruction (Hillocks). Presentational teaching remains tremendously popular, however, especially among teachers whose own instruction followed that method, who are cynical regarding student achievement, and who are generally convinced that the teaching of writing is the transmission of information (Hillocks). This point is important, as it showed up in our own experiment with reducing class size: students in classes with teachers who relied heavily on lecture did not show any benefit from a smaller class. Lectures are ineffective in writing classes; reducing the class size does not suddenly make lecturing an effective teaching strategy.

When we had the opportunity to look closely at class size at our previous institution, we made some surprising discoveries.  One of the major motivations for undertaking the experiment was a sense of frustration, among faculty and graduate students, with graduate student instructors’ progress toward their degrees. Prior to the change in program emphasis, a large number of our instructors used class time to present advice on writing papers as well as to present writing products which students used as models (what Hillocks calls the “presentational” mode, and which he identifies as the least effective method of writing instruction). Perhaps because this method of instruction did not work particularly well for so many students, instructors also relied heavily on individual conferences with students–conferences which took so much time that they necessitated long blocks of time outside normal office hours. The dominance of this mixing of presentational and individualized modes of instruction had fairly predictable consequences.

The accretion of assignments and expectations for the course meant that it was actually impossible to teach the course in the ten hours per week a graduate student was supposed to spend on it. While such a situation is far from uncommon–many programs pay writing teachers a salary that presumes that the course takes much less time than it actually does–it is unethical. It also means that instructors, especially ones with multiple commitments (e.g., graduate students who are also taking courses, part-time instructors with obligations at several campuses, tenure-track teachers facing publication pressures), are encouraged to adopt pedagogies which feel more efficient but which research strongly indicates are less effective (i.e., the presentational mode of teaching, discussed previously).

Graduate student instructors responded to this situation in various ways. According to a survey, as well as faculty observation, many let their own coursework suffer in favor of their teaching. Others simplified assignments, so that the papers were short and simple enough that they could be graded in ten to fifteen minutes apiece. Several instructors essentially abandoned assessing student work and graded students purely on attendance. Many instructors reported spending long hours on teaching, something that, not surprisingly, resulted in frustration–the first year composition course was openly discussed as the least desirable teaching assignment. In this context, it should be clear why we were looking for a method that would reduce the amount of time that instructors spent on their first year composition courses without simply shifting them to quick, but ineffective, methods such as lecturing, drilling, and superficial grading.

When we reduced class size to fifteen for many of the instructors, we found that those instructors did not generally spend less time on the course (instructors in control groups reported spending an average of ten to fourteen hours per week on their courses, while instructors in the sections with reduced class size reported averages of between twelve and fifteen). We also found that many instructors took advantage of the reduced class size to create new assignments, to take more time to comment on papers, to meet more often with students, or to add another project. Such a consequence–instructors taking the opportunity to increase the amount of work in the course–is echoed in at least one other study on class size. The San Juan Unified School District report on the results of the Morgan-Hart Class Size Reduction Act of 1989 concludes that

As a result of smaller classes, students were more actively involved in the instructional process.  This was demonstrated by an increase in the number of student reading and writing assignments, more oral presentations and frequent classroom discussions.  Students also received increased feedback on their English assignments and teachers had time to work with students individually.

One benefit of reducing class size, then, is that instructors appear more willing to experiment with and examine their teaching styles. Whether this is a bug or a feature depends on the program’s goals. Certainly, although they may not have spent less time on the courses, instructors reported much higher satisfaction. Teachers like smaller classes.

But they did not always use the time well. We found that instructors heavily committed to the presentational mode did not effect much change in their students’ writing processes. Similarly, reduced class size did not increase overall student satisfaction if the instructor engaged in the presentational mode.

In conclusion, our experience fits with Sheree Goettler-Sopko’s summary of research on class size and reading achievement. She concludes that “The central theme which runs through the current research literature is that academic achievement does not necessarily improve with the reduction of student/teacher ratio unless appropriate learning styles and effective teaching styles are utilized” (5).

Class Size and Minimal Teaching

George Hillocks long ago showed the importance and superiority of constructivist approaches to the teaching of writing (Research in Written Composition, Teaching Writing as Reflective Practice, and more recently Ways of Thinking, Ways of Teaching).  This means that effective teaching requires an approach which does not set the task of teaching writing as getting students to memorize and understand certain objects of knowledge (the objectivist approach), but as setting students tasks during which they will learn and giving them appropriate feedback along the way.  The more that one engages in constructivist teaching, the more important is class size; the more that the goals and practices of a program are objectivist, the less class size matters. While reducing class size does not guarantee constructivist teaching, increasing class size does prevent it.

One can see this effect simply by thinking about the amount of time for which writing instructors are paid. The assumption at many universities is that each class is supposed to take 8-10 hours per week of instructor time. Instructors spend three hours each week in class, and it is optimistic, but not necessarily irrationally so, to assume that an efficient and highly experienced teacher can prepare for class on a one-to-one basis (that is, that it takes approximately one hour to prepare for one hour of class). A teacher therefore has two to four hours a week left (almost precisely what is required by most universities for office hours). If an instructor has twenty students per class, s/he has, over the course of the semester, 30-60 hours, which comes, at best, to three hours per student for conferences and grading. This situation necessitates cutting the students short on something–short papers which can be graded quickly, cursory grading of student work generally, discouraging students from using office hours. All in all, it means that one cannot do what Pascarella and Terenzini say “effective teachers do” when “They signal their accessibility in and out of the classroom” (652). Simply put, if instructors have to use office hours to grade student work, they cannot signal accessibility. Pascarella and Terenzini say, “They give students formal and informal feedback on their performance” (652), but, if instructors are restricted to three hours of grading per student per semester, they have to minimize the amount of feedback given. In other words, large classes force instructors away from what “we know” to be good practice.
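A minimal sketch of the same arithmetic, under the assumptions stated above (an 8-10 hour weekly budget, a fifteen-week semester, twenty students per class); the code itself is ours and purely illustrative:

```python
# Hours left for conferences and grading under the 8-10 hour-per-week
# assumption described above (15-week semester, 20 students per class).
for weekly_budget in (8, 10):
    leftover = weekly_budget - 3 - 3   # minus class time and 1:1 preparation
    semester = leftover * 15           # hours across the semester
    per_student = semester / 20        # hours per student, all semester
    print(weekly_budget, leftover, semester, per_student)
# 8-hour budget:  2 hours/week, 30 hours/semester, 1.5 hours per student
# 10-hour budget: 4 hours/week, 60 hours/semester, 3.0 hours per student
```

At best, then, each student gets three hours of the instructor’s attention outside class for the entire semester, conferences and feedback included.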

The larger the class, the more the teacher is forced into lecturing. Yet, according to Pascarella and Terenzini,

            Our review indicates that individualized instructional approaches that accommodate variations in students’ learning styles and rates consistently appear to produce greater subject matter learning than do more conventional approaches, such as lecturing. These advantages are especially apparent with instructional approaches that rely on small, modularized content units, require a student to master one instructional unit before proceeding to the next, and elicit active student involvement in the learning process. Perhaps even more promising is the evidence suggesting that these learning advantages are the same for students of different aptitudes and different levels of subject area competence. Probably in no other realm is the evidence so clear and consistent. (646, emphasis added)

If we want instructors to be effective writing instructors, then we have to ensure that they are in a situation which will permit good practice. Reducing class size will not necessarily cause such practice, but it is a necessary condition thereof.

Works Cited

ADE.  “ADE Guidelines for Class Size and Workload for College and University Teachers of English: A Statement of Policy.” Online. http://www.ade.org/policy/policy_guidelines.htm. 1998.

Baron, Dennis E. Grammar and Good Taste : Reforming the American Language. New Haven: Yale University Press, 1982.

Bazerman, Charles. Shaping Written Knowledge: The Genre and Activity of the Experimental Article in Science. Madison: University of Wisconsin Press, 1999.

Berkenkotter, Carol. “Decisions and Revisions: The Planning Strategies of a Publishing Writer.”  Landmark Essays on Writing Process.  Ed. Sondra Perl. Davis, CA: Hermagoras Press, 1994. 127-40.

Braddock, Richard.  “The Frequency and Placement of Topic Sentences in Expository Prose.” On Writing Research: The Braddock Essays, 1975-1998.  Ed. Lisa Ede. New York:  Bedford, St. Martin’s, 1999. 29-42.

Chatman, Steve.  “Lower Division Class Size at U.S. Postsecondary Institutions.”  Paper presented at the Annual Forum of the Association for Institutional Research. Albuquerque: 1996.

Chomsky, Noam. Aspects of the Theory of Syntax. Cambridge: MIT P, 1965.

Davis, Barbara Gross, Michael Scriven, and Susan Thomas.  The Evaluation of Composition Instruction. 2nd. Ed. New York: Teachers College Press. 1987.

Dawkins, John. “Teaching Punctuation as a Rhetorical Tool.” CCC (Dec. 1995): 533-548.

Emig, Janet.  The Composing Processes of Twelfth Graders. Urbana: NCTE, 1971.

Faigley, Lester, and Stephen Witte. Evaluating College Writing Programs. Carbondale: Southern Illinois UP, 1983.

Fish, Stanley. Is There a Text in this Class? Cambridge: Harvard UP, 1982.

Flower, Linda, and John R. Hayes. “The Cognition of Discovery: Defining a Rhetorical Problem.” Landmark Essays on Writing Process. Ed. Sondra Perl. Davis, CA: Hermagoras Press, 1994. 63-74.

Glass, Gene V., and Mary Lee Smith. “Meta-Analysis of Research on the Relationship of Class-Size and Achievement. The Class Size and Instruction Project.”  Washington D.C.: National Institute of Education, 1978.

Goettler-Sopko, Sheree. “The Effect of Class Size on Reading Achievement.” Washington D.C.: U.S. Department of Education, 1990.

Graff, Gerald. Beyond the Culture Wars: How Teaching the Conflicts Can Revitalize American Education. New York: WW Norton, 1993.

Hairston, Maxine. “Working with Advanced Writers.” CCC 35(1984): 196–208.

Hartwell, Patrick. “Grammar, Grammars, and the Teaching of Grammar.” College English 47 (February 1985): 105–27.

Haswell, Richard H.  “Minimal Marking.” College English 45.6 (1983): 166-70.

Hillocks, George.  Research in Written Composition: New Directions for Teaching.  Urbana: NCTE, 1986.

– – -. Teaching Writing as Reflective Practice: Integrating Theories. New York: Teachers College P., 1995.

– – -. Ways of Thinking, Ways of Teaching. New York: Teachers College P., 1999.

Huot, Brian.  “Toward a New Theory of Writing Assessment.” CCC 47.4 (1996): 549-66.

Knoblauch, C. H., and Lil Brannon. “On Students’ Rights to Their Own Texts: A Model of Teacher Response.” College Composition and Communication 33 (1982): 157-66.

Kolln, Martha. Rhetorical Grammar: Grammatical Choices, Rhetorical Effects. 4th Ed. New York: Pearson, 2002.

Labov, William. The Logic of Non-Standard English. Champaign: National Council of Teachers of English, 1970.

Lanham, Richard. Revising Prose. 4th ed. New York: Pearson Longman, 1999.

Miller, Susan. Textual Carnivals: The Politics of Composition. Carbondale: Southern Illinois UP, 1991.

NCTE College Section Steering Committee. “Guidelines for the Workload of the College English Teacher.” Online. http://www.ncte.org/positions/workload-col.html. 1998.

Ohmann, Richard. English in America: A Radical View of the Profession. New York: Oxford UP, 1976.

Pascarella, E. T., and P. T. Terenzini. How College Affects Students: Findings and Insights from Twenty Years of Research. San Francisco: Jossey-Bass, 1991.

Richards, I. A. The Meaning of Meaning: A Study of the Influence of Language upon Thought and of the Science of Symbolism. 8th ed. New York: Harcourt, Brace & World, 1946.

Rose, Mike. Lives on the Boundary. New York: Penguin, 1990.

Sommers, Nancy. “Revision Strategies of Student Writers and Experienced Adult Writers.” CCC 31 (December 1980): 378–88.

San Juan Unified School District. “Class Size Reduction Evaluation: Freshman English, Spring 1991.” Washington D.C.: U.S. Department of Education, 1992.

Shaughnessy, Mina. Errors and Expectations. New York: Oxford UP, 1979.

Slavin, Robert. “Class Size and Student Achievement: Is Smaller Better?” Contemporary Education 62 (Fall 1990): 6-12.

Smitherman, Geneva. “‘Students’ Right to Their Own Language’: A Retrospective.” English Journal 84.1 (1995): 21-27.

Swales, John, and Hazem Najjar.  “The Writing of Research Article Introductions.”  Written Communication 4.2 (April 1987): 175-91.

Tomlinson, T. M. “Class Size and Public Policy: Politics and Panaceas.” Educational Policy 3 (1989): 261-273.

Trail, George Y. Rhetorical Terms and Concepts: A Contemporary Glossary. New York: Harcourt, 2000.

Williams, David D., et al.  “University Class Size: Is Smaller Better?” Research in Higher Education 23.3 (1985): 307-318.

Williams, Joseph.  “The Phenomenology of Error.”  CCC 32 (May 1981): 152-68.

—. Style: Ten Lessons in Clarity and Grace. Chicago: U. Chicago P., 1997.

[1] Unhappily, in our experience, the expectation is that instructors should spend more than forty hours per week on their jobs, or cut corners in various ways. For instance, it is often assumed that office hours can be used for course preparation or grading, but that amounts to an official policy that office hours are not times when students can expect the full attention of the instructor. Hence, when upper administrators say that office hours should not be counted separately from course preparation, the correct answer is, “Put that in writing.”

Class size in college writing (an old paper)

[This was co-authored with Reinhold Hill in 2007, based on research done in the late 90s at our then-institution. People have sometimes cited it, although it wasn’t published, so I’m posting it.]

The issue of class size in first-year college writing courses is of considerable importance to writing program administrators.  While instructors and program administrators generally want to keep classes as small as possible, keeping class size low takes a financial and administrative commitment which administrators are loath to make in the absence of clear research.  While the ADE and NCTE recommendations of fifteen students are persuasive to anyone who has taught first-year writing courses, they often fail to persuade administrators who are looking for research-based recommendations.  And, in actual fact, class sizes at major institutions range from ten to twenty-five students.

Unfortunately, anyone looking to the available research on class size in college writing courses is likely to come away agnostic.  While there is considerable research on class size and college courses in general, there are several important reasons that one should doubt its specific applicability to college writing courses.  First, much of the general research on class size includes students of all ages.  Second, the research often involves the distinction between huge and simply large courses, such as between forty and two hundred students,  whereas most writing program administrators are concerned about the difference between fifteen and twenty-five students. Third, the courses involved in the studies often have very different instructional goals from first year writing courses.  Finally, the assessment mechanisms are often inappropriate for evaluating effectiveness and student satisfaction in writing courses.

In other words, the NCTE recommendations for writing courses are not based on research, and the research on class size in general cannot yield recommendations.

At the University of Missouri, we were given the opportunity to engage in some informal experimentation regarding class size.  While the limitations of our own research mean that we have not resolved the class size question, our results do have thought-provoking implications for class size and program administration.  In brief, our work suggests that reducing class size, while very popular among instructors, appears not to result in marked improvement in student attitudes about writing unless the instructors use that reduction in class size as an opportunity to change their teaching strategies.  In other words, we seem to have confirmed what Daniel Thoren has concluded about class size research: “Reducing class size is important but that alone will not produce the desired results if faculty do not alter their teaching styles.  The idea is not to lecture to 15 students rather than 35” (5).  If, however, instructors are able to take advantage of the smaller class size, then even a small reduction can result in students perceiving considerable improvement in their paper writing abilities.  We do not wish to imply that reducing class size should not be a goal for writing program administrators, but as a goal in and of itself it is not enough; we need to be aware that pedagogical changes must be initiated together with reductions in class size.

1. Institutional Background

Our study, largely funded by the Committee on Undergraduate Education, was the result of recommendations made by a Continuous Quality Improvement team on our first year composition course (English 20).  That team was itself part of increased campus, college, and departmental attention to student writing.  As a result of that attention, the English 20 program underwent philosophical and practical changes.

The most important change was probably the shift in program philosophy. While there remains some variation among sections, the philosophy of the program as a whole is to provide an intellectually challenging course in which students write several versions of researched papers on subjects of scholarly interest about which experts disagree.  Students write and substantially revise at least three papers, each of which is four to five pages long.  There are four separate but connected goals in these changes.  First, for instructors, our goal is to provide a teaching experience which will make the teaching of first-year composition appropriate preparation for teaching writing intensive courses in their area.  Hence, instructors need to develop their own assignments.

Second, for students, one goal of the course is to enable students to master the delicate negotiation of self and community necessary for effective academic writing.  As Brian Huot has noted, research in writing assessment indicates that students tend to be fairly competent at expressive writing, but have greater difficulty with “referential/participant writing” (241).  Our sense was that this assessment is especially true of students entering the University of Missouri.  They are quite competent at many aspects of writing, but they have considerable difficulty enfolding research into an interpretive argument.  Thus, we did not need to teach The Research Paper that Richard Larson has so aptly criticized; nor did students need instruction in personal narrative.  Instead, students needed practice with assignments which called for placing oneself in a community of experts who are themselves disagreeing with one another.  Achieving this goal was nearly indistinguishable from achieving the goal described above for instructors–assisting instructors to write assignments which called for an intelligent interweaving of research and interpretation into a college-level argument would necessarily result in students’ getting experience with that kind of assignment.

Our third goal was to teach students the importance of a rich and recursive writing process, one which involves considerable self-reflection, attention to the course and research material, and substantial revision in the light of audience and discipline expectations.  Research in composition over the last thirty years suggests that such attention to the writing process is the most important component of success in writing, especially academic papers (Flower and Hayes, Berkenkotter, Emig).

It should be briefly explained that this is not to say that the program endorses what is sometimes called a “natural process” mode of instruction–that term is usually used to describe a program which is explicitly non-directional, in which students write almost exclusively for peers and on topics of their own choosing, and which endorses an expressivist view of writing.  In fact, attention to the writing process does not necessarily preclude the instructor taking a “skills” approach to writing instruction (that is, providing exercises or instruction in what are presumed to be separable aptitudes in composition) but it does necessitate course design with careful attention to paper topics.

And this issue of modes of instruction raises our fourth goal–to enable instructors to use what George Hillocks calls the “environmental” mode of instruction.  When we began making changes to the first year composition program, it was our impression that the dominant mode of instruction was what Hillocks calls the “presentational” mode, which

is characterized by (1) relatively clear and specific objectives…(2) lecture and teacher-led discussion dealing with concepts to be learned and applied; (3) the study of models and other material which explain and illustrate the concept; (4) specific assignments or exercises which generally involve imitating a pattern or following rules which have been previously discussed; and (5) feedback following the writing, coming primarily from teachers.  (116-117)

It is important to emphasize that this mode does not depend exclusively on lecture.  A class “discussion” in which the instructor guides students through material by asking questions intended to elicit specific responses is also in the presentational mode.  Insofar as we can tell, a large number of instructors used class time to present advice on writing papers as well as to present writing products which students might use as models.  Instructors then used individual conferences in order to discuss strategies for revising papers.

The dominance of this mixture of presentational and individualized modes of instruction in our program had two obvious consequences.  First, it was exhausting for instructors.  An instructor’s time was generally split between the equally demanding tasks of preparing the information to be presented in class and engaging in individual conferences with students. The standardized syllabus recommended four papers; each class had eighteen to twenty students; many of our instructors teach two classes per semester.  Instructors were forced to choose between not providing individual instruction for students on each paper or spending a minimum of eighty hours per semester in conference with students.  If instructors are also spending six hours per week preparing class material, and three hours per week in class, they are spending one hundred seventy-five hours per semester per class on their teaching–not including the time spent grading and commenting on papers.  Standards for good standing and recommendations regarding course load assume that such students are spending only one hundred fifty hours per semester on each course.

It should be emphasized that shifting instructional mode and changing the syllabus to only three papers cannot solve the problem of overworking instructors.  Class preparation and time in class account for one hundred thirty-five hours per semester; if instructors spend forty-five minutes grading a first submission and only fifteen minutes grading a second submission, an enrollment of twenty students brings their commitment to one hundred ninety-five hours per semester per course, and this amount of time does not include any conferences.
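The workload arithmetic above can be verified directly. A minimal sketch, assuming the fifteen-week semester implied by the 135-hour figure (nine hours a week of class and preparation); all other numbers are taken from the text:

```python
# Workload under the revised three-paper syllabus, per the paragraph above.
# ASSUMPTION: a fifteen-week semester (consistent with 9 hrs/week = 135 hrs).

WEEKS = 15
PREP = 6           # hours per week preparing class material
IN_CLASS = 3       # hours per week in class
STUDENTS = 20
PAPERS = 3         # revised syllabus: three papers, each submitted twice
FIRST_GRADE = 0.75   # 45 minutes grading a first submission
SECOND_GRADE = 0.25  # 15 minutes grading a second submission

base = (PREP + IN_CLASS) * WEEKS                       # 135 hours
grading = PAPERS * STUDENTS * (FIRST_GRADE + SECOND_GRADE)  # 60 hours
total = base + grading                                 # 195 hours, before conferences
good_standing = 10 * WEEKS                             # the 150-hour assumption
print(base, grading, total, total - good_standing)
```

Even with no conferences at all, the total exceeds the 150-hour good-standing assumption by 45 hours per course.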

An informal survey of our instructors indicated the consequences of these conflicting expectations: some instructors did minimal commenting on papers, some permitted their own status as students to suffer, others encouraged students to write inappropriately short papers, and all were overworked.

The second consequence of the programmatic tendency to alternate between presentational and individualized modes of instruction has to do with Hillocks’ own summary of research on modes of instruction.  Hillocks concludes that the presentational mode of instruction is not as effective as what he calls the “environmental mode”: “On pre-to-post measures, the environmental mode is over four times more effective than the traditional presentational mode” (247). In other words, our instructors were working very hard in ways that may not have been the most effective for helping students write better papers.

So, we wanted instructors to use the “environmental” mode of instruction, which

is characterized by (1) clear and specific objectives…(2) materials and problems selected to engage students with each other in specifiable processes important to some particular aspect of writing; and (3) activities, such as small-group problem-centered discussions, conducive to high levels of peer interaction concerning specific tasks….Although principles are taught, they are not simply announced and illustrated as in the presentational mode.  Rather, they are approached through concrete materials and problems, the working through of which not only illustrates the principle but engages students in its use.  (122)

In the environmental mode, one neither lectures to students, nor does one simply let class go wherever the students want.  Instead, the instructor has carefully prepared the tasks for the students–thinking through very carefully exactly what the writing assignments will be and why.

2. Other Research on Class Size and College Writing

The considerable body of research on class size is largely irrelevant to first-year composition.  Glass et al.’s 1979 meta-analysis of 725 previous studies, for instance, remains one of the fundamental studies on the subject.  Yet it includes a large number of studies on primary and secondary students; hence, there is reason to wonder what role age plays in the preference for smaller class size.  A more recent, and frequently quoted, meta-analysis of college courses, which claims that class size has no effect on student achievement as measured by final examination scores, begins with classes as small as 30 to 40 (Williams et al 1985).  But this study does not appear to have included a writing course.  Considering that the study was restricted to courses with “one or more common tests across sections” (311), it is unlikely to have included a composition course; if it did, it was one which presumed that improvement in writing results from learning information which can be tested–a problematic assumption.

A more fundamental problem–because it is shared with numerous other studies of class size–is the measurement mechanism.  That is, examinations are not appropriate measures of student achievement in courses whose goal is to teach the writing of research papers (see Huot 1990; CCCC Committee on Assessment 1995; White 1985; White and Polin 1986); hence, any study which relies on examination grades is largely irrelevant because of its measurement mechanism.

Finally, there are good reasons to doubt the implicit assumption that course goals and instructional method are universal across a curriculum.  Feldman’s 1984 meta-analysis of 52 studies does not list any study which definitely involved a writing class; most of the studies, on the contrary, definitely did not include any such course.  Smith and Cranton’s 1992 study of variation in student perceptions of the value of course characteristics (including class size) concludes that those perceptions “differ significantly across levels of instruction, class sizes, and across those variables within departments” (760).  They conclude that the relationships between student evaluations and course characteristics “are not general, but rather specific to the instructional setting” (762).

This skepticism regarding the ability to universalize from research is echoed in Chatman who argues that class size research indicates that “instructional method should probably be the most important variable in determining class size and should exceed disciplinary content, type and size of institution, student level, and all other relevant descriptive information in creating logical, pedagogical ceilings” (8).  And, indeed, common sense would suggest that there is no reason to assume that research on courses whose major goal is the transmission of information applies very effectively to writing courses.

3. Methods and Results of Our Research

We had two main assessment methods.  Because we were concerned about reducing the time commitment of teaching English 20, we asked instructors to keep time logs.  The mainstay of our initial method of assessment was a set of questionnaires given to students at the beginning and end of the semester.  While questionnaires are a perfectly legitimate method of program assessment, they do not provide as complete a picture of a program as a more thorough method would (for more on the advantages and disadvantages of questionnaires in program assessment, see Davis et al 100-107).  Given the budget and time constraints, however, we were unable to engage in those methods usually favored by writing program administrators for accuracy, validity, and reliability, such as portfolio assessment.  We are relying to a large degree on self-assessment, which, while not invalid, has obvious limitations.  Nonetheless, the results of the questionnaires were informative.

Because the program goals emphasize the students’ understanding of the writing process, the questionnaires were intended to elicit any changes in student attitude toward the writing process.  We were looking for confirmation of three different hypotheses.

First, there should be a change in their writing process.  Scholarship in composition suggests that we will find that students begin with a linear and very brief composing process (writing one version of the paper which is revised, if at all, at the lexical level).  If English 20 is fulfilling its mission, that second set of answers will indicate that the majority of students end the course with a richer sense of the writing process–they will revise their papers more, their writing processes will lengthen, and they will revise at more levels than the lexical.

Second, their hierarchy of writing concerns should change.  According to Brian Huot, composition research indicates that raters of college level writing are most concerned with content and organization (1990, 210-254). In various studies which he reviews, he concludes that readers, while concerned with mechanics and sentence structure, consider them important only when the organization is strong (1990, 251).  That is, readers of college papers have a hierarchy of concerns, in that they expect writers to be concerned with mechanics, correctness, and format (sometimes called “lower order concerns”), but that they expect writers to spend less time on those issues than on effectiveness of organization, quality of argument, appropriateness to task, depth and breadth of research, and other “higher order concerns.”

Beginning college students, however, often have that hierarchy exactly reversed: they are often under the impression that mechanics, format, and sentence-level correctness matter most to their readers, and that the argument (or substance of the paper) deserves much less attention.  Hence, if English 20 is succeeding, there should be a shift in student ranking of audience concerns.  That is, their beginning questionnaire answers will indicate that they pay the most attention to lower order concerns and least attention to higher order considerations (whether or not the paper fulfills the assignment; whether the paper is well-researched; whether the evidence is well-presented; whether the organization is effective).  At the end of the semester, they should demonstrate a more accurate understanding of audience expectations–not that they have dropped lexical or format concerns, but that they understand those concerns to be less important for success than the higher order concerns.

Third, there should be variation in student and teacher satisfaction with the courses.  This shift is more difficult to predict than the other hypotheses, but it does make sense to expect that the sections in which students receive greater personal attention would be more satisfying for both instructors and students.  In this regard, we expected to confirm what a report from the National Center for Higher Education Management Systems has identified as “an overwhelming finding”: that students believe they learn more in smaller classes, and that they are far more satisfied with such courses.

As with many studies, our results are most useful for suggesting further areas of research.  One area should be mentioned here.  The very constraints of the assessment method–a quantitative and easily administered method–meant that we were asking students to use language other than what they might have.  Open-ended interviews with students would almost certainly elicit much richer results.

One advantage of our study of class size was that it was part of experimenting with various changes in our program; thus, a large number of sections participated in the study as a whole.  Each semester, we had about twenty sections participating in the study in some form or another, and each semester at least four were held to an enrollment of 15 students.[i]  We also designated at least four sections “control” groups, meaning that we did not reduce class size or consciously make any of the other modifications to English 20 we were contemplating.

An important limitation of our experiment should be mentioned before discussing the results. We ran the experiment over three semesters (WS97, FS97, and WS98), but were only able to use the survey results from the second and third semesters (because we changed the survey between the first and second semester).  In the first semester that we did the experiment, we made a conscious attempt to balance each group in terms of instructor experience and subjective judgments regarding the quality of their teaching.  Given the intricacies of scheduling, however, we were unable to maintain the balances over the next two semesters of the experiment.  This imbalance obviously affected the experimental results in ways that will be noted.

Reducing class size did not have markedly good results in terms of the time instructors spent on the course.  In FS97, instructors teaching the smaller sections averaged just under twelve hours per week, but they averaged just under fifteen hours per week in WS98.  The control groups reported spending an average of ten and fourteen hours respectively.  Thus, reducing class size did not reduce the amount of time that instructors spent on their courses.

The instructor surveys indicate some reasons that their time commitment might not have fallen.  In FS97, for instance, the teachers mentioned that having a smaller class size inspired them to make changes to their teaching–creating new assignments, taking longer to comment on papers, conferring with students for longer periods of time or more often, adding an extra paper.  In other words, the instructors took the opportunity to try something that a class size of twenty had previously dissuaded them from trying.

Obviously, this experimentation on the part of the instructors would have had some kind of impact on our own experiment, but it is impossible to predict what it would have been.  It may well be that we would have had very different results with the same instructors had they continued with a reduced class size for a second semester.  Working with that class size for the second time, they might have made different decisions about how to spend their time.  It’s also possible that this experimentation accounts for some of the unpredicted results in regard to student satisfaction and writing process, but, again, it is impossible to know.  Thus, one conclusion which we can draw from our own experiment is that one is likely to get better results by having the same instructors work with a reduced class size for several semesters in a row.

As was mentioned earlier, students were given a survey at the beginning and the end of the semester, eliciting their views of the relative importance of various aspects of the writing process, the amount (and kind) of revision in which they typically engaged, and their understanding of the expectations of college teachers. Most were comparison questions, asking the same question about the students’ high school experiences at the beginning of the semester that was then asked about their English 20 experience at the end.  For instance, students were asked “What aspects of a paper were most emphasized in your high school English course?” at the beginning of the course and “What aspects of a paper were most emphasized in your English 20 course?” at the end of the course. Students were asked to select the five aspects of writing a paper most emphasized in high school and the five most emphasized in their English 20 classes.  The results from FS97 are shown in the table below.  The areas of emphasis are listed in order, and the number is the percentage of students who listed that area among their five.  One term which should be explained is “Thesis statement” (TS), which we take to mean, because of the emphasis of our program, revising the central argument, and not simply rewriting the last sentence of the introduction.

FS97

HS                          | CONTROL                 | CLASS SIZE
Organization          71.66 | Drafting           67.4 | Peer Review            86.5
Grammar               61.92 | Logic              65.3 | Revising TS            71.2
Logic and Reasoning   57.38 | Peer Review        65.3 | Logic                  61.5
Format                54.8  | Organization       57.1 | Revising Organization  53.9
Revising one's TS     51.78 | Revising TS        51   | Organization           48.1

WS98

HS                 | CTRL                              | CLASS SIZE
Grammar      73.7  | Peer Review                 87.5  | Organization  85.7
Organization 67.7  | Organization                75    | Peer Review   85.7
Logic        60    | Logic                       65.9  | Logic         66.7
Research     55.9  | Revising TS                 59.1  | Research      61.9
Format       54.4  | Revising One's Organization 48.9  | Revising TS   59.1

The results only partially confirmed our hypotheses.  We had predicted that the students would indicate that their high school writing courses put the most emphasis on grammar, format, and outlining and the least emphasis on revision.  We discovered, however, that high school instructors, while putting much emphasis on lower order concerns (e.g., format and grammar), also emphasize some higher order concerns (e.g., organization and reasoning).   We also discovered more variation between semesters than expected.  The WS98 results were much the same, with the five areas of most emphasis in high school being (in order) grammar, organization, logic and reasoning, research, and format; revising one’s thesis was second from last (with only 33.6% of students noting it as an area of emphasis in high school).

Our hypotheses were partially confirmed in that, in both semesters, the high school courses put the least emphasis on any form of revision: revising one's grammar, revising one's organization, or engaging in peer review.  There was consistently a shift from high school in terms of greater emphasis on revision–it is interesting to note, for instance, that students perceived their high school courses as putting considerable emphasis on organization (71.66 and 61.7), but almost none on revising organization (18.9).  Similarly, while students noted that grammar was emphasized in high school (73.7), revising one's grammar was not (36.5).  In contrast, while English 20 is perceived as putting much less emphasis on grammar and usage (24.9), that number is much closer to the number of students who perceived an emphasis on revising one's grammar and usage (25).  We infer that there is considerable variation among high schools–more than we had predicted–but that most high schools emphasize grammar and format more than English 20 does, and that English 20 emphasizes revision more than most high schools do.

It is also interesting to note that students tend to report considerable experience with group work in high school courses.  Yet, students consistently reported little high school emphasis on peer review.  This discrepancy suggests that high school groups are not being used for peer review, or that–despite being put in these groups consistently–students do not perceive the peer reviews as important.

Students were also asked what aspects of a paper college teachers think most important by selecting four out of eight possibilities.  We had expected that this question would show a shift from lower order to higher order concerns–that, for instance, the method of library research would be rated high at the beginning of the semester, but would be replaced by the sources and relevance of evidence.  As with the previous table, the results from FS97 are presented in order, with the number representing the percentage of students who selected that aspect among their four.

FS97

HS                                | CONTROL              | CLASS SIZE
Clarity of org 65.8               | Clarity 71.4         | Method 80.8
Correct grammar and usage 57.28   | Logic 65.3           | Persuasiveness 80.8
Logic and reasoning 57.38         | Persuasiveness 55.1  | Clarity 71.2
Persuasiveness of argument 55.12  | Grammar 36.7         | Logic 61.5
Mastery of subject 54.7           | Mastery 36.7         | Sources 50

WS98

HS                   | CTRL                         | CLASS SIZE
Clarity of org 69.5  | Clarity of org 78.4          | Clarity of org 76.2
Logic 60             | Persuasiveness 71.6          | Logic 66.7
Persuasiveness 58.5  | Logic 65.9                   | Persuasiveness 61.9
Mastery 54.8         | Grammar/format/sources 34.1  | Mastery 50
Grammar 50.5         |                              | Grammar 47.6

What is possibly most interesting about these charts is what is indicated about the high school preparation.  Students are relatively well informed about college instructors’ expectations before they begin the course; what little change there is in the control group in the first semester (and the almost complete lack of change in the second semester) suggests that simply being in college for one semester will inform students’ audience expectations.

The second most interesting result is that the reduced class size was a distinct failure in the first semester by our own program goals.  We did not want instructors emphasizing the method of library research; it was positively dismaying to see that listed as the greatest area of emphasis.  This result is typical of what Faigley and Witte have called unexpected results, and it is one consequence of how instructors were selected for the study.

Because scheduling of graduate students is often a last-minute scramble, there were no specific criteria for participating in the reduced class size experiment.  In FS97, one instructor had participated in considerable training (Adams), one was still using a version of the old standardized syllabus and had participated in no training since her entry into the graduate program several years earlier (Chapman), one was taking comprehensive exams and had engaged in only the required training (Brown), and one had participated in some training above what was required (Desser).  Adams generally engaged in the environmental mode; Chapman and Brown worked almost exclusively in the presentational mode; Desser worked largely in the environmental mode, but with some reliance on the presentational.  Similarly, the instructors had a range of years of experience–from two to nine.  As will be discussed below, the number of years of experience had no effect on the results, but the extent to which a person participated in training did.  In regard to the question discussed above, for instance, one can see the range of training reflected in the range of answers: Adams had only 9 percent of students list method of library research as important; Brown had 37.5; Chapman had 41.6; Desser had 30.77.  In other words, the amount that a person participated in departmental training was reflected in the amount that their course reflected departmental goals.

As mentioned above, the exigencies of scheduling prevented our being able to balance the study groups.  Thus, what we generally called the control group was not necessarily analogous to the other sections in terms of instructor quality, experience, or preparation.  We have, therefore, also included the average number for each question–that is, the average number for all eighteen sections included in the study.

Students were asked about their perception of any change in the quality of their papers.  In asking this question, we did not assume that students were necessarily accurate judges of the quality of their papers, but we did think that their answer would provide a more specific way of evaluating the course than our course evaluations provided.  That is, whether or not they think their papers are better seems to us a useful way for thinking about student satisfaction.  The number represents the percentage of students who checked that item.  “Average” means the average number for all eighteen sections participating in the study.

FS97

         | Substantially better | Somewhat better | Same  | Somewhat worse | Substantially worse
control  | 40.1                 | 44.9            | 4.1   | 0              | 0
size     | 21.2                 | 55.8            | 15.4  | 3.9            | 0
average  | 34                   |                 |       |                |

WS98

      | Sub better | Some better | Same  | Some worse | Sub worse
ctrl  | 23.9       | 53.4        | 15.9  | 2.3        | 0
size  | 16.7       | 61.9        | 11.9  | 4.8        | 0

Here again one sees the results of how instructors were selected to participate.  If one looks at this same table for FS97 in regard to individual instructors, one sees a wide variation in student reaction.

         | Sub better | Some better | Same  | Some worse | Sub worse
Adams    | 0          | 54.5        | 36.3  | 0          | 0
Brown    | 0          | 50          | 37.5  | 12.5       | 0
Chapman  | 25         | 50          | 25    | 0          | 0
Desser   | 38.4       | 46.1        | 15.3  | 0          | 0

It is striking that the different sections had very nearly the same percentage of students who reported some improvement–where one sees the greatest difference is in the number of students who reported substantial improvement.  At least with these four instructors, the more training the instructor had, the more likely students were to report substantial gains.

Only one of these instructors participated in the study the next semester–Desser.  In WS98, Desser was in the control group, and the results were as follows:

No answer | Sub better | Some better | Same  | Some worse | Sub worse
11.1      | 5.5        | 55.5        | 22.2  | 5.5        | 0

Another instructor, Ellison, participated both semesters.  He was in another kind of experimental group fall semester (he met regularly with a faculty member and a group of instructors to discuss assignments, teaching videos, and so on) and reduced class size WS98.  One sees a similar pattern in the difference between the two semesters for his students–when he had a reduced class size, more students reported substantial and some improvement:

      | Sub better | Some better | Same | Some worse | Sub worse
FS97  | 15.7       | 57.8        | 21   | 0          | 0
WS98  | 20         | 70          | 0    | 10         | 0

Granted, it is dangerous to speculate on the basis of two instructors, but it is intriguing that both received noticeably better results with a reduced class size than without one.  If these instructors are typical, then one can conclude that the same person will get better results with a reduced class size.

There was not always a correlation between amount of training and survey results. For instance, students were asked whether their enjoyment of the paper writing process had changed.  This question was intended as a slightly different way to investigate student satisfaction–ideally, the course would improve both the students’ ability to write college-level papers at the same time that it increased their enjoyment of writing. We were unsure whether or not the question would elicit useful information, however, as we predicted it might be nothing more than an indication of the rigor of the instructors’ grading standards–that students might enjoy writing more in courses with higher GPAs.

         | Substantially more | Somewhat more | Same  | Somewhat less | Substantially less
Adams    | 0                  | 27.2          | 63.6  | 0             | 0
Brown    | 0                  | 28.7          | 62.5  | 12.5          | 6.25
Chapman  | 0                  | 41.6          | 41.6  | 16.6          | 0
Desser   | 15.3               | 46.1          | 38.4  | 0             | 0
average  | 7.35               |               |       |               |

There is not quite as close a correlation between training and results as there was in regard to improved ability, but it is interesting that the instructors with more training did not have any students reporting a decrease in enjoyment.  Similarly, the instructor with the least training–an instructor who tends to rely on the presentational mode–had no students report that their papers were substantially better after taking English 20, and had the lowest number of students report that they received substantially more (12.5) or somewhat more (12.5) attention in English 20 than they had thought they would get.

We had assumed that students in the sections with fewer students would report more individual attention, but this was not necessarily the case.  The table below shows the results for FS97 and the results for Desser and Ellison for both semesters.

                    | Sub more | Some more | Same  | Some less | Sub less
ctrl                | 38.8     | 38.8      | 14.3  | 0         | 0
average             |          |           |       |           |
Class size          | 34.6     | 19.2      | 32.7  | 9.6       | 1.9
Adams               | 27.2     | 45.4      | 18.1  | 0         | 0
Brown               | 12.5     | 12.5      | 56.2  | 18.7      | 0
Chapman             | 33.3     | 16.6      | 25    | 16.6      | 8.3
Desser FS97         | 69.2     | 7.6       | 23    | 0         | 0
Desser WS98         | 22.2     | 38.8      | 33.3  | 5.5       | 0
Ellison FS97        | 31.5     | 40        | 30    | 0         | 0
Ellison WS98 (red)  | 31.5     | 36.8      | 26.3  | 0         | 0

Here one sees no striking correlation to amount of training, nor to instructional method.  We speculate that this lack of correlation results from the more important factor being the amount that the instructor engages in individual conferences with students.  While one does see a striking difference for Desser, there is no change for Ellison (the apparent change is simply the result of 5.2% of his WS98 students not answering that question).  The (highly tentative) inference is that reducing class size will not necessarily result in any group of instructors giving students more individual attention than any other group of instructors might do, but it may result in particular instructors doing so.

This range of results among instructors with lower class sizes indicates our most important result: reducing class size does not increase overall student satisfaction if the instructor uses the presentational mode.  Reducing class size might, however, increase student satisfaction and confidence on an instructor-by-instructor basis.

The final table that has provocative results is in response to the question: "If your writing process has changed, in what areas have you seen the greatest change?" Students were asked to select five.  The table is arranged in descending order of frequency in the control group.  The number represents the percentage of students who selected that area among their five.

FS97                     | CTRL  | PLA   | Class size | Close | Wkshp
Organization             | 57.1  |       |            |       |
Library research         | 51    |       |            |       |
Revise TS                | 44.9  |       |            |       |
Logic                    | 42.9  |       |            |       |
Drafting                 | 30.6  | 27.1  | 28.9       | 45.6  | 27.1
Peer review              | 30.6  |       |            |       |
Revise org               | 30.6  |       |            |       |
Time management          | 26.5  |       |            |       |
Knowledge of format      | 24.5  | 18.6  | 26.9       | 29.4  | 20.8
Write elegant sentences  | 20.4  |       |            |       |
Computer use             | 14.3  |       |            |       |
Internet research        | 14.3  |       |            |       |
Knowledge of grammar     | 12.2  |       |            |       |
Reading course material  | 4.1   |       |            |       |
Reading                  | 2     |       |            |       |
Outlining                | 2     |       |            |       |

WS98

ctrl           | Close sup       | size           | wrkshp
Org 48.9       | Logic 48.9      | Rev org 45.2   | Org 41.7
Rev TS 42.1    | Org 46.8        | Logic 42.9     | Peer rev 41.7
Peer rev 40.9  | Rev TS          | Org 38.1       | Rev org 41.7
Rev org 36.4   | Rev org         | Rev TS 35.7    | Logic 40
Lib 28.4       | Computers 27.7  | Peer rev 28.6  | Rev TS 36.7

The survey results as a whole did not indicate important gains in the reduced class size sections.  For instance, on average, the students in FS97 did not feel that they received more individual attention than the students in the control group did.  They showed slightly more shifting from lower order to higher order concerns on the whole than did students in the control sections, but fewer rated their paper writing as "substantially better." At the beginning and end of the semester, we asked students how much of a paper they typically revised; we expected that students in the smaller classes would report engaging in more revision than students in the control groups.  But that was not the case.  At the beginning of the semester, 22.4% of students in the reduced class size sections reported changing under 10% of a paper between drafts, compared to 16.1% of students in the control groups.  At the end of the semester the results were 9.6% and 4.1% respectively.  The largest gain for the reduced class size group was in the 11-25% range (from 41.4 to 51.9) and, for the control group, in the 26-50% range (from 28.6 to 40.8).  Similarly, the control group had a larger number of students who reported that they revised "substantially" than did the sections whose class sizes were reduced (22.5 compared to 17.3).

Students perceived that the greatest emphasis in the course was on peer review; revising the thesis; logic and reasoning; revising organization; organization; format; and drafting.  They saw the greatest change in their writing processes in regard to peer review; organization; thesis revision; organization revision; and library research.  In other words, the students saw the greatest changes in at least one area (library research) that they did not think the instructors had especially emphasized.  Most discouraging, 3.9% of the students thought that the papers they were writing after taking English 20 were somewhat worse, and 15.4% thought they were the same.  (None of the students in the control group thought their papers were somewhat worse, and only 4.1% thought their papers had remained the same.)

Looking at the results for individual instructors, however, has very different implications.  Instructors teaching the reduced class size sections did not necessarily have any training, and they were not required (or even encouraged) to change their teaching practices to take advantage of the reduced class size.  Instructors who taught reduced class size sections and who did have some kind of previous training, however, had markedly different results.  If an instructor engages in the presentational mode, as some of our instructors did, then there is no obvious improvement for the students in being in a smaller class.

There is, however, some reason to doubt the assumption underlying the presentational mode.  For instance, according to Hillocks, research on grammar, usage, and correctness in student writing indicates that knowledge of grammatical rules has little or no effect on correctness in student performance.  That is, transferring information about writing does not improve writing itself.

While lecturing has repeatedly been demonstrated to be of little use in teaching writing, there is no reason to conclude that it is useless in other sorts of courses.  Common sense suggests that a good lecturer can lecture equally well to 15 students or 50 students–indeed, the research on class size indicates that the ability to present and communicate material in an interesting way may well be more important than class size for lecture courses (see, for instance, Feldman 1984).  The environmental mode of instruction, on the contrary, is almost certainly affected by class size.  As McKeachie has said, “The larger the class, the less the sense of personal responsibility and activity, and the less the likelihood that the teacher can know each student personally and adapt instruction to the individual student” (1990, 190).

[i]. The other kinds of sections were: ones with an attached peer-learning assistant; ones whose instructors met regularly with a faculty member to discuss the course; ones in which students met exclusively in small groups with fewer required contact hours per semester.

Writing Centers and copy-editing

Faculty and administrators at UT are extraordinarily supportive of the University Writing Center, something I attribute to the previous directors who set in place a good culture and set of processes. We get fan mail, financial support, and faculty who cheerfully run workshops for us. And our end-of-consultation and follow-up surveys show that students appreciate what we do—98% of 13k surveys say they love what we’re doing.

But what about that 2%?[1] And what if I include faculty who grump at me in meetings or email?

One really interesting complaint, that comes from faculty and students, is that we won’t “edit” student writing. And what they mean by “edit” is go through a paper and write in the “correct” version of every “error” (what is more accurately called “copy-editing”).[2] These people (again, less than two percent of our visitors) want the Writing Center to be, not just directive, but red-pen editors. And they want it because they care about writing, but they care in different ways:

    • They just want someone to edit their writing because editing is hard.
    • Some people believe that editing (or “writing” as they call it) is a specialized skill set they don’t need to acquire—knowing the correct rules of grammar is a kind of knowledge unrelated to (and less important than) content knowledge.
    • They think sentence-level correctness is important, and easy to convey.
    • They think careful attention to sentence-level decisions is important, and they can point to a time when someone harshly editing their writing opened a new world.
    • They want to read error-free writing.

I appreciate that these people want the UWC to do something that they think will make writing better.

What they don’t understand is that there is a field of research on writing center practices and, in fact, on directive vs. non-directive methods of commenting. There is also a long history of practice. People in writing centers want to improve students’ writing—it’s our mission, passion, and reason for going to work. If red-pen copy-editing of consultees’ work resulted in students being better writers, we’d do it. We don’t because experience and research show that, despite its seeming like the obviously right choice, it doesn’t really help most students.

When I was hired at the Berkeley Writing Center, in the late 70s, there was no training. They hired people who wrote good papers with no grammatical errors, and we met once a week for the first year or so to talk about what was happening in our consultations.

I thought my job was telling people how to change their papers, so I did. That’s what most of us did, and no one told us not to. But, quickly, I learned that wasn’t useful. A good teacher who is giving sensible writing assignments gives a lot of information in class about his/her expectations, about the discipline, about the assignment, and I hadn’t heard any of that. I didn’t actually know what the consultee should do.

And that’s what was happening across writing centers in that era—writing centers learned that consultants shouldn’t evaluate because consultants don’t know the criteria by which a faculty member will evaluate. We shouldn’t pretend to have knowledge we don’t have. That’s why writing centers are non-evaluative—because no one should evaluate the papers of a class who hasn’t been intimately involved with the class.

Well, okay, but why not correct all the commas?  Well, first off, because rules about commas aren’t all that clear—these are rhetorical as much as correctness choices. And, oddly enough, that applies to a lot of “rules” that people think are grammatical, but are stylistic, and vary from one discipline to another (passive voice, bundling nouns as though they’re adjectives, comma splice, use of second person, modifying errors that result from passive agency).

And a lot of “errors” aren’t easily corrected errors of “grammar” but signals of muddled thinking. Errors in predication, mixed construction, reference, modifying, parallelism, and metaphor use, and even style choices such as whether to use passive voice/agency, often can only be corrected by reconsidering an argument. We can’t just “edit” or “correct” a paper because correcting mixed construction is a cognitive, not a grammatical, task.

In addition to all that, we shouldn’t just rewrite student papers for them because we’re a teaching unit. Except for the rare people who become professors (and even for them, not until the moment they are engaged in a discipline), most writers don’t learn much about writing by having someone else go through a paper and correct errors.

We think that red-penning a paper is a good strategy because we can often look back and remember some very dramatic moment when we benefitted from having a paper red-penned. We got it back, looked it over, and tried to figure out what all the marks meant, and how they made the paper better. We learned. We assume it would help all students (as a colleague said, a certain amount of narcissism is probably necessary for success in academia)—that’s what initially made me mark up consultees’ papers. But we aren’t like most students. That moment was generally one when an expert in the field (thus, someone with considerable expert authority) helped us learn a discipline-specific discourse (as in graduate school) at a moment we wanted to learn that discourse. I appreciate the faculty who red-penned my work, and I applaud others who do that for students who are at a moment when that is useful information.

The writing center is not that moment. You are that moment, and only for some of your students.

Writers who are anxious to learn the conventions of a field are often appreciative of directive advice as to how we’re not meeting those expectations, and faculty are always people who were that kind of student. We forget that we were atypical. So, yes, red-penning the work of a fairly advanced and very promising student who wants to be an academic can be profoundly useful. But, to be blunt, that is not the job of the UWC because we don’t know who is and is not very promising in a field. Our job is to teach. Not direct.

And most students don’t benefit from that kind of red-penning—they don’t look again at the corrections; they just make them.

As I tell students in my class when I explain why I don’t edit their first submissions, I’m not going through life with them editing their papers. I need to teach them to edit their own papers. If I teach them to rely on me to correct their papers, I’ve done them a disservice. The UWC doesn’t help students be better writers if we copy-edit their papers. Our mission isn’t helping students turn in better papers; it’s helping students be better writers.

[1] In UWC exit surveys, this is less than 2%. It’s a higher percentage of faculty who email or call me, since I don’t get 97 calls or emails about how what we do is great, but it’s still a very small number of calls. Still and all, all of the emails or calls are from people who really care about student writing, and I love that.

[2] “Correct” and “error” are in scare quotes because a lot of times it isn’t a grammar error, but a disciplinary or personal preference. People often assume that, if you don’t copy-edit, you don’t care about sentence-level correctness issues at all. We care about them very much, enough that we ensure that our consultants engage in practices that, unlike copy-editing, are likely to have long-term impact on student writing.

“You ain’t got nuthin’ to do but count it off.” Chester Burnett

For years, I’ve had this quote in my signature: “You ain’t got nuthin’ to do but count it off.” And every once in a while someone asks me why. It isn’t a command to others, or even a pithy statement everyone should know; it’s a reminder to me.

It’s something Chester Burnett (aka “Howlin’ Wolf”) said to the other musicians at what has come to be known as the “London Sessions.” They’re working on the song “Little Red Rooster,” and the other guitarist is having trouble following him. That guitarist (Eric Clapton) tells Wolf that he doesn’t think he can follow unless Wolf plays acoustic on the recording. Wolf says, “Ah, man, c’mon, you ain’t got nuthin’ to do but count it off,” and proceeds to count it off. Except he doesn’t, really. What he does is much more complicated than just counting it off.

Clapton was almost certainly bullshitting Wolf to some extent (he could play it without Wolf, since he did so on the final version), but it’s equally certain that what Wolf was describing as “nuthin” was actually very complicated and difficult, even for Clapton.

For Wolf, though, it really was “nuthin,” because it’s what he did all the time, and what he’d done for years.

A lot of my email is giving advice or explaining things to people who haven’t spent as long neck-deep in the things I’ve been reading, writing, and thinking about as I have. Those tasks might seem really easy and straightforward to me, but they’re actually complicated, and they just seem straightforward because of how often I’ve done them. It’s easy to slide into an explanation that makes sense to me, but wouldn’t to someone else. To someone who’s not done them a lot, they’re hard. And so that quote in the signature is to remind me that it isn’t always just counting it off.

A rambling narrative about my writing projects

My first publication was in The Nation Weekly (a journal that briefly existed in the 70s), and the second was in a collection about Writing Centers. Both of those were things I happened to write for various reasons that someone else wanted to publish for their own reasons. In graduate school, a colleague wanted to publish a special issue about reading, and I was working on how John Muir read the landscape, and so that happened.

I then entered into years of hostile readers, bad choices about where to submit, misunderstandings about the genres of academic writing, and a failure to seek out better advice. (That’s kind of funny if you think about it—I was failing to try to figure out my rhetorical situation.)

It was clear from my dissertation work that John Muir’s inability to persuade conservationists to preserve the Hetch Hetchy Valley when he had previously been so successful was the consequence of the intellectual milieu changing—from Romanticism (dominant when he was first writing) to a kind of proto-third-way-neoliberalism (the best use of public resources is the one that advances market interests while remaining in public ownership). It was also clear to me that there was a hermeneutic and epistemological issue at play: people disagree(d) about what to do in regard to the environment because they disagree(d) about what the natural environment means—how to read it. And people disagree because of questions of how to know what we read: are our value judgments in the environment or in our minds? (This is valuable regardless of whether people value it, or this is valuable to the extent that people value it.) Everyone was reading Nature as though it were a book, but they brought different notions of how to read, and that’s why they disagreed about what to do.

There was another interesting glitch, that I couldn’t quite process. There were, as I was writing my dissertation, major scholars who argued that you could dismiss environmental concerns on the grounds that the kind of people who had those concerns were irrational.

So, I thought, my first book should trace out the connections I suspected were there: attitudes toward nature, epistemologies, and hermeneutics, and somehow it would end up on that point about dismissing arguments on the basis of motivism. It would move from the American Puritans up to Muir and the Hetch Hetchy Valley debate.

Looking back on this, I came to see that graduate school sets people up for the mistake I was making. In graduate school, you read the most famous scholars’ most recent work (except in the case of a teacher who wants to trash another school of thought or scholar, in which case you read their early work and spend a class talking about how simplistic and jejune their article is). Scholars, toward the end of their careers, write in a completely different way from people early on—they can engage in grand narratives, broad brushes, and assertions that come from having thought about something for thirty years. We try to write what we read, and so junior scholars are set up for failure by trying to write in the way that an established scholar can write—the rules are different.

Eventually, I tried to write a book that started and ended with John Muir, but was almost entirely about the American Puritans. (A university press was interested, and kept telling me they would let me know—their editor was ill. There were many emails about how they would let me know in three weeks as the tenure clock was in the final seconds and a dean was telling the department not to support me. I have literally never heard a final word from them. They were discontinued. I was denied tenure. I got a better job.) I also directed a first-year comp program and pissed off a dean. I tried to publish an article about Horatio Alger, and another about Robert Montgomery Bird, and both were stymied.

I moved to a department that had more people publishing in the history of rhetoric, and those faculty gave me really useful readings of my manuscript, and I connected with a better press, and I got a book manuscript accepted, and then I published pieces from it (not the normal chain of events).

That book was about the 17th century New England Puritans, and how their notions of rhetoric, epistemology, and public deliberation did and didn’t fit together. No one in rhetoric and writing had written on the Puritans for a long time, and so I couldn’t make the normal scholarly move of “They say but I say.” There was no current “I say.” Also, it irritated me that one part of my argument was that we got the transmission model (the thesis-first model) from the Puritans, and that it came from their belief that persuasion doesn’t really happen. You tell people the truth, and they recognize it. Good people act on that truth, and bad people dismiss it (a model of persuasion oddly persistent even in current studies). One of the reviewers (a comm and not a comp person) insisted I put my thesis first. I grumped about it.

I intended that book to be the first part of a series, so that the next book would be looking at the rhetorical theories, epistemologies, hermeneutics, and attitudes toward nature in the late 17th and early 18th century American culture. I read a lot of 19th century American popular literature, but I couldn’t write that book. The erasure, dismissal, rationalization, and rhetorical shittiness about the indigenous peoples was too awful for me to manage. For instance, I had an article about Robert Montgomery Bird and the paradox that the same actions were to be condemned when done by Native Americans but considered heroic when done by “whites” (aka, why I can’t watch most movies). One reader said, and I’m not kidding, “But don’t you think they deserved it?” I put that and the Horatio Alger article away.

grrrrr

I have been a fan-girl of Hannah Arendt since junior high school, when I read Eichmann in Jerusalem. In graduate school, for reasons even now I can’t determine, I ran into a Habermas article with an amazing endnote about how rhetoric (bad) and communicative action (good) interact. He cited speech act theory, so I took a class with John Searle (I think I got a B+, and I still really appreciate that class). As a Comp Director, I found myself in a lot of useless non-arguments about argumentation—people opposed teaching argumentation because they believed that no one is ever persuaded of anything (they taught the five-paragraph essay, had noticed that that genre is unpersuasive, and so concluded persuasion is impossible). Their perception of persuasion is that a person has the truth and tells it to another (the recipient), and then that person has the truth. If the recipient doesn’t have the truth at the end, then it’s proof that persuasion isn’t possible. (You hear both of those arguments a lot still.)

That’s an obviously silly model of persuasion, but, oddly enough, it’s dominant, and not restricted to one political group or philosophical approach. You can hear poststructuralists, neoconservatives, neopositivists, and behavioralists all claim that no one is actually persuaded by evidence, and cite studies to support their position. (I think that’s funny.) Wayne Booth and Jurgen Habermas both nailed this one, showing that a lot of people toggle between two models of persuasion (neither of which is the one on which they actually operate): the notion that you are persuaded by unemotional logic, and the notion that you are persuaded by emotion. Oddly enough, the people arguing that it’s all emotion cite scientific studies to support their point. If they really believed it’s all emotion, they wouldn’t cite studies; they would just assert their point. Their engaging in argumentation shows that they think argumentation does potentially have an impact. This is sometimes called the pragmatic contradiction.

This problem (people engaged in persuasion who insist that no one is ever persuaded) starts from asking the wrong methodological question. You have a person who believes s/he has the truth (the experimenter) and s/he asks the experimentee what s/he believes, then presents an assertion that the experimentee is wrong. The experimentee doesn’t immediately convert on the basis of this short interaction, and the experimenter concludes that persuasion doesn’t happen! The experimenter has given the experimentee objective evidence (rational) that the experimentee doesn’t instantly accept, so the experimentee is irrational.

The irrational (no logic, all emotion)/ rational (no emotion) split is like dividing everything into round or green. Some people (roundists) are very narrow in their definition of what is round, and they declare everything that doesn’t fit that narrow definition as green. Therefore, skyscrapers are green. The greenists are very narrow about what is green, and call everything else round.

This might seem like a silly example, but it’s how American media presents politics. Major television media accept the Us or Them binary and then find all sorts of reasons at this or that moment to draw the lines differently. Unhappily, too many Christians do the same, accepting the premise that all the various positions can be divided into two, and then you argue about where the Us v. Them line is drawn. Given Christ’s message, we really should know better.

In any case, my point is that believing that your position (say, that squirrels are evil beings trying to get to the red ball) is rational and truly patriotic means that you will perceive anyone who disagrees with you on that point as irrational and unpatriotic. And I saw that the way “argumentation” was (and is) taught would reinforce that foundational fallacy.

I was convinced that the hostility to teaching argumentation in first year composition came from two places: 1) different conceptions of what it means to participate in democracy; 2) the rational/irrational split. So, I thought, I would write a book that would show the connections between models of democracy and pedagogies and that would end more hopefully and pragmatically, with a long discussion about what advances in argumentation meant for the teaching of argument.

So, what became Deliberate Conflict was supposed to be about half of a book. I wrote that book, and then farmed out parts (that isn’t how you’re supposed to do it) and it was too long. I had to take my favorite part (about Arendt) and put some of it into an article.

I had a bit of a glitch with moving (having been given tenure) to a new place and with certain promises that were cheerfully reneged on, and so had to write two books to get associate professor and three for full. (And, yes, I’m bitter about that, since the two people who made that happen have never apologized or even acknowledged that their reneging might have caused me some grief. One of them has twice told me it was no big deal.)

Here things get complicated, since I was given my first paid leave in my career. I got my degree in 1987, and it was 2003 (or 4—I’m vague on that). I had been directing a very large first year composition program at my first job, and a slightly smaller one at my second. I HAD A LEAVE. I sent out a bunch of articles.

One of the articles I sent out in 2003 or 4 was the one a colleague (in 1992 or so) had told me was unpublishable because my argument about how whites justified pre-emptive violence against indigenous people “ignored that they started it,” and it got an award. The best vengeance is success.

I had long since moved on to the argument that agonistic rhetoric was the bomb, and the post-bellum shift away from agonism was bad. And a graduate student asked me, “If antebellum methods of teaching rhetoric were so good, why couldn’t we solve the slavery problem rhetorically?” So, I set out to write a book about the slavery debate. It was an elegant plan for a book, with five chapters: the public pro-slavery, the counter-public pro-slavery (since I wanted to undermine the public/counter-public binary, which is often a good/bad or bad/good binary), the public anti-slavery, the counter-public anti-slavery, and the mediators (whom no one talks about anymore, but who were once the heroes: Webster, Clay, Calhoun).

It ended up being a book about the proslavery argument between 1830 and 1835. (In other words, every book I’ve written has started out as a much longer book.)

The Civil War didn’t happen because both sides were fanatics, nor because they couldn’t compromise. The Civil War happened because the Constitution gave an advantage to slave states, slavery became the single identifying sign of Southernness, and fanaticism on behalf of slavery was a sure path to political success in a slave state. The Civil War happened because, having won every “compromise” in regard to slavery (that is, the US was becoming increasingly a slave nation), the slave states saw a political opportunity when Lincoln was elected. Their extremist rhetoric got them extremist politics and a war they never needed to have.

They thought they needed the war because they lived in an informational enclave in which various events (e.g., the mass mailing of AAS pamphlets) were facts, although they didn’t actually happen (there was no flooding of the South with those pamphlets). They also lived in a culture in which it was dishonorable to argue pragmatically about various outcomes, including failure, and so it was the classic situation of amplification.

I was working on this book in 2003, and I thought the Iraq War was the same situation. It was a war that never needed to happen, and it happened because large numbers of people believed things that were false (Saddam Hussein was behind 9/11 and he had WMD), but they lived in a world in which those myths were foundational facts.

That seemed to me demagoguery. And, so, I got interested in demagoguery. And I read everything recent about demagoguery (there was not much in rhetoric and writing) and wrote an article arguing that rhetoric should pay attention to demagoguery. And the responses are there to read. I ran into a really kind and smart person at an airport who asked if I was going to respond to them, and I said no. I wanted to get the argument going, and I thought I had, and I also thought that responding to those articles would have involved my saying, “Yeah, I’m just gonna repeat what I said, since y’all obviously didn’t read the article I wrote, and just responded to something in your heads.”

I never said demagoguery was about emotionalism, for instance. Sheefuckingeesh.

And then I started working hard on a book about demagoguery. And it was going gangbusters, and it’s a weird book, and it was sent to readers, one of whom said demagoguery was a dead issue.

The book is a point by point refutation of common notions about demagoguery. Demagoguery isn’t just about the demes, it isn’t necessarily emotional, it has a weird relationship to expert discourse. I deliberately chose to have a section on a person I admire. And it has a chapter in which my point is that rhetoric can enable someone to identify shitty expert discourse. But it’s a weird book, inductively argued.

In any case, my point in all of this is that a scholarly trajectory isn’t something you direct from the beginning. Trajectory is, I’d say, entirely the wrong metaphor. It’s more like following scat. You have something you’re hunting, and you follow the scat of the thing you’re hunting. I’ve had a lot of setbacks—a press that was uncommunicative and then went under, a dean out to make sure I was denied tenure, people in power who cheerfully reneged on promises, unsympathetic reviewers. But I’ve also had a lot of good breaks, reviewers who saw promise, editors who turned hostile reviews into a forum, hitting the job market at good moments, supportive colleagues and challenging students.

Nassim Nicholas Taleb has an analogy I think is really helpful. He says that you should imagine a study in which a thousand people are asked to engage in Russian roulette. After five shots, there will be some people standing. He points out that those people will be asked about their strategies, and whatever those people say they did will become the mantras for success; in his case, in finance.

There are no strategies that will guarantee success in our field. There are some really good books out there about strategies you can try, but there’s no guarantee.

You do any job for love or money. No one does academia for money, so it had better be for love. And what is it you love? When I started teaching, it was for love of teaching, but promotion required publication, and I came to love research. (I still don’t love publishing.) And this Robinson Jeffers poem has always moved me:

“I hate my verses, every line, every word.
Oh pale and brittle pencils ever to try
One grass-blade’s curve, or the throat of one bird
That clings to twig, ruffled against white sky.
Oh cracked and twilight mirrors ever to catch
One color, one glinting flash, of the splendor of things.
Unlucky hunter, Oh bullets of wax,
The lion beauty, the wild-swan wings, the storm of the wings.”
–This wild swan of a world is no hunter’s game.
Better bullets than yours would miss the white breast
Better mirrors than yours would crack in the flame.
Does it matter whether you hate your . . . self?
At least Love your eyes that can see, your mind that can
Hear the music, the thunder of the wings. Love the wild swan.

He’s referring, of course, to Yeats’ “Wild Swans” poem, and his own sense that he could never be Yeats. And, initially, he’s seeing writing as nailing down the thing about which he’s trying to write (note my own “nailing” metaphor above). But we will never nail to the wall anything about which it’s worth writing. We need to love what we’re trying to write about. We need to love the thing we’re chasing. It isn’t about shooting something; it’s about following a trail. I generally hate my writing, and find the slippage between what I say and what I’m trying to say sometimes incredibly discouraging. But I love democracy, and I try to make that good enough.

Advice for graduate students and junior faculty about writing

For years, I’ve been intrigued by the paradox that people who have written well enough to get to graduate school (or to finish, or to write a first book) at some point find themselves unable to write. I fell deep into the research on that issue, and I thought I would write a book about it. Well, actually, I did, but I’m not sure about trying to get it published. Today I found out that the place I published it still exists, and so here it is.

Ethos, pathos, and logos

Since the reintroduction of Aristotle to rhetoric in the 60s, there has been a tendency to read him in a post-positivist light. That is, the logical positivists (building on Cartesian thought) insisted on a new way of thinking about thinking—on an absolute binary between “logic” and “emotion.” This was new—prior to that binary, the dominant models of thinking involved multiple faculties (including memory and will) and distinctions within the category we call “emotions.” While it was granted that some emotions inhibited reasoning (such as anger and vengeance), theorists of political and ethical deliberation insisted on the importance of sentiments. The logical positivists (and popular culture), however, created a zero-sum relationship between emotion (bad) and reasoning (logic–good). Thus, when we read Aristotle’s comment about the three “modes” of persuasion in a post-positivist world, we tend to assume that he meant “pathos” in the same way we mean “emotion” and “logos” in the same (sloppy) way we use the word “logic.” And we get ourselves into a mess.

For instance, for many people, “logic” is an evaluative term—a “logical” argument is one that follows rules of logic. Yet, textbooks will describe an “appeal to facts” as a logos (logical) argument. That’s incoherent. Appealing to “facts” (let’s ignore how muddled that word is) isn’t necessarily logical—the “facts” might be irrelevant, they might be incorporated into an argument with an inconsistent major premise, the argument might have too many terms. In rhetoric, we unintentionally equivocate on the term “logical,” using it both to mean any attempt to reason and only logically correct ways of reasoning. (It’s both descriptive and evaluative.)

The second problem with the binary of emotion and reason is that, as is often the case with binaries, we argue for one by showing the other often fails. Since relying entirely on emotion often leads to bad decisions, then it must be bad, and relying on logic must be good. That’s an illogical argument because it has an invalid major premise. Were that premise valid, then someone who made that argument would also have to agree that relying on emotion must be good because relying purely on logic sometimes misleads (it’s the same major premise—if x sometimes has a bad outcome, then not-x must be good).

So, even were we to assume that emotion and logic form a binary (they don’t), then what we would have to conclude is that neither is sufficient for deliberating.

And, in any case, there’s no reason to take a 19th century western notion and try to trap Aristotle into it.

A better way to think about Aristotle’s division is that he is talking about: what the argument of a speech is, who is making the speech, and how they are making it. So, the logos (discourse) in a speech can be summarized in an enthymeme because, he said, that’s how people reason about public affairs. There are better and worse ways of reasoning, and he names a few ways we get misled, but he didn’t hold rhetoric to the same standards he held disputation—that is where he went into details about inference. An appeal to logos, in Aristotle’s terms, isn’t necessarily what we mean by a logical argument.

Aristotle pointed out that who makes the speech has tremendous impact on how persuasive it is (and also how we should judge it)—both the sort of person the rhetor is (young, old, experienced, choleric), and how the person appears in the speech (reasonable, angry). And, finally, how the person makes the speech has a strong impact on the audience, whether it’s highly styled, plain, loud, and so on.

And all of those play together. A vehement speech still has enthymemes, and it’s only credible if we believe the speaker to be angry—if we believe the speaker to be generally angry (or an angry sort of person) that will have a different impact from an angry speech on the part of someone we think of as normally calm. Ethos, pathos, and logos work together, and they don’t map onto our current binary about logic and emotion.

On career choices as mingling in Burke’s parlor

On Wednesday, I sent off the scholarly version of the demagoguery argument. It isn’t the book I once planned (that would involve a deeply theoretical argument about identity and the digital world), but it’s the one I really wanted to write, that would (I think) reach more people than that other one.

And it’s the last scholarly book I’ll write. I intend to spend the rest of my career trying to solve the interesting intellectual problem of making scholarly concepts and debates more accessible to non-academics. But that isn’t because I reject highly specialized academic writing as, in any way, a bad thing.

I have no problem with highly theoretical and very specialized books. My books have barely grazed the 1000 sales point, and that’s pretty good for a scholarly book. People have told me that something I’ve written has had an impact on their scholarship, pedagogy, program administration, so I’m really happy with my record as a scholar.

And I’m happy with the record of people who have sold both more and less because measuring impact is so very difficult. Publishing a book with an academic press is an extraordinary achievement, and measuring the impact of such books accurately is nigh impossible—a really powerful book is shared in pirated pdfs, checked out of libraries, passed from one person to another. Sales and impact are orthogonal in academia.

If you study the history of ideas even a little, you have to know that what seemed major in the moment was sometimes just a trend (like mullets) and sometimes a sea change (like the synthesizer). No one reads Northrop Frye anymore, but he was a big deal at one moment, and yet Hannah Arendt, who was also a big deal around the same time, is still in the conversation. And there are all those people who weren’t big deals in their era, but later came to have tremendous impact, such as Mikhail Bakhtin.

Some trade books on scholarly issues have had extraordinary sales (such as Mortimer Adler’s writings), but it’s hard to know what impact they had. Madison Grant’s racist book Passing of the Great Race had poor sales, but appears to have had a lot of impact. And there are lots of trade books that have come and gone without leaving any impact, so there’s no good reason to conclude that trade books necessarily have more impact than scholarly ones. I don’t think there are a lot of (any?) necessary conclusions that one can draw about whether trade or scholarly books have more impact, are more or less important, more or less valuable intellectual activity.

I have always loved Kenneth Burke’s analogy of the parlor for what it means to be interested in major questions. You show up at a party, he says, and it’s been going on for a while, and you find some conversation that seems interesting. You listen for a while, and then you take a side or point out something new. You get attacked and defended, and some people leave the conversation, and others join, and eventually you too leave. And it goes on, with other people taking sides that may or may not have to do with what you were arguing.

What Burke fails to mention is that, if it’s a good party, there are a lot of conversations going on. You might choose to leave that particular conversation, but not leave the party.

I have loved writing scholarly pieces (although I didn’t initially think I would), and my work has placed me in some particular conversations. I’ve moved from one conversation to another, but all on the side of the parlor engaged in very scholarly arguments. I’d like to leave that side of the parlor, not because it’s a bad one—it’s a wonderful one—but because it’s a party with a lot of conversations going on. I’d like to mingle.

I think a lot of discussions of the public intellectual rest on odd binary assumptions—that either specialized or public scholarship is good, for instance. Scholarship that speaks with authority to a small group is neither better nor worse than scholarship that reaches a broad audience—it’s just a different conversation in Burke’s parlor. And I’m going to wander over there for a bit.

“Just Write!” and the Rhetoric of Self-Help

There is a paradox regarding the large number of scholars who get stalled in writing—and a large number do get stalled at some point (50% of graduate students drop out): they got far enough to get stalled because, for some long period of time, they were able to write. People who can’t write a second book, or a first one, or a dissertation, are people who wrote well enough and often enough to get to the point that they needed to write a dissertation, first book, second book, grant, and so on. So, what happened?

The advice they’re likely to be given is, “Just write.” And the reason we give that advice (advice I gave for years) is that we have the sense that they’re overthinking things, that, when they sit down to write, they’re thinking about failure, and success, and shame, and all the things that might go wrong, and all the ways what they’re writing might be inadequate, and all the negative reactions they might get for what they’ve written. So, we say, “Just write,” meaning, “Don’t think about those things right now.”

The project of writing may seem overwhelming because it is existentially risky, and the fear created by all the anxiety and uncertainty is paralyzing. It can seem impossibly complicated, and so we give simple advice because we believe that persuading them to adopt a simpler view of the task ahead will enable them to write something. Once they’ve written something, once they’re unstuck, then they can write something more, and then revise, and then write more. Seeing that they have written will give them the confidence they need to keep writing.

And I think that advice often works, hence the (deserved) success of books like Writing Your Dissertation in Fifteen Minutes a Day or Destination Dissertation. They simplify the task initially, and present the tasks involved in ways that are more precise than accurate, but with the admirable goal of keeping people moving. Many people find those books useful, and that’s great. But many people don’t, and I think the unhappy consequence of the “you just have to do this” rhetoric is that there is an odd shaming that happens to people for whom that advice doesn’t work. And, while it’s great that it works for a lot of people, there are a lot for whom it doesn’t, and I’m not happy that they feel shamed.

These books have, as Barbara Kamler and Pat Thomson have argued, characteristics typical of the self-help genre (“The Failure of Dissertation Advice Books”), especially in that they present dissertation writing as “a series of linear steps” with “hidden rules” that the author reveals. While I am not as critical of those books, or of the genre of self-help, as Kamler and Thomson, I think their basic point is worth taking seriously: that this advice misleads students because it presents dissertation writing as a set of practices and habits rather than cognitive challenges and developments.

Academic writing is hard because it’s hard. Learning to master the postures, steps, and dances of developing a plausible research question, identifying and mastering appropriate sources, determining necessary kinds of support, managing a potentially sprawling project, and positioning a new or even controversial claim in an existing scholarly conversation—all of that is hard and requires cognitive changes, not just writing practices.

Telling people academic writing “just” requires anything (“just write,” “just write every day,” “just ignore your fears”) is a polite and sometimes useful fiction. And self-help books’ reliance on simple steps and hidden rules is, I’d suggest, not necessarily manipulative, but based in the sense that telling people something hard is actually hard can discourage them. If you lie, and thereby motivate them to try doing it, then they might realize that, while hard, it isn’t impossible.

I think the implicit analogy is to something like telling a person who needs to exercise that they should “just get up off the couch.” Telling people that improving their health will be a long and slow process with many setbacks is unlikely to motivate someone to start the process; it makes the goal seem impossible, and unrewarding. Telling someone that getting healthier is simple, and they “just” need to increase their exercise slightly, or reduce portion size slightly, or do one thing differently will at least get them started. Having gotten a little healthier might inspire them to do more, but, even if it doesn’t, they are getting a little better.

But that’s the wrong analogy.

A scholar who is having difficulty writing is not analogous to someone who needs to get up off the couch: this is a person with a long record of successes as a writer. That is what we (and people who are stuck) so often lose track of when we give the “just write” advice. They are not a person sitting on a couch; they are someone with an exercise practice that has always worked for them in the past but isn’t working now.

The better analogy, I would suggest, is a sprinter who is now trying to run a marathon. Sprinting has worked for them in the past, and many academics have a writing process that is akin to sprinting—chunks of time in which we do nothing but write, and try to get as much done as quickly as we can. Writing a dissertation or book, on the other hand, is more like running a marathon.

It would be unethical to tell a sprinter who is unable to run a marathon that she should “just run.” She has been running; she’s quite good at it. But the way that she has been running is not working for this new distance. And if she does try to run a marathon the way she has always run short races, she will hurt herself.

My intuition is that people who have trouble writing are people who have always used the sprinting method, and have simply managed to develop the motivational strategies to sprint for longer, or to collapse from time to time during the race and pick themselves up. Often, it seems to me, that motivation relies on panic and negative self-talk—they manage to binge write because otherwise, they tell themselves, they are a failure.

So I’m not saying that “Just write” is always bad advice. I am saying that it sometimes is; it is sometimes something that can send people into shame spirals. It only works for some people, for people who do find that polite fiction motivating. For others, though, telling them “just write” is exactly like telling a person in a panic attack “just calm down” or someone depressed “just cheer up.”

The “just write” comes from a concern that lack of confidence will paralyze a student. But I think we might be solving the wrong problem.

Part of the problem is the myth of positive thinking, which has taken on an almost magical quality for some people. There is a notion that you should only think positive thoughts, as though thinking negative things brings on bad events. Since thinking clearly about how hard it is to write a book, dissertation, or grant (and, specifically, thinking clearly about how we might have habits or processes that inhibit our success) is thinking about “bad” things, about how things might go wrong or what troubles we might have, the myth of positive thinking says you shouldn’t do it. You should, instead, just imagine success.

This is a myth. It isn’t just a myth, but pernicious, destructive nonsense. A (sometimes secular) descendant of the prosperity gospel elegantly described by Bowler in Blessed, this is magical thinking pure and simple, and perfectly contrary to what research shows about how positive thinking actually affects motivation.

But here I should be clear. Some people who advocate wishful thinking do so because they believe that the only other possibility is wallowing in self-loathing and a sense that the task is impossible, and they believe that telling students that academic writing is hard will necessarily lead to their believing it is impossible. In other words, there is an assumption that there is a binary between thinking only and entirely about positive outcomes or thinking only and entirely about tragic outcomes. The former is empowering and the latter is paralyzing. That narrative is wrong on all three counts—positive thinking is not necessarily enabling, moments of despair are not necessarily disabling, and our attitude toward our own challenges is not usefully described as a binary between pure optimism and pure despair. Left out of that binary is being hopefully strategic: aware of possible failures, mindful of hurdles, with confidence in our resilience as much as in our talents.

As to the first, studies clearly show that refusing to think negative thoughts about possible outcomes is actively harmful, and frequently impairs achievement. That’s important to remember: telling students they shouldn’t think about their own flaws, the challenges ahead of them, and how things might go wrong is not helping them, and it is making it less likely they will do what they need to do.

Gabriele Oettingen’s considerable research (summarized in the very helpful book Rethinking Positive Thinking) shows that, while wishful thinking can be useful for maintaining hope in a bad situation or identifying long-term goals, it inhibits action. Fantasizing about how wonderful a dissertation or book will be doesn’t inspire us to write either; for many people, it makes the actual, sometimes gritty, work so much more unattractive in comparison that it’s impossible to write. The fantasy is far more fun than writing a crummy first draft. Similarly, Carol Dweck’s research on mindsets shows that success depends on acknowledging what has gone wrong and identifying how one might grow and change to get a different outcome in the future.

A sense that the task is so hard as to be impossible is not inevitably and necessarily disabling. It is, however, inevitable. It is dishonest to tell students that we never feel that what we’re trying to do can’t be done or isn’t worth doing, because so many of us do. And most of us got (and get) through it. Sometimes it took time, therapy, medication, changing things in our personal lives, changing jobs, changing projects, all of the above. But I don’t know any productive scholar free from times of slogging through the slough of despond.

In my experience, academic writing gets easier, but it’s never easy. The hardest writing is probably finishing a dissertation while writing job materials—nothing after that is so hard. But it’s always hard. If we tell students that it’s easy, or that it gets easy, even if we do so with the intention of keeping them moving, we do them a disservice. If they believe us, if they believe that we find it easy, then, when it gets hard, as it necessarily will, they have to conclude that there is something wrong with them. They are unhappily likely to conclude that they have been exposed for the imposter they always worried they were.

The “just write” advice almost certainly works for some people in some situations, as does the “just write every day” or “just freewrite” or “just start with your thesis” or any of the other practices and rules that begin with “just.” They work for someone somewhere, and maybe they work for everyone some of the time, and they always strike me as sensible enough to suggest that people experiment with them. But we shouldn’t pretend that they’re magical and can’t possibly fail, or that someone “just” needs to do them. The perhaps well-intentioned fiction that academic writing “just” requires certain practices is magical thinking, and we need to stop saying it.

In my experience, people who find the “just write” advice useless find it too abstract. So, I think we need to be clear that scholarly productivity is, for most people, hard, and it’s fine to find it hard. And it takes practice, so there are some things a person might “just write”:

    • the methods section;
    • descriptions of an incident, moment in a text, interaction, or some other very, very specific epitome of their problem (Pirsig’s brick in the wall of the opera house);
    • summaries of their secondary materials with a discussion of how each text is and is not sufficient for their research;
    • a collection of data;
    • the threads from one datum to another;
    • a letter to their favorite undergrad teacher about their current research;
    • a description of their anxieties about their project;
    • an imitation of an introduction, abstract, conclusion, or transition paragraph they like, written by a junior scholar.

I’m not presenting that list as a magical solution. It would be odd for me to say that simplistic advice is not helpful and then give a list of the five (or seven, or ten) things we “just” have to do to become (or teach others to become) skilled and productive academic writers. What we have to do is acknowledge that the project requires significant and complicated cognitive changes: that, for most of us, scholarly writing is hard because it’s hard. Let’s be honest about that.