Class size in college writing (an old paper)

[This was co-authored with Reinhold Hill in 2007, based on research done in the late 90s at our then-institution. People have sometimes cited it, although it wasn’t published, so I’m posting it.]

The issue of class size in first-year college writing courses is of considerable importance to writing program administrators.  While instructors and program administrators generally want to keep classes as small as possible, keeping class size low takes a financial and administrative commitment which administrators are loath to make in the absence of clear research.  While the ADE and NCTE recommendations of fifteen students are persuasive to anyone who has taught first-year writing courses, they often fail to persuade administrators who are looking for research-based recommendations.  And, in actual fact, class sizes at major institutions range from ten to twenty-five students.

Unfortunately, anyone looking to the available research on class size in college writing courses is likely to come away agnostic.  While there is considerable research on class size and college courses in general, there are several important reasons to doubt its specific applicability to college writing courses.  First, much of the general research on class size includes students of all ages.  Second, the research often involves the distinction between huge and merely large courses, such as between forty and two hundred students, whereas most writing program administrators are concerned about the difference between fifteen and twenty-five students. Third, the courses involved in the studies often have very different instructional goals from first-year writing courses.  Finally, the assessment mechanisms are often inappropriate for evaluating effectiveness and student satisfaction in writing courses.

In other words, the NCTE recommendations for writing courses are not based on research, and the research on class size in general cannot yield recommendations.

At the University of Missouri, we were given the opportunity to engage in some informal experimentation regarding class size.  While the limitations of our own research mean that we have not resolved the class size question, our results do have thought-provoking implications for class size and program administration.  In brief, our work suggests that reducing class size, while very popular among instructors, appears not to result in marked improvement in student attitudes about writing unless the instructors use that reduction in class size as an opportunity to change their teaching strategies.  In other words, we seem to have confirmed what Daniel Thoren has concluded about class size research: “Reducing class size is important but that alone will not produce the desired results if faculty do not alter their teaching styles.  The idea is not to lecture to 15 students rather than 35” (5).  If, however, instructors are able to take advantage of the smaller class size, then even a small reduction can result in students perceiving considerable improvement in their paper writing abilities.  We do not wish to imply that reducing class size should not be a goal for writing program administrators, but as a goal in and of itself it is not enough – we need to be aware that pedagogical changes must be initiated together with reductions in class size.

1. Institutional Background

Our study, largely funded by the Committee on Undergraduate Education, was the result of recommendations made by a Continuous Quality Improvement team on our first year composition course (English 20).  That team was itself part of increased campus, college, and departmental attention to student writing.  As a result of that attention, the English 20 program underwent philosophical and practical changes.

The most important change was probably the shift in program philosophy. While there remains some variation among sections, the philosophy of the program as a whole is to provide an intellectually challenging course in which students write several versions of researched papers on subjects of scholarly interest about which experts disagree.  Students write and substantially revise at least three papers, each of which is four to five pages long.  There are four separate but connected goals in these changes.  First, for instructors, our goal is to provide a teaching experience which will make the teaching of first-year composition appropriate preparation for teaching writing intensive courses in their area.  Hence, instructors need to develop their own assignments.

Second, for students, one goal of the course is to enable students to master the delicate negotiation of self and community necessary for effective academic writing.  As Brian Huot has noted, research in writing assessment indicates that students tend to be fairly competent at expressive writing, but have greater difficulty with “referential/participant writing” (241).  Our sense was that this assessment is especially true of students entering the University of Missouri.  They are quite competent at many aspects of writing, but they have considerable difficulty enfolding research into an interpretive argument.  Thus, we did not need to teach The Research Paper that Richard Larson has so aptly criticized; nor did students need instruction in personal narrative.  Instead, students needed practice with assignments which called for placing themselves in a community of experts who are themselves disagreeing with one another.  Achieving this goal was nearly indistinguishable from achieving the goal described above for instructors–assisting instructors to write assignments which called for an intelligent interweaving of research and interpretation into a college-level argument would necessarily result in students’ getting experience with that kind of assignment.

Our third goal was to teach students the importance of a rich and recursive writing process, one which involves considerable self-reflection, attention to the course and research material, and substantial revision in the light of audience and discipline expectations.  Research in composition over the last thirty years suggests that such attention to the writing process is the most important component of success in writing, especially academic papers (Flower and Hayes, Berkenkotter, Emig).

It should be briefly explained that this is not to say that the program endorses what is sometimes called a “natural process” mode of instruction–that term is usually used to describe a program which is explicitly non-directional, in which students write almost exclusively for peers and on topics of their own choosing, and which endorses an expressivist view of writing.  In fact, attention to the writing process does not necessarily preclude the instructor taking a “skills” approach to writing instruction (that is, providing exercises or instruction in what are presumed to be separable aptitudes in composition) but it does necessitate course design with careful attention to paper topics.

And this issue of modes of instruction raises our fourth goal–to enable instructors to use what George Hillocks calls the “environmental” mode of instruction.  When we began making changes to the first year composition program, it was our impression that the dominant mode of instruction was what Hillocks calls the “presentational” mode, which

is characterized by (1) relatively clear and specific objectives…(2) lecture and teacher-led discussion dealing with concepts to be learned and applied; (3) the study of models and other material which explain and illustrate the concept; (4) specific assignments or exercises which generally involve imitating a pattern or following rules which have been previously discussed; and (5) feedback following the writing, coming primarily from teachers.  (116-117)

It is important to emphasize that this mode does not depend exclusively on lecture.  A class “discussion” in which the instructor guides students through material by asking questions intended to elicit specific responses is also in the presentational mode.  Insofar as we can tell, a large number of instructors used class time to present advice on writing papers as well as to present writing products which students might use as models.  Instructors then used individual conferences in order to discuss strategies for revising papers.

The dominance of this mixing of presentational and individualized modes of instruction in our program had two obvious consequences.  First, it was exhausting for instructors.  An instructor’s time was generally split between the equally demanding tasks of preparing the information to be presented in class and engaging in individual conferences with students. The standardized syllabus recommended four papers; each class had eighteen to twenty students; many of our instructors teach two classes per semester.  Instructors were forced to choose between not providing individual instruction for students on each paper and spending a minimum of eighty hours per semester in conference with students.  If instructors are also spending six hours per week preparing class material and three hours per week in class, they are spending one hundred seventy-five hours per semester per class on their teaching–not including the time spent grading and commenting on papers.  Standards for good standing and recommendations regarding course load assume that graduate students are spending only one hundred fifty hours per semester on each course.

It should be emphasized that shifting instructional mode and changing the syllabus to only three papers cannot solve the problem of overworking instructors.  Class preparation and time in class account for one hundred thirty-five hours per semester; if instructors spend forty-five minutes grading a first submission and only fifteen minutes grading a second submission, an enrollment of twenty students brings their commitment to one hundred ninety-five hours per semester per course, and this amount of time does not include any conferences.
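For readers who want to check these figures, here is a minimal sketch of the arithmetic. It assumes a fifteen-week semester, thirty-minute conferences, and that the eighty conference hours under the old syllabus are split across an instructor’s two sections; none of these assumptions is stated explicitly above.

```python
# A rough check of the workload arithmetic above.
# Assumptions (not stated in the original): a fifteen-week semester,
# thirty-minute conferences, and eighty conference hours split over two sections.

WEEKS = 15
PREP_HOURS_PER_WEEK = 6     # class preparation, per section
CLASS_HOURS_PER_WEEK = 3    # contact hours, per section
STUDENTS = 20               # enrollment per section

# Old model: four papers, a 30-minute conference per paper per student, two sections.
conference_hours_both_sections = 4 * STUDENTS * 2 * 0.5              # 80 hours
base_hours = (PREP_HOURS_PER_WEEK + CLASS_HOURS_PER_WEEK) * WEEKS    # 135 hours
old_model_per_section = base_hours + conference_hours_both_sections / 2   # 175 hours

# Revised model: three papers, 45 minutes on a first submission,
# 15 minutes on a second, and no conferences counted.
grading_hours = 3 * STUDENTS * (45 + 15) / 60                        # 60 hours
revised_model_per_section = base_hours + grading_hours               # 195 hours

print(old_model_per_section, revised_model_per_section)              # 175.0 195.0
```

Either way, the per-course commitment exceeds the one hundred fifty hours assumed by the standards for good standing.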

An informal survey of our instructors indicated the consequences of these conflicting expectations: some instructors did minimal commenting on papers, some allowed their own work as graduate students to suffer, others encouraged students to write inappropriately short papers, and all were overworked.

The second consequence of the programmatic tendency to alternate between presentational and individualized modes of instruction has to do with Hillocks’ own summary of research on modes of instruction.  Hillocks concludes that the presentational mode of instruction is not as effective as what he calls the “environmental mode”: “On pre-to-post measures, the environmental mode is over four times more effective than the traditional presentational mode” (247). In other words, our instructors were working very hard in ways that may not have been the most effective for helping students write better papers.

So, we wanted instructors to use the “environmental” mode of instruction, which

is characterized by (1) clear and specific objectives…(2) materials and problems selected to engage students with each other in specifiable processes important to some particular aspect of writing; and (3) activities, such as small-group problem-centered discussions, conducive to high levels of peer interaction concerning specific tasks….Although principles are taught, they are not simply announced and illustrated as in the presentational mode.  Rather, they are approached through concrete materials and problems, the working through of which not only illustrates the principle but engages students in its use.  (122)

In the environmental mode, one neither lectures to students, nor does one simply let class go wherever the students want.  Instead, the instructor has carefully prepared the tasks for the students–thinking through very carefully exactly what the writing assignments will be and why.

2. Other Research on Class Size and College Writing

The considerable body of research on class size is largely irrelevant to first-year composition.  Glass et al.’s 1979 meta-analysis of 725 previous studies, for instance, remains one of the fundamental studies on the subject.  Yet it includes a large number of studies on primary and secondary students; hence, there is reason to wonder what role age plays in the preference for smaller class size.  A more recent, and frequently quoted, meta-analysis of college courses claims that class size has no effect on student achievement, as measured by final examination scores, but it begins with classes as small as 30 to 40 students (Williams et al. 1985).  This study does not appear to have included a writing course.  Considering that the study was restricted to courses with “one or more common tests across sections” (1985, 311), it is unlikely to have included a composition course; if it did, then it was one which presumed that improvement in writing results from learning information which can be tested–a problematic assumption.

A more fundamental problem–because it is shared with numerous other studies of class size–is the measurement mechanism.  That is, examinations are not appropriate measures of student achievement in courses whose goal is to teach the writing of research papers (see Huot 1990; CCCC Committee on Assessment 1995; White 1985; White and Polin 1986); hence, any study which relies on examination grades is largely irrelevant in terms of its measurement mechanism.

Finally, there are good reasons to doubt the implicit assumption that course goals and instructional methods are universal across a curriculum.  Feldman’s 1984 meta-analysis of 52 studies does not list any study which definitely involved a writing class; most of the studies, on the contrary, definitely did not include any such course.  Smith and Cranton’s 1992 study of variation in student perceptions of the value of course characteristics (including class size) concludes that those perceptions “differ significantly across levels of instruction, class sizes, and across those variables within departments” (760).  They conclude that the relationships between student evaluations and course characteristics “are not general, but rather specific to the instructional setting” (762).

This skepticism regarding the ability to universalize from research is echoed by Chatman, who argues that class size research indicates that “instructional method should probably be the most important variable in determining class size and should exceed disciplinary content, type and size of institution, student level, and all other relevant descriptive information in creating logical, pedagogical ceilings” (8).  And, indeed, common sense suggests that there is no reason to assume that research on courses whose major goal is the transmission of information applies very effectively to writing courses.

3. Methods and Results of Our Research

We had two main assessment methods.  Because we were concerned about reducing the time commitment of teaching English 20, we asked instructors to keep time logs.  The mainstay of our initial method of assessment was a set of questionnaires given to students at the beginning and end of the semester.  While questionnaires are a perfectly legitimate method of program assessment, they do not provide as complete a picture of a program as a more thorough method would (for more on advantages and disadvantages of questionnaires in program assessment, see Davis et al. 100-107).  Given the budget and time constraints, however, we were unable to engage in those methods usually favored by writing program administrators for accuracy, validity, and reliability, such as portfolio assessment.  We relied to a large degree on self-assessment, which, while not invalid, has obvious limitations.  Nonetheless, the results of the questionnaires were informative.

Because the program goals emphasize the students’ understanding of the writing process, the questionnaires were intended to elicit any changes in student attitude toward the writing process.  We were looking for confirmation of three different hypotheses.

First, there should be a change in their writing process.  Scholarship in composition suggests that students begin with a linear and very brief composing process (writing one version of the paper, which is revised, if at all, at the lexical level).  If English 20 is fulfilling its mission, the end-of-semester answers will indicate that the majority of students finish the course with a richer sense of the writing process–they will revise their papers more, their writing processes will lengthen, and they will revise at more levels than the lexical.

Second, their hierarchy of writing concerns should change.  According to Brian Huot, composition research indicates that raters of college level writing are most concerned with content and organization (1990, 210-254). In various studies which he reviews, he concludes that readers, while concerned with mechanics and sentence structure, consider them important only when the organization is strong (1990, 251).  That is, readers of college papers have a hierarchy of concerns, in that they expect writers to be concerned with mechanics, correctness, and format (sometimes called “lower order concerns”), but that they expect writers to spend less time on those issues than on effectiveness of organization, quality of argument, appropriateness to task, depth and breadth of research, and other “higher order concerns.”

Beginning college students, however, often have that hierarchy exactly reversed: they are often under the impression that mechanics, format, and sentence-level correctness matter most to their readers, and that the argument (or substance of the paper) deserves much less attention.  Hence, if English 20 is succeeding, there should be a shift in student ranking of audience concerns.  That is, their beginning questionnaire answers will indicate that they pay the most attention to lower order concerns and least attention to higher order considerations (whether or not the paper fulfills the assignment; if the paper is well-researched; if the evidence is well-presented; if the organization is effective).  At the end of the semester, they should demonstrate a more accurate understanding of audience expectations–not that they have dropped lexical or format concerns, but that they understand those concerns to be less important for success than the higher order concerns.

Third, there should be variation in student and teacher satisfaction with the courses.  This shift is more difficult to predict than the other hypotheses, but it does make sense to expect that the sections in which students receive greater personal attention would be more satisfying for both instructors and students.  In this regard, we expected to confirm what a report from the National Center for Higher Education Management Systems has identified as “an overwhelming finding”: that students believe they learn more in smaller classes, and that they are far more satisfied with such courses.

As with many studies, our results are most useful for suggesting further areas of research.  One area should be mentioned here.  The very constraints of the assessment method–a quantitative and easily administered method–meant that we were asking students to use our language rather than their own.  Open-ended interviews with students would almost certainly elicit much richer results.  One advantage of our study of class size was that it was part of experimenting with various changes in our program; thus, a large number of sections participated in the study as a whole.  Each semester, we had about twenty sections participating in the study in some form or another, and each semester at least four were held to an enrollment of 15 students.[i]  We also designated at least four sections “control” groups, meaning that we did not reduce class size or consciously make any of the other modifications to English 20 we were contemplating.

An important limitation of our experiment should be mentioned before discussing the results. We ran the experiment over three semesters (WS97, FS97, and WS98), but were only able to use the survey results from the second and third semesters (because we changed the survey between the first and second semester).  In the first semester that we did the experiment, we made a conscious attempt to balance each group in terms of instructor experience and subjective judgments regarding the quality of their teaching.  Given the intricacies of scheduling, however, we were unable to maintain the balances over the next two semesters of the experiment.  This imbalance obviously affected the experimental results in ways that will be noted.

In terms of reducing the time that instructors spent on the course, reducing class size did not have markedly good results.  In FS97, instructors teaching the smaller sections averaged just under twelve hours per week, but they averaged just under fifteen hours per week in WS98.  The control groups reported spending an average of ten and fourteen hours respectively.  Thus, reducing class size did not reduce the amount of time that instructors spent on their courses.

The instructor surveys indicate some reasons their time commitment might not have decreased.  In FS97, for instance, the teachers mentioned that having a smaller class size inspired them to make changes to their teaching–creating new assignments, taking longer to comment on papers, conferring with students for longer periods of time or more often, adding an extra paper.  In other words, the instructors took the opportunity to try something that a class size of twenty had previously dissuaded them from trying.

Obviously, this experimentation on the part of the instructors would have had some kind of impact on our own experiment, but it is impossible to predict what it would have been.  It may well be that we would have had very different results with the same instructors had they continued with a reduced class size for a second semester.  Working with that class size for the second time, they might have made different decisions about how to spend their time.  It’s also possible that this experimentation accounts for some of the unpredicted results in regard to student satisfaction and writing process, but, again, it is impossible to know.  Thus, one conclusion which we can draw from our own experiment is that one is likely to get better results by having the same instructors work with a reduced class size for several semesters in a row.

As was mentioned earlier, students were given a survey at the beginning and the end of the semester, eliciting their views of the relative importance of various aspects of the writing process, the amount (and kind) of revision in which they typically engaged, and their understanding of the expectations of college teachers. Most of the questions were comparison questions: the same question asked about the students’ high school experiences at the beginning of the semester was asked about their English 20 experience at the end.  For instance, students were asked “What aspects of a paper were most emphasized in your high school English course?” at the beginning of the course and “What aspects of a paper were most emphasized in your English 20 course?” at the end of the course. Students were asked to select the five aspects of writing a paper most emphasized in high school and the five most emphasized in their English 20 classes.  The results from FS97 are shown in the table below.  The area of emphasis is listed in order, and the number is the percentage of students who listed that area among their five.  One term which should be explained is “Thesis statement” (abbreviated TS in the tables), which we take to mean, because of the emphasis of our program, revising the central argument, and not simply rewriting the last sentence of the introduction.

FS97
High school: Organization 71.66; Grammar 61.92; Logic and reasoning 57.38; Format 54.8; Revising one’s TS 51.78
Control: Drafting 67.4; Logic 65.3; Peer review 65.3; Organization 57.1; Revising TS 51
Class size: Peer review 86.5; Revising TS 71.2; Logic 61.5; Revising organization 53.9; Organization 48.1

WS98
High school: Grammar 73.7; Organization 67.7; Logic 60; Research 55.9; Format 54.4
Control: Peer review 87.5; Organization 75; Logic 65.9; Revising TS 59.1; Revising one’s organization 48.9
Class size: Organization 85.7; Peer review 85.7; Logic 66.7; Research 61.9; Revising TS 59.1

The results only partially confirmed our hypotheses.  We had predicted that the students would indicate that their high school writing courses put the most emphasis on grammar, format, and outlining and the least emphasis on revision.  We discovered, however, that high school instructors, while putting much emphasis on lower order concerns (e.g., format and grammar), do also emphasize some higher order concerns (e.g., organization and reasoning).   We also discovered more variation between semesters than expected.  While the WS98 results were much the same, with the areas of most emphasis in high school being (in order) grammar, organization, logic and reasoning, research, format, and outlining, revising one’s thesis was second from last (with only 33.6% of students noting it as an area of emphasis in high school).

Our hypotheses were partially confirmed in that, in both semesters, the high school courses put the least emphasis on any form of revision: revising one’s grammar, revising one’s organization, or engaging in peer review.  There was consistently a shift from high school in terms of greater emphasis on revision–it is interesting to note, for instance, that students perceive their high school courses as putting considerable emphasis on organization (71.66 and 61.7), but almost none on revising organization (18.9).  Similarly, while students noted that grammar was emphasized in high school (73.7), revising one’s grammar was not (36.5).  In contrast, while English 20 is perceived as putting much less emphasis on grammar and usage (24.9), that number is much closer to the number of students who perceived an emphasis on revising one’s grammar and usage (25).  We infer that there is considerable variation among high schools–more than we had predicted–but that most high schools emphasize grammar and format more than English 20 does, and that English 20 emphasizes revision more than most high schools.

It is also interesting to note that students tend to report considerable experience with group work in high school courses.  Yet, students consistently reported little high school emphasis on peer review.  This discrepancy suggests that high school groups are not being used for peer review, or that–despite being put in these groups consistently–students do not perceive the peer reviews as important.

Students were also asked what aspects of a paper college teachers think most important by selecting four out of eight possibilities.  We had expected that this question would show a shift from lower order to higher order concerns–that, for instance, the method of library research would be rated high at the beginning of the semester, but would be replaced by the sources and relevance of evidence.  As with the previous table, the results from FS97 are presented in order, with the number representing the percentage of students who selected that aspect among their four.

FS97
High school: Clarity of organization 65.8; Correct grammar and usage 57.28; Logic and reasoning 57.38; Persuasiveness of argument 55.12; Mastery of subject 54.7
Control: Clarity 71.4; Logic 65.3; Persuasiveness 55.1; Grammar 36.7; Mastery 36.7
Class size: Method of library research 80.8; Persuasiveness 80.8; Clarity 71.2; Logic 61.5; Sources 50

WS98
High school: Clarity of organization 69.5; Logic 60; Persuasiveness 58.5; Mastery 54.8; Grammar 50.5
Control: Clarity of organization 78.4; Persuasiveness 71.6; Logic 65.9; Grammar/format/sources 34.1
Class size: Clarity of organization 76.2; Logic 66.7; Persuasiveness 61.9; Mastery 50; Grammar 47.6

What is possibly most interesting about these charts is what they indicate about high school preparation.  Students are relatively well informed about college instructors’ expectations before they begin the course; what little change there is in the control group in the first semester (and the almost complete lack of change in the second semester) suggests that simply being in college for one semester will inform students’ audience expectations.

The second most interesting result is that, judged by our own program goals, the reduced class size sections were a distinct failure in the first semester.  We did not want instructors emphasizing the method of library research; it was positively dismaying to see that listed as the greatest area of emphasis.  This result is typical of what Faigley and Witte have called unexpected results, and it is one consequence of how instructors were selected for the study.

Because scheduling of graduate students is often a last-minute scramble, there were no specific criteria for participating in the reduced class size experiment.  In FS97, one instructor had participated in considerable training (Adams), one was still using a version of the old standardized syllabus and had participated in no training since her entry into the graduate program several years earlier (Chapman), one was taking comprehensive exams and had engaged in only the required training (Brown), and one had participated in some training beyond what was required (Desser).  Adams generally engaged in the environmental mode; Chapman and Brown worked almost exclusively in the presentational mode; Desser worked largely in the environmental mode, but with some reliance on the presentational.  Similarly, the instructors had a range of experience, from two to nine years.  As will be discussed below, the number of years of experience had no effect on the results, but the extent to which a person had participated in training did.  In regard to the question discussed above, for instance, one can see the range of training reflected in the range of answers: only 9 percent of Adams’s students listed method of library research as important; for Brown the figure was 37.5, for Chapman 41.6, and for Desser 30.77.  In other words, the extent to which instructors had participated in departmental training was reflected in the extent to which their courses reflected departmental goals.

As mentioned above, the exigencies of scheduling prevented our being able to balance the study groups.  Thus, what we generally called the control group was not necessarily analogous to the other sections in terms of instructor quality, experience, or preparation.  We have, therefore, also included the average number for each question–that is, the average number for all eighteen sections included in the study.

Students were asked about their perception of any change in the quality of their papers.  In asking this question, we did not assume that students were necessarily accurate judges of the quality of their papers, but we did think that their answer would provide a more specific way of evaluating the course than our course evaluations provided.  That is, whether or not they think their papers are better seems to us a useful way for thinking about student satisfaction.  The number represents the percentage of students who checked that item.  “Average” means the average number for all eighteen sections participating in the study.

FS97
Control: Substantially better 40.1; Somewhat better 44.9; Same 4.1; Somewhat worse 0; Substantially worse 0
Class size: Substantially better 21.2; Somewhat better 55.8; Same 15.4; Somewhat worse 3.9; Substantially worse 0
Average (all eighteen sections): Substantially better 34

WS98
Control: Substantially better 23.9; Somewhat better 53.4; Same 15.9; Somewhat worse 2.3; Substantially worse 0
Class size: Substantially better 16.7; Somewhat better 61.9; Same 11.9; Somewhat worse 4.8; Substantially worse 0

Here again one sees the results of how instructors were selected to participate.  If one looks at this same table for FS97 in regard to individual instructors, one sees a wide variation in student reaction.

Adams: Substantially better 0; Somewhat better 54.5; Same 36.3; Somewhat worse 0; Substantially worse 0
Brown: Substantially better 0; Somewhat better 50; Same 37.5; Somewhat worse 12.5; Substantially worse 0
Chapman: Substantially better 25; Somewhat better 50; Same 25; Somewhat worse 0; Substantially worse 0
Desser: Substantially better 38.4; Somewhat better 46.1; Same 15.3; Somewhat worse 0; Substantially worse 0

It is striking that the different sections had very nearly the same percentage of students who reported some improvement–the greatest difference is in the number of students who reported substantial improvement.  At least with these four instructors, the more training the instructor had, the more likely students were to report substantial gains.

Only one of these instructors participated in the study the next semester–Desser.  In WS98, Desser was in the control group, and the results were as follows:

No answer 11.1; Substantially better 5.5; Somewhat better 55.5; Same 22.2; Somewhat worse 5.5; Substantially worse 0

Another instructor, Ellison, participated both semesters.  He was in another kind of experimental group fall semester (he met regularly with a faculty member and a group of instructors to discuss assignments, teaching videos, and so on) and reduced class size WS98.  One sees a similar pattern in the difference between the two semesters for his students–when he had a reduced class size, more students reported substantial and some improvement:

FS97: Substantially better 15.7; Somewhat better 57.8; Same 21; Somewhat worse 0; Substantially worse 0
WS98 (reduced class size): Substantially better 20; Somewhat better 70; Same 0; Somewhat worse 10; Substantially worse 0

Granted, it is dangerous to speculate on the basis of two instructors, but it is intriguing that each received markedly different results with a reduced class size than without one.  If these instructors are typical, then one can conclude that the same person will get better results with a reduced class size.

There was not always a correlation between amount of training and survey results. For instance, students were asked whether their enjoyment of the paper writing process had changed.  This question was intended as a slightly different way to investigate student satisfaction–ideally, the course would improve both the students’ ability to write college-level papers at the same time that it increased their enjoyment of writing. We were unsure whether or not the question would elicit useful information, however, as we predicted it might be nothing more than an indication of the rigor of the instructors’ grading standards–that students might enjoy writing more in courses with higher GPAs.

Adams: Substantially more 0; Somewhat more 27.2; Same 63.6; Somewhat less 0; Substantially less 0
Brown: Substantially more 0; Somewhat more 28.7; Same 62.5; Somewhat less 12.5; Substantially less 6.25
Chapman: Substantially more 0; Somewhat more 41.6; Same 41.6; Somewhat less 16.6; Substantially less 0
Desser: Substantially more 15.3; Somewhat more 46.1; Same 38.4; Somewhat less 0; Substantially less 0
Average (all eighteen sections): Substantially more 7.35

There is not quite as close a correlation between training and results as there was in regard to improved ability, but it is interesting that instructors with more training did not have any students reporting a decrease in enjoyment.  Similarly, the instructor with the least training–an instructor who tends to rely on the presentational mode–had no students report that their papers were substantially better after taking English 20, and the lowest number of students reporting that they received substantially more (12.5) or somewhat more (12.5) attention in English 20 than they had thought they would get.

We had assumed that students in the sections with fewer students would report more individual attention, but this was not necessarily the case.  The table below shows the results for FS97 and the results for Desser and Ellison for both semesters.

Control: Substantially more 38.8; Somewhat more 38.8; Same 14.3; Somewhat less 0; Substantially less 0
Average (all sections): [values missing]
Class size: Substantially more 34.6; Somewhat more 19.2; Same 32.7; Somewhat less 9.6; Substantially less 1.9
Adams: Substantially more 27.2; Somewhat more 45.4; Same 18.1; Somewhat less 0; Substantially less 0
Brown: Substantially more 12.5; Somewhat more 12.5; Same 56.2; Somewhat less 18.7; Substantially less 0
Chapman: Substantially more 33.3; Somewhat more 16.6; Same 25; Somewhat less 16.6; Substantially less 8.3
Desser FS97: Substantially more 69.2; Somewhat more 7.6; Same 23; Somewhat less 0; Substantially less 0
Desser WS98: Substantially more 22.2; Somewhat more 38.8; Same 33.3; Somewhat less 5.5; Substantially less 0
Ellison FS97: Substantially more 31.5; Somewhat more 40; Same 30; Somewhat less 0; Substantially less 0
Ellison WS98 (reduced class size): Substantially more 31.5; Somewhat more 36.8; Same 26.3; Somewhat less 0; Substantially less 0

Here one sees no striking correlation with amount of training, nor with instructional method.  We speculate that the more important factor is the amount of time the instructor spends in individual conferences with students.  While one does see a striking difference for Desser, there is no change for Ellison (the apparent change is simply the result of 5.2% of his WS98 students not answering that question).  The (highly tentative) inference is that reducing class size will not necessarily result in any group of instructors giving students more individual attention than any other group of instructors might, but it may result in particular instructors doing so.

This range of results among instructors with smaller classes points to our most important finding: reducing class size does not increase overall student satisfaction if the instructor uses the presentational mode.  Reducing class size might, however, increase student satisfaction and confidence on an instructor-by-instructor basis.

The final table with provocative results responds to the question: “If your writing process has changed, in what areas have you seen the greatest change?” Students were asked to select five.  The table is arranged in descending order of frequency in the control group.  The number represents the percentage of students who selected that area among their five.

FS97
Control: Organization 57.1; Library research 51; Revising TS 44.9; Logic 42.9; Drafting 30.6; Peer review 30.6; Revising organization 30.6; Time management 26.5; Knowledge of format 24.5; Write elegant sentences 20.4; Computer use 14.3; Internet research 14.3; Knowledge of grammar 12.2; Reading course material 4.1; Reading 2; Outlining 2
Drafting, all groups: Control 30.6; PLA 27.1; Class size 28.9; Close supervision 45.6; Workshop 27.1
Knowledge of format, all groups: Control 24.5; PLA 18.6; Class size 26.9; Close supervision 29.4; Workshop 20.8

WS98
Control: Organization 48.9; Revising TS 42.1; Peer review 40.9; Revising organization 36.4; Library research 28.4
Close supervision: Logic 48.9; Organization 46.8; Revising TS; Revising organization; Computer use 27.7
Class size: Revising organization 45.2; Logic 42.9; Organization 38.1; Revising TS 35.7; Peer review 28.6
Workshop: Organization 41.7; Peer review 41.7; Revising organization 41.7; Logic 40; Revising TS 36.7

The survey results as a whole did not indicate important gains in the reduced class size sections.  For instance, on average, the students in FS97 did not feel that they received more individual attention than the students in the control group did.  They showed slightly more shifting from lower order to higher order concerns on the whole than did students in the control sections, but fewer rated their paper writing as “substantially better.” At the beginning and end of the semester, we asked students how much of a paper they typically revised; we expected that students in the smaller classes would report engaging in greater revision than students in the control groups.  That was not the case.  At the beginning of the semester, 22.4% of students in the reduced class size sections reported changing under 10% of a paper between drafts, compared to 16.1% of students in the control groups.  At the end of the semester the results were 9.6% and 4.1% respectively.  The largest gain for the reduced class size group was in the 11-25% range (from 41.4 to 51.9); for the control group it was in the 26-50% range (from 28.6 to 40.8).  Similarly, the control group had a larger number of students who reported that they revised “substantially” than did the reduced class size sections (22.5 compared to 17.3).

Students perceived that the greatest emphasis in the course was on peer review, revising the thesis, logic and reasoning, revising organization, organization, format, and drafting.  They saw the greatest change in their writing processes in peer review, organization, thesis revision, organization revision, and library research.  In other words, the students saw the greatest changes in at least one area that they did not think the instructors had especially emphasized (library research).  Most discouraging, 3.9% of the students thought that the papers they were writing after taking English 20 were somewhat worse, and 15.4% thought they were the same.  (None of the students in the control group thought their papers were somewhat worse, and only 4.1% thought their papers had remained the same.)

Looking at the results for individual instructors, however, has very different implications.  Instructors teaching the reduced class size sections did not necessarily have any training, and they were not required (or even encouraged) to change their teaching practices to take advantage of the reduced class size.  Instructors who taught reduced sections and who did have some kind of previous training had markedly different results. If an instructor relies on the presentational mode, as some of our instructors did, then there is no obvious benefit to students in being in a smaller class.

There is, however, some reason to doubt the assumption underlying the presentational mode–that transmitting information about writing improves students’ writing.  For instance, according to Hillocks, research on grammar, usage, and correctness in student writing indicates that knowledge of grammatical rules has little or no effect on correctness in student performance.  That is, the transferring of information about writing does not improve writing itself.

While lecturing has repeatedly been demonstrated to be of little use in teaching writing, there is no reason to conclude that it is useless in other sorts of courses.  Common sense suggests that a good lecturer can lecture equally well to 15 students or 50 students–indeed, the research on class size indicates that the ability to present and communicate material in an interesting way may well be more important than class size for lecture courses (see, for instance, Feldman 1984).  The environmental mode of instruction, on the contrary, is almost certainly affected by class size.  As McKeachie has said, “The larger the class, the less the sense of personal responsibility and activity, and the less the likelihood that the teacher can know each student personally and adapt instruction to the individual student” (1990, 190).

[i]. The other kinds of sections were: ones with an attached peer-learning assistant; ones whose instructors met regularly with a faculty member to discuss the course; ones in which students met exclusively in small groups with fewer required contact hours per semester.