
LiberalArtsOnline Volume 7, Number 5
September 2007

This month's author, Daniel F. Chambliss, professor of sociology at Hamilton College, writes about the importance of incorporating students' perspectives into changes in courses, departments, programs, and curricular structures. He suggests that what college professionals want for an institution is not always in line with what could most help students. Further, he warns us that data collected on institutional collectivities, such as courses or departments, should be interpreted with care, especially if used to inform policy. Such data may give us an inaccurate or incomplete picture of our students. His recommendation is to use the individual student as the unit of analysis when trying to measure student learning outcomes. He gives tangible examples of how students and data can be misunderstood, and offers helpful suggestions for anyone working to improve student learning.

A Neglected Necessity in Liberal Arts Assessment: The Student as the Unit of Analysis
by Daniel F. Chambliss
Eugene M. Tobin Distinguished Professor of Sociology
Hamilton College

Educational results for real students matter; the proper goal of educators is to enhance the educational experience and learning of students. But our daily work as college professionals (deans, presidents, administrators, and professors) often pulls us away from understanding the lived experience of real students, so that in formulating policies, we lose sight of the educational results of our organizational work. In two different dimensions, which I will call the "horizontal" and the "vertical," we frequently slide away from an understanding of individual student reality. We forget that (a) in the horizontal dimension, students do not see the world as faculty and administrators do—they are in a sense different kinds of people; and (b) in the vertical dimension, the success of individual students doesn’t directly reflect the success of classes, departments, programs, or institutions, since individual experience cannot automatically be inferred from the behavior of collectivities.
 
In the language of social science, therefore, outcomes assessment should take the individual student as the unit of analysis. Within institutions, however, data are typically gathered on courses, departments, program initiatives, and the like, overlooking this methodological requirement, so we don't measure the results we claim to produce. There's been lots of talk recently among accreditation agencies, education leaders, and assessment scholars about the importance of doing "outcomes assessment," but in this sense we often don't do it. Effective administrative action in shaping student outcomes requires (a) understanding the lived experience of students; (b) sampling on the entire student body, using individuals as the unit of analysis; and (c) learning how particular organizational actions (program initiatives, courses, majors, etc.) affect the totality of student outcomes. (Here, by the way, is where I think individually based aggregate measures such as the NSSE or the CLA fall short.) Frequently, I will show, this is not what happens.

Student Perspective Isn’t Faculty or Administration Perspective

Along a "horizontal" dimension, imagining people standing side by side, students aren't like professors or administrators. They sleep later in the morning and they stay up half the night. They take tests, while other people write and grade tests. Students follow rules; deans and professors make up rules. Many students live on their parents' money. Most have never read Darwin, Marx, or Freud. They were born in specific years, belong to particular generations, and see the world through the eyes of the era in which they grew up. Each fall, Beloit College issues a "mindset" list, reminding its faculty that contemporary first-year students have always lived in a world with MTV and AIDS, have never owned (or even seen) a record player, and remember neither Johnny Carson nor the USSR (http://www.beloit.edu/~pubaff/mindset). Students not only hold different opinions and a different view of things than we do; they hold an entirely different place in life. For a teacher, or a college, to succeed in transforming students, that difference must be understood and put to use.
 
Consider a simple example. Academic deans and professors view their colleges as organizations of programs, departments, and faculties, all deployed in such a way as to provide a good education. We believe that courses are fundamental, curricula are important, and professors stand at the center of college life. We "would hope" students take their studies seriously, and sometimes think they "should" work a 40-hour week on academics.

But for a freshman entering college, the immediate challenge is managing an independent life: living on one’s own, away from parents, with no one enforcing curfews. Students can stay out late without permission, maybe get a little (or very) drunk, and even have a boyfriend or girlfriend sleep over for an entire night. Drugs! Sex! No adults!

And some classes.
 
And in the academic realm itself, students and professionals experience things differently:

  • Foundations, presidents, and deans love to talk about innovation, new programs, exciting new turns in curriculum planning and pedagogy. But for students, it’s all new—Western Civilization, Introduction to Anthropology, Geology with Lab, Shakespeare, the whole thing. Picking one’s own classes, schedules, and teachers is a novelty; deciding not to attend class is breathtaking; to most freshmen, the daily reality of being in college, with all that entails, is itself astonishingly innovative.

     

  • At the same time, many professors are bored teaching their old courses: Calculus, Introduction to the Novel, Survey of American History. They think Marx is outdated and Freud is passé. But beginning students want those basics. They want to study psychology, literature, philosophy—the big issues, not the technical arcana. They don’t care if a course is new or old, innovative or traditional; to them every course is new. (Of course, professors have trouble remembering this. As the old mathematician once said, "I’ve been teaching calculus for 30 years, and by damn, they still don’t get it!")

     

  • Registrars, trying to balance classroom use across the day, suggest putting required courses for majors at 8:00 a.m., thus equalizing classroom loads across the day—and guaranteeing a bit of misery for any student who enrolls in one of those majors. Department chairs, similarly trying to equalize faculty loads by balancing sections, move students into sections they didn’t want, with professors they didn’t ask for. It’s fair for faculty, but is it good for student learning?

     

  • The dean at a small college oversees perhaps 180 professors. But any particular student meets, in her entire career, perhaps only 20 or so, and at the outset encounters only four or five in a semester. Those are the ones that matter to her.

     

  • When a recent campus newspaper editorial complained about the diminishing level of student contact with faculty, a highly respected English professor responded by listing professors who sing in the choir, help with sports teams, host dinners for students, and sit in the campus coffee shop, in an effort to prove that "we’re trying." But the editor’s complaint was not that professors aren’t active; it was that students didn’t encounter them, a very different matter.

Students and faculty also approach academic disciplines with different expectations. Faculty, for instance, typically place the psychology department among the natural sciences; most psychologists themselves do, and many fiercely advance a scientific agenda and image for their discipline. But most freshmen (reasonably) expect psychology to explain parental divorce, boyfriend problems, and why roommates fight. When they discover that hypothesis testing often figures more prominently than people, many students drop psychology.

Some professors, sensing the gulf between the students' perspective and their own, see student priorities and values as immature or just silly (sometimes true)—and therefore illegitimate or immoral (a different matter). "In the real world," one professor told me in arguing for early morning classes, "people have to get up and go to work!" Yes, but a student could respond that in the real world people don't have lifetime job security with three months on their own in the summer. Or as one Dean's List student described the perception gap, "Administrators believe that students do two things: drink and work. If we aren't working, we're drinking. Therefore, they think if they make us work more, we'll drink less." In each case here, the error is simply to forget that students have had different experiences, have different interests, and slice up the world differently than do the adults who run the place.

The mistake is easy to fix: talk with some students, and incorporate what you learn into planning and policy decisions. Respecting the students’ point of view isn’t pandering; it’s smart. By taking their view into account, faculty and administrators can have their cake and entice students to eat it too, designing programs and policies that tap, not ignore, students’ perceptions and motivations.

The Individual Is Not the Collective

Students, then, aren’t like professors; similarly, groups aren’t like individuals. Along a "vertical" dimension, we frequently err by inferring individual-level student experience downwards from group-level information (such as course evaluations or program success). This is a false deduction, an error in logic known in social science as the "ecological fallacy."

Groups are different kinds of things than people are, and research findings about groups frequently do not apply to the individuals belonging to them. For instance, countries (groups) that are rich have higher rates of heart disease than do poor countries, but rich individuals have lower rates than poor individuals. Or do you remember those specious "voting maps" that appeared following the 2000 U.S. presidential election, showing "how counties voted," the vast majority going Republican? Vast swaths of America were painted in red. But counties (groups) don't vote; people do. And a huge "win" among counties is completely irrelevant to the popular vote, to the electoral vote, and to the outcome.
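
The reversal is easy to see in a toy computation. Here is a minimal Python sketch, with every name and number invented purely for illustration, showing the group-level and individual-level relationships running in opposite directions:

    # Toy illustration of the ecological fallacy; all numbers are invented.
    # Each record is an (individual_income, has_heart_disease) pair.
    data = {
        "poorland": [(8, 1), (10, 1), (12, 0), (15, 0)],
        "richland": [(70, 1), (80, 1), (90, 1), (100, 0)],
    }

    # Group level: the richer country has the HIGHER disease rate.
    for country, people in data.items():
        mean_income = sum(income for income, _ in people) / len(people)
        rate = sum(sick for _, sick in people) / len(people)
        print(f"{country}: mean income {mean_income:.0f}, disease rate {rate:.0%}")

    # Individual level: within each country, richer people are sick LESS often.
    for country, people in data.items():
        median = sorted(income for income, _ in people)[len(people) // 2]
        below = [sick for income, sick in people if income < median]
        above = [sick for income, sick in people if income >= median]
        print(f"{country}: below-median disease rate {sum(below)/len(below):.0%}, "
              f"at/above-median rate {sum(above)/len(above):.0%}")

Here poorland's overall rate (50%) is lower than richland's (75%), yet inside each country the poorer half is sicker than the richer half. Inferring the individual pattern from the group pattern gets the direction exactly backwards.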

Similarly, in a liberal arts program the excellence of a single academic department is by itself nearly useless information: a department can be great in itself, but if it teaches only a few students, it has little effect on overall outcomes for the college. The same argument applies to courses: a substantial majority of an institution's courses might be evaluated as excellent, but the educational results could be slight if only a few students were actually enrolled in those courses. Administrators tend, as I've said, to think in terms of the departments they oversee; the programs they've created; or, at best, the individual professors whom they have helped to hire and whom they have to evaluate. But in no case does the intrinsic quality of these entities—departments, programs, professors—directly predict what is happening with students.

Such errors are common, consequential, and sometimes laughable:

 

  1. US News & World Report gathers data on the "percentage of small classes" that colleges offer. At Hamilton College, a clear majority of classes (69%, by the US News count) have fewer than 20 students. But when we studied student transcripts, we found that most students are typically enrolled in larger classes—indeed, that's what makes those classes large! Most classes thus are small, but most students are in larger classes. At the hypothetical extreme, a college could offer ten courses total, in nine of which one lone student enrolled, with 2,000 students in the tenth. The college could then triumphantly claim (and USNWR accurately report) "90% of our classes are small." Remember that classes are small because students aren't in them. (A sketch after this list works through the arithmetic.)
  2. A state university branch advertises that it is "one of just 10 colleges in [its state] with both a nationally accredited school of education and an AACSB-accredited school of business." Of course, no actual student will ever benefit from this highly touted fact, since students enroll in only one school. But administrators are proud of the success of both, and believe the public should care.
  3. A well-regarded academic department, having seen its enrollments drop by nearly half in a five-year period, points to the "improved rigor" of its program. And no doubt students who remain in the classes gain some benefit. But you can't have rigor without students to whom it is applied. Half the former number of students are getting—for all the department's effort—no rigor at all in the discipline, nor any of the discipline's content. Whether the presumed benefits of raising the overall standards for the college occur, and whether they justify the very real cost, remains an open question.
  4. When distribution requirements were ended, course enrollments in lab sciences remained the same, so college leaders initially thought dropping distribution requirements made no difference in what students were studying. But transcript analyses soon revealed that science majors were taking even more science courses than before and were, in effect, replacing other students who had left science altogether. Therefore, dropping distribution requirements dramatically changed what courses students were taking, but did not change the raw number of students in a class. Gathering data on classes had thus been profoundly misleading as to what students were doing.
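
A minimal Python sketch of the arithmetic in item 1, using the invented extreme of ten classes (nine singletons plus one class of 2,000), makes the gap between the two tallies concrete:

    # Hypothetical extreme from item 1: nine classes of one student each,
    # plus one class of 2,000 students. All numbers are invented.
    class_sizes = [1] * 9 + [2000]
    SMALL = 20  # threshold for a "small" class, as in the US News measure

    small = [size for size in class_sizes if size < SMALL]

    # Unit of analysis: the class. Nine of ten classes are small.
    share_of_classes = len(small) / len(class_sizes)

    # Unit of analysis: the student. Weight each class by its enrollment.
    share_of_students = sum(small) / sum(class_sizes)

    print(f"Classes that are small:            {share_of_classes:.0%}")   # 90%
    print(f"Students sitting in a small class: {share_of_students:.1%}")  # 0.4%

The same enrollment data yield 90% by one count and under half a percent by the other; only the second number describes what a typical student actually experiences.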

In all of these cases, attention to information gathered about collectivities—departments or courses—was mistakenly used to draw conclusions about the actual experience of students. Our data collection units tend to be such collectivities—courses, programs, graduating classes—which are then tallied up as if they were all of roughly equal importance. But if the goal is educated students, not just good programs or institutions, then the data must be collected on students, weighted in a "one person–one vote" fashion. A course can be rigorous, well organized, steeped in valuable and current literature, featuring great media, and employing the best active-learning pedagogy. But if it enrolls only a few students, it may easily be irrelevant to the desired result.
 
At this point, critics may protest, "You’re just talking about enrollments!" No. Counting enrollments per se should never be the basis for decision making. The quality of the students’ experience—did they learn to write an essay? do they now understand photosynthesis?—must be a crucial consideration. But if the quality of learning matters, so too does the quantity, the number, of real students—human beings—who have learned. After all, "enrollments" is just abstracted administrative jargon for "actual students in classes."

In sum, educators talk a lot about student learning outcomes, but if assessments are only made by course, or department, or professor, the students have already been selected; the measurement ignores all of the students who are not in the course. Therefore, in gathering outcomes data, one must sample on the student body as a whole, not just on groups that exist as pregiven administrative categories. What happens with people is not what happens with collectivities, and findings that are true of one level on this vertical dimension need not, at a different level, be true at all.

Why Do These Mistakes Happen?

In summary, too often we forget that (a) students aren’t like professors or administrators, and (b) collectivities aren’t like individual people. The reason for such failures, apart from a lack of methodological sophistication in doing social science research, is the simplest kind of human psychology.

We most keenly feel our own efforts, our own exertions, where our energy is focused. Presidents are paid to create visions, excite the trustees, and attract support from foundations, and so they do: near the end of 1999, the president of one top-ranked college rolled out a huge, expensive plan for rethinking the college's strategic vision, although he privately found the exercise dubious at best. When asked why he proceeded nonetheless, he said, "It's the year 2000; the trustees want something millennial." Deans, for their part, manage their work through departments and programs, building resumes with new initiatives, curricular changes, and new facilities. Heading off to conferences, deans and associate deans need something to talk about. Professors, too, think about what is interesting or difficult for them, what they have to work on. They'll work very long hours, spend time on campus on weekends, and grade papers endlessly in efforts to help their students. And institutional researchers will reasonably begin their work from available sources, with administrative data. As the paid employees of academic institutions, then, we all concentrate on our formal, institutionalized, organized efforts to help our students. So it's not surprising that when we try to measure what happens, we measure our own efforts: what buildings are newly opened; what programs are designed and initiated; what's in the course catalogue; the classes we teach and how many students are in them; even how successful those classes are.

That’s all fine, but it’s not how the students see things. They don’t care about our efforts. For them, and perhaps for their learning, who even knows if classes are the important factor? The amount of work we put in, how many years we spent on curriculum planning, how much our new buildings cost—could easily be totally irrelevant.

Policy Implications

If educational leaders want results for students, then, they must focus relentlessly on what happens to students—on the actual outcomes for particular, real students—not on what is offered, nor what they’ve tried, nor what great new program is in place; and not on what courses "ought to be good," nor what "one would hope" that faculty are doing. The hopes and efforts of faculty, administrators, and departments, strenuous though they may be, are by themselves irrelevant; effort per se predicts nothing about results. What matters is what actually affects real students.

Therefore:

 

  1. Start by sampling on students, not programs or professors. Recently at Hamilton, we wanted to know what sort of instruction students were getting in oral communication. We could have conducted a survey of all faculty, asking if they included public speaking in their courses. But that would assume that all faculty have equal access to students. Instead, we began with the students themselves, checking whether they had actually been exposed to any such instruction; we then worked back to surveying the teachers who actually taught those students, to get details of course syllabi. More broadly, one can analyze a random sample of student transcripts. Even a 30-minute flip through, say, 100 randomly sampled transcripts will provide a clear, often startling view of the actual careers of your typical students, and will automatically reflect enrollments, course levels, and professor popularity (see the sketch following this list).
  2. Then, make a fair accounting of all students at the college, not just those who finish or those in flagship programs. Many students gain nothing from programs that legitimately claim to be "excellent" (in quality of faculty, facilities, offerings). Certainly, students may benefit even from a program they aren’t taking, if it raises standards generally or attracts more top-quality students. Maybe the faculty publishes well, the curriculum plan makes sense, and the teachers win high student evaluations. But if the program teaches only a few students for the resources used, it is actually penalizing all of the students who aren’t in the program. At the same time, a popular department with low academic standards may, by its very popularity, simply be damaging that many more students. Again, the important empirical issue is not whether programs are good or bad in themselves, but whether and how much real students benefit.
  3. Finally, remember that departmental or program-level assessment, so politically feasible and apparently efficient, may easily be irrelevant to student outcomes. Departmental assessment, for instance, allows a department to improve results by excluding weaker students. Obviously, no department can be responsible for the educational outcomes for every student at a college. But every program operates, in some way, at the expense of the students who aren’t in it. And that should be acknowledged.
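
As one illustration of what sampling on students (recommendation 1) might look like in code, here is a hypothetical Python sketch; the data layout and function names are invented for the example, not Hamilton's actual procedure:

    import random

    # `enrollments` is assumed to be a list of (student_id, course_id)
    # records, as might be exported from a registrar's database.
    def sample_transcripts(enrollments, n_students=100, seed=0):
        """Draw a uniform random sample of students; return their transcripts."""
        by_student = {}
        for student, course in enrollments:
            by_student.setdefault(student, []).append(course)
        rng = random.Random(seed)
        chosen = rng.sample(sorted(by_student), min(n_students, len(by_student)))
        return {s: by_student[s] for s in chosen}

    # Each sampled student counts exactly once ("one person, one vote"), so a
    # course enters the tally in proportion to how many students actually took it.
    def share_exposed(transcripts, target_courses):
        """Fraction of sampled students who took at least one target course."""
        hits = sum(1 for courses in transcripts.values()
                   if any(c in target_courses for c in courses))
        return hits / len(transcripts)

Starting from the student sample, one can then work back, as in the oral-communication study above, to the syllabi of the particular courses the sampled students actually took.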

There’s good news here for leaders. A small group of vigorous programs with good enrollments and excellent professors, supported by the rest, can successfully educate most of your students; in fact, a small group of excellent departments, professors, and courses can create a marvelous educational program for virtually the entire student body. Conversely (more good news for deans), your weakest faculty and departments need not matter, so long as they have no students. There’s no inescapable need for the dean to reinvigorate deadwood, dismiss poor professors, or sink yet more money into shaky programs. Remember, the goal isn’t lots of great departments; it’s lots of well-educated students. But to reach that goal, one must always be clear about the proper unit of analysis, and always keep students—not professors, departments, or programs—firmly in mind.

--------------------------

Daniel F. Chambliss is the Eugene M. Tobin Distinguished Professor of Sociology at Hamilton College. This work is part of the Mellon Project for Longitudinal Assessment of Student Learning Outcomes at Hamilton College, supported since 1999 by a series of major grants from the Andrew W. Mellon Foundation. I am grateful to Christopher Takacs for detailed comments on drafts of this paper; additional contributions came from Ann Owen, David C. Paris, Mitchell Stevens, Shauna Sweet, Carol Trosset, and Marcia Wilkinson.

Direct responses to lao@wabash.edu. We will forward comments to the author.

---------------------------------------

The comments published in LiberalArtsOnline reflect the opinions of the author(s) and not necessarily those of the Center of Inquiry or Wabash College. Comments may be quoted or republished in full, with attribution to the author(s), LiberalArtsOnline, and the Center of Inquiry in the Liberal Arts at Wabash College.

---------------------------------------

LiberalArtsOnline is an occasional electronic publication of the Center of Inquiry in the Liberal Arts about assessment in liberal arts education. We invite you to subscribe and to submit an essay. Past issues are available in the archives.