LiberalArtsOnline Volume 7, Number 5
September 2007
This month's author, Daniel F. Chambliss, professor of sociology at Hamilton College, writes about the importance of incorporating our students' perspectives into changes in courses, departments, programs, and curricular structures. He suggests that what college professionals want for an institution is not always in line with what could most help students. Further, he warns us that data drawn from institutional collectivities, such as courses or departments, should be interpreted with care, especially if used to inform policy. Such data may give us an inaccurate or incomplete picture of our students. His recommendation is to use the individual student as the unit of analysis when trying to measure student learning outcomes. He gives tangible examples of how students and data can be misunderstood, and offers helpful suggestions for anyone working to improve student learning.
A Neglected Necessity in Liberal Arts Assessment: The Student as the Unit of Analysis
by Daniel F. Chambliss
Eugene M. Tobin Distinguished Professor of Sociology
Hamilton College
Educational results for real students matter; the proper goal of educators is to enhance the educational experience and learning of students. But our daily work as college professionals (deans, presidents, administrators, and professors) often pulls us away from understanding the lived experience of real students, so that in formulating policies, we lose sight of the educational results of our organizational work. In two different dimensions, which I will call the "horizontal" and the "vertical," we frequently slide away from an understanding of individual student reality. We forget that (a) in the horizontal dimension, students do not see the world as faculty and administrators do—they are in a sense different kinds of people; and (b) in the vertical dimension, the success of individual students doesn’t directly reflect the success of classes, departments, programs, or institutions, since individual experience cannot automatically be inferred from the behavior of collectivities.
In the language of social science, therefore, outcomes assessment should take the individual student as the unit of analysis. Within institutions, data gathering (on courses, departments, program initiatives, etc.) often overlooks this methodological requirement, so we don't measure the results we claim to produce. There's been lots of talk recently among accreditation agencies, education leaders, and assessment scholars about the importance of doing "outcomes assessment," but in this sense we often don't do it. Effective administrative action in shaping student outcomes requires (a) understanding the lived experience of students; (b) sampling on the entire student body, using individuals as the unit of analysis; and (c) learning how particular organizational actions (program initiatives, courses, majors, etc.) affect the totality of student outcomes. (Here, by the way, is where I think that individually based aggregate measures such as the National Survey of Student Engagement [NSSE] or the Collegiate Learning Assessment [CLA] fall short.) Frequently, I will show, this is not what happens.
Student Perspective Isn’t Faculty or Administration Perspective
Along a "horizontal" dimension, imagining people standing side by side, students aren't like professors or administrators. They sleep later in the morning and they stay up half the night. They take tests, while other people write and grade tests. Students follow rules; deans and professors make up rules. Many students live on their parents' money. Most have never read Darwin, Marx, or Freud. They were born in specific years, belong to particular generations, and see the world through the eyes of the era in which they grew up. Each fall, Beloit College issues a "mindset" list, reminding its faculty that contemporary first-year students have always lived in a world with MTV and AIDS, have never owned (or even seen) a record player, and remember neither Johnny Carson nor the USSR (http://www.beloit.edu/~pubaff/mindset). Students not only hold different opinions and views than we do; they occupy an entirely different place in life. If teachers and colleges are to succeed in transforming their students, they need to understand and use such knowledge.
Consider a simple example. Academic deans and professors view their colleges as organizations of programs, departments, and faculties, all deployed in such a way as to provide a good education. We believe that courses are fundamental, curricula are important, and professors stand at the center of college life. We "would hope" students take their studies seriously, and sometimes think they "should" work a 40-hour week on academics.
But for a freshman entering college, the immediate challenge is managing an independent life: living on one’s own, away from parents, with no one enforcing curfews. Students can stay out late without permission, maybe get a little (or very) drunk, and even have a boyfriend or girlfriend sleep over for an entire night. Drugs! Sex! No adults!
And some classes.
And in the academic realm itself, students and professionals experience things differently.
Students and faculty also approach academic disciplines with different expectations. Faculty, for instance, typically place the psychology department among the natural sciences; most psychologists themselves do, and many fiercely advance a scientific agenda and image for their discipline. But most freshmen (reasonably) expect psychology to explain parental divorce, boyfriend problems, and why roommates fight. When they discover that hypothesis testing often figures more prominently than people, many students drop psychology.
Some professors, sensing the gulf between the students' perspective and their own, see student priorities and values as immature or just silly (sometimes true)—and therefore illegitimate or immoral (a different matter). "In the real world," one professor told me in arguing for early morning classes, "people have to get up and go to work!" Yes, but a student could respond that in the real world people don't have lifetime job security with three months on their own in the summer. Or as one Dean's List student described the perception gap: "Administrators believe that students do two things: drink and work. If we aren't working, we're drinking. Therefore, they think if they make us work more, we'll drink less." In each case, the error is simply to forget that students have had different experiences, have different interests, and slice up the world differently than do the adults who run the place.
The mistake is easy to fix: talk with some students, and incorporate what you learn into planning and policy decisions. Respecting the students’ point of view isn’t pandering; it’s smart. By taking their view into account, faculty and administrators can have their cake and entice students to eat it too, designing programs and policies that tap, not ignore, students’ perceptions and motivations.
The Individual Is Not the Collective
Students, then, aren’t like professors; similarly, groups aren’t like individuals. Along a "vertical" dimension, we frequently err by inferring individual-level student experience downwards from group-level information (such as course evaluations or program success). This is a false deduction, an error in logic known in social science as the "ecological fallacy."
Groups are different kinds of things than are people, and research findings about groups frequently do not apply to the individuals belonging to them. For instance, countries (groups) that are rich have higher rates of heart disease than do poor countries, but rich individuals have lower rates than poor individuals. Or do you remember those specious "voting maps" that appeared following the 2000 U.S. presidential election, showing "how counties voted," the vast majority going Republican? Vast swaths of America were painted in red. But counties (groups) don’t vote; people do. And a huge "win" among counties is completely irrelevant to either the popular vote or the electoral vote and outcome.
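The reversal is easy to reproduce with numbers. Below is a minimal sketch, in Python, using three invented countries whose incomes and diagnoses are fabricated purely for illustration. In it, income and heart disease are positively correlated at the country level even though, inside every single country, the poorer individuals are the sicker ones:

    from statistics import mean

    def pearson(xs, ys):
        # Plain Pearson correlation, with no external libraries.
        mx, my = mean(xs), mean(ys)
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (sx * sy)

    # Hypothetical countries: (individual incomes, 1 = has heart disease).
    countries = {
        "Poorland": ([10, 20, 30, 40],  [1, 0, 0, 0]),
        "Midland":  ([40, 50, 60, 70],  [1, 1, 0, 0]),
        "Richland": ([70, 80, 90, 100], [1, 1, 1, 0]),
    }

    # Individual level: within every country, lower income goes with disease.
    for name, (incomes, disease) in countries.items():
        print(f"{name}: individual-level r = {pearson(incomes, disease):+.2f}")  # all negative

    # Group level: across countries, richer countries have higher disease rates.
    mean_incomes  = [mean(inc) for inc, _ in countries.values()]
    disease_rates = [mean(d) for _, d in countries.values()]
    print(f"country-level r = {pearson(mean_incomes, disease_rates):+.2f}")      # positive

The country-level correlation comes out positive while every within-country correlation is negative; reading the individual relationship off the group-level number would get the sign exactly backwards.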
Similarly, in a liberal arts program the excellence of a single academic department is by itself nearly useless information: a department can be great in itself, but if it teaches only a few students, it has little effect on overall outcomes for the college. The same argument applies to courses. A substantial majority of an institution's courses might be evaluated as excellent; but the educational results could be slight, if only a few students were actually enrolled in those courses. Administrators tend, as I've said, to think in terms of the departments they oversee; the programs they've created; or, at best, the individual professors whom they have helped to hire and whom they must evaluate. But in no case does the intrinsic quality of these entities—departments, programs, professors—directly predict what is happening with students at all.
Such errors are common, consequential, and sometimes laughable. Again and again, information gathered about collectivities—departments or courses—is mistakenly used to draw conclusions about the actual experience of students. Our data collection units tend to be such collectivities—courses, programs, graduating classes—which are then tallied up as if they were all of roughly equal importance. But if the goal is educated students, not just good programs or institutions, then the data must be collected on students, weighted in a "one person–one vote" fashion. A course can be rigorous, well organized, steeped in valuable and current literature, featuring great media, and employing the best active-learning pedagogy. But if it enrolls only a few students, it may easily be irrelevant to the desired result.
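To see the arithmetic, consider a toy sketch in Python, with entirely invented ratings and enrollments (none of these figures come from any real college). The same set of course evaluations yields a glowing average when each course gets one vote, and a mediocre one when each student gets one vote:

    # Hypothetical courses: (name, quality rating on a 1-5 scale, enrollment).
    courses = [
        ("Advanced Seminar A", 5.0, 8),
        ("Advanced Seminar B", 5.0, 6),
        ("Advanced Seminar C", 4.8, 10),
        ("Intro Lecture X",    2.5, 300),
        ("Intro Lecture Y",    3.0, 250),
    ]

    # One course, one vote: most courses are excellent, so the catalogue looks superb.
    per_course = sum(rating for _, rating, _ in courses) / len(courses)

    # One student, one vote: weight each rating by the students actually in the room.
    total_students = sum(n for _, _, n in courses)
    per_student = sum(rating * n for _, rating, n in courses) / total_students

    print(f"average quality per course:  {per_course:.2f}")   # about 4.1
    print(f"average quality per student: {per_student:.2f}")  # about 2.8

Four of the five courses are excellent, yet the typical student, who is far more likely to be sitting in a large lecture, experiences something closer to the mediocre end of the scale.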
At this point, critics may protest, "You’re just talking about enrollments!" No. Counting enrollments per se should never be the basis for decision making. The quality of the students’ experience—did they learn to write an essay? do they now understand photosynthesis?—must be a crucial consideration. But if the quality of learning matters, so too does the quantity, the number, of real students—human beings—who have learned. After all, "enrollments" is just abstracted administrative jargon for "actual students in classes."
In sum, educators talk a lot about student learning outcomes, but if assessments are made only by course, or department, or professor, the students have already been selected; the measurement ignores all of the students who are not in the course. Therefore, in gathering outcomes data, one must sample on the student body as a whole, not just on groups that exist as pre-given administrative categories. What happens with people is not what happens with collectivities, and findings that are true at one level on this vertical dimension need not, at a different level, be true at all.
Why Do These Mistakes Happen?
In summary, too often we forget that (a) students aren’t like professors or administrators, and (b) collectivities aren’t like individual people. The reason for such failures, apart from a lack of methodological sophistication in doing social science research, is the simplest kind of human psychology.
We most keenly feel our own efforts, our own exertions, where our energy is focused. Presidents are paid to create visions, excite the trustees, and attract support from foundations, and so they do: near the end of 1999, the president of one top-ranked college rolled out a huge, expensive plan for rethinking the college's strategic vision, although he privately found the exercise dubious at best. When asked why he proceeded nonetheless, he said, "It's the year 2000; the trustees want something millennial." Deans, for their part, manage their work through departments and programs, building resumes with new initiatives, curricular changes, new facilities. Heading off to conferences, deans and associate deans need something to talk about. Professors, too, think about what is interesting or difficult for them, what they have to work on. They'll work very long hours, spend time on campus on weekends, and grade papers endlessly in efforts to help their students. And institutional researchers will reasonably begin their work from available sources, with administrative data. As the paid employees of academic institutions, then, we all concentrate on our formal, institutionalized, organized efforts to help our students. So it's not surprising that when we try to measure what happens, we measure our own efforts: what buildings are newly opened; what programs are designed and initiated; what's in the course catalogue; the classes we teach and how many students are in them; even how successful those classes are.
That’s all fine, but it’s not how the students see things. They don’t care about our efforts. For them, and perhaps for their learning, who even knows if classes are the important factor? The amount of work we put in, how many years we spent on curriculum planning, how much our new buildings cost—could easily be totally irrelevant.
Policy Implications
If educational leaders want results for students, then, they must focus relentlessly on what happens to students—on the actual outcomes for particular, real students—not on what is offered, nor what they’ve tried, nor what great new program is in place; and not on what courses "ought to be good," nor what "one would hope" that faculty are doing. The hopes and efforts of faculty, administrators, and departments, strenuous though they may be, are by themselves irrelevant; effort per se predicts nothing about results. What matters is what actually affects real students.
There's good news here for leaders, then. A small group of vigorous programs with good enrollments and excellent professors, supported by the rest, can successfully educate most of your students; in fact, a small group of excellent departments, professors, and courses can create a marvelous educational program for virtually the entire student body. Conversely (more good news for deans), your weakest faculty and departments need not matter, so long as they have no students. There's no inescapable need for the dean to reinvigorate deadwood, dismiss poor professors, or sink yet more money into shaky programs. Remember, the goal isn't lots of great departments; it's lots of well-educated students. But to reach that goal, one must always be clear about the proper unit of analysis, and always keep students—not professors, departments, or programs—firmly in mind.
--------------------------
Daniel F. Chambliss is the Eugene M. Tobin Distinguished Professor of Sociology at Hamilton College. This work is part of the Mellon Project for Longitudinal Assessment of Student Learning Outcomes at Hamilton College, supported since 1999 by a series of major grants from the Andrew W. Mellon Foundation. I am grateful to Christopher Takacs for detailed comments on drafts of this paper; additional contributions came from Ann Owen, David C. Paris, Mitchell Stevens, Shauna Sweet, Carol Trosset, and Marcia Wilkinson.
Direct responses to lao@wabash.edu. We will forward comments to the author.
---------------------------------------
LiberalArtsOnline is an occasional electronic publication of the Center of Inquiry in the Liberal Arts about assessment in liberal arts education. We invite you to subscribe and to submit an essay. Past issues are available in the archives.