
Liberal Arts Colleges: Taking the Lead on Assessment and Accountability


LiberalArtsOnline Volume 5, Number 1
January 2005

Assessment and accountability—two words that may make you shudder or just plain tune out. We have all heard politicians, parents, and others questioning the rising costs of education and publicly demanding accountability from educators at all levels. It is easy to imagine these concerns forcing our teaching to become a mindless, bureaucratic chore. Richard Hersh, who has served as president of two liberal arts colleges, makes the case for another perspective—the educational imperative for assessment. He suggests that liberal arts colleges have an obligation to direct efforts towards addressing these issues of accountability. As you read this month’s article, consider what kinds of evidence could help make you and your colleagues more effective liberal arts educators.

--Kathleen S. Wise, Editor

--------------------------------------


Liberal Arts Colleges: Taking the Lead on Assessment and Accountability
by Richard H. Hersh
Senior Fellow, Council for Aid to Education (a subsidiary of RAND)
Former President, Hobart and William Smith Colleges
Former President, Trinity College (Hartford)


From Cicero’s artes liberales to the trivium and quadrivium of the medieval schoolmen, to the studia humanitatis of the Renaissance humanists, to Cardinal Newman’s definition in his Idea of a University, to the attempts at common curricula in the first half of this century, to the chaotic cafeteria that passes for a curriculum in most American universities today, the concept has suffered from vagueness, confusion, and contradiction.
                                         – Donald Kagan [1]


Cafeteria or Fine Dining: What Difference Does it Make?

When I was president at private liberal arts colleges, traveling the usual admissions and fund-raising circuits, I often encountered the following paired questions: "What difference do small liberal arts colleges make?" and, "Are you really worth the thousands of extra dollars?" Rarely, thank goodness, was I ever asked about the cafeteria curriculum with its attendant vagueness, confusion, and contradiction that Donald Kagan describes as common in American universities. Nor was I asked if such confusion actually mattered. Needless to say, my assumption was that my college offered far better fare—more akin to fine dining—and that it did matter! 

I earnestly gave affirmative answers to these questions: higher graduation rates, smaller classes, professors who actually taught and advised, greater alumni satisfaction, a greater proportion of students going on to graduate school, and inspiring anecdotes about lives transformed. But as salutary as these answers were to my audiences, they rang increasingly hollow to me. What difference in education does one college or university make compared to another after we take admissions selectivity into account? Anecdotes and impressive-sounding statistics aside, we cannot currently answer this question.

Is it Selectivity or the Institution that Makes the Difference?

Some colleges and universities have students with high SAT scores; others, higher graduation rates or lower average class size. In still others, library holdings, faculty salaries, and alumni giving are superior. Does any of this really matter? We know that higher admissions selectivity and a number of these other attributes correspond to the perceived prestige of an institution, and we know that going to college substantially increases career earnings. But does it matter where one goes to college in terms of what one actually learns? 

Surely, it must matter. Otherwise the anxiety-driven college application process would be such a terrible waste, the U.S. News & World Report’s college rankings a sham, and all of that money spent on athletic teams, hotel-like residence halls, wireless campuses, and famous faculty for naught. The fact is that we have not had ways of systematically assessing what difference any of this makes in terms of what and how much a student learns once he or she arrives on a campus. And now people outside the academy are asking just such questions.

External Calls for Accountability

We have learned that hospitals and doctors with equivalent facilities and prestigious reputations vary widely in their success rates. We know that the elementary and secondary school a youngster attends makes a difference, and we demand that public schools do far better. And now such issues as grade inflation, widespread cheating, students unable to read or write adequately at graduation, athletic scandals, and increasing alcohol and drug abuse on college campuses garner much attention. We are asked whether the value students receive is worth our sharply rising costs.

In the name of "accountability," the public has turned its attention to assessing the quality of institutions once assumed on faith to be satisfactory. In the shadow of the K-12 high-stakes testing and "No Child Left Behind" movements, Congress ominously raised the same issue during its recent higher education reauthorization hearings. More than forty states by law require evidence of student learning as part of their accountability regimes. Each of the regional accreditation associations, the historic arbiters of higher education quality, specifies assessment of student learning as the ultimate criterion of educational quality.

The Association of American Colleges and Universities (AAC&U), in its landmark "Greater Expectations" study of higher educational quality, calls for institutional accountability based on student learning. [2] The recent AAC&U paper, "Our Students’ Best Work," devoted exclusively to the subject of accountability, argues, "Too many institutions and programs still are unable to answer legitimate questions about what their students are learning in college." [3] The corporate community weighs in as well. The Business-Higher Education Forum argues strongly for measures of student learning as the central component of a higher education accountability system. "One of the most important public policy imperatives in higher education is to enhance institutional productivity by focusing on learning." [4]

These calls for measuring learning are more than simply the longings of a college president trying to make a better case for his or her institution; they are now a matter of public discussion and policy. This is an era in which a college degree is increasingly seen as a commodity, a credential to possess rather than a mark of intellectual distinction. Vocational training is confused with education and cost is paramount. The public increasingly questions liberal education’s relevancy, and higher education is challenged to create systematic assessment of student learning as the metric for accountability.

The Educational Imperative for Learning Assessment

The academy has resisted what it perceives as external and "political" challenges to its presumption of quality and to its lack of learning assessment data. One reason given for such resistance is the definitional flux of liberal education, alluded to by Kagan. A second is the seeming lack of adequate measures to assess educational outcomes beyond knowledge and comprehension. Furthermore, many people believe that the important effects of a liberal education cannot be measured until far beyond the college years, or even that they need not be assessed at all, so powerful are the inherent virtues of liberal education and our ability to know it when we see it. Others argue that learning assessment and accountability cannot be mixed without losing academic freedom. However valid or invalid these defenses, they have until now been accepted by a public for whom faith alone has been sufficient. That faith has eroded. Unless the academy takes the lead in the assessment and accountability conversation, we risk doing an educational and political disservice to our students and ourselves—educational by not providing our students or ourselves with useful feedback, and political by allowing others to impose their measures on us, thus placing academic freedom at far greater risk.

The time has come to respond powerfully to these external calls for accountability with what I think is the best possible defense—the educational imperative for learning assessment. There has always been a valid educational argument, one we have either been ignorant of or have ignored, for a more rigorous and expansive assessment of learning. Appropriate and timely assessment of learning is a powerful force for teaching and learning. This is true at any level of learning—in schools, colleges, and universities; large and small; public and private. Moreover, I suggest that because small liberal arts colleges claim to have the most powerful liberal education learning cultures, the richest heritage of critical inquiry, and superior curricular and pedagogical prowess, they should take the lead in assessing liberal learning. Such learning assessment can best meet head-on the cries of the vocational philistines now calling the relevance of liberal education into question.

It is our educational and professional duty to assess higher education’s cumulative institutional impact on student learning. Student learning is our raison d’être, and we know that appropriate, timely, formative, and summative feedback to students powerfully increases that learning. Moreover, our academic training and professional status obligate us to be transparent in our endeavors. Colleges and universities are subsidized by the public, either directly through tax revenues or indirectly through tax exemption. We thus have a responsibility to the public for rigorous student and institutional assessment and public accountability. In short, appropriately assessing student learning is a powerful way for us, as "learning organizations," to demonstrate to ourselves, our students, and others our curricular and pedagogical efficacy. Equally important, such assessment is a necessary part of institutional improvement, a process we continually advocate as we strive for excellence. We ought to stand on these educational grounds to lead public accountability conversations.

Value-Added Assessment

The strongest possible educational account of our liberal education efficacy would be to demonstrate that we make a significant learning difference, often referred to as "value-added," from college entrance to graduation. Succumbing to the pressures of college rankings, campuses and others have used selectivity in admissions and graduation rates as surrogate measures of quality. Yet these merely reflect a "diamonds in, diamonds out, garbage in, garbage out" perspective on quality. Ideally, excellence and quality should be determined by the degree to which an institution develops the knowledge, understanding, and abilities of its students. "Value-added" designates what is added, qualitatively and quantitatively, to students’ capabilities and knowledge as a consequence of their education at a particular college or university. Measuring such added value requires assessing what students know and can do as they begin college, and again during and at the end of their college years. Ideally, we should also assess them after they have had some years away from college to realize the full benefits of their education.
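
To make the idea concrete, here is a minimal worked illustration; the scores below are invented for this example and are not drawn from any study. In its simplest "gain score" form,

    value-added = (mean score at exit) - (mean score at entry)

so a less selective college whose students enter averaging 1100 and leave averaging 1250 has added 150 points, while a more selective college whose students enter at 1350 and leave at 1400 has added only 50, despite its higher absolute scores. In practice, as described below, value-added is usually estimated against what entering characteristics would predict rather than as a raw difference.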

To assess the value-added of liberal education by a college or university, it is necessary to ask both how one defines and where one "locates" liberal education within the undergraduate experience. Is it what one knows? How one is able to use knowledge? Does it reside within the general education component, usually the focus of the first two years of college? Is it found primarily in a "core curriculum" with specific courses and an established canon of required works? Does it reside most powerfully in small, residential, liberal arts colleges or within university liberal arts and sciences colleges? Might one receive an equally good liberal education within the professional colleges in a large university? Does it reside as well within majors? Is it inherent in the educational culture of the institution itself?

These are legitimate questions for which there are varied responses currently extant. But, they are ultimately unanswerable unless one has a metric with which to assess liberal learning, however defined and located. In response to these questions, groundbreaking value-added assessment work is being done by Charles Blaich and his colleagues at the Center of Inquiry in the Liberal Arts at Wabash College, and by the Collegiate Learning Assessment (CLA) project at RAND’s Council for Aid to Education, of which I am co-director.

Value-Added Assessment Research Projects

Blaich and his colleagues looked at specified attributes of liberal education across the whole spectrum of Carnegie-classified institutions. The Center of Inquiry project asked whether such pedagogical, curricular, and institutional attributes correlate with learning outcomes. [5] They found that while liberal arts colleges show greater levels of the supportive conditions associated with increased learning, there is no clear advantage for liberal arts colleges in terms of measured outcomes. This finding may well be the product of insensitive outcome measures, or it may be that the pedagogy and curricula used at liberal arts colleges are actually no different from, and no more effective than, those at other kinds of institutions, despite our claims to the contrary. [6] [Editor’s note: The Center of Inquiry has begun a national longitudinal study that will further explore these issues.]
 

The CLA project asks two fundamental questions: "Do the institutions students attend make a difference in terms of overall student learning?" and, "Can learning-assessment data be especially useful in promoting institutional change?" CLA believes that a liberal education encompasses both what a person learns and how he or she is able to use such knowledge. In its current iteration, CLA has constructed "performance tasks" for critical thinking, analytic reasoning, and written communication—three outcomes universally agreed to be necessary, yet not sufficient, for liberal learning. The performance tasks are all essays that provide students with the narrative, graphical, and pictorial data they need to answer questions across a variety of disciplinary topics.

What is an institution’s cumulative contribution to students’ abilities? Controlling for student input differences, CLA uses the institution as the unit of analysis by averaging student scores to get an institutional "value-added" measure. Our data show that where you attend school does make a difference, in terms of the value-added that our performance tasks measure. Put another way, schools that look alike with regard to entering student characteristics differentially affect learning, and some schools with less selective admissions outperform others with greater selectivity. Presumably these differences can be attributed to different campus cultures, curricular programs, and pedagogies. We have begun to conduct case studies to see if we can discern those critical institutional attributes that might account for such learning differences.
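
For readers who want to see what a residual-based, institution-level value-added calculation can look like, here is a minimal sketch in Python. It illustrates the general approach described above rather than the CLA project’s actual procedure; the entry measure (mean SAT score), the exit measure, the numbers, and the simple linear model are all assumptions made for this example.

# A minimal, illustrative sketch of residual-based institutional "value-added."
# The data, variable names, and simple linear model are assumptions for this
# example; they are not the CLA project's actual data or methodology.
import numpy as np

# Hypothetical per-institution averages: mean entering-ability score (e.g., SAT)
# and mean score on an exit learning assessment, one entry per institution.
mean_entry = np.array([1050.0, 1120.0, 1200.0, 1310.0, 1400.0])
mean_exit = np.array([1115.0, 1150.0, 1260.0, 1330.0, 1420.0])

# Fit a simple linear model predicting the exit score from the entry score.
slope, intercept = np.polyfit(mean_entry, mean_exit, deg=1)
predicted_exit = slope * mean_entry + intercept

# "Value-added" is the residual: how far each institution's mean exit score
# sits above or below what its entering students' scores would predict.
value_added = mean_exit - predicted_exit

for i, va in enumerate(value_added, start=1):
    print(f"Institution {i}: value-added = {va:+.1f}")

The point of the sketch is simply that, once entering characteristics are taken into account, institutions whose students look alike at entry can show quite different residuals; surfacing those differences is what institution-level comparisons of this kind are designed to do.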

The CLA project has conducted cross-sectional studies that by nature have inherent limitations. However, we have also begun longitudinal studies that will enable us to pre- and post-test students at entry and graduation, the gold standard for assessing cumulative learning. I invite any and all institutions to join us in this research effort.

The Courage to Lead

Each type of college and university—community colleges, comprehensive universities, research universities, online programs, and small liberal arts colleges—offers a rationale for its ability to provide a superior liberal education. Small liberal arts colleges claim that their size, small classes, faculty dedicated to teaching rather than primarily to research, and missions that focus exclusively on liberal rather than vocational education place them in the greatest value-added situation. Research universities claim that their more extensive general education programs, coupled with study in each of the disciplines with research faculty/scholars, provide the best conditions for liberal education. Community colleges offer the beginning of a liberal arts curriculum for transfer to four-year schools and argue that it is equivalent to what four-year institutions provide. And various online institutions claim that individualized instruction and assessment rival traditional college and university work. Does a particular type of school actually provide a superior liberal education? Each institution also claims to be constantly improving: engaging in fund-raising, for example, to hire more and better faculty, make curriculum changes, improve technology, and increase student engagement. Do such improvement efforts work? We are unable to sort out these claims—current data are inconclusive, but we have begun to develop a value-added metric that will enable us to provide more adequate answers.

The public is asking that higher education become more accountable. Liberal education generally, and liberal arts colleges in particular, are under special scrutiny. We would be well served if we gathered the best evidence possible to test the validity and reliability of our various educational promises. Assessment of student learning is the best way to gauge the inextricable connection between educational ends and means and to provide the public with the data it needs to judge our claims. We in the academy must find the courage to lead this assessment and accountability conversation.


Notes:

  1. Donald Kagan, "What is a Liberal Education?" Reconstructing History, eds. Elizabeth Fox-Genovese and Elizabeth Lasch-Quinn (1999).
  2. AAC&U, Greater Expectations: A New Vision for Learning as a Nation Goes to College (2002).
  3. AAC&U, Our Students’ Best Work: A Framework for Accountability Worthy of Our Mission (2004), p. 2.
  4. Public Accountability for Student Learning in Higher Education: Issues and Options, position paper from the Business-Higher Education Forum (2004), p. 13.
  5. Gregory C. Wolniak, Tricia A. Seifert, and Charles F. Blaich, "A Liberal Arts Education Changes Lives: Why Everyone Can and Should Have This Experience," available on the Center of Inquiry in the Liberal Arts website, http://liberalarts.wabash.edu/home.cfm?news_id=2166  
  6. Jay Mathews, "Measure by Measure," The Atlantic Monthly (October 2004); also, Roger Benjamin and Marc Chun, "A New Field of Dreams: The Collegiate Learning Assessment Project," Peer Review (Summer 2003).


Direct personal responses to
rhhersh@sbcglobal.net.

---------------------------------------

The comments published in LiberalArtsOnline reflect the opinions of the author(s) and not necessarily those of the Center of Inquiry or Wabash College. Comments may be quoted or republished in full, with attribution to the author(s), LiberalArtsOnline, and the Center of Inquiry in the Liberal Arts at Wabash College.


*************************************************************
National Study of Liberal Arts Education
The Center of Inquiry in the Liberal Arts is excited to announce the National Study of Liberal Arts Education, a large-scale, longitudinal study of the effects of American higher education on liberal arts outcomes. We will explore not only whether and how much students develop because of their collegiate experiences, but also why and how this development takes place. Our research will help faculty, staff, and administrators more effectively align educational programs and experiences to student learning and institutional missions. We encourage research universities, regional universities, liberal arts colleges, community colleges, and other higher education institutions to apply for participation in this groundbreaking study. You can find more information about this study online (http://liberalarts.wabash.edu/nationalstudy). Application forms will be posted January 26, 2005.

The National Study of Liberal Arts Education is a collaborative effort among the following research teams: the Center of Inquiry in the Liberal Arts at Wabash College, led by Dr. Charles F. Blaich; the University of Iowa, led by Dr. Ernest T. Pascarella; Dr. Marcia Baxter Magolda from Miami University (Ohio); and the University of Michigan, led by Dr. Patricia M. King.

*************************************************************