Institutional Assessment Tools

by Glenda Droogsma-Musoba, Indiana University
Amelia Noël-Elkins, Indiana University
Chris Rasmussen, University of Michigan
August 10, 2004

Each of the following sections describes an institutional assessment tool in terms of its strengths and weaknesses.

Institutional Self-studies

Definition: Internal assessment of university programs, policies, and activities, generally conducted in conjunction with an external review or evaluation. Campuses engage in self-evaluation in preparation for scheduled visits from institutional, departmental, or discipline-based national accreditation or governing agencies (e.g., the North Central Regional Accrediting Association, the National Collegiate Athletic Association, the American Chemical Society, or the Association of Schools of Music). National accrediting agencies (external evaluators) function much like auditors do for businesses: they evaluate whether the institution is accomplishing the goals it has identified for itself and its students.

How to: Generally, the institutional self-study is a lengthy, involved process lasting two years or more. The process is initiated by the institutional president, the chief academic officer, or the head of institutional research efforts. An institutional charge or direction is issued, based on guidelines established or suggested by the accrediting agency. Self-studies involve the appointment of a coordinator or co-coordinators, the establishment of a planning team with multiple areas of research focus, and the creation of various sub-committees and levels of organization. Data used for the self-study can include reports of various surveys and assessments of student outcomes, program effectiveness, and student satisfaction; summaries and evaluations of various academic and co-curricular program activities; and quantitative/numeric data related to programs and services, such as the number of program participants, client-hours, and service recipients or users.

What it does not measure: Student outcomes measured are often rudimentary. Self-studies are not assessments of higher order student learning or program effectiveness per se, but rather represent something akin to meta-analyses or attempts to summarize inputs, processes, and outputs related to core university functions. They are more an audit of the systems and processes, leading to recommendations for improvement.

Benefits: Self-studies are a comprehensive audit of university functions and the degree to which various academic, administrative, and support units contribute to the core functions of the institution. Self-studies can involve participation of a wide variety of individuals from the institution, resulting in diverse perspectives. Secondarily, institutions benefit from self-studies through the creation and strengthening of relationships across campus.

General challenges: Self-studies are generally done infrequently and usually only in conjunction with accreditation visits. Self-studies are extremely time consuming and labor intensive. The scope of the task can require additional staff or release time for existing staff. Because the bulk of the data collection is done by employees of the institution, the process can lack an outsider’s perspective, with resulting bias limiting the scope and honesty of the evaluation. Because self-study reports are prepared for the external evaluator rather than being internally motivated, too often they do not result in substantial changes on the campus. Documenting student outcomes for the self-study when data collection has not been ongoing often limits the meaningfulness or depth of the results.

Integration with other research/assessment: Because this is generally an audit of systems, it can highlight the need for additional information in some areas, thereby generating smaller research projects. It forces reflection and introspection about how and why something is done, potentially causing changes in other evaluative processes such as classroom grading practices.


Strategic Plans

Definition: A strategic plan is a document that outlines, in general or specific terms, the goals, objectives, and intended direction of the institution, and which can serve as a blueprint for human and fiscal resource allocation and program planning. When institutions grow, strategic planning can produce a plan of action for adding positions or programs, or for making other desirable changes if resources exist. Strategic planning can establish priorities by which new programs or personnel will be added based on the goals and objectives of the organization. When institutional changes or reductions are needed because of financial limitations or ideological shifts, strategic planning can assist in identifying and protecting the necessary personnel and programs.

How to: Similar to an institutional self-study, the development of a strategic plan generally begins with a charge or direction issued by the university president, governing board, or departmental director. This is followed by the appointment of a strategic planning team that consults with various community members (both internal and external) about the perceived and desired mission, function, and activities of the institution or department. After identifying strengths, weaknesses, and goals of the institution, the committee produces a strategic plan that identifies and prioritizes the future activities of the institution. Then the plan may be distributed among campus leaders for approval and implementation.

What it does not measure: Strategic planning is not a method for measuring student achievement or other student outcomes.

Benefits: The development of a strategic plan ideally involves an examination of existing institutional or departmental data on student learning and program effectiveness, and a comprehensive assessment of strengths and weaknesses in effecting desired liberal arts outcomes. This information can be used as a foundation for outlining the future activities of the institution or department and the appropriation of resources in a manner consistent with stated objectives.

General challenges: Strategic planning can be time consuming and controversial. It may result in difficult conversations and debates between individuals and groups around issues of change and innovation, culture and tradition, resource allocation, stability, and uncertainty. In some instances, it can also compete with the ongoing administrative planning of the institution in identifying priorities.

Integration with other research/assessment: If done properly and conveyed throughout the community, a strategic plan can guide the development of programming and curriculum so the goals of the institution or department are achieved. A strategic plan should identify outcomes the institution will want to measure in its assessment process.


Course Evaluations

Definition: Course evaluations are assessments of student experience and satisfaction with an individual course and are generally done at or near the end of the academic term. They provide the professor, department, and institution with student perceptions of the classroom aspect of the educational experience.

How to: Course evaluations can include any number of questions designed to assess student learning, general student satisfaction, perceived effectiveness of the instructor, fairness of performance measures such as written assignments and examinations, classroom climate, and relationships with peers. Although the most common method of course evaluation is the collection of a paper-and-pencil survey at the end of the term, evaluations can also be conducted mid-term or more frequently to provide the instructor with feedback that can be used to make changes in teaching. Evaluations may include multiple choice questions, statements with quantitative responses, and qualitative items such as open-ended questions. Multiple-choice or quantitative items are often written with a Likert-type scale (students rate their agreement with the question or statement on a five-point scale), which enables a full range of statistical analyses within and across courses. Qualitative items generally take the form of open-ended questions or unfinished sentences.
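
Because quantitative items share a common scale, responses can be summarized and compared programmatically. The following is a minimal sketch; the courses, survey item, and ratings are entirely hypothetical:

```python
from statistics import mean

# Hypothetical responses (1 = strongly disagree ... 5 = strongly agree)
# to the item "The instructor explained concepts clearly," keyed by course.
responses = {
    "BIOL 101": [5, 4, 4, 5, 3],
    "HIST 210": [3, 4, 2, 3, 4],
}

# A mean rating per course enables comparison across courses, instructors, or terms.
for course, ratings in responses.items():
    print(f"{course}: mean rating = {mean(ratings):.2f}")
```

In practice, such per-course summaries would feed into the within- and across-course statistical analyses mentioned above.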

What it does not measure: Student course evaluations have been criticized by some as referenda more on instructor popularity than course content and learning opportunities. Personality and other instructor variables can influence student perception of classroom climate, fairness of evaluation, and instructor effectiveness. Course evaluations are student self-assessments, and thus have limited reliability as measures of student learning compared to other instruments and methodological approaches. As a result, course evaluations may not be a reliable means of assessing liberal arts outcomes.

Benefits: Course evaluations provide individual instructors and institutional leadership with valuable data on the degree to which students are pleased with their course experience, student perceptions of the effectiveness of instructors, and the extent to which the course promoted learning and growth. Depending on how instruments are designed, course evaluations can enable comparisons across courses for the same or for different instructors, from year to year, from course to course within an academic major or department, and across majors and disciplines. Because course evaluations can be made public, they can be a visible affirmation of the achievement of stated instructional goals by an institution or department.

General challenges: Obstacles include resistance of some faculty, based on legitimate concerns related to the quality and effectiveness of the survey; the reliability of student self-reports; and the extent to which course evaluations are used in performance reviews for promotion and tenure. Institutions must also develop clear and public statements of intent for the use of course evaluations, including conditions of confidentiality and anonymity of instructors.

Integration with other research/assessment: If done prior to the end of the semester, the professor can use evaluation results to adjust graded classroom assignments and classroom practices.


Post-graduate Surveys and Assessments

Definition: An assessment of a student’s overall collegiate experience and/or satisfaction with that experience.

How to: Although generally conducted shortly after completion of the undergraduate degree, post-graduate assessment can occur at any time in the future, including at regularly scheduled intervals to evaluate the perceived outcomes and value of the college experience at various points in life. Assessment can be done via a written survey involving a combination of multiple-choice and open-ended questions, or through interpersonal data collection methods such as telephone, face-to-face, or group interviews. Databases can enable longitudinal analysis of the same students over the years, as well as comparisons across graduating classes, academic programs, or other variables.

What it does not measure: Since post-graduate surveys represent self-reports of perceived outcomes, their validity as true measures of learning is limited. Ability to use the liberal arts curriculum or experience as something akin to an independent variable is limited in the absence of a control or comparison group.

Benefits: Surveys or interviews provide real data on graduate satisfaction with the college experience and the extent to which alumni feel the institution prepared them for professional and personal concerns. Surveys can be structured in such a way as to intentionally measure specific liberal arts outcomes with observed behaviors. Relationships can be established between outcomes and predictors such as academic major, course-taking, co-curricular experiences, and pre-college factors. Surveys can combine objective data on student employment, advanced education, salary, and community involvement to create a more complete picture of graduate outcomes.

General challenges: Data collection requires a programmatic and financial commitment on the part of the institution, particularly if data is collected over multiple years. After graduation, addresses may become outdated and students’ commitment to the institution may fade, so the number of students with complete data dwindles. Multiple attempts to encourage graduate response may be needed. Surveys require a system for data storage, as well as staff resources if an interview-based approach is used.

Integration with other research/assessment: Post-graduate surveys can be used in the context of or in conjunction with entrance and exit interviews of students, allowing comparison of student expectations and abilities prior to college with the same items immediately following and several years after college. Surveys can also be used with other data collected over a number of years so that long-term effects of the educational environment and liberal arts goals can be assessed.


Analysis of Course Syllabi

Definition: Analysis of an instructor’s formal academic course plan, or syllabus. The syllabus generally includes an outline of course topics, materials, and other resources to be used. It may also include a timeline, faculty expectations, and evaluation methods. Course syllabi are usually distributed the first day or week of class, and copies are usually kept on file in a departmental or school/college office.

How to: Syllabi may be analyzed using an a priori framework for categorizing content, including items such as number of texts used, total pages of required reading, number and type of examinations, written assignments and other evaluation activities, class presentations, group work, etc. A database can be created to organize syllabi by course, discipline, faculty member, year, or other variables.
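As a minimal sketch of such an a priori framework (the category names, courses, and counts below are hypothetical):

```python
# Each syllabus is coded into a fixed set of a priori categories,
# e.g., texts used, pages of required reading, exams, papers, presentations.
syllabi = [
    {"course": "ENG 201", "texts": 4, "reading_pages": 900,
     "exams": 2, "papers": 3, "presentations": 1},
    {"course": "CHEM 110", "texts": 1, "reading_pages": 400,
     "exams": 4, "papers": 0, "presentations": 0},
]

# Once coded, summaries can be produced across courses or disciplines,
# such as the total number of written assignments in the sample.
total_papers = sum(s["papers"] for s in syllabi)
print(f"Papers assigned across sampled courses: {total_papers}")
```

A real coding scheme would be agreed on in advance by the analysts; the point of the sketch is that uniform categories make syllabi comparable across courses, disciplines, and years.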
 
What it does not measure: Analyzing course syllabi does not measure the extent to which course content and activities conform to the written plan, the extent to which students complete assignments, student satisfaction with outcomes from the course, and growth in student learning. It measures the processes of education, not the outcomes of those processes.

Benefits: Analysis provides insight into the type, amount, and frequency of academic activities assigned, such as reading, writing, examinations, presentations, and out-of-classroom projects. Resulting data can be analyzed across courses, disciplines, and cohorts. Analysis of data in comparison to various student outcomes can provide information as to the effectiveness of different pedagogical approaches and academic activities.

General challenges: This type of analysis is time consuming. Faculty need to be convinced of the value of the resulting data and informed if the analyses will be used for their performance evaluations. Making syllabi public may lead to intellectual property issues if the information is freely exchanged without the permission of the instructor.

Integration with other research/assessment: Analysis of syllabi is a measure of the activities or processes of education, so it needs to be used with other tools that measure student outcomes in order to fully assess the effectiveness of classroom practices.


Transcript Analysis

Definition: Analysis of course patterns reported on student academic transcripts. Rather than using new data collection or acting as an assessment instrument, transcript analysis involves examining existing school or student records for patterns. Transcript data can also be matched with other quantitative student data and courses used as predictor variables to see a relationship with student outcomes.

How to: Transcript analysis can be accomplished either through the examination of individual academic records or manipulation of computer information systems of all students to produce summary reports of course-taking patterns for different student groups. Student records can be analyzed quantitatively to show course-taking patterns and provide administrative data such as differences in student exposure to full professors (versus teaching assistants or junior faculty).
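
A summary report of course-taking patterns might be sketched as follows, assuming hypothetical student groups and enrollment records extracted from transcripts:

```python
from collections import Counter

# Hypothetical enrollment records drawn from transcripts:
# (student_id, student_group, department of the course taken)
enrollments = [
    ("s1", "first-generation", "MATH"),
    ("s1", "first-generation", "ENGL"),
    ("s2", "continuing-generation", "MATH"),
    ("s2", "continuing-generation", "MATH"),
]

# Tally course-taking by department within each student group.
patterns = Counter((group, dept) for _, group, dept in enrollments)
print(patterns[("continuing-generation", "MATH")])  # MATH enrollments in that group
```

The same tallying generalizes to any grouping variable an information system can supply, such as cohort, major, or admission category.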

Application to liberal arts: Transcript analysis of student coursework across the departments or distribution requirements within each department can provide assessment of whether students experience the necessary breadth of course exposure to facilitate the integration of different types of learning or understanding. Usually these are measures of the processes of education rather than the outcomes of education.

What it does not measure: Transcripts cannot measure actual student learning or other student outcomes related to enrollment in a given course beyond the grade earned. Transcript analysis does not measure the extent of student participation or involvement in course activities.

Benefits: Transcript analysis allows institutions to examine patterns and trends in course enrollment over a set period of time, across cohorts, or across academic majors. Transcript analysis can be combined with other data to determine relationships between course-taking and student learning or other post-graduate outcomes. Patterns can also be measured for subgroups of the student body that may be of interest to the institutions, such as students whose parents did not go to college, students who were admitted conditionally because of weak academic preparation, under-represented minority groups, and students who studied abroad or participated in other specific out-of-classroom experiences.

General challenges: Analysis of individual transcripts is potentially tedious and time consuming. The analyses would likely require flexible computer information systems that enable the disaggregation of academic data and various combinations of variables. In addition, access to individual transcripts must be handled appropriately because of FERPA (Family Educational Rights and Privacy Act) restrictions. Course-taking patterns do not show causation for course selection. Certainly, some students take courses because of their reputation as easy or popular courses, whereas other students may intentionally seek a challenge.

Integration with other research/assessment: Because of the quantitative nature of the data, records can be matched with information collected through other instruments, provided that information also contains student identification. This other data may be student outcome measures gained through assessments or postgraduate surveys. The matched data can be used to measure the relationship between student outcomes and their academic experiences.
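
The matching described here can be sketched as a join on student identification, using entirely hypothetical transcript and survey records (the field names are invented for illustration):

```python
# Hypothetical transcript measures and post-graduate survey outcomes,
# each keyed by student ID.
transcripts = {
    "s1": {"writing_courses": 5},
    "s2": {"writing_courses": 1},
}
survey = {
    "s1": {"self_rated_writing": 4},
    "s2": {"self_rated_writing": 2},
}

# Match the two data sources on student ID so academic experiences
# (course-taking) can be related to reported outcomes.
matched = {
    sid: {**transcripts[sid], **survey[sid]}
    for sid in transcripts.keys() & survey.keys()
}
print(matched["s1"])
```

With matched records in hand, relationships between course-taking and outcomes can be examined with standard statistical tools.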
 
Website Usage Tracking

General challenges: Tracking requires staff time and attention to integrate tracking mechanisms and manage resulting data. Given the nature of the tracking systems, it may be impossible to identify visitors specifically as students or faculty, or to determine whether multiple visits came from the same user.

Integration with other research/assessment: This assessment tool focuses on the processes of education. Alone, it is inadequate to measure student outcomes, but it would be useful to improve services to students and can be used with other data to form a more complete picture of the educational experience.
 

Advising Activity Records

Definition: Data on the frequency and nature of student meetings with faculty or academic advisors, or other communication related to academic advising.

How to: Data on advising activity can be logged by faculty, possibly through use of a specially designed computer program that collects information related to the occurrence, duration, type, and nature of advising meetings and correspondence. Some of the data might be quantified, but structured observations of the interactions can provide richer data regarding the influence of these interactions on students. Faculty can use the interaction to obtain data or information from students about their college experience, aspirations, and perceptions of skills and character development. This data can be useful for institutional research but also enriches the advising experience.
 
What it does not measure: Analysis of advising activity does not necessarily measure student learning or other outcomes, depending on how the advising contact is structured. Data collected about student perceptions of their college experience is anecdotal and is not a behavioral outcome.

Benefits: Research into this area can provide quantitative and qualitative data that could be shared with various constituencies interested in the activities of faculty and the nature and scope of faculty-student advising interactions. Data can be analyzed for trends and patterns across cohorts, academic majors and disciplines, or over time. Data can also be used to improve the process of advising. Faculty data can be compared to student reports of satisfaction with academic advising to gain insight into the types of faculty activities that are most highly valued by students. Advising data can be compared to student grades, course completions, and graduation rates to determine the possible relationship between academic advising and various student outcomes. Additionally, advising discussions can provide early warnings of student dissatisfaction or potential departure and can be used to enhance the educational environment and/or better understand the campus culture.

General challenges: Faculty may resist the time and effort required for analysis. Faculty also may question the utility of the data and the possible implications or outcomes. Some of the information in advising sessions may be considered private and, therefore, protected by FERPA (Family Educational Rights and Privacy Act). Data collected by so many individuals lacks continuity unless a structured protocol is provided to faculty. However, a more structured protocol may hinder the natural conversation of advising. A more qualitative approach may result in an overwhelming amount of data.

Integration with other research/assessment: Faculty or advisors can provide useful data about the educational experience with little additional effort. If designed for qualitative data collection, advising data can provide rich information about the process of education along with some student perceptions of outcomes. Used alone, it provides an incomplete picture of the educational experience and minimal behavioral outcome measures.