Measuring Quality in Higher Education

by Susan D. Phillips and Kevin Kinser, Ph.D.


With the system of higher education accreditation focused on ensuring quality, it turns out that defining (and measuring) quality is not so easy. There are multiple definitions, some contradictory and many simplistic.

Historically, the definition of quality centered on the familiar debate that pits process and inputs against outcomes and outputs. The process/inputs side says that solid systems and resources should be in place to increase the likelihood of a quality outcome. The outcome/outputs side argues that the product or consequence of educational experiences is more important than a measurement of what is in place when a student arrives on campus.

But inputs versus outputs is not a very helpful debate: It seems fair to conclude that quality arises not only from having the right inputs but also from evidence that those resources are put into play towards a desired outcome. And, indeed, accreditation has evolved from primarily looking at inputs such as admission requirements and campus facilities, to also looking for desired outcomes such as graduation rates and job placement, pass rates for certification exams, and other indicators of student learning and achievement.

But — wait — what IS the desired outcome of education? Should it be that the graduates are employed? That they pay back their loans? That students are protected from fraud? That students learn x, y, or z? Or that the risk for the government’s investment in higher education is well managed?

An important facet of this challenge stems from the diverse set of institutions that accreditation is expected to oversee. The most interesting outcomes for a research university are quite different from those that motivate a community college. A small private liberal arts college has different goals for its students than a for-profit career training institute. This diversity necessitates a mission-driven approach, where outcomes are tailored to the institutional mission.

But without objective criteria common to all institutions and programs, many argue that an institution can assert outcomes that are self-serving and lack comparative meaning.

The counterpoint is that a one-size-fits-all, easy-to-compare approach clearly ignores real and significant differences between institutions. In this view, indicators should, in fact, recognize that institutions are trying to accomplish different things through their curriculum; students have different purposes for attending; and interests vary among policymakers, global partners, and eventual employers.

This brief glimpse of differing perspectives points to a fundamental lack of agreement about what outcomes are relevant and valued, from whose perspective, for which institutions or programs, and for which students.

Depending on one’s point of view, what counts as evidence of quality in outcomes might range from skills learned, to tests passed, to programs completed, to certifications achieved, to employment obtained, to salary earned, to loans repaid — to lives well lived. Apart from the student’s goals and the institution’s mission, different outcomes are relevant to different public and policy goals: Those concerned about consumer protection might want measures such as “Do students finish?,” “How long does it take?” and “Do they get jobs?” Those concerned with the use of taxpayer funds might want metrics that address whether graduates earn an income and repay their debt. And — even if we could all agree that a metric such as default rate is an important indicator of quality — it is not obvious that accreditation agencies are the appropriate venue for assessing compliance.

Even if there is broad consensus on an outcome, AND even if it is clearly within the purview of accreditation to evaluate it, the lamppost problem of measurement remains: just as it is a fallacy that the best place to look for your lost keys is where the light is brightest, it is an error to assume that the best place to look for evidence of learning is where the indicators are easiest to measure.

Even for those who think that the movement towards outcomes-based quality assurance should continue and even accelerate, the questions surrounding outcomes themselves remain sticking points: what counts as quality, what entity assesses it, and how well that assessment can be carried out. Even with the recent calls for differential accreditation processes centered on a risk-based assessment — an intriguing effort that many see as reducing institutional burden — there will still be the sticking point of how to define “risk.”

Taken together, these questions about definitions and measurements make it clear that the problems of accreditation are not solved by simply refocusing our attention on the results of the educational process, nor by imposing a common, easily measured yardstick. In a meaningful system of accreditation, the question of defining and measuring quality must be considered with a full awareness of the diversity of student goals, institutional missions, policy interests, and measurement challenges.
