Time to Rethink University Accreditation
Many people believe that if a college or university is accredited, its quality is guaranteed. Just as the Underwriters Laboratories (UL) seal of approval tells consumers that an electric appliance will be reliable, accreditation supposedly tells students that a college offers a sound education. At least, that’s widely thought to be true.
Like so many other things that are widely thought to be true, the belief that accreditation is a guarantee of educational quality is mistaken. Many Americans holding degrees from accredited colleges learned little or nothing of value and now struggle to repay their loans with mundane jobs that high-school kids could do. Accreditors rarely uncover academic malpractice such as the infamous “paper courses,” for which star athletes at UNC got high grades to help them remain eligible to play.
A new study done by the Texas Public Policy Foundation should spark debate over the role of accreditation. In it, author Andrew Gillen endeavors to show which of the accrediting bodies appear to do the best job of maintaining sound educational standards and which seem to be failing in that task.
First, exactly what is college accreditation?
Gillen explains that it began late in the 19th century, when institutions calling themselves “colleges” proliferated, some of which were colleges in name only, offering little more than correspondence courses. To help students separate the wheat from the chaff, groups of established schools formed associations in their regions of the country. Only those schools that met certain standards for faculty, library, and organization could be accepted as members. Only they were “accredited.”
Six regional accrediting associations sprang up across the U.S. They were and still are private organizations, and membership is voluntary. Little attention was paid to them until the federal government decided to make accreditation a necessary condition for the receipt of federal student-aid money, beginning with the G.I. Bill and continuing with the massive expansion under the Higher Education Act of 1965. Thus, the accreditors became the gatekeepers for access to the gusher of federal funds.
The idea behind that was simple: Accreditation was presumed to be a guarantee of quality, a way to make sure that student-aid money was not being wasted on degree mills.
Accreditation has always been mainly based on educational inputs. Site visits examine a school’s finances, procedures, facilities, and so on but not its educational outputs—the results for students. It’s easy to look at such things as faculty credentials and library holdings but difficult to assess whether, for example, students are learning how to write well in their English courses. Until recently, there was no way of telling if students were benefiting from their accredited colleges.
But now there is a way.
In the last few years, federal statistics have become available on students’ debt as a percentage of their earnings. As Gillen explains his approach, “We look at whether accreditors disproportionately approve programs that leave their students with excessive student loan debt relative to their post-graduation earnings.”
In previous studies, Gillen has classified college programs according to that debt-to-earnings metric: Some are very good, but others are bad, including some with debt-to-earnings ratios of over 100 percent. The worst among those are programs where the ratio exceeds 125 percent.
Using that approach, Gillen finds that only 7 percent of undergraduate degrees fall into “monitor” or “sanction” categories, but 53 percent of doctoral programs fall into them. And yet, they are accredited. To their accreditors, they look fine, based on their inputs.
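Gillen’s classification amounts to a simple threshold rule on the debt-to-earnings ratio. The sketch below is illustrative only: the 100 and 125 percent cutoffs come from the figures quoted above, but the mapping of those cutoffs onto the “monitor” and “sanction” labels, and the function itself, are my assumptions, not the study’s actual methodology.

```python
def classify_program(median_debt: float, median_earnings: float) -> str:
    """Classify a degree program by its debt-to-earnings ratio.

    Assumed mapping (for illustration; the study's exact category
    boundaries may differ):
      ratio > 125%  -> "sanction"
      ratio > 100%  -> "monitor"
      otherwise     -> "acceptable"
    """
    ratio = median_debt / median_earnings
    if ratio > 1.25:
        return "sanction"
    if ratio > 1.00:
        return "monitor"
    return "acceptable"


# Hypothetical programs: (typical graduate debt, typical earnings)
print(classify_program(30_000, 50_000))  # ratio 0.60, well under the line
print(classify_program(55_000, 50_000))  # ratio 1.10, debt exceeds earnings
print(classify_program(70_000, 50_000))  # ratio 1.40, the worst category
```

The point of the rule is that it looks only at outcomes: nothing about faculty credentials or library holdings enters the calculation.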
So, there are plenty of college and post-college programs that are bad values for students. The question Gillen is interested in is whether any of the accreditors stand out as better or worse in that regard. He breaks his analysis down by degree type.
Looking at bachelor’s degrees, he finds that the Higher Learning Commission (HLC), founded in 1895 and for most of its existence called the North Central Association, is the best-performing accreditor. It accounts for 36 percent of all bachelor’s degrees but only 27 percent of those regarded as failing. At the other end of the spectrum is the Southern Association of Colleges and Schools (SACS), which accounts for 25 percent of the degrees but 42 percent of the failing programs.
Looking at doctoral programs, we see different results. The worst-performing accreditor is the Western Association of Schools and Colleges (13 percent of all programs but 18 percent of the failures), while SACS appears to overperform (29 percent of all programs but only 25 percent of the failures).
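The comparison in the two paragraphs above can be expressed as a single number: an accreditor’s share of failing programs divided by its share of all programs, where a value below 1.0 means it has fewer failures than its size alone would predict. The percentages are those quoted above; the ratio itself is my illustrative framing, not necessarily the statistic Gillen reports.

```python
def failure_index(share_of_all: float, share_of_failing: float) -> float:
    """Ratio of an accreditor's failure share to its overall share.

    < 1.0 means the accreditor overperforms (fewer failures than
    its market share would predict); > 1.0 means it underperforms.
    """
    return share_of_failing / share_of_all


# Bachelor's degrees: (share of all degrees, share of failing programs)
bachelors = {"HLC": (0.36, 0.27), "SACS": (0.25, 0.42)}
# Doctoral programs, same structure
doctoral = {"WASC": (0.13, 0.18), "SACS": (0.29, 0.25)}

for name, (all_share, fail_share) in bachelors.items():
    print(name, round(failure_index(all_share, fail_share), 2))
```

On this metric, HLC’s bachelor’s-degree index is well under 1.0 while SACS’s is well over it, which is what drives the rankings described below.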
Gillen concludes that, overall, HLC stands out as the best accreditor and SACS as the worst.
His policy suggestion is that states should consider requiring their higher-ed institutions to use better-performing accreditors and avoid the poorer ones. (Until recently, accreditors were restricted to specific regions, but that rule was dropped, thus allowing states to implement a “shop around” policy.)
I’m not opposed to that, but I doubt it will accomplish much good.
I say that because we know very little about the internal workings of the accreditors. Do we know that HLC has procedures for improving or weeding out academic programs that provide little benefit to students? We don’t. All we know from the data presented is that it has fewer bad bachelor’s-degree programs than other accreditors do. We don’t know why that’s the case.
Have the apparently better accreditors been helping schools improve their poorly performing programs, or threatening to revoke accreditation unless those programs are dropped? We would want evidence on that before deciding that the way to raise the level of higher education in a state is to have schools switch accreditors.
And bear in mind that even the best accreditor in this analysis still has quite a few weak programs under its wing. Relying on accreditation to raise educational quality therefore seems misguided.
Do we need to rely on it?
We don’t, because it’s a poor solution to a minimal problem.
Students don’t want to waste their time on educational programs that are unlikely to pay off well enough for them to cover their costs. The original impetus for insisting that federal aid be used only at accredited institutions was the idea that many students couldn’t tell whether a school was reputable and could be taken in by slick sales pitches from fraudulent ones. That was probably true in the 1950s and ’60s, but now we have far more information on outcomes and vastly better means of disseminating it.
One problem with accreditation as it has been traditionally done is that it mostly operates at the institutional level. Colleges and universities are either accredited or they are not. But not all programs they offer are of equal value; accredited schools frequently offer some programs that are unlikely to prove worthwhile for students. The “accredited” label thus obscures the program-specific information that would be most useful to students.
Suppose that the government dropped the requirement of accreditation and instead published (and widely disseminated) its data on programmatic value. That would be doubly beneficial: It would give students useful guidance, and it would eliminate the ability of accreditors to suppress innovation in higher education. As Gillen observes, their inputs-based methods deter upstart schools from discovering “new recipes” that might work better than the old-fashioned ones used by existing members.
Even though some accreditors might be better than others, there is no reason to keep any of them as gatekeepers for student aid.