Ranking the Unmeasurable

Today’s Inside Higher Ed has a story about growing resistance to the US News rankings:

In the wake of meetings this week of the Annapolis Group — an organization of liberal arts colleges — critics of the U.S. News & World Report college rankings are expecting a significant increase in the number of institutions where presidents pledge not to participate in the “reputational” portion of the rankings or to use scores in their own promotional materials.

A majority of the approximately 80 presidents at the meeting said that they did not intend to participate in the U.S. News reputational rankings in the future. Some of those presidents may have previously endorsed the movement, so the exact increase is uncertain; Annapolis Group leaders said that they expected individual presidents to announce their own decisions.

Interestingly, yesterday’s big story was about how we don’t actually know anything about the students we admit:

A major study released Monday by the University of California suggests that high school grades may be good at predicting not only first-year college performance, as commonly believed, but performance throughout four undergraduate years. The same study suggests that the SAT adds little predictive value to admissions decisions and is hampered by a strong link between SAT scores and socioeconomic status — a link not present for high school grades.

And further, the study finds that all of the information admissions officers currently have is of limited value, and accounts for only 30 percent of the grade variance in colleges — leaving 70 percent of the variance unexplained.

It’s an interesting combination, because the admissions story is a nice demonstration of the problems the anti-US News people are talking about. Our proxy measurements of student quality all suck, and they end up being a big chunk of the rankings. You’re not going to be able to make fine distinctions between institutions using measurements where you can’t explain 70% of the variance.
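To put that in perspective, here’s a quick back-of-the-envelope sketch (the GPA standard deviation and the 0.1-point gap are invented for illustration, not taken from the study) of how much noise an R-squared of 0.3 leaves in individual predictions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical: college GPAs with a standard deviation of 0.5 points,
# and admissions data explaining 30% of the variance (R^2 = 0.3).
gpa_sd = 0.5
r_squared = 0.30

# The unexplained variance is (1 - R^2) of the total, so the residual
# standard deviation is sd * sqrt(1 - R^2).
residual_sd = gpa_sd * np.sqrt(1 - r_squared)
print(f"residual SD: {residual_sd:.2f} GPA points")  # about 0.42

# Two applicants whose *predicted* GPAs differ by 0.1 points:
# how often does the nominally weaker one actually do better?
n = 100_000
stronger = 0.10 + rng.normal(0, residual_sd, n)
weaker = rng.normal(0, residual_sd, n)
print(f"weaker applicant outperforms: {(weaker > stronger).mean():.0%}")
```

In runs like this, the nominally weaker applicant comes out ahead a bit over 40% of the time, which is the sort of noise you’d be trying to rank schools through.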

Of course, the SAT and graduation rate data are like laser spectroscopy compared to the “reputational” portion of the rankings, which is the thing that the Annapolis Group presidents are talking about boycotting. This is a completely ridiculous process, in which the top two or three academic officers of every college are sent a survey and asked to rank other colleges. This accounts for something like 25% of the ranking all by itself, which more or less guarantees that the single best predictor of a given school’s ranking will be the previous year’s ranking.
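Just to illustrate the feedback loop, here’s a toy simulation (everything in it is invented: the number of schools, the quality drift, the exact weights) in which 25% of each year’s score is simply an echo of the previous year’s published rank:

```python
import numpy as np

rng = np.random.default_rng(1)
n_schools, n_years = 100, 20

def zscore(x):
    return (x - x.mean()) / x.std()

def mean_rank_correlation(rep_weight):
    """Average year-over-year correlation of the published ranking when
    a rep_weight fraction of each score is just last year's rank."""
    quality = rng.normal(0, 1, n_schools)
    rank = zscore(quality.argsort().argsort().astype(float))
    corrs = []
    for _ in range(n_years):
        # Real quality drifts: a stationary AR(1) with autocorrelation 0.8.
        quality = 0.8 * quality + rng.normal(0, 0.6, n_schools)
        score = (1 - rep_weight) * zscore(quality) + rep_weight * rank
        new_rank = zscore(score.argsort().argsort().astype(float))
        corrs.append(np.corrcoef(new_rank, rank)[0, 1])
        rank = new_rank
    return np.mean(corrs)

print(f"objective measures only:  {mean_rank_correlation(0.00):.2f}")
print(f"with 25% reputation echo: {mean_rank_correlation(0.25):.2f}")
```

The echoed component reliably makes each year’s ranking track the previous year’s more closely than the underlying quality changes alone would, which is exactly the self-fulfilling-prophecy complaint.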

Elsewhere in blogdom, Mark Kleiman snarks at the Annapolis Group:

Boycotting the U.S. News college rankings is a fine idea. But that survey filled a void. Either the colleges themselves or some friendly foundation needs to write a reasonable ranking system, collect the data, and publish the results.

I was going to be snarky in return, because the next paragraph of the Inside Higher Ed piece I linked above is:

At the same time, the Annapolis Group formally endorsed the idea of working with the National Association of Independent Colleges and Universities and the Council of Independent Colleges to create “an alternative common format that presents information about their colleges for students and their families to use in the college search process.” The idea is to create online information with “easily accessible, comprehensive and quantifiable data.”

It’s not Mark’s fault, though; the New York Times piece he used as a source doesn’t bother to mention the endorsement of alternate rankings. So, boo for the Paper of Record.