(Author’s Note: We received a lot of great email feedback on our last few posts, so we have opened up comments below. If you have thoughts on today’s topic we’d love to hear them.)
One of the aspects we enjoy most about writing Finding Equilibrium is that putting our weekly post together forces us to clarify our thinking – and justify our perspective – on the topic du jour. Having an opinion is one thing; being able to communicate why you hold that opinion (with reasons that are compelling, at least to us) is something different.
So, right up front this week: we are not fans of university rankings. Better said, we are not fans of the administrative behaviors that university rankings can promote. This week and next, we’ll talk about those administrative behaviors, where rankings add value for students and universities and where they don’t, and how to navigate a world where rankings aren’t going away.
Nothing New
There is nothing new about university rankings. In 1900, Alick Maclean, an Englishman, published the first university rankings, ordering universities by “the absolute number of ‘eminent men’ who had attended them”. A few years later, in 1910, Americans began ranking universities when James McKeen Cattell created a list of 1,000 men (accomplished scientists) and then ranked their alma maters and employers based on the number of those men from (or at) each university relative to the university’s total number of faculty.
Cattell wanted to measure the “strength of each institution”, and he recommended that students attend schools with “large proportions of men of distinction among their instructors”. Cattell also suggested that institutions take action to improve their “judgement of quality”.
In the process, two potentially valuable uses of university rankings were born: providing prospective students with information on the quality of a university, and providing a guide for how universities could improve. While crude (who decided which men, and only men, were eminent?), these rankings were focused on outcomes.
Reputational rankings came later in 1924 when Raymond Hughes, a chemistry professor at Miami University (Ohio), asked faculty in 19 disciplines at Miami to rate 36 U.S. universities on the quality of their graduate program using a 1 to 5 scale. Additional reputational studies followed.
While outcomes-based rankings were the primary focus until the late 1950s, reputational rankings took off in the 1960s and beyond. Myers and Robe attribute the rise of reputational rankings to the work of Allan Cartter, who published an influential reputational study of graduate programs in 1966.
Today we have a blizzard of rankings that mix outcomes, inputs, and reputation/opinion.
Criticisms Abound, But Rankings Aren’t Going Anywhere
The last thing needed is a rant from us calling out all the issues with university rankings. Despite serious criticisms, rankings – and rankings services – aren’t going anywhere. It is more productive to revisit Cattell’s two objectives for his early rankings: 1) where do rankings add value for students and where do they not; and 2) where are rankings useful/misleading/poisonous to universities? We’ll look at the student question this week and consider the institutional question next.
What Information Do Students Need in a Ranking?
As we have discussed previously, there is a dearth of timely, relevant, easy-to-access data to help students choose the right university. Rankings services have stepped up to fill that void and promote it as the reason for their existence (and, of course, make some money in the process).
That said, if a ranking is going to be useful to students, what information should it include? Students need information on educational outcomes. They need to know whether students graduate (and graduate on time), whether graduates get jobs, and what starting salaries look like. They need to know the actual cost of education (not the sticker price) and how much debt graduates carry.
Students also need to know about the student experience: who teaches classes (permanent faculty, adjuncts, or TAs) and whether students participate in high-impact learning experiences, internships, and co-curricular activities. And they need this information at the program level, given the wide disparity across programs within a university.
Do Rankings Meet Student Needs?
Let’s take a look at how rankings stack up against the kinds of information students and their families need.
Who’s Number 1 and Does that Matter? Universities are complex organizations producing research, human talent, and services for stakeholders. When a place claims ‘we are number one’, just what is being ranked? Many rankings focus on undergraduate programs or on graduate and professional programs, while others attempt to assess research.
Still other rankings focus on the number of CEOs a university has graduated, the quality of the food, and, yes, beer consumption. The notion that any institution is ‘number one’ in some comprehensive way across all missions (and areas) is ludicrous – though any correlation between the CEO and beer rankings would be interesting...
As an aside, between the myriad of things being ranked and the use of sub-categories (public vs. private, regional breakdowns, etc.), the good news is almost everyone can find a ranking where they shine. “We’re the fourth-ranked public online nutrition science program in Kentucky”. Trophies for all!
Another key point: given the variance around any quantitative measure of ‘performance’, is there any real (statistical) difference between number 1 and number 10 – or number 20? One study (using an earlier version of the USNWR criteria) found that ranking moves of +/- 4 places should be considered statistical noise, not reflections of actual differences in quality. Rankings such as Niche (letter grades) or Money (stars) provide ratings of institutions in addition to ordinal rankings – which makes far more sense to us as a way of comparing institutions.
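To see why small differences in rank carry little information, here is a toy simulation (our own illustration with made-up scores and noise levels, not any ranking service’s methodology) showing how much a school’s rank can bounce around when closely spaced scores are measured with modest error:

```python
import numpy as np

rng = np.random.default_rng(0)

n_schools = 50      # hypothetical field of 50 universities
n_resamples = 1000  # number of simulated "measurement" draws

# Hypothetical "true" quality scores, closely spaced on a 100-point scale.
true_quality = np.linspace(90, 60, n_schools)

# Each draw adds modest measurement noise, then re-ranks the schools
# (rank 1 = highest observed score).
noise_sd = 2.0
ranks = np.empty((n_resamples, n_schools), dtype=int)
for i in range(n_resamples):
    observed = true_quality + rng.normal(0, noise_sd, n_schools)
    order = np.argsort(-observed)                    # school indices, best to worst
    ranks[i, order] = np.arange(1, n_schools + 1)    # assign rank 1..n in that order

# How much does the rank of the 10th-best school (by true quality) bounce around?
school = 9  # 0-based index of the school whose true rank is 10
print("true rank:", school + 1)
print("observed rank ranges from", ranks[:, school].min(), "to", ranks[:, school].max())
print("std dev of observed rank:", round(float(ranks[:, school].std()), 1))
```

With scores this close together, the simulated school routinely lands several places above or below its “true” position – which is exactly why ratings or tiers tell you more than a single ordinal number.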
Digging Out the Useful Information. The data used to construct these rankings continue to evolve. The 2025 US News and World Report national university rankings used the following components: 52% outcomes (graduation rates, debt, …); 11% faculty resources (faculty salaries, student-faculty ratio, …); 20% reputation (as ranked by other academics); 8% financial resources (per student); 5% standardized test scores; 4% research reputation.
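As a rough sketch of how a composite like this comes together (our own illustration with hypothetical component scores, not USNWR’s actual computation, which normalizes each metric in its own way), the overall score is essentially a weighted sum:

```python
# Illustrative only: a weighted composite in the spirit of the 2025 USNWR weights above.
# The component scores below are made up; real services normalize each metric before
# weighting, and the details differ by publisher.

weights = {
    "outcomes": 0.52,
    "faculty_resources": 0.11,
    "reputation": 0.20,
    "financial_resources": 0.08,
    "test_scores": 0.05,
    "research_reputation": 0.04,
}

# Hypothetical component scores (0-100) for one university.
component_scores = {
    "outcomes": 82,
    "faculty_resources": 70,
    "reputation": 65,
    "financial_resources": 74,
    "test_scores": 88,
    "research_reputation": 60,
}

composite = sum(weights[k] * component_scores[k] for k in weights)
print(f"composite score: {composite:.1f}")  # schools are then sorted by this number
```

The point of the sketch is simply that the weights drive everything: change them and the ordering changes, even though nothing about the universities has.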
The Wall Street Journal/College Pulse ranking uses a different set of measures, as do Money, Forbes, Niche, QS, and others. Within a given set of overall rankings, you can find a plethora of other rankings: student experience, best salaries, best value, social mobility, by major, test optional, colleges with no application fee, … The rankings measure different things, so they tell you different things – confusing students and their families in the process.
An especially troublesome issue is that few consumers of rankings know what is actually being ranked. Someone who cares a lot about undergraduate education might be appalled to learn that the ranking they are following is actually based on research reputation, and that shifting marginal investments away from undergraduate education to boost research standing would improve the ranking.
[Figure. Source: CARPE.]
So what is useful here? Some of the rankings try to get at value or ROI. The Wall Street Journal/College Pulse ranking includes a measure of how much more a graduate of a specific university earns relative to what would be expected regardless of college attended, and a measure of how many years it takes to pay off the net price of attending the college. Niche rankings attempt to get at student life. USNWR offers a number of rankings that dig into the student experience, affordability, etc.
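In spirit, the payback measure works out to a back-of-the-envelope calculation like the one below (our illustration with made-up numbers, not the WSJ/College Pulse formula or data):

```python
# Back-of-the-envelope payback calculation, in the spirit of the WSJ/College Pulse
# measure. All numbers are hypothetical; the published ranking uses its own data
# and adjustments.

net_price_per_year = 22_000        # sticker price minus average aid, per year
years_enrolled = 4
median_salary_with_degree = 58_000
benchmark_salary_without = 38_000  # estimated earnings without the degree

total_net_price = net_price_per_year * years_enrolled                          # 88,000
annual_earnings_boost = median_salary_with_degree - benchmark_salary_without   # 20,000

payback_years = total_net_price / annual_earnings_boost
print(f"years to pay off net price: {payback_years:.1f}")                      # 4.4
```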
Reputational Components of Rankings. While many rankings use outcome data on completion, retention, starting salaries, etc., others are based solely or largely on ‘reputational’ components. We have a very low opinion of reputational rankings.
Let’s use USNWR to see how this works. USNWR uses a 5-point scale for the peer assessment portion of its overall university rankings. Specific program rankings (e.g., colleges of business or engineering, computer science programs) are entirely reputational, with the ranking determined “solely on the judgements of deans and senior faculty members”.
To get at these judgements, USNWR sends university administrators long lists of other universities or specific programs. They score each on a 5-point scale from excellent to sub-standard, or acknowledge that they have no basis for judgement. When there is a deeper drill down into sub-disciplines (say AI, cyber security, software engineering, etc. for computer science), these same individuals are asked to simply list the best 5-10 programs in each sub-area.
When we filled out these surveys we had no information at all on about 99% of the programs we were asked to rank. For the remaining 1% we knew only snippets. And it’s not just us. The thought that any administrator has deep insight into more than a handful of programs beyond their own is just a fallacy – it is hard enough to know every nook and cranny of your own institution.
This lack of insight on other places leads to some interesting behavior. About the time rankings surveys would come out, our mail would bring fliers, cards, buttons, magnets, other bling, and occasionally something useful like coffee – all from universities that wanted to be remembered at rankings time. Maybe that stuff works better than we think it does…but it has nothing to do with the substance of which programs are excellent or substandard.
In contrast to feedback from university administrators, some rankings services, like Niche, use student feedback for the reputational component. There is plenty you could get exercised about there, but at least students have direct experience at the place.
How Are Students Using Rankings?
Is the ‘number 1’ institution the right one for every student? Of course not. Picking the right college is about much more than some single indicator of ‘quality’. Given all the flaws, do university rankings create value for students? Overall, the real value looks to be the readily accessible outcome data that ranking services provide and the ability to compare universities on those outcomes.
Some good news: there is evidence that students are all over this point – they seem to be using the data rankings services provide more than the rankings themselves. (Students with high academic profiles do appear to pay more attention to a university’s numeric rank than other students do.) While the sheer number of rankings services can be overwhelming, several of the ranking websites have filters that allow students to dig more deeply into a university’s performance and make comparisons across universities – and that looks to be what matters most to students.
[Figure. Source: Student Poll.]
Some Guidelines for Students and Their Families
Think about what is most important to you when choosing a college and let that guide your choice of which ranking to explore. Make sure you understand how the ranking was constructed and that the factors it weights are the ones important to you.
Rankings with ratings will likely provide better information on groups of universities that fit your interests. When looking for a short-term rental, the fact that the owner has a 5-star rating matters more than the fact that they are number 12 of 50.
Get beyond rankings and ratings whenever you can. Use the websites to compare metrics such as graduation rates, starting salaries, cost, and debt. A few of the services allow students to set their own weights on the various components of the rankings – a custom ranking reflecting their priorities (see the sketch below).
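As a toy illustration of why custom weights matter (hypothetical schools and scores, not any service’s actual data), the same two schools can trade places depending on how a student weights the components:

```python
# Made-up example: the same two schools flip order under different student-chosen
# weights. Components are scaled 0-100 with higher = better (net_cost is scored so
# that a cheaper school gets a higher value).

schools = {
    "University A": {"graduation_rate": 92, "starting_salary": 55, "net_cost": 60},
    "University B": {"graduation_rate": 80, "starting_salary": 75, "net_cost": 85},
}

def composite(scores, weights):
    return sum(weights[k] * scores[k] for k in weights)

outcome_focused = {"graduation_rate": 0.7, "starting_salary": 0.2, "net_cost": 0.1}
cost_focused    = {"graduation_rate": 0.2, "starting_salary": 0.3, "net_cost": 0.5}

for label, w in [("outcome-focused", outcome_focused), ("cost-focused", cost_focused)]:
    ranked = sorted(schools, key=lambda s: composite(schools[s], w), reverse=True)
    print(label, "->", ranked)
# outcome-focused -> ['University A', 'University B']
# cost-focused    -> ['University B', 'University A']
```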
Get to the program level when you can. An average salary for university graduates tells you something about the place, but if you are interested in business school, the average salary for business school graduates is what you really want to know.
What’s Coming
Next week we explore how university rankings can influence decision-making on campus – for good or ill.
Research Assistance provided by Marley Heritier.
“Finding Equilibrium” is coauthored by Jay Akridge, Professor of Agricultural Economics, Trustee Chair in Teaching and Learning Excellence, and Provost Emeritus at Purdue University and David Hummels, Distinguished Professor of Economics and Dean Emeritus at the Daniels School of Business at Purdue.