Two weeks ago we looked at the history of university rankings and the value (or lack thereof) they hold for students. We also discussed many of the criticisms of how university rankings are constructed and won’t belabor those points. This week, we focus on the inherent dangers when rankings shape decision making on campus, consider why administrators pay attention to them, and offer some thoughts on a better path.
Rankings and Quality?
One of James Cattell’s original goals when ranking institutions was to encourage universities to take actions to improve their ‘judgement of quality’. But given the heavy focus on inputs instead of outcomes in ranking criteria and the role of ‘reputation surveys’ in the ranking process, it’s hard to say that rankings provide much insight into the actual quality of a university.
That said, motivating excellence as a goal of rankings sounds good – perhaps there is some level of accountability in rankings? A university with a lousy graduation rate getting called out through its rankings might be motivated to do something about it.
Maybe this happens more frequently than we think. But many of the conversations we have been part of (or are aware of) go something like this:
“Our ranking is X and it needs to be X-n. How do we move that ranking?”
“I don’t know, what are the inputs into the ranking? Can we figure those out and reverse engineer them? Which of those inputs can we manipulate to move the ranking quickly and at the lowest cost?”
That conversation is vastly different from…
“We want to improve graduation rates, and it will be great if in the process of doing that, we are ranked higher.”
Rankings and Mission
Rankings are a lazy approach to framing a university’s mission. ‘We want to be top 10!’ seems like a great rallying cry, until you take about 5 seconds and try to unpack what that means: Top 10 in size? Top 10 in graduation rate? Top 10 in research dollars? Top 10 in … ?
The danger is that making rankings a priority precludes deep thinking about what a university’s mission should be. Decisions get focused on the very narrow set of criteria that ‘count’ in rankings as opposed to the much wider set of criteria stakeholders care about.
Rankings are also a force for homogenization, and not in a good way. Scan the communications/mission statements/key initiatives of most major universities and you would be hard pressed to identify the institution without the school colors and mascot present.
Finding your point of difference is hard work but it is necessary work. We are all in the higher education business: what is unique about our institution? Why does our state/nation need our institution when it could just make the others a bit bigger and save the overhead?
Public institutions have a very direct obligation to the taxpayers of the state – who likely care little about some national ranking. Taxpayers do care about the educational opportunities the university provides residents of the state; the contribution to the state’s economic and social vitality; how the university supports key industries; and the contributions made in addressing critical state problems such as public health. Nothing good happens when the quest for a higher ranking becomes a substitute for being the university the state needs it to be.
Rankings and Decision-Making
When rankings drive decisions, you have a problem. If investments are guided by rankings and rankings aren’t aligned with mission and student success, you don’t meet the needs of students and stakeholders. And you lose public trust – or worse.
There is certainly evidence of this misalignment. Some studies (using an earlier version of the USNWR rankings) have shown that universities tend to 1) spend more and 2) allocate funds differently to improve rankings. There is also evidence that such ‘striving’ behavior is most pronounced for those institutions on the ‘border’ between ranking classes – just outside the top 25 for instance – as being in a better ranking neighborhood is perceived as bringing prestige to the campus.
As an example, admission decisions can have a direct impact on rankings – and can make holistic admission a joke. Take a highly laudable goal like improving access to education among first-generation or Pell-grant eligible students. On average these students tend to have lower SAT scores, lower completion rates, lower starting salaries, and so on. A rankings-focused admissions strategy might start with: only admit rich kids.
Spending more to educate each student can drive some rankings higher – but is antithetical to a mission of controlling the cost of education. Reputation building investments can grab headlines, but starve less flashy initiatives that are essential in serving students and stakeholders. When it comes to university budgets, it is a zero-sum game: a dollar spent to move a ranking is not available to spend anywhere else.
There is no shortage of recent examples of decisions that are rankings driven. Louisiana State University altered hiring and promotion and tenure processes for librarians as part of their push to enter the AAU. Columbia was taken to task and ultimately apologized for misreporting data which had inflated rankings. At the extreme, a Temple business school dean was sentenced to prison for providing false information in an effort to boost rankings.
Improving Rankings: Sticky and Costly
Rankings are amazingly ‘sticky’ over time - most institutions stay in a +/- 4-point band. In our time in leadership, we saw plenty of examples of programs that seemed to be showing steady progress up the rankings ladder, only to inexplicably slide right back down the rankings chute the next year.
Why is that? Rosenberg argues that university rankings are nothing more than a mirror for reputation and that university reputations are ridiculously hard to change.
In 1983 when the US News and World Report Rankings were launched, the top four universities were Stanford, Harvard, Yale, and Princeton. In 2022, the top four were Princeton, Harvard, MIT, and Yale (Stanford was 5th). Rosenberg contrasts this list with Fortune’s Most Admired Companies. In 1983, the most admired companies were Exxon, General Motors, Mobil, Texaco, and Ford. In 2022 the top five were Apple, Amazon, Microsoft, Pfizer, and Disney - “none of which were in the top 100 in 1983, and one of which had not been founded” (page 38).
A reason for the ‘stickiness’: the cost of moving up in the rankings is crazy high. One 2014 study explored the cost of a university moving from a ranking in the mid-30s to the top 20 of the rankings. Using the USNWR criteria at the time, it would take over $100,000,000 per year (forever) in additional financial resources per student and in faculty salaries to make that kind of jump. And, the university would still need to improve their peer assessment score – which basically doesn’t move.
Are Rankings Useful to the Institution?
One argument is that higher rankings boost enrollment and attract better students. We covered this in some detail in our earlier post. Far more undergraduate students use the data that ranking services provide (as opposed to the ranking itself) when they make their college choice. However, numerical rankings seem to matter to some students, and are probably quite important for elite professional graduate programs.
What about those high-profile students and their parents who place more weight on numerical rankings? There is an element of prestige in a school with a high ranking, and prestige alone has value to certain students/families. And that probably makes sense if you want to serve as a Supreme Court clerk or justice.
More practically, high-profile students have stronger educational outcomes, typically come from wealthier families, and may well be out-of-state students paying much higher tuition. So, high-profile students attracted by high rankings are a win from an input-metrics standpoint and a win from a financial standpoint for universities.
Another argument is that employers use rankings, and that highly ranked institutions offer more and better job opportunities for their students as a result. There is little evidence employers use rankings this way, perhaps in part because they directly assess the quality of students they hire from various universities, basically creating their own internal rankings. As a caveat, this would not necessarily be true for professional schools, as top legal posts, consulting jobs, and medical positions tend to be filled by graduates from ‘highly ranked’ schools. And sheer enrollment may send some employers to much larger public flagship campuses rather than smaller (and lower-ranked) public regionals.
[Chart omitted. Source: Strada]
Some universities (Florida, for example) have been able to leverage the quest for improved rankings into additional funding – maybe rankings provide a simplistic measure of excellence that connects with boards and legislatures.
Finally, there is an element of sport here: much like athletic team rankings, administrators and alumni certainly enjoy bragging rights when the rankings look good. (We both bragged about our rankings while serving in administrative roles.) We don’t deny a lofty position can make faculty, staff, students, and donors feel good about a place – even if the ‘why’ behind the ranking is murky.
A Better Path
We know plenty of folks who think about rankings the way we do – but what do you do about them? Given the pervasive nature of rankings and the pressure that can come to ‘improve’ them, they are impossible for any academic leader to ignore. That said, you can work to keep rankings in perspective as you make the strategic decisions to deliver on your unique mission.
An Inc. magazine article many years ago talked about ‘image positioning’ vs. ‘character expression’. Image positioning is basically using communications tools to self-promote a desired image. Character expression is doing the hard work of living a mission, and reaping the benefits of delivering authentic value to your clientele and stakeholders.
Rankings smack of image positioning and are noisy, misleading, and confusing signals of excellence. Universities that express their character by knowing who they are and defining and delivering excellence accordingly lay the foundation for long-term success.
These places that are comfortable in their own skin are bold enough to pay attention only to the rankings, and/or the components of rankings, that reflect the excellence they seek and that matter to their stakeholders (including peers). They use peer comparisons to support excellence in chosen target areas. These universities do not let the pursuit of rankings or peer comparisons take them in directions that are not aligned with their chosen goals and those of the communities they serve.
Is your teaching mission maximizing incoming student quality, or is it transforming student potential while they are at the university? Is your research mission to rank in the top ten in federal funding, or is it advancing disciplines and addressing societal issues? Is your engagement mission contributing to workforce and community development and a healthier, more prosperous state?
You can’t point to a ranking to demonstrate such impacts. But you can and must tell your story, using quantitative metrics where possible, using qualitative narrative where appropriate, but always focused on the mission itself.
Perhaps most importantly, mission-driven excellence happens when there is a shared sense of responsibility for achieving goals across campus – an internal accountability and understanding of how individuals contribute to the broader goals of the institution.
Getting a campus, alumni/donors, employers, and the state excited about a mission which makes a difference is a far more engaging and motivating exercise than chasing a ranking. Accomplishing this means rewards and recognitions for administrators, faculty, and staff must be aligned with the specific goals and aspirations of the institution.
This is incredibly hard work: it demands relentless communication, everywhere and all the time, by the campus and its leadership, declaring to stakeholders: this is who we are, this is why it matters, this is how we are doing, and this is how we will get better in service to you – with your help.
Next Week…
We are going to take a look at tuition with an ‘explainer’ on trends in tuition and the cost of attending college. As always, thanks for reading and for your support.
Research assistance provided by Marley Heritier.
“Finding Equilibrium” is coauthored by Jay Akridge, Professor of Agricultural Economics, Trustee Chair in Teaching and Learning Excellence, and Provost Emeritus at Purdue University and David Hummels, Distinguished Professor of Economics and Dean Emeritus at the Daniels School of Business at Purdue.