“Just Because It’s Easy Doesn’t Make It Right”

The previous post dealt with the inherent fallacy of valuing what is easily assessed instead of learning to assess what is truly valuable. There is clearly too much emphasis placed on the comfort and familiarity of objective data in the admissions process, especially at the most selective colleges. It is perhaps ironic that the same misguided reliance on what is easily assessed, rather than what is truly important, is in fact what made the ultra-selective college so ultra-selective in the first place!
U.S. News and World Report’s annual college rankings issue is the tail that wags the dog in the admissions world. Parents and students look to the list for guidance and validation; for better or worse, when people speak of college rankings, they are referring to those of U.S. News and World Report. But should the list be heeded so closely? Let’s take a look at the methodology behind the rankings:

According to U.S. News and World Report’s Robert Morse:

The rankings evaluate colleges and universities on 16 measures of academic quality. They allow you to compare at a glance the relative quality of U.S. institutions based on such widely accepted indicators of excellence as first-year student retention, graduation rates and the strength of the faculty…to make valid comparisons, schools are grouped by academic mission into 10 categories for 10 distinct rankings…To calculate the overall rank for each school within each category, up to 16 metrics of academic excellence are assigned weights that reflect U.S. News' researched judgment about how much they matter. For display purposes, we group these measures into the following indicators: outcomes, social mobility, graduation and retention rates, faculty resources, financial resources, alumni giving, student excellence, and expert opinion.


Allow me to define these measures: outcomes (the average length of time it takes students at the university to graduate), social mobility (how many Pell grant-eligible students the university graduates in a given year), graduation and retention rates (the percentage of students who return for their sophomore years and eventually graduate), faculty resources (salary, teaching load, average class size, etc.), financial resources (the amount of money a university spends per student), alumni giving (how much money the university receives in alumni donations in a given year), student excellence (measured by the average high school grade point average and standardized test score of admitted students), and expert opinion (derived from surveys of college counselors and university presidents). For the full article, please click here.
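To make concrete what “metrics assigned weights” means in practice, here is a minimal sketch of how a weighted composite ranking score is computed. The metric names mirror the indicators listed above, but the scores and weights are entirely hypothetical for illustration; U.S. News does not publish its formula in this form, and these numbers are not theirs.

```python
# Illustrative sketch only: metric names follow the indicators described
# above, but all scores and weights are invented for demonstration.

# Hypothetical normalized scores (0-100) for one university
scores = {
    "outcomes": 82,
    "social_mobility": 70,
    "graduation_and_retention": 91,
    "faculty_resources": 78,
    "financial_resources": 65,
    "alumni_giving": 55,
    "student_excellence": 88,
    "expert_opinion": 74,
}

# Hypothetical weights, summing to 1.0, reflecting how much each
# metric "matters" in the composite
weights = {
    "outcomes": 0.20,
    "social_mobility": 0.10,
    "graduation_and_retention": 0.25,
    "faculty_resources": 0.15,
    "financial_resources": 0.10,
    "alumni_giving": 0.05,
    "student_excellence": 0.10,
    "expert_opinion": 0.05,
}

# The composite is a simple weighted sum; universities are then
# ranked by sorting on this single number.
composite = sum(scores[k] * weights[k] for k in scores)
print(round(composite, 2))  # prints 79.6
```

Notice what the sketch makes plain: every input is a number, and everything that cannot be reduced to a number never enters the calculation at all.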

The point is, these rankings are largely derived from measurable, quantifiable (yes, objective) data. An analysis of this methodology raises the question: do any of these measures really tell prospective applicants and their families what it’s like to be a student at the university? Do they accurately indicate how transformative the experience of being a student there is? Of course not!

These musings prompt a second point: the objective data points that make up the methodology for college rankings aren’t necessarily valuable. Recall what Professor Resnick observed in the previous post about the relationship between assessments and value: to the detriment of our educational system, we tend to value what is easily assessed rather than learning to assess what we value. In that post, I tried to make the case that the same tendency infects the college admissions process.

Moreover, this pernicious practice is clearly at play in the multi-million-dollar effort to quantify the relative strengths and weaknesses of colleges and universities by ranking them. In the case of the U.S. News and World Report rankings, the most sacred of them all, it is a considerable leap from evaluating a university’s objective data to concluding that it will be a place of significance and transformation for a particular applicant. The rankings might be a useful tool, but they should never be the final arbiter!

But it’s so easy to give these rankings so much power! They are so accessible! Everybody knows about them! But just because it’s easy doesn’t make it right.

Even as I understand the simple math of trying to give a thorough read to every application, I find it frustratingly incomplete when college admission committees seek a shortcut by relying too heavily on the objective data of an application. But it is no less frustrating to me, for an entirely different reason, when students and families take a similar data-driven shortcut in what should be a thoughtful decision by relying too heavily on college rankings!

I’d love to hear your thoughts!