The host of intangibles that make up the college experience can't be measured by a series of data points. But for families concerned with finding the best academic value for their money, the U.S. News Best Colleges rankings provide an excellent starting point for the search.
They allow you to compare at a glance the relative quality of institutions based on such widely accepted indicators of excellence as freshman retention and graduation rates and the strength of the faculty. And as you check out the data for colleges already on your short list, you may discover unfamiliar schools with similar metrics, and thus broaden your options.
Many factors other than those spotlighted here will figure in your decision, including location and the feel of campus life; the range of academic offerings, activities and sports; and cost and the availability of financial aid. But if you combine the information on usnews.com with college visits, interviews and your own intuition, our rankings can be a powerful tool in your quest for the right college.
How the Methodology Works
The U.S. News ranking system rests on two pillars: quantitative measures that education experts have proposed as reliable indicators of academic quality, and our researched view of what matters in education.
First, schools are categorized by their mission, which is derived from the breakdown of types of higher education institutions as refined by the Carnegie Foundation for the Advancement of Teaching in 2010. The Carnegie classification has been the basis of the Best Colleges ranking category system since our first rankings were published in 1983, given that it is used extensively as the accepted standard by higher education researchers.
The U.S. Department of Education and many higher education associations use the system to organize their data and to determine colleges' eligibility for grant money, for example. The category names we use are our own – National Universities, National Liberal Arts Colleges, Regional Universities and Regional Colleges – but their definitions rely on the Carnegie principles.
National Universities offer a full range of undergraduate majors, plus master's and Ph.D. programs, and emphasize faculty research. National Liberal Arts Colleges focus almost exclusively on undergraduate education. They award at least 50% of their degrees in the arts and sciences.
Regional Universities offer a broad scope of undergraduate degrees and some master's degree programs but few, if any, doctoral programs. Regional Colleges focus on undergraduate education but grant fewer than 50% of their degrees in liberal arts disciplines; this category also includes schools that have small bachelor's degree programs but primarily grant two-year associate degrees.
Regional Universities and Regional Colleges are further divided and ranked in four geographical groups: North, South, Midwest and West.
Next, we gather data from each college on up to 16 indicators of academic excellence. Each factor is assigned a weight that reflects our judgment about how much a measure matters. Finally, the colleges and universities in each category are ranked against their peers, based on their composite weighted score.
U.S. News made significant changes this year to the Best Colleges ranking methodology to reduce the weight of input factors and increase the weight of output measures. See the Ranking Model Indicators section below for more details on each indicator.
1. High school class standing: We reduced the weight assigned to the high school class standing of newly enrolled students in the ranking model for all categories and gave slightly more weight to SAT and ACT scores.
It is clear from the data that U.S. News collects that, as each year passes, the proportion of high school graduates with class rank on their transcripts is falling. As a result, the measure is less representative of each college's freshman class than it was five or 10 years ago.
The decline in importance of high school class standing in admissions decisions was confirmed in the National Association for College Admission Counseling's 2012 "State of College Admission" report. The report states that since 1993, "the factor showing the largest decline in importance is class rank." For fall 2011, just 19% of colleges rated it as considerably important, down from 42% in 1993.
This same research shows that SAT and ACT scores are growing in importance in admissions decisions. To better reflect this reality, the student selectivity indicator in our ranking model was adjusted so that the weight of high school class standing dropped from 40% to 25%, and the weight of SAT and ACT scores increased from 50% to 65%.
At the same time, the weight of student selectivity overall has declined from 15% to 12.5% to place less emphasis on inputs. This change reduced the effective weight of class rank in the overall rankings from 6% to 3.125%; increased the effective weight of SAT and ACT scores in the overall rankings from 7.5% to 8.125%; and slightly reduced the effective weight of acceptance rate in the overall rankings from 1.5% to 1.25%.
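The effective weights quoted above follow from multiplying each sub-factor's share of the selectivity score by selectivity's share of the overall model. A quick sketch of that arithmetic (the variable names are ours, for illustration only):

```python
# Effective weight of a sub-factor = its share of the student selectivity
# score multiplied by selectivity's share of the overall ranking model.
selectivity_overall = 12.5  # percent of the total ranking score

sub_weights = {             # share of the selectivity score, in percent
    "class_rank": 25,
    "sat_act": 65,
    "acceptance_rate": 10,
}

effective = {name: selectivity_overall * share / 100
             for name, share in sub_weights.items()}

print(effective)
# class_rank -> 3.125, sat_act -> 8.125, acceptance_rate -> 1.25
```

The same multiplication recovers each of the overall weights cited in the text.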
2. Graduation rate performance: We expanded the use of the graduation rate performance indicator to all the Best Colleges ranking categories; this meant that for the first time, it applied to Regional Universities and Regional Colleges, nearly 1,000 additional colleges.
Since 1997, this ranking factor had been used only in the National Universities and National Liberal Arts Colleges ranking categories. It now has a weight of 7.5% in the ranking model for all schools.
Incorporating this indicator for all schools improves the Best Colleges ranking methodology as it's an important outcome measure that focuses on the difference between each school's predicted graduation rate (as calculated by U.S. News based on key characteristics of the incoming class closely linked to college completion, such as SAT and ACT scores and Pell Grants) and its actual graduation rate. The indicator gives credit to schools that have higher-than-expected graduation rates.
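U.S. News does not publish its prediction model, but the indicator itself is simply the gap between a school's actual and predicted rates. A minimal sketch with hypothetical numbers (in practice the predicted rate would come from U.S. News' analysis of incoming-class characteristics such as test scores and Pell Grant share):

```python
def graduation_rate_performance(actual_rate, predicted_rate):
    """Difference between a school's actual six-year graduation rate and
    the rate predicted from its incoming-class characteristics.
    A positive value means the school outperformed its prediction."""
    return actual_rate - predicted_rate

# Hypothetical school: predicted to graduate 72% of the entering class,
# actually graduated 78%.
performance = graduation_rate_performance(78.0, 72.0)
print(performance)  # 6.0 -> higher than expected, so the school gets credit
```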
3. Other ranking factors: We changed other weights in the ranking model to further emphasize outcome measures.
The weight of the peer assessment score in the Regional Universities and Regional Colleges categories was reduced from 25% to 22.5%, and the weight of graduation and retention rates for National Universities and National Liberal Arts Colleges was increased from 20% to 22.5%.
Since we added graduation rate performance as a ranking factor for Regional Universities and Regional Colleges, the weights for the graduation rates themselves and retention rates dropped from 25% to 22.5%.
As a result of the changes described above, many schools' ranks changed in the 2014 edition of the Best Colleges rankings compared with the 2013 edition.
If a school's ranking data changed in the 2014 edition compared with the 2013 edition, this could have had an impact on its new overall rank.
Even if a school's ranking data changed little in the 2014 edition compared with the previous edition, if the new methodology placed more emphasis on a ranking factor that the school scored relatively higher in, then its rank may have risen.
Similarly, if the new methodology placed more emphasis on a factor that the school was relatively weaker in, then its rank may have fallen.
Beyond the ranking methodology changes, we used clearer footnotes to indicate the schools that did not report to U.S. News fall 2012 SAT and ACT scores for all first-time, first-year, degree-seeking students with these scores – including athletes, international students, minority students, legacies, those admitted by special arrangement and those who started in the summer of 2012.
The footnotes also include schools that declined to tell us whether all students with test scores were represented.
The value of those footnoted SAT and ACT scores reported by the school was reduced in the Best Colleges ranking model. This practice is not new; since the 1997 rankings, we have discounted the value of such schools' reported scores in the ranking model, since the effect of leaving students out could be that lower scores are omitted.
If a school told U.S. News that it included all students with scores in the reported SAT and ACT scores, then those scores were counted fully in the rankings and were not footnoted.
Schools are Unranked and listed separately by category if they have indicated that they don't use SAT or ACT test scores in admissions decisions for first-time, first-year, degree-seeking applicants. And, in a few cases, schools are Unranked if too few respondents to the peer assessment survey gave them a rating.
Other reasons institutions are not ranked include a total enrollment of fewer than 200 students, a large proportion of nontraditional students and no first-year students – as is the situation at so-called upper-division schools.
As a result of these eligibility standards, many of the for-profit institutions have been grouped with the Unranked schools; their bachelor's degree candidates are largely nontraditional students in degree completion programs, for example, or they don't use SAT or ACT test scores in admissions decisions.
We also did not rank a few highly specialized schools in arts, business and engineering.
Most of the data come from the colleges. This year, 91% of the 1,376 ranked colleges and universities we surveyed returned their statistical information during our spring and summer 2013 data collection.
Ranked colleges are defined as those in the National Universities, National Liberal Arts Colleges, Regional Universities and Regional Colleges categories that are numerically ranked or listed as Rank Not Published. An additional 141 colleges in those categories are listed as Unranked.
In total, U.S. News has collected data on nearly 1,800 colleges, and all of those data are available on usnews.com, but only 1,376 schools are included in the actual numerical rankings described in this methodology.
We obtained missing data from a number of sources, including the American Association of University Professors (faculty salaries), the National Collegiate Athletic Association (graduation rates), the Council for Aid to Education (alumni giving rates) and the U.S. Department of Education's National Center for Education Statistics (information on financial resources, faculty, SAT and ACT admissions test scores, acceptance rates and graduation and retention rates).
Estimates, which are not displayed by U.S. News, may be used in the ranking calculation when schools fail to report particular data points that are not available from other sources. Missing data are reported as N/A in the ranking tables.
For colleges that were eligible to be ranked but refused to fill out the U.S. News statistical survey in the 2013 data collection, we have made extensive use of the statistical data those institutions were required to report to the NCES on such factors as SAT and ACT scores, acceptance rates and faculty and retention rates. These schools are footnoted as nonresponders.
Ranking Model Indicators
The indicators we use to capture academic quality fall into a number of categories: assessment by administrators at peer institutions, retention of students, faculty resources, student selectivity, financial resources, alumni giving, graduation rate performance and, for National Universities and National Liberal Arts Colleges only, high school counselor ratings of colleges.
The indicators include input measures that reflect a school's student body, its faculty and its financial resources, along with outcome measures that signal how well the institution does its job of educating students.
The measures, their weights in the ranking formula and an explanation of each follow.
Undergraduate academic reputation (22.5%): The U.S. News ranking formula gives significant weight to the opinions of those in a position to judge a school's undergraduate academic excellence. The academic peer assessment survey allows top academics – presidents, provosts and deans of admissions – to account for intangibles at peer institutions such as faculty dedication to teaching.
For their views on National Universities and National Liberal Arts Colleges, we also surveyed 2,202 counselors at public high schools, each of which is a gold, silver or bronze medal winner in the U.S. News rankings of Best High Schools, published in April 2013, and 400 college counselors at the largest independent schools. The counselors represent nearly every state and the District of Columbia.
Each person surveyed was asked to rate schools' academic programs on a scale from 1 (marginal) to 5 (distinguished). Those who didn't know enough about a school to evaluate it fairly were asked to mark "don't know."
The score used in the rankings is the average score of those who rated the school on the 5-point scale; "don't knows" are not counted as part of the average. In the case of National Universities and National Liberal Arts Colleges, the academic peer assessment accounts for 15 percentage points of the weighting, and 7.5 percentage points go to the counselors' ratings.
For the second year in a row, the two most recent years' survey results, from spring 2012 and spring 2013, were averaged to compute the high school counselor reputation score. This was done to increase the number of ratings each college received from the high school counselors and to reduce the year-to-year volatility in the average counselor score.
The academic peer assessment score continues to be based only on the most recent year's results. Both the Regional Universities and Regional Colleges rankings continue to rely on one assessment score, by the academic peer group.
In order to reduce the impact of strategic voting by respondents, we eliminated the two highest and two lowest scores each school received before calculating the average score. Ipsos Public Affairs collected the data in spring 2013; of the 4,554 academics who were sent questionnaires, 42% responded. The counselors' one-year response rate was 11% for the spring 2013 surveys.
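The trimming step described above amounts to a truncated mean. A sketch of the calculation (the function name is ours; "don't know" responses are assumed to have been removed already):

```python
def trimmed_peer_score(ratings):
    """Average peer-assessment ratings (1-5 scale) after dropping the two
    highest and two lowest, as a guard against strategic voting."""
    if len(ratings) <= 4:
        raise ValueError("need more than four ratings to trim")
    kept = sorted(ratings)[2:-2]  # discard two lowest and two highest
    return sum(kept) / len(kept)

# Two inflated 5s and two lowball 1s are discarded before averaging.
scores = [5, 5, 4, 4, 4, 3, 3, 1, 1]
print(trimmed_peer_score(scores))  # 3.6
```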
Retention (22.5%): The higher the proportion of freshmen who return to campus for sophomore year and eventually graduate, the better a school is apt to be at offering the classes and services that students need to succeed.
This measure has two components: six-year graduation rate (80% of the retention score) and freshman retention rate (20%). The graduation rate indicates the average proportion of an entering class earning a degree in six years or less; we consider freshman classes that started from fall 2003 through fall 2006. Freshman retention indicates the average proportion of freshmen who entered the school in the fall of 2008 through fall 2011 and returned the following fall.
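Combined, the two components yield a single retention score. A sketch with hypothetical rates (the function name is ours):

```python
def retention_score(six_year_grad_rate, freshman_retention_rate):
    """Retention indicator: 80% six-year graduation rate plus
    20% freshman retention rate (each averaged over several cohorts)."""
    return 0.8 * six_year_grad_rate + 0.2 * freshman_retention_rate

# Hypothetical school: 75% average graduation rate, 90% freshman retention.
print(retention_score(75.0, 90.0))  # 78.0
```

The other composite indicators (faculty resources, student selectivity) combine their sub-factors in the same weighted fashion, with the weights given in their respective sections.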
Faculty resources (20%): Research shows that the more satisfied students are about their contact with professors, the more they will learn and the more likely they are to graduate. We use six factors from the 2012-2013 academic year to assess a school's commitment to instruction.
Class size has two components: the proportion of classes with fewer than 20 students (30% of the faculty resources score) and the proportion with 50 or more students (10% of the score).
Faculty salary (35%) is the average faculty pay, plus benefits, during the 2011-2012 and 2012-2013 academic years, adjusted for regional differences in the cost of living using indexes from the consulting firm Runzheimer International. We also weigh the proportion of professors with the highest degree in their fields (15%), the student-faculty ratio (5%) and the proportion of faculty who are full time (5%).
Student selectivity (12.5%): A school's academic atmosphere is determined in part by the abilities and ambitions of the students.
We use three components: the admissions test scores for all enrollees, using the Critical Reading and Math portions of the SAT and the ACT Composite (65% of the selectivity score); the proportion of enrolled freshmen who graduated in the top 10% of their high school classes at National Universities and National Liberal Arts Colleges, or in the top quarter of their classes at Regional Universities and Regional Colleges (25%); and the acceptance rate, or the ratio of students admitted to applicants (10%).
The data are all for the fall 2012 entering class. While the ranking calculation takes account of both the SAT and ACT scores of all entering students, the table displays the score range for whichever test was taken by most students.
Financial resources (10%): Generous per-student spending indicates that a college can offer a wide variety of programs and services. U.S. News measures financial resources by using the average spending per student on instruction, research, student services and related educational expenditures in the 2011 and 2012 fiscal years. Spending on sports, dorms and hospitals doesn't count.
Graduation rate performance (7.5%): This indicator of added value shows the effect of the college's programs and policies on the graduation rate of students after controlling for spending and student characteristics such as test scores and the proportion receiving Pell Grants. We measure the difference between a school's six-year graduation rate for the class that entered in 2006 and the rate we predicted for the class.
If the actual graduation rate is higher than the predicted rate, the college is enhancing achievement.
Alumni giving rate (5%): This reflects the average percentage of living alumni with bachelor's degrees who gave to their school during 2010-2011 and 2011-2012, an indirect measure of student satisfaction.
To arrive at a school's rank, we first calculated the weighted sum of its scores. The final scores were rescaled so that the top school in each category received a value of 100, and the other schools' weighted scores were calculated as a proportion of that top score. Final scores were rounded to the nearest whole number and ranked in descending order. Schools that are tied appear in alphabetical order.
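The rescaling and ranking steps above can be sketched as follows (school names and weighted scores are invented for illustration):

```python
def rank_schools(weighted_scores):
    """Rescale weighted sums so the top school in the category scores 100,
    round to the nearest whole number, and order descending."""
    top = max(weighted_scores.values())
    final = {school: round(100 * score / top)
             for school, score in weighted_scores.items()}
    # Sort by score descending, then by name ascending to break ties.
    return sorted(final.items(), key=lambda kv: (-kv[1], kv[0]))

scores = {"Beta College": 61.2, "Alpha University": 80.4, "Gamma Tech": 61.5}
print(rank_schools(scores))
# [('Alpha University', 100), ('Beta College', 76), ('Gamma Tech', 76)]
```

Note that rounding to whole numbers is what produces ties; tied schools keep the same score and are listed alphabetically, as described above.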