The 2020 edition of Inside Honors, to be published in August or September 2020, will include in-depth ratings of 45 programs and somewhat shorter reviews of an additional five programs.
The 45 programs to receive full ratings and 3,000-word profiles are below:
College of Charleston
New Jersey Inst of Tech
Below are the five programs that will receive unrated reviews:
Yes, the title of this post is a mouthful. For years now, I have kept an updated list of the departmental rankings that U.S. News publishes so that I can add them to the biannual profiles I do of honors programs. When the 2020 rankings came out, I wanted to see whether there was any clear relationship between the departmental scores and the academic reputation scores. Then I compared the latest reputation scores with those published in 2015 to see how much had changed. Finally, the table below also includes changes in university rankings and the most recent rankings for social mobility.
(I would welcome comments on this post. Please email firstname.lastname@example.org.)
It appears that the social mobility metric has had some impact, especially if the ranking is very strong, as in the case of many UC campuses and Florida institutions. There is no clear relationship between departmental scores and academic reputation scores. Departmental rankings do have a modest relationship to the overall U.S. News rankings, but there are many inconsistencies. Academic reputation scores do seem to show some “grade inflation” since 2015; often this is the case even when the U.S. News ranking has dropped significantly.
The table below includes data for 100 public and private universities.
The cumulative rankings that I do for 15 academic disciplines require some explanation. U.S. News only ranks graduate programs for most departments. Here are the disciplines for which I have cumulative departmental rankings, using the most recent data (2018): biological sciences; business (undergrad); chemistry; computer science; earth sciences; economics; education; engineering (undergrad); English; history; mathematics; physics; political science; psychology; and sociology.
Not every university has a ranked department in each of the 15 disciplines. I averaged departmental rankings for every university that had at least six ranked departments. For universities with, say, fewer than 12 ranked departments, the overall average will be artificially strong, because only the best departments are ranked and I cannot include unranked departments. Most universities have 12-15 departments that are ranked, so the overall average will be more useful for them. And some of the universities with a small number of ranked departments are specialized, such as Georgia Tech and Caltech; for those schools, a strong result based on only six or seven ranked departments is not misleading.
Universities with fewer than 10 departmental rankings: Colorado School of Mines; Georgia Tech; Miami Ohio; American; Brigham Young; Caltech; Dartmouth; Drexel; Fordham; Georgetown; and RPI.
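To make the averaging rule concrete, here is a minimal Python sketch of the cumulative calculation. The department ranks below are hypothetical, used only for illustration; the six-department threshold is the one described above.

```python
# Hypothetical departmental ranks for one university (lower is better).
# None means the department was unranked and is excluded from the average.
ranks = {"biology": 25, "business": 18, "chemistry": 31,
         "computer science": 22, "economics": None, "engineering": 15,
         "english": 40, "history": None}

ranked = [r for r in ranks.values() if r is not None]

# Per the rule above, average only when at least six departments are ranked.
if len(ranked) >= 6:
    avg_rank = sum(ranked) / len(ranked)
    print(f"Cumulative departmental ranking: {avg_rank:.1f} "
          f"({len(ranked)} ranked departments)")
else:
    print("Fewer than six ranked departments; no cumulative ranking computed.")
```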
It should be said that universities with relatively low departmental rankings can legitimately receive high rankings because of other meaningful factors, such as grad and retention rates and class size. Some excellent universities do not have an especially strong research focus or a lot of graduate programs. Dartmouth is one prominent example.
The universities below appear in rank order of their 2020 academic reputation, according to U.S. News.
At last, there is a major study that goes a long way toward answering this important question.
[Photo: Dr. Art Spisak]
Making good use of the increasing data now available on honors programs and their parent institutions, two honors researchers have recently published a major paper that compares honors students and non-honors students from 19 public research universities. Of the 119,000 students in the sample, 15,200 were or had been participants in an honors program.
The study is extremely helpful to parents and prospective honors students who rightly ask how an honors education differs from a non-honors education: How will participation in an honors program shape and differentiate an honors student? Will an honors education be the equivalent of an education at a more prestigious private college?
Feelings about the undergraduate experience: “In their undergraduate experience, students in the honors group reported a more positive experience, on average, than those in the non-honors group.” Both groups attended classes with similar frequency, but honors students reported greater activity in the following areas:
finding coursework so interesting that they do more work than is required;
communicating with profs outside of class;
working with faculty in activities other than coursework;
increasing effort in response to higher standards;
completing assigned reading;
attending to self-care, eating, and sleeping;
spending more time studying;
performing more community service and volunteer work;
participating in student organizations;
and, while spending about the same time in employment, finding on-campus employment more frequently than non-honors students.
Participation in “high-impact” activities: These experiences contribute to undergraduate success and satisfaction as well as to higher achievement after graduation. Some of these are restricted to upperclassmen, so the study concentrated on participation by seniors in high-impact activities, including undergraduate research, senior capstone or thesis, collaborating with a professor on a project or paper, studying abroad, or serving in a position of leadership.
“Those [students] in the honors student segment of the senior sample had markedly higher cumulative college grade point averages.” The cumulative GPA of the honors group was 3.65; for the non-honors group it was 3.31. “A grade point average of 3.31 is located at the 38th percentile in the overall distribution within the study sample, and a grade point average of 3.65 is at the 69th percentile.” The authors found that the very significant difference was “particularly impressive” given that the high school GPAs of honors and non-honors students did not differ nearly as much. Honors students were also 14% more likely to have served as an officer in a campus organization.
Students in the honors group were 77% more likely to have assisted faculty in research projects, 85% more likely to have studied abroad, and 2.5 times more likely to have conducted undergraduate research under faculty guidance.
Intellectual curiosity: Honors students expressed a statistically significant but not dramatically greater degree of intellectual curiosity; however, their intellectual curiosity was aligned with the “prestige” of an academic major. The study did not measure whether this attachment to prestige reflected a desire for greater intellectual challenge or for higher salaries associated with many such majors. (Or both.) Both groups placed similar emphasis on the importance of high pay after graduation and on career fulfillment.
Diversity: The study found that African American students were only 52% as likely to be in an honors program as they are to be in the larger university sample. Latin American students were 58% as likely. These figures may be due in part to the fact that, as a group, the 19 research universities “are located in states that are somewhat more white than the nation as a whole, but most of the discrepancy can be attributed to the fact that Research 1 universities do not, in general, have enrollments that are especially representative of ethnic and racial minorities.” On the other hand, LGBQ, transgender, and gender-questioning students “appear to be slightly over-represented among honors students.”
Low-income and first-generation participation: These students “are significantly and substantially under-represented in the honors group.” Pell Grant recipients are 30% less likely to be in the honors group than in the non-honors group, and first-generation students are 40% less likely to be in the honors group.
Test scores and HSGPA: There was a difference between honors and non-honors students, but it was not dramatic. “Regardless of which test score was used, the honors group had scores that were about 10% higher, on average.” (In our ratings of honors programs, we have found that honors test scores were about 17% higher, based on actual honors scores and the mid-range of test scores in U.S. News rankings.) The average high school GPA for the honors group was .11 points higher than for the non-honors group.
The study used data from the 2018 Student Experience in the Research University (SERU) survey. Although the Research 1 universities in the study comprise only 3% of all colleges and universities in the nation, R1 universities enroll 28.5% of all undergraduates pursuing four-year degrees.
Research centered on honors education is increasingly important: An estimated 300,000-400,000 honors students are enrolled in American colleges and universities today.
“It has become a mantra in some quarters to assert that standardized tests measure wealth more than intellectual ability or academic potential, but this is not actually the case. These tests clearly assess verbal and mathematical skills, which a century of psychological science shows are not mere reflections of upbringing. Research has consistently found that ability tests like the SAT and the ACT are strongly predictive of success in college and beyond, even after accounting for a student’s socioeconomic status.”
For years, U.S. News has used test scores and selection rates as ranking data for the annual “Best Colleges” report. The publication has slightly reduced the impact of test scores in recent editions.
Below I will explain why we do not include test scores as a metric and argue that, for honors and non-honors students, other factors are more important in predicting success. (High school GPA is certainly a major factor; but since almost all honors students have high GPAs, I do not discuss the impact of GPA in this post.)
In their published scholarly work, the authors argue that test scores by themselves correlate very strongly (r = -.892) with the annual U.S. News Best Colleges rankings for national universities even though the test scores count for only 7.75 percent of the total ranking score. (The authors do not cite the impact of test scores on other ranking factors such as graduation and retention rates, which together account for 22 percent of the total ranking score.)
Our own work for the past eight years, however, shows that test scores do not have a similar correlation to quantitative assessments of honors programs. In our publications we list minimum and average admissions test scores for all programs we rate, but we do not count the scores alone as a rating factor.
Here’s why we do not use test scores as a measure: The factors that make for an excellent honors program are primarily structural. The major building blocks are the credits required for honors completion; the number of honors class sections offered, by type and academic discipline; the availability of priority registration and honors housing; the size of honors class sections; and the number of staff to assist students.
So, don’t the test scores drive the university graduation rates of honors program entrants, just as they do in elite colleges? The answer is: not so much. The correlation is r = .50.
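For readers curious how a correlation coefficient like those cited above is computed, here is a minimal Python sketch. The numbers are made up for illustration only; they are not our actual program data.

```python
import numpy as np

# Hypothetical data: mean SAT scores for ten honors programs and the
# university graduation rates of their honors entrants.
sat_scores = np.array([1280, 1310, 1340, 1360, 1390,
                       1410, 1440, 1460, 1490, 1510])
grad_rates = np.array([78, 85, 81, 90, 83, 88, 92, 86, 91, 94])

# np.corrcoef returns a 2x2 correlation matrix; the off-diagonal
# entry is the Pearson r between the two variables.
r = np.corrcoef(sat_scores, grad_rates)[0, 1]
print(f"r = {r:.2f}")  # values near +/-1 indicate a strong relationship
```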
Admittedly, it is probably difficult for a student with, say, a 1050 SAT score to succeed in an elite college or in most honors programs. But within a fairly large range of SAT scores (~1280-1510), the opportunities for success are more often present given a conducive structure. With every biannual review of honors data, I find great pleasure in discovering outstanding honors programs that are not housed in highly-ranked and extremely selective universities. The golden nuggets of excellence in higher education are scattered much farther and wider than many would have us believe.
I am strongly opposed to the numerical ranking of colleges or their honors programs, whether or not test scores are included in the methodology. I ranked honors programs one time, in 2012, and regret doing so. Yes, I have data that allows me to numerically differentiate the total rating scores earned by honors programs. But anyone who wants to provide some kind of assessment of colleges or programs needs to do so with the assumption that their methodology is subjective and imperfect. Ordinal rankings based on distinctions of one point or fractions of a point give readers a veneer of certitude that a qualitative difference exists even if it (often) does not.
Although we do not rank honors programs, we do place them in one of five rating groups, a process that is similar to rating films on a five-star basis but based on quantitative rather than completely subjective data. The seven honors programs in the top group in 2018 (out of 41) had average SAT scores (enrolled students) ranging from 1280 to 1490, a sizable range.
Honors completion rates are something of an issue these days. An honors completion rate is the percentage of first-year honors entrants who complete all requirements for at least one honors graduation option by the time they graduate from the university. About 42 percent of honors students do not complete honors requirements before graduation, although a very high percentage of honors entrants (87 percent) do graduate from the university.
The seven honors programs with honors completion rates of 75 percent or higher in our 2018 ratings had average SAT scores ranging from 1340 to 1510; the mean for this group was 1420. The mean SAT for the 31 (of 41) programs that provided completion rates was 1405, not much lower. And another seven programs with mean SAT scores of 1420 or higher had completion rates below 58 percent, the group mean.
The mean SAT score for all 41 rated programs was 1407; the mean SAT for the top seven programs was only one point higher at 1408.
It is clear, at least with respect to honors programs, that average SAT scores are not the best predictors of program effectiveness. What does this mean for the value of test scores nationwide, if anything?
I think it means that for students who are in the 1280 to 1500 SAT range, success depends at least as much on mentoring, smaller interdisciplinary sections, student engagement, course availability, community (including housing), and advising support as it does on test scores.
The good news here is that high levels of achievement are accessible even to students who are not in honors programs and who do not begin college with extremely high test scores, although non-honors students will probably have to assert themselves more in order to benefit from the strongest attributes of their university.
Editor’s Note: We hope to update this post before the end of September 2019. The list appears after the introductory section. The list was current as of September 25, 2018.
In a previous post, Based on Academic Reputation Alone, Publics Would Be Higher in U.S. News Rankings, we wrote that many public universities have a reputation in the academic community that is much higher than their overall ranking by U.S. News. In this post, we summarize the reasons that prospective honors students and their parents might consider paying more attention to academic reputation than to other factors in the oft-cited rankings. The list also facilitates comparisons of public and private universities.
First, these are factors to consider if the state university’s academic reputation is much stronger than its overall ranking:
1. The overall rankings penalize public universities for their typically larger class sizes, but the average honors class size in our most recent study of honors programs is 24.9 students, much smaller than the average class size for the universities as a whole. Many of these honors classes are lower-division courses, the level at which large classes are otherwise the norm. First-year honors seminars and classes for honors-only students average 17.5 students per section. Result: the relatively poor rating the whole university might receive for class size is offset for honors students.
2. The overall rankings hit some public universities hard for having relatively low retention and graduation percentages, but freshman retention rates in honors programs are in the 90% range and higher; meanwhile, six-year grad rates for honors entrants average 87%, much higher than the average rates for the universities as a whole. Result: the lower rates for the universities as a whole are offset for honors students.
3. All public universities suffer in the overall rankings because U.S. News assigns ranking points both for the wealth of the university as a whole and for the impact that wealth has on professors’ salaries, smaller class sizes, etc. This double-counts wealth by treating inputs and outputs separately; only the outputs should be rated. Result: the outputs for class size (see above) are offset for honors students, and the wealth of the university as an input should not be considered in the first place.
4. For highly-qualified students interested in graduate or professional school, academic reputation and the ability to work with outstanding research faculty are big advantages. Honors students have enhanced opportunities to work with outstanding faculty members even in large research universities, many of which are likely to have strong departmental rankings in the student’s subject area. Result: honors students are not penalized for the research focus of public research universities; instead, they benefit from it.
5. Many wealthy private elites are generous in funding all, or most, need-based aid, but increasingly offer little or no merit aid. This means that families might receive all the need-based aid they “deserve” according to a federal or institutional calculation and still face annual college costs of $16,000 to $50,000. On the other hand, national scholars and other highly-qualified students can still receive significant merit aid at most public universities. Result: if a public university has an academic reputation equal to that of a wealthy private elite, an honors student could be better off financially and not suffer academically in a public honors program.
But…what if the academic reputation of the public university is lower than that of a private school under consideration? In this case, the public honors option should offer the following offsets:
1. The net cost advantage of the public university, including merit aid, probably needs to be significant.
2. It is extremely important to evaluate the specific components of the honors program to determine whether it provides a major “value-added” advantage: is it, relatively speaking, better than the university as a whole? Often, the answer will be yes. To determine how much better, look at the academic disciplines covered by the honors program, the actual class sizes, retention and graduation rates, research opportunities, and even honors housing and perks, such as priority registration.
The rankings below are on a 5.0 scale, and there are many ties. We have included national universities with reputation rankings between 2.7 and 4.9.
Editor’s Note: This is the third and final post in our series on honors program completion rates.
In the first post, we wrote about the hybrid structure of honors programs and how that can affect honors completion rates. An honors completion rate is the percentage of honors students who complete all honors course requirements for at least one option by the time they graduate. The second post presented a tentative formula for evaluating honors completion rates.
This post has two parts. The first part compares honors completion rates of main option and multiple option honors programs; the second part compares completion rates of honors colleges and honors programs.
Main option programs emphasize only one curriculum completion path, usually requiring more than 30 honors credits and often an honors thesis as well. Multiple option programs offer two or more completion paths for first-year students. One option might require 24 honors credits; another might require 15-16 credits. Either of these might also require a thesis.
Many universities are now establishing honors colleges. These usually have a dean and a designated staff of advisors. They typically provide at least enough honors housing space for first-year students. Some began as honors programs and then re-formed into honors colleges. Quite a few honors colleges have significant endowments.
Honors programs do not have a dean, but are administered by a director and staff. Sometimes there are few real differences between honors colleges and programs. In general, however, honors colleges have more staff and offer more access to honors housing.
We received data from 23 honors colleges and eight honors programs, with a combined enrollment of more than 64,000 honors students. The 31 parent universities had an average U.S. News ranking of 126, ranging from the low 50s to higher than 200.
The first summary is below:
PART ONE: SUMMARY STATISTICS, MAIN OPTION PROGRAMS VS. MULTIPLE OPTION PROGRAMS

[Table comparing the two program types on: number of programs; number of honors students; completion rate; university graduation rate; university grad rate > completion rate; honors graduation rate; honors grad rate > completion rate; honors grad rate > university grad rate; test scores adjusted to SAT; average curriculum requirement; thesis option (y/n); thesis required for all options (y/n); dorm rooms for freshmen and sophomores; honors class seats per honors student; separate honors application (y/n).]
The second summary, comparing honors colleges and honors programs, is below:
Honors completion rates, as we noted in a previous post, are a complicated issue. They represent the percentage of students who enter an honors program and then complete all honors requirements for at least one completion option by the time they graduate.
They are related to university freshman retention rates and university graduation rates, but in order to evaluate them there must be some workable baseline completion rate derived from a significant sample of programs.
Honors deans and directors at 31 public university honors programs contributed the data used to calculate the values in the next paragraph, along with extensive additional data we use in rating honors programs. The 31 programs enrolled more than 64,000 honors students in Fall 2017. At some point we might include completion rates as a metric; if we do, then this formula, or an improved version, might be used.
This tentative formula takes into account (1) the average (mean) honors completion rate for the whole data set (57.88 percent); (2) the mean university-wide freshman retention rate for the whole data set (86.81 percent); (3) the completion rate of each program; (4) the freshman retention rate for the parent university of each program; and (5) the graduation rate of each university.
The formula assumes that a desirable target honors completion rate should at least equal the midway point between the university graduation rate and the adjusted honors completion rate. (See examples below, however, for programs that have honors completion rates that exceed the university graduation rate.) The formula can easily be changed to include lower or higher target levels by increasing or reducing the divisor.
H = the mean honors completion rate for the data set;
F = the mean freshman retention rate for the data set;
P = the program completion rate;
C = the mean completion rate adjusted to the freshman retention rate of each parent university, C = (H/F)*R, or .67*R for this data set;
R = the freshman retention rate of each parent university;
G = the graduation rate of each parent university;
T = the estimated target completion rate after the formula is applied. T = (G + C) /2. This is an estimate of what the minimum completion rate should be, given the university’s freshman retention rate and graduation rate, and the mean completion rate and mean freshman retention rate for this data set. Other data sets would of course have different data, but the formula could still be applied.
The completion rates of ten programs exceeded the graduation rates of their parent universities.
Here is the formula applied to an example program, where P = 61%, R = 92%, and G = 83%:
First step = (H/F), or 57.88 / 86.81. The result is .67. This is a constant for this data set.
Second step is to adjust the mean completion rate in relation to the university freshman retention rate: C = .67*R, or .67*92. The result is 61.64 (C), a bit higher than the actual program completion rate of 61.0 (P), because of the relatively high freshman retention rate.
Third step is to combine C with the university graduation rate in order to calculate the target completion rate: T = (G + C)/2, or (83 + 61.64)/2 = 72.32 (T).
Fourth step is to calculate P – T, which would be 61.00 – 72.32 = –11.32. This step calculates the extent to which the program completion rate varies from the estimated target rate. The program is performing below the estimated target rate. The relatively high university graduation rate is the main reason.
Honors program A had a program completion rate (P) of 84%, a freshman retention rate (R) of 88%, and a university graduation rate (G) of 73%. The C rate would be .67*88, or 58.96. The T calculation would be (G + C)/2, or (73 + 58.96)/2 = 65.98 (T). Now calculate P – T, or 84 – 65.98 = +18.02. This program is performing far above its estimated target rate.
Honors program B had the same program completion rate (P) of 84% but a much higher freshman retention rate (R) of 95%, and a university graduation rate (G) of 81%. The C value would be .67*95, or 63.65, and T would be (G + C)/2, or (81 + 63.65)/2 = 72.325. When we calculate P – T (84 – 72.325), the result is +11.675. This program is performing well above its estimated target rate, but even with the same completion rate as Program A, the impact of higher graduation and freshman retention rates for Program B causes its relative performance rating to be lower than Program A’s. In other words, the expectations were higher for Program B. Both programs are exceptional in that their honors completion rates exceed their university graduation rates.
Honors program D had a program completion rate (P) of 40%, a freshman retention rate (R) of 82%, and a university graduation rate (G) of 53%. C would be .67*82, or 54.94. T would be (G + C)/2, or (53 + 54.94)/2 = 53.97. Calculating P – T, the result is 40 – 53.97, or -13.97. Program D is significantly underperforming based on the formula.
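For readers who want to apply the formula to their own numbers, here is a minimal Python sketch of the calculation described above. The function name is my own, and the printed values will differ slightly from the worked examples because the examples round H/F to .67 before multiplying.

```python
# Tentative target-completion-rate formula, as described above.
# All rates are percentages; H and F are the data-set means.
H = 57.88  # mean honors completion rate for the data set
F = 86.81  # mean freshman retention rate for the data set

def performance_vs_target(P, R, G):
    """Return (C, T, P - T) for one program.

    P = program completion rate
    R = freshman retention rate of the parent university
    G = graduation rate of the parent university
    C = (H/F) * R, the mean completion rate adjusted to the
        university's freshman retention rate
    T = (G + C) / 2, the estimated target completion rate
    P - T = how far the program runs above (+) or below (-) its target
    """
    C = (H / F) * R
    T = (G + C) / 2
    return C, T, P - T

# The example program and Programs A, B, and D from the post:
for name, P, R, G in [("Example", 61, 92, 83), ("Program A", 84, 88, 73),
                      ("Program B", 84, 95, 81), ("Program D", 40, 82, 53)]:
    C, T, diff = performance_vs_target(P, R, G)
    print(f"{name}: C = {C:.2f}, T = {T:.2f}, P - T = {diff:+.2f}")
```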
Especially notable in the list below are the changes in major public universities.
Included here are institutions that were, at some point, ranked in the top 50 in those two categories. Some values are blank because in those years the magazine did not give individual rankings to every institution, instead listing them in large groups described as “quartiles” or “tiers.” The rankings shown for 1983 and 1985 are the ones that U.S. News published in its magazine in those same years. For all subsequent years, the rankings come from U.S. News’s separate annual publication “America’s Best Colleges,” which assigns rankings for the upcoming year.
The website Quartz just published a list of the universities that place the highest number of grads at tech firms in Silicon Valley.
“The most coveted jobs are in Silicon Valley, and the most selective US universities are members of the Ivy League. So it stands to reason that tech giants like Apple, Google, Amazon and Facebook would scoop up the best and brightest from those bastions of power and privilege.
“Think again. None of the eight Ivy League schools—Harvard, Yale, Princeton, Brown, Columbia, Cornell, Dartmouth and the University of Pennsylvania—cracked the top 10 on a list of the universities sending the most graduates to tech firms, according to an analysis by HiringSolved, an online recruiting company. The company used data from more than 10,000 public profiles for tech workers hired or promoted into new positions in 2016 and the first two months of 2017.”
Editor’s note: The HiringSolved link also lists the 10 specific skills most in demand as of 2017, with changes from 2016. For example, the top four skills for entry-level placement in 2017 are Python, C++, Java, and algorithms. The top job titles for entry placement in 2017 are Software Engineering Intern, Software Engineer, Business Development Consultant, and Research Intern.
Now let it be said that the 17 public universities in the top 25 are generally much larger than the private institutions on the list, so the sheer volume of highly-trained tech grads from the publics is much larger.
But the final message from Quartz was this:
“If the list tells us anything, it’s that admission to an elite university isn’t a prerequisite for a career in Silicon Valley, and what you know is more important than where you learn it.” [Emphasis added.]
Here are the top 25 universities for Silicon Valley tech placement, in rank order:
San Jose State
UC San Diego
Univ of Phoenix*
UC Santa Barbara
*Hypothesis: hands-on experience and later degrees?
The new rankings from Money are out, and public colleges and universities account for 27 of the top 50 best values in 2017. These rankings are likely the best college rankings overall, given their balanced approach.
As Jeffrey J. Selingo writes in the Washington Post, the earnings portion of the rankings is based in part on some very interesting new evidence: the “Chetty data.”
“That refers to Raj Chetty,” Selingo tells us, “a Stanford professor, who has led a team of economists that has received access to millions of anonymous tax records that span generations. The group has published several headline-grabbing studies recently based on the data. In findings published in January, the group tracked students from nearly every college in the country and measured their earnings more than a decade after they left campus, whether they graduated or not.”
Money does a better job of ranking colleges based on “outcomes” than Forbes does (see Outcomes farther down). This is especially the case with the multiple earnings analyses.
To see the list of top publics, please skip the methodology discussion immediately below.
The 2017 rankings include 27 factors in three categories:
Quality of education (1/3 weighting), which was calculated using:
Six-year graduation rate (30%).
Value-added graduation rate (30%). “This is the difference between a school’s actual graduation rate and its expected rate, based on the economic and academic profile of the student body (measured by the percentage of attendees receiving Pell grants, which are given to low-income students, and the average standardized test scores of incoming freshmen).” [Emphasis added.]
“Peer quality (10%). This is measured by the standardized test scores of entering freshmen (5%), and the percentage of accepted students who enroll in that college, known as the “yield” rate (5%).” Note: using the yield rate is an improvement over the U.S. News rankings.
“Instructor quality (10%). This is measured by the student-to-faculty ratio.” Note: this is very similar to a U.S. News metric.
“Financial troubles (20%). This is a new factor added in 2017, as financial difficulties can affect the quality of education, and a growing number of schools are facing funding challenges.” Note: although this is not an “outcome” either, it is more meaningful than using data on alumni contributions, etc.
Affordability (1/3 weighting), which was calculated using:
“Net price of a degree (30%). This is the estimated amount a typical freshman starting in 2017 will pay to earn a degree, taking into account the college’s sticker price; how much the school awards in grants and scholarships; and the average time it takes students to graduate from the school, all as reported to the U.S. Department of Education….This takes into account both the estimated average student debt upon graduation (15%) and average amount borrowed through the parent federal PLUS loan programs (5%).
“Student loan repayment and default risk (15%).
“Value-added student loan repayment measures (15%). These are the school’s performance on the student loan repayment and default measures after adjusting for the economic and academic profile of the student body.
Affordability for low-income students (20%). This is based on federally collected data on the net price that students from families earning $0 to $30,000 pay.
Outcomes (1/3 weighting), which was calculated using:
“Graduates’ earnings (12.5%), as reported by alumni to PayScale.com; early career earnings within five years of graduation (7.5%) and mid-career earnings for those whose education stopped at a bachelor’s degree and who graduated, typically, about 15 years ago (5%).
“Earnings adjusted by majors (15%). To see whether students at a particular school earn more or less than would be expected given the subjects students choose to study, we adjusted PayScale.com’s data for the mix of majors at each school; for early career earnings (10%) and mid-career earnings (5%).
“College Scorecard 10-year earnings (10%). The earnings of federal financial aid recipients at each college as reported to the IRS 10 years after the student started at the college.
“Estimated market value of alumni’s average job skills (10%). Based on a Brookings Institution methodology, we matched up data provided by LinkedIn on the top 25 skills reported by each school’s alumni with Burning Glass Technologies data on the market value of each listed skill.
“Value-added earnings (12.5%). To see if a school is helping launch students to better-paying jobs than competitors that take in students with similar academic and economic backgrounds, we adjusted PayScale.com’s earnings data for the student body’s average test scores and the percentage of low-income students at each school; for early career earnings (7.5%) and mid-career earnings (5%).
Job meaning (5%). We used the average score of each school’s alumni on PayScale.com’s survey question of “Does your work make the world a better place?”
“Socio-economic mobility index (20%).
For the first time, we included new data provided by the Equality of Opportunity Project that reveals the percentage of students each school moves from low-income backgrounds to upper-middle class jobs by the time the student is 34 years old. Finally, we used statistical techniques to turn all the data points into a single score and ranked the schools based on those scores.” [Emphasis added.]
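To make the weighting mechanics concrete, here is a minimal Python sketch of how a composite score like Money’s might be assembled. The sub-scores, their normalization to a 0-100 scale, and the category values for affordability and outcomes are all hypothetical; only the weights come from the methodology quoted above, and Money’s exact statistical techniques are not published in detail.

```python
# Hypothetical sub-scores (0-100 scale) for one school, paired with the
# quality-of-education weights quoted above (.30+.30+.10+.10+.20 = 1.0).
quality_subscores = {
    "six_year_grad_rate":    (82, 0.30),
    "value_added_grad_rate": (75, 0.30),
    "peer_quality":          (70, 0.10),
    "instructor_quality":    (68, 0.10),
    "financial_troubles":    (90, 0.20),
}

def category_score(subscores):
    """Weighted average of the sub-scores within one category."""
    return sum(score * weight for score, weight in subscores.values())

quality = category_score(quality_subscores)

# Affordability and outcomes would be built the same way from their own
# sub-scores; hypothetical category values are used here for brevity.
affordability, outcomes = 77.0, 81.5

# Each of the three categories carries a 1/3 weighting in the composite.
composite = (quality + affordability + outcomes) / 3
print(f"Quality: {quality:.1f}  Composite: {composite:.1f}")
```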
The inclusion of these metrics makes the Money rankings a hybrid of the Washington Monthly “public good” rankings, the U.S. News rankings, and the Kiplinger rankings, with the socio-economic factors having a less significant impact on overall standing than they have in the Washington Monthly rankings. Still, these factors do result in two CUNY campuses’ receiving high rankings.
“The data showed, for example,” Selingo writes, “that the City University of New York propelled almost six times as many low-income students into the middle class and beyond as all eight Ivy League campuses, plus Duke, M.I.T., Stanford and Chicago, combined. The California State University system advanced three times as many.”
TOP PUBLIC UNIVERSITIES, MONEY MAGAZINE, 2017, BY NAME AND OVERALL RANK INCLUDING PRIVATE INSTITUTIONS:
CUNY Baruch College–2
College of New Jersey–24
UC Santa Barbara–36
Cal State Long Beach–42
Rutgers, New Brunswick–49