Forbes College Rankings 2014: Short Honeymoon for Publics Is Over

Last year, we wrote that the Forbes America's Best Colleges rankings had suddenly become friendlier to public universities after several years of relegating many of them to the high three figures in the numerical rankings. In 2013, 19 public universities (not counting the military academies) made it into the top 100; this year, that number dropped to only 14.

Worse, all but two of the 14 that remain in the top 100 lost ground, some by a large amount.

It is not unusual for those who rank or evaluate colleges to change their methodologies. We have done the same for the next edition of our Review, although we will not be using numerical rankings this time around.

For 2014, Forbes (or rather the Center for College Affordability and Productivity (CCAP), which does the work for the magazine) increased the weight of the student debt factor from 17.5% to 25%. At the same time, the weight for "Academic Success" fell modestly, from 11.25% to 10%. Both changes probably hurt public universities: the debt factor, because state support still has not caught up with costs; and the academic success factor, because CCAP counts National Science Foundation fellowships and Fulbright awards, many of which are won by students and faculty at public research universities.

Yet here is another puzzling aspect of the rankings: the Forbes Best Value rankings, about which we will write in a future post, list many of the same schools that dropped in the overall rankings. And student debt is a major metric in the value rankings.

At least the bizarre swings that marked the Forbes list in its first few years have mostly gone away. No longer do we see, for example, a university ranked 320th one year rise to 168th the next. And it is good to keep in mind that the Forbes rankings lump all private and public universities and liberal arts colleges into one huge group; a Forbes ranking of, say, 65 or 70 for a public university is therefore a much stronger showing than a U.S. News "national university" ranking in the same range.

Still, it is difficult to understand how some of the public universities could have dropped so far in just one year.

Below are the public universities that appeared in the Forbes top 100 in 2013, with their rankings for 2013 and 2014. The first parenthesis is the 2013 ranking, and the second is the 2014 ranking.

U.S. Military Academy (7) (9)

UC Berkeley (22) (37)

U.S. Naval Academy (28) (27)

Virginia (29) (40)

Michigan (30) (45)

U.S. Air Force Academy (31) (34)

UCLA (34) (44)

UNC Chapel Hill (38) (50)

William & Mary (44) (41)

Illinois (53) (68)

Washington (55) (73)

UT Austin (66) (76)

Wisconsin (68) (70)

Maryland (73) (82)

Florida (74) (87)

Georgia Tech (83) (90)

Georgia (90) (94)

Penn State (93) (166)

UC Santa Barbara (96) (116)

Indiana (97) (107)

UC Davis (99) (113)


How Does It Feel to Win the $250,000 Hertz Fellowship…Plus Two Others?

Editor’s note:  The following article by Dorothy Guerrero appeared in Alcalde, the alumni magazine of UT Austin….

Maybe you’ve had this nightmare: Dressed in a suit and tie, you sit at a table across from two geniuses who are exalted in their field. You’re in a room on the MIT campus in Cambridge, Mass., and the walls are made of glass, so everyone in the hallway can see you sweat. There’s a big stack of paper in the middle of the table and a couple of pens on either side to use if you need to draw a schematic to explain a concept. There is no way to cram for this oral exam because you are not being tested on something you have learned—but on everything you have ever learned.

No? Well, it actually happened last March to Ashvin Bashyam, BS ’14, who managed to pass the daunting interview during his senior year at UT and win the Hertz fellowship for graduate education in the applied physical, biological, and engineering sciences. It’s a five-year award, valued at $250,000. He was one of only 15 students in the nation selected for the fellowship, and the university’s fourth Hertz fellow since 2011.

Bashyam was a researcher in UT’s Ultrasound Imaging and Therapeutics Research Laboratory, where he focused on improving cancer detection through advanced medical imaging. Geoffrey Luke, PhD ’13, who mentored Bashyam in the lab, says he knew he was special right away.

“Any time you are describing something to him,” Luke says, “he’s usually one step ahead. A student like Ashvin doesn’t come around very often.”

The goal of the Hertz interview is for the candidate to prove that he can think creatively and apply what he knows on the fly to unsolved problems. A panel of past winners asks open-ended, hypothetical questions.  Bashyam remembers being stressed out, but for the most part he felt he was doing well—until one question tripped him up.

“Imagine we are in the future of health care,” said one of his interrogators. “Fifteen to 20 years from now, and every disease is managed except for very early stage cancers. Those are still unstoppable until we can see them. So come up with a way for a hospital to screen every patient walking in … Go.”

Bashyam’s first attempt at an answer had something to do with using X-ray and MRI, but the panel interrupted him right away and told him to think more ambitiously.

“I started off recalling what I’d done in lab where we learn how cancer at its somewhat early stages starts to recruit blood vessels and raises the overall temperature in that area,” Bashyam says. “It’s a process called angiogenesis.”

So he threw out a proposal for a kind of imaging technique that looks for increased blood vessel density, or maybe changes in hemoglobin concentration.

“Nope,” they said, cutting him off again, “We’re talking about earlier.”

Good Fellow

That’s when Bashyam had to dig deeper and cast his thoughts wider than he had ever done before. He found himself talking about the immune system, which he says he knows very little about. He talked about inflammation, T-cells, and lymphocytes and then he said if we could somehow track the immune cells’ activity level, we would see it increase in response to cancer.

“I guess I must have said something intelligent,” he remembers, though still with a puzzled look on his face, “because eventually they nodded and we moved on.”

A few weeks after the interview, Bashyam got word that he’d won not just the Hertz, but also the National Science Foundation fellowship and the National Defense Science and Engineering Graduate fellowship.

“The Hertz alone is an amazing accomplishment for any individual and their school,” Luke says, “but then to get the other two as well, which are also very competitive … it’s a testament to Ashvin and how well he’s able to perform under pressure.”

This fall, Bashyam will return to the site of his interview to study medical engineering and medical physics as part of the Harvard-MIT Program in Health Sciences and Technology. He’s looking forward to being in the middle of such a vibrant health-technology environment, where venture capital firms are supporting major innovations coming out of the program.

One day, he hopes to develop an implantable device that circulates around the body and looks for tumor cells. Anyone with any kind of cancer or risk factors could have one, and that, Bashyam says, would completely change the game.

“That’s a career goal,” he says with a grin.

Honors Education: More than Rubrics, Templates, and Outcomes

Editor’s note: The following essay is by Dr. Joan Digby, a professor at Long Island University and Director of the Honors program.  Although we look at basic “outcomes” in trying to evaluate public honors colleges and programs, we agree with Dr. Digby’s criticism of the growing regimentation of higher ed in America and the current over-emphasis on business and bureaucratic terminology.  Our abandonment of numerical rankings reflects our own concern that there are limits to quantifying the real value of higher learning.  This essay is from the website of the National Collegiate Honors Council….

When my goddaughter was eight years old, she was permitted to come from London to New York for a two-week visit. Elanor was precocious and had been asking when she could make this trip from the time she was four. When eight arrived, she was packed and ready. I had never had children, so living with an eight-year-old was an intense experience. What she mainly wanted to do was solve Rubik’s Cube in five minutes flat. When that didn’t happen, she erupted into a volcano of screams and tears. Eventually she figured out how to solve the puzzle and brought her completion time down to about three minutes.

If Ernő Rubik were naming his puzzle, today he would probably go for the pun and call it Rubric’s Cube since rubrics are all people talk about now in education. Remember when the word “paradigm” appeared in every high-toned article? Well, it has been replaced by “rubric.” Here a rubric, there a rubric, everywhere a rubric rubric . . . Old MacDonald had several, and they all add up to little boxes far less colorful and ingenious than Rubik’s Cube.

I’m betting that most of the people who use the word “rubric” know very little about its meaning or history. Rubric means red ochre—red earth—as in Bryce Canyon and Sedona. Red headers were used in medieval manuscripts as section or chapter markers, and you can bet that the Whore of Babylon got herself some fancy rubrics over the years. Through most of its history, the word has been attached to religious texts and liturgy; rubrics were used as direction indicators for conducting divine services. In a system that separates church and state, it’s a wonder that the word has achieved so universal a secular makeover. Now it’s just a fancy word for a scoring grid. Think boxes! Wouldn’t they look sweet colored in red?

For decades I have been involved in university honors education. The essence of the honors approach is, dare I say, teaching “outside the box.” Everyone knows that you can’t put round ideas into square boxes, everyone except the people who do “outcomes assessment,” the pervasive vogue in filling in squares with useless information. Here, for example, is the classic definition of rubric as spelled out by the authors of a terrifying little handbook designed to help people who are still awake at three in the morning looking to speed up grading papers:  “At its most basic, a rubric is a scoring tool that lays out the specific expectations for an assignment” (Stevens and Levi 3). There it is, a “tool” to measure “specific expectations,” and those are precisely what we do not want to elicit from students, especially in honors but to my mind across the university.

My goal is not to score or measure students against preconceived expectations but to encourage the unexpected, the breakthrough response that is utterly new, different, and thus exciting—such as a recent student analysis of Melville’s “Bartleby the Scrivener” in light of the “Occupy Wall Street” movement, an approach that made me rethink the story altogether. The operative word here is “think.” Students attend college, in part, to learn how to think, and we help them engage deeply in “critical thinking.” Wouldn’t it then be hypocritical to take their thoughtful reflections and score them like mindless robots, circling or checking little boxes? Sure it would. That is why, whenever I hear anyone suggest using a “rubric” to grade an essay, I want to let out the bloodcurdling (appropriately red image) scream of an eight-year-old. I’m practicing. I can do it.

What I can’t and won’t do is fill in the little boxes. My field is literature—that is, thought and sensibility expressed in words. My field encourages the subjective, anecdotal, oddly shaped experiences that constitute creative writing. I can tell you a thousand stories about my students, how and what they learn and what will be the outcome of their education. I know their outcome (the plural is ugly) because I write to them for years after they leave school. Many are now my colleagues on campus and my friends all over the world. I can tell you their stories, but I can’t and won’t fill in boxes pretending that these will turn into measurable data. If my colleagues want to do the boxes, I won’t object, but “I’d prefer not to.”

Nor will I read portfolios and brood on what can be gathered about the student writers. English teachers read papers for a living. We assess them, write useful comments, and then return them graded to the students so that they can revise. Doing this is in our blood. For what reason would we dive into a pile of papers on which we are prohibited from writing comments for the sake of producing statistics that don’t even go back to the authors? All writers need suggestions and corrections. If we are not reading papers with the express purpose of providing the students with constructive help, then the act of reading is a waste of time.

I regret to acknowledge that the language and fake measuring tools of the data crunchers have infected even my own department, which now has been coerced into producing lists of goals and objectives with such chalk-grating phrases as “students will use writing as a meaning-making tool” and “generate an interpretation of literature . . . .” Not only the mechanistic language of the document but the fascistic insistence that students “will do” this or that strikes me as an utterly dystopian vision of a university education.

At the very least, English departments everywhere should be the ones to point out that goals and objectives are synonyms and that what the assessment folks really mean are goals and strategies for achieving them.  But “goals and objectives” has become a cant phrase at the core of the outcomes ritual, and I’m afraid there is nothing much we can do to change that.

Whoever came up with the phrase “outcomes assessment” probably has no idea how a liberal education works. We teach, students learn, and, if we are lucky, students reciprocally teach us something in a symbiotic relationship that does not require external administration. It works like this: students attend classes, read, write, engage in labs and other learning activities, pass their courses, even do well, and in time graduate. Faculty enjoy teaching and feel rewarded by the successes of their students. Bingo. That’s it. Nothing more to say or prove. No boxes to fill in. Anyone with an urge to produce data can take attendance at Commencement.

Other horrors have bubbled up to pollute the waters of our Pierian Spring. In addition to rubrics, we now have templates for everything we do. A template is essentially a mold that lets us replicate a structure. In different industries it means a gauge or guide, a horizontal beam functioning to distribute weight, or a wedge used to support a ship’s keel. You can find out more at students’ new best friend, www.dictionary.com.  Yet nowhere in this most accessible word hoard is there a specifically academic meaning for template, a word that must come up at least once in every academic meeting. The template craze implies that everything we do can and must be measured to fit a certain mold.  Not only the word but the increasing use of templates in the university reveal the degree to which academia has become an industrial operation.

In fact, we don’t need templates any more than we need rubrics. They come from the same family of low-level ideas responsible for the mechanical modes of teaching that I reject. If I were a medievalist, I would write an allegorical morality play, an updated version of The Castle of Perseverance, in which virtuous Professors battle vicious Rubrics and Templates, winning the day by driving them off with Open Books—

I concede, maybe Digital Books!

University education, what’s left of it, is at a decisive crossroad that requires us to take a stand against the models that administrations and consultants and accrediting agencies are forcing on us. The liberal arts and sciences are under serious attack, and, if we don’t defend the virtues of imagination and spontaneity in our classes, we will all be teaching from rigid syllabi according to rubrics and templates spelled out week by week as teachers of fifth-grade classes are forced to do.

It so happens that my grandmother, born in 1887, was a fifth-grade teacher. Every Sunday evening she sat at the kitchen table filling out hour-by-hour syllabi for the week to come. I remember a book with little cards, like the library cards we used to tuck into book pockets. No pun intended, but her last name was Tuck. Even then my grandmother resented the mechanical nature of her obligation, calling it with utter contempt “busy work.”

Part of what convinced me to go into college teaching was the desire to avoid busy work and to teach what I was trained to do without people peering over my shoulder or making me fill out needless forms. Throughout my career I have given students general reading lists, telling them that we will get through as many of the works as our discussions allow, eliminate some and add others if our interests take us in different directions. I always say, “There are no literature police to come and check on whether we have read exactly what is printed on this paper.”

But now the literature police have arrived. More and more there is pressure to write a syllabus and stick to it so as to meet absurdly regimented, generally fictitious, and misnamed goals and objectives. This is no way to run a university course and is instead the surest way to drive inspiration out of university teaching and learning.

Tragically, the university is rapidly becoming fifth grade. The terminology that has seeped into university teaching from the lower grades has, to my great horror, also mated with business so that the demons we are now facing believe that we will do as we are told by top-down management so that we attract students, bring in tuition dollars, increase endowments, and pass Go with our regional accreditation bodies. If this sounds like a board game, it is—or perhaps a computer game since everything seems to be played out in distance learning, distance teaching, anything but face-to-face, open-ended, free-form discussion and debate. This pernicious trend has made me one Angry Bird!

Around the campus I see that my young colleagues are running scared. They are afraid that they won’t get tenure and that tenure itself will soon disappear. They are afraid that their small department will be absorbed by another, bigger one. They are afraid that their classes will be cancelled and they will ultimately lose their jobs. We are not in familiar territory because all of the power and control have been misappropriated by business operatives calling for outcomes. We need to remind them that a university—and especially an honors program—is in essence a faculty teaching students. Administrators are hired hands secondary to this endeavor. Moreover, only one outcome is important: students graduate and go into the world to become the next generation of educated people. We need to clear all the rubrics and templates out of the way so that we can teach and they can learn.

To my mind there is nothing but folly in searching for “measurable outcomes”; this is a quest as doomed as searching for the meaning of life. Those who remember Monty Python will get the idea and imagine the Knights Templates dressed up in rubric baldrics, entertaining us with a jolly good “Outcomes Assessment Joust.”

Reference

Stevens, Dannelle D., and Antonia J. Levi. Introduction to Rubrics: An Assessment Tool to Save Grading Time, Convey Effective Feedback and Promote Student Learning. Sterling, VA: Stylus Publishing, 2013.

The author may be contacted at Joan.Digby@liu.edu.

So Just How Big Are Those Honors Classes, Anyway?

In another post, Honors College, Honors Program–What’s the Difference?, we noted among other things that the average honors class size in public honors colleges is about 19 students per section, and in public honors programs it is about 22 students per section. These averages are for honors courses only, not for all courses an honors student might take on the way to graduation.

The averages above include data for the many smaller honors seminars, often interdisciplinary rather than discipline-focused.  The average class size for seminars is in the 14-19 student range.  Please bear in mind that seminars often count for gen ed requirements, and their small size is a big advantage, aside from the advantages of their interdisciplinary approach.

But what about honors class size averages for sections in the major academic disciplines? Partly in preparation for our new book, we took the honors sections from 16 of the public universities we will review in the book and calculated the actual enrollment average in each section. The academic disciplines we included were biology and biochemistry; chemistry; computer science and engineering; economics; English; history; math; physics; political science; and psychology. The honors colleges and programs included three of the largest in the nation, along with several smaller programs.

Given the perilous state of the humanities, it is no surprise that the smallest classes are in English and history, while the largest are in computer science, chemistry, biology, and political science.

Here are the results of our recent analysis:

Biology–63 sections, average of 38.6 students. (Bear in mind that many intro biology classes are not all-honors and are generally much larger, 100 or more, with separate weekly honors discussion sections, each with 10-20 students. The same is true for intro chemistry.)

Chemistry–33 sections, average of 40.3 students.

Computer Science/Computer Engineering–18 sections, average 54.3 students.

Economics–49 sections, average of 31.2 students.  (This is in most cases a significant improvement over enrollment in non-honors class sections.)

English–110 sections, average of 19.4 students.  This does not include many even smaller honors seminars that have a humanities focus.

History–58 sections, average of 16.2 students.  This likewise does not include many even smaller honors seminars with a humanities/history emphasis.

Math–44 sections, average of 24.7 students. Most of the math sections are in calculus, differential equations, linear algebra, topology, and vector analysis.

Physics–30 sections, average of 25.5 students.  Again, many honors programs do not offer honors classes in intro physics, so a student could still have large non-honors classes in that course.

Political Science–19 sections, average 34.4 students. The striking point here is the small number of poli sci sections offered–just over one per program, per semester, on average. The major has become extremely popular, so many sections outside of honors could be quite large.

Psychology–60 sections, average 28.9 students.  Another popular major, but more class availability in general.
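For readers curious how figures like these are derived, the calculation is straightforward: gather the enrollment for each honors section, group by discipline, and average. The sketch below illustrates the arithmetic with hypothetical enrollment numbers; it is not our actual data set or tooling, just a minimal example of the method described above.

```python
# Illustrative sketch with hypothetical data: average honors section
# enrollment per discipline, computed the way described in this post.
from statistics import mean

# Hypothetical per-section enrollments, grouped by discipline
sections = {
    "English": [18, 22, 19, 17],
    "History": [15, 16, 17],
    "Chemistry": [42, 38, 41],
}

# Average enrollment per discipline, rounded to one decimal place
averages = {
    discipline: round(mean(enrollments), 1)
    for discipline, enrollments in sections.items()
}

for discipline in sorted(averages):
    print(f"{discipline}: {len(sections[discipline])} sections, "
          f"average {averages[discipline]} students")
```

With the hypothetical numbers above, English averages 19.0 students across 4 sections and Chemistry 40.3 across 3, mirroring the format of the results reported in this post.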