Increasingly, it seems, the first fact of academia is this: higher education is a hyper-competitive industry. Scientists compete for federal research dollars, humanities professors compete for prestige (by publishing books and essays), departments on the same campus compete for dwindling resources, admissions offices compete for applications, and everyone strives to beat peer institutions in the U.S. News & World Report rankings. The system has helped make research universities in the United States the best in the world (at research), their scientific side the great success story of the twentieth century. When it comes to pursuing undergraduate applicants, however, competition has had an entirely different result. Instead of pushing the campus toward more rigor and excellence, competition has dragged it into more mediocrity and complacency.
The reason is simple. Instead of having to impress federal agencies and peer reviewers with the scientific and scholarly value of their products, campuses have had to impress eighteen-year-olds with the social and experiential value of their products. Colleges and universities need to expand the applicant pool, and so they must market the things that excite teenagers—not challenging teachers and high standards, but a state-of-the-art gym, lots of student choice in course work, tolerance and diversity, and loads of fun. If a school has a low average GPA, if it asks freshmen to write too many papers, if it has too many U.S. history, Western civ, and foreign language requirements…word gets around and high school seniors don’t apply.
This is one reason why empirical instruments such as the National Survey of Student Engagement (NSSE) keep delivering bad news. The NSSE is an undergraduate survey instrument widely used by colleges and universities; since its inception in 1999, more than 1,400 institutions in the U.S. and Canada have used it to collect information from students about their interests, course work, and study and leisure habits. First-year students and seniors complete a long questionnaire, and campus officials interpret the results in a process of self-evaluation and longitudinal study. The instrument provides officials with revealing evidence of trends inside and outside the classroom, pinpointing strengths and weaknesses and grounding academic policies in solid, reliable data.
When students answer NSSE questions about their behaviors and achievements over the previous year, abysmal numbers come up. Consider the following national tallies on the 2009 version of the survey:
- Sixty-one percent of seniors said they spend fifteen hours or less per week preparing for class; at the same time, 80 percent said they spend “significant amounts of time studying”—apparently, for many college seniors, two hours of homework a day counts as “significant.”
- Twenty-eight percent of seniors discussed ideas with teachers outside of class “Often” or “Very often”; 29 percent “Never” had such discussions.
- Twenty percent of seniors replied “None” and 53 percent answered “1–4” when asked how many books they read on their own for enjoyment or enrichment in the previous year.1
One of the worst findings of previous surveys, including the Beginning College Survey of Student Engagement (BCSSE), is summarized in a 2007 article in Peer Review by George D. Kuh, director of the Center for Postsecondary Research at Indiana University, which hosts NSSE. “BCSSE and NSSE data show that first-year students expect to do more during the first year of college than they actually do,” Kuh writes. “They study two to six hours less per week than they thought they would when starting college. Even so, nine of ten first-year students expected to earn grades of B or better while spending only about half the amount of time preparing for class that faculty say is needed to do well.”2
The only way to explain the drop-off in the first year is to point a finger directly at the institutions those students attend. The teachers, the syllabi, the culture…they bring expectations down, and the numbers for seniors listed above show that more semesters on campus don’t bring them back up.
The results of NSSE are quoted repeatedly in Academically Adrift: Limited Learning on College Campuses, a research study that has become the most talked-about higher education book of the year. The authors, sociologists Richard Arum and Josipa Roksa, add to the existing data the results of their own attempt to measure the learning that actually takes place from year to year, using Collegiate Learning Assessment (CLA) scores at twenty-four schools of different types and regions. The results are devastating to the image higher education projects of itself as a place of challenge, rigor, discovery, and development. Arum and Roksa also explode some of the favorite theories of progressivist educators, particularly the value of social integration and collaborative work.
So how much are students actually learning in contemporary higher education? Arum and Roksa’s first finding: “The answer for many undergraduates, we have concluded, is not much.”
The students in Arum and Roksa’s sample took the CLA as freshmen in fall 2005 and again as sophomores in spring 2007, nearly halfway through their college careers. By the second testing, most of them had fulfilled several general education requirements, including freshman composition, and they’d narrowed their focus to a major. But those three-plus semesters, it turned out, had a disappointing impact. By the authors’ calculation, students improved by only 0.18 standard deviation, equal to a seven-percentile-point gain. In other words, first-year students in 2005 who scored in the fiftieth percentile would rise to the fifty-seventh percentile at the end of sophomore year (relative to incoming freshmen). They may have taken more than a dozen courses and paid more than $50,000 for the schooling, but, in the authors’ judgment, it produced “a barely noticeable impact on students’ skills in critical thinking, complex reasoning, and writing.”
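A quick check of that conversion, on the assumption (implicit whenever a standard-deviation gain is translated into percentiles, though not spelled out in the passage) that CLA scores are roughly normally distributed: if Φ denotes the standard normal cumulative distribution function, then

\[
\Phi(0.18) \approx 0.5714,
\]

so a student who starts at the median of the incoming-freshman distribution, where Φ(0) = 0.50, ends up at roughly the 57th percentile of that same distribution after a gain of 0.18 standard deviation, which matches the seven-point rise the authors report.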
Critical thinking, complex reasoning, and writing—generic intellectual abilities that cross disciplines—are the kinds of skills the CLA measures. The CLA doesn’t pick up domain knowledge (for instance, the history a student retains after taking a history class), an exclusion that leads some to criticize the instrument as partial and inaccurate. Arum and Roksa admit the limitation but don’t consider it a disqualification of CLA scores. Colleges can, after all, control for differences across majors and devise measures of specialized knowledge to complement CLA measures of general skills.
It’s unlikely, however, that knowledge assessments would yield any better results than the CLA. The reason is easy to locate in the second major finding in Academically Adrift. Put simply, students don’t work hard. Here are the numbers:
- On average, students study just twelve hours per week (one hour, forty-two minutes per day) and 37 percent spend less than five hours per week preparing for class.
- “Fifty percent of students in [the] sample reported that they had not taken a single course during the prior semester that required more than twenty pages of writing, and one-third had not taken a course that required even forty pages of reading per week.”
- In spite of the low workloads, “85 percent of students have achieved a B-minus grade point average or higher, and 55 percent have attained a B-plus grade point average or higher.”
No wonder work expectations decline after students spend a few months on campus. If they don’t work hard, they may not get A grades, but they won’t get Cs and Ds, either.
Arum and Roksa point briefly to one cause of the ease of student life: the research professor who is too busy with projects to devote much attention to undergraduate learning. But they quickly move to a more pervasive cause, one that universities don’t conceal but openly trumpet (even the top research schools give mere lip service to instruction): the social factor. Whether because they aim to increase applications by emphasizing the social atmosphere of their respective campuses, or because of a genuine belief that college really should be about socialization as much as academics, officials have increasingly presented college as a peer-oriented time and place.
Arum and Roksa cite education researchers who say that “the students’ peer group is the single most potent source of influence on growth and development during the undergraduate years.” Administrators encourage club memberships, on-campus employment, dormitory living, volunteering, and group project participation in an attempt to foster social skills and awareness. I remember that, many years ago at my home institution, a campus life dean gave a speech to just-arrived students telling them not to focus exclusively on coursework but to regard college as a fuller life experience, a place to grow outside academics. Did she really believe it, or was she just pandering to eighteen-year-olds? Either way, the last thing these students—and our university—needed was a dean applauding anti-academic impulses that were, no doubt, already at work.
But, of course, the campus life outlook doesn’t regard social life as anti-academic. It’s just a different kind of learning. Here is where Arum and Roksa intervene. First, they cite studies in which students do, indeed, put social learning in competition with academic learning. In one study, they note, “70 percent of students reported that social learning was more important than academics.” A survey of University of California students pegged an average week at thirteen hours of studying, twelve hours with friends, eleven hours of “fun” computer time, six hours of TV, five on hobbies, and three on other entertainments. For these students, academic work fills just a fraction of the schedule; it is not the main focus.
Second, Arum and Roksa ponder how these social activities connect to academic learning. While they find some benefit in on-campus jobs as long as they don’t exceed ten hours per week, the other forms of socialization range from neutral to damaging. Volunteering, for instance, has “a negative relationship to learning”—that is, more volunteering meant lower academic achievement. Participating in student clubs, meanwhile, is not related to learning at all. And when students engage with their peers, either by studying with them or by joining fraternities and sororities, negative consequences for learning follow. Social integration activities, then, either have no measurable effect on learning or coincide with lower learning scores.
The finding regarding students studying together is bound to disappoint many cutting-edge educators and learning theorists, for it undermines one of the cardinal practices of twenty-first-century classrooms—collaborative learning. It has become attached to other trendy conceptions such as “twenty-first-century skills” (participatory and interactive abilities are crucial in an era of “collective intelligence”) and workplace readiness (businesses operate on group models, so classrooms should as well). Perhaps, too, educators like collaborative work because it appeals to the collectivist mentality. The loner student reading Nietzsche and Dostoevsky (or Marx, for that matter) strikes them as a lost youth, not a good learner.
Arum and Roksa’s data refute these assumptions.
“There is a positive association between learning and time spent studying alone, but a negative association between learning and time spent studying with peers,” the authors find. Specifically, the more time students spent studying alone, the more gains they showed on the CLA. Faculty members may not create the proper framework for effective collaboration, and the “free rider” problem (one member does the work, the others skate along) is widespread, but whatever the reason, the fact remains that collaborative learning isn’t working.
These blunt and harsh judgments are bound to provoke defenders and promoters of higher education as it is. At the same time, observers of higher education who have lamented and decried these conditions for years draw a rueful satisfaction from Academically Adrift. None of the latter is pleased that the book is as true as it is, though. They take satisfaction only in the way Arum and Roksa’s conclusions expose hype, indict bad practices, and explode pretense. The authors have penetrated the rosy veil of higher education with sound data, a necessary step toward the more difficult task of actually reforming the system. Arum and Roksa maintain that “[i]nstitutions need to develop a culture of learning if undergraduate education is to be improved,” and the materials they present remove one source of resistance to the effort—the practice of denial.
But changing a culture requires more than data. In this case, campus leaders need to change, first, the consumer attitudes of students and parents; second, the service-provider attitudes of administrators; and third, the research-centric attitudes of professors.
Who can do it? In the case of professors, reformers can’t expect much help, because rigorous standards and challenging syllabi only make the lives of teachers harder. And administrators will find scrupulous instruction expensive. To take one example, Arum and Roksa note that employers complain about the poor writing skills of recent graduates, a complaint that has reached the level of a mandate for higher education. Improving student writing, however, entails a massive investment of personnel and resources that few institutions can make. Writing classes can’t number five hundred students, and the time it takes to grade one fifteen-page paper equals the time it takes to score multiple-choice exams for an entire class. One tutorial in which student and teacher pore over a rough draft, word by word and comma by comma, looks like a ridiculously inefficient delivery of expertise. Don’t expect many schools to double the writing requirements and design smaller classes for them.
Ultimately, though, Arum and Roksa conclude that the resistance to reform isn’t about money so much as it is about priorities. Everybody involved in the enterprise—students, parents, teachers, administrators—places undergraduate learning low on the ladder of outcomes. Students want grades, not knowledge and skills. Parents want degrees and jobs for their kids, not liberal arts formation. Administrators want revenue, and faculty want more time for themselves and their careers. The findings of Academically Adrift lay bare those disinclinations. In doing so, the authors have advanced the discussion, changing the terms from a simple debate over whether the system of higher education is in good shape or bad to a policy challenge: what is to be done?
1 National Survey of Student Engagement, Assessment for Improvement: Tracking Student Engagement Over Time, NSSE Annual Results 2009 (Bloomington, IN: Indiana University Center for Postsecondary Research, 2009), http://nsse.iub.edu/NSSE_2009_Results/.
2 George D. Kuh, “What Student Engagement Data Tell Us about College Readiness,” Peer Review 9, no. 1 (Winter 2007): 6, http://nsse.iub.edu/uploads/PRWI07_Kuh.pdf.