If the quality of America’s colleges could be rated on a scale from 1.0 to 5.5, what score would they receive? According to Columbia’s Teachers College, the rating would hover somewhere around 3.0: not marvelous, but not dismal either.
Recently, the Teachers College released a pilot study titled College Educational Quality (CEQ) that purports to assess academic rigor, workload, cognitive complexity, and teaching quality at one public and one private research institution. (The authors do not disclose the names of the institutions.) For the study, the authors conducted student surveys, syllabus analyses, class observations, and assessments of student work.
Corbin Campbell, the study’s author, considers the two universities in the report to be representative of the state of higher education more generally. (She mentioned, however, that the two universities may not provide a thorough representation, and that the CEQ will later look at 10 or more institutions.) In a conversation with The Chronicle of Higher Education, she described the academic rigor at universities as “lukewarm.” According to Campbell, students are getting a better and more demanding education than less optimistic studies like Academically Adrift might suggest, but much work remains.
“There are some strong educational processes happening at these institutions,” Campbell said, but she added that universities are “not maximizing their educational capacity.”
While the authors of Academically Adrift believe that students learn next to nothing in college, academics and politicians boast of America’s world-leading education. Campbell believes that the truth lies somewhere in the middle.
The report found that most (82 percent) of students attended class, a finding Campbell describes as “fairly good news.” However, other findings, such as the number of questions students asked in class, hint at mediocrity. (The study did not disclose whether the “quality score” for the number of questions asked in class was calculated differently for lecture-based and discussion-based courses, nor did it identify how many questions per hour of class time it considered appropriate.)
Campbell and a team of 10 graduate students used three tools to assess academic rigor: first, they used a revised version of Benjamin Bloom’s taxonomy of educational objectives; second, they judged the quantity and complexity of the work assigned; third, they examined the level of expectations set for students’ preparation and participation in class.
Using these categories, they found that academic rigor scored 3.33 on a 5.5-point scale, and teaching quality scored 2.97 out of 5.
Based on the material provided in the report, it is difficult to determine what the authors of the study perceive as “teaching quality.” Appropriate modalities of teaching may vary depending on discipline, course level, and class size, and it seems overreaching to assess the general aptitude of a school without first laying out definitive criteria.
Indeed, the report appears to be intentionally vague on several of these matters, and it is particularly puzzling that the authors do not disclose the names of the schools involved in the study. If the purpose of the report is to promote greater public accountability for institutions of higher education, then why not draw attention to the specific institutions and areas that require addressing?
The Cornell Sun reported that Cornell University professors and officials are highly skeptical of the CEQ project. Susan Murphy, the vice president of Student and Academic Services at Cornell, said, “[I’m] not sure [the study] is scalable to any helpful degree.”
In addition, the Cornell Sun cited the opinions of many other professors who are critical of the study and fear that it provides no assessment of response bias and no measure of whether students can accomplish the learning outcomes described in the course descriptions. Donald Viands, a professor of plant breeding and genetics, was concerned that the investigation was too short for any meaningful conclusions to be drawn.
“The [study’s] rating is very subjective,” he said. “Many observations, not just one week, are needed in a course.”
Rather than publishing descriptions of the course material and requirements and showing how these fail to measure up to a clear standard, the Columbia Teachers College simply provides a list of numerical rankings within categories such as “quality,” “quantity,” and “subject matter.” Because the categories have such broad headings, the reader has little notion of how these rankings correspond to particular subjects, class levels, or teaching modes.
For NAS’s 2013 report Recasting History, researchers acquired and read every book and reading assigned in the syllabi of every course that fulfilled a state requirement for U.S. history. They classified each one according to eleven defined categories of U.S. history (e.g., diplomatic, social, military, and technological).
In contrast, the Columbia Teachers College simply offers data on whether students asked questions during class, whether classes lasted longer than an hour, and whether class sizes were large or small. (The study does not define “small” or “large,” but cites the U.S. News and World Report’s formula, which defines “small” as 1-19 students and “large” as 50 or more. Campbell sees the U.S. News and World Report’s formula as incomplete.) Bloom’s taxonomy frames “quality” as how well an educator has managed to engage six levels of cognitive learning: knowledge, comprehension, application, analysis, synthesis, and evaluation. Though Bloom’s taxonomy may offer some insight into student performance, a more reliable report would include detailed documentation of coursework.
This research project aims to document academic rigor, teaching quality, and learning goals, but if the researchers do not have an objective standard for judging these qualities, how can we know that they are not simply infusing subjective opinions and agendas into the analysis?
If the authors of the study wish to provide an accurate snapshot of higher education, much more explicitly defined criteria are required. Furthermore, the authors would need to study far more colleges than the two mysterious, unnamed institutions featured in the CEQ.
It is unclear whether the report intends to assess all of the courses at the two schools or simply evaluate a sample of courses. The size of the sample is never identified, and we do not know whether the institutions have thousands of students or mere hundreds. For all of the information offered by the report, the reader may as well assume that the authors have examined Hogwarts School of Witchcraft and Wizardry or a make-believe college in Timbuktu.
Though Campbell and her team make a noble attempt to provide an assessment of educational progress that measures more than just job and salary outcomes, the report falls short of providing any definitive data.
The Columbia Teachers College aims to complete the project in 2016, and perhaps by then the purpose of the study will become clear. Until then, why should we take the authors’ word for it?
Image: Publicdomainpictures, Public Domain