- December 19, 2008
There has been a good deal of discussion recently about the value of the college degree. Much of it centers now on Charles Murray, who has called the B.A. degree a “meaningless credential.” Murray’s may be the sharpest and most categorical criticism to date, but doubts about the value of the college degree, and about the B.A. degree in particular, have a long history. Many conservative and libertarian writers have long thought that too many people are going to college, and that college is oversold. Murray and many others like him believe that most high school graduates are incapable of benefiting from college, and that those who can benefit from it typically don’t need the B.A. credential and gain virtually nothing from obtaining it. As Murray puts it:
For a few occupations, a college degree still certifies a qualification. For example, employers appropriately treat a bachelor's degree in engineering as a requirement for hiring engineers. But a bachelor's degree in a field such as sociology, psychology, economics, history or literature certifies nothing. It is a screening device for employers. The college you got into says a lot about your ability, and that you stuck it out for four years says something about your perseverance. But the degree itself does not qualify the graduate for anything. There are better, faster and more efficient ways for young people to acquire credentials to provide to employers.
As Murray points out, employers do use the B.A. as an indispensable screening device. The controversy is over whether the screening device works, and if it does, why it does. Is it a useful screening device because holders of the B.A. degree have valuable skills that employers need -- skills they acquired, at least in part, by going to college? Is the B.A. a useful qualification for employers only because it signifies a certain level of intellectual ability (thereby enabling the employer to evade what is otherwise often unlawful standardized testing for I.Q. under federal law)? Or is it because the individual has demonstrated the perseverance required to put up with a meaningless exercise for four years?
The assertion that the B.A. degree “certifies nothing” and that the degree itself “does not qualify the graduate for anything” is a very large claim to make, particularly because to date little or no empirical evidence has been put forward to support it. Furthermore, there is a considerable body of evidence against it. CASNET, the Listserv for the National Association of Scholars that I moderate, has covered some of this evidence, but to date the contributions and postings have been scattered and unorganized. The aim of this posting is to pull together the loose threads of several important recent discussions of this matter.
Three questions to be answered
A number of questions are involved in the discussion.
The first involves comparing those who go to college with those who don’t while controlling for all the relevant confounding variables: intelligence, family income, the quality of pre-college education, etc. If college graduates outperform non-graduates on important outcome variables after controlling for confounding variables that might also account for these outcomes, then one can conclude that the college degree has value.
The second question concerns the non-economic or non-financial value of higher education to the individual and society. Do college graduates make better citizens, are they more active civically and politically, are they happier and healthier? And so on.
The third question is, assuming a positive finding to either or both of the first two questions: Why does a college degree have value? Is it merely because of the degree’s credentialing function in the marketplace? That is, does the college degree have value in the marketplace simply because employers use it as a screening device, even though in itself the degree “certifies nothing” and “does not qualify the graduate for anything,” as Murray has put it?
It turns out that there is empirical evidence bearing on all three questions. In this posting I will marshal some of this evidence. As we will see, there is a significant body of evidence showing that the B.A. degree has a non-financial and non-economic payoff to the individual and to society, and also that the B.A. degree has had -- at least to date -- considerable economic value to individuals. There is also an emerging body of evidence that a college education leading to a B.A. degree does have value over and above the credentialing factor. That is, it is likely that the college degree has income-earning value to individuals because a college education has educational value -- i.e., because it develops significant cognitive skills and abilities.
The non-monetary benefits of higher education for individuals and society
The College Board has usefully collated much of the evidence for the value of a college education in a publication entitled “Education Pays: The Benefits of Higher Education for Individuals and Society.” A previous posting to this Listserv, which is still available in the CASNET online archives, listed a number of benefits of a college education to the individual and society taken from that publication and others. As was pointed out there, the monetary benefits of a college education include the fact that there is a positive correlation between higher levels of education and higher earnings for all racial/ethnic groups and for both men and women. The income gap between high school graduates and college graduates has increased significantly over time. The earnings differential is sufficiently great that the cost of the college degree (even as high as that has become) amortizes itself in a relatively short period of time.
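The amortization point is simple arithmetic: divide the total cost of the degree by the annual earnings premium it brings. The sketch below uses invented round numbers purely for illustration; the College Board report supplies the actual figures.

```python
# Hedged sketch: payback ("amortization") period for a degree.
# The dollar amounts below are hypothetical, chosen only to show the arithmetic.
def payback_years(degree_cost, annual_earnings_premium):
    """Years until the cumulative earnings premium covers the degree's cost."""
    return degree_cost / annual_earnings_premium

# Illustrative only: $100,000 total cost, $15,000/year earnings premium.
print(payback_years(100_000, 15_000))  # about 6.7 years
```

Even on deliberately conservative assumptions like these, the payback period is well short of a working lifetime, which is the substance of the College Board's point.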
There are significant societal benefits to a college education as well. Higher levels of education correspond to lower unemployment and poverty rates. Consequently, college educated adults are less likely to depend on social safety-net programs and are more likely to contribute to tax revenues. College graduates have lower smoking rates and healthier lifestyles than those who do not graduate from college. Higher levels of education are also correlated with civic participation, including volunteer work, voting, and blood donation.
The factor of I.Q.
To accurately measure the impact of college, it would be necessary to control for all the relevant variables that might distinguish college graduates from non-college graduates. For example, college graduates are known to come from higher socioeconomic backgrounds and to score higher on intelligence tests than those who do not graduate from college. Hence simple comparisons of graduates and non-graduates on variables like income, health, civic participation etc. are not sufficient for our purposes, since they do not show that the outcome differences are due to the college experience itself.
I have not had the time to examine all the studies cited in the College Board report. However, it is likely that most -- probably all -- of the studies cited by the College Board lacked the desired controls. I do know that the major on-going database studying the impact of the college experience -- the ACE-sponsored project of the Higher Education Research Institute at UCLA -- does not compare college graduates with non-graduates. (Alexander Astin’s What Matters in College?, which is based on the HERI database, is the work that Malcolm Sherman and I used to refute the claims in the Grutter and Gratz litigation that campus racial diversity has educational benefits.) Studies like Astin’s involve pre- and post-test comparisons of college students as entering freshmen and graduating seniors. Such studies can measure the impact of college over time on students, and can invite inferences about the value of that experience. But the methodology cannot show that individuals would not have experienced the same changes if they had not gone to college. Hence this methodology cannot show with any certainty that the educational outcomes it measures are due to the college experience itself.
This does not mean that such studies are worthless. Multivariate and other kinds of statistical tests that attempt to measure the impact of the college experience with pre- and post-testing of college students are in principle worth doing, even when they fail to include controlled comparisons of graduates and non-graduates. However, they are less valuable than studies that are able to control for relevant variables. It is of particular interest, therefore, that data are available that support comparisons of outcomes for identical twins who vary in the amount of education they have received, because I.Q. is obviously a relevant variable, and because the correlation between the I.Q.s of identical twins is high.
There are at least two identical twin studies that looked at this question. (Both of the studies I cite below were mentioned in a CASNET posting on November 23.)
One, an NBER paper by Orley Ashenfelter and Cecilia Rouse, used data on approximately 700 identical twins. The authors estimated an average return to schooling in general (not just of higher education) of 9 percent for identical twins. Interestingly, in view of Murray’s recent claims, the estimated returns in this study appeared to be slightly higher for less able individuals as measured by I.Q. As Ashenfelter and Rouse point out in the introduction to their paper, the role of genetic and family background factors has figured prominently in public policy discussions about the effectiveness of investments in education -- and they specifically cite Herrnstein and Murray's The Bell Curve in this connection. According to the authors, their own findings stand in sharp contrast to the recent claim that “genetic factors predetermine education and income" and that "such differences are not amenable to alteration by public or private choices."
An identical twin study from Australia also found a significant rate of return when studying twin-pairs who had obtained different amounts of schooling. (This study, like the preceding one, looked at education in general, rather than at higher education specifically.) This study found little evidence of ability bias in the students’ rate of return in earnings to schooling in Australia. Overall, the findings were that the Australian rate of return to education, when corrected for a slight ability bias, is around 10%, which the authors claim is similar to the rate of return for education in Britain, Canada, the Netherlands, Norway, and the United States.
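The logic of both twin studies can be sketched in a few lines: differencing earnings and schooling within identical-twin pairs cancels out shared genetic and family factors, and the slope of the remaining relationship estimates the return to a year of schooling. The simulation below is a toy illustration of that idea, not the authors' actual method; the 9 percent "true" return and every other number are assumed for the example.

```python
import random

random.seed(0)

# Toy illustration of the twin-study logic: differencing within identical-twin
# pairs removes shared ability and family background, which are unobserved.
# TRUE_RETURN is an assumption for the simulation, echoing the ~9% estimate.
TRUE_RETURN = 0.09

pairs = []
for _ in range(700):  # roughly the sample size of the NBER study
    ability = random.gauss(0, 0.5)            # shared by both twins, unobserved
    s1 = random.randint(10, 18)               # years of schooling, twin 1
    s2 = random.randint(10, 18)               # years of schooling, twin 2
    y1 = 1.0 + ability + TRUE_RETURN * s1 + random.gauss(0, 0.05)  # log earnings
    y2 = 1.0 + ability + TRUE_RETURN * s2 + random.gauss(0, 0.05)
    pairs.append((s1 - s2, y1 - y2))          # ability cancels in the difference

# Least-squares slope through the origin on the within-pair differences.
num = sum(ds * dy for ds, dy in pairs)
den = sum(ds * ds for ds, dy in pairs)
print(f"estimated return to a year of schooling: {num / den:.3f}")  # close to 0.09
```

The point of the exercise is that the within-pair estimate recovers the assumed return even though "ability" was never measured, which is exactly why twin designs are valuable for this question.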
As noted, neither of these twin studies compared the rates of return (in earnings) of twin-pairs who differed in the amount of higher education they had received. What we need is data on the rate of return for higher education specifically. Fortunately, there is a study that found such data. Doubts about Murray’s claim that higher education is beneficial for only a minority of students who currently attend college are greatly strengthened by this study. Indeed, the study’s findings are particularly striking, because they suggest that the rate of return on higher education is actually higher for less intellectually able individuals -- the same result that Ashenfelter and Rouse reached independently for education generally.
In an op-ed column in WSJ’s Opinion Journal dated 17 Jan 2007 (“What's Wrong With Vocational School?: Too many Americans are going to college”), Murray said:
Traditionally and properly understood, a four-year college education teaches advanced analytic skills and information at a level that exceeds the intellectual capacity of most people.
There is no magic point at which a genuine college-level education becomes an option, but anything below an IQ of 110 is problematic. If you want to do well, you should have an IQ of 115 or higher. Put another way, it makes sense for only about 15% of the population, 25% if one stretches it, to get a college education. And yet more than 45% of recent high school graduates enroll in four-year colleges. Adjust that percentage to account for high-school dropouts, and more than 40% of all persons in their late teens are trying to go to a four-year college -- enough people to absorb everyone down through an IQ of 104.
In November of this year, Jason Malloy, the moderator of the site GeneExpression, responded to Murray's claim that college is useless for most students who now attend it. Malloy's blog post, “College is Still the Best Pay-off,” received two days’ worth of attention from several of the contributors to PhiBetaCons at National Review Online (which is how it came to my attention).
Malloy noted that all the relevant variables for testing Murray's claim that attending college only makes sense for about 15% of the population (25% if one stretches it) are included in the General Social Survey. Courtesy of Malloy, I posted his piece in its entirety to this list yesterday. For our purposes, Malloy’s central finding was this:
People with average and below average IQs are getting just as much of a financial return out of their 4-year degree as those above the 85th percentile. This suggests many more people of marginal ability should be seeking a Bachelor's degree, not less. …Fifth, and most directly related to Murray's argument, people with 4-year degrees earn much more than people with 2-year degrees and trade jobs at every level of IQ. Average IQ people will get a much, much larger monetary reward from completing a 4 year school than a 2 year school. So the BA is far from being a “meaningless credential" when it comes to "chances of making a good living.”
As Malloy points out, a study like his cannot determine whether the greater earnings payoff is due to important skills that college graduates acquire that non-graduates do not, or whether graduates earn more money simply because they have a credential that has marketplace value. (Employers might, for example, use the college degree as a screening device that avoids standardized testing that might be unlawful under federal employment laws.) To begin to tease out a possible skills effect from a credentialing effect, one must be able to test directly for skills. It is to that important question that we now turn.
The Collegiate Learning Assessment (CLA)
CASNET has given considerable attention for some time now to a new and important testing and assessment initiative by the Council for Aid to Education (CAE) called the Collegiate Learning Assessment (CLA). The CASNET archives can be searched for some of the recent postings about the CLA. Several useful CASNET postings that are still in the archives are here, here, and here.
The CLA aims to measure the following skills: critical thinking, analytic reasoning, problem solving, and written communication. The CLA is not a multiple-choice exam: it consists entirely of performance tasks and analytic writing tasks. As CAE puts it (“Frequently Asked Technical Questions”): “All CLA tasks evaluate students’ ability to articulate complex ideas, examine claims and evidence, support ideas with relevant reasons and examples, sustain a coherent discussion, and use standard written English.” The performance task asks students to engage in activities like preparing a memo or a policy recommendation that requires reviewing and evaluating several documents. There are two types of analytic writing tasks in the CLA: making an argument and critiquing an argument. In the first, students are asked to explain why they would agree or disagree with a statement. In the latter, students are asked to describe shortcomings in an argument presented by someone else.
The just-cited document includes a discussion of the reliability and validity of the test (construct validity, face validity, etc.). Another CAE document that covers technical issues is “The Collegiate Learning Assessment: Facts and Fantasies.” You can also test the face validity of the CLA yourself by taking a retired CLA performance task (enter the session number 5115-531929 when prompted).
How much does college attendance affect CLA scores? When the comparison is simply between the entering freshman class and the senior class four years later, the gains are very large. According to an April draft of the CAE document “Facts and Fantasies,” “[T]he average improvement in scores between entering freshmen and graduating seniors on the CLA is more than one standard deviation. By any standard, this is a very large effect size, and one that suggests the CLA is sensitive to the effects of a college education.” A pre- and post-test improvement of a whole standard deviation (and more) on a performance test measuring general intellectual and written skills would be more than “very large”: it would be huge. This huge effect size may be due to the fact that in the CAE analysis the comparison is between the freshman class and the senior class, rather than between the scores of the students who take the test in both the freshman and senior years. The latter kind of study (a longitudinal study), unlike the former, can control for the effects of attrition in college. (For example, if there is a tendency for weaker students to drop out between the freshman and senior years, the senior class will tend to be stronger academically than the freshman class quite independently of the college experience itself.) In any case, a more recent study of the CLA -- a longitudinal study, conducted by the Social Science Research Council (SSRC), that so far covers only the first two years of college -- has found more modest gains.
The SSRC study, “Learning to Reason and Communicate in College: Initial Report of Findings from the CLA Longitudinal Study,” which was co-authored by Richard Arum, Josipa Roksa, and Melissa Velez, found that students’ average spring 2007 CLA scores were 0.18 standard deviations higher than their original fall 2005 performance. This is a relatively modest but respectable gain. If the growth trend of the first two years continues over the next two, the gain between the entering fall 2005 scores and the graduating spring 2009 scores will be 0.36 standard deviations. Most analysts and observers would probably deem gains of this magnitude to be more than just respectable.
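For readers unused to effect sizes: a gain "in standard deviations" is simply the difference between post-test and pre-test means divided by the standard deviation of the scores, and the 0.36 projection is a straight linear extrapolation of the two-year trend. A minimal sketch, with a hypothetical 100-point score scale assumed purely for illustration:

```python
# An effect size is the mean gain divided by the score SD.
def effect_size(mean_pre, mean_post, sd):
    return (mean_post - mean_pre) / sd

# Hypothetical scale: an 18-point gain where the score SD is 100 points
# corresponds to the reported 0.18 SD gain.
print(effect_size(1000, 1018, 100))  # 0.18

two_year_gain = 0.18                     # reported fall 2005 -> spring 2007 gain
projected_four_year = two_year_gain * 2  # assumes the trend simply continues
print(projected_four_year)               # 0.36
```

The extrapolation, of course, assumes linear growth over all four years, which the first two years of data cannot by themselves establish.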
These findings do not answer the question whether the well-known monetary gains from attending college are due to the credentialing factor or to real gains in important skills. To answer that question, one would have to extend the study. One might, for example, conduct a longitudinal study comparing the labor market outcomes of college graduates with their CLA test scores. If the study found that college graduates who did better on the CLA obtained higher earnings in the market place, that would be evidence that reasoning and communication skills learned in college contribute to higher earnings. A test of this sort using the CLA has not been conducted yet, but it certainly could be. In any case, the mere fact that the CLA has demonstrated gains in college in a very important skill set does begin to shift the burden of proof in the argument against those who have simply asserted that a college degree like the B.A. is a meaningless credential.
There are, in addition, a couple of specific considerations that suggest that the skill set measured by the CLA is a critical factor in the higher income earnings of college graduates.
First, there is the matter of the face validity of the test. As one of my previous postings to this Listserv pointed out, most employers do not look for employees with narrow skill sets. They look for employees who can demonstrate a general set of intellectual skills, creativity, innovativeness, and imagination. The skill set that is the focus of the CLA -- critical thinking, analytic reasoning, problem solving, and written communication -- comes particularly close to the skills identified in a recent AACU study that examined employers’ views on learning outcomes and assessment approaches. Since the skills measured by the CLA include the kinds of skills that employers say they want, it is not unreasonable to think that they are in fact rewarded in the marketplace.
It is possible that those who do not go to college also pick up these skills; and if they do, then the foregoing research would still have failed to eliminate credentialing as the main factor accounting for the greater lifetime earnings of college graduates. One finding in the two-year SSRC study, however, does indicate that college graduates have an advantage over non-graduates in developing these skills.
The basis for this claim is that the SSRC study found that gains on the CLA are not constant across disciplines and majors, and that the gains are actually lower in the fields or majors that are specifically targeted to the kinds of jobs that graduates might find themselves doing in the business world. The longitudinal study of the first two years found that students majoring in “traditional” disciplines like math, science, the social sciences, and the humanities outperformed students who majored in fields like business, education, and social work. Details about these findings are reported in Table 3C on p. 33 of the Report, and a graph presenting the results is presented in Figure 5 on p. 11. These findings were statistically significant. The finding that students in the social sciences and the humanities outperformed students in business parallels the likely related finding that I reached and reported to this Listserv some weeks ago: that the B.A. outperforms the B.B.A. degree in the business world, as measured by lifetime executive performance.
The SSRC study also found that in the first two years of its longitudinal study social sciences and humanities majors slightly outperformed math, science, and engineering majors, once institutional variables were taken into account. In the Full Model column of Table 3C (p. 33), for example, social sciences and humanities have a value of 46.984 and science and math have a value of 52.990, but when fixed effects dummy variables are introduced to take into account certain institutional characteristics, the values become 32.716 and 30.540, respectively. That is, when the institutional fixed effects are introduced, the social sciences and humanities are found to very slightly outperform science and math majors in improvement on the CLA.
It would be interesting to know the institutional variables that were used, and I asked Richard Arum, one of the co-authors of the report, about this. He explained the methodology as follows (email communication):
In terms of institutional effects, we do have indicators of a broad range of measures on the institutions in the sample, such as selectivity, cost, public/private sector, etc. We are reluctant to push the results on any of these specific measures, however, because we only have a non-random sample of 24 institutions. With only 24 cases, we are cautious not to overstate any of these specific findings. Instead, we rely on a statistical approach called "fixed effects". What this does is account for all the factors associated with statistical differences across institutions, without having to specify them individually. That is, the methods account for the extent to which students in school A on average (and net of other factors) do better in terms of growth than students in other schools. The interpretation for the change in field of study effects, once one adds institutional fixed effects, is that part of the earlier benefits of math and science coursework were due to the fact that students taking these courses tend to be disproportionately concentrated in schools where for a variety of reasons students will show more growth on the CLA than average. Part of the earlier math and science effects thus were due actually to the schools that students attended, not the fields of study per se.
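Arum's explanation can be illustrated with a toy simulation: demeaning scores within each school (which is equivalent to adding a dummy variable for every school) strips out whatever makes some schools' students grow more, and the field-of-study estimate shrinks accordingly. All numbers below are invented; this is a sketch of the fixed-effects idea, not of the SSRC analysis itself.

```python
import random

random.seed(1)

# Invented parameters for the sketch. Selective (high-growth) schools enroll
# more math/science majors, which confounds a naive field comparison -- the
# situation Arum describes.
TRUE_FIELD_EFFECT = 2.0   # assumed genuine extra growth from a science major
schools = {s: random.gauss(0, 10) for s in range(24)}  # 24 schools, as in SSRC

students = []
for s, school_effect in schools.items():
    p_science = 0.3 + 0.4 * (school_effect > 0)  # better schools: more science
    for _ in range(50):
        science = random.random() < p_science
        growth = school_effect + TRUE_FIELD_EFFECT * science + random.gauss(0, 3)
        students.append((s, science, growth))

def mean(xs):
    return sum(xs) / len(xs)

# Naive estimate: raw difference in mean growth by field (confounded by school).
naive = (mean([g for _, sc, g in students if sc])
         - mean([g for _, sc, g in students if not sc]))

# Fixed-effects estimate: demean growth AND field within each school first,
# then fit the slope on the demeaned data (the "within" estimator).
school_g = {s: mean([g for s2, _, g in students if s2 == s]) for s in schools}
school_f = {s: mean([float(sc) for s2, sc, _ in students if s2 == s])
            for s in schools}
num = sum((float(sc) - school_f[s]) * (g - school_g[s]) for s, sc, g in students)
den = sum((float(sc) - school_f[s]) ** 2 for s, sc, g in students)

print(f"naive field effect: {naive:.2f}, within-school estimate: {num / den:.2f}")
```

The naive estimate comes out well above the assumed 2.0 because it absorbs the school effects, while the within-school estimate lands near 2.0; this is, in miniature, why adding school fixed effects shrank the math/science advantage in the SSRC analysis.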
Unlike the differences between the academic concentrations and business, education, and social work, the differences between the social sciences and humanities, on the one hand, and science, math, and engineering, on the other, are not statistically significant. Nevertheless, the fixed effects analysis revealed that an additional 5-6% of variance in 2007 CLA test scores -- even after controlling for individual social background, high school preparation and field of study -- can be attributed explicitly to institutional differences (Richard Arum, personal communication). This means that it is important to study the institutional variables themselves. One important SSRC finding is that students make more gains on the CLA when the faculty has higher expectations of student performance. It would also be interesting to know whether gains vary according to the rigor of courses, the amount of reading and writing required, and so on. Further research using the CLA should help throw light on the extent to which “best practices” in the teaching of undergraduates make a difference in educational outcomes.
Finally, the SSRC longitudinal study has found evidence (though it was not reported in “Learning to Reason and Communicate in College”) that directly undercuts Murray’s claim that college can benefit only the top 15% of students who currently attend it.
CAE itself has emphasized the high correlation between CLA and SAT scores. The SSRC study also found that the learning and skills gains on the CLA are greater for those with a strong academic preparation in high school (Advanced Placement courses, a strong GPA, etc.). Given these findings, it would be very useful to know the extent to which gains in CLA scores in the first two years of the college experience (2005-2007) are correlated with SAT scores. Although SAT scores cannot simply be identified with I.Q., the correlation is high enough that this would appear to be an important question to ask in view of Murray's claims.
Since this matter was not addressed in “Learning to Reason and Communicate in College,” I sent the following two questions about it to Richard Arum:
(1) Do the students who had higher SAT scores in 2005 make more CLA-defined gains (in 2007) than those with lower scores? If so, how great was the difference?
(2) Is there a level of SAT score below which no value-added gains show up?
As I mentioned to Arum, it seemed to me that a large part of Murray's claim would be undermined if the answer to question (2) were no, even if the answer to question (1) were yes.
As it turned out, SSRC had already examined these questions, and its findings do contradict Murray’s claims (the following is also taken from Arum’s email):
There is no relationship between prior SAT (or ACT) performance and growth on CLA performance between 2005 and 2007. The top decile and bottom decile of students in terms of prior SAT/ACT scores each do equally well in terms of growth on the CLA. SAT/ACT scores are correlated with both the 2005 and 2007 CLA scores, but not the growth in these scores.
As measured by the CLA, therefore, the bottom decile and top decile of students who attend college appear to benefit equally from the college experience.
The cumulative evidence against Murray
Several readings of Murray’s claim that the B.A. degree is “meaningless” are possible.
On one reading, Murray is claiming that the B.A. degree is meaningless because most students in most disciplines who graduate from college haven’t learned anything. The four years (or more) spent there are meaningless in that sense.
There are many problems with this view. Obviously, the brighter the student, the better he or she will be at intellectual tasks, inside or outside higher education. Abilities are on a continuum, and in principle one can draw the dividing line between who should go to college and who shouldn’t in many different places. In a New York Times interview, Murray draws the line at Paul A. Samuelson’s college textbook on introductory economics. The test of whether a student is able to deal with college-level material “traditionally understood,” he says, is whether the student can go beyond simply knowing what the words on the pages of this text mean and whether the student can really understand economics as it is taught in the textbook.
But why draw the line here? Even if we grant Murray that only 15% (approximately) of students who currently attend college can “really understand” Samuelson’s textbook, it is not clear how one gets from this point to anything approaching an endorsement of Murray's radical policy prescription, according to which most students who are currently in college should drop out and do something else. After all, the American higher education system is startlingly diverse. Undoubtedly, the level of “real understanding” that Murray has in mind is beyond most students at many non-selective institutions. But it doesn't follow that important elements of economics -- including ones appropriately taught in higher education at some level -- are beyond all the students that Murray thinks should simply drop out of college.
Even if we agree that the level of intellectual skills and ability that the CLA aims to measure is significantly below the intellectual level required to “really understand” Samuelson’s textbook, it doesn’t follow that the CLA is irrelevant to the assessment of higher education, since the CLA tests for intellectual skills that are needed in jobs in the business world at the management level. The test might have a rather low ceiling for brilliant students, and for that reason might be unable to distinguish between a very bright student and an Isaac Newton -- but then, if you are as bright as Isaac Newton, you don’t even need to take calculus: you can just develop it yourself.
Furthermore: if a majority of students were to follow Murray’s advice and drop out of college, what would they do instead? His recommendation is that they should pursue vocational training or other kinds of non-college education. But as we have seen, it is simply not true that students who pursue vocational education rather than a college degree do better in terms of lifetime earnings. The claim that they do is actually contradicted by the available evidence.
On another reading of Murray -- a less radical one -- the claim is not that college students who earn a B.A. haven’t learned anything, but that there is no way to know, simply on the basis of a student’s having graduated from college with a B.A. degree, that he or she has actually learned anything or developed skills of any significance. As he has put it: “[L]et’s talk about having something kids can take to an employer that says what they know, not where they learned it.” If this is what his claim amounts to, it is simply an appeal for evidence and better assessment. In this case, Murray should certainly welcome and applaud the CLA, which is trying in a new and innovative way to assess how well colleges are doing in developing important intellectual skills. If the preliminary findings of the SSRC hold up, higher education will have no reason to dread such assessments, since the results show that college students do make significant gains on these skills. There are also reasons for thinking, as we have seen, that it is the college experience itself, at least in part, that produces these gains.
The moral of this story
A couple of caveats are in order here.
First, the CLA is only one instrument for assessing college outcomes. While the CLA measures some of the important skills that we surely want a college education to significantly improve, it does not cover the whole territory (and of course the CAE does not claim that it does). Nevertheless, while a significantly improved score on the CLA is not a sufficient indication of the value of a college education, it is a necessary one, so it is encouraging that some colleges are able to demonstrate significant gains on the test. The preliminary results of the longitudinal study should also encourage others to develop other assessment tools to measure a much wider range of outcomes of the college experience.
The second caveat is that higher education has no right to be -- and cannot afford to be -- self-congratulatory or insouciant about the results. The findings do not remove the doubts or grounds for criticism leveled against the academy in recent years by the National Association of Scholars and other reform-minded organizations. The reported gains would undoubtedly be far greater were it not for the manifold, serious ills affecting the academy. If the academy, when it is in such a sorry state, can produce significant results on the CLA, we can only imagine what the results would be if the academy were in better shape.
The findings should therefore be viewed as a call for redoubling efforts at reform, rather than ending them. After all, we at the National Association of Scholars have come to reform the contemporary American university, not to bury it, and reform is an issue only for institutions that are worth saving, or still vital enough to be saved. What the preliminary findings discussed here show is that higher education has great potential, and that it is worth saving. Best of all, perhaps, the CLA effort is starting to show that “traditional” education produces the best outcomes, measured by variables that business and the rest of the society regard as important.