Grade Inflation in Higher Education: Is the End in Sight?

Michael J. Carter

Michael J. Carter is associate professor of sociology at California State University, Northridge, Northridge, CA 91330-8318; [email protected]. His main research interests are in social psychology, specifically the area of self and identity. His research has appeared in American Sociological Review and Social Psychology Quarterly.

Patricia Y. Lara is an MA candidate in sociology at California State University, Northridge, Northridge, CA 91330-8318; [email protected]. Her interests include college student development, qualitative research methodology, and criminology. Her research has appeared in the International Journal of Research & Method in Education.


Abstract

An abundance of research has shown the prevalence and severity of grade inflation in American higher education. In this article we revisit the issue of grade inflation and address whether it continues as a trend by examining grade distributions over a recent five-year period (2009–2013) in two of the largest university systems in America: the University of California (UC) and California State University (CSU) systems. The results reveal that while some campuses across these university systems showed a significant increase in undergraduate grade point averages over this time, most campuses showed no significant increase. The notion that grade inflation in American higher education may finally be reaching a plateau is addressed, and a discussion of factors that may explain variance in grade point averages across universities is provided.

Keywords

grade inflation, grade point average, University of California, California State University

Introduction

Over the past several decades, claims of grade inflation in American higher education have been ubiquitous, with ample evidence documenting its prevalence and severity (Arnold 2004; Summary and Weber 2012). The notion that grade inflation is more than an innocuous trend has also been common across the academy; many have cited the negative outcomes that accompany inflated grades. For example, past research has shown that students spend far less time studying in courses that inflate grades (Babcock 2010), and that students who receive inflated grades in introductory or preliminary courses do worse in advanced courses compared to students who do not receive inflated grades (Johnson 2003).

Along with these revelations, academics and administrators have assigned blame for who and what is responsible for inflating grades, linking the trend to status characteristics such as gender and whether an instructor is tenure-track or adjunct (female and adjunct instructors have been found to be more likely to inflate grades) (Jewell and McPherson 2012; Kezim, Pariseau, and Quinn 2005). Others point the finger less at specific groups of individuals and more at institutional change, explaining the grade inflation phenomenon as a shift from a criterion focused on the search for truth to a criterion of quality control generated outside academia (Oleinik 2009). Regardless of theme or focus, existing studies on grade inflation tend either to identify its correlate problems or to reveal that it continues to plague higher education, citing it as an ongoing trend that defines grading practices in universities across the nation.

In this article we revisit the issue of grade inflation by examining contemporary grading trends in two of the largest systems of public higher education in the United States, the University of California (UC) and California State University (CSU) systems. We examine these university systems because together they serve one of the largest student bodies in the nation,[1] both enroll students from a wide socio-economic spectrum, and both serve a large number of students from minority and underrepresented groups. In other words, students on the UC and CSU campuses form a sample that is generally reflective of the diversity and changing demographics that increasingly define student populations in United States higher education. Examining individual campuses within each system (32 campuses total) also provides a reasonable sample from which to draw inferences across universities.

The plan of this article is as follows: We first address grade inflation across the UC[2] and CSU systems to see if the trends toward inflation that have been documented in the past are still occurring in these respective university systems. We then look more closely into grading norms and strategies across the individual UC and CSU campuses to discover what may contribute to variance in grade point averages, and hence variance in grade inflation. To address these issues we examine undergraduate grade distribution data for each campus within the UC and CSU systems over a recent five-year period, spanning the 2009 to 2013 academic years.[3] By examining grade data over this five-year span we can see if grade inflation is still a contemporary problem, at least for the population in the study.

Grade Inflation: Still an Issue?

There is no ultimate consensus on the causes of grade inflation. Various reasons have been cited for why it has occurred and why it continues to occur, including competition among schools (Walsh 2010) and the influx of adjunct instructors in higher education who may be more likely to give high grades to receive positive student evaluations and thus ensure gainful employment (Sonner 2000). Regardless of why grades have steadily increased over the years, the fact that grades have increased raises a question: Considering that the common grading scale has lower and upper limits—i.e., the scale ranges from zero to 4.0—how far toward the high end of the scale will grades increase before the high-end limit of the scale is reached? After all, inflation connotes change—by definition inflation represents a sustained increase. But grades cannot inflate unabatedly forever; considering that the grading scale has an upper limit, at some point grade inflation must—theoretically—stop. So, in taking it as given that grades cannot possibly continue to increase forever, the question is not “will grades continue to inflate?” but rather “when will grade inflation diminish, plateau, and/or end?” The results we present in this study show that such an end may be in sight. A look at the recent data for grade distributions among the UC and CSU systems shows that grade distributions in many of their campuses have begun to plateau, though it may be premature to claim that grade inflation is an issue of the past. Let us examine the data and see.

Grade Inflation Trends: 2009–2013

Between 2009 and 2013 the average grade point average (GPA) across all UC campuses was 3.03 (SD = .16). Across CSU campuses, the average GPA was 2.93 (SD = .09).[4] Among UC campuses, the highest grades were found at Berkeley (M = 3.29, SD = .01), the lowest at Riverside (M = 2.77, SD = .04). Among CSU campuses, the highest GPAs were found at San Francisco State (M = 3.11, SD = .02) and Sonoma State Universities (M = 3.11, SD = .03); the lowest was CSU Bakersfield (M = 2.74, SD = .03). One can debate whether these GPAs are too high (or fine as is), but a different question involves whether these averages have changed significantly over the past half-decade. To answer this question a series of regression analyses was performed on grade distribution data across and within each university system. Regression analysis allows one to see if grade point averages have changed significantly in any direction over time. The standardized regression coefficients (β) presented below index the direction and strength of the linear trend in grade point average across years: a positive coefficient indicates that grade point averages rose over the period, and values approaching 1.00 indicate a nearly perfectly linear upward trend between 2009 and 2013.
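To make the procedure concrete, the following is a minimal sketch of the kind of year-on-GPA regression described above. The GPA values are hypothetical placeholders rather than the study’s actual data; with a single predictor, the standardized coefficient equals the Pearson correlation between year and GPA.

```python
# A minimal sketch of the year-on-GPA regression described above, using
# hypothetical annual GPA values (not the study's actual data).
import numpy as np
from scipy import stats

years = np.array([2009, 2010, 2011, 2012, 2013])
gpas = np.array([3.28, 3.28, 3.29, 3.30, 3.30])  # hypothetical campus-wide GPAs

# Ordinary least squares regression of GPA on year.
slope, intercept, r, p_value, stderr = stats.linregress(years, gpas)

# With a single predictor, the standardized coefficient (beta) equals the
# Pearson correlation between year and GPA.
beta = r

print(f"unstandardized slope = {slope:.4f} grade points per year")
print(f"standardized beta = {beta:.2f}, p = {p_value:.3f}")
```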

The first regression examined the change in grade point averages across all UC campuses from 2009 to 2013; the analysis showed no statistically significant change over this time (β = .07, not significant). The same is true for the CSU campuses; across all CSU campuses, combined grade point averages did not increase significantly from 2009 to 2013 (β = .13, not significant). So, while it may be correct to claim that grades across the UC and CSU systems are comparatively high, it is not correct to say that grades are continuing to inflate. But these system-wide results mask what is occurring within each system, at individual campuses. Let us address whether grade inflation continues to occur in specific universities in these systems.

Figures 1 and 2 present grade trends (linear fitted values) for each UC (figure 1) and CSU (figure 2) campus between 2009 and 2013.[5] In examining figures 1 and 2 one can see that regardless of trajectory, the average grades across all UC and CSU universities are high (with some noted variability across campuses), with the average hovering around 3.0. Figures 1 and 2 also seem to reveal—at least visually—that grade distributions at many UC and CSU campuses show an upward trend over time. But interpreting the data presented in these figures visually can be deceiving. Let us examine each UC and CSU campus more thoroughly to see whether these upward trends are statistically significant.

Regression analyses examining the change over this time at each UC campus reveal that grade distributions for five of the nine campuses significantly increased; UC campuses at Berkeley (β = .89, p <.001), Los Angeles (β = .63, p <.05), Riverside (β = .80, p <.001), San Diego (β = .88, p <.001), and Santa Barbara (β = .73, p <.01) all had significantly increasing GPA distributions from 2009 to 2013.[6] UC campuses at Davis, Irvine, Merced, and Santa Cruz showed no statistically significant increase (or decrease) in grade point average over that time. So, over the past half-decade approximately half of the UC campuses inflated grades; the other half did not. These results suggest that grade inflation in the recent past is not as ubiquitous across all institutions of higher education as many believe, but let us now examine grading practices in the CSU system to see what has occurred on those campuses.

Grade inflation trends over the past five years are even less noticeable among CSU campuses. Only about one-third of the CSU campuses showed a significant increase in GPA over this time. These campuses include Dominguez Hills (β = .94, p <.05), East Bay (β = .98, p <.01), Fullerton (β = .99, p <.001), Northridge (β = .95, p <.05), Pomona (β = .93, p <.05), San Diego (β = .99, p <.001), San Jose (β = .97, p <.01), and San Luis Obispo (β = .99, p <.01). The majority of CSU campuses showed no significant change over that time, and average undergraduate grades at Humboldt State University (β = -.93, p <.05) actually decreased significantly. So the notion that grade inflation continues to plague higher education across universities is not altogether true, as only thirteen of the 32 UC/CSU campuses (41%) inflated grades over the past five years.
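The campus-by-campus analyses above amount to repeating the same simple regression for each institution and flagging which trends reach significance. The sketch below illustrates that loop with hypothetical campus names and GPA series, not the UC/CSU data analyzed in this article.

```python
# A sketch of repeating the year-on-GPA regression for each campus and
# flagging significant trends; the campus names and GPA series below are
# hypothetical placeholders, not the UC/CSU data analyzed in this article.
import numpy as np
from scipy import stats

years = np.array([2009, 2010, 2011, 2012, 2013])
campus_gpas = {
    "Campus A": [2.90, 2.91, 2.93, 2.94, 2.95],  # steady upward trend
    "Campus B": [3.05, 3.04, 3.06, 3.05, 3.05],  # essentially flat
}

for campus, gpas in campus_gpas.items():
    slope, intercept, r, p, stderr = stats.linregress(years, np.array(gpas))
    verdict = "significant change" if p < 0.05 else "no significant change"
    print(f"{campus}: beta = {r:.2f}, p = {p:.3f} -> {verdict}")
```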

So far we have established—at least for universities in the UC and CSU systems—that grade inflation is not quite as systemic in the recent past as it was in the distant past. These findings offer some optimism to those who fear that grade inflation will continue without cessation, though the fact that many universities examined in this study still inflate grades may quickly quell that optimism (and it is of course possible that those who did not inflate grades could begin to in the future, or return to the practice if they inflated grades previously). Let us now turn our attention to other facets of grading, and see if anything can be discovered regarding why grades tend to be higher at some university campuses than they are at others.

Letter Grades as Semantic Categories

Research on grade inflation often examines grade point averages as the main or sole dependent variable, which is understandable, as examining changes in GPA over time seems the most objective way to measure grade inflation. But often in these studies the semantic meaning of “grade point average”—what GPA actually represents—is glossed over or ignored altogether. After all, grade point averages are in many ways reified measures of competence that represent a different unit of analysis (i.e., grouped data) than what was measured in the units of observation (i.e., performance in particular classes). In other words, grade point averages are numeric aggregates that together are less than the sum of their parts. To clarify this, consider the simple fact that most university students are not assessed by grade points; they are assessed using letter grades. Letter grades are then translated into numeric scores to determine grade point averages.
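A minimal sketch of that letter-to-point translation is given below, using the conventional 4.0 scale; the course letters and unit counts are illustrative only. Whatever semantic label a campus attaches to a letter grade disappears once the letter is converted to a number.

```python
# A minimal sketch of the letter-to-point translation discussed above,
# using the conventional 4.0 scale; the course letters and unit counts
# are illustrative.
GRADE_POINTS = {"A": 4.0, "B": 3.0, "C": 2.0, "D": 1.0, "F": 0.0}

def gpa(letter_grades, units):
    """Unit-weighted grade point average from course letter grades."""
    total = sum(GRADE_POINTS[g] * u for g, u in zip(letter_grades, units))
    return total / sum(units)

# Whatever semantic labels the campus attaches to these letters
# ("good," "average," "high pass"), they vanish once converted to numbers.
print(gpa(["A", "B", "B", "C"], [3, 3, 4, 3]))  # -> 3.0
```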

So, what is the point of this? The point is that it is possible that in examining student assessment as grade point averages educators commonly commit an ecological fallacy: they assume that a 3.0 equates to a B, and that a B equates to work that is above average—but there may be a disconnect between the semantic meaning of a student’s competence and its treatment as a numeric value within a range of values. Why is this? Because in looking closely at semantic grade definitions across universities one finds quite a bit of variance in the definitions used to represent letter grades—i.e., variance in the semantic categories that represent competence. Sometimes a B means “above average,” but other times it means something else, such as “high pass.” These differences wash out and disappear when letter grades are transformed into grade points. A 3.0 from one university is virtually the same in value and meaning as it is at another. But perhaps we should not be so quick to ignore the semantic meanings of letter grades and what they represent. Perhaps assuming a B at all universities equates to a 3.0 misses crucial detail, and even leads to faulty assessment of student abilities comparatively. To illustrate this, let us examine grade distributions in the UC and CSU systems and consider the semantic meanings of what those grades actually represent at each individual campus.

The Relationship between Semantic Definitions of Grades and Grade Point Average

At first glance, student assessment—the practice of grading—appears standardized and similar across UC and CSU campuses, and also standardized and similar across most (if not all) American universities. By standardized and similar we mean that both the assessment scale (i.e., using the 0-4.0 grade point scale) and the semantic categories (i.e., labeling an A as “excellent,” a B as “above average,” a C as “passing,” etc.) used to represent student competence are the same across universities. This assumption is made by most studies on grade inflation that focus on reported grade point averages of universities. Without a close look within each campus in the UC and CSU systems one might assume that grading is standardized and similar. For example, a cursory glance into the grading practices in the UC and CSU systems reveals that all universities within these systems use the traditional A through F grade range, so it appears that standardization and similarity exist. However, a closer look shows that this is only partly so. While each university in each system uses the traditional A through F assessment scale, not all employ a plus/minus system for each letter grade. In the UC system, UC Santa Cruz uses pluses and minuses for some grades (A+, A-, B+, B-, C+), but not all grades (anything below a C is not modified by a plus or minus). In the CSU system, CSU Dominguez Hills has no D- grade in its A to F scale. Other differences in grading among UC/CSU campuses include the use of A+ as a marker of distinction; most UC campuses use it, but only about half the CSU campuses do (regardless of where it is employed, it always carries the same grade weight as an A—4.0). So, even within the traditional A to F assessment scale there is diversity in its use between and within the UC and CSU systems.

There also is variability regarding semantic categories (or definitions) for grades in the UC and CSU systems—particularly within the CSU system. Tables 1 and 2 present grade definitions for the five main grade categories (A, B, C, D, and F) used by each UC and CSU campus respectively, disregarding definitions for the plus/minus sub-categories (A-, B+, B-, etc.). A perusal of grade category definitions across the CSU reveals substantial differences semantically. This seems to matter, as we will see shortly how grade point averages correlate differently depending on a university’s semantic definition for letter grades.

Table 1. Grade Definitions: University of California Campuses

| UC Campus | A | B | C | D | F |
|---|---|---|---|---|---|
| Berkeley | Excellent | Good | Fair | Barely Passed | Failed |
| Davis | Excellent | Good | Fair | Barely Passing | Not Passing |
| Irvine | Excellent | Good | Fair | Barely Passing | Failure |
| Los Angeles | Superior | Good | Fair | Poor | Fail |
| Merced | Excellent | Good | Fair | Barely Passing | Not Passing |
| Riverside | Distinction | High Pass | Pass | Marginal Pass | Fail |
| San Diego | Excellent | Good | Fair | Poor (Barely Passing) | Fail |
| Santa Barbara | Excellent | Good | Adequate | Barely Passing | Not Passing |
| Santa Cruz | Excellent | Good | Fair | Poor | Fail |

Table 2. Grade Definitions: California State University Campuses

| CSU Campus | A | B | C | D | F |
|---|---|---|---|---|---|
| Bakersfield | Excellent | Good | Average | Passing | Failing |
| Channel Islands | Outstanding Performance | High Performance | Adequate Performance | Less than Adequate Performance | Unacceptable Performance (failure) |
| Chico | Superior Work | Very Good Work | Adequate Work | Minimally Acceptable Work | Unacceptable Work |
| Dominguez Hills | Excellent | Very Good | Satisfactory | Barely Passing | Failure |
| East Bay | Excellent | Good | Satisfactory | Poor | Failing |
| Fresno | Excellent | Good | Average | Passing | Failing |
| Fullerton | Outstanding | Good | Acceptable | Poor | Failing |
| Humboldt | Outstanding Achievement | Outstanding Achievement | Satisfactory Achievement | Minimum Performance | Failure Without Credit |
| Long Beach | Highest Level Performance | High Level Performance | Adequate Level Performance | Less Than Adequate Performance | Performance has been such that minimal course requirements have not been met. |
| Los Angeles | Superior | Good | Average | Poor | Non-Attainment |
| Maritime Academy | Highest Performance | Good Performance | Adequate Performance | Less than Satisfactory Performance | Poor Performance (course requirements not met) |
| Monterey Bay | Undefined (4.0) | Undefined (3.0) | Undefined (2.0) | Undefined (1.0) | Undefined (0.0) |
| Northridge | Outstanding | Very Good | Average | Barely Passing | Failure |
| Pomona | Superior Work | Very Good Work | Adequate Work | Minimally Acceptable Work | Unacceptable Work |
| Sacramento | Exemplary Achievement | Superior Achievement | Satisfactory Achievement | Unsatisfactory Achievement (sufficient to pass) | Unsatisfactory Achievement (no credit) |
| San Bernardino | Excellent | Good | Satisfactory | Passing | Failing |
| San Diego | Outstanding Achievement | Praiseworthy Performance | Average | Minimally Passing | Failing |
| San Francisco | Highest Level Performance | Good Performance | Adequate Performance | Less than Adequate Performance | Performance of the student has been such that course requirements have not been met. |
| San Jose State | Excellent | Above Average | Average | Below Average | Failure |
| San Luis Obispo | Excellent | Good | Satisfactory | Poor | Failing |
| San Marcos | Excellent | Good | Satisfactory | Passing | Failing |
| Sonoma | Outstanding | Commendable | Satisfactory | Minimum Performance | Failure |
| Stanislaus | Excellent | Good | Satisfactory | Unsatisfactory | Failure |

In comparing table 1 and table 2, one can see that there generally is uniformity among UC campuses regarding definitions of grade categories. Most UC campuses define an A as “excellent,” most define a B as “good,” and most define a C as “fair.” There is a bit more diversity in how UC campuses define the D and F grades, but overall grade definitions across the UC campuses are quite uniform. The only UC campuses that deviate in any substantial fashion are UCLA and UC Riverside, which use unique definitions for some (UCLA) or most (UCR) grades. All in all, the UC campuses seem to have a mostly consistent grading system.

An examination of the grade definitions provided in table 2 for the CSU campuses reveals far less consistency compared to the UC campuses. Among the 23 CSU campuses, 9 different definitions are used for the grade of A; 12 different definitions are used for the grade of B; 8 for the grade of C; 11 for the grade of D; and 10 for the grade of F.[7] To illustrate these differences, let us examine the ways the C grade is defined across CSU campuses. CSU Dominguez Hills, Stanislaus, San Bernardino, San Marcos, East Bay, Sonoma, and San Luis Obispo (7 campuses total) all define a C as “satisfactory.” Sacramento State University and Humboldt State University (2 campuses total) use a similar but not identical definition for C—“satisfactory achievement.” CSU Los Angeles, Bakersfield, Northridge, Fresno State, San Jose State, and San Diego State Universities (6 campuses total) all define a C as “average.” This is interesting because the term average connotes that most students should fit this definition. “Average” implies a distribution centered on the grade point (i.e., it is the modal category), whereas other definitions for C do not imply such a centered distribution. Indeed, San Diego State University’s General Catalog states that a C is “the most common undergraduate grade” and that a B is “definitely above average” (Cook 2014), clearly implying that there should be more Cs awarded than Bs. An implied distribution of grades based on semantic categories, or the idea that most college grades should be in the C range, does not necessarily exist for other CSU campuses. CSU Channel Islands, San Francisco State University, and the Maritime Academy (3 campuses total) all define a C as “adequate performance.” CSU Long Beach deviates slightly from this definition by using “adequate level performance,” while Chico State University and Cal Poly Pomona use “adequate work.” CSU Fullerton defines a C as “acceptable.” CSU Monterey Bay offers no formal definition for its letter grades at all, only defining a C as a “2,” in reference to the four-point numeric scale for calculating GPAs.
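The tally of distinct definitions described above is straightforward to reproduce from table 2. The sketch below illustrates the counting with a small subset of the table; the dictionary holds only a few campuses and two grade categories, not the full table.

```python
# A sketch of the tally described above: counting how many distinct
# semantic definitions are used for each letter grade. The dictionary
# holds a small illustrative subset of table 2, not the full table.
csu_definitions = {
    "Bakersfield": {"A": "Excellent", "C": "Average"},
    "Dominguez Hills": {"A": "Excellent", "C": "Satisfactory"},
    "Fullerton": {"A": "Outstanding", "C": "Acceptable"},
    "Northridge": {"A": "Outstanding", "C": "Average"},
}

for grade in ("A", "C"):
    distinct = {defs[grade] for defs in csu_definitions.values()}
    print(f"{grade}: {len(distinct)} distinct definitions -> {sorted(distinct)}")
```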

Defining grades as average or otherwise may seem irrelevant—that these definitions are simply distinctions without a difference. But it raises the question: Does defining a C as “average” make a difference in grade distributions? In other words, if a university defines a C as average, which implies a set distribution and by definition connotes that most students should receive that grade in any given course, do those universities have grade point averages that are closer to the numeric value associated with a C (i.e., 2.0) than universities that do not define a C as average? The data show that this is so, if only in a subtle way. A t test examining grade point averages over the past five years for CSU campuses that define a C as average (Mean GPA = 2.90, SD = .08) vs. campuses that use some other definition for a C (Mean GPA = 2.95, SD = .01, t = -2.37, p <.05)[8] shows that GPAs are significantly lower (or closer to 2.0) when a C is defined as average. The difference in GPA between groups may seem negligible at only five hundredths of a grade point, but the difference is statistically significant and in the direction one might expect: when a university defines a C as average, professors seem to be more likely to grade as if a C is the most common grade given (though the inflated average, which is closer to a B than to a C, shows they do not adhere to this definition very closely).
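The comparison described above is an independent-samples t test on campus mean GPAs, grouped by how the campus defines a C. The sketch below shows the procedure with hypothetical GPA values rather than the figures reported in the text; the same regrouping applies to the F-grade comparison discussed next.

```python
# A sketch of the group comparison described above: CSU campuses are split
# by whether they define a C as "average," and their five-year mean GPAs
# are compared with an independent-samples t test. The GPA values are
# hypothetical, not the figures reported in the text.
from scipy import stats

gpa_c_average = [2.85, 2.88, 2.91, 2.93, 2.90, 2.92]  # C defined as "average"
gpa_c_other = [2.94, 2.96, 2.95, 2.97, 2.93, 2.95]    # any other C definition

t_stat, p_value = stats.ttest_ind(gpa_c_average, gpa_c_other)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```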

Further examination of table 2 also shows variability among CSU campuses regarding the label for the F grade. Many campuses label an F as “failing” or “failure,” with others defining an F as “unacceptable performance,” “unacceptable work,” “performance has been such that minimal course requirements have not been met,” “non-attainment,” “poor performance,” “unsatisfactory achievement,” and “performance of the student has been such that course requirements have not been met.” Does variability in defining the meaning of an F matter in grade distributions? Perhaps. A t test shows that a statistically significant difference in grade point averages exists between CSU campuses that define an F as “failing” or “failure” (Mean GPA = 2.92, SD = .09) compared to CSU campuses that do not follow this definition (Mean GPA = 2.96, SD = .09, t = -2.44, p <.05). CSU campuses that define an F as failing have lower GPAs than campuses that do not.

The results from the t tests described above show that the manner in which grade categories are defined affects—at least to some degree—grade distributions. Of course this analysis is limited and many other factors contribute to the variance in grade distributions. But it is interesting to see that the variability in grade definitions among universities might actually matter more than previously recognized, if only to a small degree.

With so much interest and concern about grade inflation many have offered strategies to curb the practice, including implementing pass/fail grading systems, better articulating grading expectations, separating instructor evaluations from student evaluations, focusing on earned grades versus entitlement, and reporting median grades on student transcripts (Bar, Kadiyali, and Zussman 2009; Tucker and Courts 2010). Some of these strategies may work better than others. The findings presented here regarding the influence of grade category definitions on grade distributions could add another strategy to those mentioned previously. The manner in which a university defines its grade categories might matter more than presently understood by university administrators. If universities carefully define the semantic meaning of their grades and assessment strategies they may be able to curb, reduce, and even reverse grade inflation. Perhaps defining a C as average and stating that it is the most common grade given (and of course making this clear to instructors) could lower inflated distributions. Of course much more research is needed regarding the impact of grade definitions on the practice of grading. The analysis reported here is admittedly simple, and other factors may explain as much or more of the variance in grade distributions. But these results suggest a promising avenue for future research.

Discussion

Whether grade inflation continues as a problem in higher education or not is really only part of the issue. It is undeniable that grades have increased from what they once were. So, what can be done to rectify the fact that grading seems to center toward the higher end of the grading scale? What can be done to re-center grade distributions, and how can we move toward a more valid metric of assessment? Ameliorating the problems grade inflation has created in higher education might require a significant change in grading rubrics, but it is doubtful that instructors could be persuaded en masse to re-center their grading strategies and correct the inflation that has occurred in one fell swoop. Abandoning the letter grade system also seems an unlikely solution, as letter grading is deeply institutionalized in the history of assessment.

If all instructors used the A–F scale as it was originally intended (a C represents average performance, As and Bs above average performance, and Ds and Fs below average performance) there would be no issue. But since grade inflation is an issue, some have endorsed the implementation of a rank-order grading system to be used in addition to letter grades (Carter and Harper 2013; Cherry and Ellis 2005). Assessing student performance in terms of both grades and ranks has already been employed at some universities, such as the University of North Carolina, Chapel Hill (Stancill 2014). Under this system, students are given both a letter grade and a numeric value representing their comparative standing vis-à-vis others. The idea is that the combination of letter grades and comparative rank allows a secondary assessment category that controls for the low variance in inflated grade distributions. Proponents of this strategy find it particularly useful in addressing the variance in quality that defines students from class to class. Experienced instructors know that the quality of students can vary dramatically from semester to semester; sometimes a class has many high achievers, other times few, if any. The combination of both letter grades and rank-ordered values might produce a more accurate and precise conception of student ability, because this system provides both a measure of absolute competence regarding mastery of a subject (a letter grade) and a measure of comparative competence regarding a student’s ability in relation to others (a ranked value).
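To illustrate the dual report such a system might produce, the following is a brief sketch that pairs each student’s letter grade with a class rank. The score cutoffs and tie-handling rule are illustrative assumptions, not the grading policy of any university mentioned in this article.

```python
# A sketch of a combined letter-grade-plus-rank report; the score cutoffs
# and tie-handling rule are illustrative assumptions, not the grading
# policy of any university mentioned in this article.
def letter(score):
    """Map a numeric course score to a letter grade (conventional cutoffs)."""
    for cutoff, grade in [(90, "A"), (80, "B"), (70, "C"), (60, "D")]:
        if score >= cutoff:
            return grade
    return "F"

def grade_and_rank(scores):
    """Return (letter grade, class rank, class size) for each score."""
    ordered = sorted(scores, reverse=True)  # rank 1 = highest score
    return [(letter(s), ordered.index(s) + 1, len(scores)) for s in scores]

print(grade_and_rank([95, 91, 88, 88, 72]))
# -> [('A', 1, 5), ('A', 2, 5), ('B', 3, 5), ('B', 3, 5), ('C', 5, 5)]
```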

While many support the idea of employing ranking in assessing student performance, some questions emerge regarding its efficacy. For example, if rank-ordering is implemented as an additional criterion by which to assess students, will it actually work to combat the harm grade inflation has caused? Some claim that implementing such a system would not improve the current state of education because ranking could force students toward competition and away from collaboration and mastery of a discipline. There is some validity to these claims; past studies have shown that high-quality students sometimes avoid majors in the STEM areas (science, technology, engineering, and mathematics) because of the increased competition that defines those areas (Seymour and Hewitt 1997; Tobias 1990). Ranking does embrace individualistic more than collectivistic orientations. Students would likely collaborate less and be hesitant to help one another in their classes when they realize that student assessment has largely become a zero-sum game (helping others succeed and get ahead in a rank-ordered grading system may directly harm one’s relative ranking). However, many who have criticized rank-order systems often offer little in terms of strategies that would directly counter inflated grades, instead claiming that we should “focus on teaching” and “engage students” (Anonymous 2004; French 2005). Adding a rank-ordered grading system to current assessment strategies might bring more positives than negatives, if the goal is to combat the effects of grade inflation and achieve a more reliable and valid system of assessment.

There presently is no consensus regarding what should be done about the problems grade inflation has created in higher education, and it is doubtful that such a consensus will ever be reached. The results in this study offer some evidence that grade inflation may be slowing, but the issue of grades that are simply too high persists. It is beyond the scope of this article to identify all the possible avenues of change that could be implemented to combat the problems brought on by grade inflation. What we have hoped to accomplish here is simply to report findings regarding the most current grade distributions within two university systems that together resemble common university populations in America, and to show that an end to grade inflation may be in sight. More research is needed to determine if the results found in this study are similar to what is occurring across the nation.

Conclusion

In this article we have attempted to show two things: first, that grade inflation may not be as widespread as it once was across institutions of higher education, and second, that grade inflation might be influenced by structural factors (e.g., semantic meanings of grades) more than previously understood. It should be mentioned that the analysis in this study has its limitations, and one should be cautious in interpreting the results as an absolute indication that grade inflation is indeed on the decline across American higher education. The sample examined in this article only includes universities on the West Coast; additional research is needed to determine if grade inflation across the nation is beginning to plateau or decline. Even at the universities examined here, respective grade distributions are likely influenced by many variables beyond those mentioned, including the socioeconomic status of students at the university, the level of preparedness the students at each university had before entering college, etc. Even considering these limitations, the analysis here does provide useful information about assessment practices and the state of contemporary higher education. The grade inflation issue that many believe still plagues American higher education might have an end in sight.

References

Anonymous. 2004. “Against Grade Inflation.” Nature 431:723.

Arnold, Roger A. 2004. “Way That Grades Are Set Is a Mark Against Professors.” Los Angeles Times. Los Angeles.

Babcock, Philip. 2010. “Real Costs of Nominal Grade Inflation? New Evidence from Student Course Evaluations.” Economic Inquiry 48:983–996.

Bar, Talia, Vrinda Kadiyali, and Asaf Zussman. 2009. “Grade Information and Grade Inflation: The Cornell Experiment.” The Journal of Economic Perspectives 23:93–108.

Carter, Michael J. and Heather Harper. 2013. “Student Writing: Strategies to Reverse Ongoing Decline.” Academic Questions 26:285–295.

Cherry, Todd L. and Larry V. Ellis. 2005. “Does Rank-Order Grading Improve Student Performance? Evidence from a Classroom Experiment.” International Review of Economics Education 4:9–19.

Cook, Sandra A. 2014. “2014-2015 SDSU General Catalog.” Pp. 468, vol. 101. San Diego, CA: San Diego State University.

French, Donald P. 2005. “Grade Inflation: Is Ranking Students the Answer?” Journal of College Science Teaching 34:66.

Jewell, R. T. and M. A. McPherson. 2012. “Instructor-Specific Grade Inflation: Incentives, Gender, and Ethnicity.” Social Science Quarterly 93:95–109.

Johnson, Valen E. 2003. Grade Inflation: A Crisis in College Education. New York: Springer-Verlag.

Kezim, Boualem, Susan E. Pariseau, and Frances Quinn. 2005. “Is Grade Inflation Related to Faculty Status?” Journal of Education for Business 80:358–364.

Oleinik, Anton. 2009. “Does Education Corrupt? Theories of Grade Inflation.” Educational Research Review 4:156–164.

Seymour, Elaine and Nancy M. Hewitt. 1997. Talking About Leaving: Why Undergraduates Leave the Sciences. Boulder, CO: Westview Press.

Sonner, Brenda S. 2000. “A is for 'Adjunct': Examining Grade Inflation in Higher Education.” The Journal of Education for Business 76:5–8.

Stancill, Jane. 2014. “At UNC-Chapel Hill, the Truth About Grades.” Newsobserver.com.

Summary, Rebecca and William L. Weber. 2012. “Grade Inflation or Productivity Growth? An Analysis of Changing Grade Distributions at a Regional University.” Journal of Productivity Analysis 38:95–107.

Tobias, Sheila. 1990. They're Not Dumb, They're Different: Stalking the Second Tier. Tucson, AZ: Research Corporation.

Tucker, Jan and Bari Courts. 2010. “Grade Inflation in the College Classroom.” Foresight 12:45–53.

Walsh, Patrick. 2010. “Does Competition Among Schools Encourage Grade Inflation?” Journal of School Choice 4:149–173.

 

[1]The UC system enrolls over 244,000 students. The CSU system enrolls over 445,000 students.

[2]Nine UC campuses are examined; UC San Francisco is not included as it is primarily a graduate institution.

[3]Grade data for the UC and CSU campuses were the most recent available at the time the study was conducted.

[4]Data for UC and CSU campus GPAs were obtained for the past five years (2009–2013) from each specific campus’ institutional research division or from the CSU Office of the Chancellor/UC Office of the President. The average GPA for each UC campus regards all undergraduate students enrolled across each respective semester/quarter between 2009 and 2013. The average GPA for each CSU campus regards all undergraduate students enrolled across each respective academic year between 2009 and 2013.

[5]Data for CSU Los Angeles represents GPA trends between 2008 and 2011, as these were the most recent available at the time of data collection.

[6]It should be noted that the significant findings regarding grade inflation between 2009 and 2013 at Berkeley, UCLA, Riverside, San Diego, and Santa Barbara are relatively subtle. For example, in 2009 the average GPA at Berkeley was 3.28, whereas the average GPA in 2013 was 3.30 (an increase of two hundredths of a grade point). While subtle, small changes such as these have added up over the past decades and together have led to the problem of grade inflation. Note that the average grade at Berkeley in 2005 was a 3.24, four-hundredths of a grade point lower than 2009, and six-hundredths of a grade point lower than 2013.

[7]Differences among definitions are both substantial and subtle; for the purpose of this article we consider a substantial difference as defining the grade of A as “Excellent” versus “Highest Level Performance.” We consider a subtle difference as defining an A as “Outstanding” versus “Outstanding Achievement.”

[8]Data for CSU campus GPAs were obtained for the past five years (2009–2013) from each specific CSU campus’ institutional research division or from the CSU Office of the Chancellor. The average GPA for each campus regards all undergraduate students enrolled in each respective year.

