Retooling Education: Testing and the Liberal Arts

Robert L. Jackson

America has a long history of searching for educational tools that will open up opportunities for ordinary people. For over a century, standardized testing has been at the center of such policy initiatives. From the widespread adoption of IQ tests in the 1920s, through the testing regime promoted by No Child Left Behind, to the recent announcements by colleges abandoning the SAT as an admissions tool, standardized tests have been variously extolled for helping students overcome social disadvantage, criticized for imposing artificial hierarchies, and condemned for reifying social inequalities. The history of such tests is illuminating in several ways, not least because of the migration of test-enthusiasm through the political spectrum. In short order, testing went from progressive nostrum to conservative idée fixe, and from democratic leveler to elitist swindle.

In what follows, I will trace some of these intellectual switchbacks in the history of standardized testing. Mostly what history shows us is that the initial burst of enthusiasm for testing a century ago pushed American education onto a sterile track. I conclude with some suggestions about what a post-standardized testing approach to education (at all levels) might look like. For the sake of argument, I suggest that we take some of our cues from pretty far back—before the Enlightenment, the scientific revolution, or even the beginnings of American democracy. But let’s start with today.

A recent national poll conducted by Harvard University and the Hoover Institution surveys “What Americans Think about Their Schools.” ([7], p. 15) A majority (57 percent) support the renewal of No Child Left Behind, and nearly three-quarters (73 percent) believe that there needs to be “a single national standard and a single national test for all students,” as opposed to the current patchwork of state measures. The divergence of political opinion is most prominent on questions of accountability and testing. While a large majority (81 percent) feels that a student’s eligibility for grade promotion and high school graduation should involve passing an appropriate examination, a closer parsing of that majority reveals the influence of political affiliation: a dramatic 24-percentage-point gap (88 to 64) separates those self-identifying as “extremely conservative” from those identifying as “liberal.” As education policy analyst Michael Petrilli concluded: “National testing has become a conservative position.” ([12], online) Perhaps that is true, but it has not always been so.

Mental measurement has long been a contentious issue in American education. National testing has an eighty-year history steeped in the progressive hope of a liberal ideal: identifying students’ aptitudes to help them achieve their full potential. In contrast to today’s conservative position on a national test of standards, the historic proponents of national testing were often committed to socially progressive views of education. The effects of standardized testing and assessment eventually permeated higher education, seeping up through the cracks and fissures of the lower grades. The irony that a liberal-progressive position is now heartily endorsed by self-professed conservatives may be better understood if we look at the origins of the testing impulse.

This essay will survey a brief history of mental testing, one that demonstrates the political force of large-scale tests, which served to justify the early twentieth century’s newest social science: education. (The historical accounts of Michael Ackerman and Nicholas Lemann are particularly helpful in this regard.) From wartime necessity to peacetime prosperity, national tests offered educators and politicians a means of taking account of society’s needs. Yet the discipline of mental measurement, psychometrics, was itself a developing science. And as such, a strain of American pragmatism selectively adopted (or discarded) those aspects of the underlying philosophy of testing that suited the contemporary need. Unfortunately, as is all too common in the social sciences, the technical expertise (e.g., standardized testing) has far outstripped the original enterprise (e.g., education) for which it was designed. Today’s concern over national testing standards obscures our fundamental purpose in education: to transmit a corpus of content from one generation to the next. The following pages are an attempt to retrace the bewildering history of testing and suggest a more promising future for education—and testing.

The Politics of Testing

In 1917, as the United States entered the First World War, a cadre of eager young psychologists enlisted their services in the war effort through a committee of the American Psychological Association. Their labors produced the first national-scale standardized examination: the Army Alpha test, which was used to classify 1.7 million recruits over the course of America’s eighteen months at war. The magnitude of Alpha’s success thrust several of the army’s psychologists into public prominence—chief among them, Lewis Terman. The exigencies of war gave psychometrics its first big break.

Prior to the war, Terman had worked at Stanford, helping to build its department of education and to establish the budding field of educational psychology. His revision of Alfred Binet’s intelligence test would become the standard for American intelligence testing (i.e., the Stanford-Binet), even as Terman became a champion of psychometrics. His seminal work on intelligence, The Measurement of Intelligence (1916), was intended as an explanatory guide for a national audience, and its message was derived from a distinctly progressive political perspective on education. National tests would, in Terman’s opinion, help the country to classify students and “choose the methods and matter of education which will guarantee for such children the best possible returns for their efforts.” ([3], qtd., p. 703) His ambition was to offer “a mental test for every child,” classifying students into one of five categories: gifted (2.5 percent), bright (15 percent), average (65 percent), slow (15 percent), and special (2.5 percent)—a simple demarcation along the standardized distribution of the normal (Gaussian) curve. By the end of the war, the success of psychometrics would fast-track Terman’s aspiration for national intelligence testing and bolster the “new science” of education with modern, empirical techniques.
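
Just how mechanical that demarcation was can be seen by converting Terman’s five population shares into IQ cutoffs. The short sketch below is my own illustration, not Terman’s published procedure, and it assumes a normal scale with mean 100 and standard deviation 16 (roughly the early Stanford-Binet convention):

```python
# Minimal sketch: IQ cutoffs implied by Terman's five bands, assuming
# scores are normally distributed with mean 100 and SD 16 (an assumed,
# early Stanford-Binet-style scale; not Terman's published procedure).
from scipy.stats import norm

# Cumulative proportions at the boundaries between the bands:
# special 2.5% | slow 15% | average 65% | bright 15% | gifted 2.5%
boundaries = [0.025, 0.175, 0.825, 0.975]

mean, sd = 100, 16
cutoffs = [mean + sd * norm.ppf(p) for p in boundaries]
print([round(c, 1) for c in cutoffs])  # ~[68.6, 85.0, 115.0, 131.4]
```

On such a scale, “slow” and “bright” occupy roughly the first to second standard deviation on either side of the mean, and the outer bands lie beyond about two standard deviations.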

Understandably, school administrators at the turn of the century were looking for help in managing “methods and matter,” amid the explosive growth of U.S. secondary schools. Waves of immigrants and poorly educated youth were entering the system. Between 1890 and 1930, the high school population surged from 6 percent of the 14-to-17-year-old cohort (more than 350,000 students) to more than 50 percent (nearly five million)—a 1,300 percent increase. ([10], p. 15) The rapid increase in enrollments, drawing on widely varied socioeconomic and cultural backgrounds, strained school administrators to the breaking point; they were eager to seize on any technical means of managing so heterogeneous a student population.

In 1893, the National Education Association (NEA) organized the prestigious Committee of Ten to address the nature of college preparation in secondary schools. Given the wide array of school curricula and the diversity of college entrance examinations, the Committee was charged with offering some standards of practice for American secondary education, along with thoughts on the transition from high school to college—for that small handful of prospective collegians. A mere 15,500 bachelor’s degrees were awarded in 1890. ([6], p. 503) Four types of secondary curriculum were suggested (classical, Latin-scientific, modern languages, and English), each possessing a core of common elements. Headed by Harvard president Charles Eliot, the Committee offered a modest yet reform-minded suggestion: “every subject which is taught at all in a secondary school should be taught in the same way and to the same extent to every pupil so long as he pursues it, no matter what the probable destination of the pupil may be, or at what point his education is to cease.” ([13], qtd., p. 42) The Committee also downplayed the requirements of classical languages—which outraged Greek and Latin professors—and encouraged greater attention to the emergent sciences. However, languages, history, mathematics, and the natural sciences continued as pillars of the educational edifice. The Committee represented a middle way between romantic reformers and classicists. But within a decade, its recommendations for secondary education would seem obsolete in light of more progressive educational theory—i.e., that of John Dewey et al.

Though reform-oriented, the Committee was denounced by educators who were intent on tracking students—i.e., identifying disadvantaged students for alternative forms of schooling, based on their “probable destination.” The politics of progress and the corresponding philosophy of education demanded that students be guided “in the direction of social capacity and service, [toward the students’] larger and more vital union with life.” [5] Historian Diane Ravitch explains the reaction to the Committee: “Progressive educators considered it a misguided elitist effort to impose a college preparatory curriculum on everyone…[ignoring] individual differences and social needs.” ([13], p. 48) The “elitist” curriculum was to be replaced with the new “science of education,” and, to that end, students’ “social capacities” had to be identified—for which intelligence tests offered an immediate, practical solution.

When the NEA approved another advisory council in 1918, the membership of the Commission on the Reorganization of Secondary Education was markedly different from that of Eliot’s Committee. Rather than college presidents and academy headmasters, this progressively inclined group was composed of education professors, who were determined to reorder the secondary curriculum. Their report, Cardinal Principles of Secondary Education, fully endorsed the new approach of “comprehensive high schools,” offering a smorgasbord of courses and electives designed with “individual differences and social needs” in mind. Their ambition to replace the liberal arts recommendation of their predecessors demonstrated their disdain for the academic curriculum. And intelligence tests were submitted as evidence in support of the new curriculum. As education historian Jeffrey Mirel explains, the opponents of the liberal arts curriculum won,

because supporters of comprehensive high school defined equal education as equal access to different and unequal programs. Guided by the new IQ tests (which did as much as any single thing to convince American educators that tracking was not only possible but preferable) and the rise of guidance and counseling programs (which would match young people with the curriculum track best suited to their “scientifically” determined individual profiles), America entered an era of democratic dumbing down: the equal opportunity to choose (or be chosen for) failing programs. ([10], p. 17)

The curricular influence of such “scientific” notions was promoted by the most renowned educational psychologists of the day. As Terman biographer Paul Davis Chapman recounts, the success and international acclaim of the Army Alpha test enabled Terman and other army psychologists, including Robert Yerkes (Harvard) and Edward Thorndike (Columbia), to mass-market intelligence testing to post-war America. In 1920, in cooperation with the Rockefeller Foundation, the World Book Company distributed “nearly half a million National Intelligence Tests for use in public elementary schools.” ([3], p. 704) With scientifically validated tests to determine the sorting process and psychologically trained counselors to guide individuals toward their suitable place, a better world was being engineered by what Russell Kirk, echoing Burke, called “the sophisters and calculators.”

Inspired by a hereditarian philosophy of human nature, Terman and many of his colleagues believed intelligence to be largely fixed—to the extent that Terman logically became an outspoken advocate of eugenics, positing that many social ills (crime, poverty, etc.) were attributable to “children of subnormal mental endowment.” ([3], qtd., p. 703) His support of radical social programs, like forced sterilization laws, would later serve to discredit the strict hereditarian position. Yet, at the peak of Terman’s influence, educators generally assumed that intelligence was inborn.

Then, during the 1930s, a vigorous psychometric debate emerged concerning the essence of intelligence, with a spectacular dispute between two of the world’s leading psychometricians: Charles Spearman and L.L. Thurstone. The dispute turned more on mathematical models than on philosophical arguments—i.e., on whether factor analysis should extract a single “g factor” or multiple factors of “primary mental ability” (e.g., verbal, numerical, spatial, memory)—and the resulting notion of “multiple intelligences” allowed for alternative interpretations of intelligence. This also dovetailed nicely with the progressive cause of differentiated education and individualized instruction. Thus, during the interwar years, progressive educators first adopted fairly rigid tracking mechanisms, based on innate intelligence, then later moved toward individualized programs of instruction, supported by vocation-specific, skills-based tests. Whether taking the innate (nature) stance or the environmental (nurture) position, education sought scientific legitimacy through the measurements of psychometrics.
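
The mathematical core of the Spearman-Thurstone dispute can be made concrete with a small, admittedly artificial sketch of my own (not either man’s actual analysis). If every subtest in a battery draws on one common ability, the correlation matrix among subtests has a single dominant eigenvalue; if group factors are also at work, further sizable eigenvalues appear:

```python
# Minimal sketch of the one-factor vs. multiple-factor question using
# synthetic test scores. The battery, loadings, and noise levels are
# all invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 1000

g = rng.normal(size=n)        # a single general ability ("g")
verbal = rng.normal(size=n)   # group factors, a la Thurstone
spatial = rng.normal(size=n)

scores = np.column_stack([
    0.6 * g + 0.5 * verbal + rng.normal(scale=0.6, size=n),   # vocabulary
    0.6 * g + 0.5 * verbal + rng.normal(scale=0.6, size=n),   # analogies
    0.6 * g + 0.5 * spatial + rng.normal(scale=0.6, size=n),  # mazes
    0.6 * g + 0.5 * spatial + rng.normal(scale=0.6, size=n),  # block design
])

R = np.corrcoef(scores, rowvar=False)      # correlations among subtests
eigenvalues = np.linalg.eigvalsh(R)[::-1]  # largest first

# One dominant eigenvalue is the Spearman-style reading; a second
# sizable one is evidence for Thurstone-style group factors. In this
# toy battery both patterns appear (roughly 2.4, 0.9, 0.4, 0.4).
print(np.round(eigenvalues, 2))
```

That both readings can be defended from the same data is precisely why the dispute hinged on modeling choices rather than on any decisive observation.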

As applied psychologists strove to meet the demands of classifying a diversified labor force, they were “less concerned with theories of human cognition than with the practical needs of their clients.” ([1], p. 289) Surveying the period, testing historian Michael Ackerman notes that educators and psychologists developed various vocational aptitude tests that eventually forced the g-factor of intelligence into the shadows. At about the same time, other leading psychometricians designed alternatives to IQ—for example, E.F. Lindquist’s Iowa Tests of Basic Skills and of Educational Development, which “claimed to measure a number of different cognitive skills.” Ackerman concludes that “[b]y the beginning of World War II, many psychologists and educators had abandoned the belief that mental capacity could be evaluated on a single scale; a much more complex and varied picture of human capability had been formed.” ([1], p. 289)

Before America’s involvement in the Second World War, the state of the art in psychometrics had dampened earlier extremism (like eugenics), enabling testing to survive the discomforting associations of its deterministic forebears. And the political popularity of curricular studies (the science of education) had achieved a radical diminution of the academic curriculum. The symbiotic relationship between pre-undergraduate education (elementary and secondary) and mental measurement was thus firmly established. The second half of the twentieth century would offer higher education an opportunity to generate similar benefits from psychometrics.

The Philosophy of Psychometrics

As the early history of intelligence testing demonstrates, IQ scores can have profound social implications. From the outset, test-makers were explicitly interested in classifying students to place them in appropriate school tracks. But the concepts of “tracking” and identifying “appropriate” subject matter for students are philosophical notions for pedagogues, rarely addressed in discussions of IQ and aptitude testing. IQ test-makers were seeking to identify innate potential—the human material attributed to Nature and Fortune, in generations past.

Following World War II, intelligence measurements became a rather delicate matter. On the one hand, psychologists had grown skeptical that there was such a thing as “pure intelligence.” On the other hand, practitioners recognized (or accepted implicitly) that aptitude tests closely correlated with IQ tests. Lemann’s history of IQ testing offers us a glimpse of this social history—one that is still very much alive today. After the war, higher education launched an all-out campaign employing standardized tests, in a strange new world of double-speak: “the father of the SAT [Scholastic Aptitude Test] was on record as believing that there was no such thing as general intelligence….[Yet, the] main promoters of the wide use of the SAT regarded it rather casually as an intelligence test.” ([9], p. 86) So, while avoiding the invocation of g, the demands to sort the latest (and largest) crop of students—returning World War II servicemen—overrode theoretical concerns in the academy. The influx of college-bound GIs offered the educational psychologists another opportunity to practice their psychometric alchemy—transforming the leaden mass of America’s young men into the golden majority of the post-war generation.

Unfortunately, the method behind this academic wonder would defy the difficult and complex nature of education. Because curricular standards had been eroded throughout the first half of the twentieth century, the majority of returning servicemen were not prepared for the rigors of college-level studies. Nevertheless, tests were once again invoked to sort students, differentiating examinees by a composite score of verbal and quantitative skills—i.e., the SAT. And specifically what intellectual qualities did the SAT reveal? No one quite knows. As Lemann reminds us, the psychometric experts at Educational Testing Service (ETS), the home of the SAT, “did not settle any of the controversies about intelligence testing, such as whether IQ is something innate or learned. Instead, [ETS’s] bread and butter was tightly looped validations of the SAT.” ([9], p. 89) They continued refining the test—a paragon of reliability and correlational validity—without any clear philosophical explanation for the mental stuff being measured.
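
What “tightly looped validation” amounts to in practice is easy to illustrate: a test is validated correlationally by showing that its scores predict some external criterion, such as freshman grades, with no claim made about the mental stuff producing the correlation. The sketch below uses entirely synthetic numbers of my own, not ETS data or procedures:

```python
# Minimal sketch of correlational ("predictive") validation: correlate
# test scores with a criterion such as freshman GPA. All numbers are
# synthetic; this is an illustration, not ETS's method.
import numpy as np

rng = np.random.default_rng(42)
n = 500

sat = rng.normal(500, 100, size=n)                       # composite score
gpa = 2.8 + 0.002 * (sat - 500) + rng.normal(0, 0.4, n)  # noisy criterion
gpa = np.clip(gpa, 0.0, 4.0)

validity = np.corrcoef(sat, gpa)[0, 1]
print(f"validity coefficient r = {validity:.2f}")  # ~0.45 in this toy setup
```

A coefficient of that size can be reported as a triumph of measurement while saying nothing about whether the test captures aptitude, schooling, or something else entirely.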

At the heart of this progressive project was a logical conundrum: the selection of the most promising students (through mental tests) was in tension with the offer of higher education to the masses (by political fiat). The progressive approach disguised a latent “democratic” paradox: how can we advance excellence while promoting equality?

Using “scientific” and objective assessments of applicants, namely IQ scores, the American Council on Education (ACE, 1948) argued that 50 percent of the college-age population had the requisite “mental ability” to enter post-war universities. Surely that was much more inclusive than pre-war estimates.1 Drawing on an army study correlating scores on the Army General Classification Test (AGCT) with college entrance examinations, the ACE and the army’s chief psychologist, Walter Bingham, promoted a large-scale expansion of American higher education—a threefold increase! As Bingham addressed the National Academy of Sciences in 1946: “These facts are a challenge to conserve the national heritage…the intellectual capacities of our young people.” ([1], qtd., p. 284) Bingham’s selection of “the most promising” was an ambitious, if unrealistic, goal, given the sterilization of the secondary school curriculum. As such, the SAT was forced to return to the “tightly looped validations” of aptitude, without reference to specific content. Students were implicitly viewed as native talents, rather than nurtured goods. Here again, it was not always so.

Prior to World War II, the College Board’s entrance examinations were subject-based and essay-formatted. Then, in 1942, the Board dropped the essay tests in favor of the SAT, ostensibly because of wartime necessity. With the end of hostilities, however, the essay exams never returned. What is more, “[t]he philosophical justification for replacing the essay exam with the SAT employed the language of equal opportunity.” ([1], p. 288) By 1946, colleges and universities were essentially prepared to accept the democratically designed, standardized, multiple-choice tests as measures of academic aptitude. Thus, progressive pedagogical change depended upon psychometric evidence, which, in turn, adapted to pedagogical reform—e.g., no more essays. As Lemann points out, even test critics from within education “believed that the problems they perceived in testing were best addressed not by de-emphasizing testing and letting society sort itself out in a more haphazard fashion but by constructing [better] tests.” ([9], p. 88) Testing was presumed to be an unalloyed good, useful for progressive reforms.

Surely that was the case in the politics of post-war education, driven, in large part, by President Truman’s Commission on Higher Education (1947–1948). Its findings argued for the massive expansion of public higher education; opposition to all discriminatory admissions practices (race, religion, sex, national origin); substantial federal aid to public institutions; a national scholarship program to fund the best and brightest (top 20 percent); and curricular changes to encompass a broader population of students.

Specifically, the Commission described college admissions policies as “discriminating against students who had not taken the proper courses because of inferior school facilities or poor counseling.” ([1], p. 287) The influence of such “narrow” curricular prerequisites seemed undemocratic (discriminatory) to the commissioners, leading them to recommend a de-emphasis on academic courses. This political mandate to provide “more opportunity” translated into a demand for increased college enrollments, drawn from the ACE’s estimate that 50 percent of the college-age population possessed the requisite aptitude (IQ). The Commission further “urged colleges to provide new non-traditional programs of study [and] it rejected the belief that higher education should remain committed solely to the conventional academic curriculum, which was oriented ‘toward verbal skills and intellectual interests.’” Furthermore, the Commission recommended that alternate aptitudes be identified (social sensitivity, artistic ability, etc.), arguing that colleges “cannot continue to concentrate on students with one type of intelligence.” ([1], p. 288)

This transmogrification of the curriculum, argued from the interpretation of IQ and aptitude data, was advanced by a politically liberal belief in democratic progress. From that perspective, national tests were expected to serve the majority of American youth by identifying their strengths and offering them the highest possible level of academic achievement—though “academic” would be redefined to accomplish that end. All of this strongly suggests that late-twentieth-century American higher education began “teaching to the test,” in the worst sense of that phrase.

Aptitude tests had been explicitly designed to foster social equity; research of the 1960s and 1970s went further, emphasizing the production of “culturally fair” tests, with an outrageous result: demands for “a complete overhaul of the American educational system, so that it deemphasized reading, verbal communication, and traditional academic problems and stressed a broader range of mental activities.” ([1], p. 294)

Today, the “overhaul” position on education implicitly influences the discourse of test design and interpretation, as mental measurement adapts to the “era of democratic dumbing down” of the curriculum. Tests are designed to quantify generic mental qualities, apart from the content of the student’s academic training—which is assumed to be culturally biased. As Diane Ravitch reports in her censorship exposé, The Language Police, textbooks and tests are so scrutinized by politically motivated special interest groups (from the Left and the Right) that there is little discernible content remaining by the time the watchdogs are finished bowdlerizing literature, history, and the humanities.

A philosophy of education that permits such devastation of our curriculum should not expect measures of intelligence or aptitude to rescue a few survivors from the ruins. We must fundamentally rethink the relationship between the measures and the subject matter, if we are to move beyond the present confusion. At times, our search for certitude in the educational venture seems to place us on that darkling plain where “ignorant armies clash by night.”

The Art of the Matter

Ultimately, we need a more robust philosophy of education to address the role and limits of psychometric measures. We do not lack educational theories. The surfeit of ideas on curriculum, instruction, learning, and the sociology of education indicates a near desperation to find the next new thing. And yet, in spite of multitudes of theories, we are surrounded by the odor of educational decay—failing schools, substandard national test scores in international comparisons, functional illiteracy among graduates, etc. The lack of a curriculum with demonstrable, consistent, replicable results offers a warrant for a longer view and a backward glance—past the horizon of John Dewey and the progressives, to an earlier era, beyond living memory. It is a pedagogical history known primarily through the writings of its detractors: men like Voltaire, Gibbon, and Rousseau. These eighteenth-century philosophes (not twentieth-century progressives) were the first to poke fun at “old-fashioned” pedagogy.

Ask any graduate student of education about the history of pedagogy, and you will likely elicit a short synopsis of twentieth-century thinkers—particularly the more radical, such as A.S. Neill or Paulo Freire. I recall searching for some historic continuity in my graduate studies on language teaching. The linguistic and anthropological aspects seemed to offer the best-rehearsed history, which began in the mid-nineteenth century—Saussure, Sapir, Whorf et al. But I knew there had to be more.

Fortunately, a thoughtful professor quietly guided me to Louis G. Kelly’s 25 Centuries of Language Teaching (1969), where I was introduced to a strange old-fashioned pedagogy known as “the Latin-translation method.” I had previously been led to believe that Latin-translation was the province of archaic, unenlightened minds, in the period before the pedagogical equivalent of electricity. But, reading Kelly’s description caused me to rethink my snap judgment, and the dismissive scorn of my peers (and a few professors).

The discovery of Kelly’s volume emboldened me to seek out historical perspectives on education. And, I began to see that previous generations possessed compelling histories, with real people and real ways of answering questions, like “How do we best educate our children?” For example, when I first heard that Reverend Maury’s classical school had served as the educational primer for three U.S. presidents (Jefferson, Madison, and Monroe), I started to suspect that effective teaching had a deep connection to the classical tradition.

While education historians can assist us in some of this archival research, many of them look only as far as the Enlightenment-inspired writings of Pestalozzi, Froebel et al., supposing that practical philosophies of education begin with Locke’s Essay Concerning Human Understanding or Rousseau’s Emile (a surprising yet impractical treatise on home-schooling). To reconstruct the Renaissance pedagogies of Europe requires some more intellectual sleuthing, but occasionally such historical investigations yield a pearl of great price.

Among my favorites is T.W. Baldwin’s [2] two-volume history of English grammar schools, William Shakspere’s Small Latine & Lesse Greeke, which presents an implicit philosophy of education while offering a fascinating description of the schooling that Shakespeare likely received. Given the limited biographical material on Shakespeare, Baldwin conducts some historical forensics on the education of middle-class Englishmen in the sixteenth century.

Baldwin’s research uncovered a pedagogy steeped in the Renaissance ideal of literary immersion. For Shakespeare and his contemporaries, the oratorical tradition of learning language by venerating and imitating great masters still permeated the spirit and practices of English pre-undergraduate education. Even without a university education, Shakespeare would have studied Latin and Greek over the course of eight years, in a curriculum that exposed students to essential masters, including: Lucian, Demosthenes, Herodotus, Aristophanes, Homer, Euripides, Terence, Virgil, Horace, Cicero, Caesar, Sallust, Origen, Basil, Jerome et al.

From the earliest years, English boys were offered language courses designed to explicate the relationship between “the knowledge of ‘truths’ [as distinct from] the knowledge of ‘words.’” The principal pedagogue of this era was Erasmus, and his philosophy of education presupposed immersion in the classical languages of Greek and Latin because “within these two literatures are contained all the knowledge which we recognize as of vital importance to mankind.” ([2], p. 79) The pedagogic principle was quite simple, on the surface: learn basic language rules (grammar, lexicon, syntax) and then immerse the students in the best models, where those rules are at play.

Understandably, the conviction of classical study eludes many today. We have already seen how the reforms of Eliot’s Committee of Ten pushed classical languages to the margins of the American secondary school curriculum. And we are justified in challenging the proposition that Greek and Latin proficiency offers “all the knowledge…of vital importance” today, for we live in a world of scientific and technological knowledge that could not have been dreamed of in the days of Erasmus. Moreover, most scientific and international discourse is conducted in English, which discourages most American students from branching out beyond today’s global lingua franca.

Though a resurgence of interest in classical education (and Latin instruction) has produced dozens of academies over the past two decades, this essay is no romantic overture for a return to the grandeur that was Rome. However, Renaissance pedagogies do present a consistent motif worth our consideration. Let me be clear: I am not offering some alternative old-school utopia. I do not expect American school children to master the classical languages (unless they so choose). And I do not believe that this Renaissance ideal is the province of an upper-middle-class, cosmopolitan, academy-bound elite. The pre-Enlightenment method of education was accessible to the common man, because it offered an essential (not overbearing) amount of linguistic description, followed by a prescribed sequence of reading: renowned authors who were accepted as the language authorities—by their example.

The curriculum of Erasmus and the English Renaissance rightly conceptualized the priority of language assimilation as central to the educational project. Moreover, that curriculum did not view language as a means to an end—though, in the end, it produced some of the greatest philosophers, statesmen, and dramatists of the age. Erasmus and his colleagues perceived language study as a good in itself—one to be delighted in by teachers and students alike. By gradually immersing students in the best literary and philosophical works, those pedagogues were displaying an astute knowledge of human nature: the innate appeal of excellence. Students want to know how language works, and they are naturally inclined to acknowledge the existence of a select few masters of the language. If students are introduced to those exemplars by caring, thoughtful tutors, they will emulate the qualities of their mentors and continue the tradition of excellence.

However, that Renaissance ideal is confronted by another impulse, typically found in close proximity to the ideal: the movement toward pedagogical circumscription. Once an ideal method has been identified, there is an all-too-human desire to codify and package the effect of the master-teacher, in an efficient and systematic way. In the name of expedience, a circumscriber might offer the following suggestion: “Perhaps all of that poetry and history is not really necessary, if we want to pare it down to its ‘philosophical’ essence.”

Baldwin notes that within several generations, the Renaissance ideal of language training and the recognition of canonical authors had been displaced by eager, progressive pedagogues who “wanted the boys to have something more practical than words, language, and scraps of Greek and Roman History.” Those innovative educators, much like our own progressive sophisters, sought to replace the objective measure of tradition—i.e., the literary and philosophical works that have stood the test of time—with the ephemeral standards of contemporary society.

With the sweeping influence of Enlightenment thought across Europe, English education would follow continental trends, including the search for “natural ways” of educating (cf. Rousseau). In eighteenth-century England, schoolmasters were seeking to impart the mere “Signification of Words” (direct translation, apart from the experience of literary context), so that students might no longer “fear to imitate…their Authors.” Baldwin emphasizes this dramatic pedagogical change: “[The eighteenth-century schoolmaster] is wholly out of sympathy with the Renaissance ideal of style, and so would completely reshape the curriculum to get more matter and less art.” ([2], p. 461, emphasis added)

Citing Locke’s theory of human understanding, these schoolmasters radically departed from the view that language is a trans-generational legacy, learned by assimilation through years of close study with gifted instructors and master-models, with its explicit moral hierarchy of excellence. The Enlightenment principle of tabula rasa implicitly denied the traditional view of fallen human nature, replacing sin with error of thought—to be corrected by better, modern philosophies (over and above theology).

While the effect of this philosophical shift would not be felt immediately, the decline of the literary tradition can be traced, in part, to Enlightenment preferences for man-made systems over the Renaissance preoccupation with God-given language traditions (beginning with Holy Writ). As David Lyle Jeffrey explains, the Christian understanding that “all divisions of knowledge are handmaids of theology…[presupposed] a providential unity of reason and revelation.” The literature and philosophy of the Erasmus tradition were coherent because of their binding relationship to the epistemological “view of the interconnectedness of liberal learning or the products of reasoned investigation with biblical revelation and its progressive understanding in the Church.” ([8], p. 29) The curriculum possessed a center that could hold.

But progress would eventually wither the religious soul of the grammar school, though that would not be felt for several generations. Religion would continue to be a subject, in the official sense. But, naturalistic pedagogies from the philosophes would eventually strip away the “superstitious” and “irrational” framework of the Renaissance project—i.e., the Biblical and Christian theological foundations—replacing them with new, rationalistic foundations of knowledge, as the Encyclopedists’ effort demonstrates.

While gifted Enlightenment voices argued persuasively, English grammar school culture would not fare well following such radical replacement therapy, and the literary-theological education of Shakespeare’s day would eventually pass away, replaced by a scientifically-oriented approach to education. In a striking parallel with progressive educators of the American experience, English pedagogues of the eighteenth century enervated the language curriculum by denouncing the great writers of the past and “advising their students to read the Evening Post or some other newspaper regularly.”2 ([2], p. 463, emphasis added)

In 1944, even as Baldwin was writing his account of enlightened English schoolmasters, progressive reforms in the U.S. were repeating that very history. The modern tendency to move away from the literary canon toward a more experiential and manageable pedagogy has often repeated itself since the days of Erasmus and Shakespeare. And, having become “Men of Sense” who deplore “exactly the things which were ‘practical’ for the poet Shakespeare,” enlightened schoolmasters continue to condemn the literary curriculum. Baldwin concludes: “I feel quite certain that Shakespeare was lucky in going to grammar school in the sixteenth century when it still had an impractical literary objective.” ([2], p. 460)

We know, intuitively, that something has been lost. There is a felt presence in the earlier method of language pedagogy that defies the regime of testing, for all of its efforts to reduce education to manageable, measurable quantities. Having considered the SAT’s history, we recognize the language-limiting assault of deleting the content-essay exams in 1942; the occupying forces of the content-stripping, “culturally fair” mandates of the 1970s (which continue through institutional censorship, as described by Diane Ravitch); and the battering of a language stronghold, semantic association, with the elimination of analogy items in 2005 ([4], online). Once such profound concessions have been made to the reductionism of discrete-point standardized tests, surely a language-based pedagogy is in hostile territory.

If there is to be an educational renaissance in America, then we must recover Dorothy Sayers’ “lost tools of learning,” beginning with pedagogy that esteems the literature and philosophy that were once essential to basic education. We must foster the environment of a profoundly intelligence-shaping, literature-based pedagogy, which is the necessary (but not sufficient) condition to generate the acumen of such great minds as Cranmer, Hooker, Bunyan, Addison, Defoe, Johnson, Pope et al.

We must also recognize that such a program will be fraught with great difficulties, not the least of which is finding educators willing to revisit such an unfashionable pedagogy, one which seems too “old.” And how would we develop consensus on the materials for such a curriculum, when the very notion of a “canon” causes many of our contemporaries to cringe? (For one such canonical list, see Erasmus’ three dozen authors, arranged in an eight-grade sequence.) Given the nature of historiography, how could we determine an “appropriate perspective” or the common elements to be included? These are essential questions concerning the enterprise, and they are potentially insurmountable obstacles. Yet, given our present course, we haven’t much to lose.

While more students are attending college than at any previous period in American history, the content and quality of that education have been diminished by a process of curricular dispersion and language atrophy. While psychometrics has advanced its ability to measure facets of human cognition consistently, the intellectual center of education—the content—has increasingly withered, to the point of irrelevance.

Today, our emphasis on accountability permeates the educational project: in elementary and secondary education, the focus is on testing—e.g., the National Assessment of Educational Progress, or simply “the Nation’s Report Card”; in higher education, the national accrediting bodies emphasize outcomes assessment, which requires tangible, measurable data to demonstrate educational “success.” Surely, these measures of accountability aim to identify (by testing) the presence of some learning or knowledge that has been acquired as a result of our efforts. And I would suggest that any proposed solution must at least acknowledge and meet these minimum standards.

While I certainly do not claim to have all the answers, I am inclined to accept that the Renaissance ideal understood its educational end better than we do. Ultimately, a literary-classical education involved training a student to write and speak, drawing on a corpus of content, with an emphasis on wisdom and eloquence—truth and grace. Using grammatical, logical, and rhetorical analyses as topographical maps for exploration (the tools), students were taken on high-seas adventures and world-altering discoveries, through realms of golden words. Didactic explanations were always secondary to the beloved models, “for grammar is derived from authors, not authors from grammar.” ([2], p. 87) Students learned to imitate great style by mimicry, adaptive use, and synthesis of various rhetorical devices (fable, comparison, figure, etc.). Individual recitations demonstrated personal competence with shared content. Thus, students were able to experience the thrill of mastery, as they used language—in writing and in speech—to convey knowledge, passion, and craftsmanship.

To be fair, we must believe that progressive educators had similar experiences in mind for their students. What teacher would desire anything less? And we must acknowledge that progressive reforms were attempting a Herculean task: managing an educational system in a dramatic state of flux—burgeoning enrollments, the assimilation of widely disparate cultural backgrounds, overcrowded schools, conflicts over civil rights, etc. Today, our emphasis on global competitiveness in the wake of the information-technology revolution leaves most of us nonplussed.

After a century of efforts to reinvent education, Americans find themselves hoping for something to identify as a national standard. And, of course, the tests will offer some common metric to verify the effect of our curricular interventions. Those “extreme conservatives” may be hoping to recover some semblance of a discernible curriculum in American schools. But it is unlikely that those hopes will be realized, for the problem is systemic.

A curriculum that is process-oriented, with narrowly restricted content (scrutinized by those who would debunk any reasonable core of subject matter), cannot produce serious intellectual activity. Therefore, we must begin the long and painful process of re-enriching the content of our courses—against whatever resistance may be advanced.

Test makers call the unintended effects of testing on instruction “washback.” This brief history has revealed more than its share of twentieth-century washback. Though aptitude tests (like the SAT) often correlate closely with a general factor of intelligence, they have little to show us of a student’s capacities with the subject matter—which should be our primary concern. I repeat: we must recover the content.

Charles Murray, a long-time advocate of the g-factor and co-author of The Bell Curve (1994), recently did an about-face on the SAT. In his essay “Abolish the SAT,” Murray explained that after years of standing by the most well-known standardized test in America, he was abandoning the stronghold of general aptitude because the test failed to offer more predictive validity than the SAT Subject Tests (achievement tests of specific content). That is, the content tests were equal to or superior to the SAT Reasoning Test (the verbal and quantitative composite) in predicting college success. Murray argued that getting rid of the SAT Reasoning Test, with all of its controversial history, would “have the additional advantage of being much better pedagogically…[putting] a spotlight on the quality of the local high school’s curriculum.” ([11], p. 110) That sounds like a sensible idea. In fact, it sounds a lot like an earlier admonition. Remarkably, the SAT Subject Tests are offered in precisely the same categories as the essential secondary school subjects endorsed by Charles Eliot’s Committee of Ten: English (literature), history, mathematics, sciences, and languages.

There is no getting away from it: tests do not revolutionize education; they trivialize it. The very best psychometric instruments merely measure the responses of an examinee on a carefully designed, technically precise assessment of certain topics or skills. In many cases, they merely confirm our suspicions. But, more than measures, we need a plan. Mark Twain is reputed to have said, “To a man with a hammer, everything looks like a nail.” We would do well to put the hammer down long enough to see what it is that we’re building. Then we might discover, close at hand, a few forgotten tools of learning and the blueprint for a lasting educational edifice.
