
Assessing the Value of a Harvard Education

When Harvard's Economics Department wanted to improve its huge introductory course, Social Analysis 10, researchers conducted a careful study comparing course objectives with actual results. The analysis showed that students retained broad ideas from the course, but not specific details. As a result, Economics Department instructors revised their teaching strategies, eliminating from the course much of the technical terminology that students were forgetting anyway.

This study is an example of a growing national trend toward increased critical evaluation of higher education. At Harvard and other universities, educators are trying to assess whether or not higher education is achieving its goals and what those goals should be.

As the nation reconsiders the return it expects from its investment in education, many politicians have urged universities to make sure they are teaching their students successfully. But as assessment efforts have grown in the past few years, one question scholars have yet to answer is what constitutes a good education. Is it mastery of a narrow body of knowledge--the terminology of economics--or the development of broad interdisciplinary skills, like the concepts now emphasized in Ec 10?

At Harvard this issue, among others, is being addressed by sixty leading educators who meet once a month as part of a Seminar on Assessment. These scholars, who began meeting in September, are assessing the effectiveness of various aspects of the University, ranging from specific academic programs to the role of extracurricular activities in student life.

The Seminar, which was established by President Bok last spring, is part of an explosion of interest in educational evaluation. Fueled by several recent reports criticizing the nation's colleges and universities for failing to evaluate the effectiveness of their programs, these assessment efforts range from the more open-ended and scholarly, like Harvard's, to very specific state-mandated standardized tests.


Those critical reports, including Involvement in Learning from the U.S. government's Office of Research and Development of Education (ORDE) and the Carnegie Foundation's College, have "put assessment front and center" on the higher-education agenda, according to Cliff Adelman, senior associate in ORDE.

Critical evaluation of educational programs is relatively new, educators say. Colleges have traditionally done very little empirical self-examination. Instead, they have often based program evaluations on impressions and intuition rather than hard data. "The time faculties and administrators spend working together on education is devoted almost entirely to considering what their students should study rather than how they can learn more effectively or whether they are learning as much as they should," Bok wrote in his book, Higher Learning, published in the fall.

The Harvard Seminar hopes to address Bok's concerns about Harvard's ability to criticize and evaluate itself.

But in order to evaluate teaching and extracurricular programs, the Seminar first needs to determine what constitutes a successful outcome, Light says, adding that it must then develop tests and other criteria for measuring those definitions of success.

Assessment techniques have been the focus of persistent debate among educators. Until recently, many experts believed that the broad objectives proposed by assessment advocates were unrealistic given current research tools. "To study how well students are being educated you need a huge amount of data," said Kenneth C. Green, associate director of UCLA's Cooperative Institutional Research Program (CIRP).

Because of the vast amounts of information needed to analyze higher-educational outcomes, most early assessment programs measured student knowledge with standardized tests such as the Florida university system's "rising junior exam," which was required of all students before they could enroll as third-year undergraduates. These basic-skills tests are easy to use but draw wide criticism from educators, who argue that they trivialize higher education by not testing higher-order skills.

"Such exams emphasize the acquisition of facts and the mastery of simple skills, [but]...are not suited to measuring how clearly students think about social justice, how deeply they can appreciate a painting or a literary text, how much they have gained in intellectual curiosity, [or] how far they have come in understanding their own capacities and limitations," Bok writes.

In response to this criticism, along with complaints that the exams test students rather than institutions, assessors at Harvard and elsewhere are seeking to diversify their tools. The University of Tennessee at Knoxville, for example, uses the College Outcome Measurement Project (COMP) exam to test student ability to clarify values, solve higher-order problems, and communicate. Seniors are also asked to evaluate their college experience and education in a thorough survey.

"Test scores may tell you what students are learning, but not how they're getting that information," says Trudy W. Banta, a professor at Knoxville's Learning Research Center (LRC).

The University of Tennessee has identified a number of critical problems as a result of the LRC assessment.

"We can point to lots of changes that have occured from this, like improved interaction between faculty and students, strengthened internships, and new training and supervision for TA's" Banta said.

Newly developed evaluation tools such as the COMP exam and CIRP surveys have largely dispelled fears that assessment is impractical, educators say. Instead, faculties have become embroiled in a debate over what educational outcomes they should pursue through assessment--a topic that may be the most controversial addressed by the Harvard Seminar.

Assessment often encourages colleges to reevaluate their educational goals. As a result, many institutions no longer consider absolute levels of knowledge among graduates as important as intellectual "value-added" by the university, said Light, who was chosen to head the seminar because of his expertise in program evaluation.

"The idea is to compare what students can do when they arrive versus what they can do as seniors," Light said. "If a college takes weak students who are terrible writers and makes them pretty good--that's a great job."

By looking at how Harvard students' writing ability changes over their four years--how much "value" a Harvard education adds to prose style--assessors get a better estimation of the University's writing instruction than by simply comparing Harvard seniors' essays with those of seniors elsewhere.

At the same time that Harvard assessors are reshaping measures of educational quality, they are also reconsidering what constitutes a quality education. Many educators, for example, now consider knowledge of specific facts less important than the ability to apply and integrate data.

One leader in the assessment movement, Alverno College in Milwaukee, WI, uses an "ability-based" curriculum to foster such growth. Fields of study are de-emphasized; instead students are tested in eight cross-disciplinary areas such as communication, analysis, and aesthetic response.

Most faculties, however, continue to test students without consciously analyzing what abilities they seek to foster, Adelman said. Advocates of ability-based curricula feel that these educators force students to learn facts that are not critical in a world of rapidly developing information technology.

"Today it's exceedingly difficult to agree on some narrow body of knowledge a student should learn," said Austin Doherty, Alverno's vice-president for academic affairs. "There's too much information out there."

This view is opposed by those who feel that the liberalized curriculum of the 60s and 70s has adversely affected higher education. Secretary of Education William J. Bennett, for example, has suggested that all college students be required to read the "great books" of Western civilization. This "back-to-basics" version of assessment has found strong support among several state legislatures, which have mandated basic-skills tests and specific curricular reforms at public universities. While Harvard is not subject to state-mandated reforms, it is closely studying assessment projects at schools that are.

Legislated reforms are not popular among educators. A Florida statute which requires students to write 6000 words per month has been a frequent target. "The law was based on the observation that students don't do much writing," Adelman said. "We can't quarrel with the motivation, but whether or not the right amount is 60, 6000, or 60,000 words we don't know. The decision was arbitrary."

Criticized as too poorly informed to dictate specific reforms, states have been moving away from absolute standards of accountability. At least eight states that mandate assessment now allow university faculties to design the specifics of their self-evaluation programs, Adelman said. Following a plan outlined by Missouri Governor John Ashcroft, these states often offer financial rewards to schools that demonstrate improvement over several years of assessment.

But financial incentives to achieve set standards may result in a narrowing of curriculum and homogenization of higher education. "There is widespread recognition that the smorgasbord approach in the 60s and 70s didn't serve students as well as it might," said Banta, who operates under a system of financial incentives. "If we find a middle ground between telling students exactly what to do and the cafe-style approach of before, we will have narrowed the curriculum somewhat. But that's not all bad," she said.

Harvard is, if anything, moving in the other direction, Light said. The Core Curriculum--which has been criticized by Bennett as too diffuse and by students as not directed enough--will not be evaluated by the Seminar, Light says, but he adds that Harvard must still make many decisions about its educational breadth and goals.

In the face of rising educational costs, the Secretary of Education has joined the call for greater accountability among the nation's colleges and universities, saying "The Department of Education has an obligation...to suggest better means by which the higher education consumer can be confident he is buying a sound product."

Adelman adds, "We've never picked up the hood on the car of higher education and looked underneath to see what's going on."
