These pieces originally appeared as a weekly column entitled “Lessons” in The New York Times between 1999 and 2003.
[ THIS ARTICLE FIRST APPEARED IN THE NEW YORK TIMES ON AUGUST 29, 2001 ]
SAT scores aren’t up. And that’s not bad, not bad at all.
The College Board released its annual SAT report yesterday. Average scores, math and verbal combined, were 1020, barely higher than a quarter-century ago. The other college testing service, the ACT, has also announced a stable average score of 21 (equivalent to about 990 on the SAT).
Contrary to much official hand-wringing, though, stable in this case does not mean unimproved. Hidden in the data is more hopeful news than most people would expect.
These tests are voluntary. If only high achievers take them, averages reflect an elite group. But when a broader range of students takes them, the results must be interpreted differently.
The number taking the tests has in fact grown a lot. This year 59 percent of all 18-year-olds took the ACT or the SAT, up from 40 percent in 1976. By itself, this big expansion should push average scores down, because more modest achievers are now participating, whereas high achievers as a group have always done so. That results remained steady (even improving a little) suggests real gains.
Scores of both minority and white students, examined separately, increased in this time. But because the number of minority students as a share of all test-takers has grown, and because their scores have continued to be lower than those of whites, the overall average has not risen as much as the separate minority and white averages. The growth in the number of lower-scoring (but improving) minority students taking the tests is a good sign, even if it stunts the overall average.
While these realities are now widely understood, few experts recognize another positive trend: growth in the number of top-scoring students. Unchanging mean scores mask gains of not only average students but the most able and affluent ones as well.
Consider those at the top, with combined math and verbal SAT scores of over 1500 (or roughly 34 on the ACT). This year about four of every 1,000 18-year-olds did so well; only one of every 1,000 scored that high in 1976. (The College Board and the ACT changed scoring systems a few years ago. Older scores reported here were converted to the new scales for comparability.)
Improvement is also found at other high levels. In 2001, about 51 of every 1,000 18-year-olds scored over 1300 on the SAT (or 29 on the ACT), nearly double the number who scored that high 25 years ago.
Another way of looking at improved trends among the highest achievers is to track affluent seniors, a group that has always taken college entrance exams in large numbers, so its trend is not distorted by the expanding test-taking pool. Their scores are higher partly because of intellectual stimulation they get from their better-educated parents. (The best predictor of test scores has always been students’ social class.)
This year, students from the highest-earning fifth of families had average SAT scores of about 555 (verbal) and 567 (math). In 1987 (the earliest year in which the College Board collected family income data), comparably affluent students had averages of only about 542 and 535.
Like so many leaders today, officials of the testing services sometimes seem to look for negative news about student performance. The College Board, for example, stressed yesterday that the racial test score gap remained, but gave little emphasis to the rise in minority scores. The reason the gap did not narrow is only that white scores rose even more.
College Board and ACT leaders also note that there has been grade inflation: a student who scores 650 on the math SAT today is more likely to get an A than a student with the same score was in the past. Secretary of Education Rod Paige issued a statement yesterday calling this troubling. But if achievement really has improved, rising grades may simply reflect real gains; it is hard to know whether teachers have overshot by raising grades too much or whether, as Dr. Paige suggested, grades have gone up only because teachers’ standards have collapsed.
In its release, the ACT emphasized that average scores were unchanged, giving less attention to an increase in how many take the test. The ACT announcement prompted Dr. Paige to complain that scores had not improved “despite record levels of spending on education programs over the last decade.”
Actually, the ACT average has improved, to 21.0 from 20.6 in the last decade, a period in which the number of test-takers has grown to 27 percent of all 18-year-olds from 23 percent. It is remarkable that averages gained at all while the test-taking base was expanding.
Both testing services caution that because test-takers are unrepresentative of all seniors, averages can be misleading. But the cautions don’t seem to influence how these data are interpreted by experts. If the ACT this year had presented data in a more positive light, perhaps Dr. Paige would have lauded schools for making such good use of resources.
The annual college-entrance reports can be used to comment on the state of American schooling, but not without more sophisticated analyses than the testing services typically provide.