THEY AIN'T NO EXPERTS! America's kids are back in school—and our experts are back in the saddle:
TUESDAY, SEPTEMBER 5, 2006
With the kids all back in school, we planned to quote the first two paragraphs of this New York Times column, just to marvel at the lucky child involved. (We decided to postpone Casey/Santorum.) Instead, we'll link you to this worthy post, simply saying God bless Josh Marshall and God bless his late, great father.
Special report: They ain't no experts!
PART 1—A VERY ROUGH CROWD:
Uh-oh! This front-page, lead story in Sunday's Post revealed an unsurprising but troubling fact. Jay Mathews, the paper's education reporter, is running—once again—with a very rough crowd:
MATHEWS (9/3/06): Last week, the Washington-based Thomas B. Fordham Foundation released a report from several experts, including advisers to Republican and Democratic administrations, that outlined ways to move toward national standards...
The experts in the report include Texas lawyer Sandy Kress and former deputy U.S. education secretary Eugene W. Hickok, both key education advisers to Bush, as well as Ravitch and former Clinton advisers Michael Cohen and Andrew J. Rotherham.
That's right—Jay's running with those educational experts again! And as his front-page report makes clear, he's been acquiring this rough cohort's bad habits.
In this case, the experts are on a familiar crusade—they want a national testing system to replace the current arrangement, in which the fifty states use fifty test programs to test the nation's school children. In paragraph one, we're given the problem—and the solution: Many states...are reporting student proficiency rates so much higher than what the most respected national measure has found that several influential education experts are calling for a move toward a national testing system. Quickly, we get a few examples of the problem which must be erased:
MATHEWS (9/3/06): Maryland recently reported that 82 percent of fourth-graders scored proficient or better in reading on the state's test. The latest data from the National Assessment of Educational Progress, known as "the nation's report card," show 32 percent of Maryland fourth-graders at or above proficiency in reading.
Virginia announced last week that 86 percent of fourth-graders reached that level on its reading test, but the NAEP data show 37 percent at or above proficiency.
It's true; a lot more kids score "proficient" on some state tests than they do on the national NAEP. Result? Some experts say it's time to be more clear about how well American schoolchildren are doing, Jay says. And he quotes educational expert Diane Ravitch, who complains that the states are using confusing and dumbed-down standards when they administer their tests.
But would a single, national test really help us get more clear about how well American schoolchildren are doing? Not if we're left to the mercies of educational experts—or to the writers who are in their thrall. In fact, this front-page report is muddled throughout; virtually nothing about it is clear. But so it goes when our big ed writers run with this very rough crowd.
How unclear is Mathews' presentation? Try this short passage, which comes right after Ravitch's complaint about all the confusion:
MATHEWS: A recent study by Bruce Fuller, a professor of education and public policy at the University of California at Berkeley, found that states regularly inflate student achievement. In 12 states studied, the percentage of fourth-graders proficient in reading climbed by nearly two percentage points a year, on average.
The NAEP (pronounced "Nape") data show a decline on average in the percentage who were proficient over the same period, Fuller said.
According to Mathews, a recent study shows that states regularly "inflate" student achievement. But what exactly does "inflate" mean here? Does it simply mean that the state tests overstate their students' achievement? Mathews doesn't say, but his passage seems to suggest something different; it suggests that the state tests are showing year-to-year progress where no such progress is really occurring. Those are two quite different things, but Mathews' report almost wholly conflates them. We just keep hearing that state tests are bad—and that the experts have said so.
Putting the study by Fuller aside, Mathews seems to allege two problems with the current situation. Let's clarify:
First, the current situation produces data which are confusing. Many different test programs are in use—and they produce a wide range of results.
Second, the current situation produces data which are just wrong. The respected NAEP produces good data. Those other data? They're just "dumbed-down."
For ourselves, we don't have a firmly-held view about national testing. We don't oppose it; in fact, we'd welcome it in some ways. But we aren't inclined to think it would necessarily accomplish all that much. Yes: Under national testing, you could compare test results state-to-state in a way you can't do now. But that doesn't mean that citizens would necessarily be clear about what those data meant. And it doesn't necessarily mean that the data would be more accurate than those we have now.
First: Would data from a national test somehow make things more clear? Sorry. Simply put, there's nothing that is really confusing about the way things are at present. At present, most state tests set the bar for proficiency somewhat lower than the NAEP does. Writers who find that hard to explain will be able to explain nothing, ever.
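The arithmetic behind the Maryland-style gap is simple enough that a toy sketch can show it. Below, a single set of made-up scale scores (the numbers, cut scores, and the `percent_proficient` helper are all hypothetical, invented for illustration—not real Maryland or NAEP figures) produces two very different "percent proficient" results, depending only on where the bar is set:

```python
# Hypothetical scale scores for ten students (illustrative only).
scores = [310, 345, 360, 375, 390, 410, 425, 440, 455, 480]

def percent_proficient(scores, cut_score):
    """Percentage of students scoring at or above the proficiency cut score."""
    return 100 * sum(s >= cut_score for s in scores) / len(scores)

# Two assumed cut scores: a lower, state-style bar and a higher, NAEP-style bar.
state_cut = 350
naep_cut = 420

print(percent_proficient(scores, state_cut))  # 80.0 — "most kids are proficient"
print(percent_proficient(scores, naep_cut))   # 40.0 — same kids, same test scores
```

Neither number is "wrong" in any absolute sense; the students and their scores are identical in both cases. The gap is entirely a product of where the subjective proficiency line gets drawn.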
Second: Would data from a national test be more accurate—less dumbed-down? That may be true in some way or other, but Mathews doesn't say why we should think so. He simply suggests that his experts think that, then presents the dissent in this manner:
MATHEWS: Some educators see comparisons with NAEP as unrealistic. Gerald W. Bracey, an educational psychologist who writes frequently on testing, noted that 1996 NAEP results found only 30 percent of fourth-graders to be proficient or better in science, even though an international study that year ranked American fourth-graders third in science among 26 nations.
Hmm. Bracey actually presents some data which suggest that NAEP's standards may be artificially high. In fact, Bracey is an expert too—and he has argued this case for years. So how does Mathews deal with this? Simple! He downgrades Bracey to "educator" status, then fails to ask any experts to assess Bracey's data or logic.
Uh-oh! Setting the bar for reading proficiency is, inevitably, a subjective matter. There's no way to say that one standard is right—unless you offer an argument to that effect. But Mathews takes the easy way out, as ed writers often do when they run with this rough, unkempt crowd. He simply presents the views of some experts, thereby avoiding the need to argue his case. Several influential experts have said this, we're told. And that pretty much settles that.
But uh-oh! "I ain't no expert," Rod Steiger said, coming clean to Sidney Poitier decades ago. But then, most of Jay's crowd ain't so expert themselves! All week long, we'll argue that case—and we'll hope Jay breaks free from their influence.
GETTING TO CLEAR:
Downside: A national test might produce less clarity, not more. Except when our ed writers bungle the topic, the current situation has this virtue—it helps parents see that there's more than one place to set the bar for "proficiency." In the hands of even a mediocre writer, citizens can be shown that there's no perfect way to assess a child's reading proficiency. Does your child weigh 100 pounds? A school can tell you that with precision. Is your child a "proficient" reader? That's a whole different animal.
This is a simple point to explain—except for our modern ed writers.
But uh-oh! Under a national testing system, many citizens would think they knew The Truth about how many kids were proficient. That would entail a false sense of precision—and our current crop of education writers would never be able, or inclined, to disabuse them of their false notions.
Bracey says the NAEP standards are artificially high. Is the gentleman more right or more wrong in his view? Don't ask, because you'll never be told! In our current culture, ed writers simply quote preferred experts. There is no chance that you'll ever see an attempt to hash out such a question.
THURSDAY AND FRIDAY: A former ed writer—and future expert—speaks out against merit pay
Oh yeah! Did our educational experts force the state of Virginia to clean up those utterly bogus test scores? The bogus scores which affected every school in the state?
After we explained it for them, in so much detail, did our experts ever get that cleaned up?