Is “Academically Adrift” statistically adrift?

Jacob Felson points me to this discussion by Alexander Astin of a recent book on college education:

The implications of the study recently released with the book Academically Adrift: Limited Learning on College Campuses, by Richard Arum and Josipa Roksa, have been portrayed . . . in apocalyptic terms: “extremely devastating in what it says about American higher education today” . . . The principal finding giving rise to such opinions is the claim that 45 percent of the more than 2,000 students tested in the study failed to show significant gains in reasoning and writing skills during their freshman and sophomore years.

Astin continues:

It is not my custom to offer technical critiques of other researchers’ work in a public forum, but the fact that this 45-percent conclusion is well on its way to becoming part of the folklore about American higher education prompts me to write. . . .

The method used to determine whether a student’s sophomore score was “significantly” better than his or her freshman score is ill suited to the researchers’ conclusion. . . . If the improvement was at least roughly twice as large as the standard error . . . they concluded that the student “improved.” By that standard, 55 percent of the students showed “significant” improvement—which led, erroneously, to the assertion that 45 percent of the students showed no improvement. . . .

Here’s the dilemma: The more error inherent in the test, the less likely you are to conclude that any given student’s improvement is “significant.” But how much error is present in the CLA? . . . the authors provide no information on the CLA’s reliability at the individual student level . . . I [Astin] even looked up several technical articles referred to in the authors’ Methodological Appendix, but was still unable to find any information on the reliability of individual students’ scores. In fact, one of those technical reports flatly states, without explanation, that “student-level reliability coefficients are not computed for this study.”

In short, these considerations suggest that the claim that 45 percent of America’s college undergraduates fail to improve their reasoning and writing skills during their first two years of college cannot be taken seriously. With a different kind of analysis, it may indeed be appropriate to conclude that many undergraduates are not benefiting as much as they should from their college experience. But the 45-percent claim is simply not justified by the data and analyses set forth in this particular report.
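Astin's point can be made concrete with a toy simulation. The numbers below are entirely invented (they are not the CLA's reliability figures, which the report does not supply): every simulated student truly improves by the same amount, yet measurement error keeps many observed gains below the two-standard-error cutoff, so a "no significant improvement" count badly overstates the number of non-improvers.

```python
import random

random.seed(1)
n = 2000          # roughly the study's sample size
true_gain = 0.5   # assumed: every student truly improves by half an SD
se_diff = 0.4     # hypothetical standard error of an individual gain score

# observed gain = true gain + measurement noise
gains = [true_gain + random.gauss(0, se_diff) for _ in range(n)]

# the report's criterion: count a student as "improved" only if the
# observed gain exceeds twice its standard error
significant = sum(g > 2 * se_diff for g in gains)
print(f"{significant / n:.0%} of students clear the two-standard-error bar")
```

Here 100 percent of the simulated students genuinely improve, but well under half pass the significance test, which is exactly the dilemma Astin describes: the noisier the test, the fewer individual gains look "significant," regardless of how many students actually learned anything.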

I have not read the report in question, so I have nothing further to add. It would be good to see a scatterplot of students' before and after scores to get a sense of how much improvement was happening at the individual level.
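Even without the plot, the raw before/after pairs would support a simpler summary than the significance count: how many students scored higher the second time at all. A sketch on simulated data (the gain and noise values are made up for illustration, not taken from the study):

```python
import random

random.seed(2)
n = 2000
pre = [random.gauss(0, 1) for _ in range(n)]
# assumed: a modest average true gain of 0.3 SD, plus retest noise
post = [p + 0.3 + random.gauss(0, 0.6) for p in pre]

# fraction of students whose second score beats their first,
# ignoring significance entirely
improved = sum(b > a for a, b in zip(pre, post))
print(f"{improved / n:.0%} scored higher the second time")
```

Plotting `pre` against `post` with the 45-degree line overlaid would show the same thing visually: points above the line improved, and the cloud's spread around the line is the measurement noise that the significance criterion is fighting against.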

Comments

If your new mean is just better than two standard errors above the old mean, doesn't that mean that about 2% were at or below where they started and that 98% improved? That sounds better (and truer) than "55% improved." Sure, because of error, you can't be exactly sure which students fit in that 98%, but if you're measuring efficacy that doesn't really matter.
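The 98 percent figure in this comment mixes up two different quantities: the standard error of the *mean* gain (which shrinks with sample size) and the spread of *individual* gains (which does not). A quick simulation with invented numbers shows that a mean gain just clearing two standard errors of the mean says almost nothing about what fraction of individuals improved:

```python
import math
import random

random.seed(3)
n = 2000
sd_individual = 1.0                 # spread of individual gain scores (invented)
sem = sd_individual / math.sqrt(n)  # standard error of the mean gain
mean_gain = 2.1 * sem               # mean gain just clears two standard errors

gains = [random.gauss(mean_gain, sd_individual) for _ in range(n)]
frac_improved = sum(g > 0 for g in gains) / n
print(f"mean gain = {mean_gain:.3f}, fraction improved = {frac_improved:.0%}")
```

With 2,000 students the standard error of the mean is tiny, so a mean gain of two such standard errors is compatible with barely more than half the individuals improving, not 98 percent.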
