Translating the Statistical Representation of the Effects of Education Interventions Into More Readily Interpretable Forms

Nov 28, 2012 | Published Research

Citation

Lipsey, M. W., Puzio, K., Yun, C., Hebert, M. A., Steinka-Fry, K., Cole, M. W., Roberts, M., Anthony, K. S., & Busick, M. D. (2012). Translating the Statistical Representation of the Effects of Education Interventions into More Readily Interpretable Forms. (NCSER 2013-3000). Washington, DC: U.S. Government Printing Office.

The superintendent of an urban school district reads an evaluation of the effects of a vocabulary-building program on the reading ability of fifth graders in which the primary outcome measure was the CAT/5 reading achievement test. The mean posttest score for the intervention sample was 718 compared to 703 for the control sample. The vocabulary-building program thus increased reading ability, on average, by 15 points on the CAT/5. According to the report, this difference is statistically significant, but is this a big effect or a trivial one? Do the students who participated in the program read a lot better now, or just a little better? If they were poor readers before, is this a big enough effect to now make them proficient readers? If they were behind their peers, have they now caught up?

Knowing that this intervention produced a statistically significant positive effect is not particularly helpful to the superintendent in our story. Someone intimately familiar with the CAT/5 (California Achievement Test, 5th edition; CTB/McGraw Hill 1996) and its scoring may be able to look at these means and understand the magnitude of the effect in practical terms but, for most of us, these numbers have little inherent meaning. This situation is not unusual—the native statistical representations of the findings of studies of intervention effects often provide little insight into the practical magnitude and meaning of those effects. To communicate that important information to researchers, practitioners, and policymakers, those statistical representations must be translated into some form that makes their practical significance easier to infer. Even better would be some framework for directly assessing their practical significance. 
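One such translation can be sketched in a few lines. As a purely hypothetical illustration (the pooled standard deviation of 40 is an assumed value, not a figure from the study), the superintendent's 15-point raw difference can be converted into a standardized mean difference (Cohen's d) and then, under a normal-distribution model, into a percentile gain for the average treated student:

```python
import math

def normal_cdf(x):
    # Standard normal CDF via the error function (stdlib only)
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Means from the superintendent example; the pooled SD of 40
# is an assumption made solely for this illustration.
treatment_mean = 718.0
control_mean = 703.0
pooled_sd = 40.0  # assumed, not reported in the example

# Standardized mean difference (Cohen's d)
d = (treatment_mean - control_mean) / pooled_sd

# Percentile rank of the average treated student within the
# control distribution; the gain over the 50th percentile is
# one common "improvement" translation of an effect size.
percentile = 100.0 * normal_cdf(d)
percentile_gain = percentile - 50.0

print(f"d = {d:.2f}")
print(f"percentile gain = {percentile_gain:.1f} points")
```

Under these assumed numbers the 15-point difference corresponds to a modest standardized effect, moving the average participant from the 50th to roughly the 65th percentile of the control distribution. The point is not these particular values but that such translations give readers like the superintendent a footing for judging practical magnitude.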

This paper is directed to researchers who conduct and report education intervention studies. Its purpose is to stimulate and guide them to go a step beyond reporting the statistics that emerge from their analysis of the differences between experimental groups on the respective outcome variables. With what is often very minimal additional effort, those statistical representations can be translated into forms that allow their magnitude and practical significance to be more readily understood by the practitioners, policymakers, and even other researchers who are interested in the intervention that was evaluated.

 
