Jill Adelson, Ph.D.
Scott Peters, Ph.D.
A recent article in the Hechinger Report covered a study by Christopher Redding and Jason Grissom, soon to be published in Educational Evaluation and Policy Analysis. In this study, the authors used data from the Early Childhood Longitudinal Study, Kindergarten Class of 2010–11 (ECLS-K:2011) to see whether students who were reported as receiving gifted services performed better than they had prior to receiving services.
The Hechinger Report focused on the fact that the learning gains for students receiving services were, on average, small: 0.066 standard deviations in reading and 0.019 standard deviations in math. The math difference was not statistically significant, and we would go even further and say the reading difference was not practically significant; it was essentially zero. The study also noted that the small effect in reading disappeared for Black students. Interestingly, that difference in effect for Black students varied depending on the school’s racial composition. Importantly, although the authors did control for a number of covariates, this was still an observational/correlational study.
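For intuition about just how small these effects are, a quick back-of-the-envelope calculation (a sketch, assuming normally distributed achievement scores) translates them into percentile terms: an effect of 0.066 standard deviations moves an average, 50th-percentile student to only about the 53rd percentile.

```python
from statistics import NormalDist

# Average learning gains reported in Redding and Grissom's study,
# expressed in standard deviation units.
effects = {"reading": 0.066, "math": 0.019}

for subject, d in effects.items():
    # Assuming normally distributed scores, an average (50th-percentile)
    # student shifted up by d standard deviations lands at this percentile.
    percentile = NormalDist().cdf(d) * 100
    print(f"{subject}: +{d} SD -> ~{percentile:.1f}th percentile")
```

Under that assumption, the reading effect corresponds to roughly the 52.6th percentile and the math effect to roughly the 50.8th, which is why we describe both as essentially zero in practical terms.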
Although the study is impressive in that it uses nationally representative data, this ends up being a double-edged sword. The ECLS-K provides a way to examine average effects across the country, but it does not contain any information about the nature, scope, or duration of the services. In the study, “gifted” was determined by whether the student’s classroom teacher reported the student as receiving services in reading/language arts or math (in third grade, the teacher also had the option of responding that the student received gifted services with no specific content area) or by whether the student had an IEP that included gifted services. Redding and Grissom state that this operational definition of “gifted” is aligned with an enrichment model.
The result of this analytic choice (which was not unreasonable) is that we have zero idea what services the students received. It’s likely that most students received zero services aside from within-class differentiation. Too often, gifted programs tend to be low dose, offered for short periods of time and as pull-out programs a few times per week. We’ve also seen recent research showing that while students tend to be identified based on high scores in reading and math, they tend to be provided services in areas that don’t match their area of need (e.g., creative and critical thinking programs). This mismatch between identification and services is not unusual, and it also complicates research that tries to link identification, services, and outcomes. In fact, the authors of the study found that when they matched student identification with outcome (that is, examined math achievement only for students identified in math, and reading achievement only for students identified in reading), they found stronger programming effects. It’s also likely that some gifted services resulted in large gains while others resulted in zero or even negative outcomes; the main findings in Redding and Grissom’s study dealt with average effects. Again, this is not a criticism of the authors or their study. This is simply what happens when you look at outcomes across such a large scale: you’re unable to drill down into the specifics of what worked and what didn’t.
In the end, we should not be surprised that there were no effects. Indeed, a 2012 paper, using an earlier wave of ECLS-K, showed the same thing using a different quasi-experimental method of analysis. “Gifted” is just not a useful independent variable, especially across many states or districts. It means too many different things for us to derive any meaningful inferences about what intervention did or didn’t work. Imagine looking at the effects of “instruction” on student outcomes, where the variable was operationalized as whether or not the student was in school. Any findings, positive or negative, wouldn’t be terribly informative because we would have no idea what kind of instruction the students received, which approaches worked, and which didn’t.
There are some clear takeaways from this study:
Authors: Scott Peters, Ph.D., professor of assessment and measurement methodology, University of Wisconsin-Whitewater, and Jill Adelson, Ph.D., founder of Adelson Research and Consulting, LLC, and co-editor of Gifted Child Quarterly.
The opinions expressed here are not necessarily those of NAGC.