The Real Takeaways from the Redding and Grissom Study

Jill Adelson, Ph.D.
Scott Peters, Ph.D.

A recent article in the Hechinger Report covered a study by Christopher Redding and Jason Grissom, soon to be published in Educational Evaluation and Policy Analysis. In this study, the authors used data from the Early Childhood Longitudinal Study – Kindergarten (ECLS-K; 2010–2011 cohort) to examine whether students who were reported as receiving gifted services performed better than they had prior to receiving those services.

The Hechinger Report focused on the fact that the learning gains for students receiving services were, on average, small: 0.066 standard deviations in reading and 0.019 standard deviations in math. The math difference was not statistically significant, and we would go even further and say the reading difference was not practically significant; it was essentially zero. The study also found that the small effect in reading disappeared for Black students and, interestingly, that the effect for Black students varied depending on the school’s racial composition. Importantly, although the authors controlled for a number of covariates, this was still an observational, correlational study.
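To see why we describe those effects as essentially zero, it helps to translate a standardized mean difference into a percentile shift using the normal CDF. The short Python sketch below is our own back-of-the-envelope illustration, not part of the study’s analysis, and it assumes approximately normal score distributions:

```python
from scipy.stats import norm

# Effect sizes reported in the study (standardized mean differences).
effects = {"reading": 0.066, "math": 0.019}

for subject, d in effects.items():
    # Under the normality assumption, shifting an average (50th percentile)
    # student up by d standard deviations lands them at percentile Phi(d).
    percentile = norm.cdf(d) * 100
    print(f"{subject}: d = {d:.3f} -> roughly the {percentile:.1f}th percentile")
```

In other words, the average student receiving services moved from the 50th to roughly the 53rd percentile in reading and barely moved at all in math.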

Although the study is impressive in that it uses nationally representative data, this ends up being a double-edged sword. The ECLS-K provides a way to examine average effects across the country, but it does not contain any information about the nature, scope, or duration of the services. In the study, “gifted” was determined by whether the student’s classroom teacher reported the student as receiving services in reading/language arts or math (in third grade the teacher also had the option of responding that the student received gifted services with no specific content area) or whether the student had an IEP that included gifted services. Redding and Grissom state that this operational definition of “gifted” aligns with an enrichment model.

The result of this analytic choice (which was not unreasonable) is that we have no idea what services the students actually received. It’s likely that most students received little beyond within-class differentiation. Too often, gifted programs are low dose: offered for short periods of time, as pull-out programs a few times per week. We’ve also seen recent research showing that while students tend to be identified based on high scores in reading and math, they tend to be provided services in areas that don’t match their area of need (e.g., creative and critical thinking programs). This mismatch between identification and services is not unusual, and it also complicates research that tries to align identification, services, and outcomes. In fact, when the authors of the study matched student identification with outcome (that is, examined math achievement only for students identified in math, and reading achievement only for students identified in reading), they found stronger programming effects. It’s also likely that some gifted services produced large gains while others produced zero or even negative outcomes; the main findings in Redding and Grissom’s study dealt with average effects. Again, this is not a criticism of the authors or their study. This is simply what happens when you look at outcomes at such a large scale: you’re unable to drill down into the specifics of what worked and what didn’t.
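As a toy illustration of that last point, consider the hedged Python sketch below. The effect sizes in it are entirely made up for illustration (nothing here comes from the ECLS-K data); it simply mixes an equal number of effective and ineffective programs and computes the kind of national average a study like this would observe:

```python
import numpy as np

rng = np.random.default_rng(0)

# Entirely hypothetical effect sizes, chosen only for illustration:
# half the programs meaningfully help (d near +0.30) and half do
# nothing or slightly harm (d near -0.15).
program_effects = np.concatenate([
    rng.normal(0.30, 0.05, size=500),   # effective programs
    rng.normal(-0.15, 0.05, size=500),  # ineffective programs
])

# A single national "average effect" hides that split entirely.
print(f"average effect across programs: {program_effects.mean():.3f}")
```

With these made-up numbers the average lands around 0.07, squarely in the “essentially zero” territory of the study’s reading estimate, even though half of the simulated programs work quite well.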

We should not be surprised that there were no effects. Indeed, a 2012 paper using an earlier wave of ECLS-K showed the same thing with a different quasi-experimental method of analysis. “Gifted” is just not a useful independent variable, especially across many states or districts. It means too many different things for us to draw any meaningful inferences about what intervention did or didn’t work. Imagine looking at the effects of “instruction” on student outcomes, where the variable is operationalized as whether or not the student was in school. Any finding, positive or negative, wouldn’t be terribly informative, because we would have no idea what kind of instruction the students received, which approaches worked, and which didn’t.

In the end, there are some clear takeaways from this study:

  1. We have to do a better job of aligning goals, identification, services, and evaluation. This applies within districts and also to researchers. For districts, it is critical that we identify student needs and then provide services to meet those specific needs (while aligning with our goals and being sure we evaluate our identification and services appropriately). For researchers, it is critical that we consider what the outcome of gifted services should be and take into account how we operationalize “gifted” students and services.
  2. Gifted services should focus on actually extending skills based on student needs. Too many gifted programs identify students who are multiple years above grade level but then provide them with differentiated grade-level content. No wonder their test scores don’t go up! The services don’t do anything to actually extend learning in the areas measured by the test.
  3. Gifted services should be culturally sensitive. There is no such thing as a generically “good” gifted service. An effective service is one that extends learning based on the needs of the students in a particular setting and is further tailored to the culture of the students in that particular school. A service that works well for students in rural Wisconsin farm country might not work as well for students in Memphis. If there’s anything that can be said about good teaching, it’s that it’s differentiated to meet students as individuals.
  4. Although it is good to evaluate the effects of general “gifted” programming, we should also do more to study specific types of interventions. Again, this is not a criticism of Redding and Grissom; the nature of the ECLS-K did not allow them to drill down further into the nature of the intervention. The effect of School A’s gifted program doesn’t generalize to School B’s gifted program because the services are completely different. “Gifted” as an independent variable and as a description of a service just falls flat. Better to focus on the outcomes of specific advanced academic interventions such as subject acceleration, early algebra, or a talent development program like Young Scholars.

Authors: Scott Peters, Ph.D., professor of assessment and measurement methodology, University of Wisconsin-Whitewater, and Jill Adelson, Ph.D., founder of Adelson Research and Consulting, LLC, and co-editor of Gifted Child Quarterly.

The opinions expressed here are not necessarily those of NAGC.
