
Using Data to Impact Student Achievement

Why we are looking at the wrong data, why we need to stop, and what we need to be looking at instead.

Data in Education

For years in education, we have been data rich but information poor. As educators, we assess, collect data, and assess some more. This data cycle has not allowed us to impact student outcomes; it simply measures them. That is in part because the data is cumbersome and difficult to analyze, interpret, and respond to when we focus on the wrong metrics. With one small shift, we can turn the mountains of data we have into information that more equitably impacts student achievement.

When we look at metrics such as benchmark, proficiency, or percentile, we are less able to act on the data and more likely to simply measure students. We may see a student who did not meet a benchmark, one who is at the 98th percentile, or one who is considered proficient. Other than patting ourselves on the back or flagging the student as at risk, we are often unable to act effectively on this data. With newer, more complex data metrics now easily accessible to all, it is time we shifted which piece of data we look at. Stakeholders and educators need to stop overemphasizing these less actionable metrics and be more intentional about using more effective and equitable ones, such as the student growth percentile.

Student Growth Percentiles

Why is this such a game changer? With big data in education, we are much better able to quantitatively analyze the cause and effect of our instruction on student learning, while also looking more equitably at data for students who arrive at different levels.

Not all data metrics are created equal. Student Growth Percentiles, or SGPs, remove some of the variance that makes it difficult to interpret results. An SGP measures a student’s growth relative to his or her academic peers on a spectrum. For instance, if Daniela got a scale score of 200 at the start of the year, her mid-year score will be compared with those of a large group of other students, her academic peers, who also started with a score of 200. If Daniela gets a Student Growth Percentile of 50, we know that her growth was at the 50th percentile: half of her academic peers grew more than she did and half grew less. If she earned an SGP of 90, then we know that Daniela made more growth than 90 percent of her academic peers.
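To make the peer-group idea concrete, here is a minimal sketch in Python. It is an illustration only: the scores and the peer group are made up, and operational SGP models use more sophisticated statistics than a simple rank among students with an identical starting score.

# A toy illustration of the peer-group idea behind SGP.
# Real SGP models use more sophisticated methods; the students
# and scores below are hypothetical.

def simple_growth_percentile(student, peers):
    """Rank a student's score growth against academic peers with the same starting score."""
    growth = student["mid"] - student["start"]
    peer_growth = [p["mid"] - p["start"] for p in peers]
    # The percentile is the share of academic peers whose growth was lower.
    lower = sum(1 for g in peer_growth if g < growth)
    return round(100 * lower / len(peer_growth))

# Daniela and her hypothetical academic peers all started at a scale score of 200.
daniela = {"start": 200, "mid": 215}
academic_peers = [{"start": 200, "mid": 200 + g} for g in (2, 5, 8, 10, 12, 14, 16, 18, 20, 25)]

print(simple_growth_percentile(daniela, academic_peers))  # prints 60

In this made-up peer group, Daniela grew 15 scale-score points, which is more growth than 6 of her 10 academic peers, so her sketch-level SGP comes out at 60.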

As educators and stakeholders looking at data, we need to shift our question from: “Did the student make the benchmark?” to “What was the student’s growth rate relative to their academic peers?”

When benchmarks are the most commonly used metric, schools are compelled to focus most on the students who are close to the cutoff. This is an inequitable practice that needs to stop. In contrast, when SGPs are used, we can more equitably look at all students, analyze their rates of growth, and begin asking questions in search of the causes. A student with a low score on an assessment can still demonstrate a high SGP, which would indicate that they grew at a high rate compared to their academic peers. Using SGPs makes it easier to analyze data for all of our students instead of only those who are close to a benchmark.

Comparing a student’s growth to that of their academic peers takes out some of the variance and also gives us the opportunity to ask the question: “How is this student responding to this instruction?” We are able to classify that student’s growth as flat, modest, typical, or aggressive. This change in metric also allows us to analyze the data at a macro level by looking at a group of students with high SGPs and wondering what common trends were present in their instruction. Having data measures such as the SGP allows us to ask many more questions about how students are learning and responding to instruction.
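As one hypothetical way to put those labels into practice, the short Python sketch below maps an SGP onto a plain-language description. The cutoff bands are assumptions for illustration only; districts and assessment vendors define their own.

# Hypothetical cutoffs for describing an SGP in words; these bands are illustrative only.

def describe_growth(sgp):
    """Translate a Student Growth Percentile into a plain-language growth label."""
    if sgp < 20:
        return "flat"
    elif sgp < 40:
        return "modest"
    elif sgp < 70:
        return "typical"
    return "aggressive"

for sgp in (10, 35, 50, 90):
    print(sgp, describe_growth(sgp))  # 10 flat, 35 modest, 50 typical, 90 aggressive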

Data Analysis and Goal Setting

As in other industries, we manage what we measure. The next time a goal needs to be set or data needs to be analyzed, the following guiding questions can help ensure your team is creating productive goals that measure what is intended, and can increase the likelihood of efficient and effective data analysis.

1. Metric

Is your team selecting the most effective metric? As we often manage what we measure, is the selected data metric one that will help guide your team to get your students where they need to be? Rather than writing a goal around proficiency data, would it be more appropriate to use Student Growth Percentiles?

2. Questions

During discussions around data, we often hear a piece of data presented as a closing statement: “He got a score of 85,” or “She did not meet the benchmark.” Rather than being the conclusion of a discussion, data should be just the beginning. When we notice a student earning an SGP of 95, we can’t afford to simply pat ourselves on the back and move on to the next piece of data. We need to begin asking questions of ourselves and of our system to determine what the catalyst was for that growth percentile. Data is not the answer to the question; it is the genesis of our questions. We also need to ensure we are asking the most effective questions about the data. As an alternative to asking, “Did the student meet a benchmark?” teams could ask, “What conditions did we create for this student that allowed them to earn an SGP of x?” This lets teams analyze how our instruction and environment led the student to respond the way they did.

3. Mindset

In the past, accountability measures have often had the unintended consequence of focusing attention on students who are near the benchmark, or on the bubble. This practice is inequitable and needs to stop. We cannot discount serving the child who is at the 10th percentile and not yet likely to meet a benchmark, or the child who is already performing well above peers and likely to remain above the benchmark. Making one small change in selecting more equitable and actionable data metrics such as the Student Growth Percentile can promote looking at growth for all of our students.

One Small Shift

Recently in education, we have been data rich, but we have not had the time or training to analyze, interpret, and act upon this data. With sophisticated data metrics now available, it is time we shift our practice to newer measures such as Student Growth Percentiles that better support us as educators in turning data into information, and that information into student achievement.

3 thoughts on “Using Data to Impact Student Achievement”

  1. Student Growth Percentile is a valuable but complex metric, and it is difficult to calculate manually. Most assessments now calculate it for you and list it in an “SGP” or “Growth” column. It is calculated by grouping a set of students who had similar scoring patterns on a specific assessment and then ranking those academic peers on a continuum after additional data points are collected. From this, each student is given a percentile rank relative to those academic peers.

    1. Hi Lisa,
      Can you give a few specific examples of assessments that you have seen provide SGP as a valuable metric? Thank you! Our state testing has it, but what would be some other examples of interim assessments?

About the Author: Lisa Phillips is an Instructional Coach for West Des Moines Community Schools. She draws on over a decade of experience in the classroom as a teacher and coach to explore innovative ways to increase student learning, achievement, and mental health. Her passion for brain research, as well as data and technology in education, leads her to rethink what education can be like for our twenty-first-century learners.