Stop Using the Least Likely Example


I’m at a Solution Tree Professional Learning Communities event this week. It’s fine, if a bit too salesy. As for the content, it’s pretty basic stuff. Speaker after speaker has stressed the importance of working as a team so that student achievement isn’t dependent on the teacher lottery because everyone is working together to ensure the success of every child. Ideally, according to the presenters, teachers collaborate to decide on essential standards, share best practices, assess students to evaluate their teaching, and then decide how to respond when kids don’t learn and when they do. Importantly, teachers take collective responsibility for student learning. A big part of this is comparing assessment data to improve teaching. Unfortunately, the presenters regularly made this sound simple. It’s not.

One of the examples used in a session was one we’ve all heard before:

“So if we as a grade level give a common assessment and my two colleagues’ classes have 95 percent proficiency and I have 50 percent, what should we do about that?”

And of course the answer is obvious.

Except I guess it isn’t, because this example keeps getting used. The implication behind the question is something like the following.

We have teachers working in isolation, and they need to collaborate more. If they worked together to design common assessments, then got back together to look at the results, then took the next step of sharing best practices or having students who failed receive reteaching from a higher-performing teacher, we would see improvements in our students’ performance.

The insinuation is that teachers don’t do these things. That may be true. In fact, it probably is more true than it isn’t, and it’s worth figuring out why and then removing barriers that prevent teachers from doing what is — I think most teachers would agree — only sensible.

But the example above is overly simplistic and hardly ever happens in real schools. Using it reveals more about what the speaker thinks is the problem in schools than what is actually the problem.

My school has been doing PLCs for some time. I am sure we don’t do them the “right” way (which is a topic for another blog post), but we do come together as grade-level teams and look at student assessment data. We do try to identify where we had success and where we need to do better. We are willing to learn from one another and share best practices. The problem is that the data are almost never as straightforward as the example the speaker in the session I attended offered.

The reality is that looking at data is easy; learning something from the data is much harder because the data are often very similar among classrooms, they are rarely crystal clear, and even if the data are clear it’s hard to determine the why behind the numbers. Here’s what usually happens when our grade-level team looks at data:

First, and maybe this is uncommon but I doubt it is, the proficiency numbers rarely vary as much as they do in the example people like to use. I can’t think of a time when we came together and one teacher’s class decisively outperformed the other two (except in the case where class lists were ridiculously unbalanced and one class outperformed the others every time). More often, after looking at the data, one of us says something like, “Well, fifteen of my kids got 80% or higher. Seven were between 60% and 80%. Three bombed it.” After which, the other two teachers look at their data and say, “Yep. Pretty much the same for me.”

Second, when you dig deeper you realize the data aren’t clear; they don’t tell a consistent story. Say you disaggregate by standard, something that, in theory, should tell you where your teaching was effective and where it wasn’t (though that’s not exactly accurate either, since quite a few kids did well on all the standards, so it’s more accurate to say you were effective with some students and not others). Often, even this kind of standards analysis leads to no clear conclusion, because you’ll have five students (of the seven who fell between 60 and 80 percent) who got some of the questions aligned to a standard correct and some incorrect. So what does that tell you? What should you conclude? What should you do about it? And how much stock should you put in percentages that come from three or four items? With four items, a single careless mistake moves a student’s score on that standard by 25 points.

It tells me that a student may or may not be proficient on a particular standard (in other words, I really haven’t learned anything) and that the real difference was how the questions were worded.

Maybe.

Because maybe three kids got distracted on a question they knew how to solve. Maybe two of them were tired because one of those questions came at the end of the assessment. Maybe one kid accidentally clicked the wrong answer even though she knew the correct one. Maybe a single vocabulary word tripped a few kids up. Maybe the answer choices were confusing, and had the question been open-ended, students might have gotten it right. Maybe two division problems were formatted in an unfamiliar way but the rest weren’t.

There are all kinds of reasons kids get questions wrong, including that they simply don’t understand a concept. But assuming every missed question is evidence of ineffective teaching is likely wrong, especially when the data are as mixed as they usually are. And if the diagnosis is wrong, you’ll write the wrong prescription.

We all know that assessment by itself doesn’t move the needle; it’s what you do after the assessment that matters. The above illustrates what makes this so hard. Those who believe teachers need to do a better job collaborating around student data so that we can improve our practices should stop using simple examples that rarely exist in the real world, and they should instead acknowledge the ambiguities most teachers face when looking at assessment data. Let’s allow that it might not be a lack of professionalism, commitment, or collegiality that prevents schools from using data to improve student learning; it might just be that the work is more complicated than it seems.
