Why You Shouldn’t Care About Your Teacher Evaluation

Evaluations are in. All of your good intentions, hard work, and personal sacrifice have been boiled down to a number and a label. Are you “highly effective” or “innovating,” or merely “developing,” like you’re an insect in its larval stage instead of a professional educator?

Whatever your label or your number, you shouldn’t take too much pride in it or feel any disappointment or shame over it. Your evaluation is meaningless.

My district uses Marzano and everything is entered into iObservation. The last step in the evaluation process is for me, the teacher, to go in and “acknowledge” my scores. Why this is necessary is a bit of a mystery, since I am in no way allowed to question or challenge my final score. The state of Michigan gives districts total power when it comes to teacher evaluations. No due process. No appeals. No presumption of effectiveness. It’s all very democratic, and obviously designed to help teachers get better (he said sarcastically).

Once I acknowledged my rating, I was then provided the opportunity to leave a comment. I guess this is iObservation’s way of throwing teachers a bone. We may not be allowed to tell our principal, “Actually, the stupid learning goal was on the board. You just didn’t see it,” but we can sound off in the comments section. As a reminder, that’s the section nobody reads.

Nevertheless, it was my only chance to offer any thoughts, so here’s what I wrote:

I continue to find the evaluations arbitrary, based on questionable data, and demoralizing to the profession. That 75% of any teacher’s evaluation is in the hands of a single individual should be cause for concern. That that individual, however well-meaning and effective he or she might be, bases most of his or her evaluation on a small sample size of a teacher’s instruction is also concerning. It’s a flawed model, operating inside of a flawed system, foisted upon professional educators who were given little opportunity to provide input to the flawed legislators who pushed for more accountability based on the flawed belief that American schools, and therefore the people who work inside of them, are failing. The whole thing is nonsense, and I therefore put no stock in the above numbers, whether they be high, low, or somewhere in between. It’s a shame that principals have to waste so much time on it.

To add to the above and to put everything in list form, here is why your evaluation is meaningless and therefore not worth hanging your head or puffing your chest over.

Your evaluation is likely composed of two parts: administrator observations and student growth data. Both have major problems.

Student Growth

  • The student growth portion of your evaluation is likely based on cruddy assessments. Mine was based on screeners, which were never intended for teacher evaluations.
  • Students are not held accountable for their performance on the cruddy assessments, which makes you wonder how much they really care about them, which makes you wonder how hard they try on them. (I’ll give you a hint: two of my students were done with the 30-question reading test in 10 minutes.)
  • In my district, growth scores are dragged down by students who start the year with already-high scores. They have the least room for improvement, and that lack of growth lowers teachers’ ratings.
  • The whole thing sets up terrible incentives, which I try my best to ignore. Teachers in my district joke about getting students to bomb the fall screener to show more growth. You could actively lobby for the lowest students to be on your class roster to have a better chance of showing growth. There’s no doubt that some teach to the screeners, so kids get the idea that reading is really about saying words super fast. The list goes on.
  • Those students who missed more than 20 days of school? Doesn’t matter. It’s somehow your fault they didn’t learn as much as they should have.

Observations

  • Most of the evaluation is based on principal observations. I had two. If we only needed two songs to evaluate a band, Tesla would be in the Rock and Roll Hall of Fame.
  • Observations are only as good as the people making them. They’re meaningless if principals across buildings and districts evaluate their teachers in different ways, which they do.
  • Observations are only reliable if we assume that principals can shelve their personal biases when observing a teacher and rely only on their training (assuming they received any).
  • Evaluations lose their meaning when those being evaluated are judged against different criteria. The current system assumes districts have at least a somewhat similar approach to evaluating teachers. They don’t. My wife’s district handles the whole thing differently than my district. An “effective” teacher in one district won’t necessarily be effective in a neighboring district. Some districts make it nearly impossible to be “innovating,” while other districts start teachers out there and only lower them for cause. That makes the system junk.
  • Basing a significant part of a teacher’s evaluation on an administrator’s observations makes the system ripe for abuse. Observations might be an honest appraisal of your skills or they could be the result of office politics or personal grudges. If it’s the latter? Well, there’s always the comments section.

And why only observations and student growth, anyway? I’m a teacher, a service professional. Why don’t parents get a say in this? Why don’t the students?

I don’t mind being evaluated. I just wish my evaluation actually told me something, anything, about how well or poorly I do my job. Until it does, I find it hard to care. You should too.