A VAM Supporter Struggles with Value-Added Data, Pt.2

This is part 2 of a series on how we use value-added data in Tennessee and across the nation. The entire series can be found below:

Part 1: what the research says about value-added data
Part 3: how to use value-added data constructively
Part 4: some final thoughts and questions

This past week I wrote a piece about my struggles with how Tennessee uses value-added data from a quantitative perspective. But research isn’t the only thing that’s causing me to question my commitment to using value-added data in high-stakes decisions. While I do believe that there still exists a place for value-added data in policy decisions, I’ve seen too many instances where the way it is currently used, in high-stakes decisions involving hiring, firing, school takeovers and salary, produces the opposite of its intended effect. During the past four years, I’ve seen how over-focusing on this data may actually lower the quality of our teaching force in our highest-need schools.

First, the use of value-added data to make high-stakes decisions in schools too often creates a test-centric environment that produces dissatisfied educators. When it comes to testing, the ideal is that quality instruction and learning come first and are then simply captured by tests. In the past four years I have found that frequently the opposite happens in our highest-need schools. Faculty and department meetings turn into strategy sessions on how to game the tests and achieve minimum proficiency rates. Teachers are asked to create lists of which kids will make it and which won’t. And too often, curiosity and creativity go by the wayside in favor of test preparation. In Tennessee, for example, the entire month before TCAP becomes solely devoted to prepping students for the state test in many high-need schools.

In one sense this is a perfectly understandable reaction. These schools know that if they don’t focus on the test first, they may be taken over by the state. But all this leads to a very unsatisfied teaching force. None of us signed up to prepare students for a test from April through May or to create lists of who will pass and who will fail. We signed on to be teachers, and being asked to do otherwise can lead to frustration with the school and the teaching profession itself.

Second, this environment is a factor in driving the best teachers away from the schools where they are most needed. Contrary to what many detractors claim, I haven’t seen any teachers leave the profession as a direct result of high-stakes testing. The bigger problem I’ve seen is high-quality teachers leaving high-need schools because of how we use standardized testing. I’ve seen colleagues choose to leave kids they love because they don’t believe their careers can survive if they teach at a high-need school. They reason (correctly, I believe) that if they stay at such a school for too long, there’s no way they’ll be able to maintain the high test scores needed to continue their careers. In Tennessee, for example, if a school is taken over by the state-run ASD or the local iZone, only teachers with high data scores will likely be rehired. If your school may be under threat of takeover every year, why would you stay when one or two bad years can result in a lost job?

This problem isn’t confined to teachers in tested subjects. It also impacts non-tested teachers, because they are forced to take the collective value-added score for the school as a part of their evaluation score. When you teach at a high-need school, this score is often a 1 or a 2. I’ve seen teachers rated level 4 and 5 by observation reduced to level 3 or lower because they had to take their school score. They question the fairness of this, and when it happens over and over they get fed up with the system and leave for a school with higher overall value-added scores.

Can all teacher career decisions be attributed to high-stakes value-added data? Absolutely not, and there are many other reasons why high-quality teachers leave high-need schools. But I’ve heard this one factor named too often among my friends and colleagues in their career decisions to believe it is just a coincidence.

Third, the use of value-added data in high-stakes decisions can negatively impact student learning. When high-quality teachers leave struggling schools, the kids are the ones who lose out. Teachers don’t want to teach in a culture where they question whether they will have a job next year because of poor test scores. And when they leave, they are too often replaced by early-career or untested novice teachers who take time to develop, directly and negatively impacting student learning.

This isn’t the only way the focus on value-added data impacts students. It can happen in a much more insidious but just as damaging way. As I’ve already referenced, some schools instruct teachers to identify a certain number of students to achieve proficiency and focus only on them at the expense of others. Some schools suspend these “other” students continually. Some expel them outright to send them elsewhere (this is NOT limited to charter schools!). More often than not, these students are “left behind” as a result of this emphasis on testing, while resources get funneled to their classmates who show more potential. This is difficult to see because so often it happens under the radar, but I can assure readers that it occurs frequently enough to be worthy of note.

Would this happen without high-stakes testing? Possibly. But as with teacher career decisions, my colleagues and I have personally seen these decisions made based on the need for higher test scores.

Summing It Up

Would all these problems go away if tomorrow we ended the use of VAM in high-stakes decisions? Probably not. But if we base people’s jobs and careers so heavily on a single measure, they will go to extremes to attain the required outcome. With that in mind, we shouldn’t be surprised when teachers and schools act in the ways I’ve described above. In the case of low-performing schools, my best evidence tells me that the importance placed on value-added data obtained through high-stakes testing contributes to reducing teacher quality in our highest-need schools.

Not only does this cause me to question its reliability as a mechanism for evaluating teachers, but it also causes me to question whether it is worth the potential damage it can cause these schools.

I think the solution is to rethink how we use value-added data. I’m struggling to justify the use of this data as one of the primary methods for making high-stakes decisions like hiring/firing, school closures and compensation. However, there is some real value to this data, as it does capture, at least in part, school and teacher contributions to individual student learning. I think we need to start thinking about how to use this data to target teachers and schools for improvement, rather than to punish teachers and schools, and by extension students, when they underperform.

Part three of this series, later this week, will examine my policy recommendations for where I think we need to go from here and how we can use value-added data in a thoughtful and productive way to truly help schools, teachers and students improve.

Follow Bluff City Education on Twitter @bluffcityed and look for the hashtags #iteachiam and #TNedu to find more of our stories. Please also like our page on Facebook. The views expressed in this piece are solely those of the author and do not represent those of any affiliated organizations or Bluff City Ed writers. Inflammatory or defamatory comments will not be posted.
