This is part 3 of a series on how we use value-added data in Tennessee and across the nation. The entire series can be found below:
Part 1: what the research says about value-added data
Part 2: how value-added data impacts teachers in the classroom
Part 4: some final thoughts and questions
Over the last two weeks I’ve shared my increasing skepticism about the appropriateness of using value-added data in high-stakes policy decisions. A growing body of research suggests that this data is prone to error. Additionally, my own experiences tell me that the way we use it is often detrimental to the quality of education received by students in our highest-need schools.
That said, I still believe that value-added data does have value as a teacher support tool. Instead of simply dumping these data systems, what we need to do is establish a new paradigm for how we use value-added data to increase the quality of our teacher workforce.
Before jumping in, let me make clear: I won’t be answering the should we/shouldn’t we question that is so often asked about value-added data. Instead, I’m going to make a recommendation based on the assumption that value-added data does include valuable information, and answer the question of how best to use that information.
My overall recommendation is this: until we can increase the reliability of this tool, we should use value-added data to support and develop our teaching workforce, not to make high-stakes decisions.
The Value of Value-Added Data
Value-added data should not be junked outright. Yes, this data is imperfect. Yes, it contains flaws. But it retains enough validity to inform limited policy decisions when it is included alongside other measures of teacher effectiveness. I’ve arrived at this conclusion for two reasons:
First, some of the growth demonstrated through value-added scores is attributable to individual teachers. Nearly every researcher agrees on this point, though estimates of the size of the effect vary.
Second, methods exist to somewhat improve the reliability of value-added scores for individual teachers. Scores can and do fluctuate significantly from year to year, but employing different statistical methods can help increase their reliability. For example, multi-year averages can increase the reliability of scores by spreading the variation over a period of, say, three years rather than one. This doesn’t fix all the accuracy problems of value-added data, and I don’t think it pushes scores to the point where they can be reliably used in high-stakes decisions without fear of error, but it can make the data more accurate up to a point.
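To make that intuition concrete, here’s a minimal simulation sketch. The numbers are made up for illustration (they are not real TVAAS figures): each teacher is assumed to have a stable “true” effect, while any single year’s score adds measurement noise. Averaging three years shrinks the spread of the estimate by roughly the square root of three.

```python
import random
import statistics

random.seed(42)

# Illustrative assumptions, not real value-added data:
TRUE_EFFECT = 2.0      # hypothetical stable teacher effect
YEAR_NOISE_SD = 1.5    # hypothetical year-to-year measurement noise

def single_year_score():
    """One noisy annual value-added score for the teacher."""
    return random.gauss(TRUE_EFFECT, YEAR_NOISE_SD)

def three_year_average():
    """Average of three annual scores for the same teacher."""
    return statistics.mean(single_year_score() for _ in range(3))

# Compare the spread of single-year scores vs. three-year averages.
singles = [single_year_score() for _ in range(10_000)]
averages = [three_year_average() for _ in range(10_000)]

print(round(statistics.stdev(singles), 2))   # close to 1.5
print(round(statistics.stdev(averages), 2))  # close to 1.5 / sqrt(3), about 0.87
```

The three-year averages cluster much more tightly around the teacher’s true effect than any single year does, which is exactly the reliability gain described above; note that the noise shrinks but never disappears.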
If we accept this, our next question must be: what is the best way to use this data to build the quality of our teacher workforce?
Towards A New Paradigm
I don’t believe we should stick with the status quo, but I also don’t believe we need to completely reject value-added data as a tool for improving teacher quality. Instead, I believe that we need a new policy for how we use value-added data and indeed evaluations as a whole.
This policy requires that we think about value-added data as a part of a system where the primary purpose is to support continuous teacher improvement over time.
Teaching is a skill that must be learned and acquired, not a gift with which one is born. Every teacher develops at a different speed. If that’s true, then the solution isn’t to use data to remove those who grow at different paces, especially when that data is flawed and can produce considerable negative side effects among the teachers in our schools.
A better solution is to use this data to improve our teaching workforce by promoting educator development throughout teachers’ careers. This requires changing how we treat teachers based on the results of this data. This new policy would recognize that ALL teachers need to be continually pushed to improve over time, not just those who struggle.
Under this policy, teachers who score a “2” or a “1” on value-added data would no longer be labeled “ineffective” teachers, but instead identified as teachers in need of improvement. Instead of removing or punishing those teachers, we can target them for support, pairing them with mentors or targeted PD to help them improve their craft (check out an example of what this could look like here). Instead of being used as a tool for pushing teachers out of the classroom, value-added data would be transformed into a tool to help teachers grow their abilities. The worst thing that could happen in this case is that a teacher could receive too much PD and get too good!
Rethinking the way we use value-added data would also change the way we develop our best teachers, our “4’s” and “5’s”. These teachers also need continued growth and development, just like our most struggling teachers. We should never reach the point where we tell our highest performers that there’s nowhere left for them to grow. We need to continually push our highest-level teachers to improve, both by designing high-level PD and by providing opportunities for these teachers to take on greater leadership roles.
This decision would necessitate some major policy shifts. For example, it requires that we dramatically increase the quantity and quality of our teacher professional development. It also necessitates a large expansion of teacher coaches and mentors, which will cost money. But if it is truly the case that teaching is a skill that can be learned, then these are certainly policy shifts worth making.
Long Term Policy Implications
I want to add two long-term implications of adopting the paradigm I’ve outlined here. First, this policy could save us considerable money over the long run, because under the status quo we spend more money replacing teachers than we likely would by investing in their development. One study found that the cost of replacing each individual teacher who leaves ranges from just under $10,000 to over $17,000, with the collective cost running into the millions of dollars. By contrast, the average cost of teacher PD ranges from $6,000 to $8,000. So much of the focus under the existing paradigm is on pushing ‘bad’ teachers out of the profession. If we develop these teachers instead, this policy could actually save us considerable money over time while still increasing the quality of our teaching force.
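As a back-of-the-envelope sketch of that comparison: the per-teacher cost ranges come from the figures cited above, but the district size and turnover rate below are purely hypothetical assumptions chosen for illustration.

```python
# Illustrative assumptions, not real district data:
TEACHERS = 1_000          # hypothetical district size
TURNOVER_RATE = 0.10      # assume 10% of teachers leave each year

# Per-teacher cost ranges cited in the post:
REPLACEMENT_COST = (10_000, 17_000)   # per departing teacher
PD_COST = (6_000, 8_000)              # per teacher developed

leavers = int(TEACHERS * TURNOVER_RATE)

# Scale each per-teacher range up to the whole departing cohort.
replace_low, replace_high = (leavers * c for c in REPLACEMENT_COST)
develop_low, develop_high = (leavers * c for c in PD_COST)

print(f"Replacing {leavers} teachers: ${replace_low:,} to ${replace_high:,}")
print(f"Developing {leavers} teachers: ${develop_low:,} to ${develop_high:,}")
```

Under these assumed numbers, even the high end of the development range comes in well below the low end of the replacement range, which is the core of the cost argument.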
Second, this policy would allow us to continue to study and refine our value-added data systems over time, in hopes that they would someday be rigorous enough to isolate individual teacher effects in a much more reliable manner. As someone who believes in the power of objective data, I personally hope that we can get to the point where value-added measures are reliable enough to use in more important teacher decisions. However, we’re not there yet, and as such we should not be using this data the way we currently do.
Summing It Up
Perhaps most importantly, this new paradigm for how we use value-added data would send an important message to educators, one that is often left out in the way we communicate regarding this data. It would send the message that we care about our teachers and that we’re doing everything possible to help them become better professionals. The rhetoric would stop being about good or bad teachers and start being about improving all teachers. Once we make this policy shift, I think we will truly see the improvement in teaching quality that we all want. And transforming our teaching profession leads to the best outcome of all – transforming the quality of education received by our students. That’s the most important transformation of all.
Still have questions? Check out Jon’s final piece on Value-Added data that wraps up all the loose ends.
Follow Bluff City Education on Twitter @bluffcityed and look for the hashtags #iteachiam and #TNedu to find more of our stories. Please also like our page on Facebook. The views expressed in this piece are solely those of the author and do not represent those of any affiliated organizations or Bluff City Ed writers. Inflammatory or defamatory comments will not be posted.