Today I wrote an email to the school board and superintendent of my district. I thought I’d share.
Greetings to you all. I hope this email finds you well. I have been teaching English at [name of school] for ten years now, and I can’t imagine myself in a better position. I wish to thank each of you for your dedication to education and your hard work in making our community better.
As districts all over the state of Wisconsin make important decisions about the future of our schools — and pursuant to the presentation offered to [district] educators earlier this year — I understand that [our] School District is considering new models of teacher evaluation that may link educator pay to student test scores. While I recognize the importance of such scores, especially given the prominence of Wisconsin’s new school report card system, I believe such models are deeply flawed, and I urge you to avoid them, or at the very least to implement them with extreme caution. Three recent research reports shed important light on the risks associated with “merit pay” models, even when they are adjusted to reward “value-added” indicators.
The first was conducted at Vanderbilt University and reported in USA Today: http://usatoday30.usatoday.com/news/education/2010-09-21-merit-pay_N.htm
The Vanderbilt study ran from 2006 to 2009 and awarded bonus pay to middle-school teachers who showed gains in student test scores. One of the study’s authors, Matthew G. Springer, said pay-for-performance is not “the magic bullet that so often the policy world is looking for,” and added: “it doesn’t work.”
Another study, released in 2010 by the Economic Policy Institute, is entitled “Problems with the use of student test scores to evaluate teachers”: http://www.epi.org/publication/bp278/
From the report’s Executive Summary: “For a variety of reasons, analyses of VAM [value-added modeling] results have led researchers to doubt whether the methodology can accurately identify more and less effective teachers. VAM estimates have proven to be unstable across statistical models, years, and classes that teachers teach. One study found that across five large urban districts, among teachers who were ranked in the top 20% of effectiveness in the first year, fewer than a third were in that top group the next year, and another third moved all the way down to the bottom 40%. Another found that teachers’ effectiveness ratings in one year could only predict from 4% to 16% of the variation in such ratings in the following year. Thus, a teacher who appears to be very ineffective in one year might have a dramatically different result the following year. The same dramatic fluctuations were found for teachers ranked at the bottom in the first year of analysis. This runs counter to most people’s notions that the true quality of a teacher is likely to change very little over time and raises questions about whether what is measured is largely a “teacher effect” or the effect of a wide variety of other factors.”
The final report I wish to mention comes from January of this year and was written by Northwestern University Assistant Professor C. Kirabo Jackson. The report is here: http://works.bepress.com/cgi/viewcontent.cgi?article=1027&context=c_kirabo_jackson and a blog post about it can be found here: http://teacherleaders.typepad.com/the_tempered_radical/2013/03/research-proves-that-value-added-teacher-evaluation-models-are-failing-kids-and-communities.html
Jackson’s research focuses on cognitive vs. non-cognitive skills. He found that non-cognitive skills like determination and resilience are better indicators of future success (especially for students who have struggled in school) than the types of cognitive skills that are measured by standardized tests. He also found that most teachers are able to help students develop either cognitive or non-cognitive skills, but not both.
Jackson writes: “Teacher effects on test scores and teacher effects on non-cognitive ability are weakly correlated such that many teachers in the top of test score value-added distribution will also be among the bottom of teachers at improving non-cognitive skills. This means that a large share of teachers thought to be highly effective based on test score performance will be no better than the average teacher at improving college-going or wages.” He also writes: “Because variability in outcomes associated with individual teachers that is unexplained by test scores is not just noise, but is systematically associated with their ability to improve typically unmeasured non-cognitive skills, classifying teachers based on their test score value added will likely lead to large shares of excellent teachers being deemed poor and vice versa.”
Like many teachers, I am nervous about what the future may bring for me and my colleagues in the classroom. But like all good teachers, I am most concerned for the well-being and educational progress of the young people in our community. I worry that undue attention to certain statistics could divert us from the steps most likely to bring authentic improvements in student learning and intellectual growth. I am by no means an expert in the field of educational research, but it seems to me that these three studies raise vital questions that our district must consider seriously during this precarious transition.
I thank you for your time and attention to these matters, and would be happy to discuss them further in whatever format works best for you.
Eric S. Piotrowski
[name of school] English Department
(sent from home, of course)