Christopher Cerf, guest-blogging at Eduwonk, frames the issue thusly:
In combination, these two points explain why, quite appropriately, "value added analysis" has become the holy grail of the accountability systems urban school districts across the country are rushing to build. Opponents express the reasonable concern that isolating "teacher effect" as an independent variable is immensely complex given the many factors that contribute to achievement trends. But isn't that the point?

Indeed, in my own work I have yet to come up with ways to isolate some important inputs into the system, largely because there is a plethora of data out there but very little of it is focused on the results of classroom instruction.
The debate is no longer over whether, but how, evidence of student learning will increasingly inform the management of a school. In unsophisticated hands, achievement data can be used as far too blunt an instrument to meet basic standards of fairness. Wouldn't opponents of "value added" serve their interests far better if, instead of opposing "value added" analysis altogether, they put their shoulder into the challenge of designing data systems that take into account the complexity of each individual classroom and working with districts to ensure that they are used responsibly?
Still, I think Cerf is right: we are moving beyond simple measurements and into more complex ones, and that is a worthwhile move.
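For readers wondering what a "value added" estimate actually is in statistical terms, the basic idea is a regression of current achievement on prior achievement plus teacher indicators, with the teacher coefficients standing in for the "teacher effect" Cerf mentions. The sketch below, in Python with synthetic data and hypothetical variable names, is only a bare-bones illustration of that idea, not the model any particular district uses; real systems control for many more student and classroom factors, which is precisely the complexity Cerf is talking about.

```python
import numpy as np

# Minimal value-added sketch (synthetic data): regress current scores on
# prior scores plus teacher indicator variables. The coefficient on each
# teacher dummy is a crude estimate of that teacher's "value added"
# relative to an omitted baseline teacher.

rng = np.random.default_rng(0)

n_students, n_teachers = 300, 10
teacher = rng.integers(0, n_teachers, n_students)      # teacher assignment
prior = rng.normal(50, 10, n_students)                 # prior-year score
true_effect = rng.normal(0, 2, n_teachers)             # hidden teacher effects
current = 5 + 0.9 * prior + true_effect[teacher] + rng.normal(0, 5, n_students)

# Design matrix: intercept, prior score, and dummies for teachers 1..9
# (teacher 0 is the omitted baseline category).
X = np.column_stack([
    np.ones(n_students),
    prior,
    (teacher[:, None] == np.arange(1, n_teachers)).astype(float),
])

coef, *_ = np.linalg.lstsq(X, current, rcond=None)
value_added = coef[2:]   # estimated effects relative to teacher 0

for t, va in enumerate(value_added, start=1):
    print(f"teacher {t:2d}: estimated value added vs. teacher 0 = {va:+.2f}")
```

Even in this toy version, the estimates are only as good as the controls: leave out something that differs systematically across classrooms and it gets attributed to the teacher, which is the fairness worry the opponents raise.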