
Burying the Bias in Teacher Data Reports, Part II

A few weeks ago I posted a report on Edwize about biases in last year’s Teacher Data Reports. Teachers of high-performing math students are 35 times more likely to fall at the bottom of the teacher ranking than at the top. [1]

Shortly after that, the DOE placed a document on its website that asserts that “…teachers of high-performing students are as likely to have high value-added scores as low value-added scores.”

To me, call me crazy, this is unlikely to be true. First of all, DOE charts found in the very same document seem to contradict it (more on that in a minute). What’s more, the DOE used a broad definition of “teachers of high-performing students,” and it also included some reports that were so unreliable they were never issued to teachers. Let’s go through this step by step.

The DOE broadened the definition of teachers with high-performing students in two ways. In my own analysis, I looked at teachers who worked with classes that were predicted to score at 4 or higher on the math test. This seemed logical, and it included 7% of the total pool of teachers. The DOE, however, included teachers whose students were predicted to score anywhere in the top 20%. That nearly tripled the number of teachers and probably obscured any bias within the group.

I say probably because the UFT does not have this year’s data files. However, using last year’s reports as an example, the expanded 20% pool would have included 1,870 teachers (up from the 690 that I used). Even in that expanded pool, 123 teachers land at the bottom of the percentile ranking, compared to 35 at the top: a ratio of nearly four to one.

Math After One Year of Teacher Data (Last Year’s Reports)

Grade Level | Predicted Score | Total Teachers | Bottom 5% | Top 5%
All         | ≥3.8            | 1,870          | 123       | 35
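
As a quick sanity check on that ratio, here is the arithmetic. The counts come from the table above; the script itself is nothing more than illustrative back-of-the-envelope math:

```python
# Last year's reports, expanded (top-20%) pool, from the table above.
total_teachers = 1870
bottom_5_pct = 123   # teachers ranked in the bottom 5 percent
top_5_pct = 35       # teachers ranked in the top 5 percent

ratio = bottom_5_pct / top_5_pct
print(f"Bottom-to-top ratio: {ratio:.1f} to 1")   # ~3.5 to 1

# If rankings were unbiased for this group, we would expect roughly
# equal counts at the two tails; 123 vs. 35 is far from that.
```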

Again, those were last year’s results, based on teachers who actually received reports. But in its posted results for this year, the DOE also includes teachers whose reports would have been so unreliable, even by the DOE’s own standards, that they were not issued to the teachers themselves. Specifically, its analysis includes teachers who “…did not receive a report because they had too few students to generate a report.” The final results the DOE posted are as follows:

Grade Level | Total (incl. unreliable reports added by DOE) | Bottom 5% | Top 5%
All         | 2,093                                         | 142       | 113
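
Putting the two tables side by side makes the dilution visible. A small sketch, with the caveat that the first row comes from last year’s reports and the second from this year’s posted results, so these are not the same pool of teachers:

```python
# Bottom-to-top ratios in the two pools. Caveat: the first entry is
# last year's reports and the second is this year's posted results,
# so the two are not directly comparable pools of the same teachers.
pools = {
    "Last year, top-20% pool (issued reports only)": (123, 35),
    "This year, DOE pool (incl. unissued reports)":  (142, 113),
}

for label, (bottom, top) in pools.items():
    print(f"{label}: {bottom / top:.2f} to 1")

# ~3.51 to 1 vs. ~1.26 to 1: folding in the reports too unreliable
# to issue pulls the tails much closer to the parity the DOE claims.
```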

Without more information, we don’t know whether last year’s biases still exist, though I suspect they do. Two DOE charts in the same document show that while an extra question correct may raise the ranking of a teacher of top-performing students by 10 to 20 percentile points, an extra question wrong would lower it by about 20 to 50 points. Double the punishment and half the pleasure, so to speak. The DOE does not outright call this a bias; rather, it says the results for these teachers are “sensitive to changes in student performance,” and that “Principals and teachers should take this into account when interpreting …results.”

Whatever that may mean.
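
To see why that asymmetry matters, consider an illustrative expected-value calculation. The +15 and -35 below are simply the midpoints of the ranges the DOE’s charts show; they are my stand-ins, not DOE figures:

```python
# Illustrative only: midpoints of the ranges shown in DOE's charts.
# +15 percentile points for one extra question right (range: +10 to +20)
# -35 percentile points for one extra question wrong (range: -20 to -50)
gain_if_right = 15
loss_if_wrong = -35

# If one more right answer and one more wrong answer are equally
# likely, the teacher's expected ranking change is negative.
expected_change = 0.5 * gain_if_right + 0.5 * loss_if_wrong
print(f"Expected change: {expected_change:+.0f} percentile points")  # -10
```

Even when a student is equally likely to gain or lose a single question, the teacher’s expected ranking drifts downward.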

If the DOE’s Teacher Data Reports were low-stakes documents, none of this would matter. But, of course, if these were low-stakes documents, the DOE would candidly document any problems it found and inform affected teachers and their supervisors that their Reports are patently unfair.

Unfortunately, the stakes are not low. They are high. They are high for teachers worried about the publication of their names, of course, but they are also high for the DOE. The DOE wants these reports out in the press, and it sees long-term value in destabilizing the teaching force. And so it goes.

One final note: since my earlier post on Edwize about the biases, the DOE has posted other materials as well, and that’s a good thing. I had pointed out that the DOE had removed proficiency scores from the Reports, and that even the separate data file used statistical (z) scores that had no meaning for teachers. Proficiency scores give results context, and without them, any release to the press would be that much more punishing.

Since then, the DOE has posted a table that allows teachers to convert the statistical (z) scores into scale scores, but not into proficiency scores. The DOE says that a conversion to proficiency scores is not possible for multi-year results because of cut-score changes. Meanwhile, as we await the court’s decision, the new Reports themselves remain devoid of any contextualizing score information.
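
For readers unfamiliar with what such a conversion involves: a z score just counts standard deviations from the test’s mean, so recovering a scale score requires only that test’s mean and standard deviation. A minimal sketch follows; the mean and standard deviation are hypothetical placeholders, not values from the DOE’s table:

```python
# A z score counts standard deviations from the mean, so converting it
# back to a scale score needs that test's mean and standard deviation.
# The parameters below are hypothetical placeholders, not DOE values.
TEST_MEAN = 675.0   # hypothetical scale-score mean
TEST_SD = 40.0      # hypothetical scale-score standard deviation

def z_to_scale_score(z: float) -> float:
    """Convert a statistical (z) score to a scale score."""
    return TEST_MEAN + z * TEST_SD

print(z_to_scale_score(0.5))   # 695.0
print(z_to_scale_score(-1.0))  # 635.0
```

Proficiency levels, by contrast, depend on cut scores, which change from year to year; that is the DOE’s stated reason a multi-year conversion to proficiency scores is not possible.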


[1] My initial post put the ratio at 30 to 1, but I have updated it. The ratio is 35 to 1, and the number of teachers in the bottom five percent is 76, not 63.
