A Closer Look at Charter Practices

Will Dobbie and Roland Fryer’s new study of 35 New York City charter schools attempts a preliminary answer to the question of how different practices within charters are correlated with student progress on math and ELA tests. In general, the study’s premise and methods represent a promising shift away from simply comparing test scores as a measure of school quality: it acknowledges variation between charters and gets at what policies and practices are actually happening inside these schools.

The researchers looked at a wide variety of possible practices based on surveys of principals, interviews with teachers, visits to schools, and reviews of site visit reports from authorizers. They found five policies that were significantly correlated with increased test scores: “frequent teacher feedback, the use of data to guide instruction, high-dosage tutoring, increased instructional time, and high expectations.” As Matt DiCarlo recently noted, the efficacy of some of these factors in raising test scores — particularly increased instructional time — has also been supported by other studies.

The finding that’s getting the most attention, however, is their conclusion that “class size, per pupil expenditure, the fraction of teachers with no certification, and the fraction of teachers with an advanced degree” were not positively correlated with test scores at these schools.

But there are several problems with putting too much emphasis on either of these preliminary findings.

First and most importantly, the authors themselves state that “our estimates of the relationship between school inputs and school effectiveness are unlikely to be causal given the lack of experimental variation in school inputs. Unobserved factors such as principal skill, student selection into lotteries, or the endogeneity of school inputs could drive the correlations reported in the paper.”

In plain English:

1. A “lack of experimental variation” means that the range of differences between the school practices they examined was relatively small, and the “endogeneity of school inputs” means that schools chose those inputs themselves, so the inputs may reflect unobserved school characteristics rather than cause the outcomes. For example, they state that class size varies from 18 to 26 students per class in NYC charter elementary schools and 22 to 29 students in charter middle schools citywide; the range within this group of 35 schools might be even smaller. So even if class size isn’t correlated with test scores within this narrow range, it doesn’t necessarily mean (as Mayor Bloomberg recently claimed) that doubling class size would be fine, or that classes smaller than the lower end of the range wouldn’t be better for students. (The short simulation after this list illustrates how a restricted range can hide a real relationship.)

Similarly, state charter law allows no more than five uncertified teachers per school, and keeping state certification requires earning a master’s degree within five years. Again, this means that the variation in the proportions of certified/uncertified and BA/MA teachers at any given school is going to be fairly small (though the authors don’t provide data about the range on this point).

2. The “unobserved factors” caveat is also an important one. Besides principal skill and self-selection into charters, student attrition (often without those seats being backfilled) and the lower proportion of special education and ELL students at charters undoubtedly produce peer effects on student test scores that this study does not address. I wish the authors had asked schools about their non-replacement and ELL/SPED student services policies in their survey — this is a major issue that we know very little about, and it seems just as potentially significant a practice as those they considered.
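To make the range-restriction point concrete, here is a minimal simulation (in Python, with entirely made-up numbers, not the study’s data). It builds in a strong underlying relationship between class size and test scores, then shows how much the observed correlation weakens when you can only see classes in the narrow 18-to-26-student band the authors report for NYC charter elementary schools:

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical "true" relationship: scores fall as class size grows,
    # plus noise. Every number here is invented for illustration only.
    class_size = rng.uniform(10, 40, size=5000)
    scores = 80 - 0.5 * class_size + rng.normal(0, 8, size=5000)

    # Correlation across the full (hypothetical) range of class sizes
    full_r = np.corrcoef(class_size, scores)[0, 1]

    # Correlation when we only observe the narrow 18-26 band
    # (the elementary-school range the paper reports)
    mask = (class_size >= 18) & (class_size <= 26)
    restricted_r = np.corrcoef(class_size[mask], scores[mask])[0, 1]

    print(f"correlation, full range (10-40 students): {full_r:.2f}")
    print(f"correlation, restricted (18-26 students): {restricted_r:.2f}")

The same underlying effect is present in both cases; the restricted sample simply doesn’t contain enough variation to reveal it, which is exactly why a null correlation within this band says nothing about doubling class size.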

Second, there’s a real question as to whether their data on per pupil spending is correct. As usual, Rutgers professor Bruce Baker has done an effective (if blunt) job of explaining this problem with the study in some depth. Here are his main points:

First, NYC charter schools are an eclectic mix of very small to small (nothing medium or large, really) schools at various stages of development, adding grade levels from year to year, adding schools and growing to scale over time. Some are there, others working their way there. And economies of scale has a substantial effect on per pupil spending. So too might other start-up costs which may not translate to same year effectiveness measures. […]

Further, NYC charter schools have different access to facilities. Some are provided NYC public school facilities (through colocation), while others are not. Having a facility provided can save a NYC charter school over $2500 per pupil per year (to be put toward other things). Dobbie and Fryer provide no documentation regarding whether these differences are accounted for in their mythical per pupil expenditure figure. […]

Capturing an accurate and precise representation of NYC charter school spending is messy. Not even trying is embarrassing and inexcusable. Even worse and most frustrating about this particular paper by Dobbie and Fryer is the absurd lack of documentation, or any real descriptives on the measures they used…surveys of interested parties are not how to get information on finances. Audited financial statements are probably a better starting point, and two forms of such data are available for nearly all NYC charter schools. Further, where specific programs/services are involved, a thorough resource cost analysis (ingredients method) is warranted.

In addition, their “free lunch” category is actually a combination of students eligible for free lunch and those eligible for reduced-price lunch, two groups with meaningfully different poverty levels. Future papers from this study should re-label this factor accurately to avoid mistaken comparisons with methods (such as those used by the SUNY Charter School Institute) that use “free lunch” eligibility alone as a measure for calculating student progress compared to similar schools.
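To see why the labeling matters, consider a toy example (the schools and percentages below are invented; only the federal eligibility thresholds are real: free lunch requires family income at or below 130% of the poverty line, reduced-price lunch at or below 185%). Two schools can have identical combined rates while serving very different shares of the poorest students:

    # Invented illustration: two schools with the same combined
    # free/reduced-price ("FRL") rate but very different shares of
    # students in the poorest (free-lunch-eligible) category.
    schools = {
        "School A": {"free": 0.70, "reduced": 0.10},
        "School B": {"free": 0.40, "reduced": 0.40},
    }

    for name, rates in schools.items():
        combined = rates["free"] + rates["reduced"]
        print(f"{name}: combined = {combined:.0%}, free-only = {rates['free']:.0%}")

A method like SUNY’s, which compares schools on free-lunch eligibility alone, would treat these two schools as serving quite different populations, even though the mislabeled combined figure makes them look identical.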

Finally, they somewhat minimize what I feel is an important finding regarding the kind of learning and teaching happening in charters with high test scores:

Surprisingly, lesson plans at high achieving charter schools are not more likely to be at or above grade level and do not have higher Bloom’s Taxonomy Scores. Higher achieving charter schools also appear no more likely to have more differentiated lesson plans and appear to have less thorough lesson plans than lower achieving charter schools.

I don’t actually find this surprising at all — it mirrors what Kay Merseth (also at Harvard) found in her study of high-achieving charters in Boston. High test score gains don’t necessarily correlate with other valid measures of quality teaching and student learning, especially the kind of learning measured through Bloom’s Taxonomy (in which analysis and investigation, rather than simple recall of content, are considered indicators of higher-order learning). The current Measures of Effective Teaching study faces this same challenge, as I’ve noted previously.

To me, this is one of the key weaknesses of measuring “student achievement” primarily through test scores. The report’s failure to connect the test-oriented focus of its five factors to the combination of high test scores and low Bloom’s Taxonomy scores at these schools is a missed opportunity for a more meaningful analysis of what student learning actually means.
