
Brookings vs. Broader, Bolder:
Why the New Critique of the Harlem Children’s Zone Gets it Wrong

Given the high profile of recent efforts to spread the Harlem Children’s Zone model of school and social reform to other parts of New York City and the nation, it’s not surprising that a recent Brookings Institution report criticizing one of the Zone’s charter schools has attracted a lot of media attention. Despite the study’s small scale, its emphatic rejection of the HCZ argument that social service provision is an essential element of urban school reform makes it important to understand whether its criticisms are valid. In fact, the report has significant weaknesses that call its biggest claims into question. The study’s statistical methods are one issue: in particular, its failure to consider the proportions of special education students when comparing HCZ with other charter schools, and its decision to use snapshots of test scores rather than measuring individual students’ or cohorts’ progress over time.

More importantly, however, the researchers’ decision to use a few years of test scores as the sole measure of whether HCZ “works” represents a huge misinterpretation of the purposes and significance of this model of school reform. While Geoffrey Canada and his supporters are certainly concerned about raising student achievement (as measured both by standardized tests and other factors), the uniqueness and value of HCZ is that it aims at larger-scale reform affecting the life chances of all young people in a region, with charters as one element of broader improvements in public schools and social services. From the beginning, HCZ has defined its success not only in terms of raising test scores, but also by longer-term goals such as college attendance, student health and safety, and community stability. Given recent criticisms of the poor design and questionable significance of the New York state tests on which the authors’ conclusions are based, it would be incredibly unfortunate if this single study resulted in a decline in support and funding for one of the few models that provide an alternative to the overemphasis on standardized test scores on which so much current urban school reform is based.


Over the past decade, the Harlem Children’s Zone’s model of combining social service provision with changes in local public schools — including the founding of two charter schools, Harlem Promise Academy I and II (HPA I and II) — has been one of the best-known models of school reform in the country. It epitomizes the community school-based “Broader, Bolder” education reform model, a platform endorsed by both Arne Duncan and the NEA during the 2008 election; the UFT also recently partnered with HCZ founder Geoffrey Canada in an effort to create community schools in New York City. HCZ also serves as the model for the Obama administration’s “Promise Neighborhood” program, for which over $200 million in funding is currently being considered in Congress. This report is notable for its strong critique of the “Broader, Bolder” approach to education, arguing that the lack of evidence for its effect on test scores (relative to school reform which does not include social service provision) indicates that it may represent an inefficient use of public funds.

This new report was funded by the Brookings Institution’s Brown Center on Education Policy, which is known for its strong support of market-based education reform and for its advocacy of charter schools and school choice in general. Previous studies (including one by Harvard researchers Dobbie and Fryer) have found that HCZ’s two charter schools significantly outperform the surrounding district schools on standardized tests. This study is the first to compare the impact on test scores of a Harlem Children’s Zone charter school with other NYC charter schools, rather than solely comparing the HCZ schools’ impact with that of the district schools their students would have otherwise attended. Another unusual point is that in response to criticisms of the study posted by HCZ founder Geoffrey Canada, the researchers expanded on the original report’s research in a follow-up blog post. The analysis below includes discussion of all three sources.

Major Findings

The two questions the researchers address are “whether the HCZ works and whether it works as advertised.” They answer the first question in terms of levels of student achievement (as measured by test scores from 2007-2009) and the second in terms of comparing the test scores at HPA I (and later, HPA II) to other charter schools which do not provide social services.

On the first question, they conclude that, like the majority of the charter schools in NYC, HPA I and II do better than the average across the city for schools with their demographic profiles. However, they also conclude that HCZ does not work “as advertised,” because its two charter schools’ test scores are only average in comparison to other charters in New York City (which, they acknowledge, are unusually successful compared to charter schools nationally). If the HCZ model were truly better, they argue, then its charter schools should have higher test scores than those which don’t provide social services.

Considering mathematics and English language arts jointly, they find that half or more of the public charter schools in Manhattan and the Bronx produce test scores on state assessments that are superior to those produced by the HCZ Promise Academy. They conclude that “the same general pattern holds for math and English language arts considered separately, but it appears that mathematics is HCZ’s stronger suit.” This is true both for actual scores as well as scores adjusted for student demographics.

Their (somewhat confusing) table below summarizes these findings. Taken from the follow-up blog post, it combines data for HPA I alone (“Previous Analysis”) and for HPA I and II combined (“New Analysis”). Each number represents the percentage of charter schools with lower average scores than the HPA schools — this means that the higher the number, the better HPA did relative to other charter schools in the study. (For example, the “48” in the first column means that 48% of the NYC charter schools the researchers studied had lower average math scores than HPA I did; i.e., 52% had higher math scores.) The column labeled “Adj.” presents data adjusted for some of the demographic characteristics of the students in the schools. (The researchers controlled for the percentages of students who received free or reduced-price lunch, were limited English proficient, were African American, or were Hispanic, but did not control for the percentage of students in special education programs.)
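To make the table’s percentile metric concrete, here is a minimal sketch of how such a number can be computed: the percentage of comparison charter schools whose average score falls strictly below the school in question. The school scores below are invented for illustration and are not from the report.

```python
def percentile_rank(target_score, comparison_scores):
    """Percentage of comparison schools scoring strictly below the target.

    A value of 48 would mean that 48% of the comparison charter schools
    had lower average scores than the target school.
    """
    below = sum(1 for s in comparison_scores if s < target_score)
    return 100.0 * below / len(comparison_scores)

# Hypothetical average math scores for 25 comparison charter schools
other_charters = [640, 655, 662, 670, 675, 680, 684, 688, 690, 693,
                  695, 698, 700, 702, 705, 708, 710, 713, 715, 718,
                  720, 724, 728, 733, 740]

# Percentile rank of a school averaging 701 among these comparisons
print(percentile_rank(701, other_charters))
```

A rank near 50 on this metric is exactly what the researchers mean by a “middling” charter school.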

Percentile Scores of the HCZ Promise Academy Charter Schools

Previous Analysis | New Analysis

Mathematics, relative to charter schools in Manhattan and the Bronx
English language arts, relative to charter schools in Manhattan and the Bronx
Grand Mean

Based on these numbers, the researchers come to “the inescapable conclusion…that the HCZ Promise Academy is a middling New York City charter school.” Citing their results as well as other studies, they argue that “there is no compelling evidence that investments in parenting classes, health services, nutritional programs, and community improvement in general have appreciable effects on student achievement in schools in the U.S.”

Validity of the Findings

The validity of these conclusions is questionable, due to several issues with the research. Perhaps most importantly, the researchers failed to include data about the proportions of special education students when comparing HPA I and II to district schools and to other charters. As we have noted repeatedly here at EdWize, it is essential to consider the relative proportion of special education students (especially those with high needs) when comparing schools’ performance, especially in terms of how many students require more than a few hours of extra help per week.

Another issue is their decision to exclude Harlem Promise Academy II from the original study while using the research to make claims about the HCZ model as a whole. The researchers’ explanation for doing so was “to avoid confusing the results from the newer Academy II with those from the Promise Academy and to circumvent the HCZ from competing against itself in the school rankings.” However, the response from the HCZ to the researchers on this point noted that HPA I is only one year older than HPA II, and pointed out that the researchers were willing to include four KIPP schools within the rankings. If HPA II had been assessed as a separate school using the study’s methodology, they claimed, it would have been in the top quarter of the charter school rankings. The researchers’ response was to re-do their calculations by combining the scores at HPA I and II; this method again put the HPA model in the middle of the rankings. Without access to the data, it is impossible to know why these two different ways of handling HPA II resulted in such different rankings.

In addition, the researchers used combined average cohort scores over multiple years and grades to measure charter school performance, rather than using individual student or cohort growth over time (currently considered a stronger method of evaluation). The researchers compared the average 2007-2009 state test scores in ELA and Math at the HPA charters with those at all other charter schools in Manhattan and the Bronx, as well as with district schools. (For 2007, they examined grades 6-8; for 2008, they examined grades 3, 7, and 8; for 2009, they examined grades 3-5 and 8). The researchers then ranked the charter schools using two different methods, each of which had slightly different outcomes.

In the first method, all charter schools’ test scores in ELA and Math (for the grades listed above) were averaged across the three-year period, and then schools were ranked on each of these averages separately as well as in combination (“Grand Mean” = ELA + Math, divided by 2). In the second method, the researchers used demographic data from all district and charter schools in NYC to create “predicted scores” for each subject, year, and grade in each charter school, based on its percentage of students who received free or reduced-price lunch, and the percentage who were limited English proficient, African American, or Hispanic. They then calculated the difference between these predicted scores and the school’s actual scores for each subject, year, and grade, and re-ranked the charter schools based on the average of these “difference scores.”
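The logic of the second method can be sketched in a few lines. This is a deliberately simplified illustration with made-up school names and numbers, using a single demographic predictor (free/reduced-price lunch share) where the report used several: fit a regression of average score on demographics, treat the residual as the “difference score,” and rank schools on it.

```python
# Hypothetical schools: (name, share of free/reduced-price lunch students, avg score)
schools = [
    ("Charter A", 0.85, 660.0),
    ("Charter B", 0.60, 695.0),
    ("Charter C", 0.90, 668.0),
    ("Charter D", 0.70, 678.0),
    ("Charter E", 0.75, 670.0),
]

def fit_line(xs, ys):
    """Ordinary least-squares fit of y = slope * x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

xs = [frl for _, frl, _ in schools]
ys = [score for _, _, score in schools]
slope, intercept = fit_line(xs, ys)

# "Difference score": actual score minus demographically predicted score
diffs = {name: score - (slope * frl + intercept)
         for name, frl, score in schools}

# Rank schools by how far above (or below) prediction they performed
ranking = sorted(diffs, key=diffs.get, reverse=True)
print(ranking)
```

On this metric a school can post high raw scores yet rank in the middle if its demographics predict high scores anyway — which is the sense in which the report calls HPA “middling.”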

This methodology is both overly complicated (especially in terms of reporting findings) and less statistically rigorous than other methods. Again, the HCZ response provides several relevant critiques. First, they note that this method fails to measure HCZ and other charters’ ability to improve student test scores over time, as it uses cohort-level snapshots of different grades in different years rather than tracking individual students or even individual cohorts over several years. The researchers defend their methods by noting that individual student- and cohort-level data were not available to them, and argue that including the 8th grade scores is sufficient to measure growth; however, the weakness of the original data sources lessens the validity of their conclusions.

Second, the HCZ response notes that because of a reporting error on their own part, the Brookings researchers understated the proportion of reduced price/free lunch students enrolled in HPA, thus comparing it to district schools with wealthier student bodies and lowering it in the rankings. The researchers respond to this by recalculating their rankings with the higher proportions of combined reduced/free lunch students reported by HCZ (their “new analysis” on the graph), with little change to the results — however, this new method fails to distinguish between proportions of students who receive free lunch as compared to reduced price lunch, groups with significant differences in income.

The HCZ response also challenges the report’s interpretation of Dobbie and Fryer’s research, which, the researchers claim, found “that students outside the Zone garner the same benefit from the HCZ charter schools as the students inside the Zone…proximity to the community programs had no effect.” HCZ notes that students from outside the Zone also benefited from “community” programs such as “free medical, dental and mental‐health services; access to social workers and counseling; afterschool tutoring; healthy meals; test prep; college tours; after‐school, weekend and summer enrichment classes; and recreational opportunities.”

Most importantly, the HCZ response also emphasizes that the researchers’ single-minded focus on test scores in a single school as a measure of the model’s success is a serious misunderstanding of the HCZ model, which is based on affecting all children in the Zone (not just those in the charter schools), especially in terms of guiding them into college, and is designed “to support development of the whole child, not just how they perform on standardized tests.” As noted in the beginning of this post, this issue is central to the importance of both HCZ and to the possible impact of this report on its future. It would be incredibly unfortunate if this narrow perspective on both the program and on the goals of urban school reform weakened the Harlem Children’s Zone model before we have the opportunity to see if it truly “works.”



  • 1 Celso Garcia
    · Aug 23, 2010 at 1:23 pm

    Is this an endorsement of the HCZ or of the model they use? Did the researchers compute teacher retention into the equation?
    How is their model any different than the wraparound services educators have been asking for years to be provided? We need a model that can be applied to all schools. Duncan and Canada have both said that this is not meant to be applied on a global scale. What about improving the district schools? There are no gods with magic wands to fix the educational system, are there?

  • 2 TFT
    · Aug 23, 2010 at 7:51 pm

    HCZ is the proxy for what should be a working government that has at its core a concern for the common citizens. If we had a government for the people, the people wouldn’t have to fight for health care, housing, food and jobs.

    But we have an oligarchy.

    We have created the underclass, and they are just a symptom of what ails us.