US Charities involved in K-12 Education

A note on this page's publication date

The content on this page has not been updated recently and is likely no longer fully accurate, both with respect to the research it presents and with respect to what it implies about our views and positions.

In 2007, GiveWell invited New York-based charities to apply for grants in three categories: Cause 3: Early Childhood Education, Cause 4: K-12 Education, and Cause 5: Employment Assistance. For more on our process of identifying and inviting organizations, see our page on How we found applicants. This page discusses our process and priorities in evaluating organizations applying under Cause 4: K-12 Education.

Our priorities in evaluating charities in this cause

The achievement gap

Our overview of the achievement gap discusses the large discrepancies in academic achievement between different groups in the U.S. - both socioeconomic (e.g., high- vs. low-income families) and ethnic (e.g., African-American vs. white). The groups that score lower on measures of academic achievement also have lower incomes and higher incarceration rates, which has led some to conjecture that education is a key to later-life success. The charities we examined for this cause aim to improve academic outcomes for disadvantaged youth (generally low-income members of underperforming ethnic groups).

Yet it is important to recognize that the achievement gap is already present before children enter kindergarten, and grows only slightly afterward. Any organization focused on K-12 education is therefore likely trying to help children who are already struggling - and simply equalizing the quality of education between underperforming and overperforming groups would not necessarily close the achievement gap between them. We therefore feel it is appropriate to approach the cause of K-12 education with skepticism: we cannot assume that a program is improving children's academic performance simply because it provides extra support or moves children to better schools. Instead, we look for programs with a compelling empirical case.

The challenge of measuring success

Ideally, we would consider not just a charity's effect on students' academic performance but its long-term effect on their lives. Much of the research on early childhood care attempts to gauge the impact of care on a variety of later-life outcomes, including clients' mental health and their likelihood of having a criminal record; we have seen very little similar research on the impact of K-12 education. (We have seen a few studies on the correlation between academic performance and later earnings, but academic performance can reflect pre-existing abilities; we have seen no studies attempting to isolate the effect of education on earnings.)

In the absence of data on later-life outcomes, we have chosen to focus, for this cause, on charities' ability to improve their clients' academic performance. This choice meshes with the way applicants chose to make their case in our open-ended application: applicants commonly sent information about their students' high school graduation rates and achievement test scores (and, less commonly, about other academic measures such as attendance rates and college enrollment rates).

In our view, however, it is not enough to demonstrate that a program's participants outperform non-participants, as many of our applicants did. In evaluating education-related data, we believe selection bias is a major issue: the fact that one student enters a program (charter school, summer school, tutoring session, etc.) while another does not may indicate pre-existing differences in ability and/or motivation that cause the students both to act differently (one enters the program and one does not) and to perform differently (one has more academic success than the other), regardless of the effects of the program itself. A good example of this phenomenon appears in the research on the federal Talent Search program: participants outperformed non-participants in many ways (including test scores) that could not plausibly be attributed to the program. Our strongest applicants demonstrated an impact on student performance that is difficult to attribute to pre-existing differences between participants and non-participants.

Our process

We invited a total of 113 organizations to apply within this cause (details of how we found applicants). 50 completed our Round 1 application; 54 did not apply; the remaining 9 applied in other causes. The goal of our Round 1 application was to identify the charities most likely to be able to demonstrate proven past success in improving educational outcomes for disadvantaged students, in keeping with our principle of focusing on already-proven programs. Details on the criteria we used to choose finalists - along with the applications of all who applied, finalists and non-finalists - are available on our page on K-12 education: Round 1 applicants.

We named 12 finalists, all of which provided at least some evidence on the performance of their clients vs. non-clients that we felt was worth a second look (either because it showed very large differences, or because we had reason to believe that the selection bias issues discussed above might not be a problem). After carefully reviewing all 12 second-round applications, we concluded that 3 of our applicants (the ones discussed under "Organizations that advanced past our Round 2 application process") had provided sufficient evidence to assess their effectiveness. For the other 9, we concluded that there were too many open questions about the available data (most pertaining to selection bias concerns, or simply to small and unrepresentative data sets) to conclude that their programs have empirically demonstrated effectiveness. For more on our process, see our How We Identify Top Charities page.

List of finalist charities for this cause

Organizations that advanced past our Round 2 application process:

Organizations that advanced past our Round 1 application process:

In addition to the 3 organizations above, the following 9 organizations advanced past Round 1 of our application process in 2007.

Achievement First

What they do: Achievement First is a network of 12 charter schools serving 2,500 students in Connecticut (New Haven and Bridgeport) and Brooklyn.[1] (3 more schools will be opening in fall 2008.)

Available evidence: Achievement First sent us achievement test score data from selected classes, showing large gains for 5th-grade students and higher overall scores than students in nearby public schools.[2]

Why we don't recommend this organization: We aren't confident that the results Achievement First submitted are representative of the organization as a whole. We were sent data only for selected classes, and were not able to obtain much more data independently (as we were for KIPP) or to locate studies specifically addressing concerns about selection bias (as we did for KIPP).

Materials submitted in 2007

Double Discovery Center

Double Discovery Center requested complete confidentiality for all of its application materials. We are therefore unable to give the full details of our reasoning. Overall, we found Double Discovery Center to be similar to other applicants in this section, in that it provided data that we felt was insufficient due to issues along the lines of those discussed above.

Harlem Center for Education

What they do: Harlem Center for Education is a community organization running programs focused on low-income youth and adults in Harlem, including the federally funded (and evaluated) Talent Search Program, which aims to increase the likelihood that low-income students attend college through, among other things, college counseling, SAT preparation, and tutoring.[3]

Available evidence: As our overview of the Federal Talent Search Program states, we are somewhat optimistic about the program's general approach, but not enough to support an organization simply because it is affiliated with this program. HCE did submit a comparison of college enrollment rates for its students relative to the average enrollment rate for low-income students in New York State.[4]

Why we don't recommend this organization: Insufficient empirical case, due to concerns over selection bias. We find it likely that a student's choice to participate in (and complete) the Talent Search program reflects other relevant characteristics that would lead to higher graduation rates regardless of the program, a concern that is borne out by existing evidence about the program at the federal level.

Materials submitted in 2007

Learning through an Expanded Arts Program (LEAP)

What they do: LEAP runs a variety of programs; its application focused on ALLL (Active Learning Leads to Literacy), a program providing 100 hours of instruction (20 weeks, 1 hour per day) in arts and other non-academic subjects to students in grades K-2.[5] The program includes in-class instruction and field trips and is administered jointly by the students' regular teacher (3 days per week) and a LEAP instructor (2 days per week).[6]

Available evidence: In a pilot study of ALLL including 5,337 students at 25 schools, students randomly assigned to ALLL classrooms showed better yearly gains on the Early Childhood Language Assessment System (ECLAS) tests than did students randomly assigned to non-ALLL control classes.[7]

Why we don't recommend this organization: The ALLL program constitutes only about 13%[8] of LEAP's total expected budget for 2007-08,[9] and we do not have sufficient empirical evidence about LEAP's wide variety of other activities.

Materials submitted in 2007

Leadership Enterprise for a Diverse America (LEDA)

What they do: LEDA holds a seven-week program at Princeton University, targeting students from minority (African-American, Latino, and Native American) backgrounds[10] and relatively low-income families.[11] The program consists largely of college-style academic work that aims to prepare students academically and socially for college.[12] In addition, the program offers student-specific guidance to help scholars prepare for the college application process.[13]

Available evidence: In a study of participants in the first year of the program, LEDA found that its scholars were more likely to enroll in selective colleges than students who were finalists but ultimately not selected for the program.[14]

Why we don't recommend this organization: We found that LEDA scholars and non-scholar finalists cannot safely be compared: LEDA scholars are significantly more academically proficient than finalists who are not selected,[15] leading us to believe that the difference between scholars' and finalists' performance may reflect their pre-existing abilities rather than any impact of the program.

Materials submitted in 2007

New Visions for Public Schools

What they do: New Visions runs a wide variety of programs aiming to improve NYC's public schools. Its New Century High Schools (NCHS) Initiative replaced large high schools with smaller schools; its other programs focus largely on training, information sharing, and other resources for teachers and school leaders.

Available evidence: A third-year report on the NCHS Initiative compared students at NCHS small schools to otherwise-similar students at comparison schools, matched on characteristics including ethnicity, eligibility for free and reduced-price lunch, and previous achievement test scores; NCHS students had higher attendance rates and grade promotion rates, but lower scores on Regents exams.[16] A fourth-year report found similar results (higher graduation rates, but fewer students graduating with Regents and Advanced Regents diplomas) using a similar though not identical methodology.[17]

Why we don't recommend this organization: Our concerns about the NCHS study include (a) students appeared to perform better on measures directly controlled by the school (e.g., grade promotion and graduation) but not on more direct measures of performance (e.g., Regents exam scores); (b) while students were matched on some observable characteristics, this technique does not fully account for potential selection bias among the students who choose to attend NCHS schools; in particular, unlike the analysis we did for KIPP, it does not show any measure by which NCHS and non-NCHS students started at comparable levels and improved at different rates; (c) evaluations of ongoing small-schools projects do not provide evidence of an effect on academic achievement.[18]

In addition, New Visions's focus now appears to be on other programs, particularly teacher and principal training programs, for which we have not seen an empirical case.

Materials submitted in 2007

Replications, Inc.

What they do: Replications, Inc. identifies high-performing public middle schools (6th-8th grades) and high schools and "replicates" them as New York City public schools. The process includes placing the school's future principal in the "model school" for training. Replications, Inc. has started 24 schools in New York City.[19]

Available evidence: Achievement test data shows that Replications students outperform students in nearby public schools.[20]

Why we don't recommend this organization: Insufficient empirical case, due to concerns over selection bias with the data we do have. In the data we've seen, Replications students outperform students in nearby public schools from year one, and the gap does not appear to grow, suggesting that the difference may be one of students rather than one of schooling.

Materials submitted in 2007

St. Aloysius

What they do: St. Aloysius runs a school (pre-K through 8th grade) serving 300 students in Harlem.[21] Its program includes a mandatory summer academy for 5th-7th graders and after-school activities that are attended by the vast majority of students in all grades.[22]

Available evidence: St. Aloysius students perform significantly better than students in district public schools on the New York State English Language Arts exam, although their performance on math exams is mixed.[23] St. Aloysius students have also graduated at a rate well above the city average: 98% for St. Aloysius over the last 15 years,[24] compared to 43% for NYC as a whole, according to a Manhattan Institute study.

Why we don't recommend this organization: Insufficient empirical case, due to concerns over selection bias with the data we do have. St. Aloysius students pay an average of $2,400 in tuition;[25] the willingness to pay may indicate a combination of family income, family environment, and motivation that makes it inappropriate to compare St. Aloysius students to NYC students in general, and difficult to put their performance in context.

Materials submitted in 2007

Student Sponsor Partners

What they do: Student Sponsor Partners funds approximately 400 high-school scholarships for disadvantaged rising ninth-graders, enabling them to attend one of 22 non-public "Partner Schools" in New York City.[26] SSP also provides each scholarship recipient with a Sponsor who meets with the student 4-6 times per year and pays a portion of the student's tuition.[27]

Available evidence: SSP provided data indicating that its students graduate from high school and matriculate to college at 2-3 times the rates of the public "feeder" schools in which SSP students would otherwise have been enrolled[28] and at almost twice the rates for their demographic groups in NYC.[29]

Why we don't recommend this organization: Insufficient empirical case, due to concerns over selection bias with the data we do have. We feel that the feeder-school graduation rate does not provide a reasonable comparison group, given that SSP's participants enter the program already performing better than students attending feeder schools: SSP students are more likely to score at Level 4 (the highest performance level) and less likely to score at Level 1 (the lowest) even before they enter the program.[30]

Materials submitted in 2007

Sources

1. Achievement First, “Application response - Round 1 (2007),” Pg 1.
2. Achievement First, “Application response - Round 1 (2007),” Pgs 9-10; Achievement First, “Application response - Round 2 (2007),” Pgs 26-30.
3. Harlem Center for Education, “Application response - Round 1 (2007),” Pg 1.
4. Harlem Center for Education, “2005-06 Annual Progress Report (2007),” Pg 6.
5. Learning through an Expanded Arts Program (LEAP), “2003-06 Evaluation of ALLL Pilot Program (2007),” Pg 2.
6. Learning through an Expanded Arts Program (LEAP), “2003-06 Evaluation of ALLL Pilot Program (2007),” Pg 2.
7. Learning through an Expanded Arts Program (LEAP), “2003-06 Evaluation of ALLL Pilot Program (2007),” Pgs 2-4.
8. Learning through an Expanded Arts Program (LEAP), “Application response - Round 1 (2007),” Pg 4.
9. Learning through an Expanded Arts Program (LEAP), “2007-08 Budget (2007).”
10. Leadership Enterprise for a Diverse America (LEDA), “Application response - Round 1 (2007),” Pg 1.
11. Leadership Enterprise for a Diverse America (LEDA), “1st LEDA Cohort's Performance Data (SAT, GPA etc.) And College Attended (No Names) (2007).”
12. Leadership Enterprise for a Diverse America (LEDA), “Application response - Round 1 (2007),” Pg 3.
13. Leadership Enterprise for a Diverse America (LEDA), “Application response - Round 1 (2007),” Pg 3.
14. Leadership Enterprise for a Diverse America (LEDA), “1st LEDA Cohort's Performance Data (SAT, GPA etc.) And College Attended (No Names) (2007).”
15. See data from Leadership Enterprise for a Diverse America (LEDA), “Hispanic Initiative Evaluation (Liza's Study) (2007).”
16. New Visions for Public Schools, “Cause 4-NCHS Year 3 Evaluation (2007),” Pgs 39-46.
17. New Visions for Public Schools, “Cause 4-NCHS Year 4 Evaluation (2007),” Pgs 39-41.
18. See, for example, this report on the Federal Smaller Learning Communities Program, starting at Pg 115.
19. Replications, Inc., “Application response - Round 1 (2007),” Pg 1.
20. See this chart of test scores using the data available at Replications, Inc., “Publicly Available Test Data (2007).”
21. St. Aloysius, “Application response - Round 2 (2007),” Pg 1.
22. St. Aloysius, “Application response - Round 2 (2007),” Pg 3.
23. St. Aloysius, “Application response - Round 2 (2007),” Pg 6.
24. St. Aloysius, “Application response - Round 1 (2007),” Pgs 6-7.
25. St. Aloysius, “Application response - Round 2 (2007),” Pg 5.
26. Student Sponsor Partners, “Application response - Round 1 (2007),” Pg 1.
27. Student Sponsor Partners, “Application response - Round 1 (2007),” Pg 2.
28. Student Sponsor Partners, “Application response - Round 1 (2007),” Pg 1.
29. Student Sponsor Partners, “Application - Round 2 (2007),” Pg 4; Student Sponsor Partners, “Application response - Round 1 (2007),” Pg 2.
30. Student Sponsor Partners, “Application response - Round 2 (2007),” Pgs 2, 7.