A note on this page's publication date
The content below was created in 2010 and is likely no longer fully accurate, both with respect to the research it presents and with respect to what it implies about our views and positions.
This page outlines how we evaluate United States social programs. We examine particularly promising programs in more depth than others, and have a preference for charities that exclusively or primarily focus on these programs.
We consider programs promising when they have a demonstrated record of improving people's lives, generally established through rigorous evaluation.
What we look for
We have a strong preference for programs that have had demonstrable past success in improving lives, a preference we believe is particularly appropriate for individual donors seeking to help people they ultimately know very little about.
We feel that the most compelling evidence for programs in this area usually comes from formal evaluations. However, we also feel that formal evaluations should be read skeptically and critically, due in large part to our concerns about selection bias and publication bias. We therefore review promising studies with a critical eye, preferring the randomized controlled trial (RCT) design in general and also looking for study qualities such as large sample size, low attrition, and clear, credible measures of impact.1
A good formal evaluation can leave very little doubt about a program's effects at a particular time and place, though it leaves open the question of how the results of a small, carefully executed program would translate to new environments. For this reason, we have approached our research in this area by first identifying charities running promising programs, then questioning those charities on topics including their ongoing monitoring and evaluation. In our charity reviews in this area, we discuss the evidence behind the program under "Evidence of impact" and the charity's own monitoring under "Ongoing monitoring."
Note: In our research on international aid, we sought and relied on "macro" evidence for program effects: i.e., programs carried out on a large scale (regional, national, or multinational) without separating people into "treatment groups" and "control groups." (For more, see our criteria page for international aid.) Because we have not identified examples of "macro" successes in this cause, we have not used "macro" evidence to identify programs, and have focused on evaluations of smaller-scale programs.
Helpful sources for critical discussions of evaluations
Below are two sources we've found particularly helpful for critical discussions of evaluations in this area.
Coalition for Evidence-Based Policy
The Coalition for Evidence-Based Policy2 reviews and publishes reports about social programs that meet its criteria.3 We agree with the criteria the Coalition uses in its assessments; we have independently analyzed the evidence for the Nurse-Family Partnership program (which the Coalition endorses), and we have spoken multiple times with the Coalition's Vice President, David Anderson, about our research. On this basis, we believe the Coalition would likely ask all the questions we would ask about a program. Because the Coalition also provides plain-language summaries of its analyses, we rely on its analysis and summaries when they are available.
Campbell Collaboration
The Campbell Collaboration conducts literature reviews of broad areas of social programs (e.g., volunteer tutoring or after-school programming).4 It follows a methodology similar to that of the Cochrane Library, which we use heavily in evaluating health interventions in international aid. In particular, we find the Campbell Collaboration's analyses useful for determining whether specific programs may be susceptible to publication bias.
Sources
- Campbell Collaboration. Homepage. http://www.campbellcollaboration.org/ (accessed October 12, 2010). Archived by WebCite® at http://www.webcitation.org/5tRWW41ZI.
- Coalition for Evidence-Based Policy. Homepage. http://www.coalition4evidence.org/ (accessed October 12, 2010). Archived by WebCite® at http://www.webcitation.org/5tRVCIQf6.
- Coalition for Evidence-Based Policy. Checklist for reviewing a randomized controlled trial of a social program or project, to assess whether it produced valid evidence (PDF).
- Coalition for Evidence-Based Policy. Social programs that work. http://evidencebasedprograms.org/wordpress/ (accessed October 12, 2010). Archived by WebCite® at http://www.webcitation.org/5tRV1e9kr.
Footnotes
1. We will soon be publishing more information on what we consider to be the general qualities of good evidence of impact.
2. Coalition for Evidence-Based Policy, "Homepage."
3. Coalition for Evidence-Based Policy, "Social Programs that Work"; Coalition for Evidence-Based Policy, "Checklist for Reviewing a Randomized Controlled Trial of a Social Program or Project, To Assess Whether It Produced Valid Evidence."
4. Campbell Collaboration, "Homepage."