Many popular "cures for poverty" don’t hold up to scrutiny (examples here). But many more have simply never been scrutinized.
Ask a given charity how they know whether their program is changing lives, and odds are you’ll get one of the following:
1. Anecdotes, pictures and testimonials.
This is the kind of evidence that passionate believers in a program tend to find most relevant, because it vividly confirms what they already believed.
But scattered success stories can’t really capture whether a program is working. Out of 100 people in a program, the odds are that some of them will see their lives improve, just by chance – regardless of what programs they are or aren’t enrolled in.
As anyone familiar with medical trials and the "placebo effect" knows, testimonials are a far cry from evidence that a treatment works.
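The "improvement by chance" point above can be made concrete with a small simulation. This is a hypothetical sketch: the 30% chance of a person’s life improving in a given year is an invented illustrative number, not a figure from any study.

```python
import random

random.seed(0)

# Hypothetical illustration: suppose each person's circumstances simply
# fluctuate year to year, with no program at all. The 30% improvement
# probability below is an assumption chosen purely for illustration.
people = 100
improved = sum(1 for _ in range(people) if random.random() < 0.30)

print(f"{improved} of {people} people improved with no program at all")
```

A charity that collected testimonials from these "improved" people would have a stack of compelling stories, even though its program did nothing.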
2. Simple comparisons of program participants and non-participants.
- A child care program points to superior results for its children, compared to those whose parents chose not to enroll them; the difference could be the child care program, but we’d guess it’s often the parents.
- An employment program boasts that its graduates are better able to hold jobs than its dropouts; of course, holding a job can often come down to being the sort of person who sticks with programs instead of dropping out.
We have also blogged on this topic here.
So what kind of evidence is meaningful?
The New York City Voucher Experiment had students apply for scholarships, then used a randomized draw to determine who would receive them. This created two groups of students whose only difference was the luck of the draw, so any difference in outcomes could be attributed to the scholarships themselves.
This "randomized draw" approach is considered the gold standard of evidence for effectiveness. It has been used to evaluate many large-scale projects, including all of those on our list of major failure stories, as well as our top U.S. organization, the Nurse-Family Partnership.
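The mechanics of a randomized draw are simple enough to sketch in a few lines. The applicant names and group sizes below are invented for illustration; the point is that the lottery, not the evaluator, decides who gets the program.

```python
import random

random.seed(1)

# Hypothetical applicant pool; the names and the size of 200 are invented.
applicants = [f"student_{i}" for i in range(200)]
random.shuffle(applicants)

# The lottery itself: the first half win scholarships, the rest do not.
winners = set(applicants[:100])
losers = set(applicants[100:])

# Because assignment was random, the two groups differ only by the luck of
# the draw, so a later difference in average outcomes (test scores, say)
# can be credited to the scholarship rather than to, e.g., parental motivation.
print(len(winners), len(losers), winners.isdisjoint(losers))
```

Contrast this with the child-care and employment examples above, where participants chose to enroll or to persist, so the groups differ in motivation before the program even starts.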
In identifying top charities, we haven’t limited ourselves to this approach. If we can make a reasonably compelling case for a charity, we recommend it. For example, analyzing test score data on KIPP led us to conclude that it is most likely making a real difference in students’ performance, even though no "randomized draw" study is available.
But most charities’ programs are fundamentally untested. They’re like the Voucher Experiment, minus the experiment: doing something that seems to make sense, without examining whether it actually works.
The charities we’ve seen
Among charities working in the United States, we had evaluated a total of 83 as of 2008.
- We recommended 2.
- An additional 4 ran programs that had been studied rigorously, in the manner described above:
  - 2 of these evaluations showed no positive impact.
  - 1 showed only a minor positive impact (Teach For America).
  - 1 showed a reasonably large impact, but only for a small subset of the organization’s activities.
- Another 15 had conducted evaluations plagued by major concerns along the lines sketched above.
- The remaining 62 were not associated with any formal study of impact.