Failure in international aid | GiveWell


We feel that international aid can be an extremely good option for a donor, but it also comes with serious risks: projects may accomplish no good, or may even cause harm. Below we describe several broad ways in which an international aid project can fail, with examples of each.

Poorly executed programs

Well-intended projects can fail if they're not well suited to local conditions, or are otherwise poorly carried out.

  • A particularly vivid example is described in Making Aid Work by Abhijit Banerjee:1
    The Gyandoot program in Madhya Pradesh, India ... provided computer kiosks in rural areas. The [evaluation publication by the World Bank] acknowledged that this program was hit hard by lack of electricity and poor connectivity and that "currently only a few of the Kiosks have proved commercially viable." It then goes on to say, without apparent irony, "Following the success of the initiative ..."

Ineffective programs

Any aid project rests on some assumptions about the people it's helping. When these assumptions turn out to be wrong, the project can fail to help them even if it's carried out as intended.

  • Many projects focused on improving the water supply have failed to substantially improve health outcomes. Waterborne diseases are generally transmitted through many routes besides the water supply, so improving the water supply alone may not be enough, and changing hygiene-related behavior can be difficult. See our writeup on water supply programs.
  • A rigorous study of the Kenyan Government's national HIV/AIDS education curriculum found little to no effect on knowledge, attitudes, or behavior (aside from increasing the probability of girls' being married in the event of a pregnancy).2
  • A rigorously studied attempt to increase an Indian community's participation in their education system had no apparent impact on behavior.3
  • Improving children's performance in school is a particularly well-studied area in which many reasonable-seeming programs, including provision of computers4 and other learning materials,5 have failed to improve performance.6

Harmful aid projects

Aid projects, including successfully executed ones, can have unintended consequences and ultimately cause harm.

  • Economic development-focused projects often aim to change how locals work and earn income. There is a risk that such projects, propped up by donor subsidies, will move people into markets and occupations that ultimately cannot sustain them.
    • The DrumNet program in Kenya aimed, successfully, to transition farmers from growing "local crops" (i.e., crops for local/personal consumption) to growing "export crops" (i.e., crops to be sold on the export market).7 However, a year after the project evaluation was completed, the firm that had been buying the "export crops" stopped due to European regulations, leading to "the collapse of Drumnet as farmers were forced to undersell to middlemen, leaving sometimes a harvest of unsellable crops and thus defaulting on their loans."8
    • A development program in Lesotho aimed to help local people with crop and livestock management, as well as building roads so they could access markets. However, few of the people in the region were farmers, and conditions were not good for farming. Harsh weather destroyed pilot crop projects, and the roads allowed in competitors who drove the existing local farmers out of business.9
  • An influx of donor funding into a community may distort the community's existing dynamics, strengthening some locals (those with better access to the aid, who may be more privileged and/or powerful to begin with) at the expense of others. A 2004 study notes the lack of research on this issue to date,10 and presents rigorous analysis of a development program "targeted at strengthening organizational capacity among rural self-help women's groups in western Kenya."11 It concludes:12
    The funding program we study provided little benefit in terms of improving organizational strength, but that it changed those characteristics of groups that made the groups attractive to funders in the first place. The program increased entry into groups and into leadership positions by younger, more educated women, by women employed in the formal sector, and by men. If funders believe there is some positive externality when grassroots organizations of the disadvantaged are managed by the disadvantaged themselves, these results suggest a downside to outside funding of these organizations.
  • Medical interventions provide particularly clear cases of unintended side effects:
    • A formal evaluation of an iron supplementation program found that the supplements caused higher rates of hospital admission and death; the authors concluded that the iron supplements had made children more vulnerable to malaria.13 This study led to a change in international guidelines for iron supplementation.14
    • A large-scale program in Egypt aimed to control schistosomiasis through mass drug administration. A follow-up study found that the drugs administered through the program had contributed to higher rates of hepatitis C.15

For most programs, effects are unknown

Public, rigorous evaluation of programs' effects is extremely rare. Most project evaluations are conducted by the same agencies that carry out the projects,16 and examine only whether immediate objectives (for example, building a well) were achieved, not whether the projects had a meaningful impact on the people they served.17 Despite widespread calls for more and better evaluation, studies that rigorously examine the impact of a project remain relatively rare; the examples given on this page are drawn from this relatively small set.


Sources

  • Banerjee, Abhijit, ed. 2007. Making Aid Work. MIT Press.
  • Banerjee, Abhijit, et al. 2008. "Pitfalls of Participatory Programs: Evidence from a randomized evaluation in education in India." Poverty Action Lab. Available online, accessed 7/8/09.
  • Barrera-Osorio, Felipe, and Leigh L. Linden. 2009. "The use and misuse of computers in education: evidence from a randomized experiment in Colombia." World Bank. Available online, accessed 7/9/09.
  • Duflo, Esther, et al. 2006. "Education and HIV/AIDS Prevention: Evidence from a randomized evaluation in Western Kenya." World Bank Policy Research Working Paper. Available online, accessed 7/9/09.
  • Easterly, William. 2006. The White Man's Burden. Penguin.
  • Ferguson, James. 1994. The Anti-Politics Machine: "Development," Depoliticization, and Bureaucratic Power in Lesotho. University of Minnesota Press.
  • Frank, Christina, et al. 2000. "The role of parenteral antischistosomal therapy in the spread of hepatitis C virus in Egypt." Lancet. Available online, accessed 7/9/09 (free subscription required).
  • Glewwe, Paul, et al. 2004. "Retrospective vs. prospective analyses of school inputs: the case of Flip Charts in Kenya." Journal of Development Economics 74, 251–268. Available online, accessed 7/9/09.
  • Glewwe, Paul, and Michael Kremer. 2005. "Schools, Teachers, and Education Outcomes in Developing Countries." Second draft of chapter for Handbook on the Economics of Education. Available online, accessed 7/9/09.
  • Gugerty, Mary Kay, and Michael Kremer. 2004. "The Rockefeller Effect." Poverty Action Lab. Available online, accessed 7/9/09.
  • Horton, Sue, Harold Alderman, and Juan A. Rivera. 2008. "Copenhagen Consensus 2008 Challenge Paper: Hunger and Malnutrition." Copenhagen Consensus. Available online, accessed 7/9/09.
  • Karlan, Dean, Nava Ashraf, and Xavier Gine. 2008. "Finding Missing Markets (and a disturbing epilogue): Evidence from an Export Crop Adoption and Marketing Intervention in Kenya." Poverty Action Lab. Available online, accessed 7/9/09.
  • Riddell, Roger C. 2007. Does Foreign Aid Really Work? Oxford University Press.
  • Sazawal, S., et al. 2006. "Effects of routine prophylactic supplementation with iron and folic acid on admission to hospital and mortality in preschool children in a high malaria transmission setting: community-based, randomised, placebo-controlled trial." Lancet. Available online, accessed 7/9/09 (PubMed subscription required; abstract publicly available).
Notes

  1. Banerjee 2007, Pg 15.
  2. "After two years, girls in schools where teachers had been trained were more likely to be married in the event of a pregnancy. The program had little other impact on students' knowledge, attitudes, and behavior, or on the incidence of teen childbearing." Duflo 2006, abstract. The paper is a randomized controlled evaluation of three HIV/AIDS prevention programs including the Kenyan government's.
  3. "In India, the current government flagship program on universal primary education organizes both locally elected leaders and parents of children enrolled in public schools into committees and gives these groups powers over resource allocation, and monitoring and management of school performance. However, in a baseline survey we found that people were not aware of the existence of these committees and their potential for improving education. This paper evaluates three different interventions to encourage beneficiaries' participation through these committees: providing information, training community members in a new testing tool, and training and organizing volunteers to hold remedial reading camps for illiterate children. We find that these interventions had no impact on community involvement in public schools, and no impact on teacher effort or learning outcomes in those schools." Banerjee 2008, abstract. The paper is a randomized controlled trial.
  4. See Barrera-Osorio 2009.
  5. For example, see Glewwe 2004, which evaluates the use of visual aids ("flip charts") in Kenya.
  6. See Glewwe and Kremer 2005 for a thorough discussion of evidence regarding developing-world education programs.
  7. Karlan 2008, abstract.
  8. Karlan 2008, Pg 4.
  9. Ferguson 1994. Referenced in Easterly 2006, Pgs 193-194.
  10. Gugerty 2004, Pgs 2-3.
  11. Gugerty 2004, Pg 3.
  12. Gugerty 2004, Pg 4.
  13. Sazawal 2006.
  14. Horton 2008, Pg 11.
  15. Frank 2000.
  16.
    • "The bulk of the evidence of project performance ... is drawn, almost exclusively, from the evidence of the agencies providing the aid and funding the projects ... the data almost certainly give an over-inflated picture of project success." Riddell 2007, Pg 186.
    • "Although evaluation has taken place for a long time in foreign aid, it is often self-evaluation, using reports from the same people who implemented the project .... The World Bank makes some attempt to achieve independence for its Operations Evaluation Department (OED) [now called the Independent Evaluation Group] ... However, staff move back and forth between OED and the rest of the World Bank; a negative evaluation could hurt staffs' career prospects." Easterly 2006, Pg 193.
  17.
    • "The available evidence suggests, quite strongly, that the clear majority of official aid projects achieve their immediate objectives .... For many, however, the key test of whether project aid is to be judged successful lies in its wider impact ... To this day, there remains a lack of evidence with which to draw firm overall conclusions about the wider impact of project aid." Riddell 2007, Pgs 192-193. See also Pgs 190-191.
    • "What effects have [non-profit organization-run] projects had on the lives of the beneficiaries: have they led to improvements in well-being, increases in income and enhanced lives and livelihoods? As with official aid projects, the evaluations and synthesis studies draw attention to the paucity of reliable data and information for reaching conclusions ... in a very large number of cases, the studies either point to the difficulty in drawing firm conclusions, or suggest (often on the basis of minimal hard evidence) that the overall impact appears to have been small." Riddell 2007, Pg 271.