Cost-Effectiveness | GiveWell

Updated: April 2017 (2009-2015 version)

We seek charities that are "cost-effective" in the sense of saving or improving lives as much as possible for as little money as possible. There are many limitations to cost-effectiveness estimates, and we do not assess charities only—or primarily—based on their estimated cost-effectiveness. However, because different approaches to helping people can have extremely different costs, we make cost-effectiveness estimates for our top charities.

We keep the following in mind when considering cost-effectiveness:

  • Charities frequently cite misleading and overly optimistic figures for cost-effectiveness.
  • Our cost-effectiveness estimates include administrative as well as program costs and generally look at the cost per life or life-year changed (death averted, year of additional income, etc.). However, there are many ways in which they do not account for all possible costs and benefits of a program.
  • Our cost-effectiveness estimates rely on a number of inputs for which we have very limited data, as well as on informed guesses and subjective value judgments, such as the likelihood that a charity's implementation of an intervention will have the same effect as that measured in a separate study of the intervention, or how one weighs increasing income relative to averting death. Different individuals involved with GiveWell research thus calculate different cost-effectiveness estimates for our top charities, and we share those individual estimates as well as medians.
  • Because of the many limitations of cost-effectiveness estimates, we give estimated cost-effectiveness only limited weight in recommending charities. Confidence in an organization's track record or the strength of the evidence for an intervention generally carries heavier weight when differences in estimated cost-effectiveness are not large.
  • Programs can have many kinds of impact; a program may both save lives and improve them, or may have income benefits, for example. We try to quantify the different ways in which a program may have an effect, such as lives saved per dollar or proportional increase in income per dollar donated. We do not rely purely on the disability-adjusted life-year (DALY) metric with data from the Global Burden of Disease report, though our approach has a similar goal.

We elaborate on these points below.

Charities frequently cite misleading cost-effectiveness figures

In The Life You Can Save, Peter Singer discusses the fact that many common claims about cost-effectiveness are misleading. We quote at length from the book, with his permission. (Note that our excerpt does not include footnotes, which are in the original.)1

Organizations often put out figures suggesting that lives can be saved for very small amounts of money. WHO, for example, estimates that many of the 3 million people who die annually from diarrhea or its complications can be saved by an extraordinarily simple recipe for oral rehydration therapy: a large pinch of salt and a fistful of sugar dissolved in a jug of clean water. This lifesaving remedy can be assembled for a few cents, if only people know about it. UNICEF estimates that the hundreds of thousands of children who still die of measles each year could be saved by a vaccine costing less than $1 a dose. And Nothing But Nets, an organization conceived by American sportswriter Rick Reilly and supported by the National Basketball Association, provides anti-mosquito bed nets to protect children in Africa from malaria, which kills a million children a year. In its literature, Nothing But Nets mentions that a $10 net can save a life: "If you give $100 to Nothing But Nets, you've saved ten lives."

If we could accept these figures, GiveWell's job wouldn't be so hard. All we would have to do to know which organization can save lives in Africa at the lowest cost would be to pick the lowest figure. But while these low figures are undoubtedly an important part of the charities' efforts to attract donors, they are, unfortunately, not an accurate measure of the true cost of saving a life.

Take bed nets as an example. They will, if used properly, prevent people from being bitten by mosquitoes while they sleep, and therefore will reduce the risk of malaria. But not every net saves a life: Most children who receive a net would have survived without it. Jeffrey Sachs, attempting to measure the effect of nets more accurately, took this into account, and estimated that for every one hundred nets delivered, one child's life will be saved every year (Sachs estimated that on average a net lasts five years). If that is correct, then at $10 per net delivered, $1,000 will save one child a year for five years, so the cost is $200 per life saved (this doesn't consider the prevention of dozens of debilitating but nonfatal cases). But even if we assume that these figures are correct, there is a gap in them – they give us the cost of delivering a bed net, and we know how many bed nets "in use" will save a life, but we don't know how many of the bed nets that are delivered are actually used. And so the $200 figure is not fully reliable, and that makes it hard to measure whether providing bed nets is a better or worse use of our donations than other lifesaving measures.

[GiveWell] found similar gaps in the information on the effect of immunizing children against measles. Not every child immunized would have come down with the disease, and most who do get it, recover, so to find the cost per life saved, we must multiply the cost of the vaccine by the number of children to whom it needs to be given in order to reach a child who would have died without it. And oral rehydration treatment for diarrhea may cost only a few cents, but it costs money to get it to each home and village so that it will be available when a child needs it, and to educate families in how to use it.
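Singer's bed-net arithmetic above can be sketched as a short calculation. All figures come from the passage (a $10 net, one life saved per year for every hundred nets in use, a five-year net lifespan); the usage rate stands in for the unknown "gap" the passage identifies and is an illustrative parameter, not a figure from the source.

```python
def cost_per_life_saved(cost_per_net, nets_per_life_per_year,
                        net_lifespan_years, usage_rate=1.0):
    """Cost to save one life, given how many nets in use save one life per year."""
    # Nets that must be *delivered* so that enough are actually in use.
    nets_delivered = nets_per_life_per_year / usage_rate
    total_cost = nets_delivered * cost_per_net
    # One life saved per year over the net's useful lifespan.
    lives_saved = net_lifespan_years
    return total_cost / lives_saved

# Sachs's figures, assuming every delivered net is used:
print(cost_per_life_saved(10, 100, 5))        # 200.0 -> the $200-per-life figure
# If only half of delivered nets were actually used, the cost would double:
print(cost_per_life_saved(10, 100, 5, 0.5))   # 400.0
```

As the passage notes, the headline $200 figure implicitly assumes a usage rate of 100 percent; any lower rate pushes the true cost per life saved higher.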

The cost-effectiveness figures we use

Early in our history, we relied largely on cost-effectiveness estimates provided by the Disease Control Priorities in Developing Countries report (DCP2).2 In 2011, we did a deep-dive investigation into one of these estimates, and found major errors that caused the published figure to be off by ~100x,3 and we have therefore changed our approach to cost-effectiveness. Whenever we are estimating the cost-effectiveness of a contender for a top charity recommendation that is implementing an intervention we have not previously recommended, we perform our own analysis and publish the full details online.

Below, we list some strengths of our estimates as compared to commonly cited figures, and then some weaknesses of our estimates that explain why we don't take them literally.

Strengths compared to commonly cited figures:

As discussed above, many commonly cited figures are misleading because they (a) account for only a portion of costs (for example, citing the cost of oral rehydration treatment but not the cost of delivering it); and/or (b) cite "cost per item delivered" figures as opposed to "cost per life changed" figures (for example, equating insecticide-treated malaria nets delivered with deaths averted, even though there are likely many nets delivered that do not avert deaths; it is not the case that everyone who receives a net would have otherwise died of malaria).

The cost-effectiveness estimates we use reduce these problems:

  • Our estimates are given in terms of life impact, and specify what sort of life impact can be expected. For example, we publish a "marginal cost per under 5 death averted" estimate for the Against Malaria Foundation. We also estimate a "cost per life saved equivalent," which uses an individualized input to combine life-improving and death-averting impacts into a single figure that enables us to compare interventions with different impacts.
  • We try to be thorough when accounting for the costs involved in programs we recommend. Planning costs, management costs, and distribution costs are all included in our estimates. We also try to account for the counterfactual value of resources provided by other funders involved in our charities’ programs.
  • Estimates are generally based on actual costs and actual impact from past projects, to the extent this is possible. When we make projections, we attempt to gather all the information we can to inform such projections.
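As a rough illustration of how a "cost per life saved equivalent" figure can combine different kinds of impact, here is a minimal sketch. The function name, the "income doublings" unit, and all numbers are invented for illustration; the conversion weight is the kind of subjective, individualized input the text describes, not a value from GiveWell's actual model.

```python
def cost_per_life_saved_equivalent(total_cost, deaths_averted,
                                   income_doublings, doublings_per_life_equiv):
    """Combine death-averting and income-improving impacts into one figure."""
    # Convert income benefits into "life saved equivalents" using the
    # subjective conversion weight.
    life_equivs_from_income = income_doublings / doublings_per_life_equiv
    total_life_equivs = deaths_averted + life_equivs_from_income
    return total_cost / total_life_equivs

# Illustrative (invented) program: $1,000,000 spent, 200 deaths averted,
# income gains equal to 3,000 doublings of one person's annual income,
# with 50 doublings valued as equivalent to one death averted:
print(round(cost_per_life_saved_equivalent(1_000_000, 200, 3_000, 50), 2))  # 3846.15
```

Two donors who agree on every empirical input but set different conversion weights will arrive at different bottom-line figures, which is why individual inputs are shared alongside medians.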

Limitations of GiveWell's cost-effectiveness analyses:

The estimates we use do not capture all considerations for cost-effectiveness. In particular:

  • We generally draw effectiveness estimates from studies, and we would guess that studies often involve particularly well-executed programs in particularly suitable locations. While we make efforts to assess how representative studies are of average conditions, our ability to do so is often limited.
  • Estimates don't consider all possible impacts; we generally focus on direct and/or easily measurable impacts. The issue of accounting for flow-through effects is discussed more in this blog post. This issue is discussed specifically as it relates to GiveWell in a paper by Leif Wenar.4
  • Estimates are generally based on extremely limited information and are therefore extremely rough. For example, prevalence and intensity data for areas that receive deworming treatments often vary from year to year, and our impression is that these estimates are generally of low quality.
  • Estimates involve a number of subjective inputs, such as determining the number of years for which an individual's increased income is equivalent to averting 1 DALY (a common metric for measuring the burden of various diseases). Thus, donors may disagree with one another about the cost per life-year impacted, depending on their individual values.

    We often have to make educated guesses about inputs like the replicability of the deworming study we rely on, for which we have extremely limited information. Donors may also reasonably disagree about inputs like this.

Because of the many limitations of cost-effectiveness estimates, we give estimated cost-effectiveness limited weight in recommending charities (consistent with the appropriate approach to Bayesian adjustments to such estimates). Confidence in an organization's track record and the strength of evidence for an intervention often carries heavier weight when differences in estimated cost-effectiveness are not large.

That said, we think producing these estimates is useful for identifying large differences in cost-effectiveness as well as encouraging individuals who are involved with GiveWell research to think through and quantify important questions related to understanding and assessing a charity's work.

How cost-effective is cost-effective?

As stated above, we feel that common claims of donors' ability to save a life for a few dollars, or even a few cents, are generally overly optimistic. We list developing-world direct-aid programs we know of that seem to be among the most cost-effective and evidence-backed, with links to detailed write-ups including cost-effectiveness estimates, where available.

Based on our knowledge of these programs:

  • The impact we are most often able to estimate is the cost per life saved. There are many kinds of impact besides saving lives; we do our best to quantify the different ways in which a program may be improving lives, and to help donors decide how to compare these.
  • As of November 2016, the median estimate of our top charities' cost-effectiveness ranged from ~$900 to ~$7,000 per equivalent life saved (a metric we use to compare interventions with different outcomes, such as income improvements and averting a death).

Our process for calculating cost-effectiveness and updating our model

One member of GiveWell's research team "owns" GiveWell's cost-effectiveness model. They are responsible for setting up the calculations, updating the model and adding new information to it, making design changes to improve usability, and explaining our parameters so that anyone who uses the model can understand our work and enter their own inputs. This staff member typically spends a significant proportion of their work-hours, about 50 percent over the course of a year, on the cost-effectiveness model.

Others closely involved with GiveWell research are invited to add their own inputs, based on their own judgments, as decision points approach that rely on cost-effectiveness estimates, such as GiveWell's year-end charity recommendations. This typically leads to adjustments to the model itself, as well as a good deal of internal conversation and debate over inputs. Last year, 12 individuals submitted their own inputs to be published with the model, and more engaged informally in the run-up to our year-end recommendations.

These inputs are conveyed both individually and as median values in the model. Our bottom-line comparisons of the charities we recommend—the estimates that appeared in our year-end blog post on our new recommendations—are based on a weighted average of these results that incorporates seniority and the level of engagement with our cost-effectiveness analysis. You can view this calculation for 2016 here.
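The aggregation step described above can be sketched as a simple weighted average. The per-researcher estimates and weights below are invented for illustration; the source says only that the actual weights incorporate seniority and level of engagement.

```python
def weighted_average(estimates, weights):
    """Weighted average of per-researcher cost-effectiveness estimates."""
    assert len(estimates) == len(weights)
    total_weight = sum(weights)
    return sum(e * w for e, w in zip(estimates, weights)) / total_weight

# Hypothetical per-researcher "cost per life saved equivalent" estimates,
# with weights reflecting (for example) seniority and engagement:
estimates = [2800, 3500, 3100, 4200]
weights = [2.0, 1.0, 1.5, 0.5]
print(weighted_average(estimates, weights))  # 3170.0
```

A weighted average lets more heavily engaged researchers move the bottom line more than a simple median would, while still incorporating every submitted input.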

Our current and past cost-effectiveness models are available here, along with a video walkthrough of the latest model explaining our approach and inputs.

  • 1. Singer 2009, Pgs. 86-87.
  • 2. Jamison et al. 2006.
  • 3.
  • 4. From a passage discussing Population Services International (now called PSI), a GiveWell-recommended charity in 2008 and 2009:

    "Let us look closely at GiveWell’s best case: the aid initiative they recommend which looks the least likely to produce unintended effects. This is a campaign by an NGO called PSI to sell insecticide-treated bednets to poor people, primarily in Africa, with the aim of reducing malaria deaths. GiveWell estimates that donors can save a life that would not otherwise be saved through donations to PSI of between $623 and $2367. GiveWell settles on $820 as a reasonable estimate, and Singer reproduces these figures.

    For those who appreciate the challenges of aid, GiveWell’s methodology offers a case study of why the figures given cannot provide a reasonable answer to the Donor’s Question. GiveWell calculated its figures using only the following (PSI-supplied) data:

    • Number of bednets distributed;
    • Probability of bednets being used;
    • Probability of nets saving a life if used;
    • Budget for the project.

    This is all the information on which the “cost of saving a life” figure is based. Is this enough information to answer the Donor’s Question? Let’s say you had given $820 to PSI in 2008. Could you be reasonably confident that the morally salient outcome of your giving was that you had saved a life that would not otherwise have been saved? We can draw on just a few of the challenges of aid listed above to show why the answer is clearly 'no.' As always, the main confounding factors are the counterfactuals and unintended effects. This is apparent even just looking at PSI’s best case: the country where it claims to be most cost-effective, Madagascar.

    • PSI is one of several organizations distributing bednets in Madagascar, including UNICEF, various Red Cross affiliates, and the Madagascar Ministry of Health. Most of the money for these efforts comes from official sources: the Global Fund
      to Fight AIDS, Tuberculosis and Malaria, the World Bank, USAID, and so on. This multiplicity of agents raises the first set of counterfactuals: If PSI had not distributed a bed net that was bought with your donation, would another aid agency have distributed that bed net anyway? And if your private donation had been absent, would the Global Fund or the World Bank have compensated for the deficit? (If that compensatory money was drawn away from other Global Fund projects, what different impacts would those projects have had? Or would the Global Fund’s own donors have contributed more overall to make up for what you didn’t give?)
    • PSI is a 'social marketing' organization: it sells highly subsidized bednets instead of giving them away for free (as other NGOs and Madagascar’s health ministry do). PSI does this in the belief that the poor are more likely to get and use nets that are sold on the market. However several top experts (including ones Singer relies on for positive studies on aid) have been extremely critical of social marketing. Moreover a randomized evaluation of social marketing in Kenya found that selling bednets greatly reduces take-up of nets, does not get nets to those with greatest need, and does not lead to higher usage. It could be that your donation to PSI in 2008 hindered malaria-fighting efforts compared to what another NGO would have done with your money.
    • PSI does not publish a detailed budget (nor does any major aid NGO). It is therefore not possible to determine who ultimately received the money from your donation to PSI. Some proportion would have been spent on buying bednets to sell on to the poor. Yet some proportion of your donation might also have been diverted within Madagascar. In 2009 the president of Madagascar was overthrown with at least the acquiescence of the country’s powerful military. He was accused by his critics of 'massive corruption, nepotism, mismanagement and misuse of public resources.' It is possible that some of the money you gave to PSI might have been captured by and so empowered actors involved in the poor governance of Madagascar.
    • Madagascar’s Ministry of Health reports that it has distributed over one million free bednets since 2003, so it appears to be capable of distributing nets. One might wonder why foreign NGOs like PSI are distributing bednets in Madagascar at all. If the Malagasy government were wholly responsible for securing the basic health of the Malagasy people, would the people demand more from their own government?

    These concerns alone preclude the GiveWell figures on PSI from providing a reasonable answer to the Donor’s Question. And Singer’s cost estimates in The Life You Can Save inherit these weaknesses." Wenar 2010, Pg. 22-24.