We seek programs that are "cost-effective" in the sense of saving or improving lives as much as possible for as little money as possible. Cost-effectiveness is the single most important input in our evaluation of a program's impact. However, there are many limitations to cost-effectiveness estimates, and we do not assess programs solely based on their estimated cost-effectiveness. We build cost-effectiveness models primarily because:
- They help us compare programs or individual grant opportunities to others that we've funded or considered funding; and
- Working on them helps us ensure that we are thinking through as many of the relevant issues as possible.
We keep the following in mind when considering cost-effectiveness:
- Charities frequently cite misleading and overly optimistic figures for cost-effectiveness.
- Our cost-effectiveness estimates include administrative as well as program costs and generally look at the cost per life or life-year changed (death averted, year of additional income, etc.). They also include supplemental adjustments: simple up-or-down percentage adjustments that capture a wide range of effects a program might have on the world (from the effect of seasonal malaria chemoprevention on drug resistance to the herd immunity gained from a vaccine incentivization program) that would be challenging or impossible to estimate precisely. Even so, there are many ways in which our estimates do not account for all possible costs and benefits of a program.
- Our cost-effectiveness estimates rely on a number of inputs for which we have very limited data, as well as on informed guesses and subjective value judgments. One example of such an input is the likelihood that a charity's implementation of an intervention will have the same effect as was measured in a separate study of that intervention (external validity).
- Because of the many limitations of cost-effectiveness estimates, we consider other factors when recommending programs or grants. For example, confidence in an organization's track record and the strength of the evidence for an intervention generally also carry significant weight in our investigations.
- Programs can have many kinds of impact; a program may both save lives and improve them, and may also increase incomes, for example. We try to quantify all the different ways in which a program may be having an effect, such as lives saved per dollar or proportional increase in income per dollar donated. We do not measure impact solely in terms of disability-adjusted life-years (DALYs).
We elaborate on these points below.
Charities frequently cite misleading cost-effectiveness figures
In The Life You Can Save, Peter Singer discusses the fact that many common claims about cost-effectiveness are misleading. We quote at length from the book, with his permission. (Note that our excerpt does not include footnotes, which are in the original.)1
Organizations often put out figures suggesting that lives can be saved for very small amounts of money. WHO, for example, estimates that many of the 3 million people who die annually from diarrhea or its complications can be saved by an extraordinarily simple recipe for oral rehydration therapy: a large pinch of salt and a fistful of sugar dissolved in a jug of clean water. This lifesaving remedy can be assembled for a few cents, if only people know about it. UNICEF estimates that the hundreds of thousands of children who still die of measles each year could be saved by a vaccine costing less than $1 a dose. And Nothing But Nets, an organization conceived by American sportswriter Rick Reilly and supported by the National Basketball Association, provides anti-mosquito bed nets to protect children in Africa from malaria, which kills a million children a year. In its literature, Nothing But Nets mentions that a $10 net can save a life: "If you give $100 to Nothing But Nets, you've saved ten lives."
If we could accept these figures, GiveWell's job wouldn't be so hard. All we would have to do to know which organization can save lives in Africa at the lowest cost would be to pick the lowest figure. But while these low figures are undoubtedly an important part of the charities' efforts to attract donors, they are, unfortunately, not an accurate measure of the true cost of saving a life.
Take bed nets as an example. They will, if used properly, prevent people from being bitten by mosquitoes while they sleep, and therefore will reduce the risk of malaria. But not every net saves a life: Most children who receive a net would have survived without it. Jeffrey Sachs, attempting to measure the effect of nets more accurately, took this into account, and estimated that for every one hundred nets delivered, one child's life will be saved every year (Sachs estimated that on average a net lasts five years). If that is correct, then at $10 per net delivered, $1,000 will save one child a year for five years, so the cost is $200 per life saved (this doesn't consider the prevention of dozens of debilitating but nonfatal cases). But even if we assume that these figures are correct, there is a gap in them – they give us the cost of delivering a bed net, and we know how many bed nets "in use" will save a life, but we don't know how many of the bed nets that are delivered are actually used. And so the $200 figure is not fully reliable, and that makes it hard to measure whether providing bed nets is a better or worse use of our donations than other lifesaving measures.
[GiveWell] found similar gaps in the information on the effect of immunizing children against measles. Not every child immunized would have come down with the disease, and most who do get it, recover, so to find the cost per life saved, we must multiply the cost of the vaccine by the number of children to whom it needs to be given in order to reach a child who would have died without it. And oral rehydration treatment for diarrhea may cost only a few cents, but it costs money to get it to each home and village so that it will be available when a child needs it, and to educate families in how to use it.
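The bed-net arithmetic in Singer's example above can be sketched directly. All of the numbers below come from the quoted passage (Sachs's estimates as Singer reports them); they are illustrative, not GiveWell's current figures.

```python
# Toy sketch of the bed-net arithmetic in the Singer excerpt above.
# Numbers are taken from the quoted passage, not from current estimates.

cost_per_net = 10             # dollars per net delivered
nets_per_life_per_year = 100  # Sachs: ~1 child's life saved per year per 100 nets
net_lifespan_years = 5        # Sachs: a net lasts about five years on average

cost_of_100_nets = cost_per_net * nets_per_life_per_year  # $1,000
lives_saved = 1 * net_lifespan_years                      # 1 life/year for 5 years
cost_per_life_saved = cost_of_100_nets / lives_saved

print(cost_per_life_saved)  # 200.0 dollars per life saved
```

As Singer notes, even this figure omits the fraction of delivered nets that go unused, which is why the $200 estimate is not fully reliable.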
The cost-effectiveness figures we use
Early in our history, we relied largely on cost-effectiveness estimates provided by the Disease Control Priorities in Developing Countries report (DCP2).2 In 2011, we did a deep-dive investigation into one of these estimates and found major errors that caused the published figure to be off by around 100x.3 We have since changed our approach to cost-effectiveness: whenever we are estimating the cost-effectiveness of a contender for a GiveWell-directed grant, we perform our own analysis. If we go on to recommend a grant, we publish the full details online.
Below, we list some strengths of our estimates as compared to commonly cited figures, and then some weaknesses of our estimates that explain why we don't take them literally.
Strengths compared to commonly cited figures:
As discussed above, many commonly cited figures are misleading because they (a) account for only a portion of costs (for example, citing the cost of oral rehydration treatment but not the cost of delivering it), and/or (b) cite "cost per item delivered" figures as opposed to "cost per life changed" figures (for example, equating insecticide-treated malaria nets delivered with deaths averted, even though there are likely many nets delivered that do not avert deaths; it is not the case that everyone who receives a net would have otherwise died of malaria).
The cost-effectiveness estimates we use reduce these problems:
- We use individualized inputs to combine life-improving and death-averting impacts into a single figure that enables us to compare interventions with different impacts.
- We try to be thorough when accounting for the costs involved in programs we recommend. Planning costs, management costs, and distribution costs are all included in our estimates. We also try to account for the counterfactual value of resources provided by other funders involved in our charities’ programs.
- Estimates are generally based on actual costs and actual impact from past projects, to the extent this is possible. When we make projections, we attempt to gather all the information we can to inform such projections.
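To illustrate the first bullet above, combining different kinds of impact into a single comparable figure amounts to weighting each outcome and dividing by total cost. The weights and program figures below are hypothetical, chosen only to show the mechanics; they are not GiveWell's actual moral weights or estimates.

```python
# Hypothetical sketch: converting different outcomes into common "units of
# value" so that programs with different kinds of impact can be compared.
# All weights and program figures below are made up for illustration.

VALUE_PER_DEATH_AVERTED = 100          # hypothetical moral weight
VALUE_PER_INCOME_DOUBLING_YEAR = 1     # hypothetical moral weight

def value_per_dollar(deaths_averted, income_doubling_years, total_cost):
    """Combine life-saving and income benefits into value per dollar spent."""
    total_value = (deaths_averted * VALUE_PER_DEATH_AVERTED
                   + income_doubling_years * VALUE_PER_INCOME_DOUBLING_YEAR)
    return total_value / total_cost

# A mostly life-saving program vs. a mostly income-raising program,
# each spending the same (hypothetical) amount:
net_program = value_per_dollar(deaths_averted=50,
                               income_doubling_years=0,
                               total_cost=250_000)
cash_program = value_per_dollar(deaths_averted=0,
                                income_doubling_years=30_000,
                                total_cost=250_000)

print(net_program, cash_program)  # 0.02 vs. 0.12 units of value per dollar
```

The point of the single figure is only the comparison: with a common unit, two programs with entirely different outcome types can be ranked against each other.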
Limitations of GiveWell's cost-effectiveness analyses:
The estimates we use do not capture all considerations for cost-effectiveness. In particular:
- We generally draw effectiveness estimates from studies, and we would guess that studies often involve particularly well-executed programs in particularly suitable locations. We incorporate adjustments (called external validity adjustments) to account for this difference, but these are based on rough calculations.
- We don't include precisely calculated estimates for all possible impacts; our main model includes only direct and/or easily measurable impacts, although we also try to account for other impacts less precisely using supplemental adjustments.
- Estimates are often based on limited information and are therefore extremely rough. For example, prevalence and intensity estimates for areas that receive deworming treatments often vary from year to year, and our impression is that the estimates are generally of low quality.
- Estimates involve a number of subjective inputs, such as the relative value of increasing an individual's income compared to averting a death. We draw these inputs from our moral weights, which were updated in 2020 to include results from a survey of people demographically similar to those reached by GiveWell-funded programs. (You can read more about the survey here, and more about our moral weights update here). Donors may disagree with us about these inputs, depending on their individual values.
- We often have to make educated guesses about inputs like the replicability of the studies we rely on. Donors may also reasonably disagree about inputs like this.
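The adjustments described above can be pictured as multipliers applied to an effect size measured in studies. The numbers below are hypothetical and chosen only to show how rough percentage adjustments compound; they are not GiveWell's actual values for any program.

```python
# Hypothetical sketch of how rough up-or-down adjustments feed into an
# estimate. All figures are illustrative, not actual GiveWell inputs.

study_effect = 0.20       # e.g., a 20% mortality reduction measured in trials

# Each adjustment is a multiplier on the study effect:
external_validity = 0.90  # program settings may be less favorable than study settings
replicability = 0.80      # the underlying studies might not fully replicate

adjusted_effect = study_effect * external_validity * replicability
print(round(adjusted_effect, 3))  # 0.144, i.e., ~14.4% instead of 20%
```

Because each multiplier is itself a rough judgment call, small disagreements about individual adjustments can compound into meaningfully different bottom-line estimates.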
Because of the many limitations of cost-effectiveness estimates, cost-effectiveness is not the only factor we consider in recommending charities (consistent with an appropriately Bayesian approach to interpreting such estimates). Confidence in an organization's track record and the strength of the evidence for an intervention generally also carry significant weight in our investigations.
That said, we think producing these estimates is useful for identifying large differences in cost-effectiveness as well as encouraging individuals who are involved with GiveWell research to think through and quantify important questions related to understanding and assessing a charity's work.
How cost-effective is cost-effective?
As stated above, we feel that common claims of donors' ability to save a life for a few dollars are generally overly optimistic. Our top charities, which (based on 2021 figures) we estimate could save a life for roughly every $3,500 to $5,500 in donations, are the most cost-effective and evidence-backed programs that we know of. Full reviews of each of our top charities are linked from this page.
You can see our estimates per program, as well as our process to generate these estimates, on our page titled "How We Produce Impact Estimates."
Our process for creating and updating our models
The most up-to-date cost-effectiveness analyses for our top charities, as well as past analyses, can be found on this page. We update these models of top charity cost-effectiveness whenever we have new information that affects our inputs. We assess the cost-effectiveness of each individual grant opportunity to our top charities, then apply what we've learned in the course of that evaluation to our overall model.
For programs that are not top charities, we usually create a cost-effectiveness analysis for each program or grant opportunity we're considering. This analysis is then linked from the page we publish about the grant in question, or from the intervention report we publish about that program. You can find a list of pages on grants we've recommended since 2014 here, and a partial list of programs we've investigated here.
More on GiveWell's views on cost-effectiveness estimates
More writing on cost-effectiveness estimates:
- GiveWell's 2020 moral weights
- Why we can't take expected value estimates literally (even when they're unbiased)
- Errors in DCP2 cost-effectiveness estimate for deworming
- Some considerations against more investment in cost-effectiveness estimates
- Maximizing cost-effectiveness via critical inquiry
- Cost-effectiveness of nets vs. deworming vs. cash transfers, the programs implemented by our three 2012 top-rated charities.
- Deworming might have a huge impact, but it might also have close to zero impact. Our assessment of deworming charities relies on the expected value of deworming. This post discusses how we arrived at this estimate and incorporated uncertainty into our cost-effectiveness model.
- GiveWell website:
- Cost-effectiveness models for top charities: GiveWell's Cost-Effectiveness Analyses
- Program report: Distribution of insecticide-treated nets (ITNs) to prevent malaria
- Research on programs
- Interpreting the DALY metric
- Guide to "room for more funding" analysis
- How we produce impact estimates
- Database of all GiveWell grants
- Research on moral weights – 2019
- Jamison, Dean T. et al., eds. 2006. Disease control priorities in developing countries (2nd Edition) (PDF). New York: Oxford University Press.
- Singer, Peter. 2009. The life you can save: Acting now to end world poverty. New York: Random House.
1. Singer 2009, Pgs 86-87.
2. Jamison et al. 2006.