Published: August 2018
Note: This page summarizes the rationale behind a GiveWell Incubation Grant to IDinsight. IDinsight staff reviewed this page prior to publication.
As part of GiveWell's work to support the creation of future top charities, IDinsight received a GiveWell Incubation Grant (GIG) of $1,196,729 in April 2018 to support the work of IDinsight's "GiveWell embedded team." This is a renewal of a May 2017 grant, at a similar level of funding, to support research that could lead to new top charities or otherwise influence our recommended allocations.
IDinsight supports and conducts rigorous evaluations of development interventions, often involving randomized controlled trials (RCTs), with an explicit focus on partnering with funders and policy makers to use data to inform key strategy decisions. This "decision-focused evaluation" model appears to us to be both uncommon and particularly aligned with GiveWell's goals. We therefore see working with IDinsight as a promising way of supporting the development of future GiveWell top charities.
About the grant
This grant is intended to support further field evaluations and monitoring projects by IDinsight's "GiveWell embedded team." It will fund IDinsight staff capacity for two larger projects and two smaller projects, with some flexible capacity for scoping new projects. We expect the larger projects to be:
- Conducting a midline evaluation for the team's RCT of New Incentives, and
- Either a) a larger-scale survey of the beneficiaries of our top charities to inform our approach to moral weights; or, if we decide not to scale that work up, b) a vitamin A deficiency survey.
We expect the smaller projects to be:
- Supporting an RCT of Charity Science Health's program, and
- Conducting a deep assessment of the quality of the Against Malaria Foundation (AMF)'s monitoring.
We don't feel we're yet in a position to evaluate the impact of the bulk of IDinsight's work for us so far, since its major research projects are still in progress. However, we expect to have a better sense in about a year of how impactful this work has been overall.
Our rough cost-effectiveness model implies that funding this research is ~10x as cost-effective as cash transfers to people living in extreme poverty.1
New Incentives and Charity Science Health RCTs
We've attempted to model the total evaluation costs (including IDinsight's staff costs, field costs, overhead, etc.) that would be necessary for New Incentives and Charity Science Health to potentially become top charities, as well as the likelihood that we end up reallocating substantial funding to those organizations. Based on these models, our current best guess is that these evaluation projects are roughly 7x to 11x as cost-effective as cash.2 Key inputs to these models include:
- The probability of different levels of cost-effectiveness for marginal giving opportunities to our top charities. While our forecasts are rough, we estimate an 85% chance that marginal dollars directed to our top charities next year will be spent at around 5x cash (roughly as cost-effective as AMF),3 a 10% chance they'll be spent at around 2x cash, and a 5% chance they'll be spent at around 10x cash.4 We hope to refine these estimates in the future.
- The probability of New Incentives and Charity Science Health achieving different levels of cost-effectiveness. Our current estimates of the probabilities that New Incentives and Charity Science Health achieve various levels of cost-effectiveness are:
| | Less than 5x cash | 5x-10x cash | More than 10x cash |
| New Incentives5 | 30% | 65% | 5% |
| Charity Science Health6 | 65% | 30% | 5% |
- The amount of money that would be reallocated if New Incentives and/or Charity Science Health turn out to be cost-effective giving opportunities. We currently estimate that we would reallocate a total of $20 million to $50 million to these organizations (e.g., between $4 million and $10 million per year over five years, which seems plausible to us).
These models don't include the costs of implementing New Incentives' and Charity Science Health's programs in the field (see "Remaining issues with our models" below).
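The expected-value logic behind these estimates can be sketched in code. This is a simplified illustration, not GiveWell's actual BOTEC: only the probability tiers and the $20-$50 million reallocation range come from the text above, while the representative multiplier assumed for each tier (3x, 7x, 12x) and the all-in evaluation cost are hypothetical placeholders chosen for illustration.

```python
# Simplified expected-value sketch of the RCT cost-effectiveness logic above.
# NOT GiveWell's actual BOTEC: per-tier representative multipliers and the
# evaluation cost are hypothetical placeholders, not figures from this page.

# Counterfactual use of marginal top-charity dollars:
# 85% chance at 5x cash, 10% at 2x, 5% at 10x.
counterfactual = 0.85 * 5 + 0.10 * 2 + 0.05 * 10  # expected multiplier (~4.95x)

# New Incentives outcome distribution from the table above, with a
# hypothetical representative multiplier assumed for each tier.
ni_tiers = [(0.30, 3.0), (0.65, 7.0), (0.05, 12.0)]  # (probability, assumed multiplier)

# Funding is only reallocated if the RCT shows the charity beating the
# counterfactual, so only tiers above the counterfactual contribute surplus.
gain_per_dollar = sum(p * (m - counterfactual) for p, m in ni_tiers if m > counterfactual)

reallocated = 35e6      # midpoint of the $20M-$50M range in the text
evaluation_cost = 6e6   # hypothetical all-in evaluation cost (not from the source)

value_created = gain_per_dollar * reallocated  # in cash-transfer-equivalent dollars
multiple_of_cash = value_created / evaluation_cost

print(f"gain per reallocated dollar: {gain_per_dollar:.2f}x cash")
print(f"implied multiple of cash:    {multiple_of_cash:.1f}x")
```

With these placeholder inputs the implied multiple lands in the same rough range as the 7x-11x estimate in the text, but the result is sensitive to the assumed evaluation cost and tier multipliers, which is why the actual BOTEC models each input separately.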
Vitamin A deficiency survey
Our vitamin A deficiency survey cost-effectiveness model follows the same general methodology as those discussed in the previous section and attempts to model how this research might affect GiveWell's funding allocation to vitamin A supplementation charities over the next several years. Our best guess is that this survey could be highly cost-effective (around 14x cash).7
Beneficiary preferences survey and AMF monitoring
We haven't attempted to model the value of the beneficiary preferences and AMF monitoring projects in as much detail as the projects above, since we think a significant proportion of their benefits are less tangible. In brief, the cases for those projects are:
- Beneficiary preferences survey: There is limited information available about how people living in poverty in low-income countries would make some of the kinds of tradeoffs involved in our cost-effectiveness models (e.g., how to value averting deaths at different ages, how to value averting death vs. improving income, etc.). We discuss the relevant literature on this question that we are aware of in this report. We expect IDinsight's survey to be a valuable input to our funding allocation decisions, and to help us make and explain some of the judgment calls in our cost-effectiveness analyses.
- AMF monitoring: We have previously written about issues with AMF's approach to monitoring its programs. We expect this research project to substantially improve our understanding and potentially lead to improvements in AMF's monitoring. Spending around $500,000 to get better information about the core monitoring of a charity to which we've directed about $110 million seems reasonable to us from a learning and assessment perspective.
Remaining issues with our models
Key remaining issues with our cost-effectiveness models include:
- They exclude program implementation costs for New Incentives and Charity Science Health (see footnote for discussion).8
- They treat IDinsight's evaluations as potentially causing major updates to our views on the cost-effectiveness of the programs evaluated. In practice, our current best guess is that New Incentives (for example) is about as cost-effective as AMF, so there is a decent chance that the RCT results will only update our view moderately. However, because GiveWell would not recommend New Incentives or Charity Science Health as top charities without these RCTs, this evaluative work would still arguably be causally responsible for the reallocation of funding to those charities.
- We have significant uncertainty about some of the core parameters discussed above, and think there are reasonable arguments for a wide range of values.9
Plans for follow-up
We plan to check in one year from now for an update on IDinsight's work and to decide whether to provide additional funding. We also plan to do a more in-depth review of this grant's impact once the core projects are complete. Key follow-up questions might include:
- Which projects were the focus of the grant, and what were the outcomes?
- Do the New Incentives and Charity Science Health RCTs seem to have been worthwhile investments?
- Did the beneficiary preferences survey affect the moral weights GiveWell staff enter into our cost-effectiveness analysis (CEA)? Did GiveWell publish a blog post on this work?
- Did the AMF monitoring project produce valuable information, and did GiveWell publish this information in some form (e.g. a blog post)?
- Does the impact of IDinsight's work so far warrant renewing our funding at a similar level next year?
For this grant, we are recording the following forecasts:
| Confidence | Prediction | By when |
| 65% | Following its RCT, we estimate that New Incentives is at least 5x as cost-effective as 2018 cash transfers via GiveDirectly. | August 2020 |
| 30% | Following its RCT, we estimate that Charity Science Health is at least 5x as cost-effective as 2018 cash transfers via GiveDirectly. | End of 2020 |
| 10% | We model the marginal cost-effectiveness of giving to our top charities at roughly 2x cash. | End of 2018 |
| 70% | We publish a blog post on IDinsight's work on AMF's monitoring. | February 2019 |
Sources
- GiveWell, 2018 Cost-Effectiveness Analysis (Version 1)
- GiveWell, Forecasting RFMF
- GiveWell, IDinsight BOTEC
Notes
1. GiveWell, IDinsight BOTEC, sheet "Main," cell B10.
2. See GiveWell, IDinsight BOTEC, sheet "Main," cells E2 and E3; see also sheets "New NI BOTEC" and "New CSH BOTEC" for our cost-effectiveness models of New Incentives and Charity Science Health, respectively.
3. See our January 2018 cost-effectiveness model, "Results" sheet, for our current estimates of our top charities' cost-effectiveness vs. cash.
4. See GiveWell, Forecasting RFMF, sheet "Josh," rows 3 through 11, for our 80% credible intervals for each of our top charities' room for more funding (RFMF) for each year through 2021. These estimates are based partially on this year's RFMF and partially on intuitive judgments based on our understanding of each charity. Taking into account these estimates and our rough predictions of GiveWell's money moved over the next few years, we think it's reasonable to predict a ~10% chance that at least $20 million per year would end up being spent at ~2x cash if we didn't end up funding New Incentives and Charity Science Health. However, we believe these estimates could easily change with further work.
5. GiveWell, IDinsight BOTEC, sheet "New NI BOTEC," rows 5 through 7.
6. GiveWell, IDinsight BOTEC, sheet "New CSH BOTEC," rows 5 through 7.
7. GiveWell, IDinsight BOTEC, sheet "IDi VAD BOTEC," cell K13. Note that this is a version of the analysis that contains edits from IDinsight, since we believe there was an error in our older version of the analysis.
8. The rationale for excluding these costs is that our current best guess puts the direct costs of these charities' programs at roughly as cost-effective as AMF, so we don't expect funding directed to those costs vs. to our top charities to impact our overall cost-effectiveness estimates of these RCT projects significantly. However, that assumption may turn out to be incorrect, and we might refine this piece of our model. This assumption would generally bias us toward spending more on evaluations even when implementation is expensive, and it doesn't take into account potential tradeoffs between funding evaluation projects and funding other potential GIGs (in the event that the GIG program overall becomes funding constrained at some point).
9. For example, we remain uncertain about the likelihood of there being different levels of funding gaps at our top charities and/or future opportunities, and we think it's possible our estimates could change substantially with further work.