
# All Categories Blogs

Exploring how to get real change for your dollar.

### GiveWell’s money moved and web traffic in 2017

Fri, 06/29/2018 - 16:33

GiveWell is dedicated to finding outstanding giving opportunities and publishing the full details of our analysis. In addition to evaluations of other charities, we publish substantial evaluations of our own work. This post lays out highlights from our 2017 metrics report, which reviews what we know about how our research impacted donors. Please note:

• We report on “metrics years” that run from February through January; for example, our 2017 data cover February 1, 2017 through January 31, 2018.
• We differentiate between our traditional charity recommendations and the work of the Open Philanthropy Project, which became a separate organization in 2017 and whose work we exclude from this report.
• More context on the relationships between GiveWell, Good Ventures, and the Open Philanthropy Project can be found here.

Summary of influence: In 2017, GiveWell influenced charitable giving in several ways. The following table summarizes our understanding of this influence.

Headline money moved: In 2017, we tracked $117.5 million in money moved to our recommended charities. Our money moved only includes donations that we are confident were influenced by our recommendations.

Money moved by charity: Our nine top charities received the majority of our money moved. Our seven standout charities received a total of $1.8 million.

Money moved by size of donor: In 2017, the number of donors and amount donated increased across each donor size category, with the notable exception of donations from donors giving $1,000,000 or more. In 2017, 90% of our money moved (excluding Good Ventures) came from the 20% of our donors who gave $1,000 or more.

Donor retention: The total number of donors who gave to our recommended charities or to GiveWell unrestricted increased about 29% year-over-year to 23,049 in 2017. This included 14,653 donors who gave for the first time. Among all donors who gave in the previous year, about 42% gave again in 2017, up from about 35% who gave again in 2016.
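As a rough consistency check, the retention figures above can be decomposed with back-of-the-envelope arithmetic. The 2016 donor total is not stated in this post, so it is inferred from the growth figure, and the "reactivated" remainder is an assumption about lapsed pre-2016 donors returning:

```python
# Back-of-the-envelope decomposition of the 2017 donor figures quoted above.
# The 2016 donor total is inferred from the ~29% year-over-year growth figure
# rather than taken from the report, so treat these derived numbers as rough.

donors_2017 = 23_049        # total donors in metrics year 2017
first_time_2017 = 14_653    # donors who gave for the first time in 2017
yoy_growth = 0.29           # ~29% increase over 2016
retention_2017 = 0.42       # ~42% of 2016 donors gave again in 2017

donors_2016 = donors_2017 / (1 + yoy_growth)       # inferred, ~17,900
retained_from_2016 = retention_2017 * donors_2016  # inferred, ~7,500

# Donors not accounted for by first-timers or 2016 retention are presumably
# lapsed donors (who last gave before 2016) returning in 2017.
reactivated = donors_2017 - first_time_2017 - retained_from_2016

print(f"Estimated 2016 donors:      {donors_2016:,.0f}")
print(f"Estimated retained in 2017: {retained_from_2016:,.0f}")
print(f"Implied reactivated donors: {reactivated:,.0f}")
```

The small implied "reactivated" remainder (under a thousand donors) suggests the headline figures are mutually consistent.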

Our retention was stronger among donors who gave larger amounts or who first gave to our recommendations prior to 2015. Of larger donors (those who gave $10,000 or more in either of the last two years), about 73% who gave in 2016 gave again in 2017.

GiveWell’s expenses: GiveWell’s total operating expenses in 2017 were $4.6 million. Our expenses decreased from about $5.5 million in 2016 due to the Open Philanthropy Project becoming a separate organization in June 2017. We estimate that 67% of our total expenses ($3.1 million) supported our traditional top charity work and about 33% supported the Open Philanthropy Project. In 2016, we estimated that expenses for our traditional top charity work were about $2.0 million.

Donations supporting GiveWell’s operations: GiveWell raised $5.7 million in unrestricted funding (which we use to support our operations) in 2017, compared to $5.6 million in 2016. Our major institutional supporters and the six largest individual donors contributed about 49% of GiveWell’s operational funding in 2017.

Web traffic: The number of unique visitors to our website remained flat in 2017 compared to 2016 (when excluding visitors driven by AdWords, Google’s online advertising product).

For more detail, see our full metrics report (PDF).

The post GiveWell’s money moved and web traffic in 2017 appeared first on The GiveWell Blog.

### Announcing Zusha! as a standout charity

Thu, 06/21/2018 - 12:51

We’ve added the Georgetown University Initiative on Innovation, Development, and Evaluation (gui2de)’s Zusha! Road Safety Campaign (from here on, “Zusha!”) as a standout charity; see our full review here. Standout charities do not meet all of our criteria to be a GiveWell top charity, but we believe they stand out from the vast majority of organizations we have considered. See more information about our standout charities here.

Zusha! is a campaign intended to reduce road accidents. Zusha! supports distribution of stickers to public service vehicles encouraging passengers to speak up and urge drivers to drive more safely. We provided a GiveWell Incubation Grant to Zusha! in January 2017 and discussed it in a February 2017 blog post. For more information, see our full review. Interested donors can give to Zusha! by clicking “Donate” on that page.

The post Announcing Zusha! as a standout charity appeared first on The GiveWell Blog.

### June 2018 open thread

Wed, 06/13/2018 - 13:49

Our goal with hosting quarterly open threads is to give blog readers an opportunity to publicly raise comments or questions about GiveWell or related topics (in the comments section below). As always, you’re also welcome to email us at info@givewell.org or to request a call with GiveWell staff if you have feedback or questions you’d prefer to discuss privately. We’ll try to respond promptly to questions or comments. You can view our March 2018 open thread here.

The post June 2018 open thread appeared first on The GiveWell Blog.

### Allocation of discretionary funds from Q1 2018

Mon, 06/04/2018 - 14:46

In the first quarter of 2018, we received $2.96 million in funding for making grants at our discretion. In this post we discuss:

• The decision to allocate the $2.96 million to the Against Malaria Foundation (AMF) (70 percent) and the Schistosomiasis Control Initiative (SCI) (30 percent).
• Our recommendation that donors give to GiveWell for granting to top charities at our discretion so that we can direct the funding to the top charity or charities with the most pressing funding need.

For donors who prefer to give directly to our top charities, we continue to recommend giving 70 percent of your donation to AMF and 30 percent to SCI to maximize your impact.

Allocation of discretionary funds

The allocation of 70 percent of the funds to AMF and 30 percent to SCI follows the recommendation we have made, and continue to make, to donors. For more discussion of this allocation, see our blog post about allocating discretionary funds from the previous quarter. We also considered the following possibilities for this quarter:

Helen Keller International (HKI) for stopgap funding in one additional country

We discussed this possibility in our blog post about allocating discretionary funds from the previous quarter. After further discussing this possibility with HKI, our understanding is that (a) the amount of funding needed to fill this gap will likely be small relative to the amount of GiveWell-directed funding that HKI currently holds, and (b) we will have limited additional information in time for this decision round that we could use to compare this new use of funding to HKI’s other planned uses of funding. We will continue discussing this opportunity with HKI and may allocate funding to it in the future.

Our current expectation is that we will ask HKI to make the tradeoff between allocating the GiveWell-directed funding it holds to this new opportunity and continuing to hold the funds. Holding the funds gives the current programs more runway (they were originally designed to be funded for three years) and gives HKI more flexibility to fund highly cost-effective, unanticipated opportunities in the future. We believe that HKI is currently in a better position than we are to assess the cost-effectiveness of the opportunities it has, while we will seek to maximize cost-effectiveness in the longer run by assessing HKI’s track record of cost-effectiveness and comparing that to the cost-effectiveness of other top charities.

We remain open to the possibility that HKI will share information with us that will lead us to conclude that this new opportunity is a better use of funds than our current recommendation of 70 percent to AMF and 30 percent to SCI. In that case, we would allocate funds from the next quarter to fill this funding gap (and could accelerate the timeline on that decision if it were helpful to HKI).

Evidence Action’s Deworm the World Initiative for funding gaps in India and Nigeria

We spoke with Deworm the World about two new funding gaps it has due to unexpected costs in its existing programs in India and Nigeria. In India, the cost overruns total $166,000. Deworm the World has the option of drawing down a reserve of $5.5 million (from funds donated on GiveWell’s recommendation). The reserve was intended to backstop funds that were expected but not fully confirmed from another funder. Given the small size of the gap relative to the available reserves, our preference is for Deworm the World to use that funding and for us to consider recommending further reserves as part of our end-of-year review of our top charities’ room for more funding.

In Nigeria, there is a funding gap of $1.7 million in the states where Deworm the World is currently operating. Previous budgets assumed annual treatment for all children, and Deworm the World has since become aware of areas where worm prevalence is high enough that twice-yearly treatment is recommended. Our best guess is that AMF and SCI are more cost-effective than Deworm the World’s Nigeria program (see discussion in this post).
It is possible that because additional funding would go to support additional treatments in states where programs already operate, the cost to deliver these marginal treatments would be lower. We don’t currently have enough data to analyze whether that would significantly change the cost-effectiveness in this case.

Deworm the World also continues to have a funding gap for expansion to other states in Nigeria. We wrote about this opportunity in our previous post on allocating discretionary funding.

Malaria Consortium for seasonal malaria chemoprevention (SMC)

We continue to see a case for directing additional funding to Malaria Consortium for SMC, as we did last quarter. Our views on this program have not changed. For further discussion, see our previous post on allocating discretionary funding.

What is our recommendation to donors?

We continue to recommend that donors give to GiveWell for granting to top charities at our discretion so that we can direct the funding to the top charity or charities with the most pressing funding need. For donors who prefer to give directly to our top charities, we are continuing to recommend giving 70 percent of your donation to AMF and 30 percent to SCI to maximize your impact. The reasons for this recommendation are the same as in our previous post on allocating discretionary funding.
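For concreteness, the recommended split applied to this quarter’s discretionary funds works out as follows (simple arithmetic on figures from the post, not official grant amounts):

```python
# Applying the recommended 70/30 split to the $2.96 million in Q1 2018
# discretionary funds. Illustrative arithmetic only, not official figures.
total_usd = 2_960_000
amf_share, sci_share = 0.70, 0.30

amf_usd = amf_share * total_usd  # Against Malaria Foundation
sci_usd = sci_share * total_usd  # Schistosomiasis Control Initiative

print(f"AMF: ${amf_usd:,.0f}")  # AMF: $2,072,000
print(f"SCI: ${sci_usd:,.0f}")  # SCI: $888,000
```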

The post Allocation of discretionary funds from Q1 2018 appeared first on The GiveWell Blog.

### New research on cash transfers

Fri, 05/04/2018 - 12:21
Summary
• There has been a good deal of discussion recently about new research on the effects of cash transfers, beginning with a post by economist Berk Özler on the World Bank’s Development Impact blog. We have not yet fully reviewed the new research, but wanted to provide a preliminary update for our followers about our plans for reviewing this research and how it might affect our views of cash transfers, a program implemented by one of our top charities, GiveDirectly.
• In brief, the new research suggests that cash transfers may be less effective than we previously believed in two ways. First, cash transfers may have substantial negative effects on non-recipients who live near recipients (“negative spillovers”). Second, the benefits of cash transfers may fade quickly.
• We plan to reassess the cash transfer evidence base and provide our updated conclusions in the next several months (by November 2018 at the latest). One reason that we do not plan to provide a comprehensive update sooner is that we expect upcoming midline results from GiveDirectly’s “general equilibrium” study, a large and high-quality study explicitly designed to estimate spillover effects, will play a major role in our conclusions. Results from this study are expected to be released in the next few months.
• Our best guess is that we will reduce our estimate of the cost-effectiveness of cash transfers to some extent, but will likely continue to recommend GiveDirectly. However, major updates to our current views, either in the negative or positive direction, seem possible.

More detail below.

Background

GiveDirectly, one of our top charities, provides unconditional cash transfers to very poor households in Kenya, Uganda, and Rwanda.

Several new studies have recently been released that assess the impact of unconditional cash transfers, including a three-year follow-up study (Haushofer and Shapiro 2018, henceforth referred to as “HS 2018”) on the impact of transfers that were provided by GiveDirectly. Berk Özler, a senior economist at the World Bank, summarized some of this research in two posts on the World Bank Development Impact blog (here and here), noting that the results imply that cash transfers may be less effective than proponents previously believed. In particular, Özler raises the concerns that cash may:

1. Have negative “spillovers”: i.e., negative effects on households that did not receive transfers but that live near recipient households.
2. Have quickly-fading benefits: i.e., the standard of living for recipient households may converge to be similar to non-recipient households within a few years of receiving transfers.

Below, we discuss the topics of spillover effects and the duration of benefits of cash transfers in more detail, as well as some other considerations relevant to the effectiveness of cash transfers. In brief:

• If substantial spillover effects exist, they have the potential to significantly affect our cost-effectiveness estimates for cash transfers. We are uncertain what we will conclude about spillover effects of cash transfers after deeply reviewing all relevant new literature, but we expect that upcoming midline results from GiveDirectly’s “general equilibrium” study will play a major role in our conclusions. Our best guess is that the general equilibrium study and other literature will not imply that GiveDirectly’s program has large negative spillovers, but we remain open to the possibility that we should substantially negatively update our views after reviewing the relevant literature.
• Several new studies seem to find that cash may have little effect on recipients’ standard of living beyond the first year after receiving a transfer. Our best guess is that after reviewing the relevant research in more detail we will decrease our estimate of the cost-effectiveness of cash transfers to some extent. In the worst (unlikely) case, this factor could lead us to believe that cash is about 1.5-2x less cost-effective than we currently estimate.
Spillovers

Negative spillovers of cash transfers have the potential to lead us to substantially revise our estimates of the effects of cash; we currently assume that cash does not have major negative or positive spillover effects. At this point, we are uncertain what we will conclude about the likely spillover effects of cash after reviewing all relevant new literature, including GiveDirectly’s forthcoming “general equilibrium” study. Our best guess is that GiveDirectly’s current program does not have large spillover effects, but it seems plausible that we could ultimately conclude that cash has either meaningful negative spillovers or positive spillovers.

We will not rehash the methodological details and estimated effect sizes of HS 2018 in this post. For a basic understanding of the findings and methodological issues, we recommend reading Özler’s posts, the Center for Global Development’s Justin Sandefur’s post, GiveDirectly’s latest post, or Haushofer and Shapiro’s response to Özler’s posts. The basic conclusions that we draw from this research are:

• Under one interpretation of its findings, HS 2018 measures negative spillover effects that could outweigh the positive effects of cash transfers.[1]

[1] From Sandefur’s post: “Households who had been randomly selected to receive cash were much better off than their neighbors who didn’t. They had $400 more assets—roughly the size of the original transfer, with all figures from here on out in PPP terms—and about $47 higher consumption each month. It looked like an amazing success.

“But when Haushofer and Shapiro compared the whole sample in these villages—half of whom had gotten cash, half of whom hadn’t—they looked no different than a random sample of households in control villages. In fact, their consumption was about $6 per month less ($211 versus $217 a month).

“There are basically two ways to resolve this paradox:

“1) Good data, bad news. Cash left recipients only modestly better off after three years (lifting them from $217 to $235 in monthly consumption), and instead hurt their neighbors (dragging them down from $217 to $188 in monthly consumption). Taking the data at face value, this is the most straightforward interpretation of the results.

“2) Bad data, good news. Alternatively, the $47 gap in consumption between recipients and their neighbors is driven by gains to the former not losses to the latter. The estimates of negative side-effects on neighbors are driven by comparisons with control villages where—if you get into the weeds of the paper—it appears sampling was done differently than in treatment villages. (In short, the $217 isn’t reliable.)”

• We do not yet have a strong view on how likely it is that the negative interpretation of HS 2018’s findings is correct. This would require having a deeper understanding of what we should believe about a number of key methodological issues in HS 2018 (see the following footnote for two examples).[2]

[2] One methodological issue is how to deal with attrition, as discussed in Haushofer and Shapiro 2018, Pg. 9: “However, there is a statistically significant difference in attrition levels for households in control villages relative to households in treatment villages from endline 1 to endline 2: 6 percentage points more pure control households were not found at endline 2 relative to either group of households in treatment villages. In the analysis of across-village treatment effects and spillover effects we use Lee bounds to deal with this differential attrition; details are given below.”

Another potential issue, as described in Özler’s post: “The short-term impacts in Haushofer and Shapiro (2016) were calculated using within-village comparisons, which was a big problem for an intervention with possibility of spillovers, on which the authors had to do a lot of work earlier (see section IV.B in that paper) and in the recent paper. They got around this problem by arguing that spillover effects were small and insignificant. Of course, then came the working paper on negative spillovers on psychological wellbeing mentioned above and now, the spillover effects look sustained and large and unfortunately negative on multiple domains three years post transfers.

“The authors estimated program impacts by comparing T [treatment group] to S [spillover group], instead of the standard comparison of T to C [control group], in the 2016 paper because of a study design complication: researchers randomly selected control villages, but did not collect baseline data in these villages. The lack of baseline data in the control group is not just a harmless omission, as in ‘we lose some power, no big deal.’ Because there were eligibility criteria for receiving cash, but households were sampled a year later, no one can say for certain if the households sampled in the pure control villages at follow-up are representative of the would-be eligible households at baseline.

“So, quite distressingly, we now have two choices to interpret the most recent findings:

“1) We either believe the integrity of the counterfactual group in the pure control villages, in which case the negative spillover effects are real, implying that total causal effects comparing treated and control villages are zero at best. Furthermore, there are no ITT [intention to treat] effects on longer-term welfare of the beneficiaries themselves – other than an increase in the level of assets owned. In this scenario, it is harder to retain confidence in the earlier published impact findings that were based on within-village comparisons – although it is possible to believe that the negative spillovers are a longer-term phenomenon that truly did not exist at the nine-month follow-up.

“2) Or, we find the pure control sample suspect, in which case we have an individually randomized intervention and need to assume away spillover effects to believe the ITT estimates.”

HS 2018 reports that the potential bias introduced by methodological issues may be able to explain much of the estimated spillover effects.[3]

[3] Haushofer and Shapiro 2018, Pgs. 24-25: “These results appear to differ from those found in the initial endline, where we found positive spillover effects on female empowerment, but no spillover effects on other dimensions. However, the present estimates are potentially affected by differential attrition from endline 1 to endline 2: as described above, the pure control group showed significantly greater attrition than both treatment and spillover households between these endlines. To assess the potential impact of attrition, we bound the spillover effects using Lee bounds (Table 8).
This analysis suggests that differential attrition may account for several of these spillover effects. Specifically, for health, education, psychological well-being, and female empowerment, the Lee bounds confidence intervals include zero for all sample definitions. For asset holdings, revenue, and food security, they include zero in two of the three sample definitions. Only for expenditure do the Lee bounds confidence intervals exclude zero across all sample definitions. Thus, we find some evidence for spillover effects when using Lee bounds, although most of them are not significantly different from zero after bounding for differential attrition across treatment groups.”

• The mechanism for what may have caused large negative spillovers (if they exist) in HS 2018 is uncertain, though the authors provide some speculation (see footnote).[4] We would increase our credence in the existence of negative spillover effects if there were strong evidence for a particular mechanism.

[4] Haushofer and Shapiro 2018, Pg. 3: “We do not have conclusive evidence of the mechanism behind spillovers, but speculate it could be due to the sale of productive assets by spillover households to treatment households, which in turn reduces consumption among the spillover group. Though not always statistically different from zero, we do see suggestive evidence of negative spillover effects on the value of productive assets such as livestock, bicycles, motorbikes and appliances. We note that GiveDirectly’s current operating model is to provide transfers to all eligible recipients in each village (within village randomization was conducted only for the purpose of research), which may mitigate any negative spillover effects.”

One further factor that complicates application of HS 2018’s estimate of spillover effects is that GiveDirectly’s current program is substantially different from the version of its program that was studied in HS 2018. GiveDirectly now provides $1,000 transfers to almost all households in its target villages in Uganda and Kenya; the intervention studied by HS 2018 predominantly involved providing ~$287 transfers to about half of eligible (i.e., very poor) households within treatment villages, and HS 2018 measured spillover effects on eligible households that did not receive transfers.[5]

[5] See this section of our cash transfers intervention report.

GiveDirectly asked us to note that it now defaults to village-level (instead of within-village) randomization for the studies it participates in, barring exceptional circumstances. Since GiveDirectly’s current program provides transfers to almost all households in its target villages, spillovers of its program may largely operate across villages rather than within villages. These changes to the program and the spillover population of interest may lead to substantial differences in estimated spillover effects.

Fortunately, GiveDirectly is running a large (~650 villages) randomized controlled trial of an intervention similar to its current program that is explicitly designed to estimate the spillover (or “general equilibrium”) effects of GiveDirectly’s program.[6]

[6] From the registration for “General Equilibrium Effects of Cash Transfers in Kenya”: “The study will take place across 653 villages in Western Kenya. Villages are randomly allocated to treatment or control status. In treatment villages, GiveDirectly enrolls and distributes cash transfers to households that meet its eligibility criteria. In order to generate additional spatial variation in treatment density, groups of villages are assigned to high or low saturation. In high saturation zones, 2/3 of villages are targeted for treatment, while in low saturation zones, 1/3 of villages are targeted for treatment. The randomized assignment to treatment status and the spatial variation in treatment intensity will be used to identify direct and spillover effects of cash transfers.” Note that this study will evaluate a variant of GiveDirectly’s program that is different from its current program in that it will not provide transfers to almost all households in target villages. The study will estimate the spillover effects of cash transfers on ineligible (i.e., slightly wealthier) households in treatment villages, among other populations. Since GiveDirectly’s standard program now provides transfers to almost all households in its target villages, estimates of effects on ineligible households may need to be extrapolated to other populations of interest (e.g., households in non-target villages) to be most relevant to GiveDirectly’s current program.
Midline results from this study are expected to be released in the next few months. Since we expect GiveDirectly’s general equilibrium study to play a large role in our view of spillovers, we expect that we will not publish an overview of the cash spillovers literature until we’ve had a chance to review its results. However, we see the potential for negative spillover effects of cash as very concerning, and it is a high-priority research question for us; we plan to publish a detailed update that incorporates HS 2018, previous evidence for negative spillovers (such as studies on inflation and happiness), the general equilibrium study, and any other relevant literature in time for our November 2018 top charity recommendations at the latest.

Duration of benefits

Several new studies seem to find that cash may have little effect on recipients’ standard of living beyond the first year after receiving a transfer. Our best guess is that after reviewing the relevant research in more detail we will decrease our estimate of the cost-effectiveness of cash to some extent. In the worst (unlikely) case, this could lead us to believe that cash is about 1.5-2x less cost-effective than we currently estimate.

In our current cost-effectiveness analysis for cash transfers, we mainly consider two types of benefits that households experience due to receiving a transfer:

1. Increases in short-term consumption (i.e., immediately after receiving the transfer, very poor households are able to spend money on goods such as food).
2. Increases in medium-term consumption (i.e., recipients may invest some of their cash transfer in ways that lead them to have a higher standard of living in the 1-20 years after first receiving the transfer).

Potential spillover effects aside, our cost-effectiveness estimate for cash has a fairly stable lower bound because we place substantial value on increasing short-term consumption for very poor people, and providing cash allows for more short-term consumption almost by definition. In particular:

• Our current estimates are consistent with assuming little medium-term benefit of cash transfers. We estimate that about 60% of a typical transfer is spent on short-term goods such as eating more food, and count this as about 40-60% of the benefits of the program.[7] If we were to instead assume that 100% of the transfer was spent on short-term consumption (i.e., none of it was invested), our estimate of the cost-effectiveness of cash would become about 10-30% worse.[8] We think using the 100% short-term consumption estimate may be a reasonable and robust way to model the lower bound of the effects of cash given various measurement challenges (discussed below).

[7] For our estimate of the proportion of the benefits of cash transfers that come from short-term consumption increases, see row 30 of the “Cash” sheet in our 2018 cost-effectiveness model. For our estimate of the proportion of transfers that is spent on short-term consumption, we rely on results from GiveDirectly’s randomized controlled trial, which shows investments of $505.94 (USD PPP) (within villages, or $601.88 across villages) on a transfer of $1,525 USD PPP, or about one-third of the total. See Pg. 117 here and Pg. 1 here for total transfer size.

[8] See a version of our cost-effectiveness analysis in which we made this assumption here. The calculations in row 35 of the “Cash” tab show how assuming that 0% of the transfer is invested would affect staff members’ bottom line estimates.

• Nevertheless, our previous estimates of the medium-term benefits of cash transfers may have been too optimistic. Based partially on a speculative model of the investment returns of iron roofs (a commonly-purchased asset for GiveDirectly recipients), most staff assumed that about 40% of a transfer will be invested, and that those investments will lead to roughly 10% greater consumption for 10-15 years.[9] Some new research discussed in Özler’s first post suggests that there may be little return on investment from cash transfers within 2-4 years after the transfer, though the new evidence is somewhat mixed (see footnote).[10]

[9] See rows 5, 8, and 14, “Cash” sheet, 2018 Cost-Effectiveness Analysis – Version 1.

[10] See this section of Özler’s post: “This new paper and Blattman’s (forthcoming) work mentioned above join a growing list of papers finding short-term impacts of unconditional cash transfers that fade away over time: Hicks et al. (2017), Brudevold et al. (2017), Baird et al. (2018, supplemental online materials). In fact, the final slide in Hicks et al. states: ‘Cash effects dissipate quickly, similar to Brudevold et al. (2017), but different to Blattman et al. (2014).’ If only they were presenting a couple of months later…”

See also two other recent papers that find positive effects of cash transfers beyond the first year: Handa et al. 2018 and Parker and Vogl 2018. The latter finds intergenerational effects of a conditional cash transfer program in Mexico, so may be less relevant to GiveDirectly’s program. jQuery("#footnote_plugin_tooltip_10").tooltip({ tip: "#footnote_plugin_tooltip_text_10", tipClass: "footnote_tooltip", effect: "fade", fadeOutSpeed: 100, predelay: 400, position: "top right", relative: true, offset: [10, 10] }); Additionally, under the negative interpretation of HS 2018’s results, it finds that cash transfers did not have positive consumption effects for recipients three years post-transfer, though it finds a ~40% increase in assets for treatment households (even in the negative interpretation).11Haushofer and Shapiro 2018, Abstract: “Comparing recipient households to non-recipients in distant villages, we find that transfer recipients have 40% more assets (USD 422 PPP) than control households three years after the transfer, equivalent to 60% of the initial transfer (USD 709 PPP).”

Haushofer and Shapiro 2018, Pg. 28: “Since we have outcome data measured in the short run (~9 months after the beginning of the transfers) and in the long-run (˜3 years after the beginning of transfers), we test equality between short and long-run effects…Results are reported in Table 9. Focusing on the within-village treatment effects, we find no evidence for differential effects at endline 2 compared to endline 1, with the exception of assets, which show a significantly larger treatment effect at endline 2 than endline 1. However, this effect is largely driven by spillovers; for across-village treatment effects, we cannot reject equality of the endline 1 and endline 2 outcomes. This is true for all variables in the across-village treatment effects except for food security and psychological well-being, which show a smaller treatment effect at endline 2 compared to endline 1. Thus, we find some evidence for decreasing treatment effects over time, but for most outcome variables, the endline 1 and 2 outcomes are similar.” jQuery("#footnote_plugin_tooltip_11").tooltip({ tip: "#footnote_plugin_tooltip_text_11", tipClass: "footnote_tooltip", effect: "fade", fadeOutSpeed: 100, predelay: 400, position: "top right", relative: true, offset: [10, 10] }); Note that any benefits from owning iron roofs were not factored in to the consumption estimates in HS 2018.12Haushofer and Shapiro 2018, pgs. 32-33: “Total consumption…Omitted: Durables expenditure, house expenditure (omission not pre-specified for endline 1 analysis)” jQuery("#footnote_plugin_tooltip_12").tooltip({ tip: "#footnote_plugin_tooltip_text_12", tipClass: "footnote_tooltip", effect: "fade", fadeOutSpeed: 100, predelay: 400, position: "top right", relative: true, offset: [10, 10] }); If we imagine the potential worst case scenario implied by these results and assume that the ~40% of a cash transfer that is invested has zero benefits, our cost-effectiveness estimate would get about 2x worse.
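The “about 2x worse” figure follows directly from the benefit split described above. A minimal sketch of that arithmetic, using the midpoint of the 40-60% range (this is an illustration, not GiveWell’s actual cost-effectiveness model):

```python
# Hedged sketch of the "2x worse" arithmetic -- illustrative midpoint values
# from the post, not GiveWell's actual cost-effectiveness model.
consumption_share_of_benefits = 0.5  # short-term consumption counted as ~40-60% of benefits

# Worst case: the ~40% of a transfer that is invested yields zero benefit,
# so only the consumption share of the modeled benefits remains.
remaining_benefit_fraction = consumption_share_of_benefits
cost_per_unit_benefit_multiplier = 1 / remaining_benefit_fraction
print(cost_per_unit_benefit_multiplier)  # 2.0, i.e., "about 2x worse"
```

Using the ends of the 40-60% range instead gives multipliers of roughly 1.7x to 2.5x, which is why “about 2x” is a reasonable summary.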

Our best guess is that we’ll decrease our estimate for the medium-term effects of cash to some extent, though we’re unsure by how much. Challenging questions we’ll need to consider in order to arrive at a final estimate include:

• If we continue to assume that about 40% of transfers are invested, and that those investments do not lead to any future gains in consumption, then we are effectively assuming that money spent on investments is wasted. Is this an accurate reflection of reality, i.e. are recipients failing to invest transfers in a beneficial manner?
• Is our cost-effectiveness model using a reasonable framework for estimating recipients’ standard of living over time? Currently, we only estimate cash’s effects on consumption. However, assets such as iron roofs may provide an increase in standard of living for multiple years even if they do not raise consumption. How, if at all, should we factor this into our estimates?
• GiveDirectly’s cash transfer program differs in many ways from other programs that have been the subject of impact evaluations. For example, GiveDirectly provides large, one-time transfers whereas many government cash transfers provide smaller ongoing support to poor families. How should we apply new literature on other kinds of cash programs to our estimates of the effects of GiveDirectly?
Next steps

We plan to assess all literature relevant to the impact of cash transfers and to provide an update on our views on spillover effects, the duration of benefits, and other issues relevant to our understanding of cash transfers and their cost-effectiveness, in time for our November 2018 top charity recommendations at the latest.

Notes

1. ↑ From Sandefur’s post: “Households who had been randomly selected to receive cash were much better off than their neighbors who didn’t. They had $400 more assets—roughly the size of the original transfer, with all figures from here on out in PPP terms—and about $47 higher consumption each month. It looked like an amazing success.
“But when Haushofer and Shapiro compared the whole sample in these villages—half of whom had gotten cash, half of whom hadn’t—they looked no different than a random sample of households in control villages. In fact, their consumption was about $6 per month less ($211 versus $217 a month).
“There are basically two ways to resolve this paradox:
“1) Good data, bad news. Cash left recipients only modestly better off after three years (lifting them from $217 to $235 in monthly consumption), and instead hurt their neighbors (dragging them down from $217 to $188 in monthly consumption). Taking the data at face value, this is the most straightforward interpretation of the results.
“2) Bad data, good news. Alternatively, the $47 gap in consumption between recipients and their neighbors is driven by gains to the former not losses to the latter. The estimates of negative side-effects on neighbors are driven by comparisons with control villages where—if you get into the weeds of the paper—it appears sampling was done differently than in treatment villages. (In short, the $217 isn’t reliable.)”

2. ↑ One methodological issue is how to deal with attrition, as discussed in Haushofer and Shapiro 2018, Pg. 9: “However, there is a statistically significant difference in attrition levels for households in control villages relative to households in treatment villages from endline 1 to endline 2: 6 percentage points more pure control households were not found at endline 2 relative to either group of households in treatment villages. In the analysis of across-village treatment effects and spillover effects we use Lee bounds to deal with this differential attrition; details are given below.” Another potential issue is described in Özler’s post: “The short-term impacts in Haushofer and Shapiro (2016) were calculated using within-village comparisons, which was a big problem for an intervention with possibility of spillovers, on which the authors had to do a lot of work earlier (see section IV.B in that paper) and in the recent paper. They got around this problem by arguing that spillover effects were small and insignificant. Of course, then came the working paper on negative spillovers on psychological wellbeing mentioned above and now, the spillover effects look sustained and large and unfortunately negative on multiple domains three years post transfers.
“The authors estimated program impacts by comparing T [treatment group] to S [spillover group], instead of the standard comparison of T to C [control group], in the 2016 paper because of a study design complication: researchers randomly selected control villages, but did not collect baseline data in these villages. The lack of baseline data in the control group is not just a harmless omission, as in ‘we lose some power, no big deal.’ Because there were eligibility criteria for receiving cash, but households were sampled a year later, no one can say for certain if the households sampled in the pure control villages at follow-up are representative of the would-be eligible households at baseline.
“So, quite distressingly, we now have two choices to interpret the most recent findings:
“1) We either believe the integrity of the counterfactual group in the pure control villages, in which case the negative spillover effects are real, implying that total causal effects comparing treated and control villages are zero at best. Furthermore, there are no ITT [intention to treat] effects on longer-term welfare of the beneficiaries themselves – other than an increase in the level of assets owned. In this scenario, it is harder to retain confidence in the earlier published impact findings that were based on within-village comparisons – although it is possible to believe that the negative spillovers are a longer-term phenomenon that truly did not exist at the nine-month follow-up.
“2) Or, we find the pure control sample suspect, in which case we have an individually randomized intervention and need to assume away spillover effects to believe the ITT estimates.”

3. ↑ Haushofer and Shapiro 2018, Pgs. 24-25: “These results appear to differ from those found in the initial endline, where we found positive spillover effects on female empowerment, but no spillover effects on other dimensions. However, the present estimates are potentially affected by differential attrition from endline 1 to endline 2: as described above, the pure control group showed significantly greater attrition than both treatment and spillover households between these endlines. To assess the potential impact of attrition, we bound the spillover effects using Lee bounds (Table 8). This analysis suggests that differential attrition may account for several of these spillover effects. Specifically, for health, education, psychological well-being, and female empowerment, the Lee bounds confidence intervals include zero for all sample definitions. For asset holdings, revenue, and food security, they include zero in two of the three sample definitions. Only for expenditure do the Lee bounds confidence intervals exclude zero across all sample definitions. Thus, we find some evidence for spillover effects when using Lee bounds, although most of them are not significantly different from zero after bounding for differential attrition across treatment groups.”

4. ↑ Haushofer and Shapiro 2018, Pg. 3: “We do not have conclusive evidence of the mechanism behind spillovers, but speculate it could be due to the sale of productive assets by spillover households to treatment households, which in turn reduces consumption among the spillover group. Though not always statistically different from zero, we do see suggestive evidence of negative spillover effects on the value of productive assets such as livestock, bicycles, motorbikes and appliances. We note that GiveDirectly’s current operating model is to provide transfers to all eligible recipients in each village (within village randomization was conducted only for the purpose of research), which may mitigate any negative spillover effects.”

5. ↑ See this section of our cash transfers intervention report.

6. ↑ From the registration for “General Equilibrium Effects of Cash Transfers in Kenya”: “The study will take place across 653 villages in Western Kenya. Villages are randomly allocated to treatment or control status. In treatment villages, GiveDirectly enrolls and distributes cash transfers to households that meet its eligibility criteria. In order to generate additional spatial variation in treatment density, groups of villages are assigned to high or low saturation. In high saturation zones, 2/3 of villages are targeted for treatment, while in low saturation zones, 1/3 of villages are targeted for treatment. The randomized assignment to treatment status and the spatial variation in treatment intensity will be used to identify direct and spillover effects of cash transfers.” Note that this study will evaluate a variant of GiveDirectly’s program that is different from its current program in that it will not provide transfers to almost all households in target villages. The study will estimate the spillover effects of cash transfers on ineligible (i.e., slightly wealthier) households in treatment villages, among other populations. Since GiveDirectly’s standard program now provides transfers to almost all households in its target villages, estimates of effects on ineligible households may need to be extrapolated to other populations of interest (e.g., households in non-target villages) to be most relevant to GiveDirectly’s current program.

7. ↑ For our estimate of the proportion of the benefits of cash transfers that come from short-term consumption increases, see row 30 of the “Cash” sheet in our 2018 cost-effectiveness model. For our estimate of the proportion of transfers that is spent on short-term consumption, we rely on results from GiveDirectly’s randomized controlled trial, which shows investments of $505.94 (USD PPP) (within villages, or $601.88 across villages) on a transfer of $1,525 USD PPP, or about one-third of the total. See Pg. 117 here and Pg. 1 here for total transfer size.

8. ↑ See a version of our cost-effectiveness analysis in which we made this assumption here. The calculations in row 35 of the “Cash” tab show how assuming that 0% of the transfer is invested would affect staff members’ bottom line estimates.

9. ↑ See rows 5, 8, and 14, “Cash” sheet, 2018 Cost-Effectiveness Analysis – Version 1.

10. ↑ See this section of Özler’s post: “This new paper and Blattman’s (forthcoming) work mentioned above join a growing list of papers finding short-term impacts of unconditional cash transfers that fade away over time: Hicks et al. (2017), Brudevold et al. (2017), Baird et al. (2018, supplemental online materials). In fact, the final slide in Hicks et al. states: ‘Cash effects dissipate quickly, similar to Brudevold et al. (2017), but different to Blattman et al. (2014).’ If only they were presenting a couple of months later…” See also two other recent papers that find positive effects of cash transfers beyond the first year: Handa et al. 2018 and Parker and Vogl 2018. The latter finds intergenerational effects of a conditional cash transfer program in Mexico, so may be less relevant to GiveDirectly’s program.

11. ↑ Haushofer and Shapiro 2018, Abstract: “Comparing recipient households to non-recipients in distant villages, we find that transfer recipients have 40% more assets (USD 422 PPP) than control households three years after the transfer, equivalent to 60% of the initial transfer (USD 709 PPP).” Haushofer and Shapiro 2018, Pg. 28: “Since we have outcome data measured in the short run (~9 months after the beginning of the transfers) and in the long-run (~3 years after the beginning of transfers), we test equality between short and long-run effects…Results are reported in Table 9. Focusing on the within-village treatment effects, we find no evidence for differential effects at endline 2 compared to endline 1, with the exception of assets, which show a significantly larger treatment effect at endline 2 than endline 1. However, this effect is largely driven by spillovers; for across-village treatment effects, we cannot reject equality of the endline 1 and endline 2 outcomes. This is true for all variables in the across-village treatment effects except for food security and psychological well-being, which show a smaller treatment effect at endline 2 compared to endline 1. Thus, we find some evidence for decreasing treatment effects over time, but for most outcome variables, the endline 1 and 2 outcomes are similar.”

12. ↑ Haushofer and Shapiro 2018, Pgs. 32-33: “Total consumption…Omitted: Durables expenditure, house expenditure (omission not pre-specified for endline 1 analysis)”

The post New research on cash transfers appeared first on The GiveWell Blog.

### GiveWell’s outreach and operations: 2017 review and 2018 plans

Fri, 04/20/2018 - 13:48

This is the third of three posts that form our annual review and plan for the following year. The first two posts covered GiveWell’s progress and plans on research. This post reviews and evaluates GiveWell’s progress in outreach and operations last year and sketches out some high-level goals for the current year. A separate post, which we aim to release by the end of June 2018, will look at metrics on our influence on donations in 2017.

Summary

Outreach: Before 2017, outreach wasn’t a major organizational priority at GiveWell (more in this 2014 blog post). In our plans for 2017, we wrote that we planned to put more emphasis on outreach, but were at the early stages of thinking through what that might involve. In the second half of 2017, we experimented with a number of different approaches to outreach (more on the results below). In 2018, we plan to increase the resources we devote to outreach primarily by hiring a Head of Growth and adding staff to improve our post-donation follow-up with donors.

Operations: In 2017, we completed the separation of GiveWell and the Open Philanthropy Project and increased our operations capacity with three new hires. In 2018, our top priorities are to hire a new Director of Operations (which we have now done), maintain our critical functions, and prepare our systems for increased growth in outreach.

Outreach 2017 review and 2018 plans


We currently have one staff member, Catherine Hollander, who works on outreach full-time. Two others, Tracy Williams and Isabel Arjmand, each spend significant time on outreach. From August 2017, our Executive Director, Elie Hassenfeld, also started to allocate a significant amount of his time to outreach.

How did we do in 2017?

In 2017, we focused on experimentation. In brief, we found that:

• Advertising on podcasts has had strong results. Using the methodology described in this blog post, our best guess is that each dollar we spent on podcast advertising returned $5-14 in donations to our top charities.
• Increasing the consistency of our communication with members of the media had strong results for the time invested.
• Retaining a digital marketing consultant yielded strong results.
• Retaining a PR firm to generate media mentions did not have positive results.
• We’ve had a limited number of conversations with high net worth donors. We don’t yet have enough information to conclude whether this was a good use of time.

You can see our estimates of the five-year net present value of donations generated by each of these activities here. Overall, we spent approximately $200,000 and devoted significant staff time to this work. Our best estimate is that these efforts resulted in $2.5 million to $5.9 million in additional donations to our recommended charities.
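As a rough consistency check on these figures, the overall return per dollar of direct spending can be computed from the numbers reported above (this back-of-the-envelope arithmetic ignores the staff time, which was also significant):

```python
# Back-of-the-envelope return on 2017 outreach spending, using figures
# reported in the post; ignores staff time, which was also significant.
spend = 200_000                                       # approximate direct spending (USD)
donations_low, donations_high = 2_500_000, 5_900_000  # estimated additional donations (USD)

roi_low = donations_low / spend
roi_high = donations_high / spend
print(f"~${roi_low:.1f}-${roi_high:.1f} moved per $1 of direct spending")  # ~$12.5-$29.5
```

That roughly $12-30 per dollar across all activities brackets the $5-14 per dollar estimated for podcast advertising alone, since some activities (e.g., media communication) cost little money but returned donations.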

We conclude:

• New work on outreach had a high return on investment in 2017.
• Some activities, such as podcast advertising and digital marketing improvements, have shown particularly strong results and should be scaled up.

What are our priorities for 2018?

Our marketing funnel has three stages:

1. Awareness/acquisition: more people hear about GiveWell and visit the website,
2. Conversion: more people who visit the site donate, and
3. Retention: over time, donors maintain or increase their donations.

Our current working theory is that we should prioritize (though not exclusively) improving the bottom of this funnel (retention and conversion) before moving more people through it. We also plan to scale up the activities that worked well in 2017 and to continue experimenting with different approaches.
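The logic of prioritizing the bottom of the funnel can be sketched with a stylized model; all numbers below are hypothetical and purely for illustration, not GiveWell data:

```python
# Stylized marketing-funnel model with HYPOTHETICAL numbers (not GiveWell data).
visitors = 100_000       # stage 1: awareness/acquisition -- site visitors per year
conversion_rate = 0.01   # stage 2: conversion -- share of visitors who donate
avg_donation = 500       # assumed average first-year donation (USD)
retention_rate = 0.40    # stage 3: retention -- share of donors who give again each year

first_year_donations = visitors * conversion_rate * avg_donation
# Rough lifetime value under constant retention: geometric series, 1 / (1 - r)
lifetime_donations = first_year_donations / (1 - retention_rate)
print(first_year_donations, round(lifetime_donations))
```

In this toy model, raising the retention rate from 0.40 to 0.70 doubles lifetime donations without acquiring a single additional visitor, which is the intuition behind improving retention and conversion before moving more people through the funnel.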

Our primary outreach priorities (which we expect to achieve and devote substantial capacity to) for 2018 are:

1. Hire a Head of Growth to improve our efforts to acquire and convert new donors via our website. Over the long term, the Head of Growth will be responsible for digital marketing.

What does success look like? Hire a Head of Growth.

2. Improve the post-donation experience. We believe we have substantial room to improve our post-donation communication with donors. We have hired a consultant to help us improve our process.

What does success look like? Significantly improve our process for post-donation follow-up before giving season 2018.

At this point, we’re still in the earliest stages of figuring out how we’ll do this, so we don’t have concrete goals for the year beyond finalizing our plan in the next few months. Our stretch goal for the year is to achieve measurable improvement in our dollar retention rate and the lifetime value of each donor.

Our secondary outreach priorities (which we expect to achieve, but not devote substantial capacity to) for 2018 are:

1. Continue advertising on podcasts. This advertising was particularly successful in 2017. We want to systematically assess podcast advertising opportunities and increase our podcast advertising. We plan to spend approximately $250,000 to$350,000 on podcast advertising this year.

What does success look like? Advertise on new podcasts and measure results to decide how much to spend in 2019.

2. Receive coverage in major news outlets. This has led to increased donations in the past.

What does success look like? Pitch major news outlets on at least five stories in total and get at least one story covered.

3. Deepen relationships with the effective altruism community. We want to deepen our relationships with groups in the effective altruism community doing outreach, particularly to high net worth donors.

For a list of other potentially promising projects we’re unlikely to prioritize this year, see this spreadsheet.

Operations 2017 review and 2018 plans

In 2017, we increased our operations staff capacity, made a number of changes to our internal systems, and completed the separation of GiveWell and the Open Philanthropy Project. In addition to maintaining critical functions, our highest priorities for 2018 are to (i) appoint a new Director of Operations and (ii) make improvements to our processes across the board to prepare our systems for major growth in outreach.

How did we do in 2017?

We made a number of improvements to our operations. In brief:

• We completed the separation of GiveWell and the Open Philanthropy Project.
• Donations: We hired two new members of our donations team, which allowed us to process donations consistently notwithstanding increased volume. We also added Betterment and Bitpay (for Bitcoin) as donation options.
• Finance: We hired a Controller. We rolled out a few systems to improve the efficiency of our internal processes (Expensify, Bill.com, and others).
• Social cohesion: We created a regular schedule for visit days for remote staff and staff events to maintain cohesion.

In January 2018, Sarah Ward, our former Director of Operations, departed. Natalie Crispin (Senior Research Analyst) has been covering her previous responsibilities during our search for a new hire to take them on.

What are our priorities for 2018?

In the first half of 2018, we aim to move from a situation in which we were maintaining critical functions to positioning the organization to grow.

Our main priorities for the first half of 2018 are to:

1. Appoint a new Director of Operations (complete). In April 2018, we hired Whitney Shinkle as our new Director of Operations. Between January and April 2018, Natalie Crispin served as our interim Director of Operations.
2. Prepare our systems for major growth in outreach, which we expect to lead to increases in spending, staff, and donations.
3. Maintain critical operations across domains: donations, finance, HR, office, website, recruiting, and staff cohesion.

Major operations projects we aim to complete in the first half of 2018 include:

• A significant improvement in our approach to budgeting, making it easier for us to share updated actual spending versus budget.
• We retained a compensation consultant to help us benchmark GiveWell staff compensation to comparable organizations.
• We published our 2016 metrics report and plan to publish our 2017 money moved report by the end of June.

The post GiveWell’s outreach and operations: 2017 review and 2018 plans appeared first on The GiveWell Blog.

### Our 2018 plans for research

Thu, 04/19/2018 - 09:58

This is the second of three posts that form our annual review and plan for the following year. The first post reviewed our progress in 2017. The following post will cover GiveWell’s progress and plans as an organization. We aim to release our metrics on our influence on donations in 2017 by the end of June 2018.

Summary

Our primary research goals for 2018 are to:

1. Explore areas that may be more cost-effective than our current recommendations but don’t fit neatly into our current criteria by investigating (i) interventions aimed at influencing policy in low- and middle-income countries and (ii) opportunities to influence major aid agencies.
2. Find new top charities that meet our current criteria by (i) completing intervention reports for at least two interventions we think are likely to result in GiveWell top charities by the end of 2019, (ii) considering renewal of GiveWell Incubation Grants to current grantee organizations that may become top charities in the future and making new Incubation Grants, and (iii) developing and maintaining high-quality relationships with charities, funders, and influencers in the global health and development community.
3. Improve our internal processes to support the above goals. We plan to continue to delegate significant parts of our top charity update process to non-management staff and to improve our year-end process for making recommendations.
4. Continue following our top charities and address priority questions. We are devoting fewer resources than we have in the past to top charity updates. We plan to continue gathering up-to-date information to allow us to make high-quality allocation decisions for giving season, and to answer a small number of high-priority questions.

Our secondary goals (which we hope to achieve, but are lower priority than the goals above) are to:

1. Improve the quality of our decisions and transparency about our decision-making process.
2. Hire more flexible research capacity to increase our output.
3. Complete reviews of two new potential top charities.

We discuss each of these goals in greater depth below.

Goal 1: Explore areas that may be more cost-effective than our current recommendations

We’ve added five new top charities in the last two years. We now believe that our current top charities have more room for more funding than we are able to fill. This increases the relative value of identifying giving opportunities that are substantially more cost-effective than our current top charities (because identifying similarly cost-effective opportunities would merely crowd out marginal funding for our current top charities), even if we believe we have a lower chance of successfully identifying them.

We’re therefore prioritizing investigating the areas we believe have the highest chance of containing opportunities that are substantially more cost-effective than our current top charities.

The primary staff working on this are James Snowden (Research Consultant) and Josh Rosenberg (Senior Research Analyst).

Sub-goal 1.1: Assess interventions to influence policy in low- and middle-income countries

Our current top charities all implement direct-delivery interventions (although we believe that some leverage substantial domestic government funding). We think there’s a reasonable, intuitive case that philanthropists may, in some cases, have a greater impact by influencing government policy because (i) governments have access to regulatory interventions that are unavailable to philanthropists and (ii) there may be opportunities to help improve the allocation of large pools of funds. We’ve started work investigating advocacy for tobacco control (notes 1, 2, 3), lead paint regulation (1, 2, 3), and J-PAL’s Government Partnership Initiative (1, 2). More about why we’re prioritizing this area here.

What does success look like? We publish at least five reports on interventions to influence policy in low- and middle-income countries and prioritize one to three for deeper assessment.

Sub-goal 1.2: Improve our understanding of aid agencies

We believe there may be opportunities for GiveWell (or potential GiveWell grantees) to help improve the allocation of spending by aid agencies. We want to improve our understanding of what aid agencies spend their funds on, whether there are opportunities to improve this allocation, and whether GiveWell (or potential grantees) would be in a good position to assist.

What does success look like? As this project is at an early stage, we don’t yet have specific metrics to assess success.

Goal 2: Find new top charities that meet our current criteria

One of our most important long-term goals is to identify all charities that should be top charities under our current criteria. We are uncertain whether we will be able to identify organizations outside of our current scope of work that we believe are substantially better giving opportunities than our current top charities (Goal 1) and we want to ensure we’re recommending the best giving opportunities, even if we believe they’re similarly cost-effective to our current top charities.

The primary staff working on this are Caitlin McGugan (Senior Fellow), Andrew Martin (Research Analyst), Josh Rosenberg (Senior Research Analyst), Stephan Guyenet (Research Consultant), Sophie Monahan (Research Analyst), and Chelsea Tabart (Research Analyst).

Sub-goal 2.1: Produce two intervention reports

Intervention assessments are key to our research process. We generally only consider organizations that are implementing one of our priority programs—so designated upon our completion of an assessment of the intervention—for top charity status (an exception is if an organization has done rigorous evaluation of its own program, though in practice we have found this to be very rare). Last year, we completed two full intervention reports (as opposed to “interim” reports, which are less time-intensive). As we’re allocating a larger proportion of our capacity to Goal 1 than we did last year, we aim to maintain this level of output at two full intervention reports this year.

What does success look like? We complete and publish two full intervention reports on potential new priority programs.

Sub-goal 2.2: Complete grant renewal assessments and new reviews as part of GiveWell Incubation Grants

There are a number of GiveWell Incubation Grantees that we hope will become top charities in the future. We want to ensure we’re making good decisions about the renewals of their grants and to continue to support organizations in developing monitoring and evaluation to the point where they can be considered for top charity status.

In the past, we’ve made GiveWell Incubation Grants to promising opportunities that didn’t fit within our research priorities at the time. We want to remain open to investigating opportunities we’re not yet aware of.

What does success look like? Complete assessments of grant renewals for Results for Development and Charity Science: Health, and of a new grant for Evidence Action's work on iron and folic acid supplementation. Prioritize at least two new Incubation Grants and complete a thorough investigation of each.

Sub-goal 2.3: Develop and maintain high-quality relationships with charities, funders, and influencers in the global health and development community

We expect good relationships with relevant organizations to help us (i) increase the number and diversity of good-fit charities that express interest in applying for our recommendation, (ii) identify new interventions we should consider as potential GiveWell priority programs, and (iii) clearly communicate our approach to potential top charities, enabling them to determine whether they would be a good fit for our process.

While we feel our relationships with well-regarded global health and development implementers and funders have improved, we continue to feel limited in our ability to understand whether there are funding gaps for evidence-backed, highly cost-effective work within large international NGOs and multilateral aid organizations such as the Global Fund to Fight AIDS, Tuberculosis and Malaria.

What does success look like? We have at least one call or meeting with at least 60 different charities that we have not recommended or made an Incubation Grant to (last year, we had 42) and at least 100 such calls or meetings in total. We have at least five multi-program organizations with budgets of more than $50 million annually express interest in being considered for our top charity recommendation for a specific, promising program, if we invite them to apply. We prioritize research work beyond an initial, brief evidence assessment on at least five interventions that we became aware of through professional networks.

Goal 3: Continue to improve our internal processes

We believe there's room for improvement in a number of research processes to support the above goals, as well as our work following our current top charities. We don't expect the general public to see clear evidence of progress on these goals, as they largely relate to our internal operations.

The primary staff working on this are Elie Hassenfeld (Executive Director), Josh Rosenberg (Senior Research Analyst), and Natalie Crispin (Senior Research Analyst).

Sub-goal 3.1: Decrease the amount of time senior staff spend on top charity updates this year

In the past, much of the work on top charity updates has been the responsibility of Natalie Crispin (Senior Research Analyst). We plan to move a higher proportion of this work to other research staff to minimize the extent to which our institutional knowledge is dependent on any one individual.

What does success look like? Natalie spends less than 30 percent of her time on top charity updates, and, more subjectively, we believe at the end of 2018 that it would not cause significant disruption to further reduce Natalie's time on this work (i.e., to 15 percent) in 2019.

Sub-goal 3.2: Improve our process for publishing our year-end recommendations

In 2017, we started finalizing our charity recommendations for giving season later than was optimal.
This meant much of the work had to be completed in a short amount of time, and there was insufficient time to solicit feedback and criticism from our top charities. While this was partly a consequence of adding two new top charities, we want to be more disciplined this year about when we start preparation for our giving season recommendations.

What does success look like? With exceptions for cases where we need to wait (i.e., final room for more funding estimates and cost-per-treatment estimates for existing top charities, information related to new top charities, or information that isn't available until after July 31 and is crucial to our recommendations), finalize underlying research directly relevant to our 2018 recommendations by July 31; finalize all research and pages by November 1 (two-plus weeks before our publication deadline) to allow for (a) charity feedback and (b) internal debate.

Goal 4: Continue following our top charities and address priority questions

We are devoting fewer resources than we have in the past to top charity updates. We plan to continue gathering up-to-date information to allow us to make high-quality allocation decisions for giving season and to answer a small number of high-priority questions:

• For each top charity, we plan to review spending over the last year and new monitoring and evaluation reports; update our estimate of their cost per deliverable (e.g., deworming treatment, preventative malaria treatment, or loan provided); and complete an analysis of their room for more funding.
• For Helen Keller International (HKI), we plan to explore three major outstanding questions:
1. What is HKI's impact on coverage rates in vitamin A supplementation campaigns? To date, we have only supported HKI's work to fund campaigns that are unlikely to occur without funding from HKI, and we would like to understand whether we should expand this support to other campaigns that HKI works on.
2.
What other interventions are delivered alongside vitamin A, and how does that impact the cost-effectiveness of HKI's work?
3. What would it take to gather more data on current levels of vitamin A deficiency in locations where HKI works or may work in the future?
• We want to increase our confidence in the costs incurred by other actors for net distributions that are supported by the Against Malaria Foundation, one of our current top charities.
• We plan to speak with each of our standout charities for an update on their work.

The primary staff working on this are Natalie Crispin (Senior Research Analyst), Isabel Arjmand (Research Analyst), Andrew Martin (Research Analyst), Chelsea Tabart (Research Analyst), and Nicole Zok (Research Analyst).

What does success look like? By the end of November 2018, we complete updated reviews of each of our current top charities that include the information listed above. We also publish conversation notes from discussions with each current standout charity.

Goal 5 (Secondary): Improve the quality of our decisions and transparency about our decision-making process

We would like to improve the process by which we set our allocations during giving season. We don't know yet exactly what this will involve, but we intend to do some initial work to determine ways we can improve the quality of our decisions and transparency about them.

Goal 6 (Secondary): Hire more flexible research capacity to increase our output

We believe our research team is currently capacity-constrained. We would like to hire more flexible research generalists at all levels of seniority. We don't expect to spend more time on this goal than we already are, but we would be excited about hiring the right candidates. If you're interested in working for GiveWell, you can apply through our jobs page.
Goal 7 (Secondary): Complete reviews of at least two new top charities

We are prioritizing top charity reviews less highly this year than we have in previous years because we currently expect to identify significantly larger funding gaps than we will be able to fill. However, we have a shortlist of potential candidates for top charity status, and, if we have the capacity, would like to complete evaluations of one or two of these organizations.

What does success look like? Complete evaluations for one or two new potential top charities.

The post Our 2018 plans for research appeared first on The GiveWell Blog.

### Review of our research in 2017

Wed, 04/18/2018 - 13:13

This is the first of three posts that form our annual review and plan for the following year. This post reviews and evaluates last year's progress on our work of finding and recommending evidence-based, thoroughly vetted charities that serve the global poor. The following two posts will cover (i) our plans for GiveWell's research in 2018 and (ii) GiveWell's progress and plans as an organization. We aim to release our metrics on our influence on donations in 2017 by the end of June 2018.

Summary

We believe that 2017 was a successful year for GiveWell's research. We met our five primary goals for the year, as articulated in our plan post from the beginning of the year:

Our primary research goals for 2017 are to:
1. Speed up our output of new intervention assessments, by hiring a Senior Fellow and by improving our process for reviewing interventions at a shallow level.
2. Increase the number of promising charities that apply for our recommendation. Alternatively, we may learn why we have relatively few strong applicants and decide whether to change our process as a result. Research Analyst Chelsea Tabart will spend most of her time on this project.
3. Through GiveWell Incubation Grants, fund projects that may lead to more top charity contenders in the future and consider grantees No Lean Season and Zusha!
as potential 2017 top charities.
4. Further improve the robustness and usability of our cost-effectiveness model.
5. Improve our process for following the progress of current top charities to reduce staff time, while maintaining quality.

We also have some specific goals (discussed below) with respect to answering open questions about current top charities.

We achieved our five primary goals for the year:
1. Our intervention-related output was greater than in any past year, although we still see room for improvement in the pace with which we complete and publish this work (more). We hired a Senior Fellow and published nine full or interim intervention reports in 2017, compared to four in 2016.
2. We increased the number of promising charities that applied for our recommendation (more).
3. We added two new top charities: Evidence Action's No Lean Season (the first top charity to start as a GiveWell Incubation Grant recipient) and Helen Keller International's vitamin A supplementation program (which joined our list as a result of our charity outreach work). We continued to follow our current Incubation Grant recipients and made several new Incubation Grants to grow the pipeline of new top charities (more).
4. We made substantial improvements to our cost-effectiveness analysis (more).
5. We reduced the amount of staff time spent on following our current top charities. We also completed 17 of the 19 activities outlined in last year's plan (more).

We discuss progress on each of our primary goals below. For each high-level goal, we include (i) the subgoals we set in our last annual review, (ii) an evaluation of whether we met those subgoals, and (iii) a summary of key activities completed last year.

Goal 1: Speed up intervention assessments

In early 2017, we wrote:

In recent years, we have completed few intervention reports, which has limited our ability to consider new potential top charities.
We plan to increase the rate at which we form views on interventions this year by:

• Hiring a Senior Fellow (or possibly more than one). We expect a Senior Fellow to have a Ph.D. in economics, public health, or statistics, or equivalent experience, and to focus on in-depth evidence reviews and cost-effectiveness assessments of interventions that appear promising after a shallower investigation. In addition, Open Philanthropy Project Senior Advisor David Roodman may spend some more time on intervention-related work.
• Doing low-intensity research on a large number of promising interventions. We generally start with a two- to four-hour "quick intervention assessment," and then prioritize interventions for a 20-30 hour "interim intervention report" (example). We don't yet have a good sense of how many of these we will complete this year, because we're unsure both about how much capacity we will have for this work and about how many promising interventions there will be at each step in the process.
• Continuing to improve our systems for ensuring that we become aware of promising interventions and new relevant research as it becomes available. We expect to learn about additional interventions by tracking new research, particularly randomized controlled trials, in global health and development and by talking to select organizations about programs they run that they think we should look into.

How did we do? Achieved our goal. Due to our uncertainty about the capacity we could devote to intervention assessments, we did not have an explicit target for how many reports we expected to complete. In 2017, we published seven interim intervention reports and two full intervention reports, and completed ~30 quick evidence assessments (defined below). Our research output for 2017 was higher than in 2016, when we published one full intervention report and three interim intervention reports, and completed 30 quick evidence assessments.

What did we do?
Goal 2: Increase the pipeline of promising charities applying for our recommendation

In early 2017, we wrote:

We would like to better understand whether we have failed to get the word out about the potential value we offer or to communicate well about our process and charities' likelihood of success, or, alternatively, whether charities are making well-informed decisions about their fit with our criteria. (More on why we think more charities should consider applying for a GiveWell recommendation in this post.) This year, we have designated GiveWell Research Analyst Chelsea Tabart as charity liaison. Her role is to increase and improve our pipeline of top charity contenders by answering charities' questions about our process and which program(s) they should apply with, encouraging promising organizations to apply, and, through these conversations, understanding what the barriers are to more charities applying. We aim by the end of the year to have a stronger pipeline of charities applying, have confidence that we are not missing strong contenders, or understand how we should adjust our process in the future.

How did we do? Achieved our goal. More charities entered our top charity review process in 2017, although it's unclear whether this was due to our charity liaison activities. Five charities formally applied in 2017, compared to two in 2016 and four in 2015. One of those charities, Helen Keller International's vitamin A supplementation program, became a top charity. While we feel our relationships with well-regarded global health and development implementers and funders have improved, we continue to feel limited in our ability to understand whether there are funding gaps for evidence-backed, highly cost-effective work within large international NGOs and multilateral aid organizations such as the Global Fund to Fight AIDS, Tuberculosis and Malaria.

What did we do?
• We had at least one conversation with 42 organizations to introduce them to GiveWell's work in 2017, compared to 16 in 2016.
• Where organizations running multiple programs expressed interest in applying for our recommendation, we had several calls with them to help determine whether they should apply and which of their programs would be the most promising fit for a top charity evaluation. We had not offered this proactive support to organizations in the past.
• We hosted two charity-focused events: (i) a conference call for charities with GiveWell senior staff to present an update on our work as it relates to charities and to give them a chance to ask questions directly of our senior team and (ii) a networking event for our recommended organizations in London.
• We attended seven conferences on global health and development issues to broaden our network and perspective in subject-matter areas that GiveWell has not historically worked on.

Goal 3: Maintain Incubation Grants

In early 2017, we wrote:

We made significant progress on Incubation Grants in 2016 and plan in 2017 to largely continue with ongoing engagements, while being open to new grantmaking opportunities that are brought to our attention. Among early-to-mid-stage grants, we plan to spend the most time working with IDinsight and New Incentives (where our feedback is needed to move the projects forward), and a smaller amount of time on Results for Development and Charity Science: Health (where we are only following along with ongoing projects). Another major priority will be following up on two later-stage grantees, No Lean Season and Zusha!, groups that are contenders for a top charity recommendation in 2017. For No Lean Season, a program run by Evidence Action, our main outstanding questions are whether the program will have room for more funding in 2018 and whether monitoring will be high quality as the program scales. We have similar questions about Zusha!
and in addition are awaiting randomized controlled trial results that are expected later this year.

How did we do? Exceeded goal. As expected, our work last year focused on following up on current grantees. No Lean Season, one of our later-stage grantees, graduated to top charity status, and we made one grant to a new grantee, the Centre for Pesticide Suicide Prevention. We also made a number of grants to improve our understanding of the evidence base for our priority programs and deepened our partnership with IDinsight.

What did we do?

Goal 4: Improve our cost-effectiveness analysis

In early 2017, we wrote:

We plan to continue making improvements to our cost-effectiveness model and the data it draws on (separate from adding new interventions to the model, which is part of the intervention report work discussed above). Projects we are currently prioritizing include:

• Making it more straightforward to see how personal values are incorporated into the model and what the implications of those values are.
• Revisiting the prevalence and intensity adjustment that we use to compare the average per-person impact of deworming in places where our top charities work to the locations where the studies that found long-term impact of deworming were conducted. More in this post.
• Improving the insecticide-treated nets model by revisiting how it incorporates effects on adult mortality and adjustments for regions with different malaria burdens and changes in malaria burden over time.

How did we do? Achieved goal. We made substantial progress on improving our cost-effectiveness analysis in 2017.

What did we do?

• Moved to a system of making more frequent updates to our cost-effectiveness analysis. This has made it easier to identify which specific factors are driving changes in the estimated cost-effectiveness of our top charities.
• Revisited how we think about leverage and funging (how donating to our top charities influences how other funders spend their money) and updated our cost-effectiveness analysis accordingly.
• Published a report on how other global actors approach the difficult moral tradeoffs we face.
• Prior to announcing our 2017 recommendations, we performed a sensitivity check on our cost-effectiveness analysis to identify how sensitive our final outputs were to different uncertain inputs. This has helped us identify which inputs we should prioritize additional research on, and we believe it has made our communication more transparent, particularly around our personal values.
• Revisited and updated our prevalence and intensity adjustments for deworming.
• Deprioritized improving how our insecticide-treated net model incorporates effects on adult mortality. A limited number of conversations with malaria experts made us less confident that there was informative research on the question that would improve the accuracy of our models.
• Deprioritized making adjustments for subnational regions with different malaria burdens because it would take substantial time to deeply understand the assumptions informing the subnational models we have seen. We believe this remains an important weakness of our model and that it limits our ability to make high-quality decisions about prioritization among different regional funding gaps.

Goal 5: Improve our process for following top charities

In early 2017, we wrote: "In 2017, we plan to have a single staff member do most of this work and expect it to take a half to two-thirds of a full-time job. Three other staff will spend a small portion of their time, totaling approximately the equivalent of one full-time job, on this work."

How did we do? Achieved goal.
We estimate that it took about 40 percent of the time of the staff member who focused on this work, plus a small portion of four other staff members' time, totaling at most (and likely somewhat less than) the equivalent of a full-time job (roughly half the time we dedicated to top charity updates in 2016). We believe we maintained or increased the quality of the top charity updates, as we completed or made major progress on all but two of the activities and questions outlined in last year's plan.

What did we do? The table below summarizes our progress on each of the activities and open questions outlined in last year's plan.

Evidence Action's Deworm the World Initiative
• Goals and open questions from 2017 plan: "We have now followed these groups for several years and do not have major outstanding questions about them. We plan to ask for updates on financial information, monitoring results, and room for more funding and have regular phone calls with them to learn about operational changes that might lead us to ask additional questions."
• Did we meet our goal? Yes
• What did we do? We had regular phone calls, received up-to-date financial information, updated room for more funding, and reviewed new monitoring information from Nigeria, Vietnam, Kenya, and Ethiopia (see rows 11-20 and an overview of what we learned).

GiveDirectly
• Goals and open questions from 2017 plan: "We have now followed these groups for several years and do not have major outstanding questions about them. We plan to ask for updates on financial information, monitoring results, and room for more funding and have regular phone calls with them to learn about operational changes that might lead us to ask additional questions."
• Did we meet our goal? Yes
• What did we do? We had regular phone calls, received up-to-date financial information, updated room for more funding, and reviewed new monitoring information from Kenya (1, 2). (Overview of what we learned.)
Schistosomiasis Control Initiative
• Goals and open questions from 2017 plan: "We have now followed these groups for several years and do not have major outstanding questions about them. We plan to ask for updates on financial information, monitoring results, and room for more funding and have regular phone calls with them to learn about operational changes that might lead us to ask additional questions."
• Did we meet our goal? Yes
• What did we do? We had regular phone calls, received up-to-date financial information, updated room for more funding, and reviewed new monitoring information from 2016 programs in a number of countries. (Monitoring information, overview of what we learned.)

Against Malaria Foundation (AMF)
• Goals and open questions from 2017 plan: "Will AMF's monitoring processes be high quality?"
• Did we meet our goal? Yes
• What did we do? We commissioned IDinsight, an organization with which we are partnering as part of our Incubation Grants program, to observe post-distribution surveys in Malawi and Ghana and report their findings.

• Goals and open questions from 2017 plan: "Going forward, AMF aims to fund larger distributions and commit funding further ahead of when a distribution is scheduled to occur than it has, for the most part, done in the past. Will this increase the extent to which AMF funds displace funds from other sources, or will there continue to be evidence that AMF's funds are largely adding to the total number of nets distributed?"
• Did we meet our goal? Partial
• What did we do? We learned relatively little about the displacement/fungibility question because AMF signed relatively few new agreements to fund long-lasting insecticide-treated net distributions in 2017. There was an update to how AMF will be tracking displacement, described in the second paragraph here.

• Goals and open questions from 2017 plan: "In order to estimate AMF's room for more funding, we will seek out information on the location and size of funding gaps for mass net distribution campaigns from AMF, the African Leaders Malaria Alliance, and possibly other funders of nets.
As we have in the past, we will use this information in conjunction with conversations with AMF about non-funding bottlenecks to its ability to fill various gaps."
• Did we meet our goal? Yes
• What did we do? We got updates on AMF's room for more funding, as summarized in this post.

The END Fund's deworming program
• Goals and open questions from 2017 plan: "We have not yet seen monitoring on par with that from our other top charities from the END Fund. We expect results from coverage surveys from END Fund programs this year. Will these surveys be high quality and demonstrate that the END Fund is funding successful programs?"
• Did we meet our goal? Yes
• What did we do? We saw some monitoring from END Fund programs; previously our recommendation of the END Fund was based on specific monitoring plans that we found credible (more here).

• Goals and open questions from 2017 plan: "We have not yet tried to compare the cost-effectiveness of the END Fund to our other top charities in our cost-effectiveness model. We will be seeking additional information from the END Fund about cost per treatment and baseline infection rates."
• Did we meet our goal? Yes
• What did we do? We significantly improved our understanding of the END Fund's cost per treatment and the baseline prevalence in areas where the END Fund works. We completed a cost-effectiveness analysis, though we continue to have lower confidence in our estimates than we do for the deworming organizations that we have recommended for several years.

• Goals and open questions from 2017 plan: "Questions around room for more funding: the extent to which funding due to GiveWell's recommendation increases the amount that the END Fund spends on deworming versus other programs, actual and projected revenue from other sources, and what deworming grantmaking opportunities the END Fund expects to have."
• Did we meet our goal? Yes
• What did we do? We estimated the extent to which funding due to GiveWell's recommendation increases the amount that the END Fund spends on deworming versus other programs, discussed here.

• Goals and open questions from 2017 plan: "We visited the END Fund's programs in Rwanda and Idjwi island, DRC in January 2017 and will publish notes and photos from our visit shortly."
• Did we meet our goal? Yes
• What did we do? We posted notes and photos from our site visit here.
Malaria Consortium’s seasonal malaria chemoprevention program “Further research on the evidence of effectiveness, cost-effectiveness, and potential downsides of seasonal malaria chemoprevention (SMC) (due to time constraints we have not yet completed a full intervention report, though we felt sufficiently confident in the intervention to recommend Malaria Consortium).” Yes We reviewed each of the RCTs included in the Cochrane review for seasonal malaria chemoprevention, and possible negative/offsetting impacts. We updated our interim intervention report to a full intervention report and added new information to our cost-effectiveness analysis. Our key conclusions did not change substantially and SMC remains a priority program. “Getting a better understanding of the methodology Malaria Consortium uses for estimating coverage rates.” Yes We spoke with Malaria Consortium to understand how they measure coverage and updated our cost-effectiveness analysis to account for different levels of coverage in the Malaria Consortium program relative to the headline results of the RCTs in the Cochrane review (conversation notes here). “Completing a more in-depth room for more funding analysis for the program for 2018 than we did for 2017.” Yes We completed a significantly more in-depth room for more funding analysis than we had previously (more here). “We may visit a Malaria Consortium seasonal malaria chemoprevention program in summer 2017.” No We did not conduct a site visit. Sightsavers’ deworming program “We expect to make limited progress this year because the first deworming mass drug administration funded with GiveWell-influenced funds is not expected to take place until September at the earliest and monitoring results aren’t expected until early 2018. 
Because Sightsavers has done fairly little deworming in the past year, we don't expect to be able to learn much from its ongoing programs."
• Did we meet our goal? Exceeded
• What did we do? In 2017, as expected, we learned relatively little about the performance of Sightsavers' deworming programs, because programs funded with GiveWell-directed funds were at early stages. We did not expect to receive any monitoring results from programs funded with GiveWell-directed funds; however, Sightsavers shared a coverage survey from Guinea with us earlier than expected. The survey found middling coverage results.

• Goals and open questions from 2017 plan: "Getting more information from Sightsavers about baseline prevalence and intensity of worm infections in the areas it is working, to inform our cost-effectiveness analysis."
• Did we meet our goal? Yes
• What did we do? We significantly improved our understanding of Sightsavers' cost per treatment and the baseline prevalence in areas where Sightsavers works (which is used in our cost-effectiveness analysis).

• Goals and open questions from 2017 plan: "Using Sightsavers' budget for the projects and planned treatment numbers to improve our estimate of the cost per treatment – another input into our cost-effectiveness analysis."
• Did we meet our goal? Yes
• What did we do? We significantly improved our understanding of Sightsavers' cost per treatment and the baseline prevalence in areas where Sightsavers works (which is used in our cost-effectiveness analysis).

• Goals and open questions from 2017 plan: "Completing a room for more funding analysis for 2018."
• Did we meet our goal? Yes
• What did we do? We completed a room for more funding analysis (more here).

Standout charities
• Goals and open questions from 2017 plan: "We plan to have at least one phone call with each of these groups to discuss whether anything has changed that might lead us to reopen consideration of the organization as a potential top charity."
• Did we meet our goal? Yes
• What did we do? We spoke with each standout charity. Conversation notes here: Development Media International, Food Fortification Initiative, Global Alliance for Improved Nutrition's Universal Salt Iodization program, Iodine Global Network, Living Goods, and Project Healthy Children.
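Several of the updates above turn on cost-per-treatment estimates. As a rough sketch of the underlying arithmetic only (the figures below are hypothetical, not GiveWell's actual numbers, and the real analysis includes many more adjustments), cost per treatment divides total program cost, including costs borne by other actors such as governments, by the number of treatments delivered:

```python
def cost_per_treatment(charity_spending, other_actor_costs, treatments_delivered):
    """Cost per treatment, counting costs borne by all actors.

    All inputs are hypothetical illustrations, not GiveWell's estimates.
    """
    if treatments_delivered <= 0:
        raise ValueError("treatments_delivered must be positive")
    return (charity_spending + other_actor_costs) / treatments_delivered

# Hypothetical example: $1.2M of charity spending, $0.3M of government
# staff time and drug costs, 3 million deworming treatments delivered.
print(cost_per_treatment(1_200_000, 300_000, 3_000_000))  # 0.5 dollars per treatment
```

This is why the Sightsavers and END Fund rows pair cost-per-treatment work with baseline prevalence: the ratio alone says nothing about how much burden each treatment averts.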
The post Review of our research in 2017 appeared first on The GiveWell Blog.

### Allocation of discretionary funds from Q4 2017

Fri, 04/06/2018 - 12:02

In the fourth quarter of 2017, we received $5.6 million in funding for making grants at our discretion. In this post we discuss:

• The decision to allocate the $5.6 million to the Schistosomiasis Control Initiative (SCI).
• Our recommendation that donors give to GiveWell for granting to top charities at our discretion so that we can direct the funding to the top charity or charities with the most pressing funding need. For donors who prefer to give directly to our top charities, we continue to recommend giving 70 percent of your donation to AMF and 30 percent to SCI to maximize your impact.

We noted in November that we would use funds received for making grants at our discretion to fill the next highest priority funding gaps among our top charities. We also noted that our best guess at the time was that we would give 70 percent to the Against Malaria Foundation (AMF) and 30 percent to SCI. Based on information received since November, described below, we allocated the $5.6 million to SCI, rather than dividing these funds between AMF and SCI as previously expected. GiveWell's Executive Director, Elie Hassenfeld, the fund advisor on the Effective Altruism Fund for Global Health and Development, also recommended that the fund grant out the $1.5 million that it held to SCI.

Update on AMF

AMF has been somewhat slower to make commitments to fund distributions of insecticide-treated nets than we expected, and our best guess is that its currently available funding will be sufficient to fund all distributions that it is likely to commit to before our next major round of funding allocations in November. Notwithstanding that fact, we continue to believe that AMF has room for more funding. Additional funds would reduce the risk that AMF's progress will be slowed if it is able to sign several major agreements in the next few months, which, while somewhat unlikely in our estimation, remains a possibility.

We wrote in November 2017:

Progress at signing new agreements was slow in 2017, leaving AMF with a large amount of funds on hand.
We attribute this to the fact that countries spent much of 2017 applying for Global Fund funding; decisions about how much funding would be allocated to LLIN distributions for 2018-2020, and what the funding gaps would be for LLINs, were being finalized in many countries as of October 2017.

AMF noted that it did not commit to funding distributions earlier in part because GiveWell had asked AMF not to make funding commitments until the size of funding gaps was known. Our expectation had been that the last couple months of 2017 and the first months of 2018 would be a period in which AMF would commit a significant portion of its available funding to help fill these gaps, because we expected countries to have more visibility into their funding gaps following finalization of Global Fund commitments around October 2017. This has not been the case. AMF recently told us that most of the countries it was in discussions with did not have visibility into their funding gaps until December 2017, and in some cases it has taken longer than that.

In making the decision regarding the fourth quarter discretionary funds, we relied on a document from AMF detailing its signed and potential agreements as of early February. The document noted that AMF had committed to one new distribution since October, in Ghana in 2018. This distribution will cost about $8 million. (We have since learned that AMF has also committed to additional distributions in Papua New Guinea in 2019 and 2020, costing $5.2 million and signed in November 2017, and in Malawi in 2018, costing $10.1 million and signed in mid-February.)

AMF’s pipeline of potential future distributions includes both repeat distributions with partners and in countries it has worked with in the past, and distributions with new potential partners. AMF has decided to move somewhat slowly with both types of partners. In the case of repeat partners, for several distributions, AMF is waiting to verify that the partner is able to deliver all requested data from distributions that took place in 2017 (and the monitoring that follows each distribution) before agreeing to fund the next round of nets to be delivered in 2020. These decisions seem very reasonable to us, but do result in a short-term decrease in the amount of funding we expect AMF to be able to absorb. When it is ready to do so, AMF could potentially commit up to $50 million to distributions in this category.

For the largest potential new partnership that AMF is considering, there are some concerns about in-country capacity; AMF expects to commit to a smaller-scale distribution (with an estimated cost of $5 million) with the partner and assess the results of that distribution before committing to a larger-scale distribution. AMF is also considering two additional opportunities to commit $5 to $7 million each to distributions with new partners. It could potentially commit tens of millions of dollars to one or more of these countries in future rounds if the initial engagements go well. AMF is also in several early-stage conversations about potential distributions with new partners.

According to the document that we relied on for this decision, AMF held $64 million in uncommitted funds, of which $15 million was set aside for “agreement imminent” distributions, leaving $49 million “available to allocate.” Accounting for the additional agreements for Papua New Guinea and Malawi noted above, we estimate that AMF had $49 million in uncommitted funds and $45 million available to allocate as of late February.

The combination of somewhat slower progress in signing distributions than expected and our updated understanding of AMF’s pipeline led us to conclude that AMF continues to have room for more funding, but that SCI’s funding needs were more urgent. Our best guess was that the $5.6 million from GiveWell discretionary funds and $1.5 million from the Effective Altruism Fund would have a greater impact if allocated to SCI.

Update on SCI

In November, we recommended that donors give 30 percent to SCI because SCI had additional room for more funding to sustain its work in its current countries of operation and would need to scale down without additional funding. SCI recently confirmed to us that it would need to cut budgets if it did not receive additional funds before setting its annual budget for April 2018 to March 2019 in March 2018. With AMF having a less urgent funding need than previously expected, we concluded that the best use of the fourth quarter discretionary funds would be to allocate them to SCI. It is also the case that in the last few months of 2017 SCI received less funding than we projected, both from donors influenced by GiveWell’s research and from other donors.

We believe that SCI will continue to have room for more funding after the two grants totaling about $7 million. Recently, SCI sent us an early version of its budget for the 2018-19 budget year.
It includes funding requests from each country program, estimates of country program requests in cases where the country has not yet submitted a request, and estimates of SCI spending on central costs and research costs. We estimate that, assuming the same budget in each of the next three years, SCI’s funding gap for that period, after receiving the grants discussed above, is about $9 million. SCI could likely absorb funding beyond that level, as the budget does not include opportunities it has to expand to additional countries. It also assumes that SCI’s other major funders will continue their support at the same level, and some of this funding may be in doubt. We note that about 13 percent of treatments that would be delivered at this scale would be for adults (discussion of this here).

Other possibilities that we decided against

Helen Keller International (HKI) for stopgap funding in one additional country

In December, Good Ventures, on GiveWell’s recommendation, provided HKI with funding for vitamin A supplementation (VAS) programs in Burkina Faso, Mali, and Guinea. Since then, HKI has learned about an unanticipated funding gap for VAS in another country. As a result, a planned VAS distribution in September may not reach national scale and/or may not include deworming (as is common for VAS campaigns). We are in ongoing conversations with HKI about either HKI allocating some of the Good Ventures funding to this country, or GiveWell providing additional funding to cover the gap. We plan to consider this funding opportunity when allocating discretionary funds from the first quarter of 2018. We expect to hold more than enough in discretionary funds (received in the first quarter of 2018) to fill the potential gap, and HKI has told us that more information about the gap will be available in time for that decision.
(We grant out funds from the previous quarter about two months after the end of that quarter, after we have fully checked the accuracy of our data and the size of grants.)

Evidence Action’s Deworm the World for Nigeria

The grant that Good Ventures made to Evidence Action for Deworm the World in December 2017, based on our recommendation, did not include sufficient funds to fund expansion of Deworm the World’s work in Nigeria. Deworm the World sought funding for this work, and we prioritized other charities’ funding gaps ahead of it because we modeled its cost-effectiveness as being lower. We noted in November, “its planned work in Nigeria is around three times as cost-effective as cash transfers (though this estimate is based on low-quality information).” We continue to think that AMF and SCI’s marginal uses of funding are likely more cost-effective than Deworm the World’s potential work in Nigeria, but this conclusion is highly dependent on a model that incorporates many highly uncertain values.

Malaria Consortium for seasonal malaria chemoprevention (SMC)

Our recommendation of Malaria Consortium has resulted in about $30 million in funding for its SMC program since November; however, we believe that there will still be a large funding gap for the program over the next three years. We decided against providing additional funding to Malaria Consortium at this time because of worries about increasing our already very large bet on a program that’s relatively new to us. We are not opposed to increasing this funding level in the future, but on balance believe that granting additional funds to SCI is a stronger option at current levels. We’d also note that we’d expect additional funding at this time to go to funding SMC in 2019 and beyond (given the time needed to order drugs and plan programs for the 2018 SMC season), and there is some uncertainty as to the size of the funding gap for SMC in 2019.
The program is in a scale-up phase globally and other major funders may increase their contributions to SMC starting in 2019.

What is our recommendation to donors?

We continue to recommend that donors give to GiveWell for granting to top charities at our discretion so that we can direct the funding to the top charity or charities with the most pressing funding need. For donors who prefer to give directly to our top charities, we are continuing to recommend giving 70 percent of your donation to AMF and 30 percent to SCI to maximize your impact.

As part of the process we went through to decide where to allocate these funds, we also discussed whether we should update our recommendation for donors who prefer to give directly to our top charities. We ultimately decided to update that recommended allocation only once each year (in November), unless we believe our previously recommended allocation is clearly suboptimal. Updating the allocation is a difficult and time-consuming process, given the additional research and internal discussions involved, and relatively few dollars follow this recommendation outside of giving season.

In this case, we believe that the $7 million in grants to SCI roughly brings the situation back in line with where it was in November, with AMF and SCI having the next most impactful funding gaps and it being difficult to distinguish on the margin between the quality of AMF and SCI’s funding gaps. SCI has better modeled cost-effectiveness, while AMF appears to be better on several qualitative factors, including monitoring of program performance.

The post Allocation of discretionary funds from Q4 2017 appeared first on The GiveWell Blog.

### GiveWell’s money moved and web traffic in 2016

Fri, 03/30/2018 - 10:00

In September 2017, we posted an interim update on GiveWell’s 2016 money moved and web traffic. This post summarizes the key takeaways from our full 2016 money moved and web traffic metrics report. Note that some of the numbers, including the total headline money moved, have changed since our interim report. Since then, we decided to exclude some donations from our headline money moved figure (details in the full report), and we corrected some minor errors. This report was highly delayed (as discussed in the interim update). We expect to publish our report on GiveWell’s 2017 money moved and web traffic much more quickly; our current expectation is that we will publish that report by the end of June.

GiveWell is dedicated to finding outstanding giving opportunities and publishing the full details of our analysis. In addition to evaluations of other charities, we publish substantial evaluation of our own work. This post lays out highlights from our 2016 metrics report, which reviews what we know about how our research impacted donors. Please note:

• We report on “metrics years” that run from February through January; for example, our 2016 data cover February 1, 2016 through January 31, 2017.
• We differentiate between our traditional charity recommendations and our work on the Open Philanthropy Project, which became a separate organization in 2017 and whose work we exclude from this report.
• More context on the relationship between Good Ventures and GiveWell can be found here.

Summary of influence: In 2016, GiveWell influenced charitable giving in several ways. The following table summarizes our understanding of this influence.

Headline money moved: In 2016, we tracked $88.6 million in money moved to our recommended charities. Our money moved only includes donations that we are confident were influenced by our recommendations.

Money moved by charity: Our seven top charities received the majority of our money moved. Our six standout charities received a total of $2.9 million.

Money moved by size of donor: In 2016, the number of donors and amount donated increased across each donor size category, with the notable exception of donations from donors giving $1,000,000 or more. In 2016, 93% of our money moved (excluding Good Ventures) came from 19% of our donors, who gave $1,000 or more.

Donor retention: The total number of donors who gave to our recommended charities or to GiveWell unrestricted increased about 16% year-over-year to 17,834 in 2016. This included 12,461 donors who gave for the first time. Among all donors who gave in the previous year, about 35% gave again in 2016, down from about 40% who gave again in 2015. Our retention was stronger among donors who gave larger amounts or who first gave to our recommendations prior to 2014. Of larger donors (those who gave $10,000 or more in either of the last two years), about 77% who gave in 2015 gave again in 2016.

GiveWell’s expenses: GiveWell’s total operating expenses in 2016 were $5.5 million. Our expenses increased from about$3.4 million in 2015 as the size of our staff grew and average seniority level rose. We estimate that about one-third of our total expenses ($2.0 million) supported our traditional top charity work and about two-thirds supported the Open Philanthropy Project. In 2015, we estimated that expenses for our traditional charity work were about$1.1 million.

Donations supporting GiveWell’s operations: GiveWell raised $5.6 million in unrestricted funding (which we use to support our operations) in 2016, compared to $5.1 million in 2015. Our major institutional supporters and the five largest individual donors contributed about 70% of GiveWell’s operational funding in 2016. This is driven in large part by the fact that Good Ventures funded two-thirds of the costs of the Open Philanthropy Project, in addition to funding 20% of GiveWell’s other costs.

Web traffic: The number of unique visitors to our website was down very slightly (by 1%) in 2016 compared to 2015 (when excluding visitors driven by AdWords, Google’s online advertising product).

For more detail, see our full metrics report (PDF).

The post GiveWell’s money moved and web traffic in 2016 appeared first on The GiveWell Blog.

### Considering policy advocacy organizations: Why GiveWell made a grant to the Centre for Pesticide Suicide Prevention

Thu, 03/22/2018 - 12:00

In August 2017, GiveWell recommended a grant of $1.3 million to the Centre for Pesticide Suicide Prevention (CPSP). This grant was made as part of GiveWell’s Incubation Grants program to seed the development of potential future GiveWell top charities and to grow the pipeline of organizations we can consider for a recommendation. CPSP implements a different type of program from work GiveWell has funded in the past. Namely, CPSP identifies the pesticides which are most commonly used in suicides and advocates for governments to ban the most lethal pesticides. Because CPSP’s goal is to encourage governments to enact bans, its work falls into the broader category of policy advocacy, an area we are newly focused on. We plan to investigate or are in the process of investigating several other policy causes, including tobacco control, lead paint regulation, and measures to improve road traffic safety.

Summary

This post will discuss:

• GiveWell’s interest in researching policy advocacy interventions as possible priority programs. (More)
• Why CPSP is promising as a policy advocacy organization and Incubation Grant recipient. (More)
• Our plans for following CPSP’s work going forward. (More)

Policy advocacy work

One of the key criteria we use to evaluate potential top charities is their cost-effectiveness—how much good each dollar donated to that charity can accomplish. In recent years, we’ve identified several charities that we estimate to be around 4 to 10 times as cost-effective as GiveDirectly, which we use as a benchmark for cost-effectiveness. Our top charities are extremely cost-effective, but we wonder whether we might be able to find opportunities that are significantly more cost-effective than the charities we currently recommend. Our current top charities largely focus on direct implementation of health and poverty alleviation interventions.
One of our best guesses for where we might find significantly more cost-effective charities is in the area of policy advocacy, or programs that aim to influence government policy. Our intuition is that spending a relatively small amount of money on advocacy could lead to policy changes resulting in long-run benefits for many people, and thus could be among the most cost-effective ways to help people. As a result, researching policy advocacy interventions is one of our biggest priorities for the year ahead.

Policy advocacy work may have the following advantages:

• Leverage: A relatively small amount of spending on advocacy may influence larger amounts of government funding;
• Sustainability: A policy may be in place for years after its adoption; and
• Feasibility: Some effective interventions can only be effectively implemented by governments, such as increasing taxes on tobacco to reduce consumption.

Policy advocacy also poses serious challenges for GiveWell when we consider it as a potential priority area:

• Evidence of effectiveness will likely be lower quality than what we’ve seen from our top charities, e.g. it may involve analyzing trends over time (where confounding factors may complicate analysis) rather than randomized controlled trials or quasi-experimental evidence;
• Causal attribution will be challenging in that multiple players are likely to be involved in any policy change and policymakers are likely to be influenced by a variety of factors;
• There may be a substantial chance of failure to pass the desired legislation; and
• Regulation may have undesirable secondary effects.

Overall, evaluating policy advocacy requires a different approach to assessing evidence and probability of success than our top charities work has required in the past.

Incubation Grant to the Centre for Pesticide Suicide Prevention

CPSP began work in 2016 and aims to reduce deaths due to deliberate ingestion of lethal pesticides.
With this Incubation Grant, which is intended to cover two years of expenses, CPSP expects to collect data on which pesticides are most often used in suicide attempts and which are most lethal, and then to use this data to advocate to the governments of India and Nepal to implement bans of certain lethal pesticides.

Research suggests that worldwide, approximately 14% to 20% of suicides involve the deliberate ingestion of pesticides. This method of suicide may be particularly common in agricultural populations. The case we see for this grant relies largely on data from Sri Lanka, where bans on the pesticides that were most lethal and most commonly used in suicide coincided with a substantial decrease in the overall suicide rate; we find the case that the decline in suicides was primarily caused by the pesticide bans reasonably compelling. CPSP’s director, Michael Eddleston, was involved in advocating for some of those bans. Read more here.

GiveWell learned of CPSP’s work through James Snowden, who joined GiveWell as a Research Consultant in early 2017. We decided to recommend support to CPSP based on the evidence that pesticide regulation may reduce overall suicide rates, our impression that an advocacy organization could effect changes in regulations, our view that Michael Eddleston and Leah Utyasheva (the co-founders) are well-positioned to do this type of work, and our expectation that we would be able to evaluate CPSP’s impact on pesticide regulation in Nepal and India over the next few years. We thus think CPSP is a plausible future GiveWell top charity and a good fit for an Incubation Grant.

While deciding whether to make this grant, GiveWell staff discussed how to think about the impact of preventing a suicide. Thinking about this question depends on limited empirical information, and staff did not come to an internal consensus. Our best guess at this point is that CPSP generally prevents suicide by people who are making impulsive decisions.
We see several risks to the success of this grant:

• Banning lethal pesticides may be ineffective as a means of preventing suicide, in India and Nepal or more broadly. The case for this area of policy advocacy relies largely on the observational studies from Sri Lanka mentioned above, supported by Sri Lankan medical records suggesting the decline is partially explained by a shift to less lethal pesticides in suicide attempts.
• CPSP may not be able to translate its research into policy change. This risk of failure to achieve legislative change characterizes policy advocacy work in general, to some extent, and requires us to make a type of prediction that is not needed when evaluating a charity directly implementing a program.
• Banning pesticides could lead to offsetting effects in agricultural production. The limited evidence we have seen on this question suggests that past pesticide bans have not led to notable decreases in agricultural production, but we still believe this is a risk.
• CPSP is a new organization, so it does not have a track record of successfully conducting this type of research and achieving policy change.

To quantify the risks above, GiveWell Executive Director Elie Hassenfeld and James Snowden each recorded predictions about the outcomes of this grant at the time the grant was made. Briefly (more predictions here), Elie and James predict with 33% and 55% probability, respectively, that Nepal will pass legislation banning at least one of the three pesticides most commonly used in suicide by July 1, 2020, and with 15% and 35% probability, respectively, that at least one state in India will do so.

Going forward

We plan to continue having regular conversations with CPSP, and to have a more substantial check-in one year after the grant was made. At that point, we intend to assess whether CPSP has been meeting the milestones it expected to meet and decide whether to provide a third year of funding.
If this grant is successful, we hope we may be able to evaluate CPSP as a potential top charity.

### March 2018 open thread

Tue, 03/13/2018 - 16:27

Our goal with hosting quarterly open threads is to give blog readers an opportunity to publicly raise comments or questions about GiveWell or related topics (in the comments section below). As always, you’re also welcome to email us at info@givewell.org or to request a call with GiveWell staff if you have feedback or questions you’d prefer to discuss privately. We’ll try to respond promptly to questions or comments. You can view our December 2017 open thread here.

The post March 2018 open thread appeared first on The GiveWell Blog.

### Revisiting leverage

Tue, 02/13/2018 - 11:05

Many charities aim to influence how others (other donors, governments, or the private sector) allocate their funds. We call this influence on others “leverage.” Expenditure on a program can also crowd out funding that would otherwise have come from other sources. We call this “funging” (from “fungibility”).

In GiveWell’s early years, we didn’t account for leverage in our cost-effectiveness analysis; we counted all costs of an intervention equally, no matter who paid for them. (See, e.g., row 3 of our 2013 cost-effectiveness analysis for Against Malaria Foundation.) For example, for the Schistosomiasis Control Initiative (SCI), a charity that treats intestinal parasites (deworming), we counted both drug and delivery costs, even when the drugs were donated. We did this because we felt it was the simplest approach, least prone to significant error or manipulation.
Over the last few years, our approach has evolved, and we made some adjustments for leverage and funging to our cost-effectiveness analyses where we felt they were clearly warranted. In our top charities update at the end of 2017, we made a major change to how we dealt with the question of leverage by incorporating explicit, formal leverage estimates for every charity we recommend. This change made our cost-effectiveness estimates of deworming charities (which typically leverage substantial government funding) look more favorable than under our previous method. For example, our new method makes SCI look 1.2x more cost-effective than in the previous cost-effectiveness update. More details are in the table at the end of this post. We also think the change makes our reasoning more transparent and more consistent across organizations.

In this post, we:

• Describe how our treatment of leverage and funging has evolved.
• Highlight two major limitations of our current approach.
• Present how much difference leverage and funging make to our cost-effectiveness estimates.

Details follow.

How our thinking has evolved

We last wrote about our approach to leverage and funging in a 2011 blog post. In short, we didn’t explicitly account for leverage in our cost-effectiveness analysis, counting costs to all entities equally. We concluded:

When we do cost-effectiveness estimates (e.g., “cost per life saved”) we consider all expenses from all sources, not just funding provided by GiveWell donors. For SCI, we count both drug and delivery costs, even when drugs are donated. (Generally, we try to count all donated goods and services at market value, i.e., the price the donor could have sold them for instead of donating them.) For [the Against Malaria Foundation (AMF)], we count net costs and distribution costs, even though AMF pays only for the former.
In the case of VillageReach, we even count government costs of delivering vaccines, even though VillageReach works exclusively to improve the efficiency of the delivery system. We consider this approach the simplest approach to dealing with the issues discussed here, and given our limited understanding of how “leverage” works, we believe that this approach minimizes the error in our estimates that might come from misreading the “leverage” situation. As our understanding of “leverage” improves, we may approach our cost-effectiveness estimates differently.

Since 2011, our thinking has changed. Over time, we started applying some adjustments to our cost-effectiveness model to account for leverage and funging when it seemed important to our bottom line and fairly clear that some adjustment was warranted:

• We applied discounts to costs incurred by certain entities. For example, we applied a 50% discount to the value of teacher time spent distributing deworming tablets, and excluded the costs to pharmaceutical companies donating these tablets. (See our May 2017 cost-effectiveness analysis.) Our rationale was that without our top charities, these resources would likely otherwise have been used less productively.
• We applied ‘alternative funders adjustments’ to account for the possibility that we were crowding out other funders. For example, some of the distributions that AMF considered funding, but didn’t ultimately fund, were picked up by other funders (more).

This helped us explicitly think through considerations relevant to our top charities. But by the end of 2016, our model had a handful of ad hoc adjustments that were difficult to identify, understand, and vet.
For example, the discounts we applied to costs incurred by certain entities were ‘baked in’ to our estimates of cost per treatment, rather than explicit on the main spreadsheet of our cost-effectiveness analysis.

Changes to how we incorporate leverage and funging into our cost-effectiveness analysis

We revisited the way we thought about leverage and funging in preparation for our 2017 top charities decision. We wanted to make sure our adjustments were transparent and consistent across all charities. We now explicitly make quantitative judgments about (i) the probability that our charities are causing governments and multilateral aid agencies to spend more or less on a program than they otherwise would have and (ii) the value of what those funds would otherwise have been spent on. (Our current best guess of a reasonable benchmark for the counterfactual value of government funds is ~75% as cost-effective as GiveDirectly, discussed later in the post. We view this as a very rough guess.)

Here’s an exercise that some GiveWell staff have found helpful for getting a more intuitive feel for different ways of treating leverage. Suppose a charity pays $5,000 to purchase magic pills. This would cause (with 100% certainty) the government to spend another $5,000 distributing those pills. The pill distribution saves 1,000 lives in total. If the government didn’t fund the pill distribution, it would have spent $5,000 on something that would have saved 250 lives.

How should a philanthropist think about the cost-effectiveness of this charity?

1. One option is to include all costs to all actors on the cost side of the cost-effectiveness ratio. Total costs are $10,000 to save 1,000 lives and cost-effectiveness is$10 / life saved. This was GiveWell’s approach in 2011.
2. Another option is to discount government costs by 50%, because the government would otherwise have spent the funds on something 50% as effective. So total costs are $5,000 + (50% x$5,000) = $7,500. 1,000 lives are saved and cost-effectiveness is$7.50 / life saved. This was GiveWell’s approach from 2014 through 2016.
3. A third option is to include only the costs to the charity on the ‘cost’ side. The charity causes the magic pill distribution to happen, saving 1,000 lives. But it also causes the government to spend $5,000, which otherwise would have been used to save 250 lives. So the total costs are $5,000, and 1,000 – 250 = 750 lives are saved. Cost-effectiveness is about $6.67 / life saved. This is GiveWell’s approach now. (In order to isolate the effect that leverage/funging has, we first calculate the impact of the program using the first method, including all costs equally, then apply a “leverage/funging” adjustment to transform the answer to the third method.)

We believe the third way of treating leverage best reflects the true counterfactual impact of a charity’s activities. It also makes charities that are leveraging other funders look substantially more cost-effective than we previously thought.

Limitations of our approach

There are two important limitations to the way we account for leverage and funging. First, these estimates rely on more guesswork than most of our cost-effectiveness analysis, reflecting a fundamental tradeoff we face in deciding which considerations to explicitly quantify. Quantification forces us to think through not just whether a particular consideration matters, but how much it matters relative to other factors, and to be explicit about that. On the other hand, incorporating very uncertain factors into our analysis can reduce its reliability, give a false impression of certainty, and make it difficult for others to engage with our work. In this case, we thought the benefits of explicit quantification outweighed the costs.
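For readers who want to check the arithmetic, the three accounting methods in the magic-pill exercise above can be reproduced in a few lines of Python. This is a sketch of our own (the figures come from the example; the variable names are not GiveWell's):

```python
# Hypothetical "magic pill" example from the post above.
charity_cost = 5_000        # the charity's spending on pills
gov_cost = 5_000            # government spending the charity causes
lives_saved = 1_000         # lives saved by the distribution
counterfactual_lives = 250  # lives the government's $5,000 would otherwise save

# Method 1 (2011 approach): count all costs to all actors equally.
method_1 = (charity_cost + gov_cost) / lives_saved

# Method 2 (2014-2016 approach): discount government costs by the relative
# value of their counterfactual use (50% as effective in this example).
method_2 = (charity_cost + 0.5 * gov_cost) / lives_saved

# Method 3 (current approach): count only charity costs, and net out the
# lives the leveraged government funds would otherwise have saved.
method_3 = charity_cost / (lives_saved - counterfactual_lives)

print(method_1, method_2, round(method_3, 2))  # 10.0 7.5 6.67
```

Note that method 3 moves the government's counterfactual from the cost side to the benefit side of the ratio, which is why it yields the lowest cost per life saved in this example.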
Two examples of assumptions going into our leverage and funging adjustments that we’re highly uncertain about:

1. Our best guess is that the average counterfactual use of domestic government spending that could be leveraged by our top charities is ~75% as cost-effective as GiveDirectly. We think using this figure is a useful heuristic, which roughly accords with our intuitions (and ensures we’re being consistent between charities), but we don’t feel confident that we have a good sense of what governments would counterfactually spend their funds on, or how valuable those activities might be.
2. We estimate there is a ~70% chance that, without Malaria Consortium funding, the marginal seasonal malaria chemoprevention (SMC) program would go unfunded, but only a ~40% chance that, without Against Malaria Foundation funding, the marginal bednet distribution would go unfunded. Estimating these probabilities is challenging, but taking our best guess forces us to evaluate how much weight to place on the qualitative consideration that there are more alternative funders for bednet distribution than SMC.

Second, we don’t explicitly model the long-term financial sustainability of a program. One worldview we find plausible for the role of effective philanthropy is in demonstrating the effectiveness of novel projects that, in the long run, are taken up by governments. This is not captured within our current model, which only looks at the effects of leverage and funging in the short term. Due to the difficulty of explicitly modelling this consideration, we take it into account qualitatively.[5]
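To illustrate how a "chance the gap goes unfunded" estimate like those above can feed into a funging adjustment, here is a hypothetical sketch. This is an illustration of the general logic, not GiveWell's published model, and the program and alternative-use values are made up:

```python
def expected_impact(program_value, p_unfunded, alt_use_value):
    """Hypothetical funging adjustment.

    With probability p_unfunded, no one else fills the gap, so a marginal
    donation buys the full program value. Otherwise another funder would
    have filled it, and the donation merely frees that funder's money for
    its next-best use (alt_use_value).
    """
    return p_unfunded * program_value + (1 - p_unfunded) * alt_use_value

# Illustrative numbers only: a program worth 10 "units" of impact, and a
# displaced funder whose next-best use is worth 3 units.
smc = expected_impact(10.0, 0.70, 3.0)   # SMC: ~70% chance gap goes unfunded
nets = expected_impact(10.0, 0.40, 3.0)  # bednets: ~40% chance
print(round(smc, 2), round(nets, 2))     # 7.9 5.8
```

Under these made-up numbers, the higher probability that SMC goes unfunded makes a marginal SMC donation look meaningfully more impactful than a marginal bednet donation, which is the direction of the adjustment described in the text.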
How much of a difference do leverage and funging make?

In the table below, we present how our new method of accounting for leverage and funging compares to (i) counting all costs equally and (ii) our previous method of accounting for leverage and funging. Adjustments range from a modest penalty for AMF (because we expect AMF crowds out some funds from other sources) to a large boost for SCI (because the cost to pharmaceutical companies of manufacturing donated drugs comprises a substantial proportion of cost per treatment in SCI distributions, and we expect that without SCI, these resources would have been put to less valuable uses).

Note: 1.2x implies the adjustment makes the charity look 20% more cost-effective; 0.8x implies the adjustment makes the charity look 20% less cost-effective. All charities listed are GiveWell top charities as of November 2017.

| Charity | Versus counting all costs equally[6] | Versus our 2014-16 methodology | Commentary |
| --- | --- | --- | --- |
| Against Malaria Foundation | 0.8x | 1.1x | Government costs represent a small proportion of funding for AMF programs. Our analysis of distributions that AMF considered, but did not fund, suggests that some of these distributions are covered by alternative funders, who would otherwise have supported less valuable programs. |
| Schistosomiasis Control Initiative | 2x | 1.2x | We estimate ~60% of the costs of SCI-supported deworming programs are incurred by either governments or pharmaceutical companies. We expect that without SCI, most of these resources would have been used on less valuable programs. |
| Evidence Action’s Deworm the World Initiative | 1.4x | 1.1x | We estimate ~40% of the costs of Deworm the World-supported deworming programs are incurred by either governments or pharmaceutical companies. We expect that without Deworm the World, most of these resources would have been used on less valuable programs. |
| Sightsavers’ deworming program | 1.6x | 1.3x | We estimate ~50% of the costs of deworming in Sightsavers-supported programs are from governments or donated drugs from pharmaceutical companies. We expect that without Sightsavers, most of these resources would have been used on less valuable programs. |
| END Fund’s deworming program | 1.3x | N/A | We estimate ~40% of the costs of END Fund-supported deworming programs are incurred by either governments or pharmaceutical companies. We expect that without the END Fund, most of these resources would have been used on less valuable programs. |
| Helen Keller International (HKI)’s vitamin A supplementation (VAS) program | 1.1x | N/A | We estimate ~25% of the costs of HKI-supported VAS programs are covered by governments. We expect that without HKI, most of these resources would have been used on less valuable programs. |
| GiveDirectly | 1x | 1x | Due to the scalability of GiveDirectly’s program, we believe it is unlikely that GiveDirectly crowds out funding from other sources. GiveDirectly does not leverage funds from other sources. |
| Malaria Consortium’s seasonal malaria chemoprevention program | 0.98x | 1.04x | Government costs represent a small proportion of funding for Malaria Consortium programs. We believe it is possible but unlikely that Malaria Consortium crowds out additional government funding. |
| Evidence Action’s No Lean Season | 1x | N/A | No Lean Season is a novel program, and we think it’s unlikely to be crowding out funding from other sources. No Lean Season does not leverage substantial funding from other sources. |

Notes

1. For example, see row 3 of our 2013 cost-effectiveness analysis for Against Malaria Foundation.
2. See our May 2017 cost-effectiveness analysis.
3. Our current best guess of a reasonable benchmark for the counterfactual value of government funds is ~75% as cost-effective as GiveDirectly (discussed later in the post). We view this as a very rough guess.
4. In order to isolate the effect that leverage/funging has, we first calculate the impact of the program using the first method (including all costs equally), then apply a “leverage/funging” adjustment to transform the answer to the third method.
5. For example, we allocated more discretionary funding than we would have on the basis of cost-effectiveness alone to No Lean Season in 2017 due to our view that it was demonstrating the effectiveness of a novel program, which may have long-run funding implications.
6. Calculations here. “N/A” refers to charities for which we had not completed a cost-effectiveness analysis before October 2017.
The post Revisiting leverage appeared first on The GiveWell Blog.

### GiveWell is hiring!

Thu, 01/25/2018 - 14:09

We’re actively hiring for roles across GiveWell.

Operations

We’re hiring a Director of Operations. The job posting is here. The Director of Operations is responsible for many domains and manages a team of eight people. A successful candidate will excel at prioritizing the most impactful work, shepherding improvements to completion, and managing the team.

This job is perfect for someone who wants to:

• be part of the leadership team at an organization that’s dedicated to making the world a better place.
• work with colleagues who are passionate about the problems they’re trying to solve.
• have significant personal ownership and responsibility.

We’re looking for someone based in the San Francisco Bay Area, where GiveWell’s office is located. This job has flexible hours and can partly be done remotely.

Outreach

We’re hiring a Head of Growth. The job posting is here. The Head of Growth will be responsible for leading our efforts to increase the amount of money GiveWell’s recommended charities receive as a result of our recommendation.
The Head of Growth will set a strategy to maximize our money moved by identifying, implementing, and testing a variety of growth strategies, and will build a team to support these objectives. We’re looking for a Head of Growth who is excited for the challenge of starting and building our Growth team and aligned with our commitment to honesty and transparency about our, and our recommended organizations’, shortcomings and strengths.

Research

We’re looking for talented people to add to our research team. Some of our most successful analysts are people who followed our work closely prior to joining GiveWell, so if you read our blog, please consider applying! We’re hiring for three positions: Research Analysts and Senior Research Analysts are responsible for all of our research work: reviewing potential top charities and following up with current recommended charities, reviewing the evidence for charitable interventions, building cost-effectiveness models, and evaluating potential Incubation Grants. Our Summer Research Analyst position is for rising college seniors or graduate students with one year left in their program, and offers the opportunity to work on a variety of research tasks at GiveWell over two to three months.

Research Analysts and Senior Research Analysts do not need to be based in the San Francisco Bay Area. Summer Research Analysts do need to be in the San Francisco Bay Area.

The post GiveWell is hiring! appeared first on The GiveWell Blog.

### Revisiting the evidence on malaria eradication in the Americas

Fri, 12/29/2017 - 09:14

Summary

• Two of GiveWell’s top charities fight malaria in sub-Saharan Africa.
• GiveWell’s valuations of these charities place some weight on research by Hoyt Bleakley on the impacts of malaria eradication efforts in the American South in the 1920s and in Brazil, Colombia, and Mexico in the 1950s.
• I reviewed the Bleakley study and mostly support its key findings: the campaigns to eradicate malaria from Brazil, Colombia, and Mexico, and perhaps the American South as well, were followed by accelerated income gains for people whose childhood exposure to the disease was reduced. The timing of these events is compatible with the theory that rolling back malaria increased prosperity.
• Full write-up here.

Introduction

I blogged three weeks ago about having reviewed and reanalyzed Hoyt Bleakley’s study of the effort in the 1910s to rid the American South of hookworm disease (not malaria). That study, published in 2007, seems to show that the children who benefited from the campaign attended school more and went on to earn more as adults.

For GiveWell, Bleakley’s 2010 study is to malaria parasites as his 2007 study is to intestinal worms. Like the 2007 paper, the 2010 one looks back at large-scale, 20th-century eradication campaigns in order to estimate impacts on schooling and adult income. It too produces encouraging results. And it has influenced GiveWell’s recommendations of certain charities—the Against Malaria Foundation and Malaria Consortium’s seasonal malaria chemoprevention program.

Because GiveWell had already invested in replicating and reanalyzing Bleakley (2007), and because the two papers overlap in data and method, I decided to do the same for Bleakley (2010). And here the parallel between the two papers breaks down: having run the evidence through my analytical sieve, my confidence that eradicating malaria boosted income is substantially higher than my confidence that eradicating hookworm did. I’m a bit less sure that it did so in the United States than in Brazil, Colombia, and Mexico; but the Latin American experience is probably more relevant for the places in which our recommended charities work.

This post will walk through the results. For details, see the new working paper.
Because my malaria reanalysis shares so much with the hookworm one, I have written this post as if you read the last one. If you haven’t, please do that now.

How the malaria analysis differs from the hookworm one

Having just emphasized the commonality between Bleakley’s hookworm and malaria eradication studies—and my reanalyses thereof—in order to orient you, I should explain how the two differ:

• The hookworm study is set exclusively in the American South, while the malaria study looks at efforts in four countries. In the United States in the 1920s, no doubt inspired by the previous decade’s success against hookworm, the Rockefeller Foundation and the U.S. Public Health Service promoted a large-scale program to drain swamps and spray larvicides, which cut malaria mortality in the South by 60%. Then in the 1950s, with the discovery of DDT, the World Health Organization led a worldwide campaign against the disease. Partly because of data availability, Bleakley (2010) studies the consequences in Brazil, Colombia, and Mexico.[1]
• Where the hookworm study groups data two ways—first by place of residence to study short-term effects, then by place of birth to study long-term effects—the malaria study does only the latter.
• I pre-registered my analysis plan for the malaria study with the Open Science Framework and hewed to it.
While I did not allow the plan to bind my actions, it serves to disclose which analytical tactics I settled on before I touched the data and could know what results they would produce.[2]
• The Bleakley malaria paper appeared in a journal published by the American Economic Association (AEA), which requires its authors to post data and computer code on the AEA website. This aided replication and reanalysis. Unfortunately, as appears to be the norm among AEA journals, the Bleakley (2010) data and code only reproduce the paper’s tables, not the graphs that in this case I see as central.
• For Brazil, Colombia, and Mexico, I mostly relied on that publicly posted data for the crucial information on which regions within a country had the most malaria, rather than trying to construct those variables from old maps and books in Spanish and Portuguese. I also relied on the public data for geographic control variables. I think it can be valuable to go back to primary sources, but for the time being at least, this step looked too time-consuming. I did update and expand the Latin outcome data, on such things as literacy and income, because it is already conveniently digitized in IPUMS International. And I reconstructed all the U.S. data from primary sources, simply by copying what we assembled for the hookworm reanalysis.
Results

In showing you what I found, I’ll follow nearly the same narrative as in my previous post’s section on the “long-term impact on earnings.” To start, here is a key graph from the Bleakley (2010) paper—or really four graphs. In each country’s graph, as with the hookworm graphs, each dot shows the association between historical disease burden in a state (or municipio) and the average income in adulthood of people born there in a given year. In all but Colombia, the leftmost dots line up with the negative range on the vertical axis, meaning that, initially, coming from a historically malarial area stunted one’s income. For example, some of the early U.S. dots are around –0.1 on the vertical axis, which means that being native to swampy Mississippi instead of arid Wyoming cut one’s adult earnings by about 10%.[3] The dots later rise, suggesting that the liability of coming from malarial areas faded, and even reversed. In Colombia, the dots start around zero but also then rise.

As in the hookworm study, Bleakley (2010) here superimposes on the dots the step-like contour representing how malaria eradication is expected to play out in the data. The steps reach their full height when the campaigns are taken to have started—1920 in the United States and 1957 in the Latin nations. All babies born after these points were alike in that they grew up fully in the post–eradication campaign world.
The step contours begin their rises 18 years earlier, when the first babies were born who would benefit from eradication at least a bit by their 18th birthdays.[4]

Next is my closest replication of the key Bleakley (2010) graphs. These use Bleakley’s data, as posted, but not Bleakley’s computer code, since that was not posted.

The next version adds the latest rounds of census data from the Latin nations and the newer, larger samples from old census rounds for the United States. It also redefines childhood as lasting 21 instead of 18 years, because I discovered that the Bleakley (2010) code uses 18 but the text uses 21. That budges the first dashed lines back by three years.

I avoided superimposing step contours on these data points because I worried that it would trick the brain into thinking that the contours fit the data better than they do. But whether the step contour fits the plots above is exactly what you should ask yourself now. Does it seem as if the dots rise, or rise more, between each pair of vertical, dashed lines? I could see the answer being “yes” for all but Mexico. And that could be a fingerprint of malaria eradication.

I ask that question more formally in the next quartet, fitting line segments to successive ranges of the data. The dots in the four graphs are the same as above, but I’ve taken away the grey confidence intervals for readability.
The p values in the lower-left of each pane speak to whether any upward or downward bends at the allowed kink points are statistically significant, i.e., hard to ascribe to chance alone. Where the p values are low—and they mostly are, even in Mexico—they favor the Bleakley (2010) reading that rolling back malaria raised incomes. In Brazil, Colombia, and Mexico, this statistical test is fairly confident that the red lines bend upward at the first kinks (p = 0.00 for Brazil and Colombia and 0.07 for Mexico). That is: in high-malaria areas, relative to low-malaria areas, as the first babies were born who could benefit in childhood from eradication, future incomes rose.

The test is less confident for the United States, where the first allowed kink, in 1899, gets a high-ish p value of 0.39. However, the U.S. trend clearly bends upward—just earlier than predicted by the Bleakley (2010) theory. That might mean that the Bleakley (2010) theory is slightly wrong: maybe when it came to impacts on future earnings, malaria exposure continued to matter into one’s twenties, at least in the United States 100 years ago. Then, people born in the South even a bit before 1899 (the date of the first U.S. kink point) would have benefited from the eventual campaign against malaria; and that first kink should be moved to the left, where it would match the data better and produce a lower p value. Or perhaps that high p value of 0.39 signifies that the Bleakley (2010) model is completely wrong for the United States, and that forces other than malaria eradication drove the South’s catch-up on income.

Now, in addition to the four measures of income studied above—one for each country—the Bleakley (2010) paper looks at eight other outcomes. Six are literacy and years of schooling completed, tracked separately in Brazil, Colombia, and Mexico. In addition, there is, for Brazil, earned income—as distinct from total income (“earned” meaning earned through work).
And there is, for the United States, Duncan’s Socioeconomic Index (SEI), which blends the occupational income score, explained in my last post, with information about a person’s education level. Your Duncan’s SEI is highest if you hold what is typically a high-paying job (as with the occupational income score) and you have a lot of education.

The first public version of the Bleakley study makes graphs for the additional eight outcomes too. But the final, journal-published version drops them, perhaps to save space. Since for me the graphs are so central, I generated my own graphs for the other eight outcomes.

These figures hand us a mixed bag. In the United States, the trend on Duncan’s index appears to bend as predicted at the first allowed kink (p = 0.04) but not the second. Seemingly, relative income gains continued in the South well after malaria eradication could cause them. In Brazil, while relative progress on earned income slows when expected (second kink, p = 0.04), it does not appear to accelerate when expected (first kink), perhaps owing to small samples in the early years. In none of the Latin countries does relative progress on adult literacy or years of schooling slow with much statistical significance at the expected time (second kink points in bottom six graphs). The trend bends in all six at the first kink point, and with statistical significance—but the wrong way in Mexico.

In fact, the mixed bag partly corroborates Bleakley (2010), which also questions whether rolling back malaria increased schooling. The new results depart from Bleakley (2010) in also questioning the benefit for literacy. And they cast some doubt on the income impact in the United States. In both the U.S. plots—in the upper-left of the last two sets of graphs above—it’s clear that the income gap between the South and the rest narrowed over many decades. It’s less clear that it did so with a rhythm attributable to the malaria eradication effort of the 1920s.
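The kinked-trend test used throughout this reanalysis can be sketched as an ordinary regression on “hinge” terms, whose coefficients measure the change in slope at each allowed kink point. This is a generic illustration on synthetic data, not the paper’s actual specification (which includes controls not shown here); the years and kink dates mirror the U.S. example above:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic birth-cohort series: the income gap is flat before the first
# allowed kink (1899), rises 0.02/year until the second kink (1920), and
# is flat again afterward, plus noise.
years = np.arange(1880, 1941).astype(float)
k1, k2 = 1899.0, 1920.0
y = 0.02 * np.clip(years - k1, 0.0, k2 - k1) + rng.normal(0, 0.01, years.size)

# Design matrix: intercept, linear trend, and hinge terms. The hinge
# coefficients are the slope changes at k1 and k2; testing whether they
# differ from zero is the "does the trend bend here?" question.
X = np.column_stack([
    np.ones_like(years),
    years - years.min(),
    np.maximum(years - k1, 0.0),  # slope change at the first kink
    np.maximum(years - k2, 0.0),  # slope change at the second kink
])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta[2], beta[3])  # roughly +0.02 at k1 and -0.02 at k2
```

In the actual analysis, the p values quoted above come from testing whether these bend coefficients are statistically distinguishable from zero.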
Conclusion

For me, this reanalysis triggers a modest update to my understanding of the impacts of malaria prevention. With regard to adult income in Latin America, and perhaps the United States, the Bleakley (2010) theory withstands reexamination. It holds up less well for literacy, but this is not very surprising given that Bleakley (2010) also did not find clear impacts on schooling.

I wouldn’t say that my confirmation proves that malaria eradication campaigns in the Americas boosted income in the way that a large-scale randomized study might. But then neither, if you read him closely, does Bleakley. Rather, the evidence “indicates” impact. The theory that malaria eradication in the Americas increased earnings fits pretty well to the data we have. And that is probably about as much certainty as we can expect from this historical analysis.

Much of the data and code for this study are here (2 GB). Because of IPUMS licensing limitations, the download leaves out the census data for Brazil, Colombia, and Mexico. The included “read me” file explains how to obtain this data. The full write-up is here.

Notes

1. Bleakley (2010) also chose these countries because they had malarial and non-malarial regions, allowing comparisons. See Bleakley (2010), note 6. For sample maps see this.
2. Actually we registered a plan for the hookworm study too, but the malaria plan was better informed—and better followed—precisely because it came on the heels of the similar hookworm reanalysis. For brevity, I skipped this theme in my blog post. I did write about it in the hookworm working paper.
3. For cross-country comparability, Bleakley (2010) normalizes the malaria mortality and ecology indexes so that the 5th- and 95th-percentile geographic units—Wyoming and Mississippi in the U.S. case—score 0 and 1. Income proxies are taken in logs.
4. These graphs incorporate all of Bleakley’s control variables.
In my hookworm post, I began both results sections with “basic” graphs that did not include all the controls, imitating Bleakley (2007). In contrast, all the Bleakley (2010) graphs incorporate full controls. So I do the same.

The post Revisiting the evidence on malaria eradication in the Americas appeared first on The GiveWell Blog.

### Key questions about Helen Keller International’s vitamin A supplementation program

Thu, 12/28/2017 - 11:46

One of our two new top charities this year is Helen Keller International (HKI)’s vitamin A supplementation program. We named HKI’s vitamin A supplementation program a top charity this year because:

• There is strong evidence from many randomized controlled trials of vitamin A supplementation that the program leads to substantial reductions in child deaths.
• HKI-supported vitamin A supplementation programs are inexpensive (we estimate around $0.75 in total costs per supplement delivered) and highly cost-effective at preventing child deaths in countries where HKI plans to work using GiveWell-directed funds.
• HKI is transparent—it has shared significant, detailed information about its programs with us, including the results and methodology of monitoring surveys HKI conducted to determine whether its vitamin A supplementation programs reach a large proportion of targeted children.
• HKI has a funding gap—we believe it is highly likely that its vitamin A supplementation programs will be constrained by funding next year.

HKI’s vitamin A supplementation program is an exceptional giving opportunity but, as is the case with donating to any of our other top charities, not a “sure thing.”

I’m the Research Analyst who has led our work on HKI this year. In this post, I discuss some key questions about the impact of Helen Keller International’s vitamin A supplementation program and what we’ve learned so far. I also discuss GiveWell’s plans for learning more about these issues in the future.

In short:

• Is vitamin A deficiency still a major concern? Our best guess is that vitamin A deficiency is considerably less common today where HKI works than it was among children who participated in past trials of vitamin A supplementation, but not so rare that vitamin A supplementation would not be cost-effective. We are quite uncertain about our estimate of the prevalence of vitamin A deficiency where HKI works because little high-quality, up-to-date data on vitamin A deficiency is available. We plan to consider funding new surveys of vitamin A deficiency to improve our understanding of the effectiveness of HKI’s programs.
• Have improvements in health conditions over time reduced the need for vitamin A supplementation? Child mortality rates remain quite high in areas where HKI plans to use GiveWell-directed funding for vitamin A supplementation programs. We think it’s unlikely that health conditions in these countries have improved enough for vitamin A supplementation to no longer be effective.
• How strong is HKI’s track record of supporting fixed-point vitamin A supplement distributions? HKI expects to primarily support fixed-point vitamin A supplement distributions (rather than door-to-door campaigns) going forward. Monitoring surveys have found that, on average, HKI’s fixed-point programs have not reached as high a proportion of targeted populations as its door-to-door programs, but these monitoring surveys may not have been fully representative of HKI’s programs overall. Our best guess is that future fixed-point programs will achieve moderate to high coverage.
Is vitamin A deficiency still a major concern?

Vitamin A deficiency, a condition resulting from chronic low vitamin A intake, can cause loss of vision and increased severity of infections. If vitamin A deficiency is less common today than it was among participants in trials of vitamin A supplementation, today’s programs may prevent fewer deaths than the evidence from the trials suggests.

We estimate that the prevalence of vitamin A deficiency was high (around 60%) in the populations studied in trials included in the Cochrane Collaboration review of vitamin A supplementation programs for preschool-aged children, Imdad et al. 2017. (See the “Imdad 2017 – VAD prevalence estimates” sheet here for details.)

The map below, from Our World in Data, presents the World Health Organization (WHO)’s most recent estimates of the prevalence of vitamin A deficiency among preschool-aged children by country, covering the period from 1995 to 2005. WHO categorizes prevalences of vitamin A deficiency among preschool-aged children of 20% or above as a severe public health problem. (WHO, Global prevalence of vitamin A deficiency in populations at risk, 2009, pg. 8, Table 5.)

Since WHO’s most recent estimates are now considerably out-of-date, we decided to investigate a variety of additional sources in order to create best-guess estimates of rates of vitamin A deficiency today in countries in sub-Saharan Africa where HKI works.

We learned that there is very little useful, up-to-date data on vitamin A deficiency in countries in sub-Saharan Africa. In many countries, the most recent surveys of vitamin A deficiency were completed ten or more years ago. Many governments have also recently mandated the fortification of vegetable oil or other foods with vitamin A, but little information is available on whether foods are actually adequately fortified in practice.[3]

Taking the limited available data into account, our best guess is that prevalence of vitamin A deficiency in countries where HKI works today is likely to be considerably lower than the prevalence of vitamin A deficiency among children who participated in vitamin A supplementation trials—closer to 20% prevalence than 60% prevalence.

We find that HKI’s vitamin A supplementation programs still appear highly cost-effective, even when taking our estimate of the change in the prevalence of vitamin A deficiency over time into account (see our most recent cost-effectiveness analysis for full details). But we remain quite uncertain about our estimate of the prevalence of vitamin A deficiency in countries where HKI works—new information could cause us to update our views on HKI’s cost-effectiveness considerably.
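As an illustration of the kind of adjustment described above (not GiveWell’s actual model), one simple approach scales a trial-derived mortality effect by the ratio of current to trial-era deficiency prevalence; all figures here are hypothetical inputs:

```python
# Illustrative only: a simple proportional adjustment, NOT GiveWell's actual
# cost-effectiveness model. It assumes the mortality effect of vitamin A
# supplementation scales linearly with deficiency prevalence.

TRIAL_PREVALENCE = 0.60    # rough prevalence in the trial populations (per the post)
CURRENT_PREVALENCE = 0.20  # best-guess prevalence where HKI works today (per the post)

def adjusted_effect(trial_effect: float) -> float:
    """Scale a trial-derived relative mortality reduction by relative prevalence."""
    return trial_effect * (CURRENT_PREVALENCE / TRIAL_PREVALENCE)

# e.g., a hypothetical 24% relative mortality reduction in trials would
# shrink to roughly 8% under this linear-scaling assumption
print(adjusted_effect(0.24))
```

The linearity assumption is the crux: if supplementation mostly helps the deficient, a proportional scaling like this is a reasonable first pass, but the true relationship could easily be non-linear.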

Next year, we’ll continue to follow research relevant to estimating vitamin A deficiency rates where HKI works. We also plan to consider funding new vitamin A deficiency surveys ourselves through a GiveWell Incubation Grant.

Have improvements in health conditions over time reduced the need for vitamin A supplementation?

In a blog post last year, we wrote that vitamin A supplementation has a mixed evidence base. There is strong evidence from many randomized controlled trials conducted in the 1980s and 1990s that the program reduces child mortality, but a more recent trial in northern India with more participants than all the other trials combined (the Deworming and Enhanced Vitamin A trial, or DEVTA) did not find a statistically significant effect.

There have been broad declines in child mortality rates over the past few decades. Participants in the control group in the DEVTA trial had a mortality rate of 5.3 deaths per 1,000 child-years, lower than the mortality rates in the control groups in earlier trials that found statistically significant results (ranging from 10.6 to 126 deaths per 1,000 child-years). One potential explanation for the difference between the results of the DEVTA trial and earlier trials is that some types of deaths prevented by vitamin A supplementation in previously studied populations had already been prevented through other means (e.g., increased access to immunizations and medical care) in the DEVTA population.

We looked into child mortality rates in countries in sub-Saharan Africa where HKI plans to use GiveWell-directed funding in the near future—Guinea, Burkina Faso, and Mali—as well as other countries where HKI has recently worked. Mortality rates among preschool-aged children in Guinea, Burkina Faso, and Mali remain quite high—around 13 deaths per 1,000 child-years, within the range of mortality rates among control groups in vitamin A trials that found statistically significant results.[4] Based on these high child mortality rates, we don’t believe it’s very likely that overall health conditions have improved enough in these countries for vitamin A supplementation to no longer be effective at preventing deaths.

It is also possible that changes in causes of child deaths between the 1980s and 1990s and today could mean that vitamin A supplementation is now less effective than it was in the past. Different vitamin A experts have different views on whether vitamin A primarily prevents deaths due to a few specific causes (we’ve seen diarrhea and measles most frequently pointed to) or whether it reduces deaths due to a wider range of conditions by, perhaps, strengthening the immune system against infection. In our view, the research on this is inconclusive. According to the data we’ve seen, infectious disease overall and diarrhea in particular cause a similar proportion of total deaths among young children today as they did in the 1980s and 1990s; measles causes a substantially lower proportion of total deaths today than it did in the past.[5] We’ve added an adjustment to our cost-effectiveness analysis to account for changes in the composition of causes of child mortality since the vitamin A trials were implemented—HKI’s work still appears highly cost-effective following this adjustment.

We may conduct additional research next year to learn about child mortality rates in places where HKI works at a more granular (e.g., regional or sub-regional) level. We may also conduct additional research on the impact of changes in cause-specific mortality rates on the effectiveness of vitamin A supplementation.

How strong is HKI’s track record of supporting fixed-point vitamin A supplement distributions?

In many past HKI-supported campaigns, healthcare workers have traveled door-to-door to administer vitamin A supplements to preschool-aged children. Funding was already available from other sources for sending teams of healthcare workers door-to-door to administer polio vaccinations, and adding vitamin A supplementation to these campaigns was relatively simple and cheap.

In fixed-point distributions, caregivers are expected to bring their children to a central location to receive vitamin A supplements. Due to recent progress in polio elimination, many door-to-door programs have recently been scaled down or eliminated, so HKI expects to primarily be supporting fixed-point distributions going forward.

It may be more challenging to reach a large proportion of a targeted population with fixed-point distributions. HKI’s recent monitoring surveys have found that, on average, its door-to-door distributions have achieved higher coverage rates (around 90%) than its fixed-point distributions (around 60%). The average of around 60% for fixed-point programs reflects surveys finding high coverage in a few campaigns in the Democratic Republic of the Congo and Mozambique, and relatively low coverage in campaigns in Nigeria, Tanzania, and Kenya.

A complication for assessing HKI’s track record is that HKI often chose to conduct coverage surveys in areas where it expected coverage to be particularly low, so we would guess that these results are not fully representative of HKI’s work on fixed-point distributions.

Based on the available information, our best guess is that HKI-supported fixed-point vitamin A supplementation distributions next year will achieve moderate to high coverage.[6] HKI has told us that it will conduct representative monitoring surveys (not only in areas where it expects coverage to be low) following its vitamin A supplement distributions supported with GiveWell-directed funding next year—we expect that these surveys will provide data useful for assessing how successful the programs were overall.

Notes

1. See the “Imdad 2017 – VAD prevalence estimates” sheet here for details.
2. WHO Global prevalence of vitamin A deficiency in populations at risk 2009, Pg. 8, Table 5.
3. See this spreadsheet for the information we collected on the most recent vitamin A deficiency surveys and on vitamin A fortification programs in countries where HKI has supported vitamin A supplementation programs.
4. The control group mortality rate in the DEVTA trial was 5.3 per 1,000 child-years. See this spreadsheet for child mortality rates in Burkina Faso, Guinea, and Mali (13 deaths per 1,000 child-years is the simple average of “Average of GBD and UN IGME data” child mortality rates for the three countries), and see here for more information on control group mortality rates in other vitamin A supplementation trials.
5. See the final bullet point in this section of our review of HKI for more on this topic.
6. To be more precise about what I mean: in Guinea (the program I am most familiar with, following our site visit in October), I’m 70% confident that coverage surveys representative of the distribution as a whole will indicate that the first vitamin A supplement distribution in 2018 reached at least 55% of targeted children across the country.

The post Key questions about Helen Keller International’s vitamin A supplementation program appeared first on The GiveWell Blog.

### How uncertain is our cost-effectiveness analysis?

Fri, 12/22/2017 - 15:22

When our cost-effectiveness analysis finds robust and meaningful differences between charities, it plays a large role in our recommendations (more on the role it plays in this post).

But while our cost-effectiveness analysis represents our best guess, it’s also subject to substantial uncertainty; some of its results are a function of highly debatable, difficult-to-estimate inputs.

Sometimes these inputs are largely subjective, such as the moral weight we assign to charities achieving different good outcomes (e.g., improving health vs. increasing income). But even objective inputs are uncertain; a key input for anti-malaria interventions is malaria mortality, but the Institute for Health Metrics and Evaluation estimates 1.6 times more people died in Africa from malaria in 2016 (641,000) than the World Health Organization does (407,000; pg. 41).[1]

Before we finalized the recommendations we released in November, we determined how sensitive our results were to some of our most uncertain parameters.

In brief:

• Comparisons between charities achieving different types of good outcome are most sensitive to the relative value we assign to those outcomes (more on how and why we and other policymakers assign these weights in this post).
• Our deworming models are very uncertain, due to the complexity of the evidence base. They are also sensitive to the choice of discount rate: how we value good done today vs. good done in the future.
• Our malaria models (seasonal malaria chemoprevention and long-lasting insecticide-treated nets) are less uncertain than our deworming models, but are particularly sensitive to our estimate of the long term effects of malaria on income.

In this post, we discuss:

• The sensitivity of our analysis to moral weights (more) and other parameters (more).
• How this uncertainty influences our recommendations (more).
• Why this sensitivity analysis doesn’t capture the full scope of our uncertainty and ways in which we could improve our assessment and presentation of uncertainty (more).

The tornado charts at the bottom of this post show the results of our full sensitivity analysis. For a brief explanation of how we conducted our sensitivity analysis, see this footnote.[2]
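The one-at-a-time procedure described in that footnote can be sketched as follows. The toy model and parameter values below are invented for illustration, not GiveWell’s actual model or inputs:

```python
# Sketch of one-at-a-time sensitivity analysis: fix every parameter at the
# median of contributors' inputs, then flex one parameter at a time between
# the lowest and highest input, holding all others constant.
from statistics import median

# Hypothetical contributor inputs for two uncertain parameters
inputs = {
    "replicability_adjustment": [0.1, 0.3, 0.5],
    "discount_rate": [0.02, 0.04, 0.07],
}

def cost_effectiveness(params):
    # Toy stand-in for the real cost-effectiveness model
    return params["replicability_adjustment"] / (1 + params["discount_rate"])

# Central estimate: median of contributors' inputs for each parameter
central = {name: median(vals) for name, vals in inputs.items()}

ranges = {}
for name, vals in inputs.items():
    results = []
    for v in (min(vals), max(vals)):
        flexed = dict(central, **{name: v})  # flex one parameter, hold the rest
        results.append(cost_effectiveness(flexed))
    ranges[name] = (min(results), max(results))

# `ranges` gives the low/high cost-effectiveness for each parameter --
# the spans drawn as bars in a tornado chart.
```

The width of each parameter’s range, relative to the others, is what a tornado chart visualizes.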

Sensitivity to moral weights

Some of the inputs in our model rely on judgement calls, which reasonable, informed people might disagree on. For example, we assign quantitative weights to our relative valuations of different good outcomes. These inputs capture important considerations in our decision-making, but are often difficult to justify precisely.

We ask contributors to our cost-effectiveness analysis (mostly staff) to input how many people’s income would have to double for 1 year to be equally valuable to averting the death of a child under 5. Contributors’ values vary widely, between 8 and 100 (see Figure 1).[3]

Differences in cost-effectiveness between charities which primarily prevent child deaths (Helen Keller International, Malaria Consortium, Against Malaria Foundation) and charities which primarily increase income (Deworm the World Initiative, Schistosomiasis Control Initiative, Sightsavers, No Lean Season, End Fund) are highly sensitive to different plausible moral weights (see Figure 2).

The orange points represent the median estimated cost-effectiveness of our charities (in terms of how many times more cost-effective than GiveDirectly we model them to be). The blue bars represent the range of cost-effectiveness for different valuations of preventing the death of an under-5 child, between 8x and 100x as good as doubling consumption for one person for one year (holding all other parameters in the model constant).

Deworming sensitivities

Our deworming models are very uncertain, due to the complexity of the evidence base, and the long time horizons over which we expect the potential benefits to be realized. Aside from our moral weights, our deworming charities are highly sensitive to three inputs:

• Replicability adjustment. We make a “replicability adjustment” for deworming to account for the fact that the consumption increase in a major study we rely upon may not hold up if it were replicated. If you’re skeptical that such a large income increase would occur, given the limited evidence for short-term health benefits and the generally unexpected nature of the findings, you may think that the effect the study measured wasn’t real, wasn’t driven by deworming, or relied on an atypical characteristic shared by the study population but not likely to be found among recipients of the intervention today. This adjustment is not well-grounded in data. (For more discussion, see our deworming intervention report and blog posts here, here, here and here.)[4]
• Adjustment for years of treatment in Baird et al. vs. years of treatment in charities’ programs. Our charities aim to deworm children for up to 10 years, which is longer than the intervention studies in Baird et al. 2015 (where children in the treatment group received 4 years of deworming). There may be diminishing returns as the total years of treatment increase, although this is difficult to estimate.
• Discount rate. The discount rate adjusts for benefits that occur at different points in time; for a number of reasons, individuals may prefer income to rise now rather than at some point in the future.
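To illustrate why the discount rate matters so much for deworming, whose hypothesized income benefits arrive years after treatment, here is a minimal present-value calculation. The benefit sizes, timing, and rates are made-up figures, not GiveWell’s inputs:

```python
# Illustrative sketch: the present value of a stream of future income
# benefits under different discount rates (all figures hypothetical).

def present_value(annual_benefit, years, discount_rate, start_year=0):
    """Discounted value of a constant annual benefit starting in `start_year`."""
    return sum(
        annual_benefit / (1 + discount_rate) ** t
        for t in range(start_year, start_year + years)
    )

# Suppose deworming's income gains start ~10 years after treatment and last
# 20 years. The same stream is worth far less at a 7% rate than at 2%,
# which is why the discount rate drives the deworming results.
low_rate_value = present_value(100, years=20, discount_rate=0.02, start_year=10)
high_rate_value = present_value(100, years=20, discount_rate=0.07, start_year=10)
```

Because the benefits are so far in the future, the gap between plausible rates compounds over decades, making this one of the most sensitive inputs in the deworming models.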

Figure 3 shows how the cost-effectiveness of Deworm the World Initiative[5] varies depending on different contributor inputs for different parameters (more on how to interpret these parameters here).

The orange line represents the median estimated cost-effectiveness of our charities (in terms of how many times more cost-effective than GiveDirectly we model them to be). The blue bars represent the range of cost-effectiveness for different inputs from our contributors for that parameter (holding all other parameters in the model constant). The figures in square brackets represent the range of contributor inputs for those parameters.

Malaria sensitivities

Our malaria models are less uncertain than our deworming models, but are still sensitive to our estimate of the long-term effects of malaria on income (see Figures 4 and 5).

Interpreting the evidence base for the effect of malaria prevention on long-run income is complex, and contributors differ widely in their interpretation. We’re planning to do further research on this topic but summarize our current understanding here.

What does this mean for our recommendations?

When we model large differences in cost-effectiveness, we generally follow those results in our recommendations. When charities are closer in cost-effectiveness, we pay more attention to qualitative considerations, such as the quality of their monitoring and evaluation, and potential upside benefits which are difficult to quantify (e.g., scaling a novel program).

What counts as a meaningful difference in modelled cost-effectiveness depends on a number of factors, including:

• Do the programs target the same outcomes? We place less weight on modelled differences between charities which have different good outcomes because our cost-effectiveness analysis is sensitive to different reasonable moral weights.
• How similar are the programs? We’re more confident in our comparison between our deworming charities than we are between deworming charities and other charities targeting income such as GiveDirectly. This is because we expect the most likely errors in our deworming estimates (e.g. based on our interpretation of the evidence) for different charities to be correlated.
• Are there important qualitative reasons to differentiate between the charities? We place less relative weight on cost-effectiveness analysis when there are important qualitative reasons to differentiate between charities.

For a more detailed explanation of how we made our recommendations this year, see our recent announcement of our top charities for giving season 2017.

What are the limitations of this sensitivity analysis?

This sensitivity analysis shouldn’t be taken as a full representation of our all-things-considered uncertainty:

• The charts above show the sensitivity of the cost-effectiveness analysis to changing one input at a time (holding all others constant). The ranges don’t necessarily imply any particular credible interval, and are more useful for identifying which inputs are most uncertain than for reflecting our all-things-considered uncertainty around the cost-effectiveness of a particular charity.
• We don’t ask multiple contributors to input their own values for all uncertain inputs (e.g. because we think the benefits of using the inputs of the contributors with most context outweigh the benefit of getting inputs from many contributors). These inputs have not been included in the sensitivity analysis.
• Model uncertainty. Explicitly modelling all the considerations relevant to our charities would be infeasible. Even if all our inputs were fully accurate, we’d still retain some uncertainty about the true cost-effectiveness of our charities.

We’re considering a number of different options to improve our sensitivity analysis and communication of uncertainty in the future, such as expressing inputs as probability distributions or creating a Monte Carlo simulation. But we’re uncertain whether these would create sufficient decision-relevant information for our readers to justify the substantial time investment and additional complexity.
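A Monte Carlo version of the analysis might look roughly like the sketch below: sample each uncertain input from a distribution, run the model on each draw, and report percentiles of the results rather than a point estimate. The input distributions and toy model here are assumptions for illustration, not GiveWell’s actual parameters:

```python
# Rough sketch of a Monte Carlo cost-effectiveness analysis with made-up
# input distributions and a toy two-input model.
import random

random.seed(0)  # fixed seed so the sketch is reproducible

def sample_cost_effectiveness():
    # Hypothetical input distributions, invented for illustration:
    # moral weight spans the 8-100 range of contributor inputs from the post;
    # deaths averted per dollar is a made-up normal distribution.
    moral_weight = random.uniform(8, 100)
    deaths_averted_per_dollar = random.gauss(1e-4, 2e-5)
    return moral_weight * max(deaths_averted_per_dollar, 0)

draws = sorted(sample_cost_effectiveness() for _ in range(10_000))
p5, p50, p95 = (draws[int(q * len(draws))] for q in (0.05, 0.50, 0.95))
# p5..p95 now brackets the modelled cost-effectiveness under the assumed
# inputs, giving an interval instead of a single headline number.
```

The extra output (an interval, plus which inputs drive its width) is the decision-relevant information such a simulation would add; the cost is specifying a defensible distribution for every input.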

If you’d find such an analysis helpful, let us know in the comments.

Appendix

In this section, we present tornado charts for each of our top charities. You can see more detailed descriptions of how to interpret these parameters here, or in the cell notes of our cost-effectiveness analysis.

Notes

1. Differences in their methodology have been discussed, with older figures, in a 2012 blog post by the Center for Global Development.
2. Each contributor to our cost-effectiveness analysis inputs their own values for particularly uncertain parameters. We use the median of contributors’ final cost-effectiveness results for our headline cost-effectiveness figures. To simplify the sensitivity analysis, we used the median of contributors’ parameter inputs to form a central cost-effectiveness estimate for each charity. The results below therefore differ slightly from our headline cost-effectiveness figures. To determine how sensitive the model is to each parameter, we flexed each parameter between the highest and lowest contributors’ inputs, while holding all other parameters constant. For more details, see our sensitivity analysis spreadsheet.
3. You can see each of our contributors’ inputs for moral weights, and other uncertain parameters, on the Moral weights and Parameters tabs of our cost-effectiveness analysis. This year, contributors were also asked to provide a brief justification for their inputs in the cell notes.
4. You can read more about how contributors settled on the values they used for this parameter in the cell notes in row 16 of the Parameters sheet of our November 2017 cost-effectiveness model.
5. The sensitivity of other deworming charities is largely dependent on the same parameters. Charts are presented in the Appendix.

The post How uncertain is our cost-effectiveness analysis? appeared first on The GiveWell Blog.

### Update on our work on outreach

Tue, 12/19/2017 - 11:33

GiveWell’s impact is a function of the quality of our research and the amount of money we direct to our recommended charities (our “money moved”). Historically, we’ve focused mostly on research because we felt that the quality of our recommendations was a greater constraint to our impact than our money moved.

This has changed. Outreach is now a major organizational priority. The goal of this work is to increase the amount of money we direct to our top-recommended charities.

In April 2014 I wrote about our work on outreach to explain why we hadn’t prioritized it: in brief, our growth had largely been driven by inbound interest in GiveWell, and proactive outreach efforts (beyond building relationships with existing donors) hadn’t yielded results that were worth the cost.

What changed?

• We believe that the amount of money we move is now a greater constraint to our impact than additional improvements in the quality of our research. Over the last two years, we’ve added five new top charities (three of which implement programs that weren’t previously represented on our top charities list), and we expect that our top charities, collectively, will have more than $200 million in unfilled funding gaps once they’ve received the funding that we expect to direct to them. (This calculation excludes GiveDirectly, which we believe could absorb and distribute hundreds of millions of dollars.) At the same time, the quality of our research and our capacity for research are higher than they’ve ever been, so the returns to adding staff there (in terms of the pace at which we identify significantly better giving opportunities) are now lower.
• Increased capacity for outreach. In our 2014 post, we wrote that one of our key constraints was that senior staff (which at the time meant primarily GiveWell Co-Founder Holden Karnofsky and me) were necessary for most outreach-related work. This has changed: we now have capacity to take on outreach work as other staff have been hired and trained on this type of work.
• Better information on the impact of GiveWell’s outreach. We now have better information about the returns to outreach because:
  1. We’ve collected better data (via an improved donations processing system and outreach efforts) about where donors find out about us. Because of our ability to track donors, we know that a single appearance on NPR or major podcasts tends to drive $50,000+ in annual donations.
  2. More time passing has demonstrated that the lifetime value of the donations of a first-time donor is higher than we expected. In several cases, we’ve seen major donors (i.e., those giving $10,000-$100,000) increase their annual giving by a factor of 10 or more.

We’re in the early stages of figuring out how we can proactively invest time and money in outreach to significantly increase our money moved. For now, we’ve taken some opportunities that we think will have positive returns; these are the three that we’ve invested the most time and money in to date:

• Podcast advertising. We’ve been advertising on podcasts that we believe our target audience listens to, based on interviews with current donors and GiveWell staff. In February and March, we ran a small experiment with a few ads on FiveThirtyEight’s Politics podcast and Vox’s The Weeds. [Note: We’ve also been running ads on Julia Galef’s Rationally Speaking podcast since then. Because it’s much smaller and more targeted, we’ve excluded it from this analysis. Measured returns to advertising on Rationally Speaking have been significantly better than on the more mainstream podcasts discussed in this post.]

In total, we spent approximately $20,000 on ads for this initial experiment. We ask donors who give via our website to tell us where they learned about GiveWell when they donate. GiveWell received approximately$8,000 in donations between February 1 and November 20 from donors who reported that they had learned about us via these podcasts.

The donations we received were from first-time donors; to assess the impact of our advertising, we need to estimate the lifetime value of acquiring a new donor. In work we’ve done to assess our retention rate, we’ve seen that (a) approximately 20-25% of the donors who make a first-time donation of less than $1,000 give again in the subsequent year but (b) because many first-time donors increase the size of their donation over time, collectively, the donors who recur give more than 100% of the value of what they give in their first year. At higher donation levels ($1,000-$100,000), we measure 40-45% retention among donors, which leads to retention of approximately two-thirds of dollars given.2I say “measure” retention because we’ve learned that many donors give subsequent donations directly to our top charities and don’t report those donations to us. We’ve tried to follow up with lapsed donors and with charities to track these donors down. jQuery("#footnote_plugin_tooltip_2").tooltip({ tip: "#footnote_plugin_tooltip_text_2", tipClass: "footnote_tooltip", effect: "fade", fadeOutSpeed: 100, predelay: 400, position: "top right", relative: true, offset: [10, 10] }); We therefore estimated the net present value of expected future donations (over the next five years) from these podcasts ads as somewhere between approximately$20,000 (assuming two-thirds dollar retention for the first two years and 100% dollar retention subsequently) and $45,000 (assuming 100% dollar retention).3We only projected donations over five years. This is fairly arbitrary because we don’t have long-term enough data to know whether or not this is a reasonable assumption. We capped it to prevent our assessment being driven by speculation about how much money would be donated many years in the future. 
A few additional facts are worth keeping in mind about the above figures:

• We ran this experiment in February and March; most donors give at the end of the calendar year. We consistently see donors who learn about GiveWell during the course of the year but donate in December. Other things equal, we expect that our advertising would have had greater measured returns in December than earlier in the year.
• We are only able to track donors who (a) fill out our donation form telling us where they learned about us and (b) give directly through our website rather than to our top charities. Less than 50% of donors who give via credit card (and a smaller percentage of donors who give via check) tell us where they learned about GiveWell. Also, roughly speaking, approximately 50% of the donors and dollars we influence give through GiveWell rather than directly to our top charities.[4]
• It’s certainly possible that donors who learn about us via podcast are more likely to give through our website than an average donor, more likely to report how they found us (since their source is clear), or less likely to be retained. My best guess is that donors who learn about us via podcast ads behave similarly to our other donors, but I won’t be surprised if they don’t.

With all that in mind, I believe that the impact of our podcast advertising is higher than what we directly measured.
The results we saw from February to November this year were promising enough that we decided to increase the size of our experiment by spending approximately $100,000 on podcast ads. We’re currently running ads on FiveThirtyEight’s Politics podcast, Ezra Klein’s podcast, and The Weeds at Vox.

• Earned media outreach. Mentions of GiveWell in the media have historically been a strong driver of growth. We aimed to increase mentions of GiveWell in high-quality, high-profile media where we’ve had the most past success as measured by dollars donated (i.e., outlets like The New York Times, NPR, The Wall Street Journal, and Financial Times). We retained a PR firm that came strongly recommended; we also increased 1-to-1 outreach by GiveWell staff to members of the media who have covered GiveWell in the past. It’s hard to attribute the impact of this additional effort: our investment has been fairly limited, and we can’t easily draw causal lines between our work and the stories that appear. Still, my guess is that our increased efforts have led to more coverage of GiveWell and our top charities this giving season than in the recent past.
• Website improvements. Companies that sell products online invest significant effort in optimizing their websites and checkout pages to maximize revenue. We retained a marketing consultant, Will Wong of Mission Street. We’ve been A/B testing different donation pages and plan to test other pages on our website, such as our homepage and top charities page, to see whether we can increase our conversion rate (i.e., the percentage of visitors to our website who give to one of our top charities). For context, our current conversion rate is 1%. Our understanding is that a standard conversion rate for e-commerce companies is 2%, and that international nonprofits have a similar conversion rate.[5] An increase in our conversion rate to the industry average would lead to a significant increase in the amount of money we direct to our top charities.
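For readers curious what judging an A/B test involves, here is a minimal two-proportion z-test sketch. Only the 1% baseline conversion rate comes from the post; the visitor counts and the 1.4% variant rate are hypothetical, and this is one common test, not necessarily the method Mission Street uses.

```python
import math

def ab_test_z(conversions_a, visitors_a, conversions_b, visitors_b):
    """Two-proportion z-statistic for comparing conversion rates of two page variants."""
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    # Pooled conversion rate under the null hypothesis (no difference).
    p_pool = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
    return (p_b - p_a) / se

# Hypothetical traffic: 10,000 visitors per variant, 100 conversions (1%)
# on the current page vs. 140 (1.4%) on the test page.
z = ab_test_z(100, 10_000, 140, 10_000)
print(f"z = {z:.2f}")  # |z| > 1.96 → significant at the 5% level
```

At these hypothetical volumes, even a 0.4-percentage-point lift is detectable, which is why conversion-rate testing rewards sites with steady traffic.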

Notes

1. We’ve also been running ads on Julia Galef’s Rationally Speaking podcast since then. Because it’s much smaller and more targeted, we’ve excluded it from this analysis. Measured returns to advertising on Rationally Speaking have been significantly better than the more mainstream podcasts discussed in this post.
2. I say “measure” retention because we’ve learned that many donors give subsequent donations directly to our top charities and don’t report those donations to us. We’ve tried to follow up with lapsed donors and with charities to track these donors down.
3. We only projected donations over five years. This is fairly arbitrary because we don’t have enough long-term data to know whether this is a reasonable assumption. We capped it to prevent our assessment being driven by speculation about how much money would be donated many years in the future.
4. I took this rough estimate from footnote 26, on page 15, of GiveWell’s 2015 metrics report.
5. See pg. 51 of the study downloadable here.

The post Update on our work on outreach appeared first on The GiveWell Blog.

### Maximizing the impact of your donation: saving on fees means more money for great charities

Fri, 12/15/2017 - 14:09

We recently discussed how you can give to reduce the administrative burden on charities. This post will focus on how you can save money on fees and give tax-efficiently so that more of your charitable budget can go directly to the organizations you want to support. This is an updated version of a post we originally ran in 2012; some content is the same, other content has been added or updated.

1. Don’t wait until the last minute. Many donors wait until the very end of the calendar year to give. If you’re hoping to make a donation by a year-end deadline, we strongly advise against waiting that long. Here’s why:
• Some methods of donating require some planning and preparation, such as giving appreciated stock.
• Checks are tax-deductible according to the postmarked date on the envelope—you can’t write a check in 2018, backdate it to 2017, and claim a deduction. So, please head to the post office before the new year if you’re looking to get a tax deduction this year.
• Leaving little time between making your donation and the deadline means you’ll have limited time to react if something unexpected happens, such as your credit card charge being declined.

We recommend building in a cushion of a week or two if you’re aiming to donate by a particular deadline. The earlier you can give, the less likely you’ll have any issues. For end-of-year giving, we recommend a target date of December 24 or earlier.

2. Try to get a tax benefit. Details vary by country and personal situation, but a tax deduction can allow you to give much more to charity at the same cost to yourself. (That said, as discussed later in the post, we believe it is more important to give to the most effective possible charity than to get the maximum tax benefit.) Below, we discuss our understanding of donation methods for tax-advantaged giving, although please note that none of this information should be construed as legal or tax advice.

Donors in the United States may make tax-deductible gifts to any of our nine top charities by giving to GiveWell. There are also a large number of tax-deductible options for giving to our top charities in other countries; please see the table here for more information.

Donors in certain U.S. states and income brackets who are interested in maximizing their tax deduction may also consider “donation bunching,” or making two donations in one year rather than one donation in each of two years to take advantage of the standard deduction in one year and maximize the size of their itemized charitable deduction in a subsequent year. Considerations related to donation bunching are discussed in this post by former GiveWell intern Ben Kuhn.
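The arithmetic behind bunching can be sketched as follows. All dollar figures and the marginal tax rate here are hypothetical illustrations, not tax advice; the point is only that concentrating itemized deductions in one year can beat spreading them out.

```python
def bunching_tax_savings(annual_gift, other_itemized, standard_deduction, marginal_rate):
    """Extra tax saved over two years by bunching both years' gifts into year one,
    versus giving the same amount each year. Each year the taxpayer takes the
    larger of the standard deduction and their itemized deductions."""
    # Strategy A: give every year, deduct the better option each year.
    spread = 2 * max(standard_deduction, other_itemized + annual_gift)
    # Strategy B: itemize in year one with both gifts, take the standard
    # deduction in year two.
    bunched = max(standard_deduction, other_itemized + 2 * annual_gift) + standard_deduction
    return (bunched - spread) * marginal_rate

# Hypothetical: $8,000/year in gifts, $5,000 of other itemized deductions,
# a $12,000 standard deduction, and a 30% marginal rate.
savings = bunching_tax_savings(8_000, 5_000, 12_000, 0.30)
print(f"${savings:,.0f}")  # $2,100 saved with these assumptions
```

Whether bunching helps at all depends on how close your itemized deductions sit to the standard deduction, which is why the post frames it as relevant only for certain states and income brackets.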

3. Avoid the large transaction fees and delays associated with large online donations. When donating via credit card, you will almost always be charged standard credit card processing fees. Making a large donation via credit card may also trigger your card’s fraud protection (though a call to the credit card company can generally resolve the situation quickly).

We discussed some of the tradeoffs between the ease of donating via certain platforms, the fees for donors, and the administrative costs to charities of processing those donations in a previous post. In short, we do not advise making donations via credit card if you’re planning to give $1,000 or more.

4. Give appreciated stock and cryptocurrency. In the U.S., if you give stock or cryptocurrency (such as Bitcoin) to a charity, neither you nor the charity will have to pay taxes on capital gains (as you would if you sold the stock yourself). For example, if you have stock or cryptocurrency that you acquired for $1,000 (i.e., with a cost basis of $1,000) but that is now worth $2,000, you can give it to charity, take a deduction for $2,000, and avoid paying capital gains tax on the $1,000 of appreciation. This can result in significant savings.
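A toy comparison of the two routes described above, using the post’s $2,000/$1,000 example; the 15% long-term capital gains rate is a hypothetical illustration (actual rates vary by situation), and none of this is tax advice.

```python
def donate_stock_directly(fair_value):
    """Donate the shares themselves: the charity receives full value,
    the donor deducts full value, and no capital gains tax is due."""
    return {"to_charity": fair_value, "cap_gains_tax": 0.0}

def sell_then_donate(fair_value, cost_basis, cap_gains_rate=0.15):
    """Sell first, then donate the after-tax proceeds: capital gains tax
    on the appreciation reduces what reaches the charity."""
    tax = (fair_value - cost_basis) * cap_gains_rate
    return {"to_charity": fair_value - tax, "cap_gains_tax": tax}

direct = donate_stock_directly(2_000)
sold = sell_then_donate(2_000, 1_000)
print(direct, sold)  # donating the stock directly sends more to the charity
```

Under these assumptions, donating the stock directly moves $150 more to the charity than selling and donating the proceeds, and the gap grows with the amount of appreciation and the tax rate.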

Due to the administrative cost associated with processing donations of stock, we ask that donors donate stock directly to GiveWell only if the value of the stock at the time of transfer is estimated at approximately $1,000 or more. More information on giving appreciated stock to GiveWell, either through E*Trade or GiveWell’s Vanguard donor-advised fund, is available here. You can also use Betterment to donate appreciated stock to GiveWell. If you’re interested in making a Bitcoin donation to GiveWell, please email us at donations@givewell.org to receive instructions on how to give. 5. Look into donor-advised funds to make the process smoother and more consistent year-to-year. Donor-advised funds allow donors to make a charitable donation (and get a tax deduction) now, while deciding which charity they’d like to support later. The donation goes into a fund that is “advised” by the donor, and the donor may later recommend a grant from the fund to the charity of his/her choice. We see a couple of advantages to this setup. One advantage is that you can separate your “decision date” (the date on which you decide which charity you’d like to support) from your “transaction date.” That means that if you aren’t ready to decide which charity to support yet, you can still get started on the process of transferring funds and getting a tax deduction for the appropriate year. Another advantage is that if you change the charity you support from year to year, you’re still working with the same partner when it comes to transactions, so the process for e.g. donating stock will not change from year to year. Donor-advised funds are often set up to easily accept donated stock or non-traditional assets, whereas charities may or may not be. Many large investment companies—Vanguard, Fidelity, Schwab—offer donor-advised funds. They generally charge relatively modest management fees. 
We also maintain our own donor-advised fund for donors interested in supporting our recommended charities; the minimum size for a donation is$5,000. The GiveWell donor-advised fund is likely most helpful for donors interested in giving certain types of securities, such as Vanguard mutual funds, that are not accepted by E*Trade.

6. Find out if your company offers donation matching. Many companies offer to match employees’ gifts up to a certain amount. We recommend checking with your employer if you’re unsure whether they offer this option. Some employers have a limited list of charities to which they will match donations; consider asking your employer whether they would add the charity of your choosing if it isn’t already on the list.
7. Consider the political environment. If you believe that your likelihood of taking charitable deductions is higher in 2017 than it will be in 2018, consider increasing your giving this year.
8. Choose your charity wisely. Saving money on taxes and transaction fees can be significant, in some cases approaching or exceeding a 50 percent increase in the amount you’re able to give. However, we believe that your choice of charity is a much larger factor in how much good your giving accomplishes.

Our charity recommendations make it possible to support outstanding, thoroughly vetted organizations—which we’ve investigated by reviewing academic evidence, interviewing staff, analyzing internal documents, conducting site visits, assessing funding needs, and more—without needing to do your own research. We publish the full details of our process and analysis, so you can spot-check any part of our work and reasoning.

Final notes

If you support our recommended charities (on the basis of our recommendation) but you don’t give through our website, please fill out this form to let us know about your gift; doing so helps GiveWell track our impact.

We believe that even when dealing with a relatively complicated gift (for example, a gift of stock), it’s possible to give quite quickly and with only minor hassle. The much more difficult challenge is choosing a charity, and we’ve tried to make that easy as well. We hope you’ll give this season, even if you’re just starting to think about it now.