# All Categories Blogs

Exploring how to get real change for your dollar.

### December 2018 open thread

Wed, 12/12/2018 - 12:33

Our goal with hosting quarterly open threads is to give blog readers an opportunity to publicly raise comments or questions about GiveWell or related topics (in the comments section below). As always, you’re also welcome to email us at info@givewell.org or to request a call with GiveWell staff if you have feedback or questions you’d prefer to discuss privately. We’ll try to respond promptly to questions or comments.

You can view our September 2018 open thread here.

The post December 2018 open thread appeared first on The GiveWell Blog.

### Staff members’ personal donations for giving season 2018

Mon, 12/10/2018 - 13:30

For this post, GiveWell staff members wrote up the thinking behind their personal donations for the year. We made similar posts in previous years.1 Staff are listed in order of their start dates at GiveWell.

Elie Hassenfeld

This year, I’m planning to donate to GiveWell for granting to top charities at its discretion.

I feel the same way I did last year, when I wrote, “GiveWell is currently producing the highest-quality research it ever has, which has led to more thoroughly researched, higher-quality recommendations that have been compared to more potential alternatives than ever before.”

I asked Holden Karnofsky, GiveWell’s co-founder, whether he thought there were promising opportunities for individual donors with long-termist views. After that conversation, I concluded that the Open Philanthropy Project and other donors were covering most of the opportunities I would find most promising.

I also considered giving to animal welfare organizations. I looked briefly at Animal Charity Evaluators’ research but ultimately didn’t feel like I had enough time to think through how their recommendations compared to giving to GiveWell, so I defaulted to GiveWell. I hope to give this more consideration in the future.

Natalie Crispin

I will be giving my annual gift to GiveWell for granting at its discretion to top charities. We expect that all of our top charities will be constrained by funding in the next year and that several will have unfunded opportunities to spend funds in highly cost-effective ways (at least 5 times as cost-effective as cash transfers). Our current best guess is that GiveWell will grant the funds it receives for granting at its discretion to Malaria Consortium, which would allow it to expand its work preventing child deaths from malaria in Nigeria or other countries. There is also a possibility that we will identify an opportunity that is more cost-effective than how Malaria Consortium would use funding at the current margin. Over the next few months, we will be discussing with our top charities how they plan to use funding from Good Ventures and other funders and what that means for how they would use additional funding. Giving to GiveWell for granting at its discretion allows for flexibility to take advantage of those opportunities.

I am very grateful for all the work, thoughtfulness, and hours of debate that my colleagues put into GiveWell’s recommendations this year. I am excited to support the most effective charities I know of.

Josh Rosenberg

I’m planning to give the same way that I did last year:

• 80% to GiveWell for granting at its discretion to top charities. GiveWell’s top charities are the most cost-effective ways to help people that I know of. I see Malaria Consortium’s work on seasonal malaria chemoprevention (the current default option for discretionary funding) as a robust and highly effective giving opportunity.
• 10% to animal welfare charities. I believe that animal welfare is a particularly important and neglected problem.
• 10% to long-term future-oriented causes. I have not yet chosen a donation target in this cause area. If I do not find an opportunity I am satisfied with after a small amount of additional research, I will enter this portion of my giving into a donor lottery.

I focused most of my giving on global health and development since GiveWell’s top charities have the most pressing funding gaps I am aware of. If I knew of an especially strong case for a particular giving opportunity in another cause area, I would be open to changing my allocation in the future.

Devin Jacob

I plan on making approximately 80% of my charitable donations in 2018 to GiveWell, with 100% of that money allocated to GiveDirectly. Compared to my colleagues at GiveWell, I value near-term improvements in material well-being more than I value reducing deaths. Donating to GiveDirectly is the best means of supporting this goal that I know of.

I struggle each year when attempting to assess whether I should bet on the possible long-term income effects of deworming. To date I have been unable to convince myself I should make this bet, even though I find little to argue with in our work on the expected value of donations to charities implementing deworming programs. I am making a decision to ignore the difference in expected value between a donation to a deworming charity and a donation to GiveDirectly due to the greater certainty of impact via the latter. I think my approach to charitable giving is conservative relative to other staff at GiveWell and many of our donors but I also think that my approach is reasonable given my specific ethical commitments.

I also support other organizations with gifts each year. This year, approximately 10-15% of my giving will go to organizations that do not meet GiveWell’s criteria. These organizations work in a number of areas including:

• Immigration policy, activism, and legal aid – International Refugee Assistance Project, RAICES, and the National Immigration Law Center
• Nonprofit news – primarily CALmatters, the Center for Investigative Reporting, and ProPublica
• Local issues I care about, such as transit infrastructure – e.g., Bike East Bay
• Other political causes

I choose to keep the political contributions I make private as some of the causes I support are controversial and I would not want my political beliefs to have any potential impact on GiveWell’s work.

In the course of my day-to-day work duties at GiveWell, I also frequently make small donations to our charities when testing various payment platforms. To date, these donations account for approximately 5-10% of my remaining planned gifts in 2018. These gifts are distributed among our recommended and standout charities haphazardly. I could refund these transactions, but choose not to do that as I think all of our recommended charities do excellent work and I am happy to support them.

Catherine Hollander

I plan to give 75% of my total charitable giving to Malaria Consortium’s seasonal malaria chemoprevention program. I value averting deaths quite highly and I believe, based on GiveWell’s assessment, that contributing toward filling Malaria Consortium’s funding gap will accomplish a lot of good in the world. In previous years (2017, 2016, and 2015), the majority of my gift has been directed to the Against Malaria Foundation (AMF), but I believe Malaria Consortium currently has a more pressing funding gap for its seasonal malaria chemoprevention work.

I plan to give 10% of my total giving to AMF to continue their work. I understand that giving predictably is helpful for organizations’ planning and I don’t wish to abruptly alter my support for AMF. I also think that AMF continues to represent an outstanding giving opportunity as one of GiveWell’s top charities.

I plan to give 5% of my total giving to StrongMinds, an organization focused on treating depression in Africa. I have not vetted this organization anywhere near as closely as GiveWell’s top charities have been vetted, though I understand that a number of people in the effective altruism community have a positive view of StrongMinds within the cause area of mental health; I don’t have any reason to think it is more cost-effective than GiveWell’s top charities. Intuitively, I believe mental health is an important cause area for donors to consider, and although we do not have GiveWell recommendations in this space, I would like to learn more about this area by making a relatively small donation to an organization that focuses on it.

I plan to give the remaining 10% of my charitable giving this year in conjunction with my partner to an organization working on criminal justice reform in the United States. We are going to discuss and review organizations together between now and the end of the year and make a joint gift in this space. I plan to consult previous recommendations made by the Open Philanthropy Project’s program officer focused on criminal justice reform, Chloe Cockburn, as well as to check with friends who are better informed about the needs in this space than I am.

Andrew Martin

I think there’s a strong case for donating to GiveWell to grant to top charities at its discretion this year.

Our top charities have substantial funding gaps for highly cost-effective programs, even after taking into account the $63.2 million that we’ve recommended Good Ventures allocate among our top charities. These funding gaps include expanding Malaria Consortium’s work on seasonal malaria chemoprevention in Nigeria, Chad, and Burkina Faso; extending HKI’s vitamin A supplementation programs in several countries over the next three years; and extending Deworm the World’s programs in Pakistan and Nigeria. As Natalie and James have noted, it seems likely that donations given to GiveWell at the end of 2018 to allocate at its discretion will be directed to Malaria Consortium’s seasonal malaria chemoprevention program. I’m planning to donate to GiveWell to allocate at its discretion because I expect that GiveWell will direct those funds either to Malaria Consortium or to another funding gap it judges to be even more valuable to fill.

Christian Smith

I’m planning to make my year-end donation to Malaria Consortium for its seasonal malaria chemoprevention (SMC) program. As my colleagues have mentioned, Malaria Consortium appears to be in a great position to scale up a highly effective intervention in areas with substantial malaria burdens. I decided not to give to GiveWell for granting at its discretion because I think there’s a chance GiveWell will decide deworming programs look more worthwhile than SMC on the margin. I take a more skeptical stance than most of my colleagues on the value of deworming programs. While I’m not confident, I would guess that our process for modeling the value of deworming relative to malaria prevention puts deworming in too favorable a light.

Isabel Arjmand

My giving this year looks very similar to last year’s.
It’s important to me for the bulk of my giving to go to organizations where I’m confident that my donation will have a substantial impact, and I don’t know of any giving opportunities in that vein that are as strong as GiveWell’s top charities. Each year I also give to a handful of other organizations, some in international development and others operating in the United States. I intend each of those donations to be large enough to be meaningful to me and to signal support for these programs, while still leaving the vast majority for GiveWell-recommended charities. In all, 80% of my charitable budget is going to GiveWell’s top charities and 20% to other causes, the same allocation as last year.

I’m giving 75% of my total year-end donation to grants to recommended charities at GiveWell’s discretion. I strongly considered designating my donation to Malaria Consortium’s seasonal malaria chemoprevention (SMC) program instead. I’m very excited about Malaria Consortium’s opportunity to provide SMC in Nigeria; I’ve been particularly impressed by Malaria Consortium as an organization over the past year; and I have more confidence in SMC as an intervention than I do in some others. It’s hard for me to imagine preferring that my donation go elsewhere when it’s time for GiveWell to grant out its discretionary funding from the fourth quarter of 2018. But I believe that if GiveWell does decide to give the next round of discretionary funding elsewhere, I’m more likely than not to agree with that decision. I hold this belief in part because my moral weights and overall outputs in our cost-effectiveness analysis are quite similar to the median staff member’s, and while I’m concerned about the evidence for deworming, I think that concern is adequately reflected in my cost-effectiveness analysis inputs.

An additional 5% of my donation will go to GiveDirectly.
I look forward to continuing to follow the work they do, particularly their cash benchmarking project, their work with refugees, and their continual research to improve the effectiveness of their programs.

I plan to distribute the remaining 20% of my donation across the following organizations:

• International Refugee Assistance Project, which advocates for refugees and displaced people with a focus on those from the Middle East.
• StrongMinds, which is the most promising organization I know of focused on mental health in low- and middle-income countries.
• Planned Parenthood Action Fund, which takes a comprehensive, intersectional view of women’s health and reproductive justice.
• Cool Earth, which works with local communities to protect rainforests and reduce carbon dioxide emissions.

As I wrote last year, I’d be somewhat surprised if these organizations were competitively cost-effective with GiveWell’s top charities, and I haven’t vetted them with an intensity that comes anywhere close to the rigor of GiveWell evaluations. I choose to support these programs in order to promote more justice-focused causes, further my own civic engagement, and signal support for work I think is important.

I also make small donations throughout the year to grassroots organizations working in the Bay Area, like Causa Justa :: Just Cause, Initiate Justice, and the Sogorea Te Land Trust. These donations, which are motivated primarily by community engagement and relationship-building, come out of my personal discretionary spending rather than what I budget for charitable giving.

As always, I’m grateful for the thoughtfulness of my colleagues, the work that went into producing this year’s recommendations, and the conversations we’ve had that have informed my own giving.
James Snowden

I’m planning to donate to GiveWell for allocating funds at its discretion because (i) I prefer GiveWell to have the flexibility to react to new information, and (ii) in the absence of new information, I expect additional funds will be allocated to Malaria Consortium, the charity I would have given to. I expect Malaria Consortium would use those funds to scale up seasonal malaria chemoprevention in Nigeria, Chad, and Burkina Faso. According to the Global Burden of Disease, Nigeria has the most deaths from malaria of any country, and Burkina Faso has the highest rate of malaria deaths relative to its population size. This drives my view that donations to Malaria Consortium are likely to be more cost-effective than donations to the Against Malaria Foundation, which sometimes distributes nets in countries with a lower malaria burden.

I may also continue to give a smaller proportion of my donations to organizations working on improving animal welfare and on the long-term future, but I haven’t yet decided whether to do so, or where to give.

Dan Brown

I will give 75% of my 2018 charity donation to GiveWell to allocate to recommended charities at its discretion. This is my first year working for GiveWell, and I’ve been very impressed with the quality of work that goes into our recommendations. My moral values seem to be quite close to the median values across staff members in our cost-effectiveness analysis, so I see no reason to deviate from GiveWell’s choice on that basis. As Natalie and James note, our best guess is that these funds will be allocated to Malaria Consortium to scale up its seasonal malaria chemoprevention programs.

I will give 15% of my donation to No Means No Worldwide, a global rape prevention organisation. I spent a reasonable amount of time during my PhD researching gender-based violence.
This encouraged me to donate to an organisation tackling sexual violence, particularly because the frequency of sexual violence globally is staggering. I have not vetted No Means No Worldwide with anything like the rigor of a GiveWell evaluation, but I have been impressed by what I have read so far (e.g., they are evaluating their program using RCTs, and I like that part of their approach is to promote positive masculinity amongst boys).

I will give 6% of my donation to Stonewall (UK), an organisation tackling discrimination against LGBT people. Whilst I have focused most of my donation on global health and development, I would also like to support a more justice-focused cause. I have fairly limited information with which to choose amongst charities in this area, as I’m not aware of a GiveWell-type organisation to help direct my donation. However, I would like to see more done to tackle homophobia in sport, and the main organisation I am aware of that has tried to do this is Stonewall (UK), through its Rainbow Laces campaign.

I will give the remaining 4% of my donation to Afrinspire. I have donated to this charity for a number of years. To my knowledge, the money I donate is used to help pay school costs for orphaned children in Kampala (through the Jaguza Initiative). I do not expect this to be as cost-effective as other charitable giving opportunities, but I do not think it would be responsible to unexpectedly decrease this donation now that I am paying more attention personally to cost-effectiveness.

Olivia Larsen

This year, I plan to give 95% of my year-end donation to GiveWell for granting at its discretion. This is my first year working at GiveWell full-time, and it will be my first time contributing to GiveWell’s discretionary fund. In previous years, I have chosen to support specific top charities among GiveWell’s recommendations.
Knowing which charity I was supporting in advance of my donation helped me more clearly conceptualize the impact I was making. Since starting at GiveWell, however, I’ve seen the level of detail and thought that the research team puts into analyzing each top charity’s funding gaps and identifying where a marginal dollar will have the largest impact. I’m convinced that the additional good associated with GiveWell being able to adapt to additional information and allocate my donation to the highest-impact charity we see when the grants are disbursed outweighs my desire to know where my donation will go ahead of time.

I also expect to allocate 5% of my year-end donation to helping factory-farmed animals. This will be my first donation to an animal-focused charity, and it is a decision I went back and forth on. I believe that animals suffer, and I believe that I should act to alleviate that suffering, for example by not eating animal products. Due to the scale and intensity of factory farming, and the neglectedness of the cause, I think it’s reasonable that interventions there might be orders of magnitude more cost-effective at averting the suffering of animals than GiveWell’s charities are at averting the suffering of humans. However, I’m very uncertain about how to compare helping animals to helping humans. I’m uncomfortable with the idea of allowing a human to suffer, even if I can alleviate the suffering of many animals with the same donation. I haven’t fully engaged with this discomfort yet, but I’m planning to make a donation targeted at helping animals this year to help me both clarify my own values and learn more about the effective animal advocacy space. I haven’t yet decided how to allocate this donation, but I expect that I’ll either donate to the Animal Welfare Fund through Effective Altruism Funds or outsource the decision to a trusted friend who knows more about effective animal advocacy than I do.
Amar Radia

This year, I plan to give 75% of my donations to GiveWell to allocate at its discretion. I believe that this will ensure that my donations go the furthest in global health and development. In previous years, I have given either to one of GiveWell’s top charities or to the Global Health Effective Altruism fund. This year, my greater understanding of the advantages of allowing my donations to be channelled at GiveWell’s discretion, coupled with my U.S. taxpayer status, has led me to prefer giving to GiveWell for regranting.

I plan to give the remaining 25% of my donations to an organization working on animal welfare but have not yet decided which one. It will likely be one of Animal Charity Evaluators’ top charities, and I expect to rely on the advice of a friend who has thought about effective animal charities far more than I have. I also considered giving some money to organizations focusing on the long-term future, but my view is that these organizations are not funding constrained.

Notes

1. See our staff giving posts from 2017, 2016, 2015, 2014, and 2013.
### We’ve added more options for cryptocurrency donors

Fri, 12/07/2018 - 13:03

We’ve updated our donations processing to better meet the needs of those who want to give via cryptocurrencies. Last year, after we began to accept Bitcoin, we received over $290,000 in Bitcoin donations.

By allowing more types of cryptocurrency donations, we’re enabling donors to realize tax deductions and to contribute more funding to their chosen charity based on gains in the cryptocurrencies they hold.

We’re now accepting donations in the following cryptocurrencies:

• Bitcoin (BTC)
• Bitcoin Cash (BCH)
• Ethereum (ETH)
• Ethereum Classic (ETC)
• Litecoin (LTC)
• 0x (ZRX)

We’ve built different pages for donating based on where you’d like to direct your support. To donate cryptocurrency, click the option you prefer:

If you have any questions or would like to donate in a currency not listed above, please reach out to us at donations@givewell.org.

If you have questions about the different options for directing your donation (top charities, standout charities, or operating expenses), please let us know.


### Response to concerns about GiveWell’s spillovers analysis

Thu, 12/06/2018 - 14:02

Last week, we published an updated analysis on “spillover” effects of GiveDirectly‘s cash transfer program: i.e., effects that cash transfers may have on people who don’t receive cash transfers but who live near those who do. (For more context on this topic, see our May 2018 blog post.) We concluded: “[O]ur best guess is that negative or positive spillover effects of cash are minimal on net.”

Economist Berk Özler posted a series of tweets expressing concern over GiveWell’s research process for this report. We understood his major questions to be:

1. Why did GiveWell publish its analysis on spillover effects before a key study it relied on was public? Is this consistent with GiveWell’s commitment to transparency? Has GiveWell done this in other cases?
2. Why did GiveWell place little weight on some papers in its analysis of spillover effects?
3. Why did GiveWell’s analysis of spillovers focus on effects on consumption? Does this imply that GiveWell does not value effects on other outcomes?

These questions apply to GiveWell’s research process generally, not just our spillovers analysis, so the discussion below addresses topics such as:

• When do our recommendations rely on private information, and why?
• How do we decide on which evidence to review in our analyses of charities’ impact?
• How do we decide which outcomes to include in our cost-effectiveness analyses?

Finally, this feedback led us to realize a communication mistake we made: our initial report did not communicate as clearly as it should have that we were specifically estimating spillovers of GiveDirectly’s current program, not commenting on spillovers of cash transfers in general. We will now revise the report to clarify this.

Note: It may be difficult to follow some of the details of this post without having read our report on the spillover effects of GiveDirectly’s cash transfers.

Summary

In brief, our responses to Özler’s questions are:

• Why did GiveWell publish its analysis on spillover effects before a key paper it relied on was public? One of our major goals is to allocate money to charities as effectively as possible. Sometimes, research we learn about cannot yet be made public but we believe it should affect our recommendations. In these cases, we incorporate the private information into our recommendations and we are explicit about how it is affecting our views. We expect that private results may be more likely to change but nonetheless believe that they contain useful information; we believe ignoring such results because they are private would lead us to reach less accurate conclusions. For another recent example of an important conclusion that relied on private results, see our update on the preliminary (private) results from a study on No Lean Season, which was key to the decision to remove No Lean Season as a top charity in 2018. We discuss other examples below.
• Why did GiveWell place little weight on some papers in its analysis of spillover effects? In general, our analyses aim to estimate the impact of programs as implemented by particular charities. The goal of our spillovers analysis is to make our best guess about the size of spillover effects caused by GiveDirectly’s programs in Kenya, Uganda, and Rwanda. We are not trying to communicate an opinion on the size of spillover effects of cash transfers in other countries or in development economics more broadly. Therefore, our analysis places substantially more weight on studies that are most similar to GiveDirectly’s program on basic characteristics such as geographic location and program type. Correspondingly, we place little weight on papers that do not meet these criteria. However, we’d welcome additional information that would help us improve our future decisionmaking about which papers to put the most weight on in our analyses.
• Why did GiveWell’s analysis of spillovers focus on effects on consumption? Our cost-effectiveness models focus on key outcomes that we expect to drive the bulk of the welfare effects of a program. In the case of our spillovers analysis, we believe the two most relevant outcomes for estimating spillover effects on welfare are consumption and subjective well-being. We chose to focus on consumption effects in large part because (a) this is consistent with how we model the impacts of other programs, such as deworming, and (b) distinguishing effects on subjective well-being from effects on consumption in a way that avoids double-counting benefits was too complex to do in the time we had available. It is possible that additional work on subjective well-being measures would meaningfully change how we assess benefits of programs (for this program and potentially others). This is a question we plan to return to in the future.

As noted above, our current best guess is that negative or positive spillover effects of GiveDirectly’s cash transfers are minimal on net. However, we emphasize that our conclusion at this point is very tentative, and we hope to update our views next year if there is more public discussion or research on the areas of uncertainty highlighted in our analysis and/or if public debate about the studies covered in our report raises major issues we had not previously considered.

Details follow.

Why did GiveWell publish its analysis on spillover effects before a key paper it relied on was public?

In our analysis of the spillover effects of GiveDirectly’s cash transfer program, we place substantial weight on GiveDirectly’s “general equilibrium” (GE) study (as we noted we would do in May 2018, prior to seeing the study’s results: “We plan to reassess the cash transfer evidence base and provide our updated conclusions in the next several months (by November 2018 at the latest). One reason that we do not plan to provide a comprehensive update sooner is that we expect upcoming midline results from GiveDirectly’s ‘general equilibrium’ study, a large and high-quality study explicitly designed to estimate spillover effects, will play a major role in our conclusions. Results from this study are expected to be released in the next few months.”) because:

• it is the study with the largest sample size,
• its methodology was designed to estimate both across-village and within-village spillover effects, and
• it is a direct study of a version of GiveDirectly’s program.

The details of this study are currently private, though we were able to share the headline results and methodology when we published our report.

This represents one example of a general policy we follow, which is to be willing to compromise to some degree on transparency in order to use the best information available to us to improve the quality of our recommendations. More on the reasoning behind this policy:

• Since our recommendations affect the allocation of over $100 million each year, the value of improving our recommendations by factoring in the best information (even if private) can be high. Every November we publish updates to our recommended charities so that donors giving in December and January (when the bulk of charitable giving occurs) can act on the most up-to-date information.
• We have ongoing communications with charities and researchers to learn about new information that could affect our recommendations. Private information (both positive and negative) has been important to our views on a number of occasions. Beyond the example of our spillovers analysis, early private results were key to our views on topics including:
  • No Lean Season in 2018 (negative result)[3]
  • Deworming in 2017 (positive result)[4]
  • Insecticide resistance in 2016 (modeling study)[5]
  • Development Media International in 2015 (negative result)[6]
  • Living Goods in 2014 (positive result)[7]
• Note that in all of the above cases, we worked with the relevant researchers to get permission to publicly share basic information about the results we were relying on, as we did in the case of the GE study.
• In all cases, we expected that full results would be made public in the future. Our understanding is that early headline results from studies can often be shared publicly, while full working papers may take substantially longer to release because they are time-intensive to produce. We would be more hesitant to rely on a study that has been private for an unusually long period unless there were a good reason for it.
• However, relying on private studies conflicts to some extent with our goal of transparency. In particular, we believe two major downsides of our policy on private information are (a) early private results are more likely to contain errors, and (b) we are not able to benefit from public scrutiny and discussion of the research. We would ideally have seen a robust public discussion of the GE study before we released our recommendations in November, but the timeline for the public release of GE study results did not allow that. We look forward to closely following the public debate in the future and plan to update our views based on what we learn.
• Despite these limitations, we have generally found early, private results to be predictive of final, public results. This, combined with the fact that we believe private results have improved our recommendations on a number of occasions, leads us to believe that the benefits of our current policy on using private information outweigh the costs.
A few other notes:

• Although we provide a number of cases above in which we relied on private information, the vast majority of the key information we rely on for our charity recommendations is public.
• When private information is shared with us that implies a positive update about a charity’s program, we try to be especially attentive to potential conflicts of interest. In this case, there is potential for concern because the GE study was co-authored by Paul Niehaus, Chairman of GiveDirectly. We chose not to substantially limit the weight we place on the GE study because (a) a detailed pre-analysis plan was submitted for this study, and (b) three of the four co-authors (Ted Miguel, Johannes Haushofer, and Michael Walker) do not have an affiliation with GiveDirectly. We have no reason to believe that GiveDirectly’s involvement altered the analysis undertaken. In addition, the GE study team informed us that Paul Niehaus recused himself from final decisions about what the team communicated to GiveWell.
• When we published our report (about one week ago), we expected that some additional analysis from the GE study would be shared publicly soon (which we still expect). We do not yet have an exact date and do not know precisely what content will be shared (though we expect it to be similar to what was shared with us privately).

Why did GiveWell place little weight on some papers in its analysis of spillover effects?

Some general context on GiveWell’s research that we think is useful for understanding our approach in this case is:

• We are typically estimating the impact of programs as implemented by particular charities, not aiming to publish formal meta-analyses about program areas as a whole. As noted above, we believe we should have communicated more clearly about this in our original report on spillovers, and we will revise the report to clarify.
• We focus our limited time on the research that we think is most likely to affect our decisions, so our style of analysis is often different from what is typically seen in academia. (We think the differences in the kind of work we do are captured well by a relevant Rachel Glennerster blog post.)

Consistent with the above, the goal of our spillovers analysis was to make a best guess for the size of the spillover effect of GiveDirectly’s (GD’s) program in Kenya, Uganda, and Rwanda specifically.[8] We are not trying to communicate an opinion on the size of spillover effects of cash transfers in other countries or development economics more broadly. If we were trying to do the latter, we would have considered a much wider range of literature.

We expect that the studies most similar to GD’s program on basic characteristics, such as geographic location and program type, will be most useful for predicting spillovers in the GD context. So, we prioritized studies that (1) took place in sub-Saharan Africa and (2) evaluate unconditional cash transfer programs (further explanation in footnote).[9] We would welcome additional engagement on this topic: that is, (a) to what extent should we believe that effects estimated in studies not meeting these criteria would apply to GD’s cash transfer programs, and (b) are there other criteria that we should have used?

A further factor that causes us to put more weight on the five studies we chose to review deeply is that they all study transfers distributed by GD, which we see as increasing their relevance to GD’s current work (though the specifics of the programs that were studied vary from GD’s current program). We believe that studies that do not meet the above criteria could affect our views on spillovers of GD’s program to some extent, but they would receive lower weight in our conclusions since they are less directly relevant to GD’s program.

We saw further review of studies that did not meet the above criteria as a lower priority than a number of other analyses that we think would be more likely to shift our bottom-line estimate of the spillovers of GD’s program. Even though we focused on the subset of studies most relevant to GD’s program, we were not able to combine their results into a reasonable explicit model of spillover effects, because we found that key questions were not answered by the available data (our attempt at an explicit model is in the following footnote).[10] One fundamental challenge is that we are trying to apply estimates of “within-village” spillover effects to predict across-village spillover effects.[11] Additional complications are described here.
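To make the modeling challenge concrete, here is a deliberately oversimplified sketch of why a within-village spillover estimate does not translate directly into an across-village one. All numbers and the linear functional form are hypothetical assumptions for illustration, not GiveWell's estimates:

```python
# A within-village spillover estimate measures the effect on untreated
# households living alongside treated ones. If nearly all households in a
# village are treated (as in GD's program in Kenya and Uganda), the relevant
# spillover falls mostly on *other* villages, where exposure to treated
# households is plausibly much lower.

def spillover_per_household(within_village_effect, exposure_share):
    """Scale a within-village spillover estimate by a household's exposure
    to treated neighbors (a strong, purely hypothetical linearity assumption)."""
    return within_village_effect * exposure_share

within_effect = -0.05  # hypothetical: -5% consumption for untreated neighbors
neighbor_village_exposure = 0.2  # hypothetical: weaker cross-village integration

# Same underlying estimate, very different implied across-village effect:
print(round(spillover_per_household(within_effect, 1.0), 3))
print(round(spillover_per_household(within_effect, neighbor_village_exposure), 3))
```

The point of the sketch is only that the exposure parameter is exactly the kind of quantity the available data did not pin down, which is why an explicit model was hard to justify.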

More on why we placed little weight on particular studies that Özler highlighted in his comments:[12]

• We placed little weight on the following papers in our initial analysis for the reasons given in parentheses: Angelucci & De Giorgi 2009 (conditional transfers, study took place in Mexico), Cunha et al. 2017 (study took place in Mexico), Filmer et al. 2018 (conditional transfers, study took place in the Philippines), and Baird, de Hoop, and Özler 2013 (mix of conditional and unconditional transfers).
• In addition, the estimates of mental health effects on teenage schoolgirls in Baird, de Hoop, and Özler 2013 seem relatively less useful for predicting the impacts of spillovers from cash transfers given to households, particularly in villages where almost all households receive transfers, as is often the case in GD’s program.[13]
Why did GiveWell’s analysis of spillovers focus on effects on consumption? Does this imply that GiveWell does not value effects on other outcomes?

Some general context on GiveWell’s research that we think is useful for understanding our approach in this case is:

• When modeling the cost-effectiveness of any program, there are typically a large number of outcomes that could be included in the model. In our analyses, we focus on the key outcomes that we expect to drive the bulk of the welfare effects of a program.
• For example, our core cost-effectiveness model primarily considers various programs’ effects on averting deaths and increasing consumption (either immediately or later in life). This means that, e.g., we do not include benefits of averting vision impairment in our cost-effectiveness model for vitamin A supplementation (in part because we expect those effects to be relatively small as a portion of the overall impact of the program).
• This does not mean that we think excluded outcomes are unimportant. We focus on the largest impacts of programs because (a) we think they are a good proxy for the overall impact of the relevant programs, and (b) having fewer outcomes simplifies our analysis, which leads to less potential for error, better comparability between programs, and a more manageable time investment in modeling.
• For a deeper assessment of which program impacts we include in and exclude from our core cost-effectiveness model and why, see our model’s “Inclusion/exclusion” sheet.[14] We aim to include outcomes that can be justified by evidence, can feasibly be modeled, and are consistent with how we handle other program outcomes. We revisit our list of excluded outcomes periodically to assess whether any of them could lead to a major shift in our cost-effectiveness estimate for a particular program.
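The "focus on key outcomes" principle above can be sketched as a toy calculation. All numbers, the function, and the conversion rate below are hypothetical placeholders for illustration, not GiveWell's actual model inputs:

```python
# Toy comparison in the spirit of the principles above: count only the key
# outcomes (deaths averted, consumption gained), convert them to a common
# welfare unit, and divide by cost so programs are comparable per dollar.

def welfare_per_dollar(cost, deaths_averted, consumption_units,
                       value_per_death=100.0):
    """Welfare units bought per dollar, using only the two key outcomes.
    value_per_death is a hypothetical exchange rate between outcomes."""
    total_value = deaths_averted * value_per_death + consumption_units
    return total_value / cost

# Two hypothetical programs with the same budget:
program_a = welfare_per_dollar(cost=1_000_000, deaths_averted=200,
                               consumption_units=2_000)
program_b = welfare_per_dollar(cost=1_000_000, deaths_averted=0,
                               consumption_units=15_000)

print(program_a > program_b)  # A's mortality benefit dominates in this toy case
```

Omitting a small outcome (say, a minor quality-of-life benefit) barely moves these numbers, which is the sense in which fewer modeled outcomes buys simplicity and comparability at little cost in accuracy.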

In our spillovers analysis, we applied the above principles to try to identify the key welfare effects. Among the main five studies we reviewed on spillovers, it seems like the two most relevant outcomes are consumption and subjective well-being. We chose to focus on consumption for the following reasons:

• Assessing the effects of cash transfers on consumption (rather than subjective well-being) is consistent with how we model the welfare effects of other programs that we think increase consumption on expectation, such as deworming.
• Distinguishing effects on subjective well-being from effects on consumption in order to avoid double-counting benefits was too complex to do in the time we had available. It seems intuitively likely that standards of living (proxied by consumption) affect subjective well-being. In the Haushofer and Shapiro studies and in the GE study, the spillover effects act in the same direction for both consumption and subjective well-being. We do not think it would be appropriate to simply add subjective well-being effects into our model over and above effects on consumption since that risks double-counting benefits.
• We do not have a strong argument that consumption is a more robust proxy for “true well-being” than subjective well-being, but given that consumption effects can be more easily compared across our programs we have chosen it as the default option at this point.

We hope to broadly revisit in the future whether we should be placing more weight on measures of subjective well-being across programs. It is possible that additional work on subjective well-being measures would meaningfully change how we assess benefits of programs (for this program and potentially others).

Examples of our questions about how to interpret subjective well-being effects in the cash spillovers literature include:

• In the Haushofer and Shapiro studies, how should we interpret each of the underlying components of the subjective well-being indices? For example, how does self-reported life satisfaction map onto utility versus self-reported happiness?
• In Haushofer, Reisinger, & Shapiro 2015, there is a statistically significant negative spillover effect on life satisfaction, but there are no statistically significant effects on happiness, depression, stress, cortisol levels, or the overall subjective well-being index (column (4) of Table 1). How should we interpret these findings?
Next steps
• We hope that there is more public discussion of some of the policy-relevant questions we highlighted in our report and of the other points of uncertainty highlighted throughout this post. Our conclusions on spillovers are very tentative and could be affected substantially by more analysis, so we would greatly appreciate any feedback or pointers to relevant work.[15]
• We are planning to follow up with Dr. Özler to better understand his views on spillover effects of cash transfers. We have appreciated his previous blog posts on this topic and want to ensure we are getting multiple perspectives on the relevant issues.

Notes

1. For more context on this topic, see our May 2018 blog post.
2. “We plan to reassess the cash transfer evidence base and provide our updated conclusions in the next several months (by November 2018 at the latest). One reason that we do not plan to provide a comprehensive update sooner is that we expect upcoming midline results from GiveDirectly’s ‘general equilibrium’ study, a large and high-quality study explicitly designed to estimate spillover effects, will play a major role in our conclusions. Results from this study are expected to be released in the next few months.” (More.)
3. “In a preliminary analysis shared with GiveWell in September 2018, the researchers did not find evidence for a negative or positive impact on migration, and found no statistically significant impact on income and consumption.” (More.)
4. “We have seen preliminary, confidential results from a 15-year follow-up to Miguel and Kremer 2004. We are not yet able to discuss the results in detail, but they are broadly consistent with the findings from the 10-year follow-up analyzed in Baird et al. 2016.” (More.)
5. “We have seen two modeling studies which model clinical malaria outcomes in areas with ITN coverage for different levels of resistance based on experimental hut trial data. Of these two studies, the most recent study we have seen is unpublished (it was shared with us privately), but we prefer it because the insecticide resistance data it draws from is more recent and more comprehensive.” (More.)
6. “The preliminary endline results did not find any effect of DMI’s program on child mortality (it was powered to detect a reduction of 15% or more), and it found substantially less effect on behavior change than was found at midline. We cannot publicly discuss the details of the endline results we have seen, because they are not yet finalised and because the finalised results will be embargoed prior to publication, but we have informally incorporated the results into our view of DMI’s program effectiveness.” (More.)
7. “The researchers have published an abstract on the study, and shared a more in-depth report with us. The more in-depth report is not yet cleared for publication because the authors are seeking publication in an academic journal.” (More.)
8. This program provides $1,000 unconditional transfers and treats almost all households within target villages in Kenya and Uganda (though still treats only eligible households in Rwanda).
9. On (1): Our understanding is that the nature and size of spillover effects is likely to be highly dependent on the context studied, for example because the extent to which village economies are integrated might differ substantially across contexts (e.g., how close households are to larger markets outside of the village in which they live, how easily goods can be transported, etc.). On (2): We expect that providing cash transfers conditional on behavioral choices is a fairly different intervention from providing unconditional cash transfers, and so may have different spillover effects.
10. We tried to create such an explicit model here (explanation here).
11. GiveDirectly treats almost all households within target villages in Kenya and Uganda (though still treats only eligible households in Rwanda).
12. Note on terminology: In our spillovers analysis report, we talk about studies in terms of “inclusion” and “exclusion.” We may use the term “exclude” differently than it is sometimes used in, e.g., academic meta-analyses. When we say that we have excluded studies, we have typically lightly reviewed their results and placed little weight on them in our conclusions. We did not ignore them entirely, as may happen for papers excluded from an academic meta-analysis. To try to clarify this, in this blog post we have used the term “place little weight.” We will try to be attentive to this in future research that we publish.
13. We expect that local spillover effects via psychological mechanisms are less likely to occur with the current spatial distribution of GD’s program. In GD’s program in Kenya and Uganda, almost all households are treated within its target villages. In addition, the majority of villages within a region are treated in a block. Baird, de Hoop, and Özler 2013 estimate spillover effects within enumeration areas (groups of several villages), and the authors believe that the “detrimental effects on the mental well-being of those randomly excluded from the program in intervention areas is consistent with the idea that an individual’s utility depends on her relative consumption (or income or status) within her peer group” (p. 372). The spatial distribution of GD’s program in Kenya and Uganda makes it more likely that the majority of one’s local peer group receives the same treatment assignment.
14. We have not yet added it, but we plan to add “Subjective well-being” under the list of outcomes excluded in the “Cross-cutting / Structural” section of the sheet, since it may be relevant to all programs.
15. If you are aware of relevant analyses or studies that we have not covered here, please let us know at info@givewell.org.
The post Response to concerns about GiveWell’s spillovers analysis appeared first on The GiveWell Blog.

### Our updated top charities for giving season 2018

Mon, 11/26/2018 - 11:59

We’re excited to share our list of top charities for the 2018 giving season. We recommend eight top charities, all of which we also recommended last year.

Our bottom line

We recommend three top charities implementing programs whose primary benefit is reducing deaths. They are:

• Malaria Consortium (seasonal malaria chemoprevention program)
• Against Malaria Foundation
• Helen Keller International (vitamin A supplementation program)

Five of our top charities implement programs that aim to increase recipients’ incomes and consumption. They are:

• Evidence Action’s Deworm the World Initiative
• Sightsavers (deworming program)
• Schistosomiasis Control Initiative
• The END Fund (deworming program)
• GiveDirectly

These charities represent the best opportunities we’re aware of to help people, according to our criteria. We expect GiveWell’s recommendations to direct more than $100 million to these organizations collectively over the next year. We expect our top charities to be able to effectively absorb hundreds of millions of dollars beyond that amount.

Our list of top charities is the same as it was last year, with the exception of Evidence Action’s No Lean Season. We removed No Lean Season from the list following our review of the results of a 2017 study of the program.

We also recognize a group of standout charities. We believe these charities are implementing programs that are evidence-backed and may be extremely cost-effective. However, we do not feel as confident in the impact of these organizations as we do in our top charities. We provide more information about our standout organizations here.

Where do we recommend donors give?
• We recommend that donors choose the “Grants to recommended charities at GiveWell’s discretion” option on our donation forms. We grant these funds quarterly to the GiveWell top charity or top charities where we believe they can do the most good.
• If you prefer to give to a specific charity, we believe that all of our top charities are outstanding and will use additional funding effectively. If we had additional funds to allocate now, the most likely recipient would be Malaria Consortium to scale up its work providing seasonal malaria chemoprevention.
• If you have supported GiveWell’s operations in the past, we ask that you maintain your support. If you have not supported GiveWell’s operations in the past, we ask that you consider designating 10 percent of your donation to help fund GiveWell’s operations.
How should donors give?
Conference call to discuss recommendations

We’re holding a conference call on Tuesday, December 4, at 12pm ET/9am PT to discuss our latest recommendations and to answer any questions you have. Sign up here to join the call.

Below, we provide:

• An overview of the research we conducted in 2018 that was directly relevant to these recommendations. More
• An explanation of changes to our recommended charity list and of major updates in the past year. More
• The funding allocation that we are recommending to Good Ventures and our top charities’ remaining room for more funding. More
• Our recommendations for people interested in supporting our top charities. More
Our research process in 2018

We plan to summarize all of the research we completed this year in a future post as part of our annual review process. A major focus of 2018 was improving our recommendations in future years, in particular through our work on GiveWell Incubation Grants and completing intervention reports on promising programs.

Below, we highlight the key research that led to our current charity recommendations. This page describes our general process for conducting research.

• Following existing top charities. We followed the progress and plans of each of our 2017 top charities. We had several conversations with each organization and reviewed documents they shared with us. We published updated reviews of each of our top charities. Key information from this work is available in the following locations:
• Our page summarizing changes at each of our top charities and standouts in 2018.
• Our workbook with each charity’s funding needs and our estimates of the cost-effectiveness of filling each need.
• Staying up to date on the research on the interventions implemented by our top charities. Details on some of what we learned in the section below.
• Making extensive updates to our cost-effectiveness model and publishing 14 updates to the model over the course of the year. In addition to updating our cost-effectiveness model with information from the intervention research described above, we added a “country selection” tab to our cost-effectiveness analysis (so that users can toggle between overall and country-specific cost-effectiveness estimates); an “inclusion/exclusion” tab, which lists different items that we considered whether or not to account for in our cost-effectiveness analysis; and we explicitly modeled factors that could lead to wastage (charities failing to use the funds they receive to implement their programs effectively).
• Completing a review of Zusha! We completed our review of the Georgetown University Initiative on Innovation, Development, and Evaluation—Zusha! Road Safety Campaign and determined that it did not meet all of our criteria to be a top charity. We named Zusha! a standout charity.
Major updates from the last 12 months

Below, we summarize major updates across our recommended charities over the past year. For detailed information on what changed at each of our top and standout charities, see this page.

• We removed Evidence Action’s No Lean Season from our top charity list. At the end of 2017, we named No Lean Season, a program that provides loans to support seasonal migration in Bangladesh, as one of GiveWell’s top charities. This year, we updated our assessment of No Lean Season based on preliminary results we received from a 2017 study of the program. These results suggested the program did not successfully induce migration in the 2017 lean season. Taking this new information into account alongside previous studies of the program, we and Evidence Action no longer believe No Lean Season meets our top charity criteria. We provide more details on this decision in this blog post.
• We received better information about Sightsavers’ deworming program. In previous years, we had limited information from Sightsavers documenting how it knew that its deworming programs were effectively reaching their intended beneficiaries. This year, Sightsavers shared significantly more monitoring information with us. This additional information substantially increased our confidence in Sightsavers’ deworming program. This spreadsheet shows the monitoring we received from Sightsavers in 2018.
• We reviewed new research on the priority programs implemented by our top charities and updated our views and cost-effectiveness analyses accordingly. Examples of such updates include:
Recommended allocation of funding for Good Ventures and top charities’ remaining room for more funding

Allocation recommended to Good Ventures

Good Ventures is a large foundation with which GiveWell works closely; it has been a major supporter of GiveWell’s top charities since 2011. Each year, we provide recommendations to Good Ventures regarding how we believe it can most effectively allocate its grants to GiveWell’s recommended charities, in terms of the total amount donated (within the constraints of Good Ventures’ planning, based in part on the Open Philanthropy Project’s recommendations on how to allocate funding across time and across cause areas) as well as the distribution between recipient charities.

Because Good Ventures is a major funder that we expect to follow our recommendations, we think it’s important for other donors to take its actions into account; we also want to be transparent about the research that leads us to make our recommendations to Good Ventures. That said, Good Ventures has not finalized its plans for the year and may give differently from what we’ve recommended. We think it’s unlikely that any differences would have major implications for our bottom-line recommendations for other donors.

This year, GiveWell recommended that Good Ventures grant $64.0 million to our recommended charities, allocated as shown in the table below.

| Charity | Recommended allocation from Good Ventures | Remaining room for more funding1 |
|---|---|---|
| Malaria Consortium (SMC program) | $26.6 million | $43.9 million |
| Evidence Action (Deworm the World Initiative) | $10.4 million | $27.0 million |
| Sightsavers (deworming program) | $9.7 million | $1.6 million |
| Helen Keller International (VAS program) | $6.5 million | $20.6 million |
| Against Malaria Foundation | $2.5 million | $72.5 million |
| Schistosomiasis Control Initiative | $2.5 million | $16.9 million |
| The END Fund (deworming program) | $2.5 million | $45.8 million |
| GiveDirectly | $2.5 million | >$100 million |
| Standout charities | $800,000 (combined) | n/a |

We discuss our process for making our recommendation to Good Ventures in detail in this blog post.

Allocation of GiveWell discretionary funds

As part of reviewing our top charities’ funding gaps to make a recommendation to Good Ventures, we also decided how to allocate the $1.1 million in discretionary funding we currently hold. The latter comes from donors who chose to donate to “Grants to recommended charities at GiveWell’s discretion” in recent months. We decided to allocate this funding to Malaria Consortium’s seasonal malaria chemoprevention program, due to how large and cost-effective we believe Malaria Consortium’s funding gap is.

Top charities’ remaining room for more funding

Although we are expecting to direct a significant amount of funding to our top charities ($65.1 million between Good Ventures and our discretionary funding), we believe that nearly all of our top charities could productively absorb considerably more funding than we expect them to receive from Good Ventures, our discretionary funding, and additional donations we direct based on our recommendation. This spreadsheet lists all of our top charities’ funding needs; rows 70-79 show total funding gaps by charity.

Our recommendation for donors

The bottom line
• We recommend that donors choose the option to support “Grants to recommended charities at GiveWell’s discretion” on our donate forms. We grant these funds quarterly to the GiveWell top charity or top charities where we believe they can do the most good. We take into account charities’ funding needs and donations they have received from other sources when deciding where to grant discretionary funds. (The principles we outline in this post are indicative of how we will make decisions on what to fund.) We then make these grants to the highest-value funding opportunities we see among our recommended charities. This page lists discretionary grants we have made since 2014.
• If you prefer to give to a specific charity, we believe that all of our top charities are outstanding and will use additional funding effectively. See below for information that may be helpful in deciding between charities we recommend.
• If we had additional funds to allocate, the most likely recipient would be Malaria Consortium to scale up its work providing seasonal malaria chemoprevention.
Comparing our top charities

If you’re interested in donating to a specific top charity or charities, the following information may be helpful as you compare the options on our list. The table summarizes key facts about our top charities; column headings are defined below.

Note: the cost-effectiveness estimates we present in this post differ from those in our published cost-effectiveness analysis for a number of reasons.2


| Charity | Estimated cost-effectiveness (relative to cash transfers) | Primary benefits | Communication quality | Monitoring quality |
|---|---|---|---|---|
| … | … | Possibly increasing income in adulthood | Moderate | Moderate |
| The END Fund (deworming program) | 5.4 | Possibly increasing income in adulthood | Moderate | Relatively weak |
| GiveDirectly | 1 | Immediately increasing income and assets | Strong | Strong |

• Estimated cost-effectiveness (relative to cash transfers) at the present margin. We recommended that Good Ventures give $64.0 million to our top and standout charities, prioritizing the funding gaps that we believe are most cost-effective. The table above shows our estimates for the cost-effectiveness of additional donations to each charity, after accounting for the $64.0 million we’ve recommended that Good Ventures give (our recommendation won’t necessarily be followed, but we think it’s unlikely that differences will be large enough to affect our bottom-line recommendation to donors).
• Primary benefits of the intervention. This column describes the major benefit we see to supporting a charity implementing this intervention.
• Quality of the organization’s communication. In most cases, we have spent dozens or hundreds of hours interacting with our top charities. Here, we share our subjective impression of how well each organization has communicated with us. Our assessment of the quality of a charity’s communications is driven by whether we have been able to resolve our questions—particularly our less straightforward questions—about the organization’s activities, impact, and plans; how much time and effort was required to resolve those questions; how often the charity has sent us information that we later learned is inaccurate; and how direct we believe the charity is in acknowledging its weaknesses and mistakes.

The organizations that stand out for high-quality communications are those that have most thoughtfully and completely answered our questions; brought problems with the program to our attention; and communicated clearly with us about timelines for providing additional information. High-quality communications reduce the time that we need to spend answering each question and therefore allow us to gain a greater degree of confidence in an organization. More importantly, our communication with an organization is one of the few ways that we can directly observe an organization’s general competence and thoughtfulness, so we see this as a proxy for unobserved ways in which the organization’s staff affect the impact of the program.

• Ongoing monitoring and likelihood of detecting future problems. The quality of the monitoring we have received from our top charities varies widely, although we believe it stands out from that of the majority of charities. Ideally, the monitoring data charities collect would be representative of the program overall (by sampling all or a random selection of locations or other relevant units); would measure the outcomes of greatest interest for understanding the impact of the program; and would use methods that result in a low risk of bias or fraud in the results. In assessing the quality of a charity’s monitoring, we ask ourselves, “how likely do we believe it is that there are substantive problems with the program that are not detected by this monitoring?”

Monitoring results inform our cost-effectiveness analyses directly. In addition, we believe that the quality of an organization’s monitoring gives us information that is not fully captured in these analyses. Similar to how we view communication quality, we believe that understanding how an organization designs and implements monitoring is an opportunity to observe its general competence and degree of openness to learning and program improvement.

Other key factors donors might want to consider when making their giving decision:

• As shown in the table above, our top charities implement programs with different primary benefits: some primarily avert deaths; others primarily increase incomes or consumption. Donors’ preference for programs that avert deaths relative to those that increase incomes (or how one weighs the value of averting a death at a given cost or increasing incomes a certain amount at a given cost) depends on their moral values. The cost-effectiveness estimates shown above rely on the GiveWell research team’s moral values. For more on how we (and others) compare the “good” accomplished by different programs, see this blog post. Donors may make a copy of our cost-effectiveness model to input their own moral weights and see how that impacts the relative cost-effectiveness of our top charities.
• The table above shows cost-effectiveness estimates for different charities. We put significant weight on cost-effectiveness figures, but they have limitations. Read more about how we use cost-effectiveness estimates in this blog post.
• Ultimately, donors are faced with a decision about how to weigh estimated cost-effectiveness (incorporating their moral values) against additional information about an organization that we have not explicitly modeled. We’ve written about this choice in the context of choosing between GiveDirectly and SCI in this 2016 blog post.
• Four of our top charities implement deworming programs. We recommend the provision of deworming treatments to children for its possible impact on recipients’ incomes in adulthood. We work in an expected value framework; in other words, we’re willing to support a higher-risk intervention if it has the potential for higher impact (more in this post about our worldview). Deworming is such an intervention. We believe that deworming may have very little impact, but that risk is outweighed by the possibility that it has very large impact, and it’s very cheap to implement. We describe our assessment of deworming in this summary blog post as well as this detailed post. Donors who have lower risk tolerance may choose not to support charities implementing deworming programs.
• The table above lists our views on the quality of each of our top charities’ monitoring. This 2016 blog post describes our view of AMF’s monitoring and may give donors more insight into how we think about monitoring quality.
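To make the moral-weights point above concrete, here is a hypothetical sketch of how a donor’s weights can change which program looks most cost-effective. All program names and figures below are invented for illustration; GiveWell’s actual cost-effectiveness model is a spreadsheet with many more inputs and adjustments.

```python
# Hypothetical sketch: ranking programs by a weighted sum of outcomes.
# All names and numbers are invented, not from GiveWell's model.

# Modeled outcomes per $100,000 donated (invented figures)
outcomes = {
    "Program A (malaria prevention)": {"deaths_averted": 20, "income_doublings": 40},
    "Program B (deworming)":          {"deaths_averted": 0,  "income_doublings": 1500},
}

def total_value(program_outcomes, moral_weights):
    # Weighted sum: value = sum over outcome types of
    # (units of outcome) x (donor's weight per unit)
    return sum(moral_weights[k] * n for k, n in program_outcomes.items())

# Donor 1 values averting a death as much as 50 doublings of income;
# donor 2 values it as much as 100 doublings.
weights_1 = {"deaths_averted": 50,  "income_doublings": 1}
weights_2 = {"deaths_averted": 100, "income_doublings": 1}

for weights, label in [(weights_1, "donor 1"), (weights_2, "donor 2")]:
    ranked = sorted(outcomes, key=lambda p: total_value(outcomes[p], weights),
                    reverse=True)
    print(label, "prefers:", ranked[0])
```

With these invented figures the ranking flips between the two donors even though the modeled outcomes are identical, which is why substituting your own moral weights into a copy of the model can change the relative cost-effectiveness of the top charities.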
Giving to support GiveWell’s operations

GiveWell is currently in a financially stable position. Over the next few years, we are planning to significantly increase our spending, driven by hiring additional research and outreach staff. We project that our revenue will approximately equal our expenses over the next few years; however, this projection includes an expectation of growth in the level of operating support we receive.

We retain our “excess assets policy” to ensure that if we fundraise for our own operations beyond a certain level, we will grant the excess to our recommended charities. In June of 2018, we applied our excess assets policy and designated $1.75 million in unrestricted funding for grants to recommended charities.

We cap the amount of operating support we ask Good Ventures to provide to GiveWell at 20 percent of our operating expenses, for reasons described here.

We ask that donors who use GiveWell’s research consider the following:

• If you have supported GiveWell’s operations in the past, we ask that you maintain your support. Having a strong base of consistent operations support allows us to make valuable hires when opportunities arise and to minimize staff time spent on fundraising for our operating expenses.
• If you have not supported GiveWell’s operations in the past, we ask that you designate 10 percent of your donation to help fund GiveWell’s operations. This can be done by selecting the option to “Add 10% to help fund GiveWell’s operations” on our credit card donation form or letting us know how you would like to designate your funding when giving another way.

Questions?

We’re happy to answer questions in the comments below. Please also feel free to reach out directly with any questions.

This post was written by Andrew Martin, Catherine Hollander, Elie Hassenfeld, James Snowden, and Josh Rosenberg.

Notes

1. This column displays our top charities’ remaining room for more funding, or the amount we believe they can use effectively, for the next three years (2019-2021), after accounting for the $64.0 million we’ve recommended that Good Ventures give (our recommendation won’t necessarily be followed, but we think it’s unlikely that differences will be large enough to affect our bottom-line recommendation to donors) and an additional $1.1 million from GiveWell’s discretionary funding.
2. “The cost-effectiveness estimates in this sheet, which we used to inform our recommended allocation differ from those in our published cost-effectiveness analysis because (1) we apply a number of adjustments to incorporate additional information (2) we apply different weightings to each program (which affects the weighted average of cost-effectiveness).” Source: Giving Season 2018 – Allocation (public), “Cost-effectiveness results” tab, row 17. Additional details at the link.
3. For sources on the estimates included in this table, see this spreadsheet, “Cost-effectiveness results” tab. The estimates presented here differ from the estimates presented in our recommendation to Good Ventures because they estimate cost-effectiveness on the margin, if Good Ventures were to follow our recommendations.
4. At the margin, we expect additional funding to Deworm the World Initiative to support its programs in Pakistan and Nigeria in 2021 as well as Deworm the World’s general reserves. We think these are broadly good uses of funds, but our cost-effectiveness model is not currently built to meaningfully model the cost-effectiveness of reserves. In the absence of more information, we would guess that additional funding to Deworm the World would be roughly in the range of our estimate for Deworm the World’s overall organizational cost-effectiveness (~15x as cost-effective as cash transfers), but we have not analyzed the details of additional spending at the current margin enough to be confident in that estimate. However, if Good Ventures generally follows our recommended allocation, we expect that Deworm the World will have sufficient funding to continue its most time-sensitive work and we can decide whether to fund other marginal opportunities at a later date.
5. We do not have a strong sense of the cost-effectiveness of additional funds to Sightsavers at the current margin. Our cost-effectiveness estimate of Sightsavers’ remaining funding gap is 15.4x as cost-effective as cash transfers, but this fails to capture a number of features particular to the program Sightsavers would fund on the margin. We would guess that the value of marginal funding to Sightsavers is roughly in the range of our overall estimate for Sightsavers of ~12x as cost-effective as cash transfers. One major reason for our uncertainty follows. As discussed here, Sightsavers’ prioritization of how to spend additional funds differed substantially from what would be implied by our cost-effectiveness analysis, but we think that this discrepancy may largely be due to factors that our model does not capture or ways our model may be inaccurate; therefore, it is difficult to rely on our model to assess the cost-effectiveness of specific remaining country funding gaps.

The post Our updated top charities for giving season 2018 appeared first on The GiveWell Blog.

### Our recommendation to Good Ventures

Mon, 11/26/2018 - 11:59

Today, we announce our list of top charities for the 2018 giving season.
We expect to direct over $100 million to the eight charities on our list as a result of our recommendation.

Good Ventures, a large foundation with which we work closely, is the largest single funder of our top charities. We make recommendations to Good Ventures each year for how much funding to provide to our top charities and how to allocate that funding among them. As this funding is significant, we think it’s important for other donors to take into account the recommendation we make to Good Ventures.

This blog post explains in detail how we decide what to recommend to Good Ventures and why; we want to be transparent about the research that leads us to our recommendations to Good Ventures. If you’re interested in a bottom-line recommendation for where to donate this year, please view our post with recommendations for non-Good Ventures donors.

Note that Good Ventures has not finalized its plans for the year and may give differently from what we’ve recommended. We think it’s unlikely that any differences would have major implications for our bottom-line recommendations for other donors.

Summary

In this post, we discuss:

• How we decided how much funding to recommend Good Ventures provide to our top charities.
• Our recommendation for how Good Ventures should allocate that funding among our top charities, and how we arrived at that allocation.
How we decided how much funding to recommend to Good Ventures

This year, GiveWell recommended that Good Ventures grant $64.0 million to our top charities and standout charities. The amount Good Ventures gives to our top charities is based in part on how the Open Philanthropy Project plans to allocate funding across time and across cause areas. (Read more about our relationships with Good Ventures and the Open Philanthropy Project here.) The Open Philanthropy Project currently plans to allocate around 10% of its total available capital to “straightforward charity,” which it currently allocates to global health and development causes based on GiveWell’s recommendations. This 10% allocation includes two “buckets”—a fixed percentage of total giving each year of 5% and another “flexible” bucket of 5%, which can be spent down quickly (over a few years) or slowly (over many years). GiveWell’s recommendation that Good Ventures grant$64.0 million this year puts the flexible bucket on track to be spent down within the next 14 years.

Since then, Good Ventures has made three additional grants totaling approximately $2.7 million to support the program’s scale-up. No Lean Season continued to test and scale its program with this and other support.

We decided to recommend No Lean Season as a top charity in late 2017. We based our recommendation on three randomized controlled trials (RCTs) of the program. (We generally consider RCTs to be one of the strongest types of evidence available; you can read more about why we rely on RCTs here.) Two of the RCTs (conducted in 2008 and 2014) indicated increased migration, income, and consumption for program participants. In the third RCT, which was conducted in 2013 and has not been published, the program is considered to have failed to induce migration, potentially due to political violence that year. We discuss the RCT evidence in greater depth in our intervention report on conditional subsidies for seasonal labor migration in northern Bangladesh.

Weighing the evidence, the cost of the program, and the potential impacts, we decided No Lean Season met our criteria to be named a top charity in November 2017. We summarized our reasoning in our blog post announcing our 2017 list of top charities, and noted the risks of this recommendation:

“Several randomized controlled trials (RCTs) of subsidies to increase migration provide moderately strong evidence that such an intervention increases household income and consumption during the lean season. An additional RCT is ongoing. We estimate that No Lean Season is roughly five times as cost-effective as cash transfers (see our cost-effectiveness analysis). Evidence Action has shared some details of its plans for monitoring No Lean Season in the future, but, as many of these plans have not been fully implemented, we have seen limited results. Therefore, there is some uncertainty as to whether No Lean Season will produce the data required to give us confidence that loans are appropriately targeted and reach their intended recipients in full; that recipients are not pressured into accepting loans; and that participants successfully migrate, find work, and are not exposed to major physical and other risks while migrating.”

As indicated above, No Lean Season conducted an additional RCT to evaluate its program during the 2017 lean season (approximately September to December), the preliminary results of which indicate the program failed to induce migration. With the evidence from the 2017 RCT, the case for the program’s impact and cost-effectiveness looks weaker.

Our updated perspective on No Lean Season

The 2017 RCT was a key factor in the decision to remove No Lean Season from our top charities list. Below, we discuss what the 2017 RCT found, how we interpreted the results, and what the future of the program looks like.

What did the 2017 RCT find?

The 2017 RCT was a collaboration between Evidence Action, Innovations for Poverty Action, and researchers from Yale University, the London School of Economics, and the University of California, Davis. In a preliminary analysis shared with GiveWell in September 2018, the researchers did not find evidence for a negative or positive impact on migration, and found no statistically significant impact on income and consumption.[1] However, the implementation of the program during the 2017[2] lean season and the evaluation of it differed from previous iterations. No Lean Season operated at a larger scale in the fall of 2017 than it had previously, offering loans to 158,155 households, compared with 16,268 households in 2016.
Relative to earlier versions of the program, the program in 2017 involved (a) higher-intensity delivery of the intervention (offering loans to most eligible individuals) and (b) broader eligibility requirements (the eligibility rate in 2017 was 77 percent, compared with 49 percent in 2016).[3]

At this point, neither GiveWell, No Lean Season, nor the researchers have a conclusive understanding of why the program failed to induce migration. However, No Lean Season and the researchers are exploring various hypotheses about what may explain the failure to induce migration, and they note that some suggestive evidence supports some hypotheses more than others. The researchers have posited several possibilities:

1. The way the program was targeted in 2017 was suboptimal. The Migration Organizers, who survey households for eligibility and offer and disburse loans (more detail here under “Migration Organizers”), may have focused their efforts on the individuals who were seen as most likely to migrate, rather than those who needed a loan to afford migration. The use of loan targets during implementation may have inadvertently incentivized this behavior.[4] If, for example, loan officers mostly made loans to people who would have migrated regardless of receiving a loan, this could have led to the lack of impact on migration found in the study.

2. The 2017 lean season was particularly bad for the program. The researchers note that severe flooding and associated implementation delays in some regions may have caused problems in 2017. The researchers plan to look more closely at the regions that experienced flooding, though they note that they don’t have the data necessary to make experimental comparisons.[5] In addition, a 2013 trial may have failed due to issues that were specific to the year of that trial, such as increased labor strikes.

3. There exists another (currently unknown) reason why this program won’t work at scale. Conditions in Bangladesh may have changed, negative spillovers (harmful impacts for individuals who did not receive loans) may cancel out gains, or pilot villages may have been strategically picked in earlier trials.[6]

The researchers are considering all of these possibilities. After considering various possible theories as well as some non-experimental data (including administrative data and data from a special-purpose survey of Migration Organizers who worked on the program in 2017), they feel that the ‘mistargeting’ theory is the most likely explanation and the explanation most consistent with the analysis.[7]

In scenario (1), No Lean Season may be able to identify and fix the problem. In scenario (2), GiveWell will need to update our estimate of the impact of the program to take into account the fact that periodic program failures due to external factors are more likely than we previously thought. In scenario (3), the program is unlikely to be effective in the future.

How did we interpret the RCT results?

We don’t know the extent to which each of the above explanations contributed to the study not finding an effect on migration. We used the results of the 2017 RCT to update our cost-effectiveness estimate for the program. Cost-effectiveness estimates form arguably the most important single input into our decisions about whether or not to recommend charities (more on how GiveWell uses cost-effectiveness analyses here). When we calculate a program’s cost-effectiveness, we take many different factors into account, such as the administrative and program costs and the expected impact. We also make a number of educated guesses, such as the likelihood that a program’s impact in a new country will be similar to that in a country where it has previously worked. Below, we describe the mechanism by which the 2017 RCT result was incorporated into our model and how it changed our conclusion.
Prior to this year, we formed our view of No Lean Season based on the three small-scale RCTs mentioned above (conducted in 2008, 2013, and 2014). Each of these RCTs looked at a slightly different version of the program. We believed that the ‘high-intensity’ arm of the 2014 RCT was the version most likely to resemble the program at scale. We thus used the migration rate measured in this arm of the RCT as our starting point for calculating the program’s impact.

The high-intensity arm of the 2014 RCT also had the highest measured migration rate of the three RCTs we assessed, and so we wanted to give some consideration to the less-positive results found in the other two assessments. We applied a small, downward adjustment to the rate of induced migration observed in the 2014 high-intensity arm in our cost-effectiveness model; this was an educated guess, based on the information we had. Our best guess was that the program would lead, in expectation, to 80% of the induced migration seen in the 2014 high-intensity arm.[8]

Now, the preliminary 2017 RCT results show no significant impact on migration rates or incomes. Because this trial was large and very recent, we updated our expectations of the impact of the program substantially, and in a negative direction. Our best guess now is that the program will lead, in expectation, to 40% of the induced migration seen in the 2014 high-intensity arm. Holding other inputs constant, this adjustment reduces our estimate of No Lean Season’s cost-effectiveness by a factor of two. This reduced cost-effectiveness, along with our updated qualitative picture of No Lean Season’s evidence of effectiveness, led to the decision to remove No Lean Season from our top charities list.

What does the future of No Lean Season look like?

Although it is not raising more funding at this time, No Lean Season has over two years’ worth of remaining funding.
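Returning to the cost-effectiveness update above: its arithmetic can be sketched in a few lines. This is an illustration only; the decomposition into a single “unadjusted multiple” is an assumption for clarity, not the structure of GiveWell’s actual model. The ~5x-cash figure comes from the 2017 recommendation quoted earlier in the post.

```python
# Illustrative sketch of the replication-adjustment update described above.
# The decomposition below is an assumption for illustration; GiveWell's
# actual cost-effectiveness model has many more inputs.

def cost_effectiveness(unadjusted_multiple, replication_adjustment):
    """Cost-effectiveness vs. cash transfers, holding all other inputs fixed."""
    return unadjusted_multiple * replication_adjustment

# Back out the unadjusted multiple implied by ~5x cash at an 80% adjustment
unadjusted = 5.0 / 0.80

old_estimate = cost_effectiveness(unadjusted, 0.80)  # 2017 view (80% adjustment)
new_estimate = cost_effectiveness(unadjusted, 0.40)  # after the 2017 RCT (40%)

# Halving the adjustment halves the estimate, all else held constant
print(f"{old_estimate:.1f}x -> {new_estimate:.1f}x cash")
```

Because the adjustment enters the estimate multiplicatively, moving it from 80% to 40% cuts the bottom-line cost-effectiveness in half, which is the "factor of two" reduction described above.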
We understand that the organization has made changes to the program design in 2018 based on emerging interpretations of the 2017 results, and has collected additional data to evaluate some of the hypotheses which may explain those results (including, for example, a survey of Migration Organizers who worked on the 2017 program). They plan to subject the 2018 implementation round to an additional ‘RCT-at-scale,’ with a particular focus on reassessing the program’s effects on migration, income and consumption, as well as potential effects at migration destinations. They will continue to explore what may have caused the issue in the 2017 program at scale, and to see whether they can find a solution. If they do that, we’ll want to reassess the evidence and the costs to determine whether No Lean Season meets our bar for top charity status. Evidence Action believes we should have the necessary information to reassess starting in mid-2019, based on the results of the RCT conducted during the 2018 lean season and other analyses they perform.

Conclusion

This is the second time since 2011 that we have removed a top charity from our list (prior to 2011, our top charities list was fairly different from today; we made a big-picture shift in our priorities that year that led us to our more recent lists). The previous removal occurred in 2013, when we took the Against Malaria Foundation (AMF) off of our list because we didn’t believe it could absorb additional funding effectively in the near term. AMF was reinstated as a top charity in 2014.

The decision to remove a top charity is never easy. But continuously evaluating GiveWell’s recommended charities is an important part of our work, and we take it seriously. It’s easy to talk about a commitment to evidence when the results are positive. It’s hard to maintain that commitment when the results are not.
We’re excited to work with a group like Evidence Action that is committed to rigorous program evaluation and open discussion of the results of those evaluations. Its openness about these results has increased our confidence in Evidence Action as an organization. We look forward to seeing the results from the 2018 RCT in 2019.

Notes

[1] “At this early stage in analysis, we find no evidence that the program had an impact (positive or negative) on migration, caloric intake, food expenditure, or income.” Evidence Action, unpublished summary document, Page 1.

[2] The 2017 RCT studied a period from the fall of 2017 through early 2018.

[3] “This study has two main goals: 1. “A replication of previous findings showing positive impact of incentivized migration on seasonal migration, caloric intake, food and non-food expenditure, income, and food security. Our aim is to estimate impact of a scaled version of the No Lean Season program: intensifying program implementation within branches and expanding the provision of loans to all eligible households.” Unpublished summary document, Page 1.

[4] “The second set of explanations focus on unintentional implementation changes caused by the change in eligibility, the vastly expanded scope of the program, or other factors. In the most recent round, it is possible that Migration Organizers (MOs) focused their efforts on those households who were most likely to migrate even without a loan to the exclusion of the target population households who need a loan to afford migration. Such behavior may have even been encouraged by the use of targets set by the NGO to manage implementation at such a large scale. We have implemented a qualitative survey to understand the incentives and actions of MOs last year, and are revising our instructions to avoid any possibility of this issue this year.” Evidence Action, unpublished summary document (with minor revision from Evidence Action), Page 11.
[5] “Most notably, the program was affected by severe flooding in many regions, and implementation was subsequently delayed as well. We are still evaluating whether these regions are the ones with the most diminished effects, although we lack the data in control areas to conduct an experimental comparison.” Evidence Action, unpublished summary document, Pages 11-12.

[6] “It is possible that what we observe this year may be the true effect of the No Lean Season program when implemented at scale. This may be because conditions in rural Bangladesh have changed since the initial years of success, spillovers at scale cancel out any gains observed in small-scale pilots, or pilot villages were selected because they were most likely to be receptive to the program.” Evidence Action, unpublished summary document, Page 11.

[7] Evidence Action, “Interpretation of 2017 Results” deck and narrative (unpublished).

[8] “This adjustment is used to account for external validity concerns not accounted for elsewhere in the CEA. “The default adjustment value of 80% is our best guess about the appropriate value, but it is not based on a formal calculation. “The program at scale takes place in the same region with the same implementers (RDRS and Evidence Action) as the source of our key evidence for the intervention (the 2014 RCT). The program at scale differs in some aspects of implementation, particularly the inclusiveness of the eligibility criteria and the proportion of eligible households offered an incentive. In the 2014 RCT, the subsidy was a cash transfer rather than an interest-free loan, however the 2008 RCT found a similar effect regardless of whether the subsidy was a cash transfer or an interest-free loan. “There is some evidence (from a 2013 RCT) suggesting that the program may be ineffective when the perceived risk of migrating increases for reasons such as labor strikes and violence. The researchers estimated that these are 1-in-10 year events.
“Additional discussion related to this parameter can be found at https://www.givewell.org/charities/no-lean-season#programdifferentfromRCTs.” 2018 GiveWell Cost-Effectiveness Model — Version 10, “Migration subsidies” tab, note on cell A19.

The post Update on No Lean Season’s top charity status appeared first on The GiveWell Blog.

### A grant to Evidence Action Beta to prototype, test, and scale promising programs

Tue, 10/09/2018 - 11:46

In July 2018, we recommended a $5.1 million grant to Evidence Action Beta to create a program dedicated to developing potential GiveWell top charities by prototyping, testing, and scaling programs which have the potential to be highly impactful and cost-effective.

This grant was made as part of GiveWell’s Incubation Grants program, which aims to support potential future GiveWell top charities and to help grow the pipeline of organizations we can consider for a recommendation. Funding for Incubation Grants comes from Good Ventures, a large foundation with which we work closely.

Summary

This post will discuss the following:

• Why Evidence Action Beta is promising. (More)
• Risks we see with this Incubation Grant. (More)
• Our plans for following Evidence Action Beta’s work going forward. (More)
Incubation Grant to Evidence Action Beta

We summarized our case for making this grant in a recently-published write-up:

A key part of GiveWell’s research process is trying to identify evidence-backed, cost-effective programs. GiveWell sometimes finds programs that seem potentially highly impactful based on academic research, but for which there is no obvious organizational partner that could scale up and test them. This grant will fund Evidence Action Beta to create … [an] incubator … focused on interventions that GiveWell and Evidence Action believe are promising but that lack existing organizations to scale them.

We have found that which program a charity works on is generally the most important factor in determining its overall cost-effectiveness. Through partnering with Evidence Action Beta to test programs that we think have the potential to be very cost-effective, … our hope is that programs tested and scaled up through this partnership may eventually become GiveWell top charities.

We believe this incubator has the potential to fill a major gap in the nonprofit world by providing a well-defined path for testing and potentially scaling … promising idea[s] for helping the global poor.

For full details on the grant activities and budget, see this page.

We believe that Evidence Action Beta is well-positioned to run this incubator because of its track record of scaling up cost-effective programs with high-quality monitoring. Evidence Action Beta’s parent organization, Evidence Action, leads two of our top charities (Deworm the World Initiative and No Lean Season) and one standout charity (Dispensers for Safe Water).

Modeling cost-effectiveness

In addition to the theoretical case for the grant outlined above, we also made explicit predictions and modeled the potential cost-effectiveness of this grant, so we could better consider it relative to other options. In this section, we provide more details on our process for estimating the grant’s cost-effectiveness.

The main path to impact we see with this grant is by creating new top charities which could use GiveWell-directed funds more cost-effectively than alternatives could.

This could occur:

1. if Evidence Action Beta incubates charities which are more cost-effective than our current top charities, or
2. if Evidence Action Beta incubates charities which are similarly cost-effective to our current top charities—in a scenario in which we have mostly filled our current top charities’ funding gaps. Right now, we believe our top charities can absorb significantly more funding than we expect to direct to them; this diminishes our view of the value of finding additional, similarly cost-effective opportunities. If our current top charities’ funding gaps were close to filled, we would place higher value on identifying additional room for more funding at a similarly cost-effective level.

This grant could also have an impact if it causes other, non-GiveWell funders to allocate resources to charities incubated by this grant. This incubator may create programs that GiveWell doesn’t direct funding to but others do. If these new opportunities are more cost-effective than what these funders would have otherwise supported, then this grant will have had a positive impact by causing funds to be spent more cost-effectively, even if GiveWell never recommends funding to the new programs directly.

We register forecasts for all Incubation Grants we make. We register these not because we are confident in them but because they help us clarify and communicate our expectation for the outcomes of the grant. Here, we forecast a 55% chance that Evidence Action Beta’s incubator leads to a new top charity by December 2023 that is 1-2x as cost-effective as the giving opportunity to which we would have otherwise directed those funds and a 30% chance that the grant does not lead to any new top charities by that time. (For more forecasts we made surrounding this grant, see here.)

We incorporated our forecasts as well as the potential impacts outlined above in our cost-effectiveness estimate for the grant: note that the potential upside coming from other funders is a particularly rough estimate which could change substantially with additional research.

Our best guess is that this grant is roughly 9x as cost-effective as cash transfers, but we have spent limited time on this estimate and are highly uncertain about it. For context, we estimate that our current top charities are between ~3x and ~12x as cost-effective as cash transfers.
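To make the shape of this kind of estimate concrete, here is a minimal sketch of a probability-weighted calculation like the one described above. The scenario probabilities echo the forecasts mentioned earlier, but the per-scenario values (and the residual scenario) are made-up placeholders, not GiveWell's actual model inputs:

```python
# Illustrative sketch only; hypothetical values, not GiveWell's actual model.
# Each scenario pairs a probability with an assumed value for the grant,
# expressed as a multiple of the counterfactual use of the funds.
scenarios = [
    (0.55, 1.5),  # new top charity, ~1-2x the counterfactual (midpoint assumed)
    (0.30, 0.0),  # no new top charity by December 2023
    (0.15, 1.0),  # residual: some other modest outcome (assumed)
]
expected_multiple = sum(p * value for p, value in scenarios)
print(f"Probability-weighted value: {expected_multiple:.3f}x the counterfactual")
```

The real estimate also folds in costs, timing, and potential influence on other funders, so this is only the outermost layer of the arithmetic.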

Risks to the success of the grant

We do see risks to the success of this grant:

• Few programs may be more cost-effective than our current top charities, or our top charities may remain underfunded for a long time. If Evidence Action Beta fails to identify more cost-effective giving opportunities than GiveWell’s 2017 top charities, or if it only identifies similarly cost-effective giving opportunities while our current top charities remain underfunded, barring any major upside effects, this grant will have failed to make an impact.
• We expect this partnership with Evidence Action Beta to require a fair amount of senior staff capacity. If other means of identifying cost-effective giving opportunities, such as our work to evaluate policy opportunities, end up seeming more promising, this capacity may have been misused.
Going forward

This grant initiates a partnership with Evidence Action Beta toward which we might contribute substantial additional GiveWell Incubation Grant funding in the future. We plan to spend a fair amount of staff time on this ongoing partnership and follow this work closely.

We look forward to sharing updates and the results.

The post A grant to Evidence Action Beta to prototype, test, and scale promising programs appeared first on The GiveWell Blog.

### Publishing more frequent updates to our cost-effectiveness model

Tue, 10/02/2018 - 12:46

We’ve recently made a number of adjustments to improve our research process. Not all of them are easily visible outside of the organization.

This post is to highlight one of them: Publishing more frequent updates to our cost-effectiveness model throughout the year.

Summary

This post will explain:

• What changed in how we make updates to our cost-effectiveness model. (More)
• Why we made this change. (More)
• How to engage with updates to our model. (More)
What changed?

Last week, we published the ninth and tenth versions of our cost-effectiveness model in 2018. The updates in these versions included accounting for reductions in malaria incidence among individuals who don’t themselves receive seasonal malaria chemoprevention (SMC), the preventive treatment one of our top charities distributes, but who may benefit from living near people who do (version 9); and updating the cost per deworming treatment delivered by another top charity, Sightsavers (version 10). These changes, and six others incorporated in the two latest versions, are described in our changelog.

Up until last year, we generally updated our cost-effectiveness model once or twice per year. However, as our model grew in complexity and we dedicated more research staff capacity to it, we decided that it would be beneficial to publish updates more regularly. We published our first in this series of more-frequent updates to our cost-effectiveness model in May 2017, as well as “release notes” (PDF) detailing the changes we made and the impact each had on our cost-effectiveness estimates.

We published five versions of our cost-effectiveness model in 2017. In 2018, we shifted from publishing PDF release notes to creating a “changelog”—a public page listing the changes we made to each version of the model, to be updated in tandem with the publication of each new version.

Internally, we moved toward having one staff member, Christian Smith, who is responsible for managing all changes to our cost-effectiveness model. He aims to publish a new version whenever there is a large, structurally complicated change to the model, or if there are several small and simple changes. Our internal process prioritizes being able to track how each change to the model moves the bottom line.

Changes we’ve published this year include updated inputs based on new research, such as the impact of insecticide resistance on the effectiveness of insecticide-treated nets; changes to inputs we include or exclude from the model altogether, such as removing short-term health benefits from deworming; and cosmetic changes to make the model easier to engage with, such as removing adjustments to account for the influence of GiveWell’s top charities on other actors from a particular tab.

Why we moved to this approach

Although it involves uncertainty, GiveWell’s cost-effectiveness model is a core piece of our research work and an important input into our decisions about which charities to research and recommend. However, we believe it is challenging to engage with our model—to give a sense of the scale, our current model has 16 tabs, some of which use over 100 rows—and to keep up with changes we’ve made to the model over time.

Our hope is that publishing more frequent and transparent updates brings us closer to our goal of intense transparency and presenting a clear, vettable case for our recommendations to the public. It makes the magnitude of any given change’s impact on our bottom line clearer, and makes the evolution of the model over time easier to track. We also expect that it reduces the likelihood of errors, as fewer elements are being changed at any given time.

How to engage with updates to our model

We update our changelog, viewable here, when we publish a new version.

Going forward, we also plan to publish an announcement to our “Newly published GiveWell materials” email list when we do this. You can sign up to receive alerts from this email address here.

The post Publishing more frequent updates to our cost-effectiveness model appeared first on The GiveWell Blog.

Mon, 09/10/2018 - 12:25

Our goal with hosting quarterly open threads is to give blog readers an opportunity to publicly raise comments or questions about GiveWell or related topics (in the comments section below). As always, you’re also welcome to email us at info@givewell.org or to request a call with GiveWell staff if you have feedback or questions you’d prefer to discuss privately. We’ll try to respond promptly to questions or comments.

You can view our June 2018 open thread here.

The post September 2018 open thread appeared first on The GiveWell Blog.

### Allocation of discretionary funds from Q2 2018

Tue, 08/28/2018 - 12:48

In April to June 2018, we received $1.2 million in funding for making grants at our discretion. In addition, GiveWell’s Board of Directors voted to allocate $2.9 million in unrestricted funds to making grants to recommended charities. In this post we discuss:

• The decision to allocate the $4.1 million to the Against Malaria Foundation (AMF) (70 percent) and the Schistosomiasis Control Initiative (SCI) (30 percent).
• Our recommendation that donors give to GiveWell for granting to top charities at our discretion so that we can direct the funding to the top charity or charities with the most pressing funding need. For donors who prefer to give directly to our top charities, we continue to recommend giving 70 percent of your donation to AMF and 30 percent to SCI to maximize your impact.
• Why we have allocated unrestricted funds to making grants to recommended charities.

Allocation of discretionary funds

The allocation of 70 percent of the funds to AMF and 30 percent to SCI follows the recommendation we have made, and continue to make, to donors. For more discussion of this allocation, see our blog post about allocating discretionary funds from the fourth quarter of 2017.

We ask each top charity to provide details of how it would use additional funding each year, as part of our process to update our “room for more funding” summary for each top charity. This year, we asked for this information by the end of July. We also ask each of our top charities to let us know if they encounter unexpected funding gaps at other times of year. We have not learned of new funding gaps in the last quarter.

What is our recommendation to donors?

We continue to recommend that donors give to GiveWell for granting to top charities at our discretion so that we can direct the funding to the top charity or charities with the most pressing funding need. For donors who prefer to give directly to our top charities, we continue to recommend giving 70 percent of your donation to AMF and 30 percent to SCI to maximize your impact. The reasons for this recommendation are the same as in our Q4 2017 post on allocating discretionary funding.
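For concreteness, the 70/30 split described above, applied to this quarter's $4.1 million total, works out as follows (a trivial sketch using the post's headline figures):

```python
# Apply the recommended 70/30 AMF/SCI allocation to the quarter's total.
# Figures are the headline amounts from this post; dollars.
total = 4_100_000
allocation = {"AMF": 0.70, "SCI": 0.30}
grants = {org: share * total for org, share in allocation.items()}
for org, amount in grants.items():
    print(f"{org}: ${amount:,.0f}")
# AMF receives $2,870,000 and SCI $1,230,000.
```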
We will complete a full analysis of our top charities’ funding gaps and cost-effectiveness by November and expect to update our recommendation to donors at that time.

Why we have allocated unrestricted funds to making grants to recommended charities

In June, GiveWell’s Board of Directors voted to allocate $2.9 million in unrestricted funds to making grants to recommended charities. We generally use unrestricted funds to support GiveWell’s operating costs. The decision was made to grant out some of the unrestricted funds we hold in accordance with two policies:

• Our “excess assets” policy specifies that once we surpass a certain level of unrestricted assets, we grant out the excess rather than continue to hold it ourselves. We reviewed our unrestricted asset holdings and projected revenue and expenses for 2018-2020 and concluded that we held $1.8 million more than was required to give us a stable, predictable financial situation (details of how this rule is applied are at the previous link). The Board voted to irrevocably restrict this amount to making grants to recommended charities. Note that we continue to need ongoing donor support for our operations; this decision incorporates our projections for future donations.
• In order to limit the risk of relying too heavily on any single source of revenue, we cap the amount of funding from any one source that we will use to support our operating costs at 20% of our projected annual expenses. In early 2018, we received a donation of $2.1 million in unrestricted funds. Our operating expense budget for 2018 is $4.9 million. Therefore, the Board voted to retain $1.0 million to support operating costs in 2018 and irrevocably restrict $1.1 million to making grants to recommended charities.

The post Allocation of discretionary funds from Q2 2018 appeared first on The GiveWell Blog.

### Why we don’t use subnational malaria mortality estimates in our cost-effectiveness models

Wed, 08/22/2018 - 13:15

Summary

We recently completed a small project to determine whether using subnational baseline malaria mortality estimates would make a difference to our estimates of the cost-effectiveness of two of our top charities, the Against Malaria Foundation and Malaria Consortium. We ultimately decided not to include these adjustments because they added complexity to our models and would require frequent updating, while only making a small difference (a 3-4% improvement) to our bottom line.
Though this post is on a fairly narrow topic, we believe this example illustrates the principles we use to make decisions about what to include in our cost-effectiveness model.

Background

Two of our top charities—the Against Malaria Foundation (AMF) and Malaria Consortium’s seasonal malaria chemoprevention program—implement programs to prevent malaria, a leading killer of people in low- and middle-income countries. One of the core reasons we recommend AMF and Malaria Consortium is their cost-effectiveness: how much impact they have (e.g., cases of malaria prevented, malaria deaths averted) with the funds they receive.

Our estimates of charities’ cost-effectiveness aren’t just helpful to us in determining which charities should be GiveWell top charities; we also rely on these estimates to guide our decisions about how to allocate funding between our top charities. Our cost-effectiveness estimates for AMF and Malaria Consortium use country-wide data on malaria mortality and malaria incidence in the places that both organizations work.[1]
However, neither organization serves a whole country—rather, they operate in sub-national regions—so the use of country-level estimates could cause us to either underestimate or overestimate their cost-effectiveness. If, for example, these programs are focused in the areas of the country with the highest malaria burden, using the average burden for the country would lead us to underestimate their cost-effectiveness. So, we completed a project to determine how much of an impact using subnational estimates would have, to consider whether we ought to incorporate this information into our cost-effectiveness analysis.

How we estimated the impact of subnational malaria incidence

AMF distributes insecticide-treated nets to prevent malaria; Malaria Consortium’s seasonal malaria chemoprevention (SMC) program provides preventive anti-malarial drugs. We used estimates of subnational malaria incidence from the Malaria Atlas Project (MAP) to see if regions covered by nets or eligible for SMC had higher or lower incidence than the average in the country in which they are located.[2] We focused on all areas covered by nets or eligible for SMC (rather than those covered by our top charities, specifically) for two reasons:

1. Our understanding is that when our top charities contribute resources to a country’s net distribution or SMC programs, the marginal region covered by these additional resources is not necessarily the same as the region to which these resources are assigned (because these resources are fungible with other resources within the national programs).[3]
2. Our aim is to estimate the cost-effectiveness of funds donated to these organizations in the future. The subnational region where AMF has worked in the past has not historically been a good indicator of the region where it will work in the future.

Results for net distributions in countries where AMF works

We looked at geographical variation in malaria incidence in countries where AMF works, weighting each region by the number of nets it currently receives.[4]
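This kind of net-weighted averaging can be sketched in a few lines. The regional figures below are made up for illustration; the actual inputs live in the linked spreadsheet:

```python
# Hedged sketch of net-weighted averaging; hypothetical regional numbers.
# For each region: (nets delivered, regional malaria incidence relative to
# the national average, where 1.0 = national average).
regions = [
    (500_000, 1.09),  # hypothetical high-burden region
    (300_000, 1.00),
    (200_000, 0.95),
]
total_nets = sum(nets for nets, _ in regions)
weighted_ratio = sum(nets * ratio for nets, ratio in regions) / total_nets
adjustment = weighted_ratio - 1.0  # e.g. +0.03 means nets are ~3% more cost-effective
print(f"Weighted incidence ratio: {weighted_ratio:.3f} (adjustment {adjustment:+.1%})")
```

The country-level adjustments reported below come from the same idea, with nets-per-region weights and MAP incidence estimates in place of these placeholders.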
The average net delivered in the countries in which AMF works is hung in an area with 0-9% higher malaria incidence than the average in that country, and the weighted average adjustment to AMF’s cost-effectiveness would be 3% (in other words, AMF becomes 3% more cost-effective if we incorporate subnational estimates).[5]

| Country | Adjustment |
|---|---|
| Zambia | +9% |
| Uganda | +4% |
| Ghana | +4% |
| Democratic Republic of the Congo | +1% |
| Togo | +1% |
| Malawi | +0% |

Results for SMC in countries where Malaria Consortium works

We looked at six countries comprising >95% of Malaria Consortium’s SMC spending and compared malaria incidence in districts eligible for SMC with the country-wide average.[6][7]
The average region eligible for SMC in countries where Malaria Consortium works has -2% to 17% higher malaria incidence than the average in that country. The weighted average adjustment to Malaria Consortium’s cost-effectiveness would be 4%.[8]

| Country | Adjustment | Commentary |
|---|---|---|
| Guinea | +17% | Conakry, the capital, is ineligible for SMC and has low incidence. |
| Nigeria | +12% | SMC appears to be targeted in the north, where malaria incidence is slightly higher. |
| Niger | +2% | The majority of the population is either covered or planned to be covered from 2019. |
| Burkina Faso | 0% | All districts are eligible. |
| Mali | 0% | All districts are eligible. |
| Chad | -2% | The four regions with very low malaria incidence (Borkou, Tibesti, Ennedi Est and Ouest) aren’t eligible for SMC, but are sparsely populated. |

What we concluded

We decided not to include these adjustments in our cost-effectiveness analysis because they increased complexity without substantially affecting the bottom line. When we decide whether to include adjustments in our model in general, we use a framework that first takes our best guess of the likely effect size and then rates each of the remaining questions on a three-point scale.
| | Score[9] | Commentary |
|---|---|---|
| Best guess of effect size | 3-4% | |
| Can it be objectively justified? | 3/3 | While we have not investigated the MAP data in detail, we would guess that after further investigation, we would conclude it provides a reasonable approximation of subnational malaria incidence.[10] |
| How easily can it be modelled? | 3/3 | The methodology is clear and simple. |
| Is it consistent with our other cost-effectiveness analyses? | 2/3 | We could include subnational adjustments for both of our top charities that implement malaria-prevention programs, but we believe it is unlikely there would be sufficient data to do the same for prevalence of worms or vitamin A deficiency (the focus of five of our other seven top charities). |

Even though these adjustments can be objectively justified and are fairly easy to model, the bottom-line difference they make to our cost-effectiveness estimates is insufficient to warrant the (moderate) increase in the complexity of our models. These adjustments would also introduce an inconsistency between our methodologies for top charities. As a result, we are not planning to incorporate subnational adjustments at this time.

When would we revisit this conclusion?
We will revisit using subnational malaria mortality estimates if AMF or Malaria Consortium start working in countries where it would make a large difference to the bottom line. We would include subnational adjustments if AMF contributed nets in any of these countries: Djibouti (+500% adjustment), South Africa (+259%), and Swaziland (+126%), where malaria is endemic in some parts of the country but not others. We would also consider subnational adjustments if AMF contributed nets in Namibia (+25%), Kenya (+23%), Madagascar (+14%), or Rwanda (+10%).[11]

We will investigate whether subnational adjustments would make a substantial difference if Malaria Consortium enters additional countries; at this time, we do not have details on which regions are eligible for SMC in countries in which Malaria Consortium is not currently operating.[12]

You can read the internal emails discussing our decision process here.

Notes

[1] In both cases, we rely on reports by Cochrane, an organization that produces systematic reviews and other synthesized research to inform decision-makers. For AMF, we use a decline in all-cause mortality, because the Cochrane review of anti-malarial bed net distributions reports the effect in terms of a reduction in all-cause mortality. For Malaria Consortium, we use a decline in malaria mortality (proxied by a decline in malaria incidence), as the Cochrane review of seasonal malaria chemoprevention reports the effect in terms of a reduction in malaria incidence, but not all-cause mortality. See our cost-effectiveness analysis for more details.

[2] We assume that the regional distribution of malaria incidence is a reasonable proxy for the regional distribution of malaria mortality.

[3] A limitation of this analysis is that it does not account for the possibility that AMF and Malaria Consortium are causing locations that are higher priority or lower priority than the average location already covered by nets or eligible for SMC to be covered on the margin. We do not explicitly include estimates of the marginal region funded in our cost-effectiveness analysis because we often have limited information about which regions would be covered with marginal additional funds.

[4] We assume that where nets have been delivered in the past is a good proxy for where new nets will be delivered in the future. The data and calculations are in this spreadsheet.

[5] See Cell J114. We did not include Papua New Guinea (where AMF funds some nets) in this analysis, as MAP only covers countries in Africa.

[6] “The suitability of an area for SMC is determined by the seasonal pattern of rainfall, malaria transmission and the burden of malaria. SMC is recommended for deployment in areas: (i) where more than 60% of the annual incidence of malaria occurs within 4 months (ii) where there are measures of disease burden consistent with a high burden of malaria in children (incidence ≥ 10 cases of malaria among every 100 children during the transmission season) (iii) where SP and AQ [the drugs used to treat children] retain their antimalarial efficacy.” WHO SMC field guide (2013), Pg 8.

[7] The data and calculations are in this spreadsheet.

[8] See Cell C126.

[9] We use these scores as a qualitative guide to help us think through what to include in our cost-effectiveness analysis. You can see the rubric we use to assign scores in this spreadsheet.

[10] You can read more about MAP’s methodology in this paper.

[11] The data and calculations are in this spreadsheet.

[12] We have not yet prioritized getting details on which regions are eligible for SMC in countries in which Malaria Consortium does not currently work, as this would likely impose a substantial time cost on Malaria Consortium.

### GiveWell’s money moved and web traffic in 2017

Fri, 06/29/2018 - 16:33

GiveWell is dedicated to finding outstanding giving opportunities and publishing the full details of our analysis. In addition to evaluations of other charities, we publish substantial evaluation of our own work. This post lays out highlights from our 2017 metrics report, which reviews what we know about how our research impacted donors. Please note:

• We report on “metrics years” that run from February through January; for example, our 2017 data cover February 1, 2017 through January 31, 2018.
• We differentiate between our traditional charity recommendations and the work of the Open Philanthropy Project, which became a separate organization in 2017 and whose work we exclude from this report.
• More context on the relationships between GiveWell, Good Ventures, and the Open Philanthropy Project can be found here.

Summary of influence: In 2017, GiveWell influenced charitable giving in several ways. The following table summarizes our understanding of this influence.

Headline money moved: In 2017, we tracked $117.5 million in money moved to our recommended charities. Our money moved only includes donations that we are confident were influenced by our recommendations.

Money moved by charity: Our nine top charities received the majority of our money moved. Our seven standout charities received a total of $1.8 million.

Money moved by size of donor: In 2017, the number of donors and amount donated increased across each donor size category, with the notable exception of donations from donors giving $1,000,000 or more. In 2017, 90% of our money moved (excluding Good Ventures) came from the 20% of our donors who gave $1,000 or more.

Donor retention: The total number of donors who gave to our recommended charities or to GiveWell unrestricted increased about 29% year-over-year to 23,049 in 2017. This included 14,653 donors who gave for the first time. Among all donors who gave in the previous year, about 42% gave again in 2017, up from about 35% who gave again in 2016. Our retention was stronger among donors who gave larger amounts or who first gave to our recommendations prior to 2015. Of larger donors (those who gave $10,000 or more in either of the last two years), about 73% who gave in 2016 gave again in 2017.

GiveWell’s expenses: GiveWell’s total operating expenses in 2017 were $4.6 million. Our expenses decreased from about $5.5 million in 2016 due to the Open Philanthropy Project becoming a separate organization in June 2017. We estimate that 67% of our total expenses ($3.1 million) supported our traditional top charity work and about 33% supported the Open Philanthropy Project. In 2016, we estimated that expenses for our traditional top charity work were about $2.0 million.

Donations supporting GiveWell’s operations: GiveWell raised $5.7 million in unrestricted funding (which we use to support our operations) in 2017, compared to $5.6 million in 2016. Our major institutional supporters and the six largest individual donors contributed about 49% of GiveWell’s operational funding in 2017.

Web traffic: The number of unique visitors to our website remained flat in 2017 compared to 2016 (when excluding visitors driven by AdWords, Google’s online advertising product).

For more detail, see our full metrics report (PDF).

The post GiveWell’s money moved and web traffic in 2017 appeared first on The GiveWell Blog.

### Announcing Zusha! as a standout charity

Thu, 06/21/2018 - 12:51

We’ve added the Georgetown University Initiative on Innovation, Development, and Evaluation (gui2de)’s Zusha! Road Safety Campaign (from here on, “Zusha!”) as a standout charity; see our full review here. Standout charities do not meet all of our criteria to be a GiveWell top charity, but we believe they stand out from the vast majority of organizations we have considered. See more information about our standout charities here.

Zusha! is a campaign intended to reduce road accidents. Zusha! supports distribution of stickers to public service vehicles encouraging passengers to speak up and urge drivers to drive more safely. We provided a GiveWell Incubation Grant to Zusha! in January 2017 and discussed it in a February 2017 blog post.

For more information, see our full review. Interested donors can give to Zusha! by clicking “Donate” on that page.

The post Announcing Zusha! as a standout charity appeared first on The GiveWell Blog.

Wed, 06/13/2018 - 13:49

Our goal with hosting quarterly open threads is to give blog readers an opportunity to publicly raise comments or questions about GiveWell or related topics (in the comments section below). As always, you’re also welcome to email us at info@givewell.org or to request a call with GiveWell staff if you have feedback or questions you’d prefer to discuss privately. We’ll try to respond promptly to questions or comments.

You can view our March 2018 open thread here.

The post June 2018 open thread appeared first on The GiveWell Blog.

### Allocation of discretionary funds from Q1 2018

Mon, 06/04/2018 - 14:46

In the first quarter of 2018, we received $2.96 million in funding for making grants at our discretion. In this post we discuss:

• The decision to allocate the $2.96 million to the Against Malaria Foundation (AMF) (70 percent) and the Schistosomiasis Control Initiative (SCI) (30 percent).
• Our recommendation that donors give to GiveWell for granting to top charities at our discretion so that we can direct the funding to the top charity or charities with the most pressing funding need. For donors who prefer to give directly to our top charities, we continue to recommend giving 70 percent of your donation to AMF and 30 percent to SCI to maximize your impact.

Allocation of discretionary funds

The allocation of 70 percent of the funds to AMF and 30 percent to SCI follows the recommendation we have made, and continue to make, to donors. For more discussion on this allocation, see our blog post about allocating discretionary funds from the previous quarter.

We also considered the following possibilities for this quarter:

Helen Keller International (HKI) for stopgap funding in one additional country

We discussed this possibility in our blog post about allocating discretionary funds from the previous quarter. After further discussing this possibility with HKI, our understanding is that (a) the amount of funding needed to fill this gap will likely be small relative to the amount of GiveWell-directed funding that HKI currently holds, and (b) we will have limited additional information in time for this decision round that we could use to compare this new use of funding to HKI’s other planned uses of funding. We will continue discussing this opportunity with HKI and may allocate funding to it in the future. Our current expectation is that we will ask HKI to make the tradeoff between allocating the GiveWell-directed funding it holds to this new opportunity and continuing to hold the funds. Holding the funds gives the current programs more runway (originally designed to fund three years) and gives HKI more flexibility to fund highly cost-effective, unanticipated opportunities in the future. We believe that HKI is currently in a better position to assess cost-effectiveness of the opportunities it has than we are, while we will seek to maximize cost-effectiveness in the longer run by assessing HKI’s track record of cost-effectiveness and comparing that to the cost-effectiveness of other top charities.

We remain open to the possibility that HKI will share information with us that will lead us to conclude that this new opportunity is a better use of funds than our current recommendation of 70 percent to AMF and 30 percent to SCI. In that case, we would allocate funds from the next quarter to fill this funding gap (and could accelerate the timeline on that decision if it were helpful to HKI).

Evidence Action’s Deworm the World Initiative for funding gaps in India and Nigeria

We spoke with Deworm the World about two new funding gaps it has due to unexpected costs in its existing programs in India and Nigeria.

In India, the cost overruns total $166,000. Deworm the World has the option of drawing down a reserve of $5.5 million (from funds donated on GiveWell’s recommendation). The reserve was intended to backstop funds that were expected but not fully confirmed from another funder. Given the small size of the gap relative to the available reserves, our preference is for Deworm the World to use that funding and for us to consider recommending further reserves as part of our end-of-year review of our top charities’ room for more funding.

In Nigeria, there is a funding gap of $1.7 million in the states in which Deworm the World is currently operating. Previous budgets assumed annual treatment for all children, and Deworm the World has since become aware of areas where worm prevalence is high enough that twice-yearly treatment is recommended. Our best guess is that AMF and SCI are more cost-effective than Deworm the World’s Nigeria program (see discussion in this post). It is possible that because additional funding would go to support additional treatments in states where programs already operate, the cost to deliver these marginal treatments would be lower. We don’t currently have enough data to analyze whether that would significantly change the cost-effectiveness in this case.

Deworm the World also continues to have a funding gap for expansion to other states in Nigeria. We wrote about this opportunity in our previous post on allocating discretionary funding.

Malaria Consortium for seasonal malaria chemoprevention (SMC)

We continue to see a case for directing additional funding to Malaria Consortium for SMC, as we did last quarter. Our views on this program have not changed. For further discussion, see our previous post on allocating discretionary funding.

What is our recommendation to donors?

We continue to recommend that donors give to GiveWell for granting to top charities at our discretion so that we can direct the funding to the top charity or charities with the most pressing funding need. For donors who prefer to give directly to our top charities, we continue to recommend giving 70 percent of your donation to AMF and 30 percent to SCI to maximize your impact. The reasons for this recommendation are the same as in our previous post on allocating discretionary funding.

The post Allocation of discretionary funds from Q1 2018 appeared first on The GiveWell Blog.
### New research on cash transfers

Fri, 05/04/2018 - 12:21

Summary

• There has been a good deal of discussion recently about new research on the effects of cash transfers, beginning with a post by economist Berk Özler on the World Bank’s Development Impact blog. We have not yet fully reviewed the new research, but wanted to provide a preliminary update for our followers about our plans for reviewing this research and how it might affect our views of cash transfers, a program implemented by one of our top charities, GiveDirectly.
• In brief, the new research suggests that cash transfers may be less effective than we previously believed in two ways. First, cash transfers may have substantial negative effects on non-recipients who live near recipients (“negative spillovers”). Second, the benefits of cash transfers may fade quickly.
• We plan to reassess the cash transfer evidence base and provide our updated conclusions in the next several months (by November 2018 at the latest). One reason that we do not plan to provide a comprehensive update sooner is that we expect upcoming midline results from GiveDirectly’s “general equilibrium” study, a large and high-quality study explicitly designed to estimate spillover effects, will play a major role in our conclusions. Results from this study are expected to be released in the next few months.
• Our best guess is that we will reduce our estimate of the cost-effectiveness of cash transfers to some extent, but will likely continue to recommend GiveDirectly. However, major updates to our current views, either in the negative or positive direction, seem possible.

More detail below.

Background

GiveDirectly, one of our top charities, provides unconditional cash transfers to very poor households in Kenya, Uganda, and Rwanda.
Several new studies have recently been released that assess the impact of unconditional cash transfers, including a three-year follow-up study (Haushofer and Shapiro 2018, henceforth referred to as “HS 2018”) on the impact of transfers that were provided by GiveDirectly. Berk Özler, a senior economist at the World Bank, summarized some of this research in two posts on the World Bank Development Impact blog (here and here), noting that the results imply that cash transfers may be less effective than proponents previously believed. In particular, Özler raises the concerns that cash may:

1. Have negative “spillovers”: i.e., negative effects on households that did not receive transfers but that live near recipient households.
2. Have quickly-fading benefits: i.e., the standard of living for recipient households may converge to be similar to non-recipient households within a few years of receiving transfers.

Below, we discuss the topics of spillover effects and the duration of benefits of cash transfers in more detail, as well as some other considerations relevant to the effectiveness of cash transfers. In brief:

• If substantial spillover effects exist, they have the potential to significantly affect our cost-effectiveness estimates for cash transfers. We are uncertain what we will conclude about spillover effects of cash transfers after deeply reviewing all relevant new literature, but we expect that upcoming midline results from GiveDirectly’s “general equilibrium” study will play a major role in our conclusions. Our best guess is that the general equilibrium study and other literature will not imply that GiveDirectly’s program has large negative spillovers, but we remain open to the possibility that we should substantially negatively update our views after reviewing the relevant literature.
• Several new studies seem to find that cash may have little effect on recipients’ standard of living beyond the first year after receiving a transfer. Our best guess is that after reviewing the relevant research in more detail we will decrease our estimate of the cost-effectiveness of cash transfers to some extent. In the worst (unlikely) case, this factor could lead us to believe that cash is about 1.5-2x less cost-effective than we currently believe.

Spillovers

Negative spillovers of cash transfers have the potential to lead us to majorly revise our estimates of the effects of cash; we currently assume that cash does not have major negative or positive spillover effects. At this point, we are uncertain what we will conclude about the likely spillover effects of cash after reviewing all relevant new literature, including GiveDirectly’s forthcoming “general equilibrium” study. Our best guess is that GiveDirectly’s current program does not have large spillover effects, but it seems plausible that we could ultimately conclude that cash either has meaningful negative spillovers or positive spillovers.

We will not rehash the methodological details and estimated effect sizes of HS 2018 in this post. For a basic understanding of the findings and methodological issues, we recommend reading Özler’s posts, the Center for Global Development’s Justin Sandefur’s post, GiveDirectly’s latest post, or Haushofer and Shapiro’s response to Özler’s posts. The basic conclusions that we draw from this research are:

• Under one interpretation of its findings, HS 2018 measures negative spillover effects that could outweigh the positive effects of cash transfers.1From Sandefur’s post: “Households who had been randomly selected to receive cash were much better off than their neighbors who didn’t. They had $400 more assets—roughly the size of the original transfer, with all figures from here on out in PPP terms—and about $47 higher consumption each month. It looked like an amazing success.
“But when Haushofer and Shapiro compared the whole sample in these villages—half of whom had gotten cash, half of whom hadn’t—they looked no different than a random sample of households in control villages. In fact, their consumption was about $6 per month less ($211 versus $217 a month).

“There are basically two ways to resolve this paradox:

“1) Good data, bad news. Cash left recipients only modestly better off after three years (lifting them from $217 to $235 in monthly consumption), and instead hurt their neighbors (dragging them down from $217 to $188 in monthly consumption). Taking the data at face value, this is the most straightforward interpretation of the results.

“2) Bad data, good news. Alternatively, the $47 gap in consumption between recipients and their neighbors is driven by gains to the former, not losses to the latter. The estimates of negative side-effects on neighbors are driven by comparisons with control villages where—if you get into the weeds of the paper—it appears sampling was done differently than in treatment villages. (In short, the $217 isn’t reliable.)”
• We do not yet have a strong view on how likely it is that the negative interpretation of HS 2018’s findings is correct. This would require having a deeper understanding of what we should believe about a number of key methodological issues in HS 2018 (see following footnote for two examples).2One methodological issue is how to deal with attrition, as discussed in Haushofer and Shapiro 2018, Pg. 9: “However, there is a statistically significant difference in attrition levels for households in control villages relative to households in treatment villages from endline 1 to endline 2: 6 percentage points more pure control households were not found at endline 2 relative to either group of households in treatment villages. In the analysis of across-village treatment effects and spillover effects we use Lee bounds to deal with this differential attrition; details are given below.”

Another potential issue as described by Özler’s post: “The short-term impacts in Haushofer and Shapiro (2016) were calculated using within-village comparisons, which was a big problem for an intervention with possibility of spillovers, on which the authors had to do a lot of work earlier (see section IV.B in that paper) and in the recent paper. They got around this problem by arguing that spillover effects were small and insignificant. Of course, then came the working paper on negative spillovers on psychological wellbeing mentioned above and now, the spillover effects look sustained and large and unfortunately negative on multiple domains three years post transfers.

“The authors estimated program impacts by comparing T [treatment group] to S [spillover group], instead of the standard comparison of T to C [control group], in the 2016 paper because of a study design complication: researchers randomly selected control villages, but did not collect baseline data in these villages. The lack of baseline data in the control group is not just a harmless omission, as in ‘we lose some power, no big deal.’ Because there were eligibility criteria for receiving cash, but households were sampled a year later, no one can say for certain if the households sampled in the pure control villages at follow-up are representative of the would-be eligible households at baseline.

“So, quite distressingly, we now have two choices to interpret the most recent findings:

“1) We either believe the integrity of the counterfactual group in the pure control villages, in which case the negative spillover effects are real, implying that total causal effects comparing treated and control villages are zero at best. Furthermore, there are no ITT [intention to treat] effects on longer-term welfare of the beneficiaries themselves – other than an increase in the level of assets owned. In this scenario, it is harder to retain confidence in the earlier published impact findings that were based on within-village comparisons – although it is possible to believe that the negative spillovers are a longer-term phenomenon that truly did not exist at the nine-month follow-up.

“2) Or, we find the pure control sample suspect, in which case we have an individually randomized intervention and need to assume away spillover effects to believe the ITT estimates.” HS 2018 reports that the potential bias introduced by methodological issues may be able to explain much of the estimated spillover effects.3Haushofer and Shapiro 2018, Pgs. 24-25: “These results appear to differ from those found in the initial endline, where we found positive spillover effects on female empowerment, but no spillover effects on other dimensions. However, the present estimates are potentially affected by differential attrition from endline 1 to endline 2: as described above, the pure control group showed significantly greater attrition than both treatment and spillover households between these endlines. To assess the potential impact of attrition, we bound the spillover effects using Lee bounds (Table 8). This analysis suggests that differential attrition may account for several of these spillover effects. Specifically, for health, education, psychological well-being, and female empowerment, the Lee bounds confidence intervals include zero for all sample definitions. For asset holdings, revenue, and food security, they include zero in two of the three sample definitions. Only for expenditure do the Lee bounds confidence intervals exclude zero across all sample definitions.
Thus, we find some evidence for spillover effects when using Lee bounds, although most of them are not significantly different from zero after bounding for differential attrition across treatment groups.”
• The mechanism for what may have caused large negative spillovers (if they exist) in HS 2018 is uncertain, though the authors provide some speculation (see footnote).4Haushofer and Shapiro 2018, Pg. 3: “We do not have conclusive evidence of the mechanism behind spillovers, but speculate it could be due to the sale of productive assets by spillover households to treatment households, which in turn reduces consumption among the spillover group. Though not always statistically different from zero, we do see suggestive evidence of negative spillover effects on the value of productive assets such as livestock, bicycles, motorbikes and appliances. We note that GiveDirectly’s current operating model is to provide transfers to all eligible recipients in each village (within village randomization was conducted only for the purpose of research), which may mitigate any negative spillover effects.” We would increase our credence in the existence of negative spillover effects if there were strong evidence for a particular mechanism.

One further factor that complicates application of HS 2018’s estimate of spillover effects is that GiveDirectly’s current program is substantially different from the version of its program that was studied in HS 2018. GiveDirectly now provides $1,000 transfers to almost all households in its target villages in Uganda and Kenya; the intervention studied by HS 2018 predominantly involved providing ~$287 transfers to about half of eligible (i.e., very poor) households within treatment villages, and HS 2018 measured spillover effects on eligible households that did not receive transfers.5See this section of our cash transfers intervention report. GiveDirectly asked us to note that it now defaults to village-level (instead of within-village) randomization for the studies it participates in, barring exceptional circumstances. Since GiveDirectly’s current program provides transfers to almost all households in its target villages, spillovers of its program may largely operate across villages rather than within villages. These changes to the program and the spillover population of interest may lead to substantial differences in estimated spillover effects.

Fortunately, GiveDirectly is running a large (~650 villages) randomized controlled trial of an intervention similar to its current program that is explicitly designed to estimate the spillover (or “general equilibrium”) effects of GiveDirectly’s program.6From the registration for “General Equilibrium Effects of Cash Transfers in Kenya”: “The study will take place across 653 villages in Western Kenya. Villages are randomly allocated to treatment or control status. In treatment villages, GiveDirectly enrolls and distributes cash transfers to households that meet its eligibility criteria. In order to generate additional spatial variation in treatment density, groups of villages are assigned to high or low saturation. In high saturation zones, 2/3 of villages are targeted for treatment, while in low saturation zones, 1/3 of villages are targeted for treatment. The randomized assignment to treatment status and the spatial variation in treatment intensity will be used to identify direct and spillover effects of cash transfers.”

Note that this study will evaluate a variant of GiveDirectly’s program that is different from its current program in that it will not provide transfers to almost all households in target villages. The study will estimate the spillover effects of cash transfers on ineligible (i.e., slightly wealthier) households in treatment villages, among other populations. Since GiveDirectly’s standard program now provides transfers to almost all households in its target villages, estimates of effects on ineligible households may need to be extrapolated to other populations of interest (e.g., households in non-target villages) to be most relevant to GiveDirectly’s current program. Midline results from this study are expected to be released in the next few months.

Since we expect GiveDirectly’s general equilibrium study to play a large role in our view of spillovers, we expect that we will not publish an overview of the cash spillovers literature until we’ve had a chance to review its results. However, we see the potential for negative spillover effects of cash as very concerning and it is a high-priority research question for us; we plan to publish a detailed update that incorporates HS 2018, previous evidence for negative spillovers (such as studies on inflation and happiness), the general equilibrium study, and any other relevant literature in time for our November 2018 top charity recommendations at the latest.

Duration of benefits

Several new studies seem to find that cash may have little effect on recipients’ standard of living beyond the first year after receiving a transfer. Our best guess is that after reviewing the relevant research in more detail we will decrease our estimate of the cost-effectiveness of cash to some extent. In the worst (unlikely) case, this could lead us to believe that cash is about 1.5-2x less cost-effective than we currently believe.

In our current cost-effectiveness analysis for cash transfers, we mainly consider two types of benefits that households experience due to receiving a transfer:

1. Increases in short-term consumption (i.e., immediately after receiving the transfer, very poor households are able to spend money on goods such as food).
2. Increases in medium-term consumption (i.e., recipients may invest some of their cash transfer in ways that lead them to have a higher standard of living in the 1-20 years after first receiving the transfer).

Potential spillover effects aside, our cost-effectiveness estimate for cash has a fairly stable lower bound because we place substantial value on increasing short-term consumption for very poor people, and providing cash allows for more short-term consumption almost by definition. In particular:

• Our current estimates are consistent with assuming little medium-term benefit of cash transfers. We estimate that about 60% of a typical transfer is spent on short-term goods such as eating more food, and count this as about 40-60% of the benefits of the program.7For our estimate of the proportion of the benefits of cash transfers that come from short-term consumption increases, see row 30 of the “Cash” sheet in our 2018 cost-effectiveness model.

For our estimate of the proportion of transfers that is spent on short-term consumption, we rely on results from GiveDirectly’s randomized controlled trial, which shows investments of $505.94 (USD PPP) (within villages, or $601.88 across villages) on a transfer of $1,525 USD PPP, or about one-third of the total. See Pg. 117 here and Pg. 1 here for total transfer size. If we were to instead assume that 100% of the transfer was spent on short-term consumption (i.e., none of it was invested), our estimate of the cost-effectiveness of cash would become about 10-30% worse.8See a version of our cost-effectiveness analysis in which we made this assumption here. The calculations in row 35 of the “Cash” tab show how assuming that 0% of the transfer is invested would affect staff members’ bottom line estimates. We think using the 100% short-term consumption estimate may be a reasonable and robust way to model the lower bound of effects of cash given various measurement challenges (discussed below).

• Nevertheless, our previous estimates of the medium-term benefits of cash transfers may have been too optimistic. Based partially on a speculative model of the investment returns of iron roofs (a commonly-purchased asset for GiveDirectly recipients), most staff assumed that about 40% of a transfer will be invested, and that those investments will lead to roughly 10% greater consumption for 10-15 years.9See rows 5, 8, and 14, “Cash” sheet, 2018 Cost-Effectiveness Analysis – Version 1.
jQuery("#footnote_plugin_tooltip_9").tooltip({ tip: "#footnote_plugin_tooltip_text_9", tipClass: "footnote_tooltip", effect: "fade", fadeOutSpeed: 100, predelay: 400, position: "top right", relative: true, offset: [10, 10] }); Some new research discussed in Özler’s first post suggests that there may be little return on investment from cash transfers within 2-4 years after the transfer, though the new evidence is somewhat mixed (see footnote).10See this section of Özler’s post: “This new paper and Blattman’s (forthcoming) work mentioned above join a growing list of papers finding short-term impacts of unconditional cash transfers that fade away over time: Hicks et al. (2017), Brudevold et al. (2017), Baird et al. (2018, supplemental online materials). In fact, the final slide in Hicks et al. states: ‘Cash effects dissipate quickly, similar to Brudevold et al. (2017), but different to Blattman et al. (2014).’ If only they were presenting a couple of months later…” See also two other recent papers that find positive effects of cash transfers beyond the first year: Handa et al. 2018 and Parker and Vogl 2018. The latter finds intergenerational effects of a conditional cash transfer program in Mexico, so may be less relevant to GiveDirectly’s program. 
jQuery("#footnote_plugin_tooltip_10").tooltip({ tip: "#footnote_plugin_tooltip_text_10", tipClass: "footnote_tooltip", effect: "fade", fadeOutSpeed: 100, predelay: 400, position: "top right", relative: true, offset: [10, 10] }); Additionally, under the negative interpretation of HS 2018’s results, it finds that cash transfers did not have positive consumption effects for recipients three years post-transfer, though it finds a ~40% increase in assets for treatment households (even in the negative interpretation).11Haushofer and Shapiro 2018, Abstract: “Comparing recipient households to non-recipients in distant villages, we find that transfer recipients have 40% more assets (USD 422 PPP) than control households three years after the transfer, equivalent to 60% of the initial transfer (USD 709 PPP).” Haushofer and Shapiro 2018, Pg. 28: “Since we have outcome data measured in the short run (~9 months after the beginning of the transfers) and in the long-run (˜3 years after the beginning of transfers), we test equality between short and long-run effects…Results are reported in Table 9. Focusing on the within-village treatment effects, we find no evidence for differential effects at endline 2 compared to endline 1, with the exception of assets, which show a significantly larger treatment effect at endline 2 than endline 1. However, this effect is largely driven by spillovers; for across-village treatment effects, we cannot reject equality of the endline 1 and endline 2 outcomes. This is true for all variables in the across-village treatment effects except for food security and psychological well-being, which show a smaller treatment effect at endline 2 compared to endline 1. 
Thus, we find some evidence for decreasing treatment effects over time, but for most outcome variables, the endline 1 and 2 outcomes are similar.” jQuery("#footnote_plugin_tooltip_11").tooltip({ tip: "#footnote_plugin_tooltip_text_11", tipClass: "footnote_tooltip", effect: "fade", fadeOutSpeed: 100, predelay: 400, position: "top right", relative: true, offset: [10, 10] }); Note that any benefits from owning iron roofs were not factored in to the consumption estimates in HS 2018.12Haushofer and Shapiro 2018, pgs. 32-33: “Total consumption…Omitted: Durables expenditure, house expenditure (omission not pre-specified for endline 1 analysis)” jQuery("#footnote_plugin_tooltip_12").tooltip({ tip: "#footnote_plugin_tooltip_text_12", tipClass: "footnote_tooltip", effect: "fade", fadeOutSpeed: 100, predelay: 400, position: "top right", relative: true, offset: [10, 10] }); If we imagine the potential worst case scenario implied by these results and assume that the ~40% of a cash transfer that is invested has zero benefits, our cost-effectiveness estimate would get about 2x worse. Our best guess is that we’ll decrease our estimate for the medium-term effects of cash to some extent, though we’re unsure by how much. Challenging questions we’ll need to consider in order to arrive at a final estimate include: • If we continue to assume that about 40% of transfers are invested, and that those investments do not lead to any future gains in consumption, then we are effectively assuming that money spent on investments is wasted. Is this an accurate reflection of reality, i.e. are recipients failing to invest transfers in a beneficial manner? • Is our cost-effectiveness model using a reasonable framework for estimating recipients’ standard of living over time? Currently, we only estimate cash’s effects on consumption. However, assets such as iron roofs may provide an increase in standard of living for multiple years even if they do not raise consumption. 
How, if at all, should we factor this into our estimates? • GiveDirectly’s cash transfer program differs in many ways from other programs that have been the subject of impact evaluations. For example, GiveDirectly provides large, one-time transfers whereas many government cash transfers provide smaller ongoing support to poor families. How should we apply new literature on other kinds of cash programs to our estimates of the effects of GiveDirectly? Next steps We plan to assess all literature relevant to the impact of cash transfers and provide an update on our view on the nature of spillover effects, duration of benefits, and other relevant issues for our understanding of cash transfers and their cost-effectiveness in time for our November 2018 top charity recommendations at the latest. Notes [ + ] 1. ↑ From Sandefur’s post: “Households who had been randomly selected to receive cash were much better off than their neighbors who didn’t. They had$400 more assets—roughly the size of the original transfer, with all figures from here on out in PPP terms—and about $47 higher consumption each month. It looked like an amazing success. “But when Haushofer and Shapiro compared the whole sample in these villages—half of whom had gotten cash, half of whom hadn’t—they looked no different than a random sample of households in control villages. In fact, their consumption was about$6 per month less ($211 versus$217 a month).

“There are basically two ways to resolve this paradox:

“1) Good data, bad news. Cash left recipients only modestly better off after three years (lifting them from $217 to $235 in monthly consumption), and instead hurt their neighbors (dragging them down from $217 to $188 in monthly consumption). Taking the data at face value, this is the most straightforward interpretation of the results.

“2) Bad data, good news. Alternatively, the $47 gap in consumption between recipients and their neighbors is driven by gains to the former not losses to the latter. The estimates of negative side-effects on neighbors are driven by comparisons with control villages where—if you get into the weeds of the paper—it appears sampling was done differently than in treatment villages. (In short, the $217 isn’t reliable.)”

2. ↑ One methodological issue is how to deal with attrition, as discussed in Haushofer and Shapiro 2018, Pg. 9: “However, there is a statistically significant difference in attrition levels for households in control villages relative to households in treatment villages from endline 1 to endline 2: 6 percentage points more pure control households were not found at endline 2 relative to either group of households in treatment villages. In the analysis of across-village treatment effects and spillover effects we use Lee bounds to deal with this differential attrition; details are given below.”

Another potential issue as described by Özler’s post: “The short-term impacts in Haushofer and Shapiro (2016) were calculated using within-village comparisons, which was a big problem for an intervention with possibility of spillovers, on which the authors had to do a lot of work earlier (see section IV.B in that paper) and in the recent paper. They got around this problem by arguing that spillover effects were small and insignificant. Of course, then came the working paper on negative spillovers on psychological wellbeing mentioned above and now, the spillover effects look sustained and large and unfortunately negative on multiple domains three years post transfers.

“The authors estimated program impacts by comparing T [treatment group] to S [spillover group], instead of the standard comparison of T to C [control group], in the 2016 paper because of a study design complication: researchers randomly selected control villages, but did not collect baseline data in these villages. The lack of baseline data in the control group is not just a harmless omission, as in ‘we lose some power, no big deal.’ Because there were eligibility criteria for receiving cash, but households were sampled a year later, no one can say for certain if the households sampled in the pure control villages at follow-up are representative of the would-be eligible households at baseline.

“So, quite distressingly, we now have two choices to interpret the most recent findings:

“1) We either believe the integrity of the counterfactual group in the pure control villages, in which case the negative spillover effects are real, implying that total causal effects comparing treated and control villages are zero at best. Furthermore, there are no ITT [intention to treat] effects on longer-term welfare of the beneficiaries themselves – other than an increase in the level of assets owned. In this scenario, it is harder to retain confidence in the earlier published impact findings that were based on within-village comparisons – although it is possible to believe that the negative spillovers are a longer-term phenomenon that truly did not exist at the nine-month follow-up.

“2) Or, we find the pure control sample suspect, in which case we have an individually randomized intervention and need to assume away spillover effects to believe the ITT estimates.”

3. ↑ Haushofer and Shapiro 2018, Pgs. 24-25: “These results appear to differ from those found in the initial endline, where we found positive spillover effects on female empowerment, but no spillover effects on other dimensions. However, the present estimates are potentially affected by differential attrition from endline 1 to endline 2: as described above, the pure control group showed significantly greater attrition than both treatment and spillover households between these endlines. To assess the potential impact of attrition, we bound the spillover effects using Lee bounds (Table 8). This analysis suggests that differential attrition may account for several of these spillover effects. Specifically, for health, education, psychological well-being, and female empowerment, the Lee bounds confidence intervals include zero for all sample definitions. For asset holdings, revenue, and food security, they include zero in two of the three sample definitions. Only for expenditure do the Lee bounds confidence intervals exclude zero across all sample definitions. Thus, we find some evidence for spillover effects when using Lee bounds, although most of them are not significantly different from zero after bounding for differential attrition across treatment groups.”

4. ↑ Haushofer and Shapiro 2018, Pg. 3: “We do not have conclusive evidence of the mechanism behind spillovers, but speculate it could be due to the sale of productive assets by spillover households to treatment households, which in turn reduces consumption among the spillover group. Though not always statistically different from zero, we do see suggestive evidence of negative spillover effects on the value of productive assets such as livestock, bicycles, motorbikes and appliances. We note that GiveDirectly’s current operating model is to provide transfers to all eligible recipients in each village (within village randomization was conducted only for the purpose of research), which may mitigate any negative spillover effects.”

5. ↑ See this section of our cash transfers intervention report.

6. ↑ From the registration for “General Equilibrium Effects of Cash Transfers in Kenya”: “The study will take place across 653 villages in Western Kenya. Villages are randomly allocated to treatment or control status. In treatment villages, GiveDirectly enrolls and distributes cash transfers to households that meet its eligibility criteria. In order to generate additional spatial variation in treatment density, groups of villages are assigned to high or low saturation. In high saturation zones, 2/3 of villages are targeted for treatment, while in low saturation zones, 1/3 of villages are targeted for treatment. The randomized assignment to treatment status and the spatial variation in treatment intensity will be used to identify direct and spillover effects of cash transfers.”

Note that this study will evaluate a variant of GiveDirectly’s program that is different from its current program in that it will not provide transfers to almost all households in target villages. The study will estimate the spillover effects of cash transfers on ineligible (i.e., slightly wealthier) households in treatment villages, among other populations. Since GiveDirectly’s standard program now provides transfers to almost all households in its target villages, estimates of effects on ineligible households may need to be extrapolated to other populations of interest (e.g., households in non-target villages) to be most relevant to GiveDirectly’s current program.

7. ↑ For our estimate of the proportion of the benefits of cash transfers that come from short-term consumption increases, see row 30 of the “Cash” sheet in our 2018 cost-effectiveness model.

For our estimate of the proportion of transfers that is spent on short-term consumption, we rely on results from GiveDirectly’s randomized controlled trial, which shows investments of $505.94 USD PPP (within villages, or $601.88 across villages) on a transfer of $1,525 USD PPP, or about one-third of the total. See Pg. 117 here and Pg. 1 here for total transfer size.

8. ↑ See a version of our cost-effectiveness analysis in which we made this assumption here. The calculations in row 35 of the “Cash” tab show how assuming that 0% of the transfer is invested would affect staff members’ bottom line estimates.

9. ↑ See rows 5, 8, and 14, “Cash” sheet, 2018 Cost-Effectiveness Analysis – Version 1.

10. ↑ See this section of Özler’s post: “This new paper and Blattman’s (forthcoming) work mentioned above join a growing list of papers finding short-term impacts of unconditional cash transfers that fade away over time: Hicks et al. (2017), Brudevold et al. (2017), Baird et al. (2018, supplemental online materials). In fact, the final slide in Hicks et al. states: ‘Cash effects dissipate quickly, similar to Brudevold et al. (2017), but different to Blattman et al. (2014).’ If only they were presenting a couple of months later…” See also two other recent papers that find positive effects of cash transfers beyond the first year: Handa et al. 2018 and Parker and Vogl 2018. The latter finds intergenerational effects of a conditional cash transfer program in Mexico, so may be less relevant to GiveDirectly’s program.

11. ↑ Haushofer and Shapiro 2018, Abstract: “Comparing recipient households to non-recipients in distant villages, we find that transfer recipients have 40% more assets (USD 422 PPP) than control households three years after the transfer, equivalent to 60% of the initial transfer (USD 709 PPP).” Haushofer and Shapiro 2018, Pg. 28: “Since we have outcome data measured in the short run (~9 months after the beginning of the transfers) and in the long-run (~3 years after the beginning of transfers), we test equality between short and long-run effects…Results are reported in Table 9. Focusing on the within-village treatment effects, we find no evidence for differential effects at endline 2 compared to endline 1, with the exception of assets, which show a significantly larger treatment effect at endline 2 than endline 1. However, this effect is largely driven by spillovers; for across-village treatment effects, we cannot reject equality of the endline 1 and endline 2 outcomes. This is true for all variables in the across-village treatment effects except for food security and psychological well-being, which show a smaller treatment effect at endline 2 compared to endline 1. Thus, we find some evidence for decreasing treatment effects over time, but for most outcome variables, the endline 1 and 2 outcomes are similar.”

12. ↑ Haushofer and Shapiro 2018, Pgs. 32-33: “Total consumption…Omitted: Durables expenditure, house expenditure (omission not pre-specified for endline 1 analysis)”

The post New research on cash transfers appeared first on The GiveWell Blog.

### GiveWell’s outreach and operations: 2017 review and 2018 plans

Fri, 04/20/2018 - 13:48

This is the third of three posts that form our annual review and plan for the following year. The first two posts covered GiveWell’s progress and plans on research. This post reviews and evaluates GiveWell’s progress last year in outreach and operations and sketches out some high-level goals for the current year. A separate post will look at metrics on our influence on donations in 2017. We aim to release our metrics on our influence on donations in 2017 by the end of June 2018.

Summary

Outreach: Before 2017, outreach wasn’t a major organizational priority at GiveWell (more in this 2014 blog post). In our plans for 2017, we wrote that we planned to put more emphasis on outreach, but were at the early stages of thinking through what that might involve. In the second half of 2017, we experimented with a number of different approaches to outreach (more on the results below).
In 2018, we plan to increase the resources we devote to outreach, primarily by hiring a Head of Growth and adding staff to improve our post-donation follow-up with donors.

Operations: In 2017, we completed the separation of GiveWell and the Open Philanthropy Project and increased our operations capacity with three new hires. In 2018, our top priorities are to hire a new Director of Operations (which we have now done), maintain our critical functions, and prepare our systems for increased growth in outreach.

Outreach 2017 review and 2018 plans

Before 2017, outreach wasn’t a major organizational priority at GiveWell (more in this 2014 blog post). In our plans for 2017, we wrote that we planned to put more emphasis on outreach, but were at the early stages of thinking through what that might involve.

We currently have one staff member, Catherine Hollander, who works on outreach full-time. Two others, Tracy Williams and Isabel Arjmand, each spend significant time on outreach. From August 2017, our Executive Director, Elie Hassenfeld, also started to allocate a significant amount of his time to outreach.

How did we do in 2017?

In 2017, we focused on experimentation. In brief, we found that:

• Advertising on podcasts has had strong results. Using the methodology described in this blog post, our best guess is that each dollar we spent on podcast advertising returned $5-14 in donations to our top charities.
• Increasing the consistency of our communication with members of the media had strong results for the time invested.
• Retaining a digital marketing consultant yielded strong results.
• Retaining a PR firm to generate media mentions did not have positive results.
• We’ve had a limited number of conversations with high net worth donors. We don’t yet have enough information to conclude whether this was a good use of time.
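The per-dollar return figure for podcast advertising comes from comparing attributed donations to advertising spend and counting expected future gifts from the donors acquired. As a rough illustration of that kind of arithmetic (a sketch only — every number, rate, and function name below is an invented placeholder, not GiveWell's actual data or methodology):

```python
# Hypothetical sketch of return-per-advertising-dollar arithmetic.
# All figures are invented placeholders, not GiveWell's actual data.

def return_per_dollar(ad_spend: float, attributed_donations: float) -> float:
    """Donations attributed to a campaign per dollar of ad spend."""
    return attributed_donations / ad_spend

def five_year_npv(first_year_donations: float,
                  retention: float = 0.7,
                  discount_rate: float = 0.04) -> float:
    """Five-year net present value of a new donor cohort, assuming a
    constant year-over-year dollar retention rate and a flat discount rate."""
    return sum(
        first_year_donations * retention ** t / (1 + discount_rate) ** t
        for t in range(5)
    )

# Example: $20,000 of podcast ads attributed to $160,000 in first-year donations.
print(return_per_dollar(20_000, 160_000))  # 8.0 dollars returned per dollar spent
print(round(five_year_npv(160_000)))       # NPV once expected repeat giving is counted
```

A range like $5-14 per dollar would then fall out of varying the attribution, retention, and discounting assumptions; the actual methodology is the one described in the blog post linked above.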

You can see our estimates of the five-year net present value of donations generated by each of these activities here. Overall, we spent approximately $200,000 and devoted significant staff time to this work. Our best estimate is that these efforts resulted in $2.5 million to $5.9 million in additional donations to our recommended charities.

We conclude:

• New work on outreach had a high return on investment in 2017.
• Some activities, such as podcast advertising and digital marketing improvements, have shown particularly strong results and should be scaled up.

What are our priorities for 2018?

Our marketing funnel has three stages:

1. Awareness/acquisition: more people hear about GiveWell and visit the website,
2. Conversion: more people who visit the site donate, and
3. Retention: over time, donors maintain or increase their donations.

Our current working theory is that we should prioritize (though not exclusively) improving the bottom of this funnel (retention and conversion) before moving more people through it. We also plan to scale up the activities that worked well in 2017 and to continue experimenting with different approaches.

Our primary outreach priorities (which we expect to achieve and devote substantial capacity to) for 2018 are:

1. Hire a Head of Growth to improve our efforts to acquire and convert new donors via our website. Over the long term, the Head of Growth will be responsible for digital marketing. What does success look like? Hire a Head of Growth.
2. Improve the post-donation experience. We believe we have substantial room to improve our post-donation communication with donors. We have hired a consultant to help us improve our process. What does success look like? Significantly improve our process for post-donation follow-up before giving season 2018. At this point, we’re still in the earliest stages of figuring out how we’ll do this, so we don’t have concrete goals for the year beyond finalizing our plan in the next few months.
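The funnel logic multiplies through: total donations are roughly visitors × conversion rate × average gift × a retention multiplier, which is why improving the bottom stages first raises the value of every visitor acquired afterward. A minimal sketch of that reasoning (all rates and amounts are invented for illustration; this is not GiveWell's actual model):

```python
# Hypothetical three-stage funnel: awareness -> conversion -> retention.
# Every number here is invented for illustration only.

def funnel_value(visitors: int, conversion_rate: float,
                 avg_first_gift: float, retention_multiplier: float) -> float:
    """Expected total donations from a cohort of site visitors.

    retention_multiplier folds multi-year repeat giving into one factor
    (e.g. 2.0 means a donor eventually gives twice their first gift).
    """
    return visitors * conversion_rate * avg_first_gift * retention_multiplier

base = funnel_value(10_000, 0.01, 250.0, 2.0)             # $50,000 from the cohort
better_retention = funnel_value(10_000, 0.01, 250.0, 2.5)

# Improving retention first makes every future visitor worth more:
print(better_retention / base)  # 1.25
```

Because the factors multiply, a 25% gain at the retention stage scales the value of all later awareness and conversion work by the same 25%.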
Our stretch goal for the year is to achieve measurable improvement in our dollar retention rate/lifetime value of each donor.

Our secondary outreach priorities (which we expect to achieve, but not devote substantial capacity to) for 2018 are:

1. Continue advertising on podcasts. This advertising was particularly successful in 2017. We want to systematically assess podcast advertising opportunities and increase our podcast advertising. We plan to spend approximately $250,000 to $350,000 on podcast advertising this year. What does success look like? Advertise on new podcasts and measure results to decide how much to spend in 2019.
2. Receive coverage in major news outlets. This has led to increased donations in the past. What does success look like? Pitch major news outlets on at least five stories in total and get at least one story covered.
3. Deepen relationships with the effective altruism community. We want to deepen our relationships with groups in the effective altruism community doing outreach, particularly to high net worth donors.

For a list of other potentially promising projects we’re unlikely to prioritize this year, see this spreadsheet.

Operations 2017 review and 2018 plans

In 2017, we increased our operations staff capacity, made a number of changes to our internal systems, and completed the separation of GiveWell and the Open Philanthropy Project. In addition to maintaining critical functions, our highest priorities for 2018 are to (i) appoint a new Director of Operations and (ii) make improvements to our processes across the board to prepare our systems for major growth in outreach.

How did we do in 2017?

We made a number of improvements to our operations. In brief:

• We completed the separation of GiveWell and the Open Philanthropy Project.
• Donations: We hired two new members of our donations team, which allowed us to process donations consistently notwithstanding increased volume.
We also added Betterment and Bitpay (for Bitcoin) as donation options.
• Finance: We hired a Controller. We rolled out a few systems to improve the efficiency of our internal processes (Expensify, Bill.com, and others).
• Social cohesion: We created a regular schedule for visit days for remote staff and staff events to maintain cohesion.

In January 2018, Sarah Ward, our former Director of Operations, departed. Natalie Crispin (Senior Research Analyst) has been covering her previous responsibilities during our search for a new hire to take them on.

What are our priorities for 2018?

In the first half of 2018, we aim to move from a situation in which we were maintaining critical functions to positioning the organization to grow. Our main priorities for the first half of 2018 are to:

1. Appoint a new Director of Operations (complete). In April 2018, we hired Whitney Shinkle as our new Director of Operations. Between January and April 2018, Natalie Crispin served as our interim Director of Operations.
2. Prepare our systems for major growth in outreach, which we expect to lead to increases in spending, staff, and donations.
3. Maintain critical operations across domains: donations, finance, HR, office, website, recruiting, and staff cohesion.

Major operations projects we aim to complete in the first half of 2018 include:

• A significant improvement in our approach to budgeting, making it much easier for us to share updated actual spending versus budget.
• We retained a compensation consultant to help us benchmark GiveWell staff compensation to comparable organizations.
• We published our 2016 metrics report and plan to publish our 2017 money moved report by the end of June.

The post GiveWell’s outreach and operations: 2017 review and 2018 plans appeared first on The GiveWell Blog.

### Our 2018 plans for research

Thu, 04/19/2018 - 09:58

This is the second of three posts that form our annual review and plan for the following year.
The first post reviewed our progress in 2017. The following post will cover GiveWell’s progress and plans as an organization. We aim to release our metrics on our influence on donations in 2017 by the end of June 2018.

Summary

Our primary research goals for 2018 are to:

1. Explore areas that may be more cost-effective than our current recommendations but don’t fit neatly into our current criteria by investigating (i) interventions aimed at influencing policy in low- and middle-income countries and (ii) opportunities to influence major aid agencies.
2. Find new top charities that meet our current criteria by (i) completing intervention reports for at least two interventions we think are likely to result in GiveWell top charities by the end of 2019, (ii) considering renewal of GiveWell Incubation Grants to current grantee organizations that may become top charities in the future and making new Incubation Grants, and (iii) developing and maintaining high-quality relationships with charities, funders, and influencers in the global health and development community.
3. Improve our internal processes to support the above goals. We plan to continue to delegate significant parts of our top charity update process to non-management staff and to improve our year-end process for making recommendations.
4. Continue following our top charities and address priority questions. We are devoting fewer resources than we have in the past to top charity updates. We plan to continue gathering up-to-date information to allow us to make high-quality allocation decisions for giving season, and to answer a small number of high-priority questions.

Our secondary goals (which we hope to achieve, but are lower priority than the goals above) are to:

1. Improve the quality of our decisions and transparency about our decision-making process.
2. Hire more flexible research capacity to increase our output.
3. Complete reviews of two new potential top charities.
We discuss each of these goals in greater depth below.

Goal 1: Explore areas that may be more cost-effective than our current recommendations

We’ve added five new top charities in the last two years. We now believe that our current top charities have more room for more funding than we are able to fill. This increases the relative value of identifying giving opportunities that are substantially more cost-effective than our current top charities (because identifying similarly cost-effective opportunities will crowd out marginal funding for our current top charities), even if we believe we have a lower chance of success of identifying these opportunities. We’re therefore prioritizing investigating the areas we believe have the highest chance of containing opportunities that are substantially more cost-effective than our current top charities.

The primary staff working on this are James Snowden (Research Consultant) and Josh Rosenberg (Senior Research Analyst).

Sub-goal 1.1: Assess interventions to influence policy in low- and middle-income countries

Our current top charities all implement direct-delivery interventions (although we believe that some leverage substantial domestic government funding). We think there’s a reasonable, intuitive case that philanthropists may, in some cases, have a greater impact by influencing government policy because (i) governments have access to regulatory interventions that are unavailable to philanthropists and (ii) there may be opportunities to help improve the allocation of large pools of funds.

We’ve started work investigating advocacy for tobacco control (notes 1, 2, 3), lead paint regulation (1, 2, 3), and J-PAL’s Government Partnership Initiative (1, 2). More about why we’re prioritizing this area here.

What does success look like? We publish at least five reports on interventions to influence policy in low- and middle-income countries and prioritize one to three for deeper assessment.
Sub-goal 1.2: Improve our understanding of aid agencies

We believe there may be opportunities for GiveWell (or potential GiveWell grantees) to help improve the allocation of spending by aid agencies. We want to improve our understanding of what aid agencies spend their funds on, whether there are opportunities to improve this allocation, and whether GiveWell (or potential grantees) would be in a good position to assist.

What does success look like? As this project is at an early stage, we don’t yet have specific metrics to assess success.

Goal 2: Find new top charities that meet our current criteria

One of our most important long-term goals is to identify all charities that should be top charities under our current criteria. We are uncertain whether we will be able to identify organizations outside of our current scope of work that we believe are substantially better giving opportunities than our current top charities (Goal 1), and we want to ensure we’re recommending the best giving opportunities, even if we believe they’re similarly cost-effective to our current top charities.

The primary staff working on this are Caitlin McGugan (Senior Fellow), Andrew Martin (Research Analyst), Josh Rosenberg (Senior Research Analyst), Stephan Guyenet (Research Consultant), Sophie Monahan (Research Analyst), and Chelsea Tabart (Research Analyst).

Sub-goal 2.1: Produce two intervention reports

Intervention assessments are key to our research process. We generally only consider organizations that are implementing one of our priority programs—so designated upon our completion of an assessment of the intervention—for top charity status (an exception is if an organization has done rigorous evaluation of its own program, though in practice we have found this to be very rare). Last year, we completed two full intervention reports (as opposed to “interim” reports, which are less time-intensive).
As we’re allocating a larger proportion of our capacity to Goal 1 than we did last year, we aim to maintain this level of output at two full intervention reports this year.

What does success look like? We complete and publish two full intervention reports on potential new priority programs.

Sub-goal 2.2: Complete grant renewal assessments and new reviews as part of GiveWell Incubation Grants

There are a number of GiveWell Incubation Grantees that we hope will become top charities in the future. We want to ensure we’re making good decisions about the renewals of their grants and to continue to support organizations in developing monitoring and evaluation to the point where they can be considered for top charity status. In the past, we’ve made GiveWell Incubation Grants to promising opportunities that didn’t fit within our research priorities at the time. We want to remain open to investigating opportunities we’re not yet aware of.

What does success look like? Complete assessments for grant renewals for Results for Development, Charity Science: Health, and a new grant for Evidence Action’s work on iron and folic acid supplementation. Prioritize at least two new Incubation Grants and complete a thorough investigation of each.

Sub-goal 2.3: Develop and maintain high-quality relationships with charities, funders, and influencers in the global health and development community

We expect good relationships with relevant organizations to help us (i) increase the number and diversity of good-fit charities that express interest in applying for our recommendation, (ii) identify new interventions we should consider as potential GiveWell priority programs, and (iii) clearly communicate our approach to potential top charities, enabling them to determine whether they would be a good fit for our process.
While we feel our relationships with well-regarded global health and development implementers and funders have improved, we continue to feel limited in our ability to understand whether there are funding gaps for evidence-backed, highly cost-effective work within large international NGOs and multilateral aid organizations such as the Global Fund to Fight AIDS, Tuberculosis and Malaria.

What does success look like?

• We have at least one call or meeting with at least 60 different charities that we have not recommended or made an Incubation Grant to (last year, we had 42) and at least 100 such calls or meetings in total.
• We have at least five multi-program organizations with budgets of more than $50 million annually express interest in being considered for our top charity recommendation for a specific, promising program, if we invite them to apply.
• We prioritize research work beyond an initial, brief evidence assessment on at least five interventions that we became aware of through professional networks.

Goal 3: Continue to improve our internal processes

We believe there’s room for improvement in a number of research processes to support the above goals, as well as our work following our current top charities. We don’t expect the general public to see clear evidence of progress on these goals, as they largely relate to our internal operations.

The primary staff working on this are Elie Hassenfeld (Executive Director), Josh Rosenberg (Senior Research Analyst), and Natalie Crispin (Senior Research Analyst).

Sub-goal 3.1: Decrease the amount of time senior staff spend on top charity updates this year

In the past, much of the work on top charity updates has been the responsibility of Natalie Crispin (Senior Research Analyst). We plan to move a higher proportion of this work to other research staff to minimize the extent to which our institutional knowledge is dependent on any one individual.

What does success look like? Natalie spends less than 30 percent of her time on top charity updates, and, more subjectively, we believe at the end of 2018 that it would not cause significant disruption to further reduce Natalie’s time on this work (i.e., to 15 percent) in 2019.

Sub-goal 3.2: Improve our process for publishing our year-end recommendations

In 2017, we started finalizing our charity recommendations for giving season later than was optimal. This meant much of the work had to be completed in a short amount of time, and there was insufficient time to solicit feedback and criticism from our top charities. While this was partly a consequence of adding two new top charities, we want to be more disciplined this year about when we start preparation for our giving season recommendations.

What does success look like? With exceptions for cases where we need to wait (i.e., final room for more funding estimates and cost-per-treatment estimates for existing top charities, information related to new top charities, or information that isn’t available until after July 31 and is crucial to our recommendations), finalize underlying research directly relevant to our 2018 recommendations by July 31; finalize all research and pages by November 1 (two-plus weeks before our publication deadline) to allow for (a) charity feedback and (b) internal debate.

Goal 4: Continue following our top charities and address priority questions

We are devoting fewer resources than we have in the past to top charity updates. We plan to continue gathering up-to-date information to allow us to make high-quality allocation decisions for giving season and to answer a small number of high-priority questions:

• For each top charity, we plan to review spending over the last year and new monitoring and evaluation reports; update our estimate of their cost per deliverable (e.g., deworming treatment, preventative malaria treatment, or loan provided); and complete an analysis of their room for more funding.
• For Helen Keller International (HKI), we plan to explore three major outstanding questions:
1. What is HKI’s impact on coverage rates in vitamin A supplementation campaigns? To date, we have only supported HKI’s work to fund campaigns that are unlikely to occur without funding from HKI, and we would like to understand whether we should expand this support to other campaigns that HKI works on.
2. What other interventions are delivered alongside vitamin A and how does that impact the cost-effectiveness of HKI’s work?
3. What would it take to gather more data on current levels of vitamin A deficiency in locations where HKI works or may work in the future?
• We want to increase our confidence in the costs incurred by other actors for net distributions that are supported by the Against Malaria Foundation, one of our current top charities.
• We plan to speak with each of our standout charities for an update on their work.

The primary staff working on this are Natalie Crispin (Senior Research Analyst), Isabel Arjmand (Research Analyst), Andrew Martin (Research Analyst), Chelsea Tabart (Research Analyst), and Nicole Zok (Research Analyst).

What does success look like? By the end of November 2018, we complete updated reviews of each of our current top charities that include the information listed above. We also publish conversation notes from discussions with each current standout charity.

Goal 5 (Secondary): Improve the quality of our decisions and transparency about our decision-making process

We would like to improve the process by which we set our allocations during giving season. We don’t know yet exactly what this will involve, but we intend to do some initial work to determine ways we can improve the quality of our decisions and transparency about them.

Goal 6 (Secondary): Hire more flexible research capacity to increase our output

We believe our research team is currently capacity constrained. We would like to hire more flexible research generalists at all levels of seniority. We don’t expect to spend more time on this goal than we already are, but we would be excited about hiring the right candidates. If you’re interested in working for GiveWell, you can apply through our jobs page.

Goal 7 (Secondary): Complete reviews of at least two new top charities

We are prioritizing top charity reviews less highly this year than we have in previous years because we currently expect to identify significantly larger funding gaps than we will be able to fill. However, we have a shortlist of potential candidates for top charity status, and if we have the capacity, would like to complete evaluations of one or two of these organizations.

What does success look like? Complete evaluations for one or two new potential top charities.

The post Our 2018 plans for research appeared first on The GiveWell Blog.

### Review of our research in 2017

Wed, 04/18/2018 - 13:13

This is the first of three posts that form our annual review and plan for the following year. This post reviews and evaluates last year’s progress on our work of finding and recommending evidence-based, thoroughly vetted charities that serve the global poor. The following two posts will cover (i) our plans for GiveWell’s research in 2018 and (ii) GiveWell’s progress and plans as an organization. We aim to release our metrics on our influence on donations in 2017 by the end of June 2018.

Summary

We believe that 2017 was a successful year for GiveWell’s research. We met our five primary goals for the year, as articulated in our plan post from the beginning of the year:

Our primary research goals for 2017 are to:

1. Speed up our output of new intervention assessments, by hiring a Senior Fellow and by improving our process for reviewing interventions at a shallow level.
2. Increase the number of promising charities that apply for our recommendation. Alternatively, we may learn why we have relatively few strong applicants and decide whether to change our process as a result. Research Analyst Chelsea Tabart will spend most of her time on this project.
3. Through GiveWell Incubation Grants, fund projects that may lead to more top charity contenders in the future and consider grantees No Lean Season and Zusha! as potential 2017 top charities.
4. Further improve the robustness and usability of our cost-effectiveness model.
5. Improve our process for following the progress of current top charities to reduce staff time, while maintaining quality. We also have some specific goals (discussed below) with respect to answering open questions about current top charities.

We achieved our five primary goals for the year:

1. Our intervention-related output was greater than in any past year, although we still see room for improvement in the pace with which we complete and publish this work (more). We hired a Senior Fellow and published nine full or interim intervention reports in 2017, compared to four in 2016.
2. We increased the number of promising charities that applied for our recommendation (more).
3. We added two new top charities: Evidence Action’s No Lean Season (the first top charity to start as a GiveWell Incubation Grant recipient) and Helen Keller International’s vitamin A supplementation program (which joined our list as a result of our charity outreach work). We continued to follow our current Incubation Grant recipients and made several new Incubation Grants to grow the pipeline of new top charities (more).
4. We made substantial improvements to our cost-effectiveness analysis (more).
5. We reduced the amount of staff time spent on following our current top charities. We also completed 17 of the 19 activities outlined in last year’s plan (more).

We discuss progress on each of our primary goals below. For each high-level goal, we include (i) the subgoals we set in our last annual review, (ii) an evaluation of whether we met those subgoals, and (iii) a summary of key activities completed last year.

Goal 1: Speed up intervention assessments

In early 2017, we wrote:

In recent years, we have completed few intervention reports, which has limited our ability to consider new potential top charities. We plan to increase the rate at which we form views on interventions this year by:

• Hiring a Senior Fellow (or possibly more than one). We expect a Senior Fellow to have a Ph.D. in economics, public health, or statistics or equivalent experience and to focus on in-depth evidence reviews and cost-effectiveness assessments of interventions that appear promising after a shallower investigation. In addition, Open Philanthropy Project Senior Advisor David Roodman may spend some more time on intervention-related work.
• Doing low-intensity research on a large number of promising interventions. We generally start with a two to four hour “quick intervention assessment,” and then prioritize interventions for a 20-30 hour “interim intervention report” (example). We don’t yet have a good sense of how many of these we will complete this year, because we’re unsure both about how much capacity we will have for this work and about how many promising interventions there will be at each step in the process.
• Continuing to improve our systems for ensuring that we become aware of promising interventions and new relevant research as it becomes available. We expect to learn about additional interventions by tracking new research, particularly randomized controlled trials, in global health and development and by talking to select organizations about programs they run that they think we should look into.

How did we do? Achieved our goal.

Due to our uncertainty about the capacity we could devote to intervention assessments, we did not have an explicit target for how many reports we expected to complete. In 2017, we published seven interim intervention reports and two full intervention reports, and completed ~30 quick evidence assessments (defined below). Our research output for 2017 was higher than in 2016, when we published one full intervention report and three interim intervention reports, and completed 30 quick evidence assessments.

What did we do?

Goal 2: Increase the pipeline of promising charities applying for our recommendation

In early 2017, we wrote:

We would like to better understand whether we have failed to get the word out about the potential value we offer or communicate well about our process and charities’ likelihood of success, or, alternatively, whether charities are making well-informed decisions about their fit with our criteria. (More on why we think more charities should consider applying for a GiveWell recommendation in this post.)

This year, we have designated GiveWell Research Analyst Chelsea Tabart as charity liaison. Her role is to increase and improve our pipeline of top charity contenders by answering charities’ questions about our process and which program(s) they should apply with, encouraging promising organizations to apply, and, through these conversations, understanding what the barriers are to more charities applying.

We aim by the end of the year to have a stronger pipeline of charities applying, have confidence that we are not missing strong contenders, or understand how we should adjust our process in the future.

How did we do? Achieved our goal.

More charities entered our top charity review process in 2017, although it’s unclear whether this was due to our charity liaison activities. Five charities formally applied in 2017, compared to two in 2016, and four in 2015. One of those charities, Helen Keller International’s vitamin A supplementation program, became a top charity.

While we feel our relationships with well-regarded global health and development implementers and funders have improved, we continue to feel limited in our ability to understand whether there are funding gaps for evidence-backed, highly cost-effective work within large international NGOs and multilateral aid organizations such as the Global Fund to Fight AIDS, Tuberculosis and Malaria.

What did we do?

• We had at least one conversation with 42 organizations to introduce them to GiveWell’s work in 2017, compared to 16 in 2016.
• Where organizations running multiple programs expressed interest in applying for our recommendation, we had several calls with them to help determine whether they should apply and which of their programs would be the most promising fit for a top charity evaluation. We had not offered this proactive support to organizations in the past.
• We hosted two charity-focused events: (i) a conference call for charities with GiveWell senior staff to present an update on our work as it relates to charities and to give them a chance to ask questions directly of our senior team and (ii) a networking event for our recommended organizations in London.
• We attended seven conferences on global health and development issues to broaden our network and perspective in subject-matter areas that GiveWell has not historically worked on.

Goal 3: Maintain Incubation Grants

In early 2017, we wrote:

We made significant progress on Incubation Grants in 2016 and plan in 2017 to largely continue with ongoing engagements, while being open to new grantmaking opportunities that are brought to our attention.

Among early-to-mid stage grants, we plan to spend the most time on working with IDinsight and New Incentives (where our feedback is needed to move the projects forward), and a smaller amount of time on Results for Development and Charity Science: Health (where we are only following along with ongoing projects).

Another major priority will be following up on two later-stage grantees, No Lean Season and Zusha!, groups that are contenders for a top charity recommendation in 2017. For No Lean Season, a program run by Evidence Action, our main outstanding questions are whether the program will have room for more funding in 2018 and whether monitoring will be high quality as the program scales. We have similar questions about Zusha! and in addition are awaiting randomized controlled trial results that are expected later this year.

How did we do? Exceeded goal.

As expected, our work last year focused on following up on current grantees. No Lean Season, one of our later-stage grantees, graduated to top charity status and we made one grant to a new grantee, the Centre for Pesticide Suicide Prevention. We also made a number of grants to improve our understanding of the evidence base for our priority programs and deepened our partnership with IDinsight.

What did we do?

Goal 4: Improve our cost-effectiveness analysis

In early 2017, we wrote:

We plan to continue making improvements to our cost-effectiveness model and the data it draws on (separate from adding new interventions to the model, which is part of the intervention report work discussed above). Projects we are currently prioritizing include:

• Making it more straightforward to see how personal values are incorporated into the model and what the implications of those values are.
• Revisiting the prevalence and intensity adjustment that we use to compare the average per-person impact of deworming in places that our top charities work to the locations where the studies that found long-term impact of deworming were conducted. More in this post.
• Improving the insecticide-treated nets model by revisiting how it incorporates effects on adult mortality and adjustments for regions with different malaria burdens and changes in malaria burden over time.

How did we do? Achieved goal.

We made substantial progress on improving our cost-effectiveness analysis in 2017.

What did we do?

• Moved to a system of making more frequent updates to our cost-effectiveness analysis. This has made it easier to identify which specific factors are driving changes in the estimated cost-effectiveness of our top charities.
• Revisited how we think about leverage and funging (how donating to our top charities influences how other funders spend their money) and updated our cost-effectiveness analysis accordingly.
• Published a report on how other global actors approach the difficult moral tradeoffs we face.
• Prior to announcing our 2017 recommendations, we performed a sensitivity check on our cost-effectiveness analysis to identify how sensitive our final outputs were to different uncertain inputs. This has helped us identify which inputs we should prioritize additional research on, and we believe it has made our communication more transparent, particularly around our personal values.
• Revisited and updated our prevalence and intensity adjustments for deworming.
• Deprioritized improving how our insecticide-treated net model incorporates effects on adult mortality. A limited number of conversations with malaria experts made us less confident that there was informative research on the question that would improve the accuracy of our models.
• Deprioritized making adjustments for subnational regions with different malaria burdens because it would take substantial time to deeply understand the assumptions informing the subnational models we have seen. We believe this remains an important weakness of our model and that it limits our ability to make high-quality decisions about prioritization among different regional funding gaps.

Goal 5: Improve our process for following top charities

In early 2017, we wrote:

“In 2017, we plan to have a single staff member do most of this work and expect it to take a half to two-thirds of a full-time job. Three other staff will spend a small portion of their time, totaling approximately the equivalent of one full-time job, on this work.”

How did we do? Achieved goal.

We estimate that this work took about 40 percent of the time of the staff member who focused on it, plus a small portion of four other staff members’ time, totaling at most (and likely somewhat less than) the equivalent of one full-time job, roughly half the time we dedicated to top charity updates in 2016.

We believe we maintained or increased the quality of the top charity updates, as we completed or made major progress on all but two of the activities and questions outlined in last year’s plan.

What did we do?

The table below summarizes our progress on each of the activities and open questions outlined in last year’s plan.