Innovations for Poverty Action — Mindset Engagement in Cash Transfers

Published: June 2016; Last updated: November 2018

[Added December 19, 2016: GiveWell's experimental work is now known as GiveWell Incubation Grants.]

Note: this page summarizes the rationale behind a grant to Innovations for Poverty Action made by Good Ventures. Innovations for Poverty Action staff reviewed this page prior to publication.

Summary

As part of GiveWell’s general effort to support the development of potential future top charities and improve the quality of our recommendations, Good Ventures granted $350,000 in May 2016 to Innovations for Poverty Action to support a randomized controlled trial (RCT), run by Richard Sedlmayr and colleagues in collaboration with GiveDirectly (one of our top charities), testing a mindset intervention with recipients of cash transfers.

Updates

November 2018 update

The researchers have pre-registered the study here.

July 2018 update

Richard Sedlmayr provided an update on this project, noting that the project is on track. The endline survey began in May 2018 and is scheduled to be completed in October 2018. A pre-analysis plan is expected to be filed in September 2018, preliminary results are likely to be available by November 2018, and a working paper is expected around March 2019.

One challenge that the project continues to encounter is a high rate (~20-30%) of refusals in one of its two study regions. Most of the respondents who refused the endline survey had also refused the cash transfer intervention. Anecdotally, refusals continue to be related to persistent rumors about the implementing partner, GiveDirectly.

The scope of the research has been expanded since we recommended this grant, allowing collaborators to study intra-household bargaining effects, local distribution and inequality effects, civic effects, and network effects. This additional work does not involve Richard Sedlmayr and is being supported by other funders.

The intervention

Richard Sedlmayr (a PhD student with whom we have been in contact for several years) and his colleagues Stefan Dercon, Rob Garlick, Johannes Haushofer, and Kate Orkin have proposed an RCT that will attempt to encourage cash transfer recipients to reconsider fundamental aspects of themselves, their surroundings, and their future by showing them a brief video and providing interaction with a coach. The team will evaluate whether, and if so why, this mindset intervention affects the outcomes of the cash transfers.

While we are not particularly familiar with the literature on mindset interventions, we think this study is likely to be worth supporting for several reasons:

  1. Paul Niehaus (Co-Founder and President of GiveDirectly) has spoken favorably to us about this idea, and is willing to run this RCT on top of GiveDirectly's existing program.
  2. Based on previous contact with Sedlmayr, we respect his opinion and thus see his interest as some evidence in itself that the intervention is worth testing.
  3. If the intervention does have an effect, it has the potential to increase GiveDirectly's impact (and the impact of cash transfers in general, if other groups adopt the intervention).

Grant details

This grant fits into GiveWell's experimental work by funding a study with the potential to directly influence (a) a top charity's (GiveDirectly's) program and (b) a priority program (cash transfers) more generally.

Our understanding is that, prior to our making this grant, the team had raised most of the funding needed to conduct a smaller version of the study, but that it would likely have been underpowered (i.e. the data collected would have been insufficient to detect a statistically significant effect of the size that the intervention might produce). Our grant is aimed primarily at increasing the likelihood that, at the conclusion of the study, we are confident about whether or not the intervention is effective, rather than remaining uncertain because the study was underpowered due to limited investment.
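To make the power consideration concrete, here is a minimal sketch (using Python's statsmodels library) of a standard two-sample power calculation; the effect sizes, significance level, and 80% power target are illustrative assumptions of ours, not parameters of this study.

```python
# Illustrative power calculation (not the study's actual design parameters).
# Shows roughly how many participants per arm a two-sample comparison needs
# to detect a given standardized effect size with 80% power at the 5% level.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for effect_size in (0.3, 0.2, 0.1):  # hypothetical standardized effect sizes (Cohen's d)
    n_per_arm = analysis.solve_power(
        effect_size=effect_size,  # standardized mean difference to detect
        alpha=0.05,               # two-sided significance level
        power=0.8,                # probability of detecting a true effect of this size
    )
    print(f"effect size {effect_size}: ~{n_per_arm:.0f} participants per arm")
```

Because the required sample size grows roughly with the inverse square of the detectable effect, halving the smallest effect a study can detect roughly quadruples the sample it needs, which is why limited funding can leave a study unable to distinguish a modest but meaningful effect from noise.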

The first and (from our perspective) most important tranche of our $350,000 grant is $230,000 to allow the team to run an additional follow-up survey, which we expect to significantly increase our confidence in the study's results. The team plans to use the next $80,000 to expand from a shortened to a full baseline survey, and the remaining $40,000 to increase the intensity of the intervention by lengthening the video engagement with transfer recipients.

Cost-effectiveness

While we believe that providing the additional funding to allow the study to be adequately powered is likely to be cost-effective, we do not have a formal estimate of the grant's expected impact in, e.g., cost per unit of increased confidence on our part in the study's results.

We also do not have a confident prediction of how likely this intervention is to yield an effect large enough to be worth the cost of implementation, mainly because we do not have confident estimates of:

  • The likely impact of the intervention
  • How much the intervention would increase the cost of cash transfer programs if implemented at scale (though our understanding is that the intervention is relatively inexpensive)

Room for more funding

In the absence of this grant, our impression is that the team would move ahead with the more limited version of the study, which we believe would have a reasonable probability of leaving us uncertain about the intervention's impact upon completion. Sedlmayr has told us that our additional funding will help ensure the study is adequately powered to statistically detect any potentially meaningful economic effect in the data.

While it is possible that the team might be able to raise some funding from other sources, our impression is that they are not aware of other funding options and that starting the study is time-sensitive.

Risks of the grant and internal forecasts

This grant could fail to have the effects we hope for in a number of ways:

  1. The study detects an effect that is too small relative to the cost of implementing the intervention for it to be worth scaling up. We believe this is reasonably likely (~50% chance).
  2. The study yields a result that we're not confident in. We think there is a moderate chance (~25%) of this (given the number of potential problems that can arise with any study).
  3. The study detects an effect that would be worth scaling up, but we are unable to find an implementer interested in doing so (for instance, if GiveDirectly were to decide not to incorporate the intervention because it is too time-intensive or diverts attention from other activities, or because GiveDirectly interprets the study's results differently than we do). We think this scenario is fairly unlikely (~7.5%).
  4. The intervention has no measurable effect, and we could have predicted this prior to the study by surveying the existing literature more thoroughly. We think this is fairly unlikely (~7.5%), especially given Sedlmayr's interest in attempting the intervention.

(We’re experimenting with recording explicit numerical forecasts of events related to our decision-making, especially grantmaking. The idea behind this is to pull out the implicit predictions that are playing a role in our decisions, and to make it possible for us to look back on how well calibrated and accurate those predictions were.)
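As a minimal sketch of how such forecasts could later be scored, the example below computes a Brier score (the mean squared error of probability forecasts against 0/1 outcomes); the probabilities and outcomes shown are made-up placeholders, not GiveWell's actual forecasts or results.

```python
# Score hypothetical probability forecasts against observed outcomes with the
# Brier score: lower is better, and always guessing 50% scores 0.25.
def brier_score(forecasts, outcomes):
    """Mean squared difference between forecast probabilities and 0/1 outcomes."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical placeholder data: predicted probabilities and whether each event occurred.
forecasts = [0.7, 0.2, 0.9, 0.5]
outcomes = [1, 0, 1, 0]
print(f"Brier score: {brier_score(forecasts, outcomes):.3f}")
```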

Relationship disclosures

Sedlmayr has been GiveWell's primary contact at a philanthropic advisory firm that helped secure funding for GiveWell's operations from an anonymous donor. GiveWell therefore has a potential conflict of interest: this grant recommendation asks Good Ventures to fund someone who may be in a position to recommend funding to us. We do not consider this a sufficient reason to refrain from making the recommendation.