Our Mistakes

This page logs mistakes we’ve made and lessons we’ve learned. We share this information so that others can benefit from our experience and evaluate us as an organization.

GiveWell is dedicated to finding and funding outstanding opportunities in global health and development, publishing the full details of our research on this website for donors to review. The organizations we fund must be open to our intensive review process and transparent public discussion of their track record and progress, both the good and the bad. We expect the same of ourselves.

We focus on issues that could affect the impression that people have of our work and its reliability, including errors in our research, grantmaking, organizational strategy, and operations. We have done our best to include mistakes that may have affected our decisions, that were preventable, and that resulted in lessons leading us to make meaningful changes. We have especially tried to include mistakes that we think might lead donors to reconsider donating to us. We haven’t listed missteps where the main cost was to our productivity or growth. If you know of other items you think should be listed here, please contact us.

Last updated: November 2025 (December 2024 version, April 2024 version, 2019 version and 2015 version)

Major issues

2016 and ongoing (first posted in 2018): Failure to publish all relevant intervention research

How we fall short: In early 2016, we began to review the evidence base for a large number of programs to determine how we should prioritize programs for further evaluation. Our 2016 research plan discusses this priority (referred to as "intervention prioritization"). Since then, the vast majority of the work that we’ve done to review interventions remains private, in internal documents that we have not shared because we have not put in the time to ensure the work is of publishable quality. We prioritized assessing additional opportunities over preparing our work for publication.

While we don’t believe that publishing this work is likely to have changed any of the recommendations we make to donors, we see this body of private materials as a substantial failure to be transparent about our work. We believe that transparency is important for explaining our process and allowing others to vet our work. The process of formally writing up our research and seeking internal and external feedback has also on occasion changed our conclusions.

This remains an area for improvement.

Steps we are taking to improve (posted December 2019): We plan to make progress on this work in 2020. Our research team has built more time into its plans for the year for publishing research we completed in the past as well as newer investigations.

Update (posted September 2023): Though we didn’t post an update on our progress in 2020, we’ve taken several steps since 2016 to publish more of our research. As our research team has grown, we’ve generated a greater volume of research. We’ve made progress on publishing these findings, but we still have more work to do.

Areas of progress. We’ve made substantial progress in publishing more of our research.

  • Grant pages. For example, in 2016, we were not yet publishing the full rationale behind each of our funding decisions; we now expect to publish a page on every grant we recommend for funding. A list of all pages we’ve published on grants since 2014 is available here.
  • Deprioritization decisions. We began publishing short notes that explain our decisions to stop or pause investigation on programs that don’t appear promising after an initial review (example here). This format allows us to more quickly communicate our views about a deprioritized program so that people can evaluate and respond to our reasoning. You can find all our short deprioritization notes in the program reviews dashboard.

We’re also publishing more quickly. In 2022, we began setting internal timeline targets for publishing new grant pages. Since setting those initial goals, we have published grant pages for Top Charities more quickly than before; they are now usually published within three months of a grant being made. We are tracking timelines for all research publishing, so in the future we will be able to assess whether we met our goals. We sped up our publication process, in part, by eliminating unnecessary review steps and streamlining communication with grantees to make their review and signoff easier.

Areas for improvement. While we have shortened our timelines for publishing relative to 2016, especially for grants to Top Charities, we still have more work to do to publish research quickly, particularly research on interventions.

We are also working to increase the legibility of our research, which we have prioritized in 2023. As part of our value of transparency, we want readers to be able to understand our reasoning, evaluate the ways we might be wrong, and provide feedback that will improve our research. Toward that end, we’ve added new summaries to our research and grant pages that describe what the program or grant does, identify our key assumptions, and clearly explain the program or grant’s cost-effectiveness and our largest sources of uncertainty. You can see examples of these features on this grant page.

Update (posted November 2025): Our primary goals with respect to this mistake have been to publish our research more quickly, to share more of our research, and to publish research materials that are more legible. We have made incremental progress in all three areas since our last update, though we continue to have areas for improvement.

Publishing more quickly

We have had timeline targets for publishing new grant pages since 2022. Over the past year, we have substantially increased the number of grant pages we publish within three months after grant approval, though we remain significantly short of our goal. As of November 2025, 22 grant pages remain unpublished after three months, and 8 of those 22 grants were approved more than six months ago.

We have improved by assigning team leaders responsibility for moving grant pages forward and reviewing progress toward our goal each quarter. We are also tracking the steps in the process that take the longest and identifying strategies to streamline them. For example, in order to speed up the initial drafting of the grant page, which is among the longest steps in the process, researchers are now required to have a first draft of a grant page completed at the time they request approval for the grant.

Publishing more of our research

We have made substantial progress in publishing more of our research: Our overall publishing volume has nearly doubled over the past year. For example, during the first eight months of this metrics year (February 1, 2025 through September 30, 2025), we published 50 grant pages—already more than we published during all of 2024. In addition, we published 11 reports on specific programs or research questions—about as many as we published in all of 2024.

We have also developed new ways to share our work beyond our website. For example, we launched a podcast in March 2025 to provide updates on the impact of aid cuts on health programs and to share information about other aspects of GiveWell’s work.

Improving legibility

In 2023 and 2024, we made substantial progress in making our work more legible, and legibility is now a guiding principle for our research team. As noted in the update above, we added new summaries (like this one) to our research and grant pages that describe what the program or grant does, identify our key assumptions, explain our cost-effectiveness estimate, and share our largest sources of uncertainty. In addition, our grant pages now include a more complete walk-through of our cost-effectiveness model (as can be seen here and on other pages).

Nevertheless, our grant pages remain lengthy and complex. The primary goal of our legibility effort has been to enable outside experts to review and critique our work. We believe our research publications are now close to accomplishing this goal, but they are often not legible to non-researchers. While we aim to serve this broader audience through other communications, such as our blog and podcast, we are beginning to invest more in making the main findings and processes of our research more accessible.

2020: Privacy Policy–related misstep

How we fell short: We have gradually expanded our marketing efforts since 2018. In May 2020, as part of these efforts, we updated our Privacy Policy.

Our updated policy included the ability to share personal information with service providers to assist with our marketing efforts. Our contracts required them to keep the information confidential and only use it to assist us with their contracted services.

We decided to use Facebook as such a service provider, and on July 12, 2020, we used email addresses of some donors to create a Facebook Custom Audience to help us identify other potential donors. We understand this to be a common tool for social media marketing. The email addresses were hashed (irreversibly converted into fixed-length codes) locally before they were uploaded to Facebook for processing to create a Custom Audience. Facebook was required by our contract to delete the email addresses promptly after the Custom Audience was created and was not allowed to use the email addresses for other purposes.
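
For readers curious about the mechanics, here is a minimal sketch of this kind of local hashing. It assumes the trim-and-lowercase normalization and SHA-256 digest commonly used for Custom Audience matching; it illustrates the general technique rather than our exact pipeline, and the addresses shown are made up.

```python
import hashlib

def hash_email(email: str) -> str:
    """Normalize an email address and hash it with SHA-256.

    Hashing is deterministic and one-way: the same address always yields
    the same digest, but the digest cannot be reversed to recover the
    address.
    """
    normalized = email.strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# Only the hexadecimal digests would be uploaded, never the raw addresses.
addresses = ["Donor@example.org ", "another.donor@example.com"]
digests = [hash_email(a) for a in addresses]
```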

We regret not having offered all donors a chance to opt out before we used their email addresses for this purpose.

How we addressed our mistake (posted June 2021): We deleted our Custom Audience on July 30, 2020, after realizing some of our donors may have wanted the chance to opt out before their email address was used to create a Custom Audience in order to identify potential new donors. This realization was prompted by our CEO asking for an update on our approach to privacy protection.

We notified donors whose email addresses were used about what happened. We emailed others about the update to our Privacy Policy and how to opt into or out of information-sharing in the future. We also added an opt-out form to our Privacy Policy page. We don’t plan to proactively contact our audience prior to each future marketing effort, though we may decide to on a case-by-case basis.

We completed an internal assessment of what led to this misstep. To avoid similar missteps in the future, we piloted a formalized process for scoping projects with a goal, among others, of ensuring the right level of review for very new types of work (as social media marketing was in 2020).

2017 to 2019: Failure to publish charity reviews

How we fell short: Since early 2017, we have had a significant number of conversations with organizations about applying for a GiveWell recommendation. We also completed a preliminary evaluation of a number of applications. Much of this work remains private. In some cases, this is because we did not get permission to publish information from those we spoke to. In other cases, it is because we did not put in the time to write up what we had learned in a format that we believed the organizations would allow us to publish.

We do not plan to publish these reviews, as they are outdated and likely would not represent the current organizations accurately. We do not think it would be a good use of the organizations’ time to review our outdated work, nor would we expect to be successful in asking them to do so.

However, as we say above: “While we don’t believe that publishing this work is likely to have changed any of the recommendations we make to donors, we see this body of private materials as a substantial failure to be transparent about our work. We believe that transparency is important for explaining our process and allowing others to vet our work. The process of formally writing up our research and seeking internal and external feedback has also on occasion changed our conclusions.”

How we addressed our mistake (posted October 2020): Our research team built additional time for publishing into its process.

2007 to 2014: Failure to prioritize staff diversity in hiring

How we fell short: From 2007 to 2014, we did not prioritize diversity in our hiring, and our staff composition reflects the lack of attention we paid to this issue.

We believe a more diverse staff will make GiveWell better and more effective. We believe broadening our candidate pipeline and reducing any bias that exists in our hiring process will increase our likelihood of hiring the best people to achieve GiveWell’s mission. And, we believe that having a diverse staff and an inclusive culture will make GiveWell more attractive to prospective staff and improve retention.

How we addressed our mistake (updated October 2020): We have made progress, but we still consider staff diversity an area in which to improve.

Since 2014, we have taken a number of steps to increase diversity in our hiring. Those efforts include advertising open roles with professional groups that focus on underrepresented audiences and working with consultants to recruit candidates from underrepresented backgrounds. We also use a hiring process that aims to limit bias by focusing on work samples that are graded blindly where possible.

As of 2020, our team is significantly more diverse in terms of gender, race, and ethnicity than it was in GiveWell’s early years. It still is not as racially or ethnically diverse as we would like it to be. People from low- and middle-income countries, in which our Top Charities primarily operate, are not well represented on staff. As of mid-2020, we continue to undertake specific projects to increase diversity on our staff, such as examining whether our recruitment processes differ from best practices for recruiting a diverse workforce and then working to ensure that we follow those best practices.

2014 to 2016: Failure to prioritize hiring an economist

How we fell short: From 2014 to 2016, we produced relatively few intervention reports, a crucial part of our research process. Our low production may be explained by the fact that we tasked relatively junior, inexperienced staff with these reports. We did not prioritize hiring a specialist, likely someone with a PhD in economics or the equivalent, who would have likely been able to complete many more reports during this time. This delayed our research and potentially led us to recommend fewer Top Charities than we otherwise might have.

How we addressed our mistake (posted June 2017): In September 2016, we began recruiting for a Senior Fellow to fill this role. The role was filled in May 2017.

2013 to 2016: Failure to address misconceptions organizations have about our application process

How we fell short: We realized in 2016 that some organizations had misconceptions about our criteria for grantmaking and our research process. For example, some organizations told us that they thought programs could only be recommended for three years; others weren’t aware that we had recommended million-dollar “incentive grants” to Top Charities.

How we addressed our mistake (posted December 2016): We assigned a staff member the duties of charity liaison and made them responsible for communicating with organizations that are considering applying, to help them with our process and correct misconceptions.

2009 to 2012: Errors in publishing private material

How we fell short: There were two issues, one larger and one smaller:

  • Since 2009, we’ve made a practice of publishing notes from conversations with organizations, subject matter experts, and other stakeholders. Our practice is to share the conversation notes we take with the other party before publication so that they can make changes to the text. We only publish the version of the notes that the other party approves and will keep the entire conversation confidential if the party asks us to.

    In November 2012, a staff member completed an audit of all conversations that we had published. He identified two instances in which we had erroneously published the pre-publication (i.e., not-yet-approved) version of the notes. We emailed both organizations to apologize and inform them of the information that we had erroneously shared.

  • In October 2012, we published a blog post titled, “Evaluating people.” Though the final version of the post did not discuss specific people or organizations, a draft version of the post had done so. We erroneously published the draft version, which discussed individuals. We recognized our error within five minutes of posting and replaced the post with the correct version; the draft post remained available in Google’s cache for several hours and was likely visible to people who received the blog via RSS if their RSS reader was open before we corrected the error (and they did not refresh it).

    We immediately emailed all of the organizations and people that we had mentioned to apologize and included the section we had written about them. Note that none of the information we published was confidential; we merely did not intend to publish this information and it had not been fully vetted by GiveWell staff and sent to the organizations for pre-publication comment.

How we addressed our mistake (posted December 2012): In November 2012, we instituted a new practice for publishing conversation notes. We began to internally store both private and publishable versions of conversation notes in separate folders (to reduce the likelihood that we upload the wrong file) and assigned a staff member to perform a weekly audit to check whether any confidential materials have been uploaded. As of this writing, we have performed three audits and found no instances of publishing private material.

We take the issue of publishing private materials very seriously because parties that share private materials with us must have confidence that we will protect their privacy. We have therefore reexamined our procedures for uploading files to our website and are planning to institute a full-scale audit of files that are currently public, as well as an ongoing procedure to audit our uploads.

Update (posted October 2016): We established a publishing process that clearly separates publishable versions of conversation notes from private versions of notes, periodically auditing published notes to ensure that all interviewees’ suggestions have been incorporated. At the time of this update, our process requires that explicit approval to publish is given for each file we upload, and we periodically audit these uploads to ensure that private information has not been uploaded to our server.
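
As an illustration of why the folder separation matters, here is a minimal sketch of the kind of mechanical audit it enables. The folder names and matching rule are hypothetical, not a description of our actual systems:

```python
from pathlib import Path

# Hypothetical folder layout: private and publishable versions of notes
# are kept in separate folders, so a private filename appearing among
# the public uploads is a red flag that the wrong file was uploaded.
PRIVATE_DIR = Path("conversation_notes/private")
PUBLIC_UPLOADS = Path("site/uploads")

def audit_uploads() -> list[str]:
    """Return names of public uploads that match a private note's filename."""
    private_names = {p.name for p in PRIVATE_DIR.glob("*") if p.is_file()}
    return sorted(p.name for p in PUBLIC_UPLOADS.glob("*")
                  if p.is_file() and p.name in private_names)

if __name__ == "__main__":
    flagged = audit_uploads()
    if flagged:
        print("Review these uploads for private material:", flagged)
```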

2006 to 2011: Tone issues

How we fell short: We continue to struggle with an appropriate tone on our blog, one that neither understates nor overstates our confidence in our views (particularly when it comes to charities that we do not recommend). An example of a problematic tone is our December 2009 blog post, Celebrated charities that we don’t recommend. Although it is literally true that we don’t recommend any of the organizations listed in that post, and although we stand by the content of each individual blog post linked, the summaries make it sound as though we are confident that these organizations are not doing good work; in fact, it would be more accurate to say that the information we would need to be confident isn’t available, and we therefore recommend that donors give elsewhere unless they have information we don’t.

We wish to be explicit that we are forming best guesses based on limited information and are always open to changing our minds, but readers often misunderstand us and believe we have formed confident (and, in particular, negative) judgments. This leads to unnecessary hostility from, and unnecessary public relations problems for, the groups we discuss.

How we addressed our mistake (posted July 2010): We feel that our tone has slowly become more cautious and accurate over time. At the time of this update, we are also resolving to share anything that might be perceived as negative with the group it discusses before we publish it, giving them a chance to correct both facts and tone. (We have done this since our inception for charity reviews, but now intend to do it for blog posts and any other public content as well.)

July 2009 to November 2010: Quantitative charity ratings that confused rather than clarified our stances

How we fell short: Between July 2009 and November 2010, we assigned zero- to three-star ratings to all programs we examined. We did so in response to feedback from our fans and followers—in particular, arguments that people want easily digested, unambiguous “bottom line” information that can help them make a decision in a hurry and with a clean conscience. Ultimately, however, we decided that the costs of the ratings—in terms of giving people the wrong impression about where we stood on particular programs—outweighed the benefits.

How we addressed our mistake (posted November 2010): By December 2010, we will replace our quantitative ratings with more complex and ambiguous bottom lines that link to our full reviews.

December 2007: Overaggressive and inappropriate marketing

How we fell short: As part of an effort to gain publicity, GiveWell’s staff (Holden and Elie) posted comments on many blogs that did not give adequate disclosure of our identities (we used our first names, but not our full names, and we didn’t note that we were associated with GiveWell); in a smaller number of cases, we posted comments and sent emails that deliberately concealed our identities. Our actions were wrong and rightly damaged GiveWell’s reputation. More detail is available via the page for the board meeting that we held in response.

Given the nature of our work, it is essential that we hold ourselves to the highest standards of transparency in everything we do. Our poor judgment caused many people who had not previously encountered GiveWell to become extremely hostile to it.

How we addressed our mistake: We issued a full public disclosure and apology, and directly notified all existing GiveWell donors of the incident. We held a Board meeting and handed out penalties that were publicly disclosed, along with the audio of the meeting. We increased the Board’s degree of oversight over staff, particularly with regard to public communications.

June 2007: Poorly constructed “causes” led to suboptimal grant allocation

How we fell short: For our first year of research, we grouped charities into causes (“Saving lives,” “Global poverty,” etc.) based on the idea that programs within one cause could be decided on by rough but consistent metrics: for example, we had planned to decide Cause 1 (saving lives in Africa) largely on the basis of estimating the “cost per life saved” for each applicant. The extremely disparate nature of different programs’ activities meant that there were major limits to this type of analysis (we had anticipated some limits, but we encountered more).

Because of our commitment to make one grant per cause and our overly rigid and narrow definitions of “causes,” we feel that we allocated our grant money suboptimally. For example, all Board members agreed that we had high confidence in two of our Cause 1 (saving lives) applicants, but very low confidence in all of our Cause 2 (global poverty) applicants. Yet we had to give equal-sized grants to the top applicant in each cause (and give nothing to the second-place applicant in Cause 1).

How we addressed our mistake (posted 2007): We shifted our approach to more broadly defined “causes,” which gave us more flexibility to grant to the organizations that appealed to us most. We also switched to exploring broad sets of programs that intersect in terms of the people they serve and the research needed to understand them, rather than narrower causes based on the goal of an “apples to apples” comparison using consistent metrics.

Smaller issues

For several years and ongoing (posted in 2024): Failure to estimate the interactions and overlap between programs

How we fall short: We did not adequately consider or model the potential interactions and overlaps between different health programs we fund or that are being implemented in the same regions. This oversight could lead to inaccurate estimations of the combined impact of these programs. Specific examples include:

  • In regions like Northern Nigeria, multiple programs by GiveWell or others are being delivered simultaneously, including insecticide-treated nets (ITNs), seasonal malaria chemoprevention (SMC), vaccines, oral rehydration solution (ORS), and azithromycin distribution.1 We have not thoroughly assessed how these overlapping interventions might interact or affect each other’s efficacy.
  • In our vitamin A supplementation (VAS) cost-effectiveness analysis, we did not account for the potential interaction between VAS and the expected scale-up of azithromycin distribution in high-mortality settings. This oversight could be leading us to overestimate VAS cost-effectiveness by approximately 20%.2

More broadly, we have not sufficiently examined how our focus on funding vertical programs (those that deliver a specific intervention) might impact overall health systems and the delivery of other essential health services.

By not considering these interactions, we risk overestimating the combined impact of multiple interventions and potentially missing opportunities to achieve greater impact through more integrated approaches.

Steps we’re taking to improve:

  • We plan to develop an approach to modeling overlapping effects of programs and address this issue in upcoming grant investigations where overlap is most likely, such as considering the interaction between azithromycin distribution and VAS.
  • We plan to publish our view on why we typically support vertical over horizontal programs to solicit feedback and encourage discussion on this approach.

By addressing these issues, we aim to improve the accuracy of our impact estimates, identify potential synergies between programs, and ensure our funding decisions consider the broader context of health systems in the regions where we work.

This issue was raised as a part of our “red teaming” of Top Charities. You can read more about this mistake and our broader red-teaming process here.

For several years up to June 2024: Failure to fully account for individuals receiving interventions from other sources

How we fell short: We did not adequately investigate or account for the possibility that individuals might receive interventions like insecticide-treated nets (ITNs), vaccines, or vitamin A supplementation (VAS) from sources other than the programs we fund. While we discussed the possibility that recipients receive interventions from other sources in our pages on ITNs, seasonal malaria chemoprevention (SMC), VAS, and vaccines, we now believe that the adjustments we made were insufficient. For example:

  • In our analysis of ITN distribution campaigns in the Democratic Republic of the Congo (DRC), we assumed that only 5% of the population would obtain nets from alternative (non-campaign) sources,3 based on trials conducted about 30 years ago. However, more recent evidence suggests this figure may be significantly higher, potentially 25-50% for children under 3 years old.4
  • For New Incentives’ conditional cash transfer program for vaccinations in Nigeria, we may have underestimated the rate at which vaccination coverage was increasing in the absence of the program.5 Our adjustment was equivalent to assuming coverage increased by roughly 1.5 percentage points per year, while some surveys indicate it may have been increasing by 5 percentage points per year in several Nigerian states prior to New Incentives’ entry.6
  • In our evaluation of vitamin A supplementation (VAS) programs, we relied on outdated surveys and modeled estimates to determine vitamin A deficiency rates, without fully accounting for the potential impact of vitamin A fortification programs introduced in many countries since those surveys were conducted.7

These oversights could have led to overestimation of our programs’ impact and cost-effectiveness. For instance, in the case of ITN distribution in DRC, this issue could have potentially lowered our estimate of cost-effectiveness by 15-30%.8

How we addressed our mistake (updated October 2025):

  • We updated our analysis of ITN distributions in DRC to account for higher rates of routine distribution, and we have investigated counterfactual coverage in other countries where we fund net distributions.
  • We revised our estimates of counterfactual vaccination coverage for New Incentives’ program to account for the underlying increase in vaccination rates over time.
  • We explored funding additional surveys of vitamin A deficiency in countries where we expect to consider large VAS grants, to get more up-to-date and accurate data.
  • In the Top Charity cost-effectiveness analyses we use for decision-making, we explicitly state our assumptions about the percentage of individuals who would receive interventions from other sources.
  • We engaged with experts to better understand how campaigns for health commodities we fund interact with routine distribution systems, and we have considered supporting routine distribution in some areas.

We took these steps to improve the accuracy of our impact estimates and ensure we’re directing funding to where it can have the greatest additional benefit.

This issue was raised as a part of our “red teaming” of Top Charities. You can read more about this mistake and our broader red-teaming process here.

For several years up to January 2024: Failure to engage more frequently with outside experts

How we fell short: We did not consistently or frequently enough seek input from external experts, including implementation experts, researchers, individuals with in-country experience, and fellow funders. This limited engagement may have caused us to miss important perspectives and insights that could have improved our analyses and funding decisions. Specific examples include:

  • During our red teaming process, external experts pointed out that we may be using overly optimistic or outdated assumptions on insecticide-treated net (ITN) durability.9 This insight, which we had not previously identified, could significantly affect our cost-effectiveness estimates for ITN programs.
  • Conversations with malaria experts and program implementers, conducted in parallel with our red teaming, revealed that more individuals were likely receiving nets via routine distribution than we had previously estimated. This information could have important implications for our funding decisions related to mass net distribution campaigns.

By not consistently seeking external input, we risked operating with incomplete or outdated information, potentially leading to suboptimal funding decisions or missed opportunities for greater impact.

How we addressed our mistake (posted November 2024):

  • We now more regularly attend conferences with experts in areas where we fund programs, such as malaria, vaccination, and nutrition, to stay current with the latest research and implementation insights.
  • We increased our outreach to experts as a standard part of our grant investigations and intervention research processes. While we have always consulted with program implementers and researchers to some extent, we now allocate more time to these conversations than we had in the past.
  • We implemented new approaches for soliciting feedback on our work from a wider range of experts and stakeholders.

By increasing our engagement with outside experts, we aim to broaden our perspective, challenge our assumptions, and ultimately improve the quality and impact of our grantmaking decisions.

This issue was raised as a part of our “red teaming” of Top Charities. You can read more about this mistake and our broader red-teaming process here.

For several years up to November 2023: Failure to sense-check all raw data

How we fell short: Note that we don’t list every small research mistake we make and correct. This page lists mistakes that “affect the impression that people external to the organization have of our work and its reliability.” We list these two examples because they’re representative of a category of research error we have made.

In brief, we estimated some parameters in our cost-effectiveness models by plugging in raw data at face value without subjecting the numbers to common-sense scrutiny or examining how they could be inaccurate.

This is a quote from our writeup on how we address uncertainty:

To estimate insecticide resistance across countries, we look at bioassay test results on mosquito mortality. These tests essentially expose mosquitoes to insecticide and record the percentage of mosquitoes that die. The tests are very noisy: in many countries, bioassay test results range from 0% to 100%—i.e., the maximum range possible. To come up with country-specific estimates, we take the average of all tests that have been conducted in each country and do not make any further adjustments to bring the results more in line with our common-sense intuition.

Another example comes from that same page:

Another major program area we support is childhood immunization… To model the cost-effectiveness of these programs, we need to take a stance on the share of deaths that a vaccine prevents for a given disease. This assumption enters our cost-effectiveness estimates through our etiology adjustments …. To estimate an etiology adjustment for the rotavirus vaccine, which targets diarrhoeal deaths, we do the following:

  • Take raw IHME data on the number of deaths from diarrhea among under 5s in the sub-regions where these programs operate
  • Take raw IHME data on the number of deaths from rotavirus (a subset of diarrheal deaths)
  • Divide the two to get an estimate of the % of diarrhea deaths in each region that could be targeted by the rotavirus vaccine

As Figure 5 shows, this leads to implausibly large differences between countries; we effectively assume that the rotavirus vaccine is almost completely ineffective at preventing diarrhoeal deaths in India. This seems like a bad assumption; the rotavirus vaccine is part of India’s routine immunization schedule, and a randomized controlled trial in India that administered the rotavirus vaccine to infants showed a 54% reduction in severe gastroenteritis.
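
To make the failure mode concrete, here is a minimal sketch of the naive division described in the quote, together with the kind of sense check we now apply. The death counts are invented for illustration; the real inputs are IHME estimates for each sub-region:

```python
# Naive etiology adjustment: divide raw rotavirus deaths by raw diarrheal
# deaths, passing any noise in the inputs straight into the estimate.
raw_deaths = {
    # region: (diarrheal deaths under 5, rotavirus deaths under 5)
    "Region A": (50_000, 15_000),
    "Region B": (40_000, 400),   # an implausibly low rotavirus estimate
}

for region, (diarrhea, rotavirus) in raw_deaths.items():
    share = rotavirus / diarrhea  # share of diarrheal deaths the vaccine could target
    # Sense check: a share near zero implies the vaccine can prevent almost
    # no diarrheal deaths, which should be reconciled against outside
    # evidence (e.g., the 54% reduction in severe gastroenteritis in the
    # Indian RCT mentioned above).
    flag = "  <- implausibly low; review inputs" if share < 0.05 else ""
    print(f"{region}: {share:.1%}{flag}")
```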

How we addressed our mistake (posted April 2024): We began taking the steps described in our writeup on uncertainty to address this issue. We are now notably more attentive to the data we aggregate to arrive at our estimates, thus ensuring that we don’t follow the (sometimes noisy) data we have without sense-checking the numbers.

Late 2020 to early 2022: Overestimated funds raised

How we fell short: In late 2021, we believed (and we wrote) that we would raise $1 billion annually by 2025. This was a massive overestimate (which we corrected in this mid-2022 post), and this mistake led to the following long-term problems:

  • In late 2021, we worried that our research might not be able to keep up with the volume of donations we expected. That is, we thought we’d raise significantly more funding than the cost-effective funding needs we would identify. Because we’re committed to being transparent with donors, we wrote that we were holding onto funds we had received (and that we expected to hold funds in the future) because we weren’t finding enough grant opportunities to give them to. Unfortunately, the way we communicated about this led to a long-standing, hard-to-correct belief in our audience that we have more funding than we can spend.
  • Because we believed that we would raise so much money, we put significantly more attention on building our research team than on building our outreach team, leading to a further imbalance between the volume of highly cost-effective funding opportunities we’ve identified and our ability to raise sufficient money to fill those funding gaps.

How we addressed our mistake (posted April 2024):

  • We were very explicit publicly about two facts. First, we expect to find cost-effective programs to which we can direct all the funding we receive. Second, the organizations we recommend are in fact funding-constrained.
  • We hired for senior roles across our outreach team to build outreach capacity so that we can raise more money and fill more of the most cost-effective funding gaps we find.

We previously shared another mistake related to this episode. For more detail, see below in the section titled “2021: Miscalculation of and subsequent miscommunication around rollover funds.”

April 2022: Failures of training and communication left us vulnerable to a crypto scam

How we fell short: In April 2022, we received an email requesting a refund of a cryptocurrency donation, and we decided to grant it despite our no-refunds policy. We later realized that this request hadn’t come from the real donor. We credited the real donor with the gift and lost $4,600, which we made up for by drawing on our unrestricted funding.

Cryptocurrency donations are especially fertile ground for scams because information about all crypto transactions is publicly available online, except for the identity of the person initiating the transaction. The email we got in this case largely fit the description of a common scam: a person claims that they’ve accidentally transferred a larger amount than they intended, often providing screenshots of public details of the transaction as "proof," and asks for a refund, though they didn’t actually make the donation themselves.

GiveWell had safeguards in place against this, including requesting that all crypto donors fill out a donation report form against which to verify such claims and maintaining a no-refunds policy (for all types of donations, but particularly for crypto). However, the donor relations staff handling requests like this were relatively new to their roles at the time and unfamiliar with this type of scenario, and they decided to override the no-refunds policy in light of what they felt was a straightforward request.

We think this mistake was largely caused by a failure of training and knowledge sharing with the new donor relations staff:

  • We had made exceptions to the no-refunds policy in the past, but we hadn’t adequately documented the specific and limited reasons for which exceptions could be made. We should have made these clearer in our internal training materials so new staff would be less reliant on judgment calls. We should also have communicated the no-refunds policy more clearly on our website.
  • Former donor relations staff had encountered this type of scam before, but we hadn’t included information about it in our training materials.

How we addressed our mistake (posted September 2022): To avoid this in the future, we did the following:

  • Provided extra training on crypto scams to the donor relations team and incorporated this information into our training materials for new staff.
  • Revised the cryptocurrency donation pages on our website to clearly highlight that crypto donations are non-refundable and that donation report forms should be submitted prior to a donation.
  • Circulated an internal memo clarifying our no-refunds policy for relevant staff.
  • Discussed our cryptocurrency donation practices with experts and implemented best practices for both straightforward and more complicated transactions to reduce the incidence of fraud.

If you are considering making a cryptocurrency donation and want to know more about the steps we take to prevent fraud, please reach out to donations@givewell.org.

2021: Miscalculation of and subsequent miscommunication around rollover funds

Rollover funds are funds that we raise in a certain year but choose not to spend in that year, instead “rolling them over" to the following year because we believe those funds will have a greater impact if spent in the future. For background on rollover funds, see the page we published here.

How we fell short: In November 2021, we announced that we expected to roll over about $110 million in funding to grant to future opportunities. We ultimately rolled over substantially less. We rolled over $18 million that was available for grantmaking as of the end of metrics year 2021 (i.e., January 31, 2022). We also carried over an additional approximately $40 million that was received in metrics year 2021 but was not yet available for granting; this was a combination of:

  • unrestricted funds that were designated by the Board for grantmaking in mid-2022, in accordance with our excess assets policy
  • donations given to the Top Charities Fund in January 2022, which were allocated alongside donations given to the Top Charities Fund in the rest of Q1 2022

While our forecast was roughly accurate about both funds raised and funds directed, we failed to define the question well enough to predict how much of our available funding we would have left over.

Much of the discrepancy came from:

  1. Including funds given through GiveWell and designated for specific organizations (e.g., a donation given through our website for the Against Malaria Foundation) on one side of the ledger but not the other. These funds were granted out to the organizations to which they were designated, but we had erroneously been considering them as adding to the total amount of funds that would be available for granting at our discretion. This led to approximately $18 million less in funds available than forecasted.
  2. Not accounting for the contingency funding that was earmarked for some 2021 grants. These are funds that are currently held by Open Philanthropy but are earmarked for particular programs in the event that the programs require them (e.g., if we stop supporting a program in the future, this funding will be granted out as exit funding for that program). Some of this funding may be returned to our budget in the future, if it goes unspent once the grant period has ended, but for now these funds aren’t available to us for granting because they’ve been earmarked. This led to about $25 million less in funds available than forecasted.
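
In rough terms, these two errors account for much of the gap between the forecast and the funds we actually held onto. Here is a simplified reconciliation using the figures above; this is an illustration of the arithmetic, not our actual analysis:

```python
# Back-of-the-envelope reconciliation, in millions of dollars, using the
# figures from the text.
forecast = 110        # rollover projected in November 2021
rolled_over = 18      # available for grantmaking at end of metrics year 2021
carried_over = 40     # received in 2021 but not yet available for granting

discrepancy = forecast - (rolled_over + carried_over)  # ~52

designated_double_count = 18  # error 1: designated donations counted as discretionary
contingency_earmarked = 25    # error 2: earmarked contingency funds counted as available
explained = designated_double_count + contingency_earmarked  # ~43

print(f"~${discrepancy}M gap, of which the two errors explain ~${explained}M")
```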

This discrepancy didn’t change the bottom line or lead to a suboptimal allocation of funds: we thought we would raise more funding than we could allocate in 2021, and we did. But, we think the conceptual mistakes in our analysis combined with how we publicized the (erroneous) projection of $110 million in rollover funds led to lower overall funds raised. If we had made a more accurate prediction, we probably would have placed less emphasis on rollover funding in our public communications. We expect this would have led to more funds raised for our recommendations and more lives saved or improved, given that as of June 2022, we believe we’ve found more highly cost-effective funding opportunities than we’ll be able to fund this year.

How we addressed our mistake (posted July 2022): We learned from the specific mistakes we made in 2021 so that we can now approach this type of analysis with more clarity around the different pots of funding at play and how they interact. In the future, when we take on major pieces of analysis like this, we’ll have more awareness of potential errors in our methodology and subject the analysis to more thorough review.

November 2018: Spreadsheet errors led to additional funding for one Top Charity

How we fell short: In November 2018, we used this spreadsheet to calculate how much funding to recommend that Good Ventures grant to each of our Top Charities. Since that time, we have become aware of two errors in the spreadsheet:

  1. In a sheet that consolidated information from other sheets, we mistakenly included one funding gap twice. This led us to calculate the amount to recommend that Good Ventures grant to Evidence Action’s Deworm the World Initiative as $100,000 higher than we would have without the error ($10.4 million instead of $10.3 million). We learned of this error because Deworm the World brought it to our attention. We expect to reduce the amount that we recommend that Good Ventures grant to Evidence Action in a future grant by an offsetting amount.
  2. In the same spreadsheet, we increased our estimate of the total amount of additional funding that Malaria Consortium could absorb by $4.8 million over what we had calculated originally, based on new information we received from Malaria Consortium. We later realized that we had not added in Malaria Consortium’s 10% overhead rate, leading us to underestimate the total Malaria Consortium could absorb by $480,000. This did not affect our recommendation to Good Ventures or other donors because the amount we recommended these donors give was limited by the amount of funding available, rather than by the amount that Malaria Consortium could absorb.

How we addressed our mistake: The first time that we used this spreadsheet format to calculate our recommendations to Good Ventures was in 2018. While the errors made then did not, in the end, result in over- or under-funding any Top Charities, they are indicative of ways spreadsheet errors could lead to mistakes in funding levels. We updated the format to reduce the risk of error and to build in checks for discrepancies.
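
As an illustration of what such checks can look like, here is a minimal sketch expressed in code rather than spreadsheet formulas. The structure and figures are simplified from the two errors described above, not taken from our actual spreadsheet:

```python
# Check 1: the summary sheet's total should equal the sum of the
# per-charity funding gaps; a mismatch means a gap was double-counted or
# dropped (the error that added $100,000 to our Deworm the World figure).
funding_gaps = {
    "Deworm the World": 10_300_000,
    "Malaria Consortium": 4_800_000,
}
summary_sheet_total = 15_100_000  # carried separately, as in a spreadsheet
assert summary_sheet_total == sum(funding_gaps.values()), "double-counted or missing gap"

# Check 2: room-for-more-funding figures should include the charity's
# overhead rate (the step we skipped, understating Malaria Consortium's
# capacity by $480,000).
OVERHEAD_RATE = 0.10
additional_program_costs = 4_800_000
absorbable = additional_program_costs * (1 + OVERHEAD_RATE)
assert round(absorbable) == 5_280_000
```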

2017: Failure to publish internal metrics report

How we fell short: Each year, GiveWell publishes a metrics report on our money moved and web traffic. These metrics are part of how we evaluate ourselves. We failed to publish a complete metrics report in 2017, only publishing an interim report at the end of September.

How we addressed our mistake: We had misassessed the difficulty involved in completing the metrics report. We reassigned responsibility to another staff member and prioritized publishing our 2016 metrics report as soon as possible, followed by our 2017 metrics report as early in 2018 as possible.

November 29, 2016 to December 23, 2016: Poor communication about Top Charity recommendations restricted to a specific program

How we fell short: On November 29, 2016, we released updated charity recommendations. Three of our seven Top Charities implemented a variety of programs, and our recommendation for them was restricted to a specific program. We did not clearly communicate this fact on our Top Charities or donate pages, potentially causing donors who gave directly to these three organizations (as opposed to giving via the GiveWell website) to fail to restrict their donations to the programs we recommend.

How we addressed our mistake: We updated the pages to reflect the fact that our recommendation for these charities is program-specific.

December 2014: Errors in our cost-effectiveness analysis of Development Media International (DMI)

How we fell short: In early 2015, we discovered some errors in our cost-effectiveness analysis of DMI. See this blog post for details.

How we addressed our mistake: We improved the general transparency and clarity of our cost-effectiveness models, and explicitly prioritized work on cost-effectiveness throughout our research process. See this section of our 2015 annual review for more.

November to December 2014: Lack of confidence in the cost-effectiveness analyses we relied on for our Top Charities recommendations

How we fell short: We were not highly confident in our cost-effectiveness estimates when we announced our updated program recommendations at the end of 2014, a fact we noted in the post, because we finalized our cost-effectiveness analyses later in the year than would have been ideal. See this part of our 2014 annual review for more detail.

How we addressed our mistake: We improved these analyses by reworking our cost-effectiveness models to improve the general transparency and clarity of the analyses and explicitly prioritizing work on cost-effectiveness throughout our research process.

We also experimented with more formal project management to increase the likelihood of completing all tasks necessary for our year-end recommendations update at the appropriate time.

January to December 2014: Completed fewer intervention reports than projected

How we fell short: We published fewer intervention reports than we had planned to at the beginning of 2014. We completed two new intervention reports in 2014, but at the beginning of 2014, we wrote that we hoped to publish 9-14 new reports during the year. On reflection, our goal of publishing 9-14 intervention reports was arbitrary and unrealistic given the amount of time it had typically taken us to complete intervention reports in the past. See this part of our 2014 annual review for more detail.

How we addressed our mistake: We have learned more about how much work is involved in completing an intervention report and took steps to make more realistic projections about how many we can complete in the future.

November 2014: Suboptimal grant recommendation to Good Ventures

How we fell short: In 2014, we erred in our recommendation to Good Ventures about its giving allocation to our Top Charities. We made this recommendation two weeks before we announced our recommendations publicly so that we could announce their grants as part of our Top Charities announcement. If we had fully completed our analysis before making a recommendation to Good Ventures, we likely would have recommended relatively more to AMF and relatively less to GiveDirectly. See this part of our 2014 annual review for more detail.

How we addressed our mistake: In the end, we adjusted the public targets we announced based on the grants Good Ventures had committed to, so we don’t believe that donors gave suboptimally overall. In the future we expect to make—and announce—our recommendations to Good Ventures and the general public simultaneously.

November 2014: Not informing candidate charities of our recommendation structure prior to publishing recommendations

How we fell short: In our 2014 recommendation cycle, we did not alert our candidate charities to our “Standout Charity” second-tier rating prior to announcing our recommendations publicly. Some of our candidate charities were surprised to see themselves ranked as a “Standout Charity,” as they had assumed that they would either be recommended as a Top Charity or not recommended at all.

How we addressed our mistake: We are now more cognizant of how we communicate with organizations and continue to solicit feedback from them so we can identify any other ways in which our communication with them is suboptimal.

July 2014: Published an update to the intervention report on cash transfers that misstated our view

How we fell short: Elie assigned a relatively new Research Analyst to the task of updating the intervention report on cash transfers. The analyst made the specific updates asked for in the task, which led him to change the report’s conclusion on the effect of cash transfers on business expenditures and revenue. A Summer Research Analyst vetted the page, and we published it. After publishing the update, another GiveWell staff member, who had worked on the page previously, noticed that the report’s conclusion on business expenditures and revenue misstated our view.

How we addressed our mistake: We made two changes. First, when passing off ownership of a page from one staff member to another, we began to involve all staff members who previously owned the page via an explicit "hand-off" meeting and by getting their approval before publishing the page. Second, we became more careful to ensure that all changes made by relatively inexperienced staff are reviewed by more experienced staff before publication.

Update (posted October 2016): At the time of this update, we still aim to hold an explicit “hand-off” meeting that includes staff who previously owned the page, although this meeting does not always include all staff who previously owned the page. We do not require the approval of all staff who previously owned the page prior to publication.

February 2014: Incorrect information on homepage

How we fell short: On February 4, 2014, we asked our website developer to make a change to the code that generates our homepage. In the process, he inadvertently copied the homepage content from November 2013. This content had two main differences from the up-to-date content. First, it described our Top Charities as “proven, cost-effective, underfunded and outstanding” rather than “evidence-backed, thoroughly vetted, and underfunded,” wording we changed in late 2013 because we felt the latter more accurately described our Top Charities. Second, it listed our Top Charities as AMF, GiveDirectly, and SCI, rather than GiveDirectly, SCI, and Deworm the World. According to our web analytics, 98 people visited our AMF page directly after visiting the homepage, possibly believing AMF to be one of our Top Charities. Note that the top of our AMF review correctly described our position on AMF at this time.

How we addressed our mistake: We discovered the problem on February 25 and fixed it immediately. We added a step to our standard process for checking the website after a developer works on it to look for content that is not up to date.
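
An automated version of that check might look like the following sketch. The expected and stale strings are the 2014 wordings from this incident; this is an illustration, not our actual tooling:

```python
def check_homepage(html: str) -> list[str]:
    """Return a list of problems found in the rendered homepage HTML."""
    expected = ["evidence-backed, thoroughly vetted, and underfunded",
                "Deworm the World"]
    stale = ["proven, cost-effective, underfunded and outstanding"]
    problems = [f"missing expected content: {s!r}" for s in expected if s not in html]
    problems += [f"stale content present: {s!r}" for s in stale if s in html]
    return problems

# Example with the stale November 2013 wording the developer reintroduced:
old_html = "<p>Our top charities are proven, cost-effective, underfunded and outstanding.</p>"
assert check_homepage(old_html)  # the regression is caught
```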

2013: Failure to disclose a social connection

How we fell short: Timothy Telleen-Lawton (GiveWell staff member as of April 2013) has been friends with Paul Niehaus (GiveDirectly President and Director) for many years. When Timothy met Holden Karnofsky (GiveWell’s Co-Founder and Co-Executive Director) in April 2011, he suggested that GiveWell look into GiveDirectly and introduced Holden and Paul by email. GiveWell later recommended GiveDirectly as a Top Charity in November 2012, before Timothy was on GiveWell staff.

In January 2013, Holden began living in a shared house with Timothy, around the same time that Timothy began a work trial at GiveWell. Paul has visited and stayed at the shared house several times. We should have publicly disclosed the social connection between Paul and both Holden and Timothy.

Note that this mistake solely relates to information we should have publicly disclosed to avoid any appearance of impropriety. We do not believe that this relationship had any impact on our rankings. Timothy was not the staff member responsible for the evaluation of GiveDirectly, and Holden has had relatively little interaction with Paul (and had relatively little interaction with Timothy prior to moving to San Francisco in 2013).

How we addressed our mistake: We publicly disclosed this fact in December 2013; at that time, we also created a page to disclose conflicts of interest.

February to September 2013: Infrequent updates on our top-ranked charity

How we fell short: We aimed to publish regular updates on the Against Malaria Foundation, but we went most of the year (February to September) without any updates. This was caused by our desire to publish comprehensive updates: we allowed the expectation that new information would shortly be available to delay the publication of brief updates containing meaningful but limited information.

How we addressed our mistake: In July 2013, we changed our process for completing Top Charity updates. We began publishing notes from our conversations with these organizations (as we do for many of the conversations we have more generally) to facilitate more timely updates on our Top Charities.

Update (posted October 2016): At the time of this update, we now plan to publish twice-yearly “refreshes” of all of our Top Charity recommendations, in addition to publishing conversation notes and relevant updates throughout the year.

May to June 2013: Unpublished website pages intermittently available publicly

How we fell short: From May 20 to June 26, private content was intermittently available to the public on the GiveWell website. A change we made on May 20 caused pages that were set to be visible only to staff to appear, in some browsers, as a login screen with the unpublished content below it. Unpublished content includes both confidential information and incomplete research. Confidential information on unpublished pages is generally information that we expect to be able to publish but have not yet received approval from an external party to publish. However, there are exceptions to this, and it is possible that sensitive information was revealed. We are not aware of any cases of sensitive information being revealed.

How we addressed our mistake: We fixed the problem a few hours after discovering it. We added monitoring of unpublished pages to our list of regular website checks.
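
For illustration, this incident belongs to a general class of bug in which a template renders a login prompt but still renders the restricted content beneath it. A minimal sketch in hypothetical code (not our actual website stack):

```python
from dataclasses import dataclass

@dataclass
class Page:
    body: str
    staff_only: bool

@dataclass
class User:
    is_staff: bool

LOGIN_PROMPT = "<form>Please log in</form>"

def render_broken(page: Page, user: User) -> str:
    html = ""
    if page.staff_only and not user.is_staff:
        html += LOGIN_PROMPT       # the prompt is shown...
    return html + page.body       # ...but the restricted body is still appended

def render_fixed(page: Page, user: User) -> str:
    if page.staff_only and not user.is_staff:
        return LOGIN_PROMPT        # stop here; the body is never rendered
    return page.body

draft = Page(body="<p>Unpublished research</p>", staff_only=True)
visitor = User(is_staff=False)
assert "Unpublished" in render_broken(draft, visitor)      # the leak
assert "Unpublished" not in render_fixed(draft, visitor)   # the fix
```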

April to December 2012: Taking too much of job applicants’ time early in the recruiting process

How we fell short: During this period, our jobs page invited applicants to apply for our research analyst role. We responded to every applicant by asking them to complete a “charity comparison assignment” in which each applicant compared three programs and discussed which they would support and why. This assignment took applicants between 6 and 10 hours to complete. During this period, approximately 50 applicants submitted the assignment, of whom we interviewed approximately 8.

We now feel that asking all applicants to complete this test assignment likely took more of their time than was necessary at an early stage in the recruiting process and may have led some strong applicants to choose not to apply.

How we addressed our mistake: We stopped asking all applicants to complete this assignment. In December 2012, we changed our jobs page to more clearly communicate about our hiring process.

March to November 2012: Poor planning led to delayed 2012 charity recommendations release

How we fell short: In GiveWell's early years, we aimed to release updated recommendations by December 1 in order to post our recommendations before "giving season," the weeks at the end of the year when the vast majority of donations are made. In 2011, we released our recommendations in the last week of November but then ran into problems related to donation processing. To alleviate those problems in the future, we planned to release our 2012 recommendations by November 1, giving us sufficient time to deal with problems before the end-of-year rush of giving.

In 2012, we did not release our recommendations until the last week of November (significantly missing our goal). We continued to publish research about the cost-effectiveness and evidence of effectiveness for the interventions run by our Top Charities throughout December, which meant that some donors were making their giving decisions before we had published all the relevant information. The primary cause of the delay was that we did not start work on GiveDirectly, the new 2012 top-rated charity, until mid-September, which did not give us enough time to finish its full review by the deadline of November 1.

How we addressed our mistake: In 2013, we aimed to release our recommendations by November 1. We considered possible top-rated charities on July 1 and moved forward with any contenders at that point.

Update (posted October 2016): We now aim to publish our Top Charity recommendations before the US Thanksgiving holiday each year, so as to make them available throughout the year-end giving season.

June 2012: Failure to discuss sensitive public communication with a board member

How we fell short: In late June 2012, we published a blog post on the partnership between GiveWell and Good Ventures. We generally discuss sensitive public communication with a board member before we post, but failed to do so in this case. The post was not as clear as it should have been about the nature of GiveWell’s relationship with Good Ventures. The post caused confusion among some in our audience; for example, we received questions about whether we had “merged.”

How we addressed our mistake: GiveWell staff began to be more attentive about sharing sensitive public communications with the board member responsible for public communication before posting.

July 2007 to March 2012: Phone call issues

How we fell short: Throughout GiveWell’s history, we have relied on Skype and staff’s individual cell phones to make phone calls. This led to instances of poor call quality or dropped calls, but given that GiveWell was a startup, those we spoke with generally understood. In addition, we had not always confirmed with participants the phone number to use for a particular call, or set up and sent agendas in advance. Earlier in GiveWell’s history, participants likely understood that we were a very new, small organization just getting started and aiming to control costs. But as we’ve grown, this is no longer a reasonable justification, and both of the problems listed here may have had implications for the professionalism we’ve projected to those we’ve spoken with.

How we addressed our mistake: We became more vigilant about confirming that all participants are aware of the number to use for scheduled calls. In March 2012, we set up dedicated lines and handsets for our calls.

December 2011: Poor communication to donors making larger donations (e.g., greater than $5,000) via the GiveWell website

How we fell short: In giving season 2011, there were 3 major issues which we communicated poorly about to donors:

  1. While Google covers credit card processing fees for organizations enrolled in the Google Grants program (which includes GiveWell itself and many of our Top Charities), many organizations are not enrolled, so donors who give to them via our website do pay credit card processing fees on their donations. While these fees are small in absolute terms for smaller donors, a 3% fee on a $10,000 donation is $300. Some donors may have realized this and chosen to give via credit card regardless; others, however, may not have realized it and would have preferred to mail a check to save the fee. October 2016 Update: Donors who support GiveWell or our Top Charities through donations to GiveWell are subject to payment processing fees that vary depending on the platform through which they donate; we no longer receive free processing through Google due to the end of that program. We have published a page detailing the options for donating to GiveWell and associated fees, as well as advice for larger donors interested in minimizing such fees.
  2. People making large donations very frequently run into problems with their credit card companies (because they are spending much more on a single item than they usually do). In our experience, about half of donations over $5,000 are declined the first time a donor tries to make the gift and are only cleared after the donor speaks with their card company. This creates confusion and unexpected hassle for donors trying to give to charity.
  3. Giving appreciated stock has beneficial tax implications for donors, allowing them to reduce future capital gains taxes and therefore give more to charity (without giving more “real” money). We did not broadcast this message to donors.

How we addressed our mistake: Though GiveWell’s responsibility for communicating about the points above varies, communicating well about all of them furthers our mission. We worked to communicate better about these points to larger donors, including through this 2012 blog post.

Update (posted October 2016): In addition to our page listing giving options for donors, we also created a page of advice for larger donors, including donating appreciated securities.

December 2011: Problems caused by GiveWell’s limited control over the process for donating to our Top Charities

How we fell short:

  • On December 21, 2011, a representative from Imperial College Foundation (the organization receiving donations for the support of the Schistosomiasis Control Initiative, our #2-rated charity in 2011) emailed us to let us know that its Google Checkout account had been suspended. Donors who wanted to give to SCI via the GiveWell website gave via Google Checkout, and though the Google Checkout button is on the GiveWell website, the organization owns the Checkout account and donations go directly to it. GiveWell staff therefore did not know there was a problem until the ICF representative informed us of it. We still do not know how long the problem lasted or whether any donors attempted to make donations while the account was suspended. (We do not even know how Google communicated the error to ICF.) ICF contacted Google but has not determined what led to the account suspension.

    Once we learned of the problem, we reconfigured donations to go through GiveWell.

  • As noted elsewhere on this page, many larger donations made via credit card are initially declined by credit card companies, because many donors give more to charity at once than they spend on any single purchase during the rest of the year. Because donations go directly to our charities, GiveWell at times has to coordinate with representatives of the organizations to cancel charges so that donors feel safe resubmitting their donation. This creates confusion, wastes time, and doesn’t allow donors to complete the transaction as quickly as they would like.
  • Setting up trackable donation processing for our Top Charities requires individual communication with each organization. This means that we must spend time communicating with each organization, and each must spend time creating its account. Also, if an organization does not have time to set up the account, or sets it up but the account has a problem, the required tracking may not be in place. With several organizations in 2011, tracking was either not set up at the time we released our recommendations or we needed to create a one-off workaround to track donations to them.

How we addressed our mistake:

  • We worked to better advise larger donors of their non-credit-card options for donating and potential hassles of donating via credit card. October 2016 Update: We have a page of advice for larger donors that discusses options for making large donations. We also have an information page discussing different donation options.
  • We considered switching over donations to all organizations to go through GiveWell so that we are immediately aware of any problems. October 2016 Update: We offer donors the option of donating to GiveWell for regranting to our Top Charities, or donating directly to our Top Charities and letting us know that they’ve done so.
  • We worked to complete our recommendations earlier in 2012 than 2011 (to give us additional time to address any problems that come up).

December 2011: Miscommunicating to donors about fees and the deductibility of donations to our Top Charity

How we fell short: Our top-rated charity in 2011 was the Against Malaria Foundation (AMF). We made two errors in the way we communicated to donors about the ramifications of donating to AMF.

  1. Fees: On our donate to AMF page, we told donors that "no fees are charged on donations to AMF." This was incorrect. Donors who give via AMF’s website are charged normal credit card processing fees. We now understand that we miscommunicated with AMF on this issue; AMF did not intend to communicate that there are no processing fees and was unaware that we were communicating this on our site.
  2. Tax deductibility in Australia: On our Top Charities page that we published on November 29, 2011, we listed Australia as one of the countries for which donors could take tax deductions. We believed this was accurate because AMF listed Australia as one of the countries in which it is a registered charity. In early December, an Australian donor emailed us to let us know that while AMF is a registered charity and corporations can deduct donations to it, it does not have a status that allows individuals to deduct donations to it. (This issue is discussed in a 2012 blog post.) November 2015 update: Gifts from individuals to AMF are now tax deductible in Australia. See this blog post for details.

How we addressed our mistake:

  1. Fees: We changed the language on our page to clarify that credit card fees are charged on donations via AMF’s website. We also provided donors who wished to give for the support of AMF the option to give donations directly to GiveWell. Because GiveWell was enrolled in Google’s Grants program, Google paid credit card processing fees for donations. GiveWell then had the ability to regrant these funds to AMF.

    October 2016 Update: Credit card processing fees are incurred if a donor supports AMF through GiveWell; they are no longer covered by Google. Additional details are available here.

  2. Tax deductibility in Australia: We took several actions. (1) We emailed Rob Mather, AMF’s CEO. He agreed that the charity status page on AMF’s website was misleading. AMF edited the page to clarify its status in Australia, and Rob Mather offered to refund any donations (or parts of donations) made by Australians relying on the fact that they could receive a tax deduction. (2) On our site, we removed Australia from the list of countries in which AMF is registered for individual-donor tax deductibility. (3) We emailed all Australian donors who had given to AMF (and had found AMF via GiveWell) since we had posted that donations to AMF are tax-deductible for Australians, to let them know we had erred, and we communicated Rob Mather’s offer to refund donations. At the time of this update, AMF was in the process of applying for tax-deductible status for individuals and planned to inform us if and when that was granted. AMF also told us that the two donors who asked for refunds both said they will donate the same amount to AMF when tax-deductible status is in place.

    Update (posted September 2015): Gifts from individuals to AMF are tax deductible in Australia. See this blog post for details.

Late 2009: Misinterpreted a key piece of information about an organization to which we gave a $125,000 grant

How we fell short: When reviewing Village Enterprise (formerly Village Enterprise Fund) in late 2009, we projected that they would spend 41% of total expenses on grants to business groups, because we misinterpreted a document they sent us, which projected spending 41% of total expenses on business grants and mentorship expenses combined. We do not know what mentorship expenses were expected to be, so we do not know the magnitude of our error. Village Enterprise ended up spending 20% of total expenses on business grants in FY 2010. We caught this mistake ourselves when we were updating the review in August 2011. Village Enterprise planned to spend 28% of total expenses on business grants in FY 2012.

How we addressed our mistake: We updated our review of Village Enterprise to reflect the correct distribution of expenses. Before publishing a page, we plan to have at least one additional GiveWell employee check the original source of figures that play a key role in our conclusions about an organization or program.

Update (posted October 2016): We do not always implement this check before publication; sometimes pages are vetted after they are published.

August 1, 2009, to December 31, 2009: Grant process insufficiently clear with applicants about our plans to publish materials

How we fell short: Between August 1, 2009, and December 31, 2009, we accepted applications for $250,000 in funding for economic empowerment programs in sub-Saharan Africa. We attempted to be extremely clear with organizations that we planned on sharing the materials they submitted, and that agreeing to disclosure was a condition of applying, but in a minority of cases, we failed to communicate this. We conceded these cases and gave the organizations in question the opportunity to have their materials—and even the mention of the fact that they had applied for funding—withheld.

We try to avoid keeping materials confidential unless absolutely necessary, and in this case our unclear communications led to confrontations and to confidentiality situations that could have been avoided.

Details in this blog post.

How we addressed our mistake:

  • We offered the minority of charities with whom we’d been unclear the option not only to have their materials omitted, but to have us not disclose the fact that they applied for funding from us.
  • We added language to the top of our charity reviews to clarify what a “0-star rating” means.
  • We decided to consider publishing pages on organizations under consideration before we accept materials from them, in order to make our intentions about disclosure and public discussion absolutely clear.

Update (posted October 2016): Details of our current transparency policy are available here.

November 25, 2009: Mishandling incentives to share information

How we fell short: A blog post discussing the Acumen Fund paraphrased information we’d been given during Acumen’s application for funding from us. An Acumen Fund representative told us this had come off as a "bait and switch": using the grant application as a pretense for gathering information that we could use for a negative piece. (This was not the case; we had invited Acumen to apply in the hopes that they would be a strong applicant, and would have written a similar blog post afterward if they had simply declined to speak with us.)

We try to avoid creating incentives for organizations to withhold information, given how little is available currently. Therefore, we are generally careful with how we use any substantive information that is disclosed, and generally check with the organization in question before publishing anything that could be construed as "using it to make a negative point." (An example is our post on microfinance repayment rates, which uses voluntarily disclosed information to raise concerns about the repayment rate while attempting to be clear that the organization in question should not be singled out for this disclosure. We checked with the organization discussed before making this post.)

In this case, we published our post without such a check, reasoning that we were not sharing any substantive materials (only paraphrasing general statements from representatives). Doing so gave the impression that sharing more information can result in more negative coverage.

We continue to struggle with the balance between disclosing as much information as possible and avoiding disincentives to share information. We will not find a solution in every case, but feel that we mishandled this one.

How we addressed our mistake: We let Acumen Fund know that we regret this incident and resolved to be more careful about quoting from representatives and grant applications in the future.

May 2009: Failed to remove two private references from a recording that we published

How we fell short: In May 2009, we discussed the Millions Saved project with a staff member of the project, Dr. Jessica Gottlieb, and then published a recording of the conversation on our website. Dr. Gottlieb approved making the recording public on the condition that we remove personal references she made during the conversation. We partially removed the references, but we failed to remove one person’s email address and Dr. Gottlieb’s suggestion that we speak with a particular person. We noticed this error in February 2014 while preparing to use this recording as part of a test assignment for potential employees. According to our logs, no one had downloaded the audio file during the previous year.

How we addressed our mistake: We notified Dr. Gottlieb about this mistake and apologized to her. Subsequent to (and unrelated to) this error, we implemented a formal procedure for reviewing uploaded files to confirm that all requested changes to files have been made.

January to September 2008: Paying insufficient attention to professional development and support

How we fell short: At our board meeting in January 2008, we agreed to explore options for professional development and mentoring, in light of the relative youth and inexperience of our staff. GiveWell staff put a lower priority on this than more time-sensitive goals, and while we explored a few options, we made little progress on it between January and September. At the September Board meeting, the Board criticized this lack of progress and reiterated the need for professional development and mentoring.

How we addressed our mistake: As of July 2009, we had two highly regular mentoring relationships, and two more in a “trial phase.” We also stepped up Board oversight through a monthly conference call (attendance was optional but generally high) and more regular calls with Board Vice-President Lindy Miller. An update on professional development was presented at our July 2009 Board meeting.

  1. Azithromycin distribution is funded in Nigeria by the Gates Foundation.

  2. We think the cost-effectiveness of VAS campaigns could be reduced by increased azithromycin distribution because we think both interventions may decrease mortality from an overlapping cause: non-malaria infectious disease. (For more information, see the “Will VAS remain impactful in the future?” section of our report on VAS (footnote 339 in particular). Note that we think azithromycin may also affect malaria mortality.) If azithromycin prevents deaths that VAS would have averted, the impact of a VAS campaign would be reduced. We very roughly calculate a potential 20% downward adjustment to the cost-effectiveness of VAS based on this possible overlap (a worked sketch of this arithmetic follows the list below):

    • Based on data from the Institute for Health Metrics and Evaluation (IHME), we roughly calculate that in Burkina Faso, approximately 55% of all mortality among children aged one month to four years stems from non-malaria infectious disease. In 2019, there were ~71,900 total deaths in this cohort, with ~15,700 malaria deaths and ~55,000 deaths from all infectious disease. (55,000 - 15,700) / 71,900 = ~55%. Source: Institute for Health Metrics and Evaluation (IHME). Used with permission. All rights reserved.
    • One study found that azithromycin leads to a ~14% reduction in all-cause mortality.
    • Assuming that VAS and azithromycin primarily avert the ~55% of deaths that are due to non-malaria infectious disease, and roughly adjusting downward by 20% for the assumption that azithromycin and VAS might treat slightly different subsets of that 55%, we estimate the total reduction in VAS’s cost-effectiveness to be ~20%: (share of non-malaria infectious deaths averted by azithromycin) * (adjustment for VAS and azithromycin treating different groups) = (14% / 55%) * (100% - 20%) = ~20%.
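
    As a minimal sketch, the arithmetic in this footnote can be reproduced as follows. All figures come from the footnote itself; the variable names are ours, for illustration only.

    ```python
    # Rough reproduction of footnote 2's ~20% downward adjustment for VAS
    # cost-effectiveness. All figures are from the footnote; variable names
    # are illustrative.

    # IHME data: Burkina Faso, children aged one month to four years, 2019
    total_deaths = 71_900
    malaria_deaths = 15_700
    all_infectious_deaths = 55_000

    # Share of all deaths due to non-malaria infectious disease (~55%)
    non_malaria_share = (all_infectious_deaths - malaria_deaths) / total_deaths

    # One study's estimate of azithromycin's reduction in all-cause mortality
    azithro_all_cause_reduction = 0.14

    # Assume azithromycin's entire benefit falls within the non-malaria
    # infectious disease category: it averts ~25% of those deaths
    azithro_share_of_category = azithro_all_cause_reduction / non_malaria_share

    # Rough 20% adjustment for VAS and azithromycin treating slightly
    # different subsets of that category
    different_subsets_adjustment = 0.20

    vas_adjustment = azithro_share_of_category * (1 - different_subsets_adjustment)
    print(f"Estimated reduction in VAS cost-effectiveness: {vas_adjustment:.0%}")
    # -> Estimated reduction in VAS cost-effectiveness: 20%
    ```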

  3. See our discussion in the “How many people would access nets from other sources?” section of our report on nets: “We currently have no adjustments in the model to account for concern 4 [‘We don’t fund a mass distribution campaign and neither does another funder. However, people get nets anyway because they buy them from shops, or access them through routine distribution channels (e.g., via health clinic visits).’]. Effectively, this amounts to assuming that people today are no more or less likely to obtain nets from other sources than they were during original studies testing the impact of ITNs on malaria (more below), which were predominantly conducted in the 1980s and 1990s. In these trials, around 5% to 10% of people had access to nets in the control group. By using these control groups as a stand-in for the counterfactual today, we effectively assume that 5% to 10% of people would get nets in the absence of any mass distribution campaign.”

  4. This estimate is based on an unpublished internal analysis.

  5. See this cell of our 2023 New Incentives cost-effectiveness analysis, which explains how we use New Incentives’ coverage surveys to estimate counterfactual coverage.

  6. This calculation was part of an unpublished internal analysis of three surveys:

  7. We describe our concern about vitamin A deficiency (VAD) rate estimates in our VAS intervention report: “Our analysis is very sensitive to vitamin A deficiency rates today, but our estimates are based on information we have low confidence in (10- to 20-year-old surveys of deficiency, updated for change over time, and modeled estimates from the Global Burden of Disease Project whose methodology we do not fully understand). Since these surveys were conducted, many countries have introduced vitamin A fortification programs, and we’re unsure how effective these have been at reducing deficiency rates.”

  8. We came to this estimate via a very rough calculation, making the following assumptions (a worked sketch follows the calculations below):

    • Instead of assuming 5% of people who receive nets from mass distribution would receive nets from routine distribution in the absence of mass campaigns (as we currently do), we assume:
      • 25-50% of children under 3 years old would receive nets from routine distribution in the absence of mass distribution campaigns
      • and 15% of people over 3 years old would receive nets from routine distribution in the absence of mass distribution campaigns.
    • Half of the benefits of nets campaigns (including from averted mortality and income effects later in life) are from increasing the number of children under 3 years old who receive nets, and half are from increasing the number of people over 3 years old who receive nets. This is a very rough assumption; we make it because our nets cost-effectiveness analysis primarily distinguishes between under- and over-5-year-olds (or 15-year-olds, in calculating long-term increases in income), rather than between under- and over-3-year-olds as these estimates of routine coverage do.
    • Our current cost-effectiveness estimate implicitly assumes that 5% of people would receive nets from routine distribution (see the “How many people would access nets from other sources?” section of our report on nets). We therefore subtract 5% from the numerator and denominator of the “share of over-/under-3 benefits lost due to nets from routine coverage” value to account for this percentage that is already not included in the cost-effectiveness estimate.

    Given these assumptions, we calculate the following potential changes in cost-effectiveness:

    • More conservatively, assuming routine distribution of nets would reach 50% of under-3s: 50% (proportion of benefits from under-3s) * ((50% - 5%) / (100% - 5%)) (share of under-3 benefits lost due to nets from routine coverage) + 50% (proportion of benefits from over-3s) * ((15% - 5%) / (100% - 5%)) (share of over-3 benefits lost due to nets from routine coverage) = ~29% reduction in cost-effectiveness.
    • Less conservatively, assuming routine distribution of nets would reach 25% of under-3s: 50% * ((25% - 5%) / (100% - 5%)) + 50% * ((15% - 5%) / (100% - 5%)) = ~16% reduction in cost-effectiveness.
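
    As a minimal sketch, assuming only the figures stated above, both scenarios can be reproduced with a small helper (the function and variable names are ours, for illustration only):

    ```python
    # Rough reproduction of footnote 8's sensitivity calculation for nets.
    # All figures are from the footnote; the function and names are illustrative.

    def ce_reduction(routine_under3, routine_over3=0.15,
                     baseline=0.05, benefits_under3_share=0.5):
        """Estimated reduction in nets cost-effectiveness if routine distribution
        would reach the given coverage shares in the absence of mass campaigns.
        The current estimate already assumes 5% routine coverage, so that
        baseline is subtracted from both numerator and denominator."""
        lost_under3 = (routine_under3 - baseline) / (1 - baseline)
        lost_over3 = (routine_over3 - baseline) / (1 - baseline)
        return (benefits_under3_share * lost_under3
                + (1 - benefits_under3_share) * lost_over3)

    # More conservative scenario: routine distribution reaches 50% of under-3s
    print(f"{ce_reduction(0.50):.0%}")  # -> 29%

    # Less conservative scenario: routine distribution reaches 25% of under-3s
    print(f"{ce_reduction(0.25):.0%}")  # -> 16%
    ```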

  9. Two experts we engaged with during red teaming, Justin Cohen, Vice President of Malaria and Neglected Tropical Disease at the Clinton Health Access Initiative (CHAI), and David McGuire, Director of Access and Country Engagement at the Innovative Vector Control Consortium (IVCC), suggested we may be using overly optimistic or outdated assumptions on net durability. (See discussion of net durability here.)