- Top charities
GiveWell aims to find the best giving opportunities we can and recommend them to donors. We tend to put a lot of investigation into the organizations we find most promising, and de-prioritize others based on limited information. When we decide not to prioritize an organization, we try to create a brief write-up of our thoughts on that charity because we want to be as transparent as possible about our reasoning.
The following write-up should be viewed in this context: it explains why we determined that we wouldn't be prioritizing the organization in question as a potential top charity. This write-up should not be taken as a "negative rating" of the charity. Rather, it is our attempt to be as clear as possible about the process by which we came to our top recommendations.
The last time we examined Freedom from Hunger was in May 2011. In our latest open-ended review of charities, we determined that it was unlikely to meet our criteria based on our past examination of it, so we did not revisit it.
We invite all charities that feel they meet our criteria to apply for consideration.
The content we created in May 2011 appears below. This content is likely to be no longer fully accurate, both with respect to what it says about Freedom from Hunger and with respect to what it implies about our own views and positions. With that said, we do feel that the takeaways from this examination are sufficient not to prioritize re-opening our investigation of this organization at this time.
Published: May 2011
Freedom from Hunger develops and tests programs that add supplementary services to microfinance and trains microfinance institutions to implement them. Freedom from Hunger's programs include providing business education, health education, and savings services. We think this is a valuable area to focus on. In our analysis of microfinance, we have noted that many microfinance institutions target financial rather than social results, and we have sought out microfinance institutions that focus on improving the lives of the people they serve. Therefore, we believe that Freedom from Hunger is trying to meet an important need.
In addition, Freedom from Hunger has a commendable focus on evaluation that is extremely rare among charities. It has been involved in highly rigorous evaluations of its programs, and it publishes numerous technical reports about its work on its websites.
Our review consisted of examining these rigorous evaluations and speaking with Freedom from Hunger's president, Chris Dunford. Freedom from Hunger does not currently qualify for our highest ratings because:
- The evidence that its programs work when implemented well is limited.
- There is no evidence that its programs are consistently implemented well by the partner organizations it trains.
Freedom from Hunger describes its strategy as (1) testing innovations to deal with hunger and (2) distributing tested innovations through other organizations.1 In practice, it trains a staff member of a microfinance organization to train other staff members to implement one of the following programs:2
- Credit with Education
- Saving for Change
- Microfinance and Health Protection (MAHP)
- Advancing Integrated Microfinance for Youth (AIM Youth)
Freedom from Hunger told us that some costs are paid for by partner organizations and that the amount each contributes is determined by whether the model has been previously tested and whether Freedom from Hunger has funding available.7
In considering Freedom from Hunger's impact, we asked two main questions:
As discussed below, there is some limited evidence that programs work when implemented well, but no evidence that they are consistently implemented well.
We reviewed all the relevant studies for Freedom from Hunger's programs.
In summary, the rigorous evaluations of Freedom from Hunger's programs found some effects on knowledge, but found limited or no effects on business revenues and limited effects on health behaviors.
A more detailed overview of the evidence from the most rigorous studies we found is available below. More detail on each study is available in the footnotes.
|Country|Quality of study design|Bottom line from abstract/summary|Outcomes showing statistically significant improvement|Outcomes not showing statistically significant effect|Outcomes showing statistically significant deterioration|
|---|---|---|---|---|---|
|Peru8|Rigorous (randomized controlled trial)|"We find little or no evidence of changes in key outcomes such as business revenue, profits, or employment. We nevertheless observed business knowledge improvements and increased client retention rates for the microfinance institution."9|8 indicators (2 of business results, 4 of business practices, 2 of institutional results)|32 indicators (6 indicators of business results, 10 indicators of business practice, 13 indicators of household outcomes, 3 indicators of institutional outcomes)|None|
|Dominican Republic10|Rigorous (randomized controlled trial)|"We find no significant effect from a standard, fundamentals-based accounting training [designed by Freedom from Hunger]. However, a simplified, rule-of-thumb training produced significant and economically meaningful improvements in business practices and outcomes."11|None|10 indicators of business practices and sales|None|
|India12|Somewhat rigorous (randomized controlled trial with imperfect randomization)|"Participation resulted mainly in improved confidence levels of the daughters (and mothers) regarding their money management. Statistically significant improvements in savings levels and effective bargaining were not detected in the quantitative studies...The only [health] topic with significant gains in the randomized controlled trial evaluation was HIV/AIDS. All other topics, such as hand-washing, diarrhea, nutrition and reproductive health, saw few significant differences...[Gains] were seen in the girls’ comfort levels in discussing the topics with their family members."13|Unmarried: 1 indicator of financial literacy, 6 indicators of health literacy; Married: 2 indicators of financial literacy|Unmarried: 14 indicators of financial literacy, 44 indicators of health literacy; Married: 14 indicators of financial literacy, 48 indicators of health literacy|Unmarried: 1 indicator of financial literacy; Married: 2 indicators of financial literacy|
|Ghana14|Results not reported on intent-to-treat basis|-|-|-|-|
|Bolivia15|Results not reported on intent-to-treat basis|-|-|-|-|
|Country|Quality of study design|Bottom line from abstract|Outcomes showing statistically significant improvement|Outcomes not showing a statistically significant effect|Outcomes showing statistically significant deterioration|
|---|---|---|---|---|---|
|Ghana16|Rigorous (randomized controlled trial)|"The malaria education complemented the other activities to increase knowledge and positive behaviors. Yet, even the increased knowledge and behaviors often were impeded by gaps in a family’s ability to access promoted prevention methods such as ITNs."17|1 indicator of net ownership; 4 indicators of net use by vulnerable group; net re-treatment|3 indicators of net ownership|None|
|Peru18|Rigorous (randomized controlled trial)|"Individuals in the IMCI treatment arm demonstrated more knowledge about a variety of issues related to child health, but there were no changes in anthropometric measures or reported child health status."19|Only knowledge indicators|All directly measured and reported child health indicators|None|
|Benin20|Somewhat rigorous (randomized controlled trial with imperfect randomization)|"Results revealed that the education villages perform somewhat better than the credit-only villages in malaria knowledge indicators [and] also have somewhat better malaria behaviors...Education villages were substantially more likely than credit-only villages to perform better on HIV knowledge indicators...There were no significant differences...when assessing knowledge and behavior change as a result of the childhood illnesses module."21|7 health indicators, 3 credit and finance indicators|74 health indicators; all food security, social network, and decision making indicators; 39 credit and finance indicators|-|
We have a positive view of programs aiming to increase savings in the developing world. For more, see our blog posts:
Freedom from Hunger is testing its Saving for Change program in a "large-scale randomized control trial conducted by Innovations for Poverty Action (IPA), comparing 500 treatment and control villages, as well as 24 months of financial diaries (high-frequency surveys) conducted with a subset of those participating in the RCT." The baseline report on the trial indicates that project results will be completed in 2012.22
This is a new program, so evaluations are not yet available.23
Freedom from Hunger's model centers on testing programs and training others to implement them. This is a strategy that allows Freedom from Hunger to reach a large number of people, with the trade-off that it has less control over how programs are implemented. Freedom from Hunger appears to be concerned with the question of whether programs are implemented well by partner organizations, but its ability to ensure high quality is limited by the nature of its model.24
As part of its efforts to monitor and improve program quality, Freedom from Hunger told us it does these things to follow up with the partner organizations it trains:
- It provides further training and technical assistance to help partners troubleshoot problems as they expand programs and integrate them into their other operations.
- It tracks partners' progress through status reports, updated every six months with partner-reported data on financial performance and the education modules each organization is implementing; it maintains similar reports for its Microfinance and Health Protection and Saving for Change programs.
- It assigns "relationship managers" who stay in touch with partner staff by phone, e-mail, and (when funding allows) on-site visits.
- It collects "impact stories" through systematic interviews with randomly selected clients at a subset of partners.
It is unclear to us (a) whether these processes have enabled Freedom from Hunger to identify and respond to problems; and (b) whether past programs have been implemented well.
We asked president Chris Dunford about how Freedom from Hunger approaches the potential problems associated with relying on other organizations to implement its programs. He responded that Freedom from Hunger does as much as it can upfront to train partners and to deal with potential issues.33
To its credit, Freedom from Hunger readily acknowledges that monitoring is a key challenge and notes that, even in RCTs, ensuring proper implementation is difficult.34
"We have a dual-track strategy focused on global hunger:
1) Evidence-based innovation on how to deal with hunger and
2) Distribution of innovations through others to serve the hungry poor."
Chris Dunford, phone conversation with GiveWell, February 22, 2011.
"GiveWell: Our understanding of your model is that you develop training programs related to microfinance and train the trainers who in turn train microfinance institution (MFI) staff to implement the program. Is this correct?
Freedom from Hunger: Our staff spends a lot of time training and in preparation for training. But the training is because we have a dual-track strategy focused on global hunger:
1) Evidence-based innovation on how to deal with hunger and
2) Distribution of innovations through others to serve the hungry poor.
First we have to show that the innovation works, which means careful research, including randomized controlled trials (RCTs). Distribution means we need to be trainers. The idea is similar to a market test followed by global roll out....
We have four basic models right now:
1. Credit with Education. The basic package is widespread. We did the first RCT in microfinance, on Credit with Education programs in Ghana and Bolivia in the mid-1990s. It didn’t get as much attention as the more recent studies because the results were never published in peer-reviewed journals. We effectively showed the positive value of Credit with Education, and now we are at the stage in which organizations come to us and say they want to implement the program. Every new partner requires a new customization process. We are still developing education modules on new topics for dissemination through Credit with Education, and we want to roll them out to as many organizations and people as possible.
2. Saving for Change. This program started in 2005 in Mali as a joint venture with Oxfam America and the Strømme Foundation of Norway. It is a savings-led approach to microfinance. It serves areas or people who are generally beyond the reach of MFIs. There is an RCT going on in Mali, funded by the Gates Foundation and conducted by Innovations for Poverty Action. We’ve demonstrated that it can be practical for implementation, but because the impact evaluation is incomplete, it’s one stage behind Credit with Education.
3. Microfinance and Health Protection (MAHP). We work with MFIs to introduce programs to increase access to health services and products. We’ve done this in five different locations on four continents with five different MFIs and have completed a lot of research on this; this time we’re publishing our results through peer-review journals. Now we’re in the stage of starting to roll it out. There are still uncertainties about whether all MAHP services can be offered through Saving for Change programs; this is a new frontier for innovation.
4. AIM Youth. This is a new innovation program to extend Saving for Change and Credit with Education to youth (13–25 years old). As a still untested innovation, we remain appropriately skeptical but optimistic about its prospects to work well in a variety of circumstances. The MasterCard Foundation has given us a major contract to fund the development and testing of AIM Youth."
Chris Dunford, phone conversation with GiveWell, February 22, 2011.
"At regular meetings, the women's group gathers to make repayments and deposit their savings. The women also participate in a lively and joyful learning session led by a local staff person who speaks their language and knows their culture and customs." Freedom from Hunger, "Credit with Education."
"Saving for Change enables groups of women to deposit savings—often starting with weekly deposits of only 20 cents—and build lump sums for predictable needs. When savings accumulate, the women in the group act as their own bankers, approving small loans to each other from their pooled savings. The interest they charge themselves for the loans goes back into the pool of savings, yielding a healthy return on the deposited savings of each member of the group." Freedom from Hunger, "Saving for Change."
"MAHP complements this education by enabling microfinance institutions to offer financial products and other services that improve access to actual healthcare services and medicines." Freedom from Hunger, "Microfinance and Health Protection."
Freedom from Hunger, "Advancing Integrated Microfinance for Youth."
"GiveWell: What exactly does Freedom from Hunger pay for and what do its partners pay for?
Freedom from Hunger: It depends on the status of the innovation. If we test an unproven program, then, because of the risk, we will pay for everything. Research and development of new innovations is usually funded through grants from institutional donors. Once we’ve developed a program, but it’s still not totally proven, then there’s more cost-sharing. We’ll cover our own costs but our partners pay for their costs. A third scenario is one in which an institution will come to us with money to cover our costs to do training, or we help them to fundraise to cover our costs. "
Chris Dunford, phone conversation with GiveWell, February 22, 2011.
Dominican Republic study: A randomized controlled trial of a program implementing Freedom from Hunger financial education modules was conducted in 2006-2008. Drexler, Fischer, and Schoar (2010, Pg 9) write, "The materials and capacitator training program for the Accounting treatment were based on the financial education program designed by Freedom from Hunger, a US-based non-profit organization, and the Citigroup Foundation and adapted to local conditions. The ADOPEM training program is most closely related to the budgeting module of the FFH training program." Results are reported in Drexler, Fischer, and Schoar 2010, Pg 24, Table 2.
Drexler, Fischer, and Schoar 2010, Pg 1.
Results for "intent-to-treat" and "control" are reported in Gray and Chanani 2010, Pg 50, Table 3, Pg 53-55, Table 6, Pg 60, Table 10, and Pg 63-65, Table 13.
"This innovative methodology combines qualitative and quantitative approaches to create a nuanced picture of the current SfC program and to document the baseline situation in an SfC expansion zone in the Segou region of Mali, where a randomized control trial (RCT) is currently underway to measure the socioeconomic impacts of the program over a three-year period (2009-2012)." Bureau of Applied Research in Anthropology and Innovations for Poverty Action, "Baseline Study of Saving for Change in Mali: Results from the Segou Expansion Zone and Existing SfC Sites," Pg 7.
"AIM Youth. This is a new innovation program to extend Saving for Change and Credit with Education to youth (13–25 years old). As a still untested innovation, we remain appropriately skeptical but optimistic about its prospects to work well in a variety of circumstances. The MasterCard Foundation has given us a major contract to fund the development and testing of AIM Youth." Chris Dunford, phone conversation with GiveWell, February 22, 2011.
"GiveWell: Do you follow up with organizations that implement programs you train them on?
Freedom from Hunger: We follow up with further training, particularly to develop or introduce new education modules to their work, and to troubleshoot with technical assistance as needed to help our implementing partners resolve the problems that naturally arise as they expand the program and integrate it within their other operations. We track the partners’ progress in our Credit with Education status report that is updated every six months with reports from the partners on financial performance and the education modules each organization is implementing. Once they are no longer dependent on Freedom from Hunger for funding or training/technical assistance, we cannot force them to report, so it is a tribute to our good relationship management that they continue to report voluntarily for years afterward.
We are concerned about the quality of implementation. It’s a high-leverage strategy, because we train people to do what we know how to do. But we have no lasting control over the MFIs and other partners. We seek to deal with this by providing high-quality training up front and follow-up technical assistance as long as funding allows."
Chris Dunford, phone conversation with GiveWell, February 22, 2011.
Number of reporting organizations is from Freedom from Hunger, "Credit with Education Status Report (December 31, 2010)."
On the total number of MFIs participating in Credit with Education: "Of 112 partners who continue to report to us (half are in Mexico and many of these just participated in training programs without follow-up technical assistance), we are highly engaged with something like 20 to 40 at this time. In some cases, we have the opportunity to go back in to see how they are doing after a long hiatus of engagement...There are some MFIs that we’ve worked with that have ceased to report. I estimate it’s about 10." Chris Dunford, phone conversation with GiveWell, February 22, 2011.
"We track the partners’ progress in our Credit with Education status report that is updated every six months with reports from the partners on financial performance and the education modules each organization is implementing." Chris Dunford, phone conversation with GiveWell, February 22, 2011.
Freedom from Hunger, "MAHP PM Report Data (December 2010)."
Freedom from Hunger, "Saving for Change Outreach (December 2010)."
"These data give us only weak proxy indicators of the quality of program implementation and no verification of impact. Still, when we receive data from the partner, we compare the numbers against those from the prior period and take note of any changes, positive or negative. Often this leads our relationship managers to reach out and talk to the partner; either to commend them for a growing program or to inquire about programs that are clearly struggling." Freedom from Hunger, "Response to GiveWell Draft Evaluation Report," Pg 3.
"Our monitoring of partners after they have been trained and guided in the customization process is done by "relationship managers"—our staff assigned to liaise with particular partners in particular countries, both during the intensive training and technical assistance phase of innovation dissemination and customization, and long afterward. As I said in the interview, the frequency with which these staff can actually make on-site visits to the assigned partners depends on dedicated funding and partner willingness to receive our visits (by far, the former is the more common constraint). However, the relationship manager stays in touch with key staff of the partner organization by phone and e-mail to stay abreast of the partner’s progress and challenges. Through these means, if not by direct observation, our staff has fairly accurate knowledge of how the partner is applying the tools and systems we have helped its staff develop and install." Freedom from Hunger, "Response to GiveWell Draft Evaluation Report," Pgs 2-3.
"The key steps are to randomly select clients for story collection and to collect these client stories with a systematic yet open-ended interview process that allows the client to tell as much of the full story as she or he is willing to share…Our goal is to interview people who have just recently joined a microfinance program and then find these same people about three years later (whether or not they are still participating in the program) to learn what has happened in their lives in the intervening period." Jarrel et al. 2011, Pg 9.
"So far, we have collected and analyzed a total of 274 client stories from random samples in eight countries with nine local microfinance-providing partners… At the time of writing this report, Freedom from Hunger had not yet begun the second round of interviews with original cohorts of incoming clients." Jarrel et al. 2011, Pgs 9 and 34.
"GiveWell: How would you respond to the criticism that Freedom from Hunger just comes in, provides training and support, and then leaves but doesn’t follow up to ensure that the program is going well and running effectively?
Freedom from Hunger: We do everything we can upfront when we’re engaging with our partner to identify possible areas where the project might fail, and address those. We don’t have the resources to stay heavily engaged with our partners long-term, and in many cases, because they are independent operating entities, they might not want us to."
Chris Dunford, phone conversation with GiveWell, February 22, 2011.
Chris Dunford, phone conversation with GiveWell, February 22, 2011.
I was surprised by the speed of your evaluation, which is a tribute to your work pace, especially considering the number of research reports you had to read carefully and summarize. But I was also surprised that I was the only staff person interviewed directly. In an effort to communicate succinctly and clearly, I tend to paint our work and our organization in broad summary brushstrokes that can mislead; I get away with this approach because usually evaluators also talk to my colleagues who know and love the details. In particular, I clearly misled you in my comments regarding monitoring. Moreover, you have drawn the wrong conclusion about our expansion plans for Saving for Change. And we want to raise a caution flag regarding the way you summarize the results of randomized control trial (RCT) research, not just ours.
We do not contest your overall conclusion that Freedom from Hunger does not currently qualify for your highest rating—we are pleased to be deemed a Notable organization, given the very high standards set by GiveWell.
Freedom from Hunger is fundamentally different from the organizations that have qualified for your highest rating, at least the ones we know well (Small Enterprise Foundation and Village Enterprise Fund), because we do not directly control (either through legal governance or funding leverage) the program operations that deliver services to intended beneficiaries. Such control would allow us to require particular systems of implementation, including quality-monitoring and control and the detailed reporting of quality data to Freedom from Hunger, which we could roll up into a regular global report for posting on our website. Your report appropriately recognizes the strategic tradeoff we have made for scale of outreach at the expense of the kind of time-and-money-consuming control of partners that allows for a meaningful global quality reporting system. However, it is not correct (my fault) that “Freedom from Hunger does not monitor its partners over time to determine whether they implement Freedom from Hunger's programs well.” So let me try to correct the misperception I created with my breezy remarks during the interview (I was enjoying myself too much, which always gets my staff worried!).
Our impact research on the basic delivery models and the education modules and other components is meant to serve the “proving” function. It also guides design of impact-monitoring systems designed for use by implementing organization managers to improve rather than prove impact. As such, impact- and quality-monitoring primarily serves the needs of local organization management. Because organizations vary greatly in their circumstances and priorities, their management needs and capacity for quality-monitoring vary greatly as well. The result is considerable heterogeneity in the quality-monitoring systems they develop and implement.
Until GiveWell showed such interest, quite frankly, we have not seen value in trying to persuade our disparate partners to standardize their quality-monitoring systems to generate data about quality of delivery that can be rolled up into global reports that we can post on Freedom from Hunger's website. Donors have not demanded this level of detail, and we are skeptical of our ability to create a system that generates global reports that would be meaningful, either to donors or even to ourselves. The point I was making in the interview is that even if we could develop a global-level monitoring system, our lack of control of partner implementation would frustrate our desire to act on the information such a quality-monitoring system might provide. Putting this point more positively, our priorities have been on building the partners' own capacities for quality-monitoring. Let me explain our approach to quality-monitoring in more detail.
We recognize that dissemination of our innovative program models requires customization to suit the particular circumstances of every independent implementing organization that we train. This customization process is more likely to lead to quality delivery for poor beneficiaries and sustainability by the local organization than imposing a fixed design with a pre-determined quality-monitoring and -control system. Our aim is to train the organization to build and maintain its own system, following the philosophy and methods of Social Performance Management (SPM). We have led the global development of SPM training for microfinance institutions, as part of the ImpAct Consortium; SPM is equally applicable to non-financial NGOs that promote savings groups. Our training is designed to build institutional capacity to recruit, train, supervise and incentivize program-delivery staff. This involves quality-monitoring and -control by supervisors and internal auditors to generate feedback to management and to the field staff themselves. It includes training in such monitoring tools as “client satisfaction surveys” and other techniques applied to “lot quality assurance samples” of women participants. Moreover, every education module includes in its design package an impact-monitoring tool that tests whether the women participating in the education have attained pre-determined levels of change in knowledge and behavior.
Our monitoring of partners after they have been trained and guided in the customization process is done by “relationship managers”—our staff assigned to liaise with particular partners in particular countries, both during the intensive training and technical assistance phase of innovation dissemination and customization, and long afterward. As I said in the interview, the frequency with which these staff can actually make on-site visits to the assigned partners depends on dedicated funding and partner willingness to receive our visits (by far, the former is the more common constraint). However, the relationship manager stays in touch with key staff of the partner organization by phone and e-mail to stay abreast of the partner's progress and challenges. Through these means, if not by direct observation, our staff has fairly accurate knowledge of how the partner is applying the tools and systems we have helped its staff develop and install. That is, we have a good sense of how the partner is implementing (including our many partners in Mexico), often including the implementation of quality-monitoring that suits its management purposes. This relationship management process often leads to joint identification of problems that need our troubleshooting assistance and opportunities for new product development and system design. We often build our fundraising around meeting these particular needs and opportunities to improve the quality of service to the partner's clients. These more highly engaged relationships are with the 20–40 partners I referred to in the interview.
In summary, we are monitoring the quality of implementation through our relationship managers, which gives us a valuable but often qualitative picture of the quality of implementation across our portfolio of (now) 132 partners who report outreach (scale) numbers to us on a biannual basis.
While I have despaired of rolling up our partners' numbers to provide global reports, I want to be clear I am referring to numbers on “quality” as we were using the term in our interview. We have long provided the Credit with Education Status Reports (CSRs) on our website (see the latest CSR attached to the same e-mail to which this commentary is attached). As you know now, these CSRs report the outreach numbers, the PAR, the OSS and the education modules delivered by microfinance institutions. These data give us only weak proxy indicators of the quality of program implementation and no verification of impact. Still, when we receive data from the partner, we compare the numbers against those from the prior period and take note of any changes, positive or negative. Often this leads our relationship managers to reach out and talk to the partner; either to commend them for a growing program or to inquire about programs that are clearly struggling.
We actually have a similar reporting system for our Saving for Change partners. Given this savings group model is new (relative to Credit with Education), we haven't had a sufficient number of partners to merit a separate report on the website until now. I am attaching the latest Saving for Change report, and also the underlying MS Excel workbook with the Data Collection Form, Performance Ratios and Project Performance templates, to illustrate the similarities and differences from the CSR. We have a similar outreach and activity report for our partners engaged in Microfinance and Health Protection (see latest version attached). These reports and other data sets are in fact rolled up into a global (all program models, all 132 partners, all 19 countries) “Performance Management Report” (latest is also attached). Our Board of Trustees asked staff not to post this PM Report on our website, because they find it too confusing without detailed explanation by staff. I do believe, however, that the footnotes do a pretty good job of explaining how the report is put together, but I'll let you be the judge. Freedom from Hunger now has multiple program models at play in the world; we are working to provide reports comparable to the CSR for all our main models.
Our “impact stories” monitoring methodology is still relatively new. It includes sending a U.S.-based individual to interview approximately 40 incoming and current clients with a qualitative tool that collects information on the client's well-being, life opportunities, program participation, poverty level and food-security status. Surveys are administered once to act as a baseline, and then again in three years as a follow-up, to observe any changes. Although the study design does not allow for attribution of any observed changes to the program, it is a useful tool for obtaining a snapshot indication of food-security and poverty-level status of clients of the institution, as well as further understanding how clients perceive the impact of the program. Visiting the organization also gives Freedom from Hunger the opportunity to observe live credit or savings group meetings in which education sessions from our program are delivered. To date, nine of our partners have participated in this monitoring. We have summarized our learning to date in the attached white paper just about to be published on our website.
Sorry for this long discourse on monitoring, but I hope you can now see that it is misleading to say that Freedom from Hunger does not monitor quality of implementation by partners after we train them. The problem I referred to in the interview is that we would very much like to have better, more standardized information on quality, and the ability to act on that information in a consistently productive way across our whole portfolio of partners. So far, the cost of getting such information and acting on it would prohibit the massive outreach on which we place such high priority, given the global prevalence of chronic hunger and the massive need for self-help support against it. As I said, this is our strategic tradeoff. But still, we could do better given sufficient funds designated specifically to monitoring.
Saving for Change Expansion
The Performance Management Report (attached, see above) shows that Saving for Change is growing very rapidly in West Africa. Through Freedom from Hunger's strategic alliance with Oxfam America, as well as our independent work in West Africa, our Saving for Change methodology is enabling 483,554 people to form and operate effective village-level savings and loan groups, many of which are also benefitting from malaria education. We are continuing to expand Saving for Change in West Africa and now are exploring new partnerships in Mexico, Ecuador and Peru. We also have a feasibility study scheduled in May in Haiti.
Additionally, Freedom from Hunger has been involved in an intensive quantitative and qualitative research plan for our Saving for Change program in Mali. The research involves a large-scale randomized controlled trial conducted by Innovations for Poverty Action (IPA), comparing 500 treatment and control villages, as well as 24 months of financial diaries (high-frequency surveys) conducted with a subset of those participating in the RCT. The qualitative work is conducted by the Bureau for Applied Research in Anthropology (BARA) at the University of Arizona in Tucson. BARA is carrying out longitudinal anthropological studies in twelve villages, comparing villages that have Saving for Change with those that do not. The studies will provide extensive information on program impact in the domains of poverty outreach, agricultural production, economic activities, food security, social capital, savings and lending behaviors and much more. The baseline study was not previously placed on the ffhtechnical.org website since we did not author the study; however, the paper is now posted here: http://www.ffhtechnical.org/resources/articles/baseline-study-saving-cha....
In short, we are indeed prioritizing the massive scale-up of our savings-led approach to microfinance. The challenge is that Saving for Change is not a self-financing program delivery system like the credit-led models; the implementing NGOs depend on charitable donations. Therefore, the expansion of the savings-led model is far more expensive, unless it is dovetailed with already-funded program delivery systems (such as microfinance, agricultural extension, literacy training, health services, religious congregations, etc.), which we want to actively explore.
I have to say I am puzzled by your comment that our commitment to expansion of Saving for Change would change your “overall conclusion about the organization.” I don't want to discourage that change of heart, but this seems at variance with your concerns about the inconsistency of our impact research results (our research on the impacts of the savings-led approach is still in progress) and our lack of quality-monitoring (which applies as much to our savings-led partners as to our credit-led partners). These were the reasons you cited to explain why Freedom from Hunger does not currently qualify for your highest ratings. Do you hold savings-led models to a different standard than credit-led models of microfinance?
Summarizing Results of RCTs
Here I want to pass on to you some thoughts from my evaluation specialists regarding inconsistency of RCT results. Bobbi and Megan contributed enormously to the first two sections on monitoring and Saving for Change, but the following is almost entirely their thoughts, which I endorse.
When Freedom from Hunger (and its collaborating academic researchers) designs and implements evaluations of its programs, not all indicators assessed through client surveys carry equal weight, nor do they all serve to evaluate the impact of the program. A simple count of indicators showing positive change, no change or negative change misrepresents the meaning of the data to the implementing organizations as well as to Freedom from Hunger and its supporting researchers. We want to provide a few examples of why this is the case and to share some changes that are occurring in the evaluation field that are resulting in improved analysis and interpretation of data from large-scale evaluations.
First, not all indicators assessed in a survey have equal value or weight. In a sense, there is a hierarchy of objectives that has to be taken into consideration. Some questions are simply more important than others in evaluating impacts; some questions are included as explanatory variables and are not meant to measure impact. For example, in the Bénin study, one of our primary concerns is mosquito net ownership and use. We also added questions about whether participants used mosquito coils, indoor sprays or mosquito sprays for the body. We are not specifically concerned with whether clients improve indoor spraying; we added the question out of interest, to see whether those who owned bednets were also likely to exhibit other protection measures, and to evaluate whether the promotion of mosquito nets improved other protection measures as well. If we saw positive impact in mosquito net ownership and not in mosquito spray, we would not consider the lack of improvement in mosquito sprays a failure; we are primarily interested in mosquito net ownership. Thus, if the “use of mosquito spray” indicator gets categorized as “no effect,” it is being weighted equally with the “net ownership” indicator, when it should not be.
Second, not all indicators are meant to individually assess impact. For example, in the GiveWell analysis of the Karlan and Valdivia study on business education in Peru, only eight indicators were counted as having a positive effect, while 32 were counted as having no effect. There are a few reasons why this does not accurately portray the meaning of the data. First, very little data exists on microentrepreneurs and their business activities in general, so researchers must ask multiple questions to test one effect. For example, the questions about the amount of sales that occurred in the last month, in a good month, in a normal month and in a bad month together test whether microentrepreneurs experience improved sales due to their participation in business education. Four questions are asked instead of one because of issues of recall, the limited reliability of clients' ability to provide accurate data, and the fact that businesses are likely cyclical or seasonal. We saw no effect of the education on clients' reported sales for a good month, but we did find an effect for clients in a bad sales month. This result (no effect in three of the four questions) is not inconsistent; it shows that the business education was effective, and that the effect manifested most prominently in protecting sales volumes during the typically slow months of the year. If we had chosen to ask only about sales in the prior month (which is the best recall period and is normally what is used when asking people about their finances), we would have missed altogether the effect that occurred at some other point in the year. In the time we had to evaluate this program, the main impact of the education was first on income-smoothing. Over time (and with more time to evaluate), we might have seen improvement in typically good months as well. If we had seen no effect in all four of these questions, we would have reached one conclusion: the education has no effect on clients' sales.
Since we saw an effect in one of the questions, our conclusion must be that the education does have an effect on clients' sales, and that the effect is detected most clearly during their bad sales months.
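The recall-period reasoning above can be illustrated with a small simulation (the sample size, sales levels and effect size are hypothetical illustrations, not Freedom from Hunger or study data): a program that props up sales only in slow months will register as "significant" on just one of four recall questions, even though all four questions probe the same underlying hypothesis about sales.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 500  # clients per study arm (hypothetical)

# Hypothetical log-sales for four recall questions. The simulated program
# smooths income: it raises sales only in the typically bad months.
months = ["last", "good", "normal", "bad"]
base = {"last": 5.0, "good": 6.0, "normal": 5.0, "bad": 3.0}
lift = {"last": 0.0, "good": 0.0, "normal": 0.0, "bad": 0.4}

control = {m: rng.normal(base[m], 1.0, n) for m in months}
treated = {m: rng.normal(base[m] + lift[m], 1.0, n) for m in months}

# Test each recall question separately, as a naive indicator tally would.
pvalues = {m: stats.ttest_ind(treated[m], control[m]).pvalue for m in months}
significant = [m for m in months if pvalues[m] < 0.05]
print("questions showing a significant effect:", significant)
```

A tally would report "effect in 1 of 4 indicators," but since the four questions jointly test a single hypothesis, the better reading is that the effect exists and surfaces where sales are most fragile.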
A third important point to consider is that our impact evaluations are also “research,” which means some questions are included because we're simply interested in collecting data that would be important to the industry and are not there to measure impact. We also ask the same question in different ways to detect discrepancies or to simply test which question will better represent a concept.
Finally, for both RCT and non-RCT research, researchers are increasingly developing indices of indicators to “concentrate” the data and better explain impacts. This is best seen in the Bénin evaluation. Because we ask multiple knowledge questions, the risk is that we'll find improvement in some and not in others. To evaluate how broad the knowledge change was, we could take the percentage of indicators showing an effect, if we believed that all questions were equally important for detecting impact. Because this is not always the case, researchers are now developing indices that present a set of indicators as a single unit, avoiding the need to assess the meaning of each question individually. We can say in the Bénin study that, on average, clients participating in the malaria education had better malaria knowledge and better malaria behaviors, even though the results for individual indicators might give the impression that there was little impact or create confusion about what the data were actually showing.
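One common version of this approach is a Kling-Liebman-Katz-style summary index: each indicator is standardized by the control group's mean and standard deviation, and the z-scores are averaged within each respondent. The sketch below uses simulated data (the indicator probabilities, effect size and sample size are assumptions for illustration, not Bénin results) to show how many small per-item nudges accumulate into one detectable shift in the index.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400  # respondents per study arm (hypothetical)

# Five hypothetical binary malaria-knowledge indicators; the simulated
# program nudges each one up slightly -- often too little for any single
# item to reach significance on its own.
p_control = np.array([0.50, 0.40, 0.60, 0.30, 0.55])
p_treated = p_control + 0.06

control = rng.binomial(1, p_control, size=(n, 5))
treated = rng.binomial(1, p_treated, size=(n, 5))

# Summary index: standardize each indicator against the control group's
# mean and standard deviation, then average z-scores per respondent.
mu = control.mean(axis=0)
sd = control.std(axis=0, ddof=1)
index_control = ((control - mu) / sd).mean(axis=1)
index_treated = ((treated - mu) / sd).mean(axis=1)

print("mean index, control:", round(index_control.mean(), 3))
print("mean index, treated:", round(index_treated.mean(), 3))
```

Testing the difference in the single index sidesteps the item-by-item tally: it supports a statement like "on average, treated clients had better malaria knowledge" even when each individual question, taken alone, looks like "no effect."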
Beyond these points, a few additional questions and clarifications need to be explored:
In general, it is devilishly difficult to interpret the results of RCTs that aim to investigate impacts on multiple variables. We find it necessary to triangulate on the meaning of RCT results by gathering and interpreting qualitative research (such as our “impact stories”) that provides a broader, though less precise, view of overall impact on the lives of participants in a program. Other researchers are coming to this same view. We would be interested to know your thoughts on this thorny issue.