The creation of the Agency Fund was motivated by evidence: a constellation of encouraging research findings led us to approach global development through the expansion of capability and self-determination.1 But agency-oriented innovations don’t always work, and maximizing impact requires continued learning. This calls for investments in research and action, as well as the cultivation of an institutional ecosystem linking research and action. And that will involve not only the expansion of linkages between existing institutions (such as universities and policymakers), but also the emergence of entirely new institutions that unify research and action under a single roof. We call such institutions “learning organizations”. (We used to call them “learning implementers”, but that didn’t catch on.)
Why learning organizations? One important reason is that human agency is complex: causal pathways are messy and contextual complexity looms large. That makes it harder to distill research into a body of literature that is sufficiently clear, coherent, and complete to guide appropriate action by an organization operating independently of the researchers. Agency-oriented approaches usually require a dogged commitment to keep iterating and improving within context.
For illustration, first take the example of a less complex intervention that remains a perennial favorite of evidence-oriented donors: insecticide-treated mosquito nets. Nets are a tangible good with an unambiguous objective: to reduce death and disease from malaria. We can state obvious hypotheses about the key factors that determine their effectiveness, e.g., the prevalence of malaria parasites, the presence of mosquitoes, and the rate of mosquito net adoption. The envisioned behavior - for people to hang nets over sleeping spaces, prioritizing children - is equally clear. And parasites, mosquitoes, mosquito nets, sleeping spaces, and children all have a physical presence that can be objectively measured. Because the causal pathways are clean and verifiable, it is within the reach of research institutions to develop insights that add up to a reasonably complete understanding of when, where, and how mosquito nets work. Of course, local context (for example, a deeper understanding of the policymakers and community health workers involved in net distribution) matters for implementation, but it can be reasonably well separated from the study of mosquito net efficacy. An organization can become highly cost-effective at mosquito net distribution without conducting major research activities in-house.2
When dealing with approaches aimed at expanding human agency, we are usually faced with greater complexity. Take the example of student mentorship, which more closely resembles a service than a good. Sure - as with mosquito nets, we can evaluate its impact on development outcomes in any given context. But there is more room for debate about what the ultimate outcomes are. Proximate outcomes are even more ambiguous: does mentorship work via shifts in self-beliefs, soft skills, attitudes, or something else? All of these are abstract concepts that are defined in the eye of the beholder and have no single accepted metric, much less across languages and cultures. Furthermore, all of them may be shifted by mentorship to some degree, and no credible statistical technique can identify which shifts mediate impact on the ultimate outcomes.3 We will also struggle to disentangle the intervention from the characteristics of the individuals delivering and receiving it, and from the rapport they happen to have. Finally, impacts may be moderated by any number of other factors - say, in the local culture and economy - that are invisible, elusive, and ever-changing. Though promising evidence exists, it does not seem warranted to make the sweeping claim that “student mentorship is evidence-based”; in fact, it is hard to even envision a standard of evidence that could ever make it so. The impact of mentorship will sometimes delight and sometimes disappoint, and we should not expect to ever attain a fully generalizable understanding of when, where, and why it “works”.
It is no coincidence that the term implementation science is more widespread in global health research (where many interventions resemble crisp goods), while improvement science is more common in education research (where most interventions resemble fuzzy services).4-5 Working with intangibles tends to call for a more adaptive learning approach - a remedy to complexity. Rather than pursuing a complete and generalizable understanding, this involves accepting and embracing context - tinkering, iterating, and failing fast, muddling toward a better place in a process of trial and error. The most vocal critics of the RCT movement in development portray adaptive learning as the antithesis of randomized experimentation.6 But there is no contradiction. Look no further than the tech sector, which is ground zero for the adaptive learning paradigm and which routinely deploys randomized experiments for product optimization.7-8 When an organization routinely ingests granular data, experiments become extremely cheap and can guide rapid organizational decision-making.
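To make that last point concrete, here is a minimal sketch - in Python, with entirely made-up data - of how an organization that already logs user-level outcomes might read out a simple A/B test. The variant labels, completion rates, and sample sizes are illustrative assumptions, not figures from any organization mentioned here.

```python
import math
import random

# Illustrative only: simulate user-level logs for two variants of a program
# nudge (e.g., two message framings), then compare completion rates.
random.seed(0)
logs = [{"user": i, "variant": random.choice(["A", "B"])} for i in range(2000)]
for row in logs:
    # Hypothetical "true" completion rates: 40% under A, 44% under B.
    rate = 0.40 if row["variant"] == "A" else 0.44
    row["completed"] = random.random() < rate

def summarize(variant):
    """Count users and completions for one variant."""
    rows = [r for r in logs if r["variant"] == variant]
    return len(rows), sum(r["completed"] for r in rows)

def two_proportion_z_test(n1, x1, n2, x2):
    """Two-sided z-test for a difference in proportions (pooled variance)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p1, p2, z, p_value

n_a, x_a = summarize("A")
n_b, x_b = summarize("B")
p_a, p_b, z, p = two_proportion_z_test(n_a, x_a, n_b, x_b)
print(f"A: {p_a:.1%} of {n_a} users completed; B: {p_b:.1%} of {n_b} users")
print(f"z = {z:.2f}, two-sided p = {p:.3f}")
```

Because assignment and outcomes are captured in the product’s own logs, the marginal cost of each such comparison is close to zero - which is what makes the kind of monthly or continuous experimentation described below feasible.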
The most compelling demonstrations of the “learning organization” model do not ignore academic evidence either; but they view it as inspiration rather than gospel, and they remain mindful of the need to keep tinkering and iterating to make promising ideas work within a specific context.9 As an illustration, consider the nonprofit Youth Impact. Its creation was inspired by an economics research paper from Kenya, which found that a “sugar-daddy-awareness” campaign had large effects on safe sex and teenage pregnancy.10 When Youth Impact tried to replicate this program in Botswana, its experiment yielded more mixed results - but it surfaced important insights about how to increase the program’s effectiveness in Botswana. Some of these insights were purely operational (for example, about how to trade off dosage against scale); others appear more profound and feed into a body of literature that may illuminate other organizations down the road (for example, near-peers were more effective messengers for sex education than teachers).11 Today, Youth Impact collects user data continuously and initiates experimental evaluations on a monthly basis. It bears as much resemblance to a data-driven tech firm as it does to a research institute and a social service implementer.
There are other instances of experimental economists co-founding institutions that unite research with field implementation, such as Precision Development (which supports smallholder farmers’ decisions through granular digital extension services) and ConsiliumBots (which supports Latin American students on their educational journeys). We also see more and more social-entrepreneurial initiatives investing significantly in their data science and research capabilities, including Rocket Learning (which supports parents in the early education of their children), Noora Health (which supports family members in giving care to sick relatives), and Indus Action (which supports citizens in the navigation of benefit entitlements). These are some of the most compelling demonstrations of scalable efforts to advance human agency, and all of them are converging toward a learning organization model.
Learning organizations see implementation as an opportunity to have some impact immediately while simultaneously learning how to become even more impactful in the future. This enables them to maintain a clear-eyed commitment to evidence while engaging with much more complex interventions than mosquito nets. If and when more evidence-oriented charitable dollars are allocated beyond the distribution of global health goods, many of them seem likely to go to grantees that blend research and implementation.
Readings.
[1] The Agency Fund (2021): Putting Data, Media, and Technology in the Service of Human Agency. White Paper.
[2] GiveWell (2022): Against Malaria Foundation. https://www.givewell.org/charities/amf
[3] DP Green, SE Ha, JG Bullock (2010): Enough Already about “Black Box” Experiments: Studying Mediation Is More Difficult than Most Scholars Suppose. Annals of the American Academy of Political and Social Science 628(1)
[4] T Madon, KJ Hofman, L Kupfer, RI Glass (2007): Implementation Science. Science 318(5857)
[5] C Lewis (2015): What Is Improvement Science? Do We Need It in Education? Educational Researcher 44(1)
[6] A Deaton (2020): Randomization in the Tropics Revisited: a Theme and Eleven Variations. NBER Working Paper 27600
[7] R Koning, S Hasan, A Chatterji (2020): Digital Experimentation and Startup Performance: Evidence from A/B Testing
[8] E Ries (2011): The Lean Startup: How Today's Entrepreneurs Use Continuous Innovation to Create Radically Successful Businesses. Random House, New York City
[9] E Duflo (2017): The Economist as Plumber. American Economic Review 107(5)
[10] P Dupas (2011): Do Teenagers Respond to HIV Risk Information? Evidence from a Field Experiment in Kenya. American Economic Journal: Applied Economics 3(1)
[11] N Angrist, G Anabwani (2023): The Messenger Matters: Behavioral Responses to Sex Education. mimeo