My name is John Gargani. Robert McLean and I recently published the book Scaling Impact: Innovation for the Public Good. In this presentation, I'll introduce some of the ideas in the book. The presentation is intended as a conversation starter, so I'll move quickly and stick to the main points.

This is a picture of Bananalandia, a banana plantation in Mozambique. On the lower left is the plantation. It's so large that it single-handedly transformed Mozambique from a net importer of bananas to a net exporter. On the upper right are small farms called machambas, found throughout the country. And running between them is a road, a bright line dividing one large-scale agricultural approach from another.

For Rob and me, this picture raises questions. Who decided how big the plantation should be? How many small farms there should be? How were those decisions made? What's the optimal mix of these and other approaches to agriculture? And who's responsible for achieving it? The same questions can be asked in other settings and contexts, and we didn't believe they were answered well by current approaches to scaling. So we set out to find answers elsewhere, from innovators in the Global South.

In the end, we arrived at a different understanding of scaling that's based on the work of Southern innovators. We put what we learned in the book. It's organized around four principles and five case studies, and we believe it challenges organizations to adopt a different mindset when scaling. The book is available for free, along with other scaling resources, at the IDRC website. Just Google "Scaling Impact IDRC," or use the link below.

I'll start by clarifying what we mean by impact and scaling. Then I'll introduce the four principles: justification, optimal scale, coordination, and dynamic evaluation. Some of what I present goes beyond the book, reflecting our most current thinking.
Rob and I joke that everything about scaling impact is simple to understand, except for the terms impact and scaling. Let's quickly define them.

Impact has multiple definitions. Many of you learned about impacts in the context of logic models. In logic models, impacts are only some consequences of a program, policy, or product. Outcomes and outputs are also consequences, but they aren't impacts. Impacts are the most important consequences. They are the last to occur. And they are usually compared to an implied counterfactual. Counterfactual is another term that may be confusing. Without getting into detail, a logic model asserts that the consequences of a program, policy, or product are different from, and better than, those of some alternative program, policy, or product. As a rule, logic models don't identify the alternative, which is called the counterfactual.

We define impact differently because we believe it helps us better understand scaling. For us, impact is all consequences, not just those we intend or want, of any importance, good or bad, to anyone. They may occur at any time, and they should have an explicit counterfactual. So if we say our program is better, we know what it is better than. If this has your head swimming, be grateful. You have something to ask about later.

Scaling is also a problematic term. Let's unpack it using the metaphor of an apple tree, where the apples are impacts. You may have heard people talking about scaling up. That's like taking an apple tree and growing it larger so it produces more fruit. Bananalandia is a scaled-up plantation. Scaling out is like growing more trees to produce more fruit. Machambas are small farms that have been scaled out across the country. Scaling deep nurtures the tree in order to change the qualities of the apples, making them larger or more delicious. These different approaches to scaling can be combined, so we can scale up and out at the same time.
There is also same scaling, which is intentionally maintaining the same scale, and descaling, which is scaling down, possibly to zero. And finally, there is not scaling, which has superficial similarities to scaling but the opposite effect. We consider any and all of these to be valid ways to think about scaling.

But notice that they describe different strategies for how to scale: bigger trees, more trees, better trees. These are the means of scaling. If you're a manager, this is what you spend a lot of time thinking about. Somehow, it has also come to dominate how most people think about scaling for the public good. We want to change that. We want to focus on ends, not means. This is why we talk about scaling impact. We want organizations to scale the positive impacts they have on people, places, and things. And they should scale up, out, deep, or any other way that produces optimal impact.

The first principle of successful scaling that emerged from our work with Southern innovators is justification. Scaling must be justified in a public way because scaling is a choice. We may feel pressure to scale from funders and peers, what we call the scaling imperative. But organizations are free to choose. Sometimes, perhaps most of the time, it is better not to scale. Because scaling affects others, the choice is shared with them. And it should be based on evidence and values. It's not enough to know that our actions will create change. We need to understand how much they matter to people, and in what ways. Much of the writing on scaling starts with the question, how do we scale? The principle of justification cautions us to take a step back and start with the question, should we scale?

A big idea underlying the principle of justification is impact risk. It's the risk that organizations fail to produce the impacts stakeholders desire, or produce impacts they find undesirable. There is impact risk whenever organizations are uncertain about the consequences of their actions.
Consider a continuum of certainty. At one end is high certainty of impact. Here we can reliably predict the result of a program, policy, or product. Pharmaceuticals fall here. A public health organization cannot justify using a new drug unless it knows the effect it will have on people. At the other end of the continuum is low certainty of impact. Here an organization cannot predict what will happen if it acts. Fine art falls here. A museum can justify hanging a new painting on its walls without knowing the effect it will have on people. Your innovative program, policy, or product probably falls somewhere in between. Where it falls determines how you use evidence and values to justify it.

The same idea applies to scaling. Let's add another dimension, scale, that ranges from small to large. In general, the larger the scale, the more certainty we need to justify our actions. With this in mind, we divide the space into three levels of risk. Impact risk is too high when we have less certainty than is appropriate for the level of scale. This would be treating pharmaceuticals like fine art, using them without understanding their effects. Impact risk can also be too low. This is like treating fine art like pharmaceuticals, and waiting to complete large-scale randomized trials before exhibiting art. There is a Goldilocks middle ground of acceptable impact risk. It's difficult to identify, which is why the choice to scale should be shared with those affected. It changes as scale increases, and it depends on the urgency of the problem. We may be willing to assume more impact risk to address a more urgent problem. Currently, this is a topic of great debate with COVID-19. Should we test potential vaccines less, increasing the impact risk, in order to inoculate the public more quickly? Or should we test as we always have, and inoculate more slowly?

The second principle is optimal scale. More is not always better.
So rather than aiming for maximum scale, we should strive for optimal scale, which balances multiple considerations. This requires a holistic view that we believe should consider at least four dimensions: magnitude, which is probably the most common concern, meaning how much impact and how many people are affected; variety, the range of different impacts that are created; equity, which has to do with the fairness of who is helped and harmed; and the sustainability of impacts and of the efforts to create them.

The third principle is coordination. It acknowledges that scaling takes place in complex systems. Given this, bringing an innovation from first idea to optimal scale requires an evolving set of actors and a flexible scaling process. Think of a journey that starts at a first idea. For example: is it possible to make a new type of smart fertilizer that adjusts itself to local growing conditions? The journey ends at impact at optimal scale, the right mix of magnitude, variety, equity, and sustainability. The first part of the journey is called discovery science. It may be undertaken by a bench scientist and her collaborators. If the idea seems feasible, the next part of the journey is implementation science. There may be an implementation expert, investors, manufacturers, and distributors all working to bring the idea into practice. The whole journey may also be supported by scaling science, in which scaling experts and stakeholders help all the actors justify their efforts, define optimal scale, engage with collaborators and competitors, and support evaluation.

I've laid this out as an orderly, linear process, but successful scaling is almost always messy. Those on the journey move forward and backward. They may overshoot optimal scale and need to descale. Sometimes, to be successful, they make the difficult decision to stop their efforts altogether.

The fourth principle is dynamic evaluation. It starts with a simple but powerful idea.
Scaling is an intervention. We talk a lot about scaling an intervention, but scaling is itself an intervention. When we scale, we change our actions in order to change the magnitude, variety, equity, and sustainability of impacts. However, scaling creates dynamic change, which makes it vitally important to evaluate before, during, and after scaling.

What do we mean by dynamic change? Well, evaluators who are interested in impact focus most of their attention on two relationships. First, the relationship between an organization's actions and its impacts. Second, the relationship between context and impact. That's what the arrow pointing to an arrow means. In some contexts, impacts may be larger or better; in others, smaller and different. The same actions in the same context are assumed to produce stable impacts.

Scaling changes all of that. When we scale, we change actions in order to change impacts. This is why we say scaling is an intervention. If scaling is successful, the way it changes impacts has the potential to change the context, making it easier or harder to create impacts. In addition, scaling may have side effects that affect context. For example, when an organization attracts philanthropic investment to scale its work, it may become difficult or impossible for similar organizations to attract investment in the same location. As scaling continues, these feedback loops can ripple through complex systems, making it more difficult to predict how impacts will unfold next. Dynamic evaluation challenges us to widen our gaze. In the past, evaluators focused on two relationships. When scaling, we may need to focus on five. Unfortunately, evaluators may not be well equipped for this.

I've covered a lot of ground quickly. We've talked about impact and scaling, and the four principles of justification, optimal scale, coordination, and dynamic evaluation. You can learn more about scaling impact at the IDRC website, and I look forward to answering your questions. Thank you.
I have a question. Maybe... thanks, Federico. It may be a direct implication of both John's talk and the panelists' that we've seen: type one problems versus type three problems, and, to address those, type one research versus type three research. There are also type one evaluation or assessment approaches versus type three approaches. What are the implications? Very often, in fact, we are caught in a mismatch. Type one evaluation is being applied where we're doing type three research on type three problems. Does that mean that we really need to shift, as you've said, John, probably in your book as well, some of the theories of evaluation and impact assessment?

I would say, yes, we do need to shift those theories, or at least our thinking about them. Expand them, I would say, is really what I'm suggesting, not taking what we have and throwing it away. If we have just a minute, I can maybe show you something that relates to how evaluation approaches and scaling approaches may be related to what we've been calling level one, level two, and level three sorts of problems. I'm asking our moderators if we have a minute for me to try to show that. I think if it's a minute, we can. Absolutely.

All right, let's see if this works. I'm going to show you right up here. Imagine we have a couple of axes. We have impact risk vertically, from low to high: is there a chance we're going to do something bad if we act, or not much chance something bad will happen? And urgency horizontally, also high or low: do we need to act now? When the impact risk is high and the urgency is low, that's where phased research and stage-gating, I think, work really well. We have the time to go through all of that and cycle through, right? So that's where the motto "scaling what works" maybe works well. When the impact risk is low and the urgency is low, then this is what's happening a lot in the business world.
They call this lean scaling, where the only urgency they have is around markets. They can take their time to develop a new product and just bring it into the market. It doesn't really matter; it's not going to hurt anybody. So you let the market decide there.

I work a lot with people who have one foot in the market world, and a lot of what's been going on is these two things getting shifted: people in markets are using lean approaches where they probably should be more cautious, and the traditional phased research, stage-gate approach is being pushed upon people in contexts where it isn't really needed.

So let's put those in the right order. Over here, the urgency is high but the impact risk is low, in which case we should just go, right? Let's just do this. There's not a lot of risk, and we need to act now. The ship is sinking; let's do something. And this is the quadrant which I think is really the issue: the urgency is high and the impact risk is high. This is crisis, and this is what's going on with COVID right now. This is why Rob and I said we needed a principled approach to innovation, scaling, and evaluation that takes that into account. And I'd say that that circle is where most of what we do sits. We are acting because there is some urgency, and we have a high degree of uncertainty about what will happen. That's what we need to try to understand. Trying to superimpose any one approach on how we go about that is dangerous, because we mismatch the approach with the context. Having said that, knowing which is appropriate when and where is really quite hard. So I feel like there's a lot of ground being broken in these discussions, with a much wider view of what is possible than I've heard in the past. So I'm really excited.