We want to build on what we've heard yesterday afternoon and this morning, and Ido has been giving some thought to how we begin to embed some of this thinking, and whether we can use some of these tools within ILRI. He's had a couple of interns working with him to help think through some of these issues and has put some initial ideas together. I think they are initial ideas, Ido, and you're wanting feedback from the group here about your proposal. So, over to you, Ido. Thanks very much. So as you said, this is work that I've been doing with two of our graduate fellows: Nicole Wu, who's based in Addis, and Shane Ryan. They've been working with us for a few months; Shane has now taken on another assignment, in Rwanda actually, but not with us. I call this "scaling better together", and I'll try to cover our approach and then outline our principles. Our approach, basically, is to leverage existing approaches and tools and adapt them to ILRI's context. That's also part of the reason why we wanted to have Larry, Lennart and Mark speak to you, so you could hear them and see firsthand where they're coming from. Their tools are some of the tools that we are proposing to use, but not exclusively. With the mindset of looking at what's out there, we did quite a comprehensive literature review, looking at what frameworks, tools and approaches are out there, mostly in the agricultural sector, but we also looked at health and other sectors which are relevant for us. So we really tried to cover most of what's out there. We looked at work by USAID, the World Resources Institute, IDIA (which is a consortium of donors), IFAD, and of course some of the tools that you've seen in the last three days, so Wageningen, CIMMYT and the PPPLab, and so on. We also looked at other things; I'm not going to list all of them that are up on the screen.
We then conducted interviews with the tool developers and with other scaling experts to see what they're thinking, where they're coming from and how that fits into our context. And finally, we are in the process of finalizing our own evaluation of the various tools: how comprehensive they are, how feasible they are to implement, how suitable they are, along certain criteria that we map all of these tools against. We also, right at the beginning, looked at the principles that we want to make sure are embedded into our approach. Flexibility is a very important one, so we have a mix-and-match approach. It will involve a combination of the tools you saw yesterday and today, whether it's the Scaling Scan, the ASAT from USAID, or Scaling Readiness, and there can be others as well. And that's the main point we want to stress: it's not about any one given tool, and tomorrow a new tool may come out. It's about how we use them, and having the flexibility to use them all to get to what makes sense. That may look different for different projects. Also, the use of scaling coordinators to support that process. I'll say a little more about them later, but this goes back to what Larry was saying about intermediation, and to a point Mark didn't actually get to in his presentation on the science of scaling: there's also work on the practice of scaling, which is how you actually embed and run this within a CG center. A third key principle is to keep an agile approach, so we're going to adapt and iterate quickly. We're not going to try to come up with a master plan that captures every possible scenario for every project ILRI could be involved in, but rather capture the things that work and get us to the main point we want to see. And if something doesn't work, adapt it and iterate quickly; don't wait for 30 projects to finish in order to fix something that you figured out on the first one isn't quite right.
And finally, scaling is a team effort. It takes a village to raise a child, and it takes all of ILRI to make this work. So this is really going to depend on very close collaboration between all of the project teams across ILRI, the Impact at Scale team, and of course our partners. I want to stress up front: we are not suggesting that people in the different programs and projects don't know about scaling or don't do it right. That's not where we're coming from; there's a lot of richness and expertise. What we are trying to do is come up with a framework and a system that would help us do it better, quicker, and more consistently across our portfolio as an institute, as opposed to each project doing its own thing by trial and error. In the interest of time, I'm not going to spend much time on this slide. Some of the tools have been presented; there are others there, and we'll have a synopsis document covering all of them. So I'm going to move ahead. But I will spend a bit of time on this process diagram, which captures the process as we envisage it, and there can be various levels to the process; it's not one-size-fits-all. It all starts with preliminary data collection. I'm not sure those online will see it, but it's in the top left corner: there's a preliminary data collection box. That's a step that happens right at the very beginning, using a set of structured and semi-structured tools to collect information from key project participants about various aspects, which helps us prepare for the initial workshop. We envisage the initial workshop to essentially build on the Scaling Scan tool and the agricultural scaling assessment tool, which help us map out the pathway. We basically like the Scaling Scan; we like its process and its flow, and the fact that you can do it in a day-and-a-half or two-day workshop means it fits and it helps the team come together.
We like the pathway-to-scale suitability assessment of the ASAT because it forces us to be much more specific: whether it's a public sector or a private sector pathway, and what the components are. That initial workshop will give us enough information to do an initial scaling report. Now, a lot of what Mark has been saying about the risk of self-reporting bias would still be there, so this would basically be a short report based on a two-day workshop and some preliminary work. It would stop there. It would help a project that's maybe not really at a scaling phase, or is at an early phase and knows it's not going to be scaling any time soon, to still think about these things up front, to have its pathways clear, and to be able to look at this a little more systematically. This is what we're calling the light review. For projects that only do a light review, we would stop almost there, with a little bit of data validation and follow-up. So if somebody said in the workshop, well, our path is very clear, 90% of the scaling is going to come through this private sector partner, we may have a chat with that partner just to make sure that the way they see their role in this scaling actually matches the way we saw it. If we find glaring gaps in between, if we're saying our scaling is contingent on A and the person or organization supposed to do A tells us, actually, we're not doing that, we're not planning on doing that for the next three years, we might go back and flag that the initial validation says we probably want to look into that. But we won't go much deeper than that. That gets finalized in the initial light scaling plan, and that's where that project would end for that time.
They can then decide, in a year's time or two years' time, that they want to review this or do a reassessment, but at least that's the basic light review that we would very much advocate for all new, large projects to go through, in order to make sure that we are clear and systematic in how we go about this. However, some projects will be at a stage where they say, well, we actually want a deeper dive than that. That's where we propose to use Scaling Readiness and to go through its various stages. I won't go through them again because Mark went through them in quite some detail, but the more in-depth Scaling Readiness has its axes and its nine levels, which are actually adapted from NASA. So it's an adaptation of something that's been rigorously tested for many years, now being applied to the agricultural sector. It has more quantifiable information that's designed to be aggregated across projects into programs, CRPs, and institutes as a whole, so we do see a lot of value in that. But, as you see at the bottom, "detailed analysis using other tools": we do recognize that some teams, for whatever reason, may at times prefer other tools. We would work with them to go through a similar process during the deep dive using those. That would end up with a scaling plan that is much more detailed in terms of the innovations themselves, the environment, the scaling pathways, et cetera, but would also link much more naturally into questions of implementation, monitoring, evaluation and learning, how you navigate that, and how it informs us going forward. So that's the process diagram. I know it's a little complex, and I'd be happy to answer questions later, but in the interest of time, let me move on. So, one thing that has come up in initial discussions with a number of colleagues: they said, yes, that's all true, but you know we have to have deliverables.
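As an editorial aside, the two-track flow just described can be sketched in code. This is a minimal illustration with step names paraphrased from the talk, not an actual ILRI tool; the `Project` and `Track` names are hypothetical.

```python
from dataclasses import dataclass, field
from enum import Enum

class Track(Enum):
    LIGHT = "light review"   # workshop + validation + light scaling plan
    DEEP = "deep dive"       # adds a Scaling Readiness (or equivalent) analysis

@dataclass
class Project:
    name: str
    wants_deep_dive: bool              # the team's own choice after the workshop
    steps_done: list = field(default_factory=list)

# Ordered steps of the process diagram; the light track stops at the light plan.
LIGHT_STEPS = [
    "preliminary data collection",
    "initial stakeholder workshop (Scaling Scan + ASAT)",
    "initial scaling report",
    "data validation and follow-up",
    "light scaling plan",
]
DEEP_STEPS = LIGHT_STEPS[:-1] + [
    "in-depth analysis (Scaling Readiness or other tools)",
    "detailed scaling plan (incl. implementation and M&E links)",
]

def run_process(project: Project) -> Track:
    """Assign the track and record which steps the project goes through."""
    track = Track.DEEP if project.wants_deep_dive else Track.LIGHT
    project.steps_done = DEEP_STEPS if track is Track.DEEP else LIGHT_STEPS
    return track

p = Project("Quarter-4 pilot", wants_deep_dive=True)
print(run_process(p).value)   # -> deep dive
print(len(p.steps_done))      # -> 6
```

The point of the sketch is simply that both tracks share the same front end, and the deep dive replaces only the final step with a more detailed analysis and plan.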
So of course we have to have deliverables, and of course those deliverables should build towards our outcomes and our impact. So just to lay them out a little here. We have the synthesis of the primary data collection. We would have the initial scaling report based on the stakeholder workshop. For those who do the deep dive, which we keep optional, using Scaling Readiness or another tool, we'd also have a much more detailed mapping of the technology readiness and use, as well as the challenges and the scaling options; you remember the seven options that Mark presented, et cetera. Ultimately, the main deliverable will be the scaling plan, in either the light review or the detailed version. It will incorporate the findings from the overall analysis: what's our scaling pathway, what are the interventions that are going to address the challenges, what's the partner mapping. And I know we have a number of partner mappings already being done in different contexts with different tools; this is not meant to duplicate them. This one has a scaling focus: which partners are key for scaling, and which can help us do independent third-party verification of some of the assumptions that we have. It will also have some reporting, data collection and verification considerations, and finally recommendations for next steps, which the project can then take on board, immediately or not, but at least they'll have a clear set of recommendations to help guide the work moving forward. What's the timeline? I want to be tentative about this, because we've not done these yet, and the way we're mixing and matching means it's never been done the way we're doing it. We leave ourselves a little bit of margin in terms of how long something takes, but we really think of this as indicative.
This is something that we've developed for a project that's supposed to start in quarter four this year. So we have some preliminary data collection, then the scaling workshop; this is a project that does want to do the deep dive, so we then do the initial report, the data validation, then Scaling Readiness, and then the comprehensive scaling plan. In all, start to finish, from a project initially interested in doing something around this to having a scaling plan, would take about a year or so if they're doing the full process. Obviously, this is not a year of someone working full time on it, but in terms of how these things are spread out, it will probably take nine or ten months or so to really go through the entire process. Once we actually start going through this, we'll narrow these down and have much clearer indications on timelines, et cetera. So this is pretty much the initial concept that we had envisaged. Now, in terms of next steps, what would that look like? What are the implications? We're planning a small initial rollout: four to six projects in 2020. The idea, though, and part of why we want to do it now, is to start thinking towards 2021 and beyond and gradually introduce this as future projects go live. So if today 0% of our projects across our portfolio have a scaling plan in place, we might want to come up with some kind of target, and we haven't discussed that number yet, for what that should be in 2025 and in 2030, and what the implications of that are for how we work and plan and engage with our investors and with our partners, et cetera. So what next? Some people might ask me, what are the projects? I purposely don't want to go into that here; some of them are still being finalized. But basically, if we haven't talked about it, you're not one of the projects.
Unless something happens in this presentation that completely revolutionizes your world and you say, you know, my world will not exist unless I'm part of the initial group, in which case, speak to me and we'll see what we can do. But basically, this is meant to roll out with a small pilot of projects who are already aware of it and timed for it. If you're working on proposals, for example for ILRI 2025, you should plan for an initial light assessment at a minimum in year one, and then use your judgment in estimating what your needs might be. So if you think you will be going into more of a scaling phase during a five-year project, then set aside some money, not just for an initial scan but also for the detailed dive. Mark gave an example of how they did it in RTB: in certain flagships they left a pocket of money to address issues that they knew would emerge but couldn't yet name. I would encourage doing something like that, using your judgment. And again, if you have questions, or if you want more information or want to talk it through, please let me know and I'll be very happy to walk you through it. I do have a few more slides, but in the interest of time, because I know some people need to leave at 12, let me stop here and open it up for any questions, and maybe those questions will take me to the slides anyway. So thanks. And Ian, would you chair the questions? Okay. Thanks, Ido. So that's a very brief outline of some of the thinking and how we would plan to take this forward. I'm sure there are many, many detailed questions. Let's begin with comments and questions about the overall approach, and if we get time, we can dig deeper into specific questions. I mean, do you think this is a reasonable way forward? So let's deal with the overall approach first. Yes, Bonnet. Yeah, thanks, Ido.
This is a great presentation, and I quite agree that as an institution, and indeed as the CG as a whole, we need to focus more on scaling and innovation. When it comes to scaling, and say commercialization, of new technologies and innovations, these are by default perceived as risky by the private sector, because an innovation is by definition perceived as riskier than other products and technologies they might be commercializing. From my experience in my previous jobs, I know that in order to incentivize private sector partners to take on those innovations, there is a need for some incentivizing mechanisms. Concessional financing, for example, made available to those companies. This is obviously not something that we would do on our own, but have you also thought about this? How do we really attract private sector partners? Because we may think that, okay, Company X seems to be a suitable partner for us, they could potentially do that, they could potentially work with us, but they might not actually be interested, because they perceive the product as risky. They perceive the customer segment as risky, because presumably it will be targeting the bottom of the pyramid, and companies have relatively little experience in doing that. So there need to be some mechanisms that would incentivize them to work with us. Thanks, Paulik. So let me answer that in two parts. First, ideally we would get to innovations that are so compelling that they don't need a lot of incentives for people to take them up. Because the danger is that if you have to pay someone to use your innovation, it's very difficult to scale, unless you have pretty much unlimited funds, which we don't.
And so, for example, in Scaling Readiness, on the nine-level scale, one of the big jumps that for me is very significant and interesting is: has someone who's not involved in the project in any way, whom you're not paying or supporting, adopted it, and are they using it? That's a big sign of maturity. So I think over the long run, a sign of our maturity would be how much of the innovations that we feel are ready to go out there are actually being taken up without heavy involvement or subsidies from ILRI's side. That's the first part of the answer. The second part is that I absolutely agree with you that in some areas we do need to establish the case first before we can hope for large, unsubsidized private sector interest. Financing, which you've mentioned, is a great example; it's a huge bottleneck. Traditionally, when you look at the financial services industry, whether formal, such as banks, or less formal, such as MFIs, et cetera, they have stayed away from the agricultural sector like the plague, because they are afraid of high risk and they're not sure how to handle it. And part of where ILRI, and the CG more broadly, can play a role is piloting small-scale innovations, whether it's with One Acre Fund or with other partners, to demonstrate that this actually has a business case, and how we step that up will be very important. So I do agree that there are areas like that where it would make sense. We have to be very careful to define them well and to define our entry and exit very clearly, so that this doesn't become an ongoing thing. But I think there are absolutely cases where it makes sense for us to do so.
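The bottleneck logic that the nine-level readiness scale supports can be sketched briefly. This is a hedged illustration only: the component names and scores below are invented, and the actual Scaling Readiness method defines its own levels and evidence requirements.

```python
# Illustrative "innovation package": each component scored on a 0-9
# readiness-style scale (scores invented for the example).
package = {
    "vaccine formulation": 8,    # proven elsewhere at scale
    "cold-chain logistics": 3,   # only tested in a controlled pilot
    "farmer payment scheme": 5,
}

def bottleneck(package: dict) -> tuple:
    """A package can only scale as far as its least-ready component."""
    component = min(package, key=package.get)
    return component, package[component]

name, level = bottleneck(package)
print(f"bottleneck: {name} (level {level} of 9)")
# -> bottleneck: cold-chain logistics (level 3 of 9)
```

The design point is that the minimum, not the average, drives the diagnosis: one weak component tells you where to focus intervention or partner engagement, which is exactly the "identify the most important bottlenecks" use of these tools mentioned in the discussion.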
I think that, I mean, Dieter often gives the example of the animal health industry, the pharmaceutical sector, and how many of these large multinational companies, which are sometimes also in the healthcare sector, view the agricultural and animal health sector in Africa as risky, because the healthcare business is much bigger. They are afraid of the risk of product failures giving them a bad name, with that liability then affecting the much bigger healthcare side of the business. So I think the question of risk and risk aversion is important. However, we've got to remember that the private sector is very heterogeneous. It's not just the big multinational companies. The report AGRA released last week, written by Tom Reardon and called The Hidden Middle, shows that there is huge investment from small and medium-sized enterprises across Africa, and Kenya is a classic example of that. So maybe it's not a big multinational company that we're thinking about; maybe it's a small or medium-sized enterprise here in Kenya or in Ethiopia or wherever. But of course, ensuring that we can help de-risk them through access to finance and so on. And we have some experience with that. If you look at IBLI, a lot of what we're doing now is talking to governments and investors about how we can do this, and some of the investment in IBLI is from private sector insurance companies. So we're already doing some of that, and we need to build on it. And that's Ido's point: many of these tools are designed to help you identify what the most important bottlenecks are. Some of those bottlenecks are risks that we can't address ourselves, but where we can engage with other organizations or companies or governments to help address those bottlenecks and risks. So we have some examples of where we're beginning to do that. Peter, please go ahead. Hi, it's Peter Thorne here, from the UK at the moment. Can I go ahead? Yeah.
Well, there's a lot we could discuss here, but one of the things I'd like to highlight is the question of how we demonstrate our success. Most of the speakers have alluded to the issue of timescales around this, and getting to the stage, as Mark outlined this morning, of innovations having been taken up by secondary organizations, tertiary organizations, some way beyond the kind of proof-of-concept, evidence-gathering, validation stage, or whatever you call it. This does not happen overnight. We're talking about things that are going to come perhaps five or even ten years, or beyond that, after the end of the main engagement of the research. So, I mean, I like the framework you're presenting, Ido, and obviously it will have to evolve as you gain experience applying it. But do you have any provision within it for us to be able to go back to our donors later on and say: look, this is what we've achieved, this is where we've scaled, and these are the scaling partners who have invested in it? I think that's another good indicator: if a scaling partner is willing to invest their own money in promoting the technology, without requiring project support, that's a similar level of indicator to independent uptake. So that's my question: how do we embed that? It's a monitoring and evaluation question, right? It's not about attribution, it's about contribution, but we need to be able to generate those numbers at some point in the future. Yeah. So, I completely agree; it's a very valid point. I don't think I have a great answer on this today. It's something that we are thinking about. All of these tools have their own M&E and follow-up components, and we're still giving that thought.
I don't know that anyone would have a great answer, but I think it's something for us to think about early on. And I'd answer this with a question of my own, which is: what do people say today when you ask them about these things? My experience has been that many, not all, but many of our colleagues would say, oh, the private sector will do that, and we don't question how it's done. I think that's part of the issue: we don't have good mechanisms today to track how our past innovations from 10, 15, 20 years ago have been taken up and what the process was. And therefore, what can you expect from a project at a piloting stage, based on the numbers and based on the context? I think we can ultimately come up with better prediction and modeling around these things, and we have a lot of in-house expertise in that, not in my program, but in the other programs. I think that would be important to leverage and to think about more. But that's as good an answer as I can give today, Peter. All right. I mean, I'm really urging you to keep your eye on this as you move this forward. And I agree, a historical analysis would be a very good starting point, because we can at least give evidence that it has happened, hopefully. But I think predicting the future is likely to be tricky. Agreed, Peter. Isabel, do you have a point? Yeah. If you could just go back to the process diagram, just so I can frame my question. Is there something before this one? Yes, there are two before it. No, not the next slide, sorry. So, you mentioned that the tools, and we've seen a couple of tools today, will have to be adapted to the ILRI context. Yeah.
So I think a couple of people in the different programs would want to have input into this too. I don't want to use the word champions, since we use that for something else, but a program contact person would possibly be a good idea. That person would be the link, so that you have a stronger connection between the different research programs and Impact at Scale. Obviously being careful that it's not just, you know, having coffee together, but possibly having a draft tool to comment on and things like that, so that you really get buy-in from the rest of the institute, rather than one person being in charge of convincing everyone, program by program. That's what I'm wondering. And then for the four to six projects that you're thinking about testing this with: looking at what the lessons are after six months or a year, possibly through that group as well, so that you get consistency and people can draw from it. That's just a suggestion. And then something we briefly discussed, I think I wasn't very clear, with the Uganda country team, about the theory of change. The ToC, for me, is about this end in mind. Maybe I was only hearing what I wanted to hear, but I heard a lot today about keeping the end in mind, right? When you start engaging with others in a community or a country on a specific innovation, always starting with: okay, if I exit tomorrow, what will happen? For me, the ToC helps me get into that. So I was also wondering how this scaling plan could be integrated into, say, a project or country ToC, so that it reminds the people who actually implement the project of this end in mind, which I think is very important if we want to get to a scaling outcome.
I wasn't sure it came across right last time, when I was on WebEx. The idea was not that the ToC replaces this; it was about strengthening it. The ToC is a tool that people have used and are getting more and more familiar with. How can we use it, at least in some programs, to get these processes more aligned? Yeah. Okay. So, on your first point, I very much welcome that. The way we envisage it for now, the scaling champion, to use that word, would each time be someone within the project team, which is different, as you rightly point out. But in addition to that, having each program nominate one or more people, and the more the merrier, it's an open thing, to provide feedback on the approach, draft tools, et cetera, and also group feedback after the initial set of projects: very open to that, very welcome. I think it would also help create this sense of joint ownership, which I think is crucial for this to work. I mean, I have seen in some emails, you know, "Ido's baby", and for this to be useful, it has to be ILRI's baby, not Ido's baby. So I very much welcome that; I think it will be very good. To give you a slightly more concrete answer, we are aiming to have a lot of the write-ups, both about the tools and how they fit, but also outlining our process, who does what, et cetera, ready by the end of this year. So in a month or two, probably closer to two months, we'll be able to share that, and we can use the time in the interim to create a small internal group on that. So on that point, absolutely, and thanks for suggesting it. On the theory of change, I'm happy to work with you to see how you envisage that in general. Again, very open to this.
If it helps improve some of our other processes and embed better into them, I'm all for it. I will caveat a little, and I may at times be a little defensive about it, in the sense that this is so new that it needs the liberty to run through the standalone process at least once or twice, so that we know better what we want this animal to be before we start completely embedding it. But I see no harm in using the time to work, for example, on the Uganda ToC or another one as an example, and seeing where that takes us. I would have thought, on the theory of change, I mean, yes, I agree the theory of change approach of starting with the end in mind is going to be important here. But I can also see an opportunity to link this, because many of the assumptions that we build into our theories of change will actually map onto some of the bottlenecks that these tools identify. So I think there's a strong point there, right? Because we have those assumptions in the theory of change that often we don't think about: what are we going to do if those assumptions don't hold? And I think this could really help with that. On that point, I completely agree, because it helps go from assumption to a more independently verified picture of where the bottlenecks are, where the actions are, where our issues are. In that way, they can be mutually reinforcing. So absolutely. A follow-up question? Yes, a follow-up, and then I'll check online again. The follow-up question is about scaling as a research question, the science of scaling, which was on the table at some point in the life of the CRP, but which has moved more into the background right now.
Again, I'm wondering if there's any thinking about the research questions that we might want to overlay on that scaling process diagram, over time as well. We should not do everything at the same time, I agree, but it may be interesting to have, and I can see people asking me, what would the science of scaling be, based on the process that you're suggesting here. I don't want to complicate things, because these things have a way of doing that, but I think it could be an interesting angle. My initial quick reaction to that is that this is initially very focused on the scaling needs, so the practice of scaling more than the science of scaling, so to speak. Having said that, one of the objectives of the Impact at Scale program, in that it goes closer to the development side, is that there would be insights there that feed back and help us come up with new and better research questions and so on. So in its design, this is what we're supposed to be doing. Maybe not the first thing to do in the first year, but in principle, yes. It would be very worrying, in fact, if we had really concrete decision-support insights coming out of this and they didn't translate somehow into our future research questions and how we do things, because it would mean that basically we disregard whatever happens and keep doing our thing. But it can happen. Yeah. Okay. Anyone else online want to come in? Please do so. Yes? Can you hear me speaking? Okay, Alan, please go ahead. Yeah. You know, when I was looking at Mark's presentation, I was quite impressed by the way they had done a systematic inventory of innovations across IITA that might be ripe for scaling. To what extent have we done that properly? Do we have that kind of list of innovations? Yes.
Currently we do not have that, but one of the main reasons behind this is precisely to have it. It's not just about listing what we have; there was also that claim of 400 innovations ready to be taken to scale at the CRP level. I think it's a little bit similar: we have our own list that we sometimes give to investors when they push us for it. But if I had to put my hand in the fire, I would say my response would be quite similar to Mark's. So precisely: if we go from zero percent mapped today, what should our goal be in five years? Is it 50 percent? Is it 80 percent? Whatever the case may be, this is something that we would need to discuss as an institute, and agree on metrics to guide us on the level of our ambition and so on. But it is very much my hope that when we discuss this a few years from now, we will be able to show this as well: this is where we were in 2019, here's the movement, here's the progress over time and the change in our portfolio, and really start to use that information in a way that can help us design and execute better. That is a big part of what I hope will happen as a result of this. I was going to make a comment on that, picking up from Mark, so let me do it now because it's very relevant to your question, Alan. We've been asked twice in the last three months: what do we have that is ready to go to scale? Of course, I'll send an email to program leaders, they'll gather information, and we get a long list. I cross half of them off immediately because I don't think they're ready to go to scale. We've had that experience: there was a long list, and in reality most of them are not ready to go to scale. That's my judgment, of course, and it's a subjective judgment. I would be much happier if we were positioned to make that judgment more objectively.
And that's exactly what I think some of these tools can help us with. Instead of me trying to persuade a donor that something is ready to go to scale just on what I know and my gut feeling, a much more objective way of coming up with that list would be extremely valuable. I hope we can get to that position fairly soon, rather than having to make some kind of subjective assessment; and we're usually over-optimistic about what can scale. Hi, this is Silvia from Addis. Yes, Silvia, please go ahead. It's not really a question, just a comment. It is really interesting that all these words and all this thinking are coming up now. As part of the work we do under Agriculture for Nutrition and Health, in the food safety flagship particularly, I'm leading a cluster of activities which has to do with delivering impact at scale. Basically, a number of projects are mapped to that cluster of activities, all those that are expected to reach millions. This includes the MoreMilk project, for example, in Kenya; a project in India, also working with the dairy sector; the Safe Food, Fair Food project in Cambodia; and SafePork in Vietnam. For a number of them, the expectation is that they develop technologies, test them, and that these can be delivered at scale. So we were starting to get organized to do some sort of diagnosis and go through a similar process to the one you have explained, Ido, on our projects. But obviously we don't have the expertise, and we are little by little trying to learn. So it's very interesting that this is happening at the same time for ILRI as a whole.
So I'm just saying that I'll be reaching out to you to discuss how we can perhaps get your support on doing that, and whether we can incorporate what we are trying to do within your plans for the next year, because obviously we don't want to do this just in parallel; as much as possible we should try to integrate. Anyway, just saying that I'll be reaching out to you to discuss that. Okay, thank you. Great, thanks, Silvia. I think I may have had another voice trying to call in. Yes, me here. Okay, this is Barbara. The discussions are quite interesting and a number of issues come out. This is a comment, not a question really. Beyond having internal harmony in terms of our understanding of what scaling entails, what tools we're going to use, and how we go about them, I think at some point we also need to be engaging with partners and our donors to have a harmonized understanding of what scaling is, because there's no use in understanding scaling differently from what our donors want us to achieve. At what point will those kinds of discussions happen, and who is going to spearhead them? I don't have a simple answer to that. But I think part of that discussion will go on in the next six to nine months as we develop our ideas for investing in livestock through 2025, because the donors will want to see what our plans are. How are we really going to deliver at scale? Not us alone, but with our partners. So I think that conversation will begin probably later this year, or more likely early next year, as we start engaging with a broader group of donors. They will want to hear our ideas and our thoughts on this. And that's why, as I said, we planned this workshop this month, so we can begin to think about this as we develop the initial proposal by the end of this month. We won't have it all in place by the end of this month.
But at least we're starting to think about it. And I'm sure that's a conversation the donors will want to have with us when we start engaging with them as a group in the first quarter of 2020. I don't think we've thought it through fully yet, but we've got some time to think it through and to work out how we would respond to that question from our donors. So I think that's a really important question. I think I had one more voice online wanting to make a point, and then I'm going to wrap up. You will have an opportunity to continue to engage in this discussion; this workshop is just a first step in a process. So, was there someone else online who wanted to come in? Ian, I don't know. It's me. Just go ahead, because you've got the microphone. Sorry, I think I've overridden you again. It's Pedro again. I think it was touched on, but I felt that throughout the two days the whole question of partnerships has somehow been rather less prominent. We had Larry making a very strong point, one which I would endorse very strongly, about engaging these development and scaling partners early on in the project cycle, because it's really essential. They should maybe be involved in prioritization; they should be involved when we ask questions about targeting. They may not be the actual organizations who scale in the end, but representative partners at that stage will really help you to build your research around issues that are potentially scalable. And on the other side, there's the whole issue of us as research partners: along the continuum of the scaling process, we tend to disengage rather too early. Often the technology scales out into the ether, where our involvement in its promotion is less.
But there are opportunities for us to learn about what's been happening, to look at refining the technology, to look for mitigating measures against problems that arise, and to strengthen future options. This may sound like criticism; it's not really. The process diagram doesn't appear to have any mention of partners. Maybe it's implicit, but we should be explicit about our partnerships, because these things will not happen without long-term engagement with partners and learning from them about what we can contribute to the whole process. Just quickly, Pedro, I completely agree. You'll notice that ILRI is not mentioned in the diagram either; it's more about which tools we use at which stage. But it is indeed very much a clear implication that in all of these stages we will work very closely with partners. I completely agree with you. Fair enough, but let's shout it from the rooftops and embed it a little deeper. Yeah, fair comment. Okay, finally, Neil, a final comment and then I'll wrap up. Oh, Neil just... he's the one who's going to take me home. No, okay, you know, I understand why you didn't want to prescribe it in this project, but I still think we don't have the same level of understanding about what we mean by going to scale and which innovations are ready. Some of us, like me, just don't see a tool going to scale and having impact on its own, but some people see it differently. I'm not saying who's right or wrong; I'm just saying it would be good to make that point with an example or two. Within the CRPs, we've been talking about product lines for a long time, and the whole concept was for these product lines to formulate innovations.
So it seems a bit strange that we still have such difficulty defining what we mean by innovation after having gone through maybe six or eight years of CRP processes and talking about innovations. I think that's a good observation, and yes, we still struggle with that, as you say, after thinking about it for five or six years. I want to draw this conversation to a close; it's just the start of a conversation. Some of you may have been wanting to come in or ask questions. I'm sure Ido would be delighted to hear from you; we'll all be in Addis next week, and you can catch him at coffee time. I know those of you on the video have other things to do, but send him an email, catch him, have a conversation, arrange to see him or to speak on Skype. There may be things you want to raise that you haven't had an opportunity to talk about this morning, because our time has been limited, and I'm sure Ido would welcome that very much as we try to refine some of these ideas. I hope everyone has found yesterday afternoon and this morning useful. We could have spent much longer and dived much deeper into many issues, but this was a first attempt just to expose everyone to this thinking early. We know we're doing things across ILRI: we're trying different things to scale out in ADGG, ACGG, AVCD and so on, but we don't necessarily learn a lot from each other in the process. So part of this is about trying to get us up to a certain common level of understanding and, as I said yesterday, trying to understand what else is going on out there in the wider world, from what Larry and people like Larry are thinking, to some specific examples from the CG of tools that can help us. And you can see how Ido and his team have started to think about how we can use some of those tools, adapted to our specific circumstances.
So I hope you found it useful. I want to make a comment on the CG doing scaling. You will recall that when we put together the second phase of the CRPs, virtually every CRP had a flagship on transformation and scaling, or called it something else. The ISPC threw them all out, but the donors wanted them back in again, and the donors started to fund us to do that work bilaterally. So there's still this tension about what the role of the CGIAR is, and we had a long discussion at the science leaders meeting in June about that. I think the consensus is that, yes, the CGIAR should not be doing scaling in the sense of being out there actually doing the scaling itself, but we do need to be engaged in facilitating that process and engaging with those intermediaries that Larry talked about in his cogwheel diagram. Then the question arises: why is the CGIAR or a CG Center leading this project? I would draw a distinction between leading a project and doing the scaling. In many of the projects we lead that are focused on scaling, we're not doing the scaling; all the scaling is done through our partners. Take AVCD, for example: we're leading it, but we're not doing the scaling. We're simply facilitating it and managing the project, while the scaling is done by our partners, whether it's government, the private sector, and so on. So I think there is a consensus that we should not be doing scaling, but we do need to be engaged in the facilitation, and in some cases we might be leading the project or the program, but that doesn't mean we're doing the scaling. I hope we can, at least within ILRI, agree that that's how we deal with that question. When we set up the Impact at Scale program, we had long discussions and arguments in the board about what it is about and why we are doing this.
Jimmy and I had to defend it quite robustly against some of the board members' arguments; they saw it as mission creep and so on. But every Center is doing the same thing; they might call it something different. Mark talked about the IITA directorate that leads on partnerships for delivery, which is essentially the same idea. Every Center is doing the same thing, so we do need to engage, and that's part of why we set up the Impact at Scale program. We also need to think more about how we ensure that we have the right management structures and incentives in place for this. That's something we struggle with, and something I struggle with: if you look at our staff performance appraisals and incentive schemes, we still haven't got them quite right to drive this forward. I've tried to introduce into the scientist performance criteria things like inputs to impact and engagement, but I don't think we've quite got that right yet, and we'll need to look at how we manage it better. How do we incentivize staff who want to do the science to also get engaged in some of this, even though it's not necessarily going to produce a scientific paper? That is really important for ILRI. So that is an organizational issue we need to think more about and get right. Finally, every time I think about this, I can't help going back to my own personal experience. If I look at the farm I was brought up on, between when I was born in 1956 and when I went to university in the early 70s, the amount of scaling and innovation in that 20-year period was absolutely huge. I don't remember horses on the farm; they'd been replaced by tractors before I was born, but not long before. When I was born, nobody made silage; it was all hay. By 1962-63, no one was making hay.
Making hay in that climate was a bad idea anyway; they had all moved to silage. The speed with which that scaled out was huge, and it was driven by many things. One of the key drivers of innovation in that 20-year period was the rising cost of labour; that's what drove a lot of the innovation. It drove capitalization and mechanization, coupled with a lot of innovation going on in the agricultural engineering field, such as forage harvesters to make silage. Cattle breeding in the UK was revolutionized in 1963 by a change in government policy allowing the importation of French breeds. The first Charolais came into the UK in 1958; until then you couldn't import foreign breeds. Within 10 years, the black cattle in Aberdeenshire were all white cattle: French Charolais, Simmental from Switzerland, and so on. So although the technology was important, three things drove the innovation and the scaling. One was government policy, and I've just given you an example. Second was the rapid growth in the economy, which led to increasing labour costs, which drove mechanization. And of course technological innovation was part of it, but it wasn't really the technical innovation that drove it; it was government policy and labour costs. It was also supported by government policy in the sense of a very vigorous and well-resourced extension service. But it wasn't the technology; the technology was a result of those other drivers. I know the context is different in the countries we work in, but I think some of the principles are still very valid. So, thanks very much everyone, and thanks to those of you online who have either got up early in the morning or stayed in the office late. As I say, I hope you found it useful. It's only the start of a much broader conversation.
As Ido has said, hopefully by the end of this year there will be a more detailed document that builds on this and incorporates some of the feedback Ido has had. And maybe at that point, Ido, it would be the time to reconvene either this group or a smaller group to review it; the idea of having an in-house champion or champions in the program is a useful part of that process. So while the conversations will go on over the next few months, we plan to take stock late this year or early next year as we continue to refine the plans. And I look forward to seeing many of you in Addis Ababa next week.