We've got a good number joined now, and I think a few more will continue to join over the next couple of minutes, but we'll make a start, so thanks again for joining. My name's Robert Grimshaw, I'm the convener of the Queensland Regional Committee for the Australian Evaluation Society. I'd like to start today by acknowledging the Turrbal and Yuggera people as the First Nations owners of the lands I'm joining you from today, here in Meanjin, Brisbane, and we recognise that these lands have always been places of teaching, research and learning. We pay our respects to their elders past, present and emerging, and extend that respect to any First Nations people joining us here today, from whatever lands and country that may be, with the benefits and opportunities of virtual seminars.

Some housekeeping before we get going. As I mentioned, I'm the convener of the Queensland Regional Committee for the AES. For anyone based in Queensland who'd like to find out more about the work of our committee, or about joining in, please get in touch with me or the committee; you can find our details on the AES website. I'm very much looking forward to the AES conference returning in Adelaide this year, and, though I'm probably biased, I'm potentially more excited that next year Brisbane will get its opportunity to host the conference we missed out on, thanks to you-know-what.

In terms of Zoom housekeeping: please remember to keep yourselves on mute, to ensure clear audio and avoid any feedback. Feel free to use the chat function as we go through the session; there will be plenty of time for discussion at the end, or use the raise-hand function if you'd like to contribute to discussion or ask a question. Today's session is being recorded, as I noted in the chat, so please let me know if you have any concerns with that. We'll ask for your feedback in the feedback poll at the end. The slides will be distributed to those who registered and posted on the AES website in the members' section, and the recording will be posted on the AES YouTube channel in good time, once our small admin team has had a chance to process it.

I'd like to introduce Luke Everett, our presenter for today, and thank Luke for volunteering to give this presentation. Luke is the founder of Project IO, a software-as-a-service platform that guides teams through the design, monitoring and evaluation of their program strategy. Over the last 10 years he's been working with international development teams to improve their technology services and provide greater insight into their impact data. His passion and interest in this topic really came up when the Queensland committee hosted an interactive session at the start of the year on evaluation from afar: the challenges, opportunities and insights gained over the past year, when evaluation from a distance was forced upon many of us, while those in the international development space, who have been doing it for some time, faced additional challenges on top. Luke participated in that discussion and offered to give a fuller presentation on his work. So with that, I'll hand over to Luke to take us through today. Thanks, Luke.

Thanks, Robert.
I'd also like to begin by acknowledging the traditional owners of the land on which we meet today and paying my respects to elders past and present. I also have to apologise: I'm just getting over a head cold, so forgive any coughing or spluttering. But we'll get started. Please ask questions throughout this presentation; this should be partially a conversation as well, because I also want to hear about your challenges, and trials and tribulations, with remote evaluations.

As Robert said, I come from the international development field and have worked across a number of countries and programs across the Pacific, Southeast Asia, the Middle East and Africa, through a variety of projects and thematic areas. Education was a big one; a little bit of TVET in the Pacific, health in Africa, and then a few economic growth and MEL platform projects as well. In Indonesia there were a few evaluation-specific programs that sat across the portfolio, which was really interesting. I worked with three major donors, plus the EC through my time in the UK. What I was really doing was working with teams on how they capture, manage and secure their data, whether for participants of programs or beneficiaries, and making sure that information was managed effectively and captured properly against key indicators. And as Robert mentioned, we're now working on a next-generation MEL platform that helps teams collaborate more transparently and effectively, and builds in some of the more adaptive management capabilities the industry is moving towards.

There have been a lot of challenges, not only from the pandemic but also from shifts in the way donors and clients are thinking about and spending on evaluations, based on our experience and the work we've been doing recently. There's been more drive towards locally managed or locally driven implementations of projects, and less opportunity for STAs or consultants to move into the field. Some of that is travel restrictions; some of it is simply that it's more sustainable and effective to have locally driven implementations of these projects. We've obviously got a lot more hybrid work going on, with the challenges that brings for data retention and security. There have also been a lot of delayed project inceptions, because of COVID, because of caretaker periods, because of many things; a lot of these programs have been delayed, are now coming on board, and need rapid inception phases. So how do we make sure we're capturing the right data from the very beginning, that it's consistent, and that the evidence base holds throughout? We also work with teams to manage their projects more collaboratively, where previously they were all managed in isolation. Even where projects were run by the same managing contractors, they would each still be managed as an individual project, so fewer lessons learned were shared, and the data and information that goes along with them didn't generally flow between teams, which is a real shame.
What I'm going to quickly do is run a poll, just to get an idea of all of you and the different systems you're using. I might quickly launch that; it will help me decide where to focus some of this information. I'll give that 30 seconds or so. Okay, great. It's good to see a spread of teams working the whole way through the process, not just on point-in-time evaluations but from the design and planning stage; that's really helpful for us to understand. And, as I expected, everyone's using the standards, Excel and PowerPoint, but some of the more planning-oriented SaaS platforms aren't really being used at the moment.

Okay, so how are we solving these problems? A lot of teams have really ramped up their technology adoption. Oh, interesting, sorry, yes, government was excluded from that list. Perfect. As I was saying, a lot of teams have been scaling up their technology adoption, sometimes with core IT teams vetting and managing the rollout, but often with individual teams going out looking for technology solutions themselves. We saw a lot of Zoom adoption, and then, as teams came through that process, I think Microsoft Teams pulled some of that away, along with the move to Miro and to other diagramming tools like Lucidchart. Alongside that, the industry has moved, or is moving, depending on where you are in the world, to more of an adaptive management approach, where you're able to change project structure as you go. I'm not sure how much of this adaptive management work takes place in the domestic context, but historically with international development programs it was very difficult to get large changes to project strategy approved. Now we're seeing more programs where that's a built-in, core component. And, as I said before, we're getting a lot more locally driven development programs as well.

With the increased technology adoption, it really breaks down into six key areas for assisting with adaptive management. You've got your planning stage, where, as we've seen with the poll, people are using more of the Microsoft suite and a little bit of Miro, but diagramming tools like Lucidchart and project management tools like Asana are still a little bit lacking. Where we're seeing a lot of movement in planning is in broadening the teams associated with that planning process, making sure more stakeholders are involved. That's really helped by the technology shift: you don't all have to be in the same room, so you can get experts, specialists and more widely distributed teams on board. And that really helps with staff and participant adoption over time; if people are involved at the planning stage, we see a much greater level of involvement throughout the whole process. Communication has stayed pretty steady: a lot of these programs will have a public website with some of their impact data, and project intranets with more detailed analysis.
But again, this would all sit in PowerPoints or Excel spreadsheets and the like, or in your standard scheduled reports through your evaluation process, whether biannual, annual, or at the end of a program. Now we're starting to see more structured dashboards, through Tableau and Power BI, come into play. Where we really need to get to, though, is user-driven dashboards or data insights: your role might be a little different from how those structured dashboards were set up in the past, so you should be able to monitor and manage them yourself for the role you're doing. And this really should run not only from a management perspective but all the way to individual roles, so people can actually use that information for more operational reporting.

Context monitoring: we don't do a lot of this, so I'll skip over most of it. But some areas of interest here are open-source datasets, where people may already be capturing impact metrics you can use for your analysis, so you don't have to go out and capture that information again. I think there's a lot of room to move and improvements to make here.

Then data collection. We've gone through a maturity curve in how we do a lot of this: from paper forms, in country or in areas with low technology adoption, through pulling data from previous reports and research and surfacing it in a more meaningful way, whether contextually or through better search capabilities. Now most people are using web-based forms. But it's interesting that in areas where connectivity is still difficult, not many people are using offline web forms and the like, where being out in the field is what matters to get that data. So how do we start to bring some of that offline data into our assessments and evaluations? (There's a small sketch of the offline-capture idea just below.)

Impact assessment is another area where we're seeing a lot of technology shift: away from mere compliance or audit-perspective data and governance-based information, and towards impact itself. What's happening on the ground against our theories of change and our indicators?

And then knowledge and relationship management. A lot of this is still really manual and needs some innovation. How are we surfacing data for individual people in their day-to-day jobs, not just for point-in-time evaluations but more operationally? And on stakeholder engagement: how are we helping teams and evaluators to better communicate change, track evidence bases over longer periods of time, and keep that evidence chain intact so it can be validated? Then, when evaluators come in at points in time, they don't have to be involved in the day-to-day minutiae of the data capture or the MEL process, but can still see that history, to make sure all the results are viable and accurate at all times.

So, we've talked about some of the next generation of tools and the need for greater online collaboration for planning and strategy development: looking at tools like Lucidchart and other SaaS planning tools, and Miro to a larger extent, to bring more people in and get that buy-in from the very beginning.
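On that offline-capture question, here's a minimal sketch of one common pattern, offline-first queuing, written against a purely hypothetical collection endpoint (the URL and payload shape are assumptions, not any particular product's API): each response is saved to a local SQLite queue first, and the queue is flushed to the server whenever connectivity returns.

```python
# Offline-first form capture sketch: write locally, sync when online.
import json
import sqlite3
import urllib.error
import urllib.request

DB = sqlite3.connect("capture_queue.db")
DB.execute("CREATE TABLE IF NOT EXISTS queue (id INTEGER PRIMARY KEY, payload TEXT)")

SUBMIT_URL = "https://example.org/api/responses"  # hypothetical endpoint


def capture(response: dict) -> None:
    """Write locally first, so nothing is lost when there's no connection."""
    DB.execute("INSERT INTO queue (payload) VALUES (?)", (json.dumps(response),))
    DB.commit()


def flush() -> int:
    """Upload queued responses oldest-first; stop at the first network failure."""
    sent = 0
    rows = DB.execute("SELECT id, payload FROM queue ORDER BY id").fetchall()
    for row_id, payload in rows:
        req = urllib.request.Request(
            SUBMIT_URL,
            data=payload.encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        try:
            urllib.request.urlopen(req, timeout=10)
        except urllib.error.URLError:
            break  # still offline; leave the rest queued and retry later
        DB.execute("DELETE FROM queue WHERE id = ?", (row_id,))
        DB.commit()
        sent += 1
    return sent


capture({"form": "household_survey", "location": "Village A", "participants": 12})
flush()  # call periodically, or on a connectivity-restored event
```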
In the communication space, we're seeing the need for all of that impact strategy, reporting and analytics to be in a single location, so it can be presented to websites and used by teams regardless of where they sit in an organisation. They might be in a MEL team, they might be external evaluators, or they could be TVET trainers or whoever; being able to open that data insight to everyone is really critical for buy-in, and it also helps people understand how their day-to-day roles relate to the end impact goal statement, or whatever the program is trying to achieve. Also, with changes to communication systems, we're trying to add more context to the data. Where you have qual results that's a lot easier, but when you're capturing hard quant data it's harder to understand it in the greater context, because you're more removed from programs, or geographically removed. Having the ability to include contextual data alongside those trends makes it a lot easier for people to understand. And there are some new data-capture mediums that I'll talk about shortly. The other parts are more specific to data capture and adaptive management, but I'll talk in the next couple of slides about changes in machine learning and in knowledge and relationship management systems that are really helping to improve these areas.

There are a few areas of interest we've been keeping track of, specifically around PMEL platforms. Torch, which is one of our products, Tola Data in the EU and DevResults in the US are all based around capturing data against your project strategy, so that teams, not just evaluators or MEL teams, can understand how that data affects the overall approach of a program. That really helps with adaptive management, because you can track how the data is changing over time against your project strategy, use it for benchmarking, and justify any changes to that adaptation all in the one place, so you don't have to worry about who has which version or who has the latest dataset to back up those changes. We can also add client approvals, just to cover off any compliance-related areas.

There are some people doing really interesting work on video results capture; Fogtail is a good example of this. Adding video context to the results of our programs gives an extra level of fidelity, in an area that was historically really difficult, especially for development programs where bandwidth is a challenge: you used to have to send film crews or people out to make those videos and cut them together, but now some emerging tools are making that really easy. So that's an area to keep an eye on. And there are machine learning platforms: AWS from Amazon, and Microsoft, are doing a lot of work on sentiment analysis, automated speech-to-text, and automated translation for teams that need to work in multiple contexts. We can use these new machine learning tools to simplify and automate that process, and it also makes it a lot more accessible to teams in the field. We can talk more about that as well.
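As a concrete example of those managed machine learning services, here's a minimal hedged sketch using AWS Comprehend's sentiment analysis on qualitative responses. It assumes boto3 is installed and AWS credentials are configured; the region and the example responses are invented for illustration.

```python
# Sentiment analysis of qualitative survey responses via AWS Comprehend.
import boto3

comprehend = boto3.client("comprehend", region_name="ap-southeast-2")  # assumed region

# Invented example responses from a qualitative survey question.
responses = [
    "The training completely changed how our cooperative plans its season.",
    "The sessions were too short and the materials never arrived.",
]

for text in responses:
    result = comprehend.detect_sentiment(Text=text, LanguageCode="en")
    # Sentiment is POSITIVE/NEGATIVE/NEUTRAL/MIXED, with per-class scores.
    print(result["Sentiment"], result["SentimentScore"])
```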
So, just quickly, from a project perspective: we've developed a platform and are now looking to increase the levels of hierarchy in how we extrapolate and aggregate data, but we really focus on individual teams and stakeholder engagement. How do we democratise that data and increase transparency, so more people can use that impact data, run experiments, and try new interventions with it, where previously it had been locked down to the MEL teams or the evaluation teams, and apart from milestone reports it wouldn't get out to the wider team?

We tie your impact results directly to your strategy, as opposed to having them separated, where you've got your theory of change, log frame or impact model over here, your indicators over there, and then your results against those somewhere else. We combine all of them, tying your results through your indicators directly to your strategy, so you can see how your theory of change is adapting over time and how that's playing out in your results, rather than everything being isolated. As I said, everything is centralised so everyone can see the context at all times. There are some project management tools to make the MEL process a lot simpler: everyone knows what's due when. We also have in-house MEL expertise, so people using our platform can lean on our experience in different areas, whether that's structuring indicators or, if you have tooling requirements but don't have surveys developed already, we can help with that as well.

And we're now looking into the next stage: how do we help teams that are tracking their interventions make them better? Whether that's recommendations for indicators, changes to your theory of change or impact model, or activities that aren't performing as well as others, we can use that larger dataset to start recommending changes to your approach for different contexts and locations. And also surfacing research: if you opt in to our open-source datasets, we can make that data capture a little easier and share research between teams. If you're using this across multiple programs or interventions, they can start to share data between them to benchmark and adapt their approach.

We'll have a discussion shortly about the different challenges you might be having with remote evaluations. There are a lot of benefits and disadvantages to the different technologies in this space, and you can definitely book a meeting through our website to talk about anything remote-evaluation based. If you're interested in having a look at Torch, definitely book a demo, and you can also connect with us on LinkedIn. So I'll stop there and we can start the Q&A, if anyone has any questions or comments. I'd also say, if there's any interest in the newer platforms that are starting to make some of these challenges easier, and that might not necessarily be Torch, definitely get in touch as well.

Thanks, Luke.
And yes, if anyone wants to jump into the discussion, please use the raise-hand function; it would be great to hear from you and your questions. It's a good opportunity to raise any technology problems, which Luke might be able to help with.

Robert, I might just answer Alison's question first: we can actually run a really quick demo if people are interested; I have one available that can show you how some of this works, if you like. Yeah, sure, that's cool. Maybe we'll save the Q&A for a few minutes, quickly jump into that, and then come back to it. All right, let me just do that.

While you're doing that, and related to the demo, there's a question asking whether you can describe the key functionality of Torch and compare it to platforms such as Amplify and SocialSuite, which I'm hoping might mean something to you. Yeah, of course. Things like SocialSuite are really interesting, and there's a lot of work in the ESG space as well, where we're seeing a lot of technology adoption that's really focused on governance reporting and those sorts of things. We're looking to place Torch at the impact measurement end rather than at ESG specifically, although it's interesting to tie it all together, because some governance reporting is more straightforward, whereas it's more difficult to see tangible benefits without more in-depth evaluation; so it's starting at the other end of that. All right, let me just share this. Just quickly, what's ESG, for the audience? Sorry, one second. There you go; sorry, I was trying to share my screen at the same time, and I think someone's helped me out in the chat: environmental, social and governance.

OK, so this is Torch. This is test data, so don't get too hung up on what the actual results are. What I'll do is jump into our example case study project. When we talk about a theory of change, an impact model or a results chain, you're normally going to be looking at something like... actually, let's get another example. You might be familiar with something like this, and sorry, my screen's quite small because I'm working off my laptop. We might look at a results chain like this, where you start by mapping your activities, then move through to your outcomes, and, although in this example they're not using a traditional theory of change hierarchy, track your project strategy all the way up to your impact statement.

Where we start to break this down, away from the traditional model, is that we take the attribution out so teams can see, really simply, an overview of the project. So this is your overall impact statement, and you've got a number of indicators associated with it. This is where we talk about attaching indicators directly to the strategy, as opposed to having them in a separate evaluation framework, so you can really see which indicators are relevant at which levels, and how they might evolve as you go up the hierarchy. At the very lowest level you might track, let's grab an example here, yes: number of unions invited to training; but further up the chain you might track
the number of participants overall that attend training, as an example.

Where this comes into play, if we go back to this example, is that we help teams first structure their programs. We've developed a wizard that explains, if you're using a theory of change, how to structure a goal statement. That might come from a donor or a stakeholder group, or it may not, in which case you may have to set it yourself; it might be related to one of the Sustainable Development Goals, those sorts of things. We help teams, through a simple wizard process, to structure out the project, and as you add elements we mark them off. If you're using one of our templates, you also get an instruction panel on how to manage each of those levels. For simplicity we also provide an evaluation framework, which is just a list of all your indicators and allows you to export them to donors or stakeholders who are interested.

Then you've got your diagrams, and where we help with stakeholder engagement here is that each element of your theory of change or impact model has status indicators across the top. You might see an at-risk element that hasn't had results recorded against targets, or one that's overdue; you've got elements that are completed, falling behind, and at risk. If there's an element whose targets are much higher than the results you're capturing, we flag that automatically and say: this is an activity you need to look at.

From there, we automatically build a performance view for you. These are automated, out-of-the-box dashboards that people can use to track different indicators, and we use a tagging system to create cross-cutting views. In the international development space we'd talk about, say, gender or GEDSI-related indicators, and we can use tags to automatically cut some of these views for you. Really what this is doing is saving teams time: they don't have to pull the data into Power BI or Tableau to get these fairly straightforward views.

Then you've got document management, so you can upload designs, reports and so on at the project level, and you can also upload documents at the results level. So you might record the aggregated rows of, say, a survey or another instrument you used, and then upload the evidence for those; this is how we start to track that evidence base over time. We've got user access controls at the project and platform level, so you can give stakeholders read-only access to a specific project, or give your teams full access to all projects; it just helps with management. And we have a version control system where you can save versions and variations of your theory of change or impact model, so you can go back over time and look at what your theory of change looked like at the start of the program versus the end, or create variations at points in time. Say, as COVID hit, your project needed to pivot to a COVID recovery model: you could create multiple forecasts of your theory of change, take them to your donor or stakeholders and say, this is how we expect these changes could play out, and then choose one of those forecasts to become your core project. The forecasts are fully featured as well, so you can go through and look at how those targets and results would change over time.
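To make those status flags concrete, here's a minimal sketch of the kind of rule described: an element is flagged at risk when recorded results fall well short of target, or overdue when nothing has been recorded by its due date. The field names, the 50% threshold and the data are illustrative assumptions, not Torch's actual schema.

```python
# Sketch of automatic status flags for a theory-of-change element's indicator.
from dataclasses import dataclass, field
from datetime import date


@dataclass
class Indicator:
    name: str
    target: float
    results: list[float] = field(default_factory=list)
    due: date = date.today()

    def status(self, at_risk_ratio: float = 0.5) -> str:
        achieved = sum(self.results)
        if achieved >= self.target:
            return "completed"
        if not self.results and date.today() > self.due:
            return "overdue"  # nothing recorded by the due date
        if achieved < self.target * at_risk_ratio:
            return "at risk"  # results far below target: flag for review
        return "on track"


unions_invited = Indicator("Number of unions invited to training", target=40,
                           results=[8, 5], due=date(2022, 6, 30))
print(unions_invited.status())  # "at risk": 13 of 40 is under the 50% threshold
```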
From here, I'm just going to stop sharing for a second, because there's one other feature I want to quickly show, but I need to do one thing first. And all of this is exportable: if you wanted to pull it into Power BI or something like that, we allow that really simply as well. The idea is just to make it easier for teams, so they don't have to pull it into something else to do that evaluation. Okay, let me just share this again.

For each individual project we can also create impact insights. These are dashboards you can create simply, to give an overview of your results. You might want a dashboard just for your stakeholders; this was a volunteer program, so it shows the total number of volunteers broken down by location. All of this is built around the ability to capture results and set targets for indicators at disaggregated levels. You might have five, ten, fifteen disaggregations of a specific indicator, and we record results against any of them so we can build these visualisations over time. If you go into one, you'll see we track quant and qual data in the same place, though we've definitely focused more on quant data capture at this stage. You can see here disaggregations broken down by category and then by name, baselines set at the start of the program, and the ability to add descriptions, so that people who weren't the designers of the program can still understand them; it just translates things a little for non-evaluators or non-MEL teams. You can see this disaggregation is broken down by gender and location. We also have an approval, or certification, process: when people record results, they're centrally certified so they can be QA'd before they appear in any stakeholder or client reports, and that's all set up through our user access system. But that's really it. This is the starting point for all of this, and as we get more access to data and research, we can start to use some of those machine-learning-based tools to surface more recommendations and so on.

Terrific, absolutely. Okay, there are a lot of questions. Yes, I've been keeping an eye on the chat and pulled them out for you, so I can guide you through the questions. Okay, cool. So if we start with Hannah: this is designed to be really lightweight, so we know it works in low-bandwidth locations throughout the Pacific and Indonesia, but it is an online system. Any time we talk about offline data capture, though, you can take that offline data and import it directly into Torch. And on data capture, there's one later about where the data is stored. Yes, so each of these tenants is designed to be managed in isolation. The reason we did that is to get around client data sovereignty problems: we can deploy a tenant in an Australian secure data centre or in a country-specific data centre, and we work with clients on a case-by-case basis to move that around.
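Coming back to the disaggregated results shown in the demo, here's a minimal sketch of the underlying idea: each result row carries its disaggregation values (gender and location here), and totals are rolled up by whichever category a visualisation needs. The categories and numbers are invented for illustration.

```python
# Rolling up disaggregated indicator results for a simple visualisation.
from collections import defaultdict

# Each result row carries its disaggregation values.
results = [
    {"gender": "female", "location": "Suva",    "count": 14},
    {"gender": "male",   "location": "Suva",    "count": 9},
    {"gender": "female", "location": "Honiara", "count": 11},
]


def rollup(rows, by):
    """Aggregate result counts by one disaggregation category."""
    totals = defaultdict(int)
    for row in rows:
        totals[row[by]] += row["count"]
    return dict(totals)


print(rollup(results, "gender"))    # {'female': 25, 'male': 9}
print(rollup(results, "location"))  # {'Suva': 23, 'Honiara': 11}
```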
Cool, and I think the next one was: is it useful for routine program monitoring as well as evaluation, or just for standalone evaluation? Yes, the idea was to make it operational. You might only be capturing data at specific points in time, because you might have a survey that goes out annually or those sorts of things, but we really want this to be an operational system that people can use to adjust their programs month to month if they have to. It's really about when those indicators are recorded; for example, we're working on a finance integration at the moment, so that activities can be tagged against your finance system and come through monthly, so you really can use it for routine monitoring.

The next one: what do you recommend for complex human systems innovations, or where a linear theory of change may not be appropriate? We haven't done a lot in this area, but with the customisation of your programs you can set up your own hierarchies or structures, so you can capture things the way you or your team need to. And especially where you're doing a lot of adaptation, that version control process really helps guide you through. But it's definitely not an area we focused on initially, and it would be great to talk more about it if you'd like.

Okay, the one from Alison, around lists of indicators, et cetera. The list of indicators, when you look at your evaluation framework, is fully searchable and filterable on any of the data sections you fill in. The standard set is based on DFAT's indicator categories, but anywhere you click a heading, or open the advanced search section, you can filter on any of the fields you capture. And that really helps: when I'm working with teams not using Torch, it takes a lot more work to understand what indicators they're using, and what instrumentation they're using and when; this really helps filter and cut that down. The other part of that question was about transferring a project to a client's account. At the moment Torch tenants are designed to be isolated for an individual project, and we did that exactly so you could transfer it, whether to another contractor that took over the program or whatever else happens through the contract process. What we're working on now is an overarching platform that sits across multiple Torch tenants and aggregates that data up, so if you've got, say, ten clients, you can get access to all of them, or pass them off as needed, and export. You can export any of the diagrams and any of the data into Excel or PDF. Editable formats are a little more difficult for the diagrams specifically, beyond the Excel model, because diagrams are quite difficult to manage programmatically in terms of where things sit; but it's an area we'll be putting more effort into as well. On integration, we're currently talking to a survey platform that does a lot of machine learning work, so that will be our first data integration. At the moment we use a simple Excel-format import to take aggregated data from, say, Qualtrics or SurveyMonkey, or whatever it happens to be, into the platform. But that's definitely an area we're actively working on.
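As a sketch of that import path, here's roughly what mapping an aggregated survey export onto indicator records can look like; CSV stands in for the Excel format here, and all the column names are hypothetical rather than Torch's actual import schema.

```python
# Mapping an aggregated survey export onto per-indicator result records.
import csv
import io

# Stand-in for a file exported from a survey tool (columns are hypothetical).
export = io.StringIO(
    "indicator,period,value\n"
    "participants_trained,2022-Q1,120\n"
    "participants_trained,2022-Q2,96\n"
)

indicator_results: dict[str, list[tuple[str, float]]] = {}
for row in csv.DictReader(export):
    indicator_results.setdefault(row["indicator"], []).append(
        (row["period"], float(row["value"]))
    )

print(indicator_results)
# {'participants_trained': [('2022-Q1', 120.0), ('2022-Q2', 96.0)]}
```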
Can you aggregate data across different programs? Yes; this is what I was talking about with that organisational layer. At the moment, if you have multiple projects on the one Torch tenant, it's not how it's designed, but you can definitely aggregate them at an organisational level. There are two different dashboards; they work exactly the same, but they're limited in scope from a security perspective. At the platform-level dashboard, you can add visualisations and datasets regardless of project, using our tagging, so it can automatically pull those aggregates up. For example, take value for money: if you had a value-for-money tag and used it across indicators for all of your programs, you could create a VfM visualisation, and it would pull all of that data together, with all of the disaggregations, to show you that at an organisational level. This is an area we're really pushing forward in a big way, because I think one of the problems at the moment is that it's a lot more difficult to do that than it should be.

Thanks, Luke. I think that's addressed the pretty wide-ranging questions from the chat panel; thanks everyone for your great engagement. There was a question earlier, prior to the demo, around which tool you'd recommend for the transcription of interviews; I think you mentioned machine learning and developments in the speech-to-text space. There are specific tools and platforms that do it out of the box, but AWS provides probably one of the most advanced products: you can just send it audio or video files and it will automatically transcribe them for you. (A short sketch of that workflow is included at the end.) But definitely, if you want more information, we can take that one offline and unpack it a little more.

So we've got a few more minutes for anyone who wants to jump in with any other questions or discussion points. I was wondering, as we went through the demo, where the use of technology, or Torch in particular, fits with the capability building of the evaluation teams, or of the non-evaluators you're encouraging to use the tool. Do you find it's "build it and they will come," and you build people up afterwards, because the technology is hugely capable but people might not be? Or do you build people up first and then say: now you've got that in place, here's the technology to unleash your potential? Yeah, I think it's a little bit of both. From a Torch perspective, the more you can bring people along at the planning and design phase, and have them really understand how a project comes together within the context of the platform, the better, because you're guiding them through how it all fits together within the tool you're using, as opposed to all those different stages being isolated and put together in a file share or something like that. What we've also tried to do with Torch is build in translation of the industry jargon we use. So while you build a project within Torch using the theory of change approach, you can call your project a theory-of-change-based evaluation project, or you could call it whatever you want for the local context.
And similarly with the hierarchy levels: working with FCDO programs, it was really clear that while they use a theory of change approach, they customise the hierarchy language to the local languages they're working in, so people can understand it more effectively. A long-term outcome means something different to different people, so using local language really helps with a lot of that challenge. We also deliberately made the system as simple as possible, so it isn't intimidating for teams to pick up: it's designed to be quite hierarchical in how you capture your data and then how you visualise it, so it's not overly complex, and that's definitely helped as well. The other part is having everything inside the one system, so people can really understand how the results relate directly to the project strategy, and can see how their day-to-day job affects those metrics, because they see the data capture all the way through.

Terrific, thank you. And just mindful of time, with a couple of minutes to go: thank you again for the presentation and for addressing everyone's questions, and thanks everyone for your engagement. I've just launched the feedback poll, but we do have a couple more minutes, so firstly, I would really appreciate your feedback on today's session. If you've got any last-minute questions or comments, please jump in now; the feedback poll should be coming up for people now. I'd just like to say thanks to everyone for coming along today, and if you want to learn more about Torch, or have any technology challenge questions in this area, definitely reach out.
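Following up on the transcription question from the Q&A, here's a minimal sketch of the AWS workflow described: audio is uploaded to S3 and handed to Amazon Transcribe, which produces a transcript file. It assumes boto3, configured AWS credentials and an existing S3 bucket; the job name, bucket and file URI are placeholders.

```python
# Automated interview transcription via Amazon Transcribe.
import boto3

transcribe = boto3.client("transcribe", region_name="ap-southeast-2")  # assumed region

transcribe.start_transcription_job(
    TranscriptionJobName="interview-042",  # must be unique per job
    Media={"MediaFileUri": "s3://my-bucket/interviews/042.mp3"},  # placeholder URI
    MediaFormat="mp3",
    LanguageCode="en-AU",
)

# Poll until the job finishes, then fetch the transcript location.
job = transcribe.get_transcription_job(TranscriptionJobName="interview-042")
status = job["TranscriptionJob"]["TranscriptionJobStatus"]  # QUEUED/IN_PROGRESS/COMPLETED/FAILED
if status == "COMPLETED":
    print(job["TranscriptionJob"]["Transcript"]["TranscriptFileUri"])
```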