 So I'm going to talk about standard operating procedures and testing procedures, and then Olaf will discuss the implementation tool kits a little bit as well and give some demonstrations. All right, so we're just going to go over these, and in particular my presentation will be quite quick. I'll give Olaf more time. So we're going to describe what a standard operating procedure should be, some of the procedures involved in testing, and different types of testing that are available. And some of this actually builds on what the team showed you. They had some testing stuff. Actually, we have different recommendations, but we'll discuss that in a moment. Okay, so the first thing we're going to do is just try to differentiate between a standard operating procedure and training or learning material, because these are often used interchangeably, where we say an SOP is actually a learning material. Okay, we're going to discuss the difference, and I'm going to show some examples. So standard operating procedures, they're not just learning material, they outline a specific procedure, right? Activities required to complete a task based on some type of internal best practice that you have developed, right? They can reference additional resources, okay, like step-by-step guidance. The Palestine team quickly showed a user manual, for example. So you can reference those materials, but that is not a procedure, okay? So just quickly, and I'll show some examples in a moment. So I'm just going to compare on the left side, I have the training material, the right side's the SOP, right? So the training material, it describes this step-by-step process of performing a specific function, whether that's a user manual, whether this is a manual for configuring something in the system, whether it's a manual for analyzing data or reviewing data quality, okay? 
The SOP can link to the training material, but it describes processes, procedures, roles, functions, and other such tasks, where people have to know how to perform that task already, okay? So you can't implement the SOP if people don't know how to do the task, but at the same time, the SOP is not necessarily just a description of small tasks, okay? And I'll show some examples to compare in a moment, okay? So there's kind of a general process here that I discuss with SOPs, and it might not be what you're used to necessarily. Maybe if you do have SOPs, maybe you just write them and share them with people, and then you kind of expect people to follow them, but in practice I've found this does not work very well, okay? So the first step here is reviewing the topic, and by this I mean actually understanding what it is that you need to do, right? So whether you're reviewing data quality, whether you're learning how to make a new tracker or case-based program, or whether you're implementing something else, okay? You need to actually understand how that's done. You also need to review best practices, guidance, etc., and you need to kind of bring this together. In parallel, even though an SOP is not training material, you also have to train people how to do the task at some point, right? You can't have an SOP that describes specific functions if the users don't understand how to perform those functions, okay? As you're gathering all this information, you draft the SOPs, and then you perform training on the SOP, okay? The people have to know what they're doing beforehand, but there should be separate training on actually implementing those procedures. You don't just print them out or send them by email or, you know, share them by PDF. Maybe people read them, maybe they're kind of halfway implemented, but chances are you're not going to be able to scale that process up. You probably won't be very happy with the overall outcome, okay?
And then the last thing is monitoring their implementation, okay? You have to check that these are working, follow up with staff to make sure they're performing the functions that you've requested, okay? And for this, you need to develop some type of framework to actually evaluate them, and I have an example of this as well, okay? So I'll try to show examples in a moment, okay? So I've described some of this, this first part in particular I've described, right? But the whole idea is if you're implementing some type of procedure and someone doesn't know about the concept, then you're not going to be able to kind of make it work in practice. The example I have is from data quality. If you say to people, find outliers in my system every month, and they have no idea what an outlier is, or you've not shown them how to identify outliers, chances are that procedure is not going to work very well, right? The second part on training about the tasks, this ties into this, right? So if they don't know what it is, how will they actually perform that task? So even though they're separate pieces of material, they are closely tied together, okay? So when you're writing the SOP, you kind of have to take all this information you've gained. It's a bit of an internal research project, right? You have to know, locally, how are things actually done? Because you don't want to write a procedure that's completely out of touch with the reality of the situation, okay? You also have to understand some best practices. So what are some global principles that may improve your current workflow, you know? What is realistic to actually modify? Doesn't mean you change everything, or you just grab some document off the internet and copy and paste it into your document. That won't work, okay? But it needs to be kind of this marriage between the two, okay? 
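The outlier example above can be sketched in code. This is just an illustrative stand-alone check, not the outlier analysis DHIS2 itself provides; the threshold and the sample values are invented for the example:

```python
# Hypothetical sketch of the "find outliers every month" task from the
# data-quality example. The threshold of 2 standard deviations and the
# sample values below are invented for illustration.
from statistics import mean, stdev

def find_outliers(monthly_values, threshold=2.0):
    """Return (month_index, value) pairs more than `threshold`
    standard deviations away from the mean."""
    if len(monthly_values) < 3:
        return []  # too little history to judge
    mu = mean(monthly_values)
    sigma = stdev(monthly_values)
    if sigma == 0:
        return []  # all values identical, nothing stands out
    return [(i, v) for i, v in enumerate(monthly_values)
            if abs(v - mu) / sigma > threshold]

# Twelve months of facility reports; month index 7 is suspiciously high.
values = [110, 105, 98, 112, 101, 95, 108, 640, 99, 104, 107, 100]
print(find_outliers(values))  # → [(7, 640)]
```

A staff member running this kind of check every month, per the SOP, would then follow up on each flagged value rather than just noting it.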
And then you also need to think about those specific skills you're focusing on and how you can explain them in a kind of very systematic way to people to make sure they can implement them. And then once you kind of write them up, try to test them internally just to see if they make sense, right? Because sometimes the first draft in particular, it's kind of disjointed, it doesn't have a good flow, you're not able to identify who's responsible for doing specific operations, and it doesn't really work well, okay? So then once you've kind of developed and tested them, you train on the specific procedure, okay? And training on this procedure is not the same as training on the task. People involved in the procedure should already know how to perform those tasks that you are describing. If you are saying that there are specific procedures for making a new dataset, they should already know how to make a dataset, but they have to apply the procedures that you're introducing to dataset creation, okay? It also tends to be useful during this process then to inform people what their role will be, right? So if you have different levels of your system that will be responsible for different functions, this might be a good time to discuss this with them as well, okay? And then after you perform the training, you typically can't expect everyone to just follow along, right? It's much more of a collaborative exercise, right? So you want to understand where they're having trouble, support them, and make sure things are kind of progressing the way you want. And there are some tools, I'll show you an example of one that we can use to kind of monitor their progress, okay? So in Moodle, I've actually just put in a couple of different documents here, and I'm just going to pull them up, okay? Oops, sorry, it's logged me out. 
So this is one example of an SOP, and actually I wanted to use this as something to differentiate between the kind of SOPs I'm describing and maybe SOPs that are in the health system, right? So this was provided by our colleague from Rwanda, it's from the Ministry of Health, and I'm just going to scroll to the table of contents real quick, okay? Okay, so these are kind of non-specific procedures, right? It's just kind of saying this is how we report our data, how we collect our data, it's describing the system overall. And that is good, that is needed, okay? But what we're actually talking about is very specific procedures for managing the DHIS2 implementation, looking at specific tasks, looking at specific functionality, looking at specific roles and responsibilities, right? So how do we differentiate that? I'll just pull up the second one. Here's an example. This is an example of creating aggregate data inside of DHIS2, okay? And you can see it's not really like a training manual. So I'll pull up the training manual too, so we can compare the two in just a moment, okay? It doesn't describe in great detail what these are, but it describes what to do when you're actually making them, right? And it's not just a matter of clicking on an item and saying, okay, I've made it, I'm happy, let's move on, or however you happen to configure these items in the system, okay? It has very explicit instructions: at this stage, if you find this, don't do this, okay? If you find that there's nothing, then proceed, okay? You can see these kind of bolder areas on this, right? So the whole idea is to identify who can do this, at what time they can do it, and the reason this SOP exists. For example, I mentioned yesterday the case where we had many users in the system who had access to do this kind of thing, and without proper communication and coordination, it created a lot of trouble.
So DHIS2, in general, is very easy to configure, okay? The problem comes with the sustainability portion of it, three years from now, four years from now, whatever, okay? As your implementation matures, if there is no coordination, if there are no proper procedures, the system can become very challenging to manage, right? So those people who were trained initially and may have some basic configuration skills will get lost in this sea of randomness in the system, where it will be very heavy and difficult to manage. So if I'm looking at this, it just outlines the procedure for creating various items, the different objects in DHIS2 that a person can create. This is very different from the training material that's associated with that specific concept. So we'll scroll down a bit to where it actually starts, okay? So here we describe the step-by-step procedure. It's the same item, the same object I'm describing, but now I show the user interface. I show where to click in the system. I give very detailed descriptions of where to go and what to do, okay? And then we contrast this with this, which is just a bunch of quick bullet points, right? It's not a very long document. It's like three pages long, I think, okay? Because user manuals can be very long. Obviously, there's lots of screenshots and explanations, okay? These are not meant to be long. They're meant to be digestible, right? Something that can be easily read, all right? But very clear instructions on what to do, right? And there is some extra information just about general procedures and processes, okay? So that's kind of the idea and the differentiating factor here. So that five-step process that I was describing, it takes a little bit of work, but you end up with something that's brief and to the point, right? That people might actually read. Your chances of success are a little higher this way, right?
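The kind of "if you find this, don't do this; if you find nothing, then proceed" instruction in the SOP can be sketched as a simple pre-creation check. This is a hypothetical illustration; in a real DHIS2 instance the lookup would go through the metadata listing, whereas here the existing names are just passed in as a list:

```python
# Hypothetical sketch of the "check before you create" step in an SOP
# for aggregate metadata: search for existing names first, and only
# proceed if nothing similar already exists. The names are invented.
def can_create(name, existing_names):
    """Return (allowed, reason). Block case-insensitive duplicates so
    two people don't create the same data element twice."""
    wanted = name.strip().lower()
    for existing in existing_names:
        if existing.strip().lower() == wanted:
            return False, f"'{existing}' already exists, coordinate before creating"
    return True, "no duplicate found, proceed per the SOP"

existing = ["ANC 1st visit", "ANC 4th visit"]
print(can_create("anc 1st visit", existing))  # blocked: duplicate
print(can_create("ANC 8th visit", existing))  # allowed
```

The point of encoding the rule is exactly the coordination problem described above: without a duplicate check, many users with configuration access quietly create overlapping objects.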
And I mentioned this idea of monitoring their implementation, because that is also very important after you draft everything and train people on this stuff. You have to check if it's working, right? And there are various ways you might do this. Maybe if it's something specific inside of DHIS2, you know, you can check the actual configuration or something. There are various tools. But it's always a good idea to come up with some kind of framework to evaluate if behavior is actually changing, if things are actually being done. The example I have here is for data quality. So for example, if I develop some type of procedure on data quality, I might just run through this checklist, maybe in a month or two's time, and just check with this person: are they performing the actions that I've outlined in the procedure or not? And if they're not, what do I do? What kind of support do they need to make this happen? All right? So we don't want to forget this monitoring part, because it helps us determine if things are actually working the way we expect, and not just assume: I've made the SOP, I've given it to everybody, everyone's going to follow it, right? In reality, that's probably not the case. Okay. So I'll just move on to testing. I just wanted to give some basic background on the SOPs, and I'll do the same for testing. So similar to the SOPs, we have a bit of a process for testing: we review, we plan, we design, we execute, and we report. So the first part, reviewing, is very similar to what I described when discussing the standard operating procedures, okay? In order to test something, whether it's in the field or internally, you need to know how it works, right? So if you're familiar with the topic, maybe you're okay, right? You don't need to refresh your skills all the time, every day.
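The monitoring framework described above can be sketched as a scored checklist, run per person a month or two after training. The checklist items here are invented examples for a data-quality SOP:

```python
# Hypothetical monitoring checklist for SOP follow-up. The items below
# are invented examples of actions a data-quality SOP might require.
checklist = [
    "Reviewed monthly completeness report",
    "Ran outlier analysis for the reporting period",
    "Followed up with facilities on flagged values",
    "Documented corrections made in the system",
]

def score(observed):
    """observed maps checklist item -> True/False; returns share done."""
    done = sum(1 for item in checklist if observed.get(item, False))
    return done / len(checklist)

# What a supervisor observed for one staff member after two months.
observed = {
    "Reviewed monthly completeness report": True,
    "Ran outlier analysis for the reporting period": True,
    "Followed up with facilities on flagged values": False,
    "Documented corrections made in the system": False,
}
print(f"{score(observed):.0%} of SOP tasks performed")  # 50%
```

A low score is not a punishment signal; per the discussion above, it tells you where the person needs support.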
But if you're not as sure, especially maybe if it's a new feature, if it's a new public health program, if it's a new type of visualization or report, especially anything new, make sure you know how it works before you come up with some type of procedure for testing it, okay? Then I would suggest you prepare. So the second point is plan. That means preparing some type of timeline for testing. And this should match the goals within your implementation, right? You might be under a lot of pressure. Make this work in two weeks. Make this work in one week, right? So your testing phase can't be five days long if you have a training, you know, six days from now, right? That's probably not going to work well for you. So you need to, yes, create a plan, but also be realistic and marry this with how much time you actually have in reality, all right? The third point, design, is to develop specific test cases for what you want to test. So you just list them out, okay? For example, if I'm doing a DHIS2 upgrade, I might have specific items that I want to test, or maybe if I'm implementing DHIS2 for the first time: can I enter data? Can I create a report? Can I delete an event, right? It seems very simplistic, but you just run through these simple operations and check if everything's working, right? Okay. So the fourth point is execute. This is where you actually take all those test cases and perform them, right? But you need to make sure that you're aware of how the feature should function, right? Because you don't want to think that something's not working, even though it is, and maybe it's a problem with the configuration. You have to differentiate between the software itself and the configuration that you've made. And then of course reporting this, right? So you want to document any issues that you find.
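The design/execute/report steps can be sketched as a small harness: list the test cases up front, run each one, and record a specific outcome instead of a vague "it doesn't work". The case names and checks here are placeholders; real checks would actually exercise the system:

```python
# Hypothetical test-case harness for the design/execute/report steps.
# Each case is a name plus a check function; the checks below are
# stand-ins (lambdas returning fixed values) purely for illustration.
def run_test_cases(cases):
    """cases: list of (name, check_fn). Returns (name, outcome) pairs."""
    report = []
    for name, check in cases:
        try:
            ok = bool(check())
            report.append((name, "PASS" if ok else "FAIL"))
        except Exception as exc:  # record the error and keep testing
            report.append((name, f"ERROR: {exc}"))
    return report

cases = [
    ("Can enter data",    lambda: True),
    ("Can create report", lambda: True),
    ("Can delete event",  lambda: False),  # simulated failure
]
for name, outcome in run_test_cases(cases):
    print(f"{name}: {outcome}")
```

Having the outcome recorded per named case is what makes the reporting step specific enough for someone else to act on.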
But try to be specific with this, because sometimes, for example, maybe in the future, you will use the Community of Practice. People will click on an app and say, I can't enter data, right? And we don't really know what that means. In your own setting, it will also be a little difficult to interpret that, right? So you want to be as specific as possible. I've put in a couple of steps that you can use to report each of these issues and that you can follow. Okay. So just real quickly, we have three types of testing that we recommend you go through when you're working with programs, okay? The whole basis of this is to get feedback on what you're working on, right? So if you receive feedback, whether it be good or bad, don't take it as criticism. Don't take it as something that's negative towards you. The whole idea is improving the ability of staff in the field to interact with whatever it is that you've made, okay? So we recommend internal testing first, then user acceptance testing and field testing. And I'll just describe these quickly. Okay. So internal testing is performed by the internal team. So you have a core team, let's say, and they're very familiar with whatever they've configured, okay? So they know the ins and outs, and they're kind of laser focused on making that work, right? This is a very small team, probably, you know, four to eight people perhaps, right? And at that time, if you find anything wrong, you should make some changes if you can, right? We also might want to test our infrastructure, okay? So I'm not going to get into all the specifics here necessarily, okay? But there are two types of testing you might want to consider. One is load testing, right? So for example, you might have an estimate of how many people will be logging into your system and interacting with it for the first time. Let's say you're implementing DHIS2, and to begin with, you're piloting this in 100 facilities.
There are two users in each facility. So you think on a normal day, there may be 250 people logging in: the facilities, plus the province, plus the national team, okay? So you might want to create some test cases around that, right? Around 250 people or so interacting with the system. We then have this second type of testing, which is stress testing. And this basically pushes your system to its maximum limit, right? Its maximum theoretical limit, right? So maybe even though you think at the beginning you only have 250 users logging in, you've created this infrastructure and it can handle 10,000 users before it crashes. Now you know that, right? So the whole idea is to give you some idea of how you can plan your infrastructure needs, right? If you know that it's only going to crash at 10,000 users and you don't see that happening for another three years, you know, then you might be able to say, well, my infrastructure is okay for now. I can add as much as I want. I can add users. I can do what I need to do. The system will be fine. And I've linked to a couple of technical tools that help us create these tests, for those of you who have more of an IT background. So, did I skip? I skipped one slide, I think. So after the internal testing, we do something called user acceptance testing, okay? You increase the pool of people outside of the core team, basically, though maybe still just at the national level. You still want to make this easy to manage, okay? But you do allow more people to be involved in this. And ideally, you want program people who are familiar with the data. But these people might not be very familiar, either with DHIS2 or with the configuration. Because generally, for a lot of the users you will train, whatever it is that you're making might also be new to them, right? So it's a good exercise to use people that aren't very familiar with what you're doing, okay?
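The load estimate from the piloting example a moment ago can be written out as a quick calculation. The split of 50 province and national users is an assumption to reach the roughly 250 total mentioned, and the 10,000-user crash point is the illustrative stress-test figure:

```python
# Hypothetical load estimate for the pilot described above:
# 100 facilities with 2 users each, plus province and national staff.
facilities = 100
users_per_facility = 2
province_and_national = 50  # assumed figure to reach the ~250 total

total_users = facilities * users_per_facility + province_and_national
print("expected users:", total_users)  # 250

# Illustrative crash point found by a stress test; comparing it to the
# expected load shows how much headroom the infrastructure has.
stress_test_limit = 10_000
print("headroom:", stress_test_limit // total_users, "x expected load")  # 40x
```

Load tests would then be designed around the 250 figure, while the stress test answers the separate question of when the infrastructure actually breaks.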
Now, here, you're more testing the user experience, right? Do things look nice? Is it easy for people to do things? Does it take too long, right? Are people getting frustrated just by using the thing? Are they giving feedback on modifications, okay? That type of stuff is important to gather, more so than thinking, well, I've made something, and it works. Therefore, you have to use it, right? That's not really how we want to operate, okay? So this last one then is field testing. This is where we actually go out, and we make sure things are working. Now, some people get stuck in this stage, so I'm just advising you to have a plan here, right? Give yourself enough time to plan this phase. Don't get stuck in the piloting phase forever, or in a state where you don't have enough money to scale if things are successful, right? So Anna talked about budget. It's important to consider this when planning this phase. And here you're testing a number of different things: maybe standard operating procedures like I mentioned, the infrastructure, the devices, training materials, maybe the configuration aspects, okay? And there are a number of ways in which you can perform this testing, right? You can go out into the field physically, okay? And you can just sit there and observe what people are doing on a daily basis. You can also ask them questions, and this can be a little unstructured, to begin with in particular, right? You just want a good feeling for whether this works or not with those people, right? If you're just guiding them the whole time, then, you know, maybe it works in that specific way that you've outlined for them. But when they start poking around, maybe things break. And that's good to know before you scale up the system, because people have a tendency to go where they shouldn't. And that's fine, because they should be allowed to in some sense, okay?
But we just want to make sure that when you have maybe, you know, thousands of people doing that, that's not the case, right? It's okay with a smaller group of people, especially when you're gathering that feedback, okay? You can also do very specific types of testing. So you can come with specific cases, sit down with a small group and say, okay, we're going to work through case one, two, three together; give me your feedback on case one, give me your feedback on case two, right? This is a very structured way of doing testing. It is the most costly, though, I would say, out of all the methods, right? So just be careful in terms of managing that. And then you can do remote testing. And, you know, during COVID, we had to do a lot of this. Its efficacy I'm not entirely sure of, but it can allow you to support those that are far out in the field, right? You do have to make sure, though, that if you're doing stuff remotely, people are always available for those staff that are involved in the testing, right? Because you don't want someone in the field to have a problem, and they have no one to call, no one to talk to, no one to give feedback to, okay? So at the least, when you do this remote testing, make sure there are people always available, okay? So, yeah, in brief, okay? SOPs are complementary to training material, right? They're not one and the same, but they work together. There are five basic principles that I've outlined for making SOPs: understand what it is that you're talking about before you write it up, okay? Train people on the specific tasks involved, draft and test the SOPs, train on the SOPs, and make sure you monitor their implementation, okay? And then for our testing procedures, we outlined five principles as well.
Review the subject matter, make sure you know how it works, plan for the testing, design your test cases, perform the testing, and report enough information to make sure people understand what's actually wrong when you find an issue, okay? There are three types of testing: internal testing, user acceptance testing, and field testing. And then there are different testing methods, basically: you can observe people, you can talk to them, you can have specific cases and questions, and you can support them remotely, all right?