Now, let's talk about how we actually improve data use. There is an old adage in information systems that has always been true: garbage in, garbage out. If we make it very difficult or burdensome to capture data, then it's almost assured that the analytics we are going to get out of the system will be poor quality or very limited. Here you see two examples. The first one shows how health facilities are being forced to record virtually all zeros. The act of entering dozens or hundreds of zeros into the system every month is very burdensome. It puts a lot of effort and strain on health facilities. You can imagine a scenario where it becomes very easy to accidentally put a one in front of one of these zeros, and so what was meant to be zero becomes 10, or what was meant to be zero becomes 100. It becomes very easy to make mistakes. Why are we forcing everyone to enter zeros? We need to reduce the burden of data entry as much as possible. The other example is registries. You see an example here from Zimbabwe where health facility workers are required to update many registries on a regular basis, monthly or even continuously every day. This reporting burden is, again, detrimental to data use. We force all of these potential data users to spend their time and energy just putting data in. We need to streamline data entry as much as possible. We also need to standardize the tools and reports that we're using. We need to make sure that new partners are not coming in and introducing new tools all the time, and that programs are aligning and harmonizing with each other. We need to make the tools as simple and efficient to use as possible. We also need to make sure that our electronic data capture forms are matching or supporting the workflow of our paper tools.
Many countries are still using paper to capture the data initially and then transferring that to electronic data sets. It's important that those electronic data entry forms look similar to, and follow the same workflows as, the paper forms so that you're more easily able to transcribe the data from one to the other. The biggest cause of data quality problems is transcription errors, where a person sees a number on a paper form and then enters a different number on the digital form. It's very, very easy to do this, and one of the ways to minimize it is to make sure that our paper tools look very similar to our digital or electronic tools, and that our electronic tools support a workflow that eases data entry. The next key point for improving data use is that data is only as good as how it is presented. Here we see a picture of a pie chart that is more art than useful analytics. It's very easy in any kind of analytics tool, including DHIS2, to make very poor analytics. Arguably, it may be easier to make poor-quality analytics than to make good-quality, useful analytics. Here are a few best practices to help ease the development of good-quality analytics. The first one is: keep it very simple. If you produce complex analytics, it's almost a guarantee that they won't be used. When we say keep it simple, we mean simple bar charts, column charts, line graphs, single value charts, very simple pie charts. Not what you see here, which is an example of the opposite. Sometimes we think of the six-year-old rule: if you can explain it to a six-year-old, then it's a good chart to put onto a dashboard. It's not to say that we should treat everyone like children. Of course, there are many professionals and experts out there using these tools. But the point is that people are pressed for time. People have limited capacity at any given moment.
And we have to make these analytics as simple to understand as we possibly can. Not because users have the capacity of a six-year-old, but because they are being stretched very thin, especially in the health sector. We need to appreciate that the simpler we make it, the easier it is for them to use it. The next point is that we need to target our audience. Again, we appreciate that there are different data needs at different levels, by different users. We need to make sure that the analytics we're making speak specifically to those users, and that we're not just passing down analytics on the assumption that analytics used at higher levels are also useful at lower levels. The third point is that we need to be careful about choosing the appropriate chart type. We see this to be a big issue, and we've done research showing that it's very common for users to choose inappropriate chart types. What exactly do I mean by this? Well, here in this example, you see a pie chart that is completely unusable. But if this was converted to, say, a bar chart or column chart, where each one of these health facilities was represented as a bar instead of a slice of the pie, then we would be able to understand much more easily which facilities have higher RDT positivity rates and which have lower ones. We could even sort them and see the lowest ones or the highest ones. We also see a very common issue with the use of line charts. A line chart is meant to show a trend over time. It is not meant to show a trend across multiple localities or org units. We see it very often used for the latter, showing a trend across multiple org units. Of course, there is no trend across multiple org units; that's not an accurate comparison. So line charts should be used for trend analysis over time. The next one is appropriate indicator pairing.
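To make the bar-versus-pie point concrete, here is a minimal sketch in Python. The facility names and RDT positivity rates are entirely hypothetical (they do not come from the presentation); the point is that sorting bars gives an immediate ranking that pie slices cannot.

```python
# Hypothetical facility RDT positivity rates (percent); illustrative only.
rates = {
    "Facility A": 42.0,
    "Facility B": 11.5,
    "Facility C": 67.3,
    "Facility D": 25.1,
}

# Sorting is exactly what a pie chart cannot give you: a ranking
# from highest to lowest positivity rate at a glance.
ranked = sorted(rates.items(), key=lambda kv: kv[1], reverse=True)

for name, rate in ranked:
    bar = "#" * round(rate / 2)  # one '#' per 2 percentage points
    print(f"{name:12s} {rate:5.1f}% {bar}")
```

The same sorted-bar idea applies directly in DHIS2's chart tools: one bar per facility, sorted by value, so the outliers at either end are obvious.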
You need to show data on a dashboard, or any kind of analytics output, that is comparable. It's typically not good practice to put random indicators, or indicators that are only loosely associated with one another, into the same analytics output, the same bar chart or graph or pivot table. It becomes confusing. We need to make sure that the indicators we put together are related: they are in the same health program, they have related outcomes and impact. And then finally, we need to use easily interpretable legends. We see this as a big problem. In DHIS2, you have the flexibility to define your own legends. You can make any kind of color range or palette that you would like for your legend. That flexibility sometimes brings in problems, for example a legend that doesn't make any logical sense, with no progression. One easy rule of thumb is: green is good, red is bad. The progression from red to green means something is improving; going from green to red means something is getting worse. So when in doubt, just use a simple red-to-green legend. The final point here, in the box at the bottom, is that countries have to establish routine data analysis and presentation refresher trainings for HMIS staff. The use of analytics is something that has to be nurtured and supported. You have to continuously train users on new analytics tools and refresh them on using the existing tools. There needs to be a way by which users at all levels are receiving some kind of refresher training, and by which you're able to monitor the progress and uptake of those trainings. In DHIS2, there are applications that can be used to walk users through a training on their own analytics. Those applications can be found on the DHIS2 App Hub.
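As a sketch of what a logically progressing red-to-green legend looks like, here is a small Python example. The thresholds and labels are hypothetical, not an official DHIS2 default; the essential properties are that the ranges do not overlap, do not leave gaps, and progress in one direction.

```python
# A hypothetical red-to-green legend for a coverage indicator (percent).
# Each entry: (start inclusive, end exclusive, color). Thresholds are
# illustrative only, not an official DHIS2 default.
LEGEND = [
    (0,   50,  "red"),     # poor performance
    (50,  80,  "yellow"),  # needs attention
    (80,  101, "green"),   # on target
]

def classify(value: float) -> str:
    """Return the legend color for a value; ranges must not overlap."""
    for start, end, color in LEGEND:
        if start <= value < end:
            return color
    raise ValueError(f"value {value} outside legend range")

print(classify(42))   # red
print(classify(85))   # green
```

Because every value maps to exactly one color and the colors progress from red to green, a user never has to guess which direction is "better."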
For example, a country can build a training and then launch that training down to users at all levels; it takes those users and walks them through their own dashboards and through how to build a chart in the Data Visualizer. The trainings can be tailored to specific end users, specific programs, and specific levels within the hierarchy. Not all training has to be done physically, but it's important that whether you're doing trainings physically or remotely, you are able to monitor progress and make sure that users are actually completing those trainings in a satisfactory way. Now, focusing a little bit on dashboards, just a few quick tips on how to improve them. The first one, starting on the left: when a user logs into DHIS2, it's good practice, as we see in this example here from Sierra Leone, to make a landing dashboard. As soon as they log in, this is the first dashboard they see. You can see that Sierra Leone has titled it Notice. On this dashboard, they have some simple instructions and information for all users: how to improve your experience with DHIS2, some contacts, some guides to different standard operating procedures or user documentation, some links to analytics, some standard charts or standard reports. Just different useful things for health staff. So again, the first landing place is maybe not necessarily showing any data. It's giving some support to users to actually use DHIS2 and the analytics presented there, giving them some guidance on how to troubleshoot issues, and contact details. The second tip, on the right side of the screen, is providing a description for each dashboard item, or analytic, on the dashboard. This is very easy to do in DHIS2. On the dashboard, as we see here in the example of this bar chart, there is a description.
And this description defines how the dashboard item should be used: how this chart should be used, how it should be interpreted, and any specific actions to be taken based upon the data that is presented. So you can build your standard operating procedures directly into the dashboard, and we find this to be a really good practice. People forget over time exactly what they're supposed to do when it comes to standard operating procedures, so give them a constant reminder. When they look at this chart, they're going to see very easily that, oh, this is what I need to do with this data. If the data looks like this, I do this; if the data looks this other way, then I need to do this other thing. One additional point is that when we're making dashboards, we need to make sure that they include, again, our standard operating procedures and our targets. We may have a standard operating procedure for an individual chart, like we saw in the previous example, or we could have a standard operating procedure for how to use the entire dashboard, and this is what we see in this example on the screen. This is an immunization dashboard, and right there in the middle of the dashboard, we are providing the EPI focal point standard operating procedure. This is just a very simple example; obviously, these can be much more complex. But the point is that the user of this dashboard, who is an EPI focal point, is able to quickly see: this is how I use this dashboard. Now, also on this dashboard, we see targets, especially in this column chart. And it is extremely important to include targets and legends that clearly indicate how things are performing, good or bad, on target, off target, above target, for all of our analytics. Don't make people guess whether they're doing well or not; clearly indicate it to them. And you can see here in this chart that we have a very clear target line, and we also have a max acceptable value.
The max acceptable value there is to indicate any issues with data quality: anything above that max acceptable value is probably a data quality problem that has to be investigated. We also see a bit of a scorecard in this screenshot. So again: red, bad; green, good. And you can see a very clear progression from red to green illustrated on this scorecard. We also see a small map here as well, with a very simple legend that is easy to understand, going from a lighter shade to a darker shade. The last point to make here is that in DHIS2, you have the ability to send messages to users, and you can also automate messages to be sent to users. Messages can be sent internally using the DHIS2 messaging application, or they can be pushed to people's emails or phone numbers via SMS. Here we see, in the top left corner of the dashboard, the automated messages coming from DHIS2. Here we see just one message, but these messages could be whatever you define them to be. They can be alerts and notifications about data quality issues that are found. They can be notifications or alerts about different kinds of situations that need to be followed up on, for example a disease outbreak or a stock-out. Again, the point is that you should bring the job to the user, not force the user to interrogate lots of data to figure out exactly what they should be doing. DHIS2 is smart enough to automatically detect many things, and this is highly configurable. You can configure it to detect different things, and then make sure that once that thing is detected, it sends a notification to the person who needs to respond to it. And we can put those notifications, as we see here, directly on the dashboard. One of the practices that we see that has a very big impact on actually using data is the monthly district data review.
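DHIS2 produces such alerts through its own configurable validation and outlier detection; as a sketch of the underlying idea only, here is a small Python example with hypothetical facilities, values, and a hypothetical max acceptable threshold. It is not DHIS2's actual implementation.

```python
# Hypothetical monthly reported values per facility, checked against a
# configured maximum acceptable value. Values above it get an alert,
# mirroring the idea of DHIS2's automated notifications (sketch only).
MAX_ACCEPTABLE = 500

reports = {
    "Facility A": 120,
    "Facility B": 5100,  # likely a transcription error (an extra digit)
    "Facility C": 480,
}

def build_alerts(values: dict, max_ok: int) -> list:
    """Return one alert message per value exceeding the threshold."""
    return [
        f"ALERT: {name} reported {value}, above max acceptable {max_ok}; "
        f"please verify against the paper register."
        for name, value in values.items()
        if value > max_ok
    ]

for msg in build_alerts(reports, MAX_ACCEPTABLE):
    print(msg)  # in DHIS2, this would go out via message, email, or SMS
```

The alert brings the job to the user: the person responsible is told which value needs checking, rather than having to hunt for it in the data.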
In many countries around the world, there is a policy that district health officers will meet at some routine interval, for example monthly or even weekly, and review the data for key program areas. In this data review process, they will look at the previous activities that have been done and assess whether they had any impact. They will also review plans for new activities and assign responsibility for those activities. Here you can see some pictures of a district review session: district health officers for different programs sitting around their computers, looking at data, discussing things, coming up with plans and strategies, asking questions. This process right here is, again, how we go from mere analysis of data to actual knowledge and data use, this human process of sitting together and working through various analytics. We also see a picture here of an individual presenting a dashboard and talking about the current status. Are things improving? Are things getting worse? What was the impact of the previous month's activities? What do we need to do this month? They're presenting the dashboard live, using it as a tool to communicate how health programs are performing, how they're going, and what needs to be done next. In the final picture, we see an example of an M&E framework for key performance indicators. This is a monitoring and evaluation framework, a critical tool for district data reviews, or data reviews at any level. It defines what our key programs are, their indicators, how those indicators are calculated, what the baseline was, what our targets are, and the data source and responsible person for each one of these. Again, it's extremely important that everyone understands the targets. What are we working towards? And those targets need to be achievable. They can't be outrageous; they can't be something that is completely unattainable. They have to be achievable.
They have to be easily understood. And they have to be communicated in the dashboards, the analytics, and the reports that are produced. Again, we have many tools in DHIS2, such as target lines and legend sets, that can facilitate this process. I mentioned this previously, but I want to re-emphasize how useful it can be to create district and facility communities of practice. Here in this picture is an example of a community of practice among health facilities in Rwanda. You can see that users in this community of practice are posting questions and pictures of DHIS2. They're asking for insights. They're helping each other answer questions. This type of communication, forming this support group, has been shown to be extremely valuable. Here in Rwanda, in this example, you have dozens of health facilities all tied into a WhatsApp group where they're able to support each other, ask questions, and provide answers. It's also a place that their supervisors can monitor, so they can know who's participating and who's not, but they can also see the common issues that are coming up. And it's the responsibility of the supervisors to support the work of those they are managing. So if there's a common issue, say around data entry or using a dashboard, then the supervisor can also respond, saying: we're working on this, we've addressed it. And when they have fixed it, they can communicate that fix out through the community of practice as well. So it's not just for users at one level; it's the users and their supervisors together in the community of practice. We have two very good case studies on this, one from Rwanda and one from South Africa, with links provided here if you'd like to go into more detail. Here I want to again point out that one of the issues that we have with data use is missing or unreliable data.
Over the last couple of years, the University of Oslo and the DHIS2 community have been collaborating with key partners, for example UNICEF, WHO, and the Global Fund, as well as technology providers like GRID3, Crosscut, and WorldPop, to explore alternative population sources. Again, one of the key issues I've mentioned previously is that we have unreliable population statistics in many countries, and those population statistics do not go down to low enough levels, for example the health facility level or the community health worker level. There are many organizations working to address this, and within DHIS2 we are trying to bring in all of the insights and innovations around missing or unreliable population sources. So, working with partners like GRID3, Crosscut, and Flowminder, we've brought in new tools for building facility population catchment areas, for example having multiple population sources that can be used for calculating different indicators. These are now available in DHIS2, and we have a webinar explaining some of these, linked on the slide. And again, we also need to make sure that we're getting data from all the various sources: supply data, education data (especially for doing outreach campaigns for immunization or vaccination of children at schools), community health data, HR data, disease surveillance data, and this can go on and on depending upon the various health programs provided in the country. A real quick summary of what we've gone through in this presentation. There were some core data use principles. We talked about the virtuous data cycle, where data quality and data use are connected together.
And if we want high data use, we need to have good data quality. We talked about some reasons that data is not used: sometimes data is not accessible, not available, or not reliable, or systems are overloaded with too much metadata and users are not able to find the data that they need. We talked about some practices for improving data use: making data capture very simple, and making presentations very specific to users while keeping them very simple. We talked about how to get data out of DHIS2 through dashboards and notifications. We talked about some data use best practices, like having good standard operating procedures for your data use processes, how to use data in meetings, having district action plans or routine planning cycles where data is being used, and of course having communities of practice where users are able to share experiences and insights around their data use through WhatsApp or other messaging services. We also talked about how to address missing data, including data quality processes to identify and address outliers. Thank you.