In this session, we have the following learning objectives. First, we want to recognize the major reasons for lack of data use. Second, we want to describe the principles of data use. And finally, we want to identify data use standard operating procedures.

Starting with data use principles. When we think about data use, one thing we need to come back to is what we call the virtuous data cycle. This illustration describes how data quality is connected to data use. On the left side of the screen, in red, we have poor data quality and low data use. As we improve data quality, we can also improve data use. Data use and data quality are mutually dependent on one another: we cannot have high data use if we have poor data quality, and as we improve data use, we can also improve data quality. Once we have good data quality and good data use, we can start to experience things like data-driven actions and decisions. Usually, the people who enter the data and those who use the data develop a higher degree of data ownership, and a data use culture starts to emerge. Ideally, we end up with a demand for data at all levels.

As we talk about data use, it is important to appreciate where the information system comes in to aid the data use process. The diagram on the right of this slide presents the data-to-action process. You can see that there are steps between data, information, knowledge, and action, and each of those steps has different components. We start with collecting and organizing data. That leads to analyzing the data and summarizing the analysis in dashboards, reports, and charts, perhaps in meetings. Then we move to interpreting the data. Interpreting the data means adding context to it, bringing in other types of information that allow us to prioritize it.
At this point, we have already moved from data to information, which is analyzed and summarized, and then to knowledge, which is a broader understanding of the data in context. From there we can prioritize and arrive at certain decisions, which we then implement in the form of actions, and then we monitor the impact. That is the entire data-to-action process.

Again, we want to emphasize that the information system, or DHIS-2, is only part of it. DHIS-2 helps you collect and organize data, and then it helps you analyze and summarize that data as information. The rest of it, the interpretation, the prioritization, the decision-making, the implementation, the impact, and the follow-up on actions, is on the human side of the equation. Humans interpret and prioritize; they make decisions, implement them, and carry out actions. DHIS-2 will not be able to do that for you.

We also need to appreciate that actions should produce feedback down to each of these levels. Once we have completed an action, that experience should feed back into our knowledge-building context and help us reprioritize or reinterpret what we thought we knew. Going down to the information level, the action should inform better analysis. And, of course, actions potentially produce more data to collect: we have carried out some kind of activity in the field, we collect more data on it, and that data goes up through the whole process again.

All of this is to say that the HMIS, or DHIS-2, can only cover a limited part of this entire process. The rest is up to humans and the organization, and how well that organization is structured, with standard operating procedures, clear protocols, and clear actions to take at different points, will decide how good the actions are.
You can have a perfectly organized and well-functioning health information system and still have very little action. The reason is that only half of the equation, as you can see, is accommodated by the HMIS or DHIS-2. The other half, and arguably the most important half, is up to humans and the organization.

So why is data not used? The primary reason is that most information systems, including most health management information systems around the world, suffer from information overload. What we mean by this is that they capture a tremendous amount of data that is not actually usable or useful. There is one country whose HMIS captured 18,000 data elements every single month. You can imagine how much of a burden it is if a health worker at a facility is required to report on 18,000 different data elements every month. If we make it too burdensome to capture data, then the data we do get will be of poor quality and subsequently unusable. We can also think about how difficult that data is to analyze: there are now 18,000 different data elements to sift through just to make a simple pivot table. It becomes virtually impossible to produce even basic analytics because you are completely overloaded with data. The key point is: do not capture data that you do not use. Minimize data capture to only the indicators that are actually used routinely. Capturing data that is not used is burdensome for data entry and makes data analysis difficult.

There are some additional reasons why data is not used. The first is lack of data access: the people who need to use data do not have access to see it, visualize it, or analyze it. We need to dramatically expand data access, and this can be done in many different ways. It is not just giving a user a login to DHIS-2. It is public-facing dashboards. It is automated alerts and notifications.
It is summary printouts in meetings, presentations, downloading the analytics and putting them in a PowerPoint presentation. There are many ways we can get data to users. Having it on a static dashboard that requires a DHIS-2 login is one of the least effective ways of improving access to data.

The second point is that a lot of the data needed for decision making is not available through the HMIS to the people who are actually making the decisions. We need to make sure that we have logistics, HR, and finance data, and that we have updated population statistics available in our HMIS. We have been conducting research and have found that in many countries, district-level and health-facility-level users do not use the analytics presented to them because they have their own population statistics available. These population statistics have been gathered through household surveys and through various projects and programs, giving them what they consider a more reliable population denominator. Because those denominators are produced locally, they are not available in the national HMIS. The users of the national HMIS at district, facility, and sub-facility level therefore do not consider the data reliable, because it does not factor in what they consider the accurate population. It is very important that we reconcile differences in population through a bottom-up approach, making sure that the people at the facility level have access to their population statistics and that those statistics flow up to the higher levels.

The third reason is that data is not trusted. What we mean by this is that there is a perception that data quality is very bad, and that because of that, "I cannot use the data that I have available to me." Here we see two example charts: the first is a scorecard, the second is a column chart. In the top chart, all of the coverage indicators are over 100%, and in the bottom chart we see a very similar scenario, with many indicators over 100%. How can we have coverage over 100%? It means there is some underlying issue with the data. Either our population denominator is incorrect, as I previously mentioned, or there is some issue with the numerator: a data entry error produced a value that was too high compared with the actual count of patients received or vaccinations given, for example. When you present this kind of analytics, with coverage indicators way over 100%, to a policymaker or decision maker, what is their first takeaway? That all of this is wrong: "I can't trust any of it." So they don't use it.

The fourth point, just to reiterate a key point, is that population data is unreliable in many countries. Again, this is what drives these coverage rates over 100%.

The final point on this slide is that many users do not know how to access data or build analytics. We see many users who are simply scared to go in, play around, and explore different visualizations in DHIS-2. Partially this is a lack of capacity: they have not been trained on how to use some of the analytics tools. Partially it is a lack of motivation: they do not see any benefit in going in to look, analyze, and build different types of analytics. And thirdly, it is a concern that they do not have the authority or permission to build their own analytics and explore their own data. Many health officers, especially at lower levels, believe that they should only analyze the data that is given to them. Well, if the data given to them is structured poorly and presented poorly, then they are not analyzing that data very well, if at all.
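The coverage arithmetic behind those charts is simple, which also makes it easy to automate as a routine data quality check. Here is a minimal sketch of such a check; the indicator names and figures are hypothetical, not taken from any real HMIS, and a real implementation would pull numerators and denominators from the information system itself:

```python
# Minimal sketch of a coverage data quality check.
# Indicator names and figures are illustrative, not real HMIS data.

def coverage(numerator: int, denominator: int) -> float:
    """Coverage as a percentage: events reported / target population * 100."""
    return 100.0 * numerator / denominator

# (indicator, events reported, target population) -- hypothetical values
reports = [
    ("DPT3 doses given",    1150, 1000),  # implausible: over 100% coverage
    ("ANC first visits",     820, 1000),
    ("Measles doses given", 1300, 1000),  # implausible: over 100% coverage
]

flagged = []
for name, num, denom in reports:
    cov = coverage(num, denom)
    if cov > 100:
        # Over 100% usually means a bad numerator (data entry error) or a
        # bad denominator (outdated or inaccurate population estimate).
        flagged.append((name, cov))

for name, cov in flagged:
    print(f"CHECK: {name} coverage = {cov:.0f}% (over 100%)")
```

Flagging these values before they reach a dashboard, rather than presenting them to decision makers as-is, is one concrete way to keep implausible coverage figures from undermining trust in the whole dataset.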
One key point here is that we have to make concerted and consistent efforts to improve users' capacity to visualize, analyze, and explore their own data. Part of capacitating users at all levels to visualize and use their own data, and of building a data culture, is an appreciation that there are different needs for data at each level. For example, consider what a community health worker needs to see to support their decisions as they run outreach campaigns, meet with and receive patients, and work with community health groups. The data for those processes is very different from the data needed at, say, district level or national level. We need to make sure that the data we present to each level is tailored to the activities and work processes that level actually carries out.

We have a habit of making a dashboard at national level and just pushing it down to districts and facilities, assuming that the same indicators that are important at national level are also important at district and facility level. Sometimes they are, but often they are not. People at district and facility level need to see more granular data. That is what this graphic is showing, with multiple dots at district and facility level and even more dots at community level: we need higher granularity and higher diversity of data at these levels, which is somewhat counterintuitive. Typically we think of the HMIS as an upright pyramid, with more data needed at global and national level. But in reality, to build a data use culture, the opposite is true: we need more data, presented at a more granular level, at these lowest levels. They are the ones actually implementing the health programs, and they are the ones who need very granular data to inform their day-to-day decisions and processes.
We also need to make sure that we have clearly defined standard operating procedures. For example: when does the district use these dashboards? How do they use them? What kinds of decisions do they make, and what kinds of actions should they take? How do we follow up on those actions? At facility level, how often are they doing data quality reviews? How often are they doing data planning? At community level, how are they interacting with their dashboards and analytics? What kinds of activities do we expect them to carry out based on the analytics and data we are giving them, and how do we monitor those activities over time for impact? All of these things need to be defined in standard operating procedures.

We also need to think about information cultures. One of the key elements of an information culture is building communities of practice. A community of practice is a collection of users at the same level who are able to share questions, share insights, and generally communicate with each other. We have seen this be very successful in countries that have implemented communities of practice at, say, district level or facility level: multiple facilities in the same WhatsApp chat group, multiple districts in the same chat group, or many community health workers in the same Facebook Messenger group, all able to communicate, ask questions, provide answers, support each other, and even share analytics and ask for insight. These kinds of communities of practice go a long way in driving information culture.
Finally, we need to make sure that at each level, the users who are meant to interpret and use data understand the responsibilities and authority they have to use that data, and that they feel accountable for using it: that there is some process in place, again a standard operating procedure, that holds them accountable for the use of that data and allows us to monitor the impact of that use over time.