 Perfect. Okay, we're going to get started because I don't want to lose time for the presenters. Thank you very much to those who are here and managed to get away from coffee to join us, and thank you to everyone who's online. It's a tough day for everyone; it's been a long week with a lot of conversations going on, so we really appreciate it. Today is going to be about the use of immunization data and the triangulation of this information. For those who are not acquainted with the diagram that we've been recycling for a while, especially me: we have a pretty big immunization toolkit nowadays that touches quite a lot of components, components that can of course be triangulated around, but in particular I wanted to highlight the bottom right corner, especially because this is a collaboration that we've had with different partners, Gavi among them. So we have also taken the opportunity to present these triangulation dashboards in a bit more detail, and we also have some experiences from the field with triangulation activities. But most importantly, I think that even before we get to triangulation, it is very important that we talk about uptake of this data: how to use this data and how to make sure that people use this data, because if you don't even start by checking your baselines, there's no point in triangulating things that you don't even know are right or not. For this reason, we're going to have Patrick O'Meol from HISP Uganda, who is going to present the work that they have done with the MOH and the EPI team in country to revamp their EPI dashboards and increase their use, so that they can move forward with their triangulation activities. So, yeah, give it up for Patrick. Patrick O'Meol is my name.
I work with HISP Uganda, but I also support the package development team, so we've been working remotely with the global team in supporting the development; a lot of contribution has been coming from our team to the global team. So what I want to share is really what we're trying to call a participatory approach to adopting these immunization packages for Uganda. We've made some progress, and that's really what we want to share with you. I'm here on behalf of a team, Steven, Sam Kasozi and Prospa, whom we've been working with closely, and of course the Ministry of Health and especially the program team from UNEPI, the Uganda National Expanded Programme on Immunisation. Our journey starts way back in 2018 when we started supporting package implementation, and it was more than just EPI; we were doing TB, HIV and malaria, and the initial focus was really on aggregate packages. The scope of work was to do the installation, do capacity-building training, and provide infrastructure support, because we had a lot of capacity issues on the server. But in there we found ourselves doing more general DHIS2 support, because we couldn't just focus on those three programs. When we went in, the Ministry was at the time revising the HMIS, so we had to go through the whole process of revising the HMIS, but that was also good because it gave us an opportunity to bring in some of the things that were lacking within the HMIS for the EPI program. So we went through that from 2018, and we were able to install the packages on the national instance; that's the sample that I'm showing you up there, the version that we had. But when you go to the district, you still see their handmade dashboards.
If you go to the district, you will see they still have this kind of dashboard, and others at the health facilities which are really still handmade, and you wonder why they cannot really go into DHIS2. Along the way we realized the challenges of access: most people have challenges accessing dashboards within DHIS2. That of course limits their use; the senior people especially, the program managers, find it a bit difficult to access the dashboards, for some reason "my password is not there" and things like that. We also realized that there's little appreciation or awareness of the DHIS2 analytics; people think DHIS2 is just to collect data. If you talk about dashboards, not so much; they feel DHIS2 is just for you to be able to report. Also, the way we had designed the immunization dashboard, we had both the analysis app and the dashboard, so when you're training people and explaining the two to them, it was a bit confusing. Along the way we still saw a lot of ad hoc requests; whenever they want to do analysis they keep asking "provide this and the other." We also realized there are many tools out there; we have many dashboards in Uganda that really create a lot of confusion sometimes. So we thought: how can we improve what we have within the DHIS2 immunization package and improve on the dashboard? With the global team we said, let's try and have a more participatory approach to doing this. Basically, with this we would also bring in the new analytics, because the analytics within DHIS2 have improved; there are things that we previously had to create a custom app for, like the scatterplots and the monitoring chart and all these things.
These things are now within DHIS2, so we said: can we use this opportunity to look at what we have within the package and use the latest features of DHIS2 to improve this dashboard? And along the way, of course, support integration, look forward to triangulation of data, and then generate feedback to improve the global EPI package. So we started that, and that's one of the workshops that we had in Uganda. We had implementers from the global team coming; one was in the meeting, the other one was right back there. With this, basically, we get the package, we get it into the instance, and then we jointly review it with the team and get their feedback. But the most critical part is really to go in depth with the indicators, the understanding of the indicators. We sometimes take it for granted that people know dropout rates, but they don't always understand them. When you sit with them and explain: this is our numerator, this is our denominator, and this is how we are estimating the denominator, because most denominators are estimates, then they start to appreciate the whole process. Once we get that, we quickly do some refinement of the analytic outputs. Small things like colour legends; some people don't like certain colours. If you look at these colours, they didn't like them, you know, but that's what we had in our package. So they had to make choices of colours that make them happy. Then we rapidly do that refinement, and after we've done that we should be able to deploy, then train, and then of course generate feedback to the global team to generally improve the package. That's really the process that we want to follow.
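The numerator/denominator walk-through described here boils down to two pieces of standard EPI arithmetic. The sketch below is illustrative only, not Uganda's actual indicator configuration; the DPT1-to-DPT3 pair is the classic dropout example, and all numbers are invented:

```python
# Illustrative sketch of the standard EPI indicator arithmetic:
# dropout rate between a first and a last dose in a series, and
# administrative coverage against an *estimated* target population.

def dropout_rate(first_dose: int, last_dose: int) -> float:
    """Dropout rate (%), e.g. DPT1 -> DPT3: (DPT1 - DPT3) / DPT1 * 100."""
    if first_dose == 0:
        return 0.0
    return (first_dose - last_dose) / first_dose * 100

def coverage(doses_given: int, estimated_target: int) -> float:
    """Administrative coverage (%): doses given / estimated target population.
    The denominator is usually a census projection, not a head count."""
    return doses_given / estimated_target * 100

# Invented numbers for illustration
print(round(dropout_rate(1200, 1020), 1))  # 15.0
print(round(coverage(1020, 1100), 1))      # 92.7
```

Walking through exactly this kind of arithmetic with program staff is what the presenter describes as the turning point for appreciating the indicators.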
Right now we are at the rapid refinement and improvement stage, and we have something that we are going to show you, just a screenshot. With the team that came in we were able to go to the field; that was, again, Manuel going out to the health facility to talk to the team and see what's there, and presenting the package to them so they could understand what it is. As for the lessons we've learned so far, because really there's a lot that we need to learn since we're trying this out: we have seen a lot of participation. I can tell you, when we were reviewing the indicators, everyone was a subject matter expert; everyone had something to say about the indicators, and that was quite interesting. We've seen the value of getting their perspective early in the development of these packages, because if we just take it to them without getting their perspective, that can also be a problem. We think this will promote use and ownership of the product, because at the end of the day they know how it came about. It also validates our interpretation of requirements. There are requirements that we take from the WHO guidelines; we read them and we sit in Oslo and form an understanding of the requirements, and we develop these and send them down there, but when you bring it out to the users it may not match what they know. One case here: you see up there the handmade monitoring chart that they are doing, and they're using cumulative dropout rates, while in what we have it's not cumulative. So you realize there's a bit of a disconnect between what they know, or how they're doing it, and how the guidelines present it. We've had to make some adjustments to try and meet what they want. And then there's this whole field experience.
If you see those piles of boxes, that's at district level. It gives people an understanding of how the data that we get into DHIS2 actually gets there: that is the HMIS room at district level where data is entered, all those piles of forms. The data that comes through the HMIS is from health facilities, and staff enter it into DHIS2 and file it, so you can imagine that whole pile. It gives implementers an appreciation of the processes that people go through to get this data into DHIS2, and that gives you an understanding, as you're making the designs, of how things come in. So, this is now the beautiful part: you can see that the colours are nice; I believe you can feel it, coming from that earlier version to something that is more appealing. You feel like, yes, this makes a lot of sense. There's a very nice pivot table that you can quickly scan; you can quickly look at your district or whatever, see which one is really yellow; these are colours that our people understand when they are in trouble. That is the BCG; again, this is sample data, so I wouldn't read too much into it. But the dropout for this MR1 is quite high. So, the challenges that we're facing: one big one of course is software limitations, on which we have really been trying to engage the product managers. One of them is open access to dashboards; people don't like logging in. If you talk of a dashboard, the dashboard should be where I key in my URL, I see what I need to see, and I get out. So we really have to see how we advocate for this. There was also this thing of cumulative dropout rates, which we're still trying to work out how to implement, because that's what they want: cumulative dropout rates.
They want: if I'm in January, that is fine on its own, but if I'm in March, I add January and February, and that's my total dropout; so it's cumulative. We are still struggling with that a bit. Then they have these things of ranking and counting organisation units based on performance: I want to know my top five districts, and be able to count some of those. Those are some of the things that we find a bit challenging. Of course we have to do triangulation, and we realized some of the VPD data is not integrated, so that's something we need to work on. And of course the costs around training: once we're done we need to train people, and we don't have that budget right now. Then there are the issues of versions. This is version 2.37; that's where you can do interesting stuff with some of those single value charts and the percentages, which you can do on the test version. The production instance is a bit old, but we are working with the Ministry in the next few weeks to do the upgrade so that we have all this in the production instance. So, looking ahead: we have to finalize the development and deploy, we need to support the integration, we need to do the training, and we need to keep pushing the product team to prioritize some of these missing features in the software. And then we need to re-evaluate the use, because we want to make sure people are using this, and then generate requirements that will improve the global package. So thank you, and just to acknowledge the people that we work with. Thank you very much. Yeah, you've been very quick actually; I think I scared you a bit too much.
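The two feature requests described here, cumulative dropout rates and top-N ranking of districts, can be sketched in a few lines. This is a hypothetical illustration of the logic, not the DHIS2 implementation being asked for; all figures and district names are invented:

```python
# Sketch of the two requests above: (1) cumulative dropout rate, where
# doses are summed from January up to the current month *before* the
# rate is computed, and (2) ranking org units by performance.

def cumulative_dropout(monthly_first: list[int], monthly_last: list[int],
                       upto: int) -> float:
    """Dropout rate (%) over months 0..upto inclusive, cumulating doses first."""
    f = sum(monthly_first[: upto + 1])
    l = sum(monthly_last[: upto + 1])
    return (f - l) / f * 100 if f else 0.0

# Jan, Feb, Mar first-dose and last-dose counts (invented)
first = [400, 380, 420]
last = [360, 330, 350]
print(round(cumulative_dropout(first, last, 2), 1))  # 13.3 over Jan-Mar

def top_n(performance: dict[str, float], n: int = 5) -> list[str]:
    """Rank org units by an indicator value, best first."""
    return sorted(performance, key=performance.get, reverse=True)[:n]

coverage_by_district = {"A": 95.0, "B": 78.5, "C": 88.0,
                        "D": 91.2, "E": 69.9, "F": 84.3}
print(top_n(coverage_by_district, 5))  # ['A', 'D', 'C', 'F', 'B']
```

The point of the cumulative variant is that month-by-month rates are noisy; summing the doses first smooths the picture, which is why the district teams prefer it.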
Now, just a bit of introduction: we're going to have pretty much a whole block session managed by Angela and Joel, because all the countries that are going to present are somehow affiliated with CDC and are working on their triangulation. We are going to have Angela Montesanti Porter, who is going to introduce us a little to the triangulation dashboards they have been working on. Then we're going to have Dr. Kulibali from Ivory Coast, who is going to describe their experience with their triangulation efforts; then an online colleague from Nigeria, and again, field experiences are the key to feeding back into the global packages, so we really want to give the field the opportunity to share their perspective and their challenges. And finally Joy is going to present for the Ethiopian colleagues, who unfortunately are having some connectivity troubles, so at least we're not losing the opportunity to show their experiences as well. Thank you very much, and I leave it to you, Angela. Thank you, Victoria. So my presentation is actually going to be split: I'll give a brief introduction and get us on the same page with a common global triangulation framework for immunization programs; then we'll hear the use cases from the countries Victoria just mentioned, and then I'll come back and end with some information on the triangulation dashboard prototypes that we've been developing within DHIS2. With that, the objectives of the remainder of the session are really to gain an understanding of key triangulation concepts, particularly in the world of immunization, and to learn how to implement the four-step triangulation process that's been outlined in the global guidance developed by WHO, UNICEF and CDC.
Then we'll see national and subnational examples of triangulation through our country cases, and finally discuss ways in which dashboards can be used to integrate triangulation into programs. So let's start with the basics: what is data triangulation? The global working group that developed the triangulation guidance had to first come up with an agreed-upon definition, which was a lot harder than it sounds. The definition we came up with is: the synthesis of existing data from two or more sources to address relevant questions for program planning and decision making. So really what we're talking about here is that even in the absence of perfect data, public health practice has long recognized that combining many pieces of weaker evidence can form a strong basis for improved decision making; and notice we're not saying perfect decision making. I'll also just mention that the Strategic Advisory Group of Experts on Immunization (SAGE) Data Working Group came out with a statement in a publication focusing on how data triangulation should really be the default for public health analyses, as a means to use all existing program data. So again, we're getting at data use that's fit for purpose. Some common triangulation principles: your triangulation analyses and exercises should be driven by important program objectives, so you really want to think about fit for purpose here. We want to use existing data, so ideally no new data need to be collected. You want to include diverse data sets, for example coverage, stock and surveillance; Uganda was really trying to address some of this in their presentation. Engage a multidisciplinary team if possible. With the triangulation guidance we really wanted to focus on basic analyses that include local knowledge in the interpretation. So while there's always a time and place for complex modeling or statistics,
we really want to make sure that data analysis and data use can be accepted and used at even the very local or basic level. Because, as Patrick mentioned, some people don't even really know what a dropout rate is or how to calculate it, so we want to keep things as simple as possible for them to be able to use their data for their purposes. The results should be communicated for use and improved decision making; we're not doing analysis just for the sake of having analysis. So what are the benefits of triangulation? Well, it encourages collaboration across program units. It allows for greater data sharing and access. There's a deeper understanding of data through synthesis, again incorporating contextual information and local context as well as considerations of data limitations, because all data sources are going to have their limitations, but that doesn't mean you can't use the data. It also identifies areas for program improvement that might not be apparent from an individual data source. It improves confidence in conclusions and the quality of recommendations for planning and policy or strategic decision making, and it builds health workforce capacity around critical thinking, data analysis and data use. I won't spend too much time on this, but I just want to mention again that there is some draft global guidance available; it's available at this website and the slides will be available on the schedule. Essentially what we have here is a package of guidance documents that are meant for two levels: national immunization program staff as well as subnational immunization program staff. And within those two levels there are four documents: a general triangulation overview document, as well as three topic-specific annexes.
We surveyed global, regional and country-level staff and asked: what kinds of questions, what kinds of program issues are you really wanting to address with triangulation? Pretty consistently we heard these three topics: immunity gaps, program requirements, and program targets and denominators. So I encourage you to look; we don't have time today, but there's a lot of detailed information here, with particular indicators and examples from countries and subnational levels on how they've been able to conduct triangulation analyses around these specific topics. We'll simplify things, but the guidance really revolves around what we created as a four-step triangulation process. First, start with asking the key question: what programmatic issue, what programmatic question are you really trying to answer or assess with your triangulation exercise? Step two, identify existing data sources. Step three, summarize the data and the local context. And step four, develop an action plan. The idea here is to integrate triangulation into existing activities; what we're really trying to prevent is adding additional workload to staff that are already quite overburdened. What we hope is that staff can start using triangulation analysis as part of their regular activities and their work, to become more efficient and effective with their data use. For example, in their routine analyses, so feedback on reported data, their EPI data, which usually occur at monthly or quarterly intervals depending on which administrative level you work at; in annual EPI data and surveillance reviews; in evaluations of intervention impact or program implementation, for example new vaccine introduction evaluations and outbreak investigations; as part of data quality reviews, so EPI and surveillance reviews; and in existing trainings, so think of trainings of mid-level managers and supportive supervision.
And then finally, of course, dashboard design; we'll get to DHIS2 dashboards as soon as we hear from our country cases, starting with Dr. Ruth Kulibali from Côte d'Ivoire. Good morning. I am Dr. Ruth Kulibali. I work in the Côte d'Ivoire immunization program. I will present how we used DHIS2 and data triangulation to improve information system performance and strengthen data use at all levels of the Côte d'Ivoire immunization program. This is the outline that I will follow. Until 2021, multiple information systems were used to manage Côte d'Ivoire immunization program data: eDVDMT for vaccine coverage, VPD surveillance and vaccine stock data; APN4 for vaccine coverage and VPD surveillance data; and the Stock Management Tool for vaccine stock data. The multiplicity of tools and data sources for each data type resulted in weak data use for program planning; that was revealed in a 2019 data quality assessment. The objectives of the Côte d'Ivoire Ministry are to improve the performance of the Côte d'Ivoire information system to support expanded programme on immunization data, and to strengthen the capacities of Côte d'Ivoire immunization and VPD surveillance staff at all levels in data management, analysis, interpretation, and use for decision making. In terms of methods, we migrated to DHIS2 for the management of vaccine coverage, VPD surveillance, and vaccine stock data. We piloted the WHO, UNICEF, and US CDC guidance on triangulation for improved decision making in immunization programs. At the subnational level, we conducted two triangulation trainings for regional- and district-level immunization and VPD surveillance data managers. In terms of results related to the migration to DHIS2: we conducted data harmonization to incorporate historical data into DHIS2; we conducted trainings of immunization and VPD surveillance staff at all levels; and the DHIS2 data packages rolled out in Côte d'Ivoire were the WHO EPI aggregate package and the WHO Integrated Disease Surveillance and Response aggregate package,
along with the DHIS2 data package bundling the WHO VPD case-based surveillance tracker. Since February 2022, DHIS2 has been in use at national level. In terms of results related to piloting the triangulation guidance at national level: we received strong support from EPI leadership. The priority use case was on which data can support documenting progress towards measles elimination. We conducted the first steps of the triangulation process. We identified data quality issues, and populations and geographic areas with immunity gaps. The findings helped inform decisions on introducing the measles vaccine second dose in Côte d'Ivoire and revisiting target population estimates. We conducted two series of trainings; the trainings were conducted through presentations, individual work, and group work. We received technical and financial support from the US and Côte d'Ivoire CDC offices. Here are two photos of the Côte d'Ivoire immunization triangulation trainings. In conclusion, with DHIS2, Côte d'Ivoire immunization and VPD surveillance staff are better equipped to conduct monitoring, data analysis and triangulation to inform program planning at all levels. The national immunization program staff have promoted triangulation activities using DHIS2 during training and supervision. Thank you very much. Hello, good day, everybody. Can someone confirm they can hear me? Yes, we can hear you. Go ahead. All right. Good day, everybody. My name is Bidemi Adye and I work with the African Field Epidemiology Network (AFENET), Nigeria country office. This morning, on behalf of all the team members, including colleagues at the AFENET office and the US Centers for Disease Control and Prevention, I'll be making this presentation on our ongoing work on the integration of DHIS2 and other information systems for triangulating and visualizing routine immunization and vaccine-preventable disease surveillance data in Nigeria. I'll be speaking along this outline.
By way of background: currently in Nigeria, the district health information system (DHIS2) houses immunization and other routine health case-based and aggregate data, and most instances of DHIS2 in Nigeria are managed and maintained by the Federal Ministry of Health and the National Primary Health Care Development Agency. One of the surveillance systems implemented in Nigeria is SORMAS, the Surveillance Outbreak Response Management and Analysis System, which houses case-based disease surveillance data and is currently managed by the Nigeria Centre for Disease Control. What are the triangulation challenges in Nigeria? There are limited opportunities to triangulate for program improvement, because of weak data use due to fragmented access to the systems and a lack of sharing and coordination across the currently existing systems. So this morning I'll be presenting the processes of collating, analyzing and visualizing vaccine-preventable disease surveillance and routine immunization data in a triangulation dashboard that is used for improved programmatic decision making. I'd also like to state that the intended users of the dashboard are across the national and subnational levels: at the national level we intend the dashboard to be used by EPI and data officers, and also at the state and district levels. Just as Angela mentioned during the opening presentation, we're not merely bringing data together for the sake of analysis; it's important for us to be guided and have a direction and a core reason for why we're doing triangulation. So I'm going to share with you now the identified program issues and the key questions driving the triangulation we're doing. The first area is identifying immunity gaps: we wanted to know, does administrative coverage appear to be accurate in Nigeria? Does surveillance data suggest that there are immunity gaps? The second area is to assess program performance and data quality.
These are the questions we're currently asking using the data available within the systems: which states or districts have the lowest performance, or inconsistencies in data quality, that require follow-up? So those are the triangulation questions across the program areas where issues have been identified. Before we started the whole body of work around triangulation, it was important for us to understand the reliability of the different data sources we would be making use of; listed there on the screen are the different sources from which we're pulling data and doing triangulation analysis. First you see administrative vaccination coverage, and it was important for us to understand its strengths and limitations. Among the strengths, administrative coverage is available on a monthly basis and across the different levels. But it also comes with limitations, because data quality issues like outliers are among the problems with administrative vaccination coverage in Nigeria. As for vaccination coverage surveys, by way of strengths they are more reliable than administrative coverage, but we know that surveys are infrequent. Also in this table are the strengths and limitations of the other sources, like WUENIC, the WHO/UNICEF estimates of national immunization coverage, the measles case-based surveillance data, and the measles aggregate surveillance data, which is available on DHIS2. For the sake of time, I will not be able to go through all the strengths and limitations. All right, I'll talk a bit about the methods or approaches that were adopted to collate data, do the analysis, and visualize it on the dashboard. It was important for us to start with system mapping, because we're pulling data across platforms: DHIS2, SORMAS and other systems.
So it was important for us to understand the existing metadata on DHIS2 and on SORMAS. We conducted a systematic review and took inventory of all data elements and indicators, the way they are calculated, the administrative levels at which they are populated, and the frequency at which they are calculated. Once we had done the system mapping and had a good understanding of the data and metadata that exist on both systems, we went ahead with identification and selection of indicators. There are already global documents, like some of the ones Angela referred to, by WHO, UNICEF and the US CDC, so we made extensive reference to those triangulation guidelines and adopted some of their examples. Once we had identified a list of good indicators, it was important for us to also ensure they speak to what in-country program officers and the Nigerian government would like to monitor and see populated on the triangulation dashboard. So we were able to do a prioritization of indicators, analyses and visualizations with the national and state levels, and that also gave us some form of buy-in and approval to proceed. After the indicators had been identified, we proceeded to system development. We needed to develop the dashboard, and I'd like to mention that the current triangulation dashboard we're working on was developed on R Shiny, with Kubernetes and Docker technology. We're working to use all of that infrastructure, which we think is easy to build, sustainable, and also flexible when we need to interoperate with other systems. Talking about the dashboard: using the indicators that had been approved, we were able to create charts and render them with R, Python and JavaScript, so as to improve the user experience and the look and feel of the dashboard. So these are the methods adopted for the development of the triangulation dashboard in Nigeria.
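A dashboard backend like the one described has to query DHIS2 for indicator values. The sketch below only builds the request URL; the base URL and the dx UIDs are hypothetical placeholders, while the `/api/analytics` endpoint and the dx/pe/ou dimension syntax follow the standard DHIS2 Web API conventions:

```python
# Sketch: building a DHIS2 analytics query for a triangulation dashboard.
# Placeholder base URL and data-element UIDs; dx = data dimension,
# pe = period dimension, ou = organisation-unit dimension.
from urllib.parse import urlencode

def analytics_url(base: str, dx: list[str], periods: list[str],
                  org_units: list[str]) -> str:
    """Compose a DHIS2 /api/analytics.json URL with repeated
    'dimension' parameters, items separated by ';' within each."""
    params = [
        ("dimension", "dx:" + ";".join(dx)),
        ("dimension", "pe:" + ";".join(periods)),
        ("dimension", "ou:" + ";".join(org_units)),
    ]
    return f"{base}/api/analytics.json?{urlencode(params)}"

# Hypothetical UIDs for MCV1 doses and confirmed measles cases
url = analytics_url(
    "https://dhis2.example.org",
    dx=["mcv1DosesUID", "measlesCasesUID"],
    periods=["LAST_12_MONTHS"],
    org_units=["LEVEL-3"],
)
print(url)
```

In the Nigerian setup the same fetched values would then be joined with SORMAS case data before rendering in R Shiny; the actual authentication and response parsing are omitted here.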
Okay, so what you see on the screen are screengrabs from the dashboard that we're currently developing. You will see a visualization, specifically a map that shows vaccination coverage at the national level and at the subnational level, and we're able to overlay the number of measles cases on top of vaccination coverage. Because we're working with vaccine-preventable diseases, it was important for us to select disease areas that also have corresponding vaccination data, so we identified measles, yellow fever and meningitis. This kind of map is one of a kind in terms of visualization across program areas in Nigeria, and it's proven to be helpful in the sense that it helps stakeholders and decision makers at different levels to see that although there are some areas where we're seeing good vaccination coverage for measles or for yellow fever, we're still seeing a lot of measles cases and a lot of yellow fever cases in those same areas. One of the other things the dashboard is promoting: we're finally able to make use of data from DHIS2, from SORMAS and the other systems together, which was not possible before we started this work, because all of the systems run parallel to each other, controlled by different stakeholders, and different users have access to each of them. What the triangulation is doing now is providing access for health workers and decision makers at national and subnational levels. So these are just some of the screengrabs; there are other indicators that have been identified. This is a list of some indicators that have been identified for visualization, and these are currently populated on the dashboard shown to you on the previous slide. We have confirmed measles cases versus MCV1 coverage, and that data is coming from DHIS2 and the SORMAS system.
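The core triangulation signal described here, areas reporting high administrative coverage yet seeing many confirmed cases, can be expressed as a simple filter. This is an illustrative sketch, not the dashboard's actual logic; the thresholds, district names and figures are all invented:

```python
# Sketch: flag org units where reported MCV1 coverage is high but
# confirmed measles cases are also high -- a hint of an immunity gap
# or a coverage-data problem (often the denominator). Thresholds are
# arbitrary illustration values.

def flag_discordant(coverage: dict[str, float], cases: dict[str, int],
                    cov_threshold: float = 90.0,
                    case_threshold: int = 10) -> list[str]:
    return sorted(d for d in coverage
                  if coverage[d] >= cov_threshold
                  and cases.get(d, 0) >= case_threshold)

cov = {"North": 95.0, "South": 72.0, "East": 93.0, "West": 88.0}
measles = {"North": 2, "South": 25, "East": 18, "West": 4}
print(flag_discordant(cov, measles))  # ['East'] -- high coverage, many cases
```

On the Nigerian dashboard this same contrast is conveyed visually, by layering case counts on the coverage map, rather than as a computed flag.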
We have age group of confirmed measles cases by vaccination status. This is one very interesting indicator that stakeholders seem to be interested in, because it's helping us to see the true picture in terms of cases and their vaccination status, and it's helping program decision makers at different levels to understand the system better. We also have measles vaccine stock analysis and measles coverage. We have the measles 1 to measles 2 dropout rate. We have the discrepancy between MCV1 and yellow fever vaccine doses co-administered; because these vaccines are supposed to be given together at nine months, it was important to understand the discrepancy that currently exists between the two of them. So those are just some of the indicators that reflect measles; we have corresponding indicators for yellow fever and for meningitis currently on the dashboard. All right. So in terms of conclusion and next steps, what we're presenting is work in progress, and the next thing we'd like to do is to complete the integration of the dashboard. Although I said DHIS2, SORMAS and other systems exist in Nigeria, parallel to each other, somebody might be asking, isn't this triangulation dashboard also going to be another platform? So we're very deliberate about that. We're not creating another platform. What we're going to do with this dashboard is to integrate it back onto DHIS2, so all users of DHIS2 will see triangulated data as a dashboard when they log into the system. We're going to also integrate it back into SORMAS so that all the people that have access to that platform right now are able to see it, and into other existing systems in Nigeria. So we're not creating a separate link, a separate portal for health workers to have access to vaccination data; instead we're going to integrate it back into all the existing systems. And that's our immediate next step. Discussions have commenced on that and we hope to be able to finalize that soon. 
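Two of the indicators just listed have simple, standard formulas, sketched below: the dose-1-to-dose-2 dropout rate (the standard EPI convention of the share of children who got dose 1 but not dose 2) and the discrepancy between two antigens expected to be co-administered at the same visit. The input numbers are invented for illustration.

```python
# Sketch of two listed indicators. Formulas follow the standard EPI
# convention; the example dose counts are invented.

def dropout_rate(dose1, dose2):
    """Percent of children who received dose 1 but not dose 2."""
    return 100.0 * (dose1 - dose2) / dose1

def coadmin_discrepancy(mcv1_doses, yf_doses):
    """Absolute difference between doses of two antigens given at the same visit
    (e.g. MCV1 and yellow fever, both scheduled at nine months)."""
    return abs(mcv1_doses - yf_doses)

print(dropout_rate(1000, 850))         # 15.0 percent dropout
print(coadmin_discrepancy(1000, 940))  # 60 doses unexplained
```

A persistent non-zero co-administration discrepancy is a data quality or service delivery signal, since the two antigens should move together.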
That will be followed by implementation and training of targeted users at the national level and at the subnational level. So we're going to plan a robust training on how to navigate the dashboard and how to make the best use of it, because the dashboard is of no value if it isn't used. The implementation and training will be followed by mapping, triangulation and integration of other data sets. Although we're starting with vaccine preventable diseases, beyond those and routine immunization there are lots of indicators that stakeholders are beginning to say they'd like to triangulate and see on the dashboard. So when we're done with that kind of implementation and training, we'll start to map and see all the data sets that can be triangulated within the primary care services in Nigeria and get those onto the dashboard. And that will be followed by an end user assessment to understand acceptability, usability, and whether the dashboard has improved routine immunization and vaccine preventable disease data interpretation and use in Nigeria. So this is a brief summary of the routine immunization and vaccine preventable disease surveillance data triangulation work in Nigeria. Thank you for listening. Can you hear me? Okay, awesome. Thank you. So just a little disclaimer, obviously I can't take credit for this presentation and the slides; these are from our colleagues in Ethiopia who I'm very sure would have loved to be here today but couldn't make it. And so I'll be speaking on their behalf and sharing the third component. Angela mentioned three use cases. We've talked about Côte d'Ivoire. We've looked at Nigeria, and now we also want to look at how Ethiopia was able to use some of the guidance on data triangulation. They focused heavily on triangulating between DHIS2 and other data sources, focused more around quality and use of immunization data. 
And so Ethiopia's immunization data comes from DHIS2 and obviously from other sources. So DHIS2 is the more routine system and the other sources are sort of the more non-routine systems, and I'm going to talk a little bit about that. But the country had had some concerns about the quality of the data and the ability to use that data for decision making, and also sort of flagged the need for some critical analysis of their data. And so this project was really focused on how to identify issues with the quality of their data, identify program performance gaps, and also work through the process of mapping and triangulating data from those sources as a method of trying to improve the quality of their information. And so this is sort of the approach that Ethiopia used to arrive at that. The first step was really a selection of a small group of individuals across the various agencies. So you have some who were selected from the Ministry of Health, specifically the Directorate of Policy Planning, Monitoring and Evaluation, and they sort of oversee more of the HMIS component. They also had folks that were selected from the maternal and child health directorates, which is where EPI sits. And they also had some officers that were selected from the Ethiopian Public Health Institute, and they are the ones that sort of oversee more of the surveillance component. And so the first step really was a training. And again, this happened during COVID, so there was some training that was provided remotely for this select group of officers at the national level, focused around the basic steps of data triangulation. 
And once they had completed the training, they went into a stage of mapping, really trying to map out the various components and variables of data that existed within immunization and also within surveillance, but also looking at some of those external data sources that have been used for immunization. They also conducted key informant interviews, trying to speak to some of the end users and program managers about their challenges with the quality of data and with the use of this data. Then they also conducted some analysis, pretty much looking at trends and frequencies between the DHIS2 data and some other sources, and I'll talk a little bit more about that. Then they also tried to identify data quality issues and figure out steps on how to promote data use and improve the quality of data, doing more data quality checks on a routine basis. And so for the various data sources that they used for this exercise: again, the HMIS is the Health Management Information System's data; they used data from 2000 to 2019, and this is data that was pulled from DHIS2. They also pulled data from the Ethiopia Demographic and Health Survey. Many of us are familiar with this, and I think the last presentation from Nigeria highlighted some of the pros and cons of these various data sources, so I won't need to touch on that. They also used data from the WUENIC estimates that are released annually, and a couple of other surveillance and logistics data sources. And for comparison, they focused more around Penta 3 and measles 1 coverage, full immunization coverage, and also trying to look at issues around outbreaks, more specifically for measles. And most of the analysis was really just comparing the data and trends in line charts between these data sources. And I'm going to look at some examples now. So I'm going to quickly run through some charts that they wanted to flag. 
One of them, as you can see here, was looking at Penta 3 coverage, with data from the HMIS, from the EDHS, and also data from WUENIC. As you can see over time, there are significant differences between the HMIS data and the EDHS data. So again, we're looking at routine data versus data that is collected through surveys. And I think many of us are familiar with some of the challenges with these data sources and why sometimes there are obvious differences between them. But the country wanted to flag the significant difference that has been consistent over the years. And they also identified that there were small differences between the HMIS and WUENIC. So there was a better correlation; at least the data differences were much smaller when looking at the HMIS and WUENIC. And for this slide, they wanted to flag that, as you can see, there was a significant difference between the HMIS and the survey data, but as time went on, some of that difference had reduced. Again, a similar pattern. Also, if you look at the first dose of measles coverage, again, a large gap between the HMIS and the DHS, and a small difference between the HMIS and WUENIC, and you can also sort of see the similar pattern when looking at the data for measles 1. And in terms of the ratio of difference between these two, it was pretty much consistent, more like a 0.5 ratio over time. And they also looked at full immunization coverage, again comparing the HMIS and the DHS: again, a significant difference over time, looking at data for the last 20 years. And when you look at sort of the consistency line, looking at the ratio between these two data sources, you can see that although the difference sort of increased around 2011, they observed some kind of decline when comparing both data sources. And so I think sort of the big takeaway here is that they also looked at data use. 
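The consistency ratio behind these charts is a simple per-year division of one source's coverage estimate by the other's; a ratio that stays near 0.5 across years, as described above, flags a persistent gap between the routine system and the survey estimates. A minimal sketch, with invented coverage values:

```python
# Minimal version of the source-consistency ratio used in these comparisons:
# for each year, divide survey (or WUENIC) coverage by HMIS administrative
# coverage. Values well below 1.0 flag a persistent gap. Numbers are invented.

def consistency_ratios(hmis, survey):
    """Ratio of survey coverage to HMIS coverage, per year, rounded to 2 dp."""
    return {year: round(survey[year] / hmis[year], 2)
            for year in hmis if year in survey}

hmis = {2016: 90.0, 2019: 96.0}     # administrative coverage, percent
survey = {2016: 45.0, 2019: 53.0}   # survey coverage, percent
ratios = consistency_ratios(hmis, survey)
```

Plotting this ratio as its own line (the "consistency line" mentioned above) makes the trend in the gap easier to read than the two coverage curves alone.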
And so when Ethiopia launched the EPI package, there obviously is an EPI dashboard that came attached to it. And that dashboard was available and being used by the Policy Planning, Monitoring and Evaluation directorate, more within the HMIS group. So this is the group where the dashboard was observed to be used. But interestingly, there was very limited use within EPI. And again, you'd expect that for an EPI package, the EPI unit should be sort of at the forefront of it, but they observed limited use. They also observed that there were pretty much no data triangulation practices, so there were no standardized processes for looking at data across multiple data sources. And at the time of this exercise, Ethiopia had not yet started collecting surveillance data on DHIS2. So some of the big takeaways here: again, issues around completeness of data and inconsistency across data sources, which is pretty much to be expected, and I think we've looked at some of those data points. They also observed that there were outbreaks in many areas that had high administrative coverage, specifically for measles. At the national level, they did observe that there was improved or sort of consistent use of DHIS2 by immunization and M&E experts. So those within the Ministry of Health, within the EPI unit and within the policy and planning units, were more familiar with the use of DHIS2 as compared to those within the surveillance group. Again, this is expected, because the surveillance unit obviously had not migrated to using DHIS2 for collection of surveillance data. And they also observed that some of the data sources revealed that there were other issues with the quality of data, and issues around capacity building. So they observed that those especially within EPHI did not necessarily have the capacity around use of DHIS2. 
And so some of the major next steps that Ethiopia moved on to. Again, this was done at a national level, super high level, and there was a need to do this at a more disaggregated level. So there is a lot of focus around trying to do this at a subnational level. Also, Ethiopia is trying to build data triangulation and synthesis of data into its routine activity. So this is not something that should be done once every couple of months; they're trying to see how this can be done on a more routine basis. And more importantly, training of data managers on the use of DHIS2, but also on data triangulation practices. But I think it extends beyond just data managers, right? There's also emphasis on how they can include policy makers and program managers on how to use the data or at least interpret some of the results from this. So I'm going to pause here and hand over to Angela. She's going to talk about some examples of data triangulation dashboards and some of the collaboration we've had with the University of Oslo. Thanks Joel. And thanks to all the presenters. It seems like everyone's quite on time, so I have time to go through some of this and hopefully we can open it up for some questions as well. So the next few slides are really meant to sort of summarize the approaches that countries can take in development of triangulation dashboards, especially with what you have just seen with Ethiopia and Nigeria. So we'll kind of go over that and then also how within DHIS2 we can start using some of this as well. So again, given the data sources that are available within DHIS2, the main two topic areas that we're going to focus on, though again triangulation can be many things and can focus on many topics, are program performance and immunity gaps. 
So to start with program performance, what are the key concepts here that need to be considered when we're talking about a triangulation dashboard? I'm probably preaching to the choir here, but monitoring of course is needed for planning and continuous quality improvement. Data quality issues can hide coverage gaps and make finding missed children quite difficult. Gaps can be hidden by looking at aggregate data: if you're looking only at the national level or only at one district level, or only looking perhaps at a yearly total. Of course, when we dive a bit deeper, substantial variation usually occurs across facilities and districts as well as over months. And typically what we find is that when there are reporting errors, oftentimes it's really only a few health facilities that are contributing to your data quality issues, or the data quality issues occur over just a couple of months. And so if you can really pinpoint the time and the location in which you're having these data quality issues, you can more easily rectify them. Issues are typically revealed by drilling down by the reporting unit, observing monthly trends, looking at the underlying numerator and denominator, and making comparisons with other data. So how can a dashboard really help with program performance? When we're looking at objectives of a program performance dashboard, here are just some examples of what those objectives would be: allowing staff at the various levels to monitor their own immunization program performance, and identifying potential data quality issues that need further investigation. A lot of times a dashboard is just going to give you a sneak peek, right? 
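The "drill down to a few facilities and a few months" idea above can be sketched as a simple outlier scan: compare each facility-month's reported doses against that facility's own median, and flag only the extreme deviations. The 3x-median rule and the data here are illustrative assumptions, not a prescribed method.

```python
# Sketch of the drill-down idea: flag facility-months whose reported doses
# deviate sharply from that facility's own median, so supervision can target
# a few facilities and months rather than a whole district.
# The 3x-median threshold and the data are illustrative assumptions.

from statistics import median

def flag_outlier_months(monthly_doses, factor=3.0):
    """Return (facility, month) pairs whose value exceeds factor * facility median."""
    flags = []
    for facility, series in monthly_doses.items():
        med = median(series.values())
        for month, value in series.items():
            if med > 0 and value > factor * med:
                flags.append((facility, month))
    return flags

data = {
    "Facility A": {"Jan": 100, "Feb": 110, "Mar": 900, "Apr": 95},
    "Facility B": {"Jan": 50, "Feb": 55, "Mar": 60, "Apr": 52},
}
```

On this toy data only Facility A's March value stands out, which is the kind of pinpointing that makes a follow-up phone call or supervision visit feasible.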
So it is important to keep expectations reasonable, in that sometimes you're just going to notice that there are certain areas where you need to conduct field investigations or root cause analyses, go conduct some sort of supervision and gather more information, and then also prioritize subnational areas or health facilities for supportive supervision activities. So again, just kind of running through the four step process when we're thinking about developing the dashboard. Step one: what key question are we trying to answer with the program performance dashboard? I think you saw some great examples of this with the Ethiopia example. So which districts or facilities have low performance or data quality issues that require follow-up? Or does administrative coverage match with other measures of program performance, in comparison to stock outs, vaccination sessions or disease burden? Step two: identify existing data sources. Depending on which type of system or software you're using to develop your dashboard, like Nigeria did, you can integrate a lot of external data sources into one. What we're focusing on is what we can actually get within DHIS2 itself. And so you'll see here in the green box, there are three data sources within DHIS2 that we can draw on: the WHO immunization data package, as well as the WHO IDSR aggregate surveillance and the WHO case-based surveillance tracker. And that's really the prototype that we've been developing as CDC as a part of our project with UiO. Step three: summarize the data in the local context. Again, I won't spend too much time on this; you saw some great examples of different analyses that can be included in these triangulation dashboards. But just to point out another really great one that's particularly useful at the subnational level: looking at an access and utilization grid. So this is looking at PENTA-1 coverage and the dropout rates from PENTA-1 to PENTA-3. 
And in this particular country, each dot represents a different district. And so here you see the grid with our four different quadrants. Quadrant one: any district that falls within that quadrant, which is the upper right-hand side, is sort of on track with their program, right? They have high PENTA-1 first dose coverage and they also have low dropout. So they're vaccinating a lot of kids with the first dose and those kids are coming back. For quadrant two, you're kind of seeing that there are some service quality issues. So they may have high first dose coverage, so kids are coming into the program to get their first dose, but they also have a high dropout rate, so kids are not coming back. In quadrant three, you see that there are access issues. They have low PENTA-1 coverage, so not that many children are coming into the facility for their first dose vaccine, but those kids are coming back, so there's low dropout. And finally, in quadrant four, you see that there are both access and service quality issues, because they have both first dose coverage that's very low as well as high dropout. And so as a district health officer or at the national level, I would really want to prioritize these places that are in quadrant four as the highest priority. Unfortunately, we didn't have time to have a session dedicated to a live demonstration of the DHIS2 triangulation dashboards we've developed, but here's a quick snapshot. And perhaps we'll try and send out the link of this demo so that people can take a look. But just to say that this is in the works, and in the last slide I'll give you an update on that project. Step four: develop an action plan. So with the dashboard data, what can we actually do as a next step? Well, first, you can identify subnational areas or health facilities that need supportive supervision or, again, field assessments to better understand what issues are going on. 
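The access/utilization grid just described reduces to a four-way classification of each district by its PENTA-1 coverage and PENTA-1-to-PENTA-3 dropout rate. The cut-offs below (80% coverage, 10% dropout) are common conventions but are assumptions here; in practice they would be set locally.

```python
# Sketch of the access/utilization grid: classify a district into one of the
# four quadrants described above. The 80% coverage and 10% dropout cut-offs
# are illustrative assumptions, not fixed rules.

def classify_district(penta1_coverage, dropout_rate,
                      coverage_cutoff=80.0, dropout_cutoff=10.0):
    high_access = penta1_coverage >= coverage_cutoff
    good_quality = dropout_rate < dropout_cutoff
    if high_access and good_quality:
        return 1  # on track: high first-dose coverage, kids come back
    if high_access and not good_quality:
        return 2  # service quality issue: kids start but don't come back
    if not high_access and good_quality:
        return 3  # access issue: few kids start, but those who do return
    return 4      # both access and quality issues: highest priority
```

A district health officer scanning output like this would focus supervision on everything returning 4 first, exactly as described above.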
You may want to implement targeted immunization strengthening or data quality improvement activities. You may realize that you need to develop revised guidance and monitoring processes, integrate data validation checks within your system, perhaps revise supportive supervision checklists, et cetera. So moving on: generally speaking, how would you go about creating immunity gaps dashboards? Similarly, we really saw a great example of how Nigeria went about creating this type of dashboard. But really, the objectives are to allow staff at the various levels to monitor immunity gaps for select vaccine preventable diseases. And the second objective, which has slowly become of quite some interest, is to identify potential gaps in immunization and/or surveillance programs impacted by the COVID-19 pandemic. And so some of the key questions that you can at least start assessing with a dashboard: does surveillance data suggest that there are immunization coverage gaps; if so, in what age groups, geographic areas, or high-risk populations; and does administrative coverage appear to be accurate? So again, not to spend too much time on this, because similarly, for the DHIS2 dashboard that we're working on, it's the same three data packages: the immunization data package as well as the aggregate and case-based surveillance trackers. And then just to mention, there are a lot of external data sources here that are not within DHIS2 that are particularly helpful in identifying immunity gaps, which we hope to consider and focus on in the next phase of our dashboard prototype; things like supplementary immunization activities or campaign coverage, and things of that nature, I think are also very important when we're looking at immunity gaps. 
And I'm just going to briefly say: we hear a lot of examples from the national level and maybe a couple of examples from the subnational, like district, level. But there's really also a lot that can be done at the health facility level. Even very, very basic analyses can be really helpful. I think Patrick gave clear pictures of the things you see on the walls, right? You'll see the monthly coverage chart on the wall of the immunization officer at the health facility. And you go to the surveillance officer's office, and you'll see just the total number of vaccine preventable disease cases that occurred in a given month. But there's never a place where both of those pieces of information are synthesized together. So an example here: the first of these is administrative coverage on a dashboard, right? And in this particular health facility, we see over two years, we have super, super high coverage; in 2019 they're reaching exactly 100% coverage for both measles first and second dose. So as a district health officer, I would be questioning: is this real? Could there be a data error? Is there fabrication involved? Are we having target or population estimate issues? You know, what's going on here? Is this really 100%? And so the next thing that I want to see on a dashboard, and we really did this at a health facility when we were working in Bangladesh, is making sure that we're also looking at the cases at the same time. So here's a graph of the confirmed measles cases by age and vaccination status. And as mentioned earlier, this graph in particular is super, super helpful in being able to better identify why you're having your measles cases and who is really getting measles, rather than just looking at that aggregate total number. 
And so what you see here is that in this health facility, generally they do fairly well with few cases, but there's evidence of delayed vaccination. Here you see that between nine months and one year old, all these cases have had zero doses of measles vaccine. And in this country, according to their vaccination schedule, a child should receive the first dose at nine months. So you can see here that this health facility is missing children. And then similarly, when you're looking at the one to four years, you see here in orange that there are a number of these kids that only received one dose of measles vaccine, but they should have received their second dose by 12 months. And so that also shows you that they're missing children for the second dose of measles. And as the district health officer was interested in what was going on here, we got to the field investigation and found that the practice here was that they were not vaccinating children that were coming in quote unquote sick, so fever, cough, anything like that. And those are not contraindications for receiving measles vaccine. So we were able to rectify what that practice was in reality. Just to briefly mention, again, this is a snapshot of the triangulation dashboard we have in the works, here the DHIS2 dashboard around immunity gaps. And you can see a map here that overlays coverage and number of cases. And according to Patrick, it sounds like we have to change our color scheme. But again, and I won't go over this too much, it's always: okay, so you have your dashboard, what's the action coming after the dashboard? And so there could be things around strengthening of routine immunization. 
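The tabulation behind that cases-by-age-and-vaccination-status chart is just a count over a case line-list. A minimal sketch, using a hypothetical line-list format (the `age_group` and `doses` field names are assumptions): zero-dose cases at 9-11 months point to delayed first doses, and one-dose cases at 1-4 years point to missed second doses, exactly the two patterns described above.

```python
# Sketch of the tabulation behind the stacked bar chart: count confirmed
# measles cases per (age group, doses received) pair from a hypothetical
# case line-list. Field names are assumptions for illustration.

from collections import Counter

def cases_by_age_and_status(line_list):
    """Count cases per (age_group, doses_received) pair."""
    return Counter((case["age_group"], case["doses"]) for case in line_list)

line_list = [
    {"age_group": "9-11m", "doses": 0},
    {"age_group": "9-11m", "doses": 0},
    {"age_group": "1-4y", "doses": 1},
    {"age_group": "1-4y", "doses": 2},
]
counts = cases_by_age_and_status(line_list)
```

Each (age group, dose count) cell becomes one segment of the stacked bars, which is what lets the chart show who is getting measles rather than just how many cases there were.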
So for example, at that health facility, a second year of life platform to make sure we're getting higher second-dose coverage, vaccinating specific populations, vaccinating specific age groups; you may realize that you need a national or subnational vaccination campaign, or perhaps addressing gaps in surveillance program performance, or immunization data quality. So let's skip that one because we kind of talked about it already. But also, just to summarize the functional requirements of a triangulation dashboard, you've kind of seen them through all of these cases and what I've mentioned here. But number one, incorporating data from multiple sources: what's really key here is the interoperability between routine data sources. And you saw how Nigeria has been able to sort of address that with their DHIS2 system as well as their SORMAS system, and we're trying to figure out how we can do that better within DHIS2 itself. You also want to be able to have access to this data on DHIS2 and make sure that there's consistent update and maintenance of the data. You want to be able to aggregate and disaggregate the data as necessary. So thinking of being able to go from national to district, even health facility level, perhaps aggregate to case-based surveillance, yearly to monthly, things like that, and really allowing the users to control this aggregation and disaggregation as they see fit with their data. You want the dashboard to be dynamic, so you want multiple visualizations depending on the type of triangulation. So you saw some great examples here, even things that include combination graphs, tables, that graph that I showed you with the stacked bar charts of the age and vaccination status of cases. That's something that we're having a bit of trouble doing within DHIS2, so we're trying to figure out how we can go about doing that. 
But just being able to have those different analytical capacities I think is really important. And you want to allow the dashboards to be customizable by the users, so they can include additional triangulation visualizations, change their level of aggregation, change the antigens or vaccine preventable diseases they're looking at, or even the time period they're trying to analyze. And additionally, this point is really important and something we're also trying to figure out how to do better within DHIS2: allowing users to make notes or annotations and provide interpretations on the analyses. So with that, I'm going to end on this slide here. Coming soon: the DHIS2 triangulation dashboard for immunization and VPD surveillance programs. The objectives here are really to streamline the integration of data from the DHIS2 immunization data package as well as the IDSR and case-based surveillance tracker. And again, just to really focus the scope, we created two dashboards, one more on assessing program performance and one on investigating immunity gaps. We've completed phase one of the project, which was to develop the prototype. So we've created automated analyses based on the three packages. And then phase two will be to pilot this in a country that is actually implementing all three of these data packages, to assess the dashboard's functionality, adaptability and use, and then also to focus a bit more on interoperability with external data sources as well. So with that, I'll just say we have lots and lots of great resources around this. I'm sorry we didn't have more time to dive in a bit deeper, but these slides will be available so you can access any of these links. And just want to say thank you very, very much. And please feel free to reach out to Jorah; I will put it in the community chat if people have additional questions. 
But I think we actually have time for some questions now, before we adjourn around 11:50 or 11:55 so people can head over to the other side. It's the last session of the day, we're not sure. Please feel free to put questions in the community groups and those things; we can also just continue the conversation post-conference, as people digest everything that's been talked about over the past few days. Yeah, okay. Well, thank you very much. Fantastic presentations, and really very encouraging to see this, I don't want to call it a pivot, but this progression from single-source data to multi-source data. It's fantastic, especially in the HIV space. This is something that we've been asking for, because as countries get closer and closer to epidemic control, the problems are now around individuals, not around groups. But I would like to make two comments here. One is in terms of the different sources of data. I think we need to start looking beyond health data into weather, political, economic and other sources, because they all have an impact on how we provide treatment and how we care for individuals. And the second important thing is, I think the focus should not be on program performance alone, which we've been doing for many years. The focus, especially in the HIV space, should be about patient care. When I say patient care, we really want to know: why are people interrupting their treatment? How can we bring people back into treatment? How do we manage those lost to follow-up? These kinds of things actually require not only aggregate or public data; we also need patient-level data. And how do we connect, for example, the aggregate, the bigger picture, to the patient-level data? How do we connect into EMRs? I know there was discussion yesterday about EMRs, but I think the focus should move a little bit away from program performance, more into patient care, and then also bring in other patient-related data. 
Otherwise, I think it's very encouraging and a fantastic presentation. Thank you. Thank you. Yeah, two great points, and I agree with you completely. And just to briefly mention, we have been working a bit with global colleagues, as well as our own CDC colleagues, at least in the vaccination and immunization program world, on vaccine demand and making sure we're understanding issues around patient care and people being hesitant to get vaccines. What are those reasons, or patients may be hesitant to come back after a bad experience, and we're trying to figure out how we can better get that sort of information to pinpoint some of those as well. Any other questions or comments? Thank you very much for the presentation. It was quite an eye-opener. I have actually noticed that many times when we develop our dashboards in most of our implementations, aspects of our data quality are rarely looked at. So you can imagine a top management person using those relative-period dashboards where the time period will just move as the days progress, and the like. And at times you'll be focusing on some kind of trends and the like. And in the process, at times you also get to identify some data quality issues, probably just as an outlier, for example. And this is probably another way of trying to identify some of those outliers. My question is: how best can we then address some of those data quality issues as we present the dashboards, rather than waiting to go back to the source and then doing the adjustment, which might actually take time and may not always be feasible? At least so that for the decision maker, in as much as the data may not necessarily be correct, it is at least close to reality. Instead of just identifying the error, what else can we do to, say, adjust the error? Thank you. 
I would let other colleagues from countries also give their input, but just to say, I think that's where capacity building at the different levels really comes in. I think it's quite important. Of course, depending on the country, in some countries data entry occurs at the district level, while in others it occurs at the health facility level. But it's really about making sure that at each of those levels people have the knowledge and the power to understand their own data, their own data entry and reporting, and to look at their own data quality. That way issues can be rectified quickly at the appropriate level, rather than having to wait until the district health officer looks at all of the health facilities and finally gets to every single one. So I would say that's one of the biggest challenges, and I think all of the presenters mentioned that the capacity building and training piece is just so, so important, for your exact point of being able to quickly rectify a lot of these issues. When we were developing the guidance and piloting this in Bangladesh, we found a health facility that was consistently an outlier; there was clearly a consistent data entry issue. So it was just a matter of calling them up and saying, hey, can you double-check the data from X month to Y month? And they were able to quickly rectify it in their DHIS2 system, because they do enter data at the health facility level. So I don't know if that answers your question; that's sort of my thought. But I don't know if others have any other comments. Thank you for your presentation. Can you hear me? Okay. How accurate is our triangulated data when comparing performance in coverage against drug stock and use? Because this is something we're trying to look into, but sometimes it can be tricky regarding the drugs part. 
So how good are we at comparing coverage with the drug trends when we don't have shortages and so on? Good question. It looks like Joe maybe wants to comment. But just while he's coming up: at least when talking about vaccine stock, one of the things that we also look at is wastage rates. One thing incorporated into the DHIS2 triangulation dashboard is a heat map chart of all of the health facilities or districts and their wastage rates, flagging which ones have extremely low, I mean negative, wastage rates, and which ones have super, super high wastage rates. And in the vaccine world, high vaccine wastage isn't necessarily a bad thing. There can be open-vial policies where the country says you can open one vial of vaccine to vaccinate one child, and if you waste the rest, that's okay. But it does flag where we're having data quality issues, and then comparing that with coverage gives us a bit of an idea of whether we're actually vaccinating the number of children we say we're vaccinating, based on that stock data. So that's just one quick example; I'll let you add to it. Thanks. I was going to say pretty much similar things. One thing I was going to add is that you're probably not the only one interested in looking at stock data and vaccine utilization data alongside this. For example, in the Nigerian use case, when they had created the first draft of the triangulation dashboard, looking just at immunization and VPDs, there was a request from the team that manages vaccination and stock data for that to be included as a third data source. And so there is a lot of interest, especially from those that manage more of the logistics components, in ensuring that their own data is included. 
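[Editor's note: as a rough illustration of the wastage-rate flagging discussed here, a minimal sketch in Python. It assumes a simple wastage-rate definition, (doses used minus children vaccinated) divided by doses used; the facility names, figures, and the 50% threshold are all hypothetical, not DHIS2 metadata.]

```python
# Hedged sketch: flag facilities with implausible vaccine wastage rates.
# All names, numbers, and thresholds below are illustrative assumptions.

def wastage_rate(doses_used, children_vaccinated):
    """Wastage rate as a fraction of doses used (opened)."""
    if doses_used <= 0:
        return None  # no denominator; cannot compute a rate
    return (doses_used - children_vaccinated) / doses_used

def flag_facility(doses_used, children_vaccinated, high_threshold=0.5):
    """Classify a facility's wastage rate for a heat-map style review."""
    rate = wastage_rate(doses_used, children_vaccinated)
    if rate is None:
        return "no-stock-data"
    if rate < 0:
        return "negative-wastage"   # more doses given than recorded as used
    if rate > high_threshold:
        return "very-high-wastage"  # may still be fine under an open-vial policy
    return "ok"

facilities = {
    "Facility A": (200, 180),  # 10% wastage
    "Facility B": (100, 130),  # negative wastage: stock vs. coverage mismatch
    "Facility C": (100, 40),   # 60% wastage
}
flags = {name: flag_facility(u, v) for name, (u, v) in facilities.items()}
```

A negative rate means more children were recorded as vaccinated than doses recorded as used, which is exactly the kind of stock-versus-coverage inconsistency the heat map is meant to surface.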
And I think that hopefully we will start seeing more examples of countries including logistics data in that process. Yeah, thanks. Patrick, do you have a comment? Maybe come up here for the mic. Mine is just a quick one on how we can be better coordinated while implementing these dashboards, because in country it can be confusing. While we were doing the immunization package dashboard for Uganda, I was at the pulpit preaching the gospel of DHIS2 and how it can do everything. Then all of a sudden other colleagues show up, and they also have dashboards, and the ones they're using are shining. And of course they're pulling data from DHIS2 into another platform, so there's a bit of confusion, especially for the users. So I don't know how we can be better coordinated, but maybe a bit more organized. And I liked the idea that if we are building dashboards outside of DHIS2, we should try to make some of them go back in, so that a user can say, I can use this without necessarily going to the other side. So I think that's really important for us to think about and to properly implement. Quick comment. Thank you so much, Patrick. I think that because of the way the dashboards are, we have to be extremely careful that we are not, like you said, confusing the users. And Nigeria is hopefully one of those examples that was able to integrate these dashboards back. One of the things they did was to ensure that, during this whole process, they worked with the technical working group at the national level to say, hey, this is what we're trying to do, this is the plan. And so whatever is built outside of DHIS2, because it's pulling data from multiple information systems, is then integrated back into DHIS2 as a third-party tool, so that at the end of the day users can access that data. In some cases they want it temporary; in some cases they want it permanent. 
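[Editor's note: for context on the kind of data pull described here, external dashboards typically read aggregate data from DHIS2 through its Web API analytics endpoint. A minimal sketch of constructing such a query follows; the server URL, data element UIDs, and organisation unit UID are placeholder assumptions, not real identifiers.]

```python
# Sketch of building a DHIS2 Web API analytics query URL.
# The base URL and the UIDs in the example call are hypothetical placeholders.
from urllib.parse import urlencode

def analytics_url(base, data_elements, periods, org_unit):
    """Build an analytics request for the given dx/pe/ou dimensions."""
    dims = [
        "dx:" + ";".join(data_elements),  # data elements / indicators
        "pe:" + ";".join(periods),        # reporting periods
        "ou:" + org_unit,                 # organisation unit
    ]
    query = urlencode([("dimension", d) for d in dims])
    return f"{base}/api/analytics.json?{query}"

url = analytics_url(
    "https://dhis2.example.org",
    ["bcgDosesGivenUID", "bcgDosesOpenedUID"],  # hypothetical UIDs
    ["202401", "202402"],
    "districtXUID",
)
```

Keeping such pulls on the documented analytics endpoint, and then registering the resulting dashboard back in DHIS2 as an app, is one way to get the "integrate it back" behavior described above.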
But I think ensuring that all the various stakeholders are at the table, and, I agree, ensuring that there is proper coordination, is important. But I'm a huge advocate of ensuring that whatever is built outside is integrated back, and I think it's good that DHIS2 has the functionality to incorporate those external tools. So, very good comments. Thank you very much, everyone. I will ask for another round of applause, because this was the last session of the week, on top of incredibly interesting sessions. So yeah, thank you very much, everyone, for being here. Thank you to the people online. And you may go in peace.