So my name is Rebecca Potter. I'm our team lead for the health domain with DHIS2, and today we are having a series of presentations on impact and effective use of DHIS2. So what is effective use? This is actually a concept in the information systems literature, so I had to go to some colleagues for help: it is using a system in a way that helps attain the goals for using that system. Great definition, right? However, I then asked a few of my PhD colleagues, who are quite smart ladies, and they told me it is about being able to use the system to its full potential according to its intention, or targeting the use of the system for the purpose it is intended for. Use is effective if it contributes to the objectives of the organization by improving performance through efficacy and efficiency: efficacy being, is it doing what it's supposed to do, and efficiency, are you doing it in a way that takes the least amount of resources possible? So thank you to Anna Torsing and Marta Lvia for helping me with that.

And Gavi really pushed us this year to describe better how DHIS2 is actually contributing to health impact. Why does this matter? Why should we keep investing in this information system? Why do we keep investing in data? It was a pretty big question. So this theory of change is actually borrowed from Gavi; they have a really great way of laying these things out, where inputs lead to outputs, outcomes and impacts. We spent some time with Gavi colleagues and others trying to figure out where DHIS2 fits into all this, and we realized that a lot of the time when we tell our story, we stop at the bottom level, the inputs and activities: what did we do, who did we train, how many trainings, how many facilities? But that doesn't really tell us how the system is being used or what it is achieving. Then we looked at the output level and said, you know what, this is a bit of a sweet spot for DHIS2 interventions: these are the things where we can actually attribute change to DHIS2, its presence and how it's being used. Then you keep going up the levels. How does DHIS2 contribute to outcomes, looking at things like campaign coverage and campaign effectiveness: how is DHIS2 used to actually achieve that? At the top you typically have your high-level impact, but it is very difficult to attribute change there to DHIS2, because there are so many different factors, right? DHIS2 alone is not what helps 90% of people living with HIV become virally suppressed; it's just one part of that story.

So here are a couple of examples from working with the Federal Ministry of Health, and we have these documented as impact stories on our website as well. I really liked these because they gave us real numbers. One example concerns something very basic: aggregate monthly facility stock reports. Sometimes the facilities were not entering them at all; sometimes they were entered at the LGA or district level. But scaling these up throughout Nigeria and having the right people trained at the LGA level actually increased the number of facility stock reports by nearly 5,000%, from 626 in 2021 to more than 30,000. That means that when children go to these facilities, the vaccines are actually there for them to get vaccinated. So this was a really great story, and we selected abstracts for today where we could actually find numbers to attribute impact.
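As a quick sanity check on that figure, here is a minimal sketch of the percentage-increase arithmetic, assuming the values quoted in the talk (626 reports in 2021 versus more than 30,000 after scale-up):

```python
# Percentage-increase arithmetic behind the figure quoted above, using the
# values from the talk: 626 facility stock reports in 2021 vs. more than
# 30,000 after scale-up.
before, after = 626, 30_000
pct_increase = (after - before) / before * 100
print(f"{pct_increase:.0f}% increase")  # ~4692%, i.e. "nearly 5,000%"
```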
Another one is from working with partners at Resolve to Save Lives, who introduced the DHIS2 Android app for individual hypertension case management and used it at the point of care. So if we think about that cascade, what did they do? They introduced the tracker; that's just an activity. They had to train the 100 facilities and all these care providers. But what it actually did was reduce the waiting time for follow-up visits. They were able to digitize all this data, but then the people at the facilities used the working lists, using the individual-level data to find all the patients who were somehow missing, who were not coming back to care. They then relinked more than 1,000 of these people back to care by following them up, and overall they were able to increase the number of diagnosed hypertension patients by more than 50%. So we think these digital interventions really do have a huge potential for change.

And so I will hand over to my colleague who is joining us online, Patrick Omeow. He is going to share a little more on the topic of effective use by digging into the participatory design of EPI dashboards. So Patrick, if you're online, the floor is now yours.

Thank you so much, Rebecca. Can you hear me? Just confirm you can hear me.

Loud and clear, yes.

Thank you. Okay, let me just share my screen before I start. Yeah. So, thank you so much, Rebecca, and thank you so much, Tim, for being part of this presentation and making time to be part of this session. Patrick Omeow is my name. I work with HISP Uganda, but I've also been part of the team that works with Rebecca's team at the University of Oslo, mainly supporting the configuration of metadata packages for the aggregate domain. I'm basically here to share the work we've been doing with a team of colleagues at HISP Uganda, at the Ministry of Health, and of course the metadata package team at the University of Oslo. What we've been doing is a participatory design for the EPI dashboard. This is not the first time we present this at this conference; it is the second time, but this time we are really sharing the baseline assessment that we did to try and understand the experience of users before making changes to the dashboard through this participatory design.

A quick outline of the presentation: I'll give a quick background on where this is coming from, a quick highlight of the approach we are using to adopt these toolkits in Uganda, then the purpose and objectives of the survey, the methods we used, and the key findings with the conclusions and recommendations we drew from that baseline assessment. Then I'll share some screenshots of the dashboard and the designs we came up with, and highlight a bit of the impact and benefits we're starting to see with this design. As a quick background, this work comes from the work being done collaboratively by WHO and the HISP Centre to create these toolkits, which most of us know as metadata packages. With this new definition of toolkits, we're really looking at a collection of resources that can support implementing DHIS2 to meet a minimum key functionality for a given use case.
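Since a working list like the one described above is essentially a saved query against the tracker data, here is a minimal sketch, not the Resolve to Save Lives implementation, of how the "missing follow-up" list could be reproduced via the DHIS2 Web API. The base URL, credentials and UIDs are placeholders:

```python
# A minimal sketch of reproducing a "missing follow-up" working list against
# the DHIS2 Web API: fetch scheduled follow-up events for a tracker program
# and flag those whose due date has passed. Base URL, credentials and UIDs
# below are placeholders, not real identifiers.
from datetime import date

import requests

BASE_URL = "https://dhis2.example.org/api"  # hypothetical instance
AUTH = ("admin", "district")                # demo-style credentials
PROGRAM = "HtnProgUid01"                    # hypothetical program UID
ORG_UNIT = "DistrictUid1"                   # hypothetical org unit UID

resp = requests.get(
    f"{BASE_URL}/events.json",
    params={
        "program": PROGRAM,
        "orgUnit": ORG_UNIT,
        "ouMode": "DESCENDANTS",   # include all facilities under the org unit
        "status": "SCHEDULE",      # scheduled visits that have not happened yet
        "fields": "event,trackedEntityInstance,dueDate",
        "skipPaging": "true",
    },
    auth=AUTH,
)
resp.raise_for_status()

today = date.today().isoformat()
overdue = [e for e in resp.json()["events"] if e.get("dueDate", "9999") < today]
print(f"{len(overdue)} clients overdue for a follow-up visit")
```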
And the core of this is really the metadata packages that one can quickly install into a DHIS2 instance. For a long time we've said that having this standard design can help promote best practices, disseminate standards, avoid duplication of effort, integrate programs into the HMIS, make sharing of data easier and more efficient, and also simplify vertical reporting where global partners may need data. So far we've seen over 10 program areas covered by these toolkits, and over 64 countries have adopted them. And this is just a quick illustration for people who are familiar with the toolkits; we often use this slide to show how it comes about. You have guidance that is globally generated, and out of that guidance, further guidance is developed on how data can be collected and analyzed at facility level. Based on that, we are able to come up with a toolkit that you can put into your DHIS2 instance to collect data and actually analyze your data.

For Uganda, and not only Uganda but the countries supported by the HISP groups, we get support from Gavi and the Global Fund to implement these toolkits. In Uganda we started way back in 2018, and with the approval of the Ministry of Health we were able to do the TB, HIV and malaria packages, and of course EPI. But that's as far as we got: making sure we were able to install the packages onto the national instance. When we looked at it later, we realized that the utilization of the dashboards and toolkits we had installed, especially for the aggregate domain, was really low. We also realized that the way we were rolling it out, the training wasn't adequate; most users weren't aware of the dashboards, and some critical analytics were lacking, especially for EPI. This was the first dashboard design we installed, which came from the metadata package; we installed it for Uganda and that's how it looked. But even with that dashboard in place, people continued to make their own analyses. What they would do is go into DHIS2, extract the data, and still do their own plotting in Excel and other tools offline; this is an example of a TB analysis done by one of the districts. So we said, maybe we need to do better in how we design our dashboards and also in how we roll them out.

That's why we came up with the idea of a participatory approach: to re-adapt what we had, because it was already installed, and give it a new look, revamp it and make it better. So we did a somewhat wider stakeholder engagement to review what we had set up on the national instance. From there, the team generated new requirements and feedback for us to make refinements to the dashboard. We went ahead with this feedback and the additional indicators that were proposed, utilized the latest features of DHIS2 that had more analytics, and made these modifications. But this time we thought it wiser to first do a baseline assessment, to understand the experience of the existing dashboard before rolling out the revised one.
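For context, installing one of these packages is mostly a metadata import. A minimal sketch, with a placeholder server, credentials and file name (real installs should follow the package's installation guide):

```python
# A minimal sketch of a metadata package installation: the package ships as a
# metadata JSON file that is POSTed to the DHIS2 metadata import endpoint.
# Server, credentials and file name are placeholders; dry-run first.
import json

import requests

BASE_URL = "https://dhis2.example.org/api"
AUTH = ("admin", "district")

with open("epi_aggregate_metadata.json") as f:  # illustrative file name
    package = json.load(f)

# Validate-only import: reports what would change without persisting it.
dry_run = requests.post(
    f"{BASE_URL}/metadata",
    params={"importMode": "VALIDATE", "importStrategy": "CREATE_AND_UPDATE"},
    json=package,
    auth=AUTH,
)
print(dry_run.json().get("status"))
# If validation is clean, repeat with importMode=COMMIT to actually install.
```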
And this is just the process we are trying to use here: you have the metadata package set up, then you do a joint review with a deeper dive into the indicators, so everyone appreciates the indicator configurations within DHIS2, and then you come up with these areas of refinement and improvement. And this is where we are now: we have actually deployed, but what I'm sharing today is the baseline assessment, really trying to understand what was there. How was it for you? How did you find it? What can we improve?

So we went ahead and did a baseline assessment, and the assessment objectives were to assess user perceptions of the appropriateness, comprehensiveness, reliability, accessibility, ease of use, and capacity or training needs for the existing dashboard. Really looking at what was there: was it appropriate enough? Was it comprehensive? Was it reliable? And these are just the other specifics we looked at, breaking down what I've just mentioned in terms of appropriateness, accessibility, and the training and capacity-building needs. The assessment was really simple but prospective, in the sense that we are starting with the baseline and hoping to do another post-implementation assessment. It was not an experiment; we didn't have a control arm of people not using the dashboard. We really looked at those who were using it. The sampling was purposive, drawing respondents from district and facility users who met the criterion of active use of DHIS2. We used a tool that we came up with; going forward we think we can make further improvements to it, but this tool can help with assessments of the effectiveness and impact of DHIS2 tools on users. Within the tool we had both open-ended and structured questions; most of the structured questions used a five-point scale to assess the extent to which respondents agreed with certain statements. We did the assessment online using Google Forms, which also helped us obtain consent from participants and keep the responses anonymous. The data we collected was exported into Excel, where we cleaned it and did the analysis.

Now to the key findings, starting with the profiles of our respondents. We were able to reach 94 respondents, covering 48.6% of the districts of Uganda, and 84% of the respondents were district biostatisticians. These are the people responsible for coordinating reporting at the district, and also for coordinating reporting from the facilities within their district, so they are the ones who interact with DHIS2 Uganda on a day-to-day basis. They were really the critical people to give us feedback on what was going on. The first key area of assessment was whether the existing dashboard was appropriate and comprehensive in measuring coverage, which is very key for EPI; dropout rate; adverse events following immunization; cold chain; and stock and wastage.
So those were the five areas we looked at in terms of comprehensiveness and whether the dashboard was able to appropriately measure them. You can see that the highest agreement was with coverage, and for Uganda, coverage at district level, with its indicators and denominators, is well set out, so it is not surprising that 71% of respondents concurred that the dashboard comprehensively measured and helped them analyze coverage. This was followed by dropout at 67% and then adverse events following immunization, and the lowest were stock and cold chain, both at 55% agreement. By agreement we mean those who strongly agreed plus those who agreed; anyone neutral and below was not counted. So it was clear to us that the dashboard can appropriately measure coverage and dropout rate, but less so cold chain and stock.

We also looked at reliability and timeliness: whether the dashboard is able to reliably analyze data, and in a timely fashion. Again, 72% agreed with that. The 28% who disagreed had concerns about training, internet, missing analyses, and understanding of the indicators, and you will see some of these recurring as I go through the findings. Then we looked at accessibility and ease of use: ease of access, ease of learning, ease of navigation, and availability of user manuals. We had the highest agreement on access: most agreed that they can easily access the system and the dashboard, and they also felt it was easy to learn how to navigate and use it. The lowest was the user manual; they indicated that a user manual was not available, and as we go through the challenges you'll see that they actually wanted a printed manual so that it is easy to refer to.

We also looked at training. For training and capacity building, 65% agreed that they had received training on DHIS2, and this was really about DHIS2 training in general, because we had done only limited training specific to the dashboard. So for general DHIS2 training, 65% concurred, and 67% indicated that they felt the training was adequate. But they also highlighted areas of need, especially around interpretation, data extraction, more training on indicators, the new features of DHIS2, and training on customization and design. They largely emphasized that the training should be hands-on and practical.

The other thing we looked at was frequency of access: how frequently do they access DHIS2? We saw that 87% indicated that they always or often access the dashboard, but it was a bit surprising that about 13% rarely, sometimes or never access it. Some of these were biostatisticians, which is surprising given how key these people are; then again, it's a very small number, 13% of the 94. Some of the reasons they gave were, again, around training; they felt the dashboard was a bit cluttered, so it's hard to find what you need; and they cited issues of internet connectivity and lack of access. Those are the issues raised by those infrequently accessing the dashboard.
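A minimal sketch of the "top two box" scoring just described, where on the five-point scale only "Agree" and "Strongly agree" count toward the agreement percentage; the column names and responses are illustrative, not the actual survey export:

```python
# Top-two-box agreement per dashboard area: only "Agree" and "Strongly agree"
# count; neutral and below do not. Responses below are illustrative.
import pandas as pd

responses = pd.DataFrame({
    "coverage":   ["Strongly agree", "Agree", "Neutral", "Agree", "Disagree"],
    "cold_chain": ["Neutral", "Agree", "Disagree", "Agree", "Neutral"],
})

TOP_TWO = {"Agree", "Strongly agree"}
agreement = responses.apply(lambda col: col.isin(TOP_TWO).mean() * 100)
print(agreement.round(1))  # percent agreement per dashboard area
```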
The last part of this is that we were also interested in understanding the time it takes them to extract relevant information from the dashboard. Here, about 55% indicated they could do this in less than 10 minutes, and about 82% in less than 15; the remaining 18% would go over 15 minutes. Again, they had concerns about slow internet, unclear indicators and confusion on the dashboard.

There were a lot of challenges identified, and we're not going to go through each of them, but we've tried to group them: issues around design and functionality; stability of the system, performance and accessibility; training, capacity building and end-user support; data extraction; quality of both data and metadata; analysis; and lastly, system integration. To pick out just a few: on design, the clutter of the dashboard comes up again; they feel the dashboard is congested, so it's not easy to pick out what they need, which is much the same as the difficulty in finding specific information. On system stability, they indicated the system is sometimes on and off, and network issues keep coming up. Then there is the lack of training for facility staff in analysis, and I think most of us are aware that most trainings really don't go down to facility level. They again raised the issue of the lack of hard-copy user manuals, and then there were issues of extraction: difficulties in extracting data into Excel. They also indicated that data elements are not clearly visible. And on integration, they pointed to multiple logins, which I feel is an integration issue. Based on the challenges, they also proposed areas of improvement, which again we don't have time to go through one by one, but we've grouped them around those same areas. It's a good thing that alongside the challenges they also proposed improvements for us; due to time I won't go through each of them, but we'll share this presentation.

In conclusion, based on what I've highlighted (and we have a detailed report for this), the findings underscore the strengths of the existing EPI dashboard in all areas of assessment, while also emphasizing the need to address the concerns raised about system stability and the dashboard's design and features. It is critical to enhance the user experience and provide additional training opportunities, to meet users' needs and maximize the effectiveness of this dashboard. The key recommendations we draw from this assessment: first, we need to address the concerns about internet. We know that for countries like Uganda, efforts have been made to improve internet connectivity at district and facility levels, but facilities are at different levels: at a hospital it will definitely be much better, but as you go down to a health centre II there will definitely be connectivity issues. The same goes for districts; we have newer districts where the infrastructure, not just for internet but in other areas too, is still a bit lacking.
However, the focus on addressing internet is really very important. We also need to help users understand that connectivity problems are not an inherent flaw in the software, because when people fail to access the system for internet or connectivity reasons, sometimes they say DHIS2 is not working. So it's very important that we help them understand that if you have internet issues, it's the internet, not DHIS2. We also recommend providing training to facility users, to ensure they equally have the necessary knowledge to use the system. This is very important because most times training ends at district level and we let the district people take it down, but most of the time they don't have the resources to actually do this training at those levels. We also think it's important to have regular advanced training for district users, because most times we focus on how to analyze data, but sometimes they are interested in how to configure their own indicators and then analyze the data. We also recommend that these recommendations, and the challenges and areas of improvement that users suggested, really be incorporated into our design, to further enhance the EPI dashboard and of course the entire DHIS2 implementation. And then, as Rebecca mentioned, we really need to enhance the project designs for digital interventions by incorporating a robust M&E framework that sets benchmarks for these kinds of evaluations. As you can see, we tried to come up with some benchmarks to assess against, but if we improve on that, it will be much easier to benchmark clearly, do these evaluations, and see whether there is impact.

So I think I'm done with those, and I'll just take you through some of what we now have. The dashboard covers these areas: coverage, dropout rate, wastage monitoring, availability of vaccines and supplies, cold chain, adverse events following immunization, and session planning, which is really what comes with the guidance for EPI facility and district level analysis. With the new features of DHIS2 we've also been able to implement the RED, Reaching Every District, categorization, and we are now able to do monitoring charts. We also brought in stock coverage time, which was not part of the existing dashboard and helps monitor stock adequacy. For the RED categorization, we can now produce this nice scatter plot and categorize districts. You could even do it for facilities, but because of denominator challenges, coverage doesn't really work well at that level; at district level, though, you can know which districts fall in which category. We won't go through each of these, but you can clearly see that with this scatter plot we have these quadrants. For now we're not able to shade the quadrants, so we said this might be difficult to read, and what we did instead was put it on a map. So instead of the quadrants on the scatter plot, we now have it on a map, where one can clearly see which district is in red, and we know what red means among the different categories of this RED categorization: red, yellow, light green and green.
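A minimal sketch of the RED quadrant logic behind the scatter plot and map just described, assuming the common convention of proxying access by first-dose (Penta1/DPT1) coverage and utilization by the dose-1-to-dose-3 dropout rate; the 90% and 10% cut-offs and the category-to-colour mapping are illustrative assumptions, so check them against the national EPI guidelines before reuse:

```python
# RED (Reaching Every District) categorization sketch. Thresholds and the
# category-to-colour mapping are illustrative assumptions, not Uganda's
# actual configuration.

def dropout_rate(dose1: int, dose3: int) -> float:
    """Dropout (%) between the first and third dose of a vaccine series."""
    return (dose1 - dose3) / dose1 * 100

def red_category(coverage_pct: float, dropout_pct: float) -> str:
    good_access = coverage_pct >= 90       # illustrative threshold
    good_utilization = dropout_pct <= 10   # illustrative threshold
    if good_access and good_utilization:
        return "category 1 (green)"
    if good_access:
        return "category 2 (yellow)"
    if good_utilization:
        return "category 3 (light green)"
    return "category 4 (red)"

# Toy district data: (Penta1 coverage %, Penta1 doses, Penta3 doses)
districts = {"District A": (95.0, 1200, 1150), "District B": (72.0, 900, 760)}
for name, (cov, d1, d3) in districts.items():
    print(name, "->", red_category(cov, dropout_rate(d1, d3)))
```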
We went further to also bring in a pivot table that can help those who want to see more months, to see whether there have been trends in this kind of performance for a given district. This is one of the things that was lacking, but now we have it in DHIS2. We also built the monitoring chart, where you have your population, and as vaccinations happen and are reported each month, it plots whether you are meeting your required monthly targets based on your population estimates. We are also able to show stock adequacy based on stock coverage, so you can clearly see which districts are out of stock, which are understocked, which have adequate stock, and which are overstocked. This is sample data, but near-real, so anyone going out to the field can clearly say: these districts in red are out of stock; these districts in yellow are understocked. You can also bring all the coverage into one table with these nice colors, so you can see which antigen has low coverage for which period. We also brought in a chart for session planning, where you can look at the sessions planned versus the proportion that were conducted, all in one chart. You can also look at wastage, using legend colors to highlight where it is high. And then we brought in the spider chart, where you can plot coverage for multiple indicators, compare one year with another, and put in your target line. For this one you can see that last year they were able to hit it; this year they are still trying to get there. We are just halfway through the year, but one can ask: if you are at 23%, will you get to 100%? It helps to compare the current year with the previous year.

And lastly, what we are starting to see so far. When we do another assessment in the next few months, we should be able to properly document some of the impact this has had on users in terms of using the system, and of course take it to the next level of understanding whether it really contributes to health outcomes. With this dashboard on the national instance, its reach is national: we are able to get to 5,000 users, because it has been shared with everyone, about 8,000 facilities that are in the DHIS2 instance, and 146 districts. That's the reach this dashboard can cover, meaning any user with access should be able to access it. Viewership has also made some improvement. With 5,000 users you would expect all of them to view the dashboard, but that's not the case; still, we are seeing improvement in viewership. For this we used the user analytics in DHIS2. When we extracted this data, we looked at data from 2021, before we made these changes: on average, about 10 people per quarter would look at the EPI dashboard. In 2022, when we started making these changes and talking about this dashboard, viewership went up to about 50 per quarter. Again, that's still low.
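A minimal sketch of how viewership figures like these can be pulled from the DHIS2 data statistics API, which aggregates usage events per interval; the server and credentials are placeholders, and the exact response field names can vary between DHIS2 versions, so inspect the raw JSON first:

```python
# Pull aggregated usage statistics (including dashboard views) per month.
# Placeholders throughout; response field names may differ by DHIS2 version.
import requests

BASE_URL = "https://dhis2.example.org/api"
AUTH = ("admin", "district")

resp = requests.get(
    f"{BASE_URL}/dataStatistics",
    params={"startDate": "2021-01-01", "endDate": "2023-06-30", "interval": "MONTH"},
    auth=AUTH,
)
resp.raise_for_status()
for snapshot in resp.json():
    # Each snapshot is one interval's aggregated usage counts.
    print(snapshot.get("year"), snapshot.get("month"), snapshot.get("dashboardViews"))
```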
And this year it's about 61 views on average, with spikes in some months, but we think that going from 10 to 50 to 60 is already something. We are also hearing that the stock coverage analysis is now helping with a better understanding of stock adequacy at the different district and facility levels, as in the map I shared earlier. Analyses like the RED categorization and the monitoring chart have also become easier, because previously we had these in a separate app, the immunization analysis app, but now they are on the dashboard, which makes analysis much easier. We're also seeing increased demand for lower-level, sub-national analysis. Most of the biostatisticians are very interested in doing analysis at facility or sub-county level, because they want to understand coverage within their own district. So we've had this demand around catchment areas, and with the recent developments around maps, I think we can improve on that too. And this is a testimonial from the EPI data manager, who has been very keen on working with us on this dashboard. He said: we always have weekly program meetings where we present the program performance using the dashboard. So they are already using it, because if you present the stock adequacy map every Monday, or at the end of the month once the data has been entered, it's quite informative; one can know where the problems are.

Thank you, Patrick, so much. I know this is your last slide and we're a little over time, so you can show us the acknowledgments and we'll introduce our next presenter. Thanks.

Yes, that is really the acknowledgement, and thank you so much; over to Rebecca.

Thank you so very much. Our next presentation will be a joint presentation from Adolf Komagunga and Wilfred Sanyoni, from HISP Rwanda and HISP Tanzania, respectively. I'd like you to come on up and start us off, Adolf.

Okay, thank you very much. Good afternoon to everyone. I'm simply going to take you through, or share with you, our experience trying to enhance data use at the decentralized levels, especially by adopting the strategy of having a District of Excellence. This is an approach we took after realizing that at district level there is a very big gap in using data analytics apps to analyze data at the decentralized levels, and also in using data. We've been doing this exercise together with our HISP colleagues in Tanzania, exploring different options, but mainly in our country we've been trying this using scorecards and also enhanced dashboards. In my presentation I'm going to give you an overview of what we've done with this District of Excellence, describe the use case and some of the achievements of the implementation, and offer some conclusions based on what we've seen. This slide briefly illustrates how our District of Excellence works. The aim, as I said, is to strengthen data use at the pilot sites, because we found that in most districts and facilities, data use is still a challenge. So we use the District of Excellence to identify the gaps, and also use those pilot districts to see what is working, what is not working, and what the challenges are, then try to address them and replicate what we find working to the rest of the districts in the country.
Of course, at district level you find coordination meetings. These meetings happen mainly at the district level to coordinate health interventions, but also to monitor health status according to the district's priority indicators. In most cases, as countries, we focus heavily on national priority indicators, and to some extent we forget to really support the district level, where data is being collected and where it is also supposed to be used to inform or improve health services. During this District of Excellence exercise, part of the task is also to assess the gaps: to look at the standards being used, the operations, and the procedures in place, and to identify gaps against the SOPs, so that as we find things out we can adjust the SOPs, especially to improve data use at the decentralized levels. Part of the District of Excellence is also to address challenges, and to document. By looking at how districts are using data, finding the gaps and the challenges, and documenting, whatever is working well we can replicate elsewhere, and whatever is not working well we try to re-engineer or adjust, and come up with documentation and publication. In particular, we try to combine this with implementation research, so that, as an organization working with academic institutions, master's students can help us dig in and find the challenges, and also document them for the community. That is, in summary, what we've been doing in our District of Excellence: addressing data-use gaps, and copying what works well in these pilot districts to other places.

As I said, we have a number of different use cases, but for this presentation I'll focus only on how, through this District of Excellence, and of course in collaboration with the Ministry of Health and other departments, we used scorecards and dashboards at district level in the coordination meetings, the district health management meetings, and a number of different meetings at the health facilities: data quality meetings, staff meetings, coordination meetings. In those meetings, we want to see how data is being used, how data informs the planning, the presentations and the sessions, and how data is used to inform the next actions. We started by evaluating and documenting the process. Of course, saying there is a gap in data use doesn't mean nothing is being done; we simply wanted to see how it is being done, what the gaps are in that process and in the documentation, and what the challenges for data use are at those levels. Then, together with them, we configured the scorecards and dashboards based on their priority indicators, and of course the national indicators too, but focusing mainly on their needs, because sometimes we tend to forget data needs at the decentralized levels.
We have also been participating in the monthly and quarterly coordination meetings in those pilot sites, to experience things ourselves rather than reading the session reports, so that we can really feel their concerns and engage hand in hand, and be able to advise on and standardize the approach in the other districts. For this exercise, in our District of Excellence we are actually using one district in an urban area and one in a rural area. After participating in and attending the coordination meetings, we did some assessments, looking at how things are being done, how the meetings are organized and what is presented, and we document this for further use and for replication. We have also championed a supervision approach, so that what we discuss at district level is carried down to the health facilities; as we address a challenge, we address not just the district level but the whole area covered by the district. That way, as we improve data use, we are not focused only on the district level but go deep to the facilities, so that data can be at the centre of every coordination meeting, district management meeting, and data management meeting happening at the health facilities.

What we did, together with the district people, was to improve how DHIS2 is configured, so that they can visualize the key performance indicators within the district. We normally had a process of customizing dashboards only for national use, but we thought: why don't we go down and customize dashboards and scorecards for districts? This of course involved the stakeholders: the heads of institutions, the director of health at the district hospital, the administrative districts, and all the stakeholders sitting in these district-level coordination meetings. We explored the scorecard. I'm not sure everybody knows how the scorecard works, but it offers a tabular view with colour coding, so that anyone can easily identify the bottlenecks or problem areas, the performing and non-performing indicators, and many other options. Part of the exercise of working with the district people is that it becomes a peer-learning process: as we do things, they also learn how to do them, which to a certain extent also decentralizes the capability of doing things by themselves. This has also informed actual research, as I said, through collaboration with local universities and UiO students to carry out implementation research on all of this, so that we can find out the challenges of data use at the decentralized levels and document the lessons learned.

This is one of the scorecard configurations, for you to have a look at. For example, you can see that with the scorecard you can configure legends however you want, based on the score. This is a sample scorecard for the EPI program, which we wanted for a DHMT coordination meeting, to see how EPI indicators are performing using different data sources, especially denominators.
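A minimal sketch of the colour-coded legend logic a scorecard applies, where each indicator value is bucketed against legend thresholds so bottlenecks stand out at a glance; the thresholds and values here are illustrative, not the district's actual configuration:

```python
# Bucket indicator values against legend thresholds (illustrative config).
LEGEND = [
    (90.0, "green"),   # on target
    (70.0, "yellow"),  # needs attention
    (0.0,  "red"),     # bottleneck
]

def colour(value: float) -> str:
    for floor, name in LEGEND:
        if value >= floor:
            return name
    return "grey"  # below all thresholds / no data

scores = {"BCG coverage": 94.0, "Penta3 coverage": 76.5, "Measles coverage": 58.0}
for indicator, value in scores.items():
    print(f"{indicator:18s} {value:5.1f}%  {colour(value)}")
```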
Here, there is one indicator using BCG as a denominator, alongside other proxy indicators using service data, so that as they interpret the performance indicators, they are also comparing different denominator sources, because the denominator sometimes determines whether you think you are progressing well or underperforming, when in fact there may be a limitation. Now, about the results. One of the results of this exercise is having district-level staff with the capacity to do this by themselves, and promoting data use at the ground level, so that they don't feel they only have to send data to the national level, but also see how they can use the data themselves. Another lesson is the value of regional collaboration: this scorecard app was developed by our colleagues from Tanzania, so this was also a way of contributing to regional innovation, providing feedback to improve the app for the rest of the community. I can't read everything, as I've been told the time is running fast. As a conclusion: data use used to be a challenge, and it is still a challenge in some of the districts. Having a system in itself is not enough to say that data is being used at all levels; people have to know how to use the available information, because while data is being collected at the ground level, much less effort goes into how to actually use it, and the access levels of people down there are not always configured to support data use at the decentralized levels. Lastly, the District of Excellence helps you see what is working well in the selected districts, so you can replicate it. Sometimes we assume that the way we do things is the best way, but this District of Excellence has shown us there is a lot we can improve by testing approaches in selected districts and replicating them once we are quite sure of what works well, and it also supports documentation. Sorry for rushing; I've been told the time remaining is short and there are a lot of presentations to come. The slides will be shared, I think, on the drive, and the video is going to be shared too, so you can still have a look and reach out to us in case you want to find out more. Thank you very much.

Next we have, online actually, Wilfred Sanyoni. So Wilfred, do you want me to pull up your slides, or do you want to present on your own?

Rebecca, can you hear me there?

Yes, we hear you.

Oh, great. So just a minute. I think you're still projecting your screen. So give me a moment. Can you see my screen there?

Perfect, yes, thank you.

Thank you. Hi, everyone. My name is Wilfred Sanyoni from HISP Tanzania. I'm here to present a local project that we have championed at HISP Tanzania, working with the HISP Centre and with the Ministry of Health. This project aims at enhancing data availability and use at the local level, in particular at health facilities, and it was made possible through the implementation of the District of Excellence in some of the sites in Tanzania. I think my colleague has just explained a little about what the District of Excellence is.
But basically, in this context, the District of Excellence just provides a learning environment where certain solutions, such as approaches and technologies, can be properly developed and tested for improved processes of monitoring, evaluation and learning. The key point is that these spaces allow different stakeholders to come together, work together and test the solutions out there. A solution could be digital, it could be routines, it could be structures, it could be approaches or business processes that can actually improve the efficiency of data use, data management, et cetera; I think Rebecca touched a little on that in the beginning. So why do we need a District of Excellence? I think the most important part is to have a safe environment where stakeholders can come together, test solutions, and understand what works and what doesn't, for whom it works, and when it doesn't work, why not, and then see how to scale some of these solutions from one site to another. Additionally, the District of Excellence provides an environment where you can generate more champions of the system, who can help with scaling the system more broadly. As we all know, health information system implementation requires champions who can help sustain and scale the system across the country. And the District of Excellence also provides a platform where researchers and implementers can come together, work together and learn: knowledge generated from research feeds into the implementation, and vice versa from implementation back to research.

In Tanzania, we have implemented the District of Excellence in two designated districts in the Dodoma region. Dodoma is the capital region of Tanzania, situated there, sandwiched between several national parks, as you can see. We selected two district councils: one is Bahi District Council and the other is Dodoma City Council. One council provided a bit of an urban context, while the other provided a rural setting. The idea was to learn from some of the local initiatives ongoing in one particular council, see how we can scale them to the other council, understand the practice of scaling these particular practices, document them, and hopefully scale them nationwide. Having two district councils also provides a means of testing the solutions and comparing the knowledge we gain from the different settings, rural and urban.

Our methodology was fairly straightforward. We focused our District of Excellence approach on four thematic areas. The first is data management and information use: how data management and information use are cultivated within the local setting, and how this can be scaled. The second is how digital innovations and digital tools can facilitate or enhance data use at the local level. The third is capacity development.
How can we build adequate capacity and build the champions who can take up the system and work with it? The fourth is research and documentation: working with local universities, and at the national level with postgrad and undergrad students, to conduct research on the ground, feed that research into best practices, and share those best practices with the community. Before starting, we decided to conduct a baseline study; I think some of my colleagues have also talked about baseline studies. Before we embarked on any interventions, we needed to understand the local context: the formal and informal practices, and what kind of improvisation happens in managing routine data within these two sites. The idea is that the baseline assessment provides us with answers, and then we work together with the local stakeholders to develop the interventions we want to implement within this District of Excellence. The key point is that these interventions should not be parallel structures, but rather build on top of the existing practices and routines of analysis and use.

These are some of the findings from the baseline assessment we did; the report can be found online, and we are happy to share it. There are key areas we assessed, looking at the district and facility levels, and we found some strengths and some weaknesses that gave us an opportunity to shape our interventions. For example, we found there were already some champions who understood the systems and had been using the tools, and there was some knowledge of data analysis. However, we also found weaknesses in the skill sets for using DHIS2 for analysis and integration, mostly at the facility level, where they were mostly thinking of reporting the data rather than using it for local management. Another thing we found was a shortage of forums and platforms where they could actually talk about and analyze data, engage with it, interpret it and make sense of it, and a lack of standards for how to analyze data, manage indicators, and disseminate information to the different stakeholders within the district, especially when we think of facilities. There are of course different opportunities: for example, DHIS2 has now been scaled to the health facility level, so facilities are actually able to do most of this decision-making at their own level. That was on data analysis and presentation, but we also looked at the information-use culture: what are the strengths? There are some good opportunities we saw there.
One was the quarterly review meetings, which have been happening quarterly; not quite routinely, but at least there was some institutional practice there in terms of what needs to be done, who is engaged, and how frequently it happens. Of course, the challenge of funding was always there, limiting the execution of these meetings, but so were standards: we found there was no common standard between the districts in how they use these quarterly review meetings to actually use data and influence the decisions made in them. So those are the aspects we saw, pointing to the need for a more standardized way of working, and for building this culture so that it is not only the district people managing and supporting data analysis, but the facilities also owning the data, analyzing it and using it for decision-making. On general health data management, we also found some strengths: good structures for managing the HMIS tools, good ICT literacy at some levels, I think mostly at the district level, fair skills in the use of data tools, and the availability of experienced in-service staff. However, some of them lacked training, and there was a real lack of continuity: if you were trained, I think the last training was three or four years ago, so there was a lack of ongoing on-the-job training in those locations. There was also a lack of a framework to enhance the use of data to inform decisions. So there were a lot of findings from the baseline assessment.

Once we conducted the assessment, we sat down with the districts, verified the findings, and later designed an action plan with agreed interventions. The first point: we saw a need to improve or enhance the analysis of data at the facility level, and one of the barriers we saw was the lack of denominators at the health facility. I think this is a problem that faces a lot of health information systems, and luckily we had a local remedy: one champion had been creating these denominators for health facilities. So we tried to see how we could map this particular solution from the rural setting to the other setting, where we have different levels of facilities compared to the rural setting. That was the first way of addressing it, so that health facilities could actually have these targets and compare themselves across facilities. The second part was to develop, together with the health facilities and the district level, dashboards based on the local needs. They came up with the indicators: these are the indicators we want to monitor every month, these are our targets, these are the areas we really want to monitor through our different programs. So we designed facility-level and district-level dashboards with them, and then shared them for testing.
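A minimal sketch, under stated assumptions, of one way facility-level denominators can be derived when catchment populations are unknown: apportioning the district target population to facilities in proportion to each facility's share of historical service volume. This illustrates the general technique, not the specific method the champion described above used; all figures are made up:

```python
# Apportion a district target population to facilities by service-volume
# share, as a proxy denominator. Illustrative figures only.
district_target = 12_000  # e.g. district under-one population

# Last year's doses given, used as a proxy for each facility's catchment share.
service_volume = {"Facility A": 3_000, "Facility B": 1_500, "Facility C": 500}
total = sum(service_volume.values())

denominators = {
    facility: round(district_target * volume / total)
    for facility, volume in service_volume.items()
}
print(denominators)  # per-facility targets usable as coverage denominators
```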
We also went ahead and built the capacity of the local users, focusing on data management, data quality analysis, dissemination and use, because this was one of the huge gaps we saw. I think Patrick also mentioned this: most of these digital tools and apps that we build within the health information system are really aimed at the national level, and the users at the local level are not really using them. Part of that could be a need for awareness, but maybe they also don't have the capacity to use them. So we trained them in understanding these tools and using them to interpret their data, understand the quality of their data, and so on. It was quite informative to see the way they used these tools to really go down into their data and talk, as colleagues, about what makes sense, what doesn't make sense, and how to address the particular challenges they were seeing. So it was quite a good platform for engaging. Following that, we saw there was a real need to introduce more routines or platforms where these people can come together, sit, discuss, engage, interpret their data and make decisions from it, and these meetings should be embedded within the quarterly review meetings they usually have, which are institutionalized within their structures. Those were some of the interventions. As we continue, we are looking at how we can improve further: for example, we understand that the Ministry of Health in Tanzania has developed a data use guideline that covers the different levels, national, region, district and facility, and we are trying to see how we can introduce it within the districts, test it, see how it works, and provide feedback to the ministry on how to improve these guidelines.

As for preliminary findings: these interventions have been deployed quite recently. What we have seen so far is that some of these tools are being used by the users to support local action, and the capacity to review, analyze and disseminate data is growing among the people at the health facilities. Another important part is that as they meet together and share this information, best practices can now be elevated, more or less bringing out the voice of the local people about the practices they are doing. A good case is what we found in one of the facilities in Bahi, where the malaria focal person really used the data to identify some of the facilities that were having high numbers of malaria cases, went there to do interventions, and then used follow-up data to understand whether those interventions were working or not. So, in conclusion: we all know that data use can be interpreted differently at different levels. If you're at the national level, your data-use perspective will be different from that of the people at the facility level.
However, I think these District of Excellence sites provide an environment where these practices can really be tested and guided, and the learning from them can be documented and shared with the other districts for better, enhanced data use. An important thing we have learned so far is that it's really important to build on top of local routines to facilitate the socialization of the introduced interventions, so that they can be sustained and scaled. The use of DHIS2 at the facility level to support data review, analysis and use helps in empowering these stakeholders in their local actions. And what we are seeing is that more research and effort is required in anchoring these interventions within the District of Excellence; a good example is the use of denominators for facilities. More research and effort is needed to see how best practices can be scaled across the country, and to find a sustainable way to scale the solutions we are finding in this particular District of Excellence. Thank you very much for listening.

Thank you so much, Wilfred, and also Adolf, for sharing the District of Excellence approach. Now we're entering our lightning rounds, where I actually challenged the next several speakers to give me six or seven slides. So everyone, less than 10 minutes; let's stay on time. In fact, I will ask you to please just skip over your context and go straight to your problem statement. Our next presenter is Vincent Kamara from the Ministry of Health Uganda, sharing how the case-based surveillance system was actually used to improve linkage to second-line treatment among drug-resistant TB cases.

All right, thank you. Good evening, everybody. I've been introduced; my name is Vincent Kamara from Uganda, and I'm happy to be here. So, quickly: the move away from aggregate reporting was informed by a number of issues we identified, especially with the quality of the information and data that was coming through. Our aggregate reporting started almost in 2012, but we have been moving towards tracker-based reporting, especially for what we call the electronic case-based surveillance system. So far we have rolled this system out to 504 health facilities, equivalent to 28% of all the sites we have, and most importantly, all 18 multi-drug-resistant TB sites are enrolled on the case-based surveillance. We had noticed a lot of data management errors resulting in unreliable and inaccurate data for patient management and decision-making, especially when people had to tally the forms manually to report in aggregate. They encountered a number of issues: it was time-consuming, real-time data was not possible, and the in-depth investigation needed for better management and decision-making was lacking when using paper-based tools. Also, with aggregate data, if you want to do specific patient-level analysis, it is a little bit difficult. So that's why we ended up with the DHIS2 tracker, which we dubbed the electronic case-based surveillance system for TB and leprosy.
So we developed the app back in 2020, and we have trained almost 2,000 health facility staff in the use of the system, including almost 1,400 clinicians. Those are the program people; if you really want the data to be utilized, you have to encourage the program people to appreciate the tracker-based system. To further help them, we also trained the records assistants, almost 527 of them, and of course the biostatisticians who help to analyze the data at district level. To be clear, the biostatisticians also help in the rollout of the case-based surveillance system to the health facilities. The rollout started with the DR sites; we already had a management information system with them, so we rolled out the case-based surveillance system to the DR sites first, and then scaled out to the other health center IVs, hospitals and health center IIIs. What makes this even more interesting is the routine monitoring that is done weekly, comparing the data entered in the case-based surveillance against the manual, hard-copy records. To give a little bit of context: for those who know about TB management, it all starts with a patient coming to be screened at the outpatient department, the OPD. There they use the ICF guide, and patients who are presumed to have TB are entered in the presumptive TB register. This is where the ECBSS comes in, because once we send them to the lab, those who are positive for TB and those who have rifampicin resistance are entered in the case-based surveillance system. Once they are entered, a notification goes to the DR site, both by SMS and on the dashboard at the RR site. Once it appears there, there is a quick follow-up, in real time: the moment the notification comes, the TB focal person or the DR-TB focal person of that site immediately follows up to see that the patient can be started on treatment immediately. That's where the check I mentioned comes in, between what is in the system and what is in the manual records, to make sure that this linkage is made actionable. Periodically, on a weekly basis, the regional TB and leprosy supervisors generate lists from the case-based surveillance and compare them with the registers, to make sure that we don't have any missed linkage. The integration of the system with phone calls helped the diagnostic and treatment centers and patients to make sure that everyone is initiated. And once a patient is confirmed, to close the gap, we make sure that the index TB patients and the RR cases are contact traced, and all these lists are generated from the case-based surveillance system. The system has also helped us to make appointment lists, to make sure that people who have started treatment complete it. This continues especially for the people who have been contact traced: we link them for screening, and after presumption we start them on treatment through the different tests. So that's how the ECBSS helps to bridge the patient management flow.
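As a rough illustration of the weekly reconciliation step Vincent describes, the sketch below pulls last week's case-based surveillance events from a DHIS2 instance so a supervisor could check them against the paper registers. The instance URL, credentials and all UIDs are hypothetical placeholders, not the actual Uganda ECBS configuration, and this uses the legacy events endpoint rather than the newer /api/tracker/events.

```python
# Minimal sketch: list last week's case-based surveillance events for
# manual comparison with the paper registers. URL, credentials and all
# UIDs below are hypothetical placeholders.
import datetime
import requests

BASE = "https://ecbs.example.org"     # hypothetical DHIS2 instance
AUTH = ("supervisor", "secret")       # replace with real credentials
PROGRAM = "EcbsProgU1d"               # hypothetical ECBS tracker program UID

end = datetime.date.today()
start = end - datetime.timedelta(days=7)

# Legacy events endpoint; recent DHIS2 versions also expose /api/tracker/events.
resp = requests.get(
    f"{BASE}/api/events.json",
    params={
        "program": PROGRAM,
        "orgUnit": "RegionU1d00",     # hypothetical region org unit
        "ouMode": "DESCENDANTS",      # include every facility underneath
        "startDate": start.isoformat(),
        "endDate": end.isoformat(),
        "pageSize": 1000,
    },
    auth=AUTH,
)
resp.raise_for_status()

# One row per notified case, to tick off against the registers.
for ev in resp.json().get("events", []):
    print(ev["event"], ev.get("orgUnitName"), ev.get("eventDate"))
```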
Now, a few results of what we have seen since we started using the case-based surveillance. We had an issue of linkage: some patients would come in and be sent to the lab to be tested, some would turn out to be RR, rifampicin-resistant cases, but then somehow would miss being started on treatment. This gap was quite big in July 2021. Once we started using the case-based surveillance, we started closing this gap, and as of the last six months of 2022 we had almost closed it, at 98%. If we look at the 2023 numbers, we have actually managed to close this gap, and this has really been attributed to the case-based surveillance system. Of course, by region, we know that some regions have improved much more than others, and we have picked out the peers who have done this very well to go out and carry the message to the others. So for those regions at the 100% level, we have picked out the RR focal persons to go out and share these key lessons with the others. In terms of short-term outcomes (this is an excerpt from our case-based surveillance system), we have seen an increased proportion of identified patients started on treatment, and reduced time to treatment initiation, because when you leave patients for a long time you end up losing them; the moment we notice them, we are able to start them on treatment more quickly. With the easy follow-up, especially with the notification module, we are seeing improved follow-up and thus improved treatment success. The people who are notified quickly get a quick follow-up, and we make sure there is treatment success. This treatment success is monitored better because we have the six-month interim, the 12-month and finally the 24-month reviews, and because this is tracked in the system we are able to pick out the missing links. At the ministry, we have seen improved health management decision-making. For example, we are now shifting from the original regimens: we want to move to the BPaL and BPaLM regimens. In the old way, we would have waited for the hard-copy tools to be printed and rolled out, but now with the case-based surveillance we are able to add these regimens directly into the system, and people are able to select them as soon as patients are started on them. We have also seen improved monitoring of health programs, fast-tracking of patient-level analysis across programs, and improved geographic surveillance; these were the six-month and 12-month interim success rates, which we are able to pick out. And what do we look out for next? We target scale-up of case-based surveillance to more diagnostic and treatment units, plus integration and interoperability with other EMRs. We are lucky that the HIV EMR is really up to date, and we want to make that integration work. We also have diagnostic solutions like GeneXpert: once the GeneXpert machine picks up an RR case, we would want it to send the information directly to the case-based surveillance. This interoperability option is already in the offing, and we believe it is going to put us in the best position. And lastly, we want to explore how artificial intelligence and machine learning can be incorporated into the case-based surveillance for MDR-TB patient management. I learned a lot from today's earlier presentations, and I believe it is going to be a magic bullet in our MDR treatment. I thank you very much.
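The GeneXpert hand-off Vincent mentions could, under one set of assumptions, look like the sketch below: a lab bridge posts a completed lab-result event into the case-based surveillance program, which would then fire the existing SMS and dashboard notifications. The program, stage and data element UIDs, the RR_DETECTED option code and the instance URL are all illustrative assumptions, not the real Uganda configuration.

```python
# Hedged sketch of a GeneXpert-to-ECBS bridge: push a lab-result event
# into the DHIS2 case-based surveillance program when rifampicin
# resistance is detected. Every UID and code here is hypothetical.
import requests

BASE = "https://ecbs.example.org"
AUTH = ("lab-bridge", "secret")

def push_rr_result(tei_uid: str, facility_uid: str, test_date: str) -> None:
    """Create a completed lab-result event for one tracked entity instance."""
    payload = {
        "program": "EcbsProgU1d",             # hypothetical program UID
        "programStage": "LabStageU1d",        # hypothetical stage UID
        "trackedEntityInstance": tei_uid,
        "orgUnit": facility_uid,
        "eventDate": test_date,
        "status": "COMPLETED",
        "dataValues": [
            {"dataElement": "XpertResU1d",    # hypothetical data element
             "value": "RR_DETECTED"},         # hypothetical option code
        ],
    }
    # Legacy endpoint; newer servers accept POST /api/tracker instead.
    r = requests.post(f"{BASE}/api/events", json=payload, auth=AUTH)
    r.raise_for_status()
```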
Thank you so much, Vincent. So our next two presenters, our final two presenters, are online, both with FHI 360, with various ways of using DHIS2 to track or to improve HIV case management. First, I would like to invite Esofa Kokoloko to please go ahead and share your screen whenever you're ready. Okay, Esofa, I don't think I'm seeing you. Okay, now we see it, Esofa, but we do not hear you. Can you try speaking? I don't think we hear you. Okay, so we cannot hear you; I think it is your mic, Esofa, on your end. Maybe we can switch and have Ulrich Katrye try first and see if that fixes the problem. Ulrich, would you be ready to present? Good afternoon. Okay, great, I hear you. So just to confirm, Esofa, I think the challenge is on your end; I'm going to let Ulrich go ahead while you sort it out, okay? Over to you, Ulrich.

Okay, thank you very much. Can you see my screen? Yes. Good afternoon, I'm Ulrich Katrye, monitoring and evaluation officer at FHI 360 Burkina Faso. My presentation will focus on improving patient outcomes and the continuum of care through application of the DHIS2 tracker: a case study of the EWA project in Burkina Faso. Ending AIDS in West Africa (EWA) is a cooperative agreement with USAID running from 2017 to 2026. The goal of the project is to achieve control of the HIV epidemic in West Africa by accelerating progress across the region towards ending AIDS through prevention, care and treatment, with a focus on key populations. Our objectives are to improve service delivery at community and health facility level, to strengthen the viral load laboratory system, and to strengthen the monitoring and evaluation system. In Burkina Faso, EWA covers five health regions and concerns 31 health facilities and five viral load laboratories, under the leadership of the National Council for the Fight Against AIDS and STIs and the health sector program to fight AIDS and STIs. One of the main challenges, keeping people living with HIV in the continuum of care, is the major issue in achieving viral suppression, a guarantee of the quality of medical care. The inability to increase the number of patients on active ART, despite the success of the EWA project in case detection and linkage to ARVs, was mainly due to difficulties in identifying missed appointments and treatment interruptions. This presentation will demonstrate the positive impact of using the DHIS2 tracker to improve patient outcomes and ensure continuity of care. We have faced significant challenges in keeping people living with HIV in the continuum of care. The number of active patients on ARVs did not increase proportionally, due to difficulties in identifying those who missed their ART appointments and those in treatment interruption. A patient in treatment interruption is a patient without ARV contact for more than 28 days since their last scheduled ARV contact. What was the contribution of DHIS2 in solving this problem? The EWA project in Burkina Faso uses the electronic HIV tracker in DHIS2 for electronic individual management of people living with HIV on ART. A custom DHIS2 dashboard was developed to allow each PEPFAR/USAID-supported health facility to generate an anonymous weekly list of patients with renewal appointments, of those who missed their appointments, and of those in treatment interruption. These line lists contain the information that allows each patient to be contacted by telephone to remind them of their appointment, in order to reduce dropouts.
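As a rough illustration of the weekly list Ulrich describes, the sketch below queries scheduled ARV-renewal events whose due date has passed. The status filter is a parameter of the legacy DHIS2 events API, but the instance URL and every UID are hypothetical assumptions rather than the actual EWA configuration.

```python
# Sketch: anonymous weekly line list of patients who missed their ARV
# renewal appointment, i.e. scheduled events now past their due date.
# Instance URL, credentials and UIDs are hypothetical placeholders.
import requests

BASE = "https://ewa.example.org"
AUTH = ("facility-user", "secret")

resp = requests.get(
    f"{BASE}/api/events.json",
    params={
        "program": "HivProgU1d0",    # hypothetical HIV tracker program UID
        "orgUnit": "FacilityU1d",    # one supported facility
        "status": "OVERDUE",         # scheduled events past their due date
        "pageSize": 500,
    },
    auth=AUTH,
)
resp.raise_for_status()

# Keep the list anonymous: tracked entity reference and due date only;
# the phone number for the reminder call is looked up at the facility.
for ev in resp.json().get("events", []):
    print(ev["trackedEntityInstance"], ev.get("dueDate"))
```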
Patients are contacted the day after the missed appointment. These capabilities enabled EWA to run a two-month return-to-care campaign from August to October 2020. As a result, we went from 17,231 patients on treatment in August to 25,841 at the end of October, and the proportion of patients in treatment interruption fell from 48% to 16%. Patients who died or were transferred to other health facilities were documented in the tracker. The continuous updating of the DHIS2 lists facilitates the individualized follow-up of patients, and in April 2023 the proportion in treatment interruption within the EWA project in Burkina Faso was 2% of people living with HIV under treatment. In conclusion, we can say that the DHIS2 tracker is beneficial to the EWA project. It enables real-time analysis and reporting to improve continuity of treatment and return to care. It effectively identifies patients and provides the information needed to schedule appointments or organize community dispensations. Automation of the patient lists was crucial to achieving the project goals. In view of these successes, the government of Burkina Faso is working to set up a national tracker for HIV in health care facilities across the country. Thank you for your attention.

Okay, Esofa, are you unblocked now? Hello. Hello, over to you. We hear you now. Okay, sorry, you do hear me? No problem. We do hear you. Okay, it's good now. All good. Okay, sorry, I'm ready now.

I have the honor to present the abstract on the topic of increasing viral load testing among eligible people living with HIV using the DHIS2 tracker in Togo. I'm Esofa Kokoloko, the regional data manager at FHI 360 Togo. I will present my abstract following this outline. For context, I will remind you of the EWA project. The EWA project, Ending AIDS in West Africa, is funded by USAID through a cooperative agreement. The goal of our project is to achieve HIV epidemic control in West Africa. Specifically, the project objectives are, first, to improve service delivery at community and health facility level, to strengthen the viral load laboratory system, and to strengthen the monitoring and evaluation system. As we see on the chart at the left, our project covers four health regions and is implemented in 29 health facilities and their viral load laboratories. Our project works under the leadership of the national AIDS control program. The main challenge in our project is to attain viral load suppression for people living with HIV. The challenges sit at three levels: pre-analytic, analytic and post-analytic. This presentation will show how the use of the DHIS2 tracker helped to address some pre-analytic problems in the viral load coverage journey. What specific problem were we trying to solve? Basically, the M&E system had many difficulties in easily identifying people living with HIV who need viral load testing, because the system relied on paper-based, manual practices. The system couldn't support estimating the patients eligible for viral load tests, hindering planning. Clients were unaware of the viral load testing program, and paper-based reminders are difficult. The M&E system also lacked sample collection scheduling, resulting in clients presenting at random clinic times and leading to delayed viral load tests. How did we use DHIS2 to address this challenge? As the World Health Organization and the national AIDS control program recommend, we generate the viral load testing eligibility dates for each patient using the DHIS2 tracker in our project. When we enter ARV events in DHIS2, we directly show the first and the second viral load eligible dates for new patients.
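The date logic Esofa describes can be sketched as below. In DHIS2 itself this would typically be a program rule ASSIGN action using an expression such as d2:addDays(#{artStartDate}, 183), but here it is shown as plain code. The 183- and 365-day offsets follow WHO's common schedule of viral load testing at 6 and 12 months after ART initiation and are assumptions about the Togo configuration.

```python
# Sketch of the eligibility-date calculation: first and second viral load
# eligible dates for a newly initiated patient. In the DHIS2 tracker this
# would live in a program rule (ASSIGN action with d2:addDays); offsets of
# 183 and 365 days are assumed from WHO's 6- and 12-month VL schedule.
from datetime import date, timedelta

def vl_eligible_dates(art_start: date) -> tuple[date, date]:
    """Return (first, second) viral load eligibility dates after ART start."""
    return art_start + timedelta(days=183), art_start + timedelta(days=365)

# Example: a patient starting ART on 1 March 2024 becomes eligible for a
# first VL on 31 August 2024 and a second on 1 March 2025.
first_vl, second_vl = vl_eligible_dates(date(2024, 3, 1))
print(first_vl, second_vl)
```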
For other patients, we display the routine and targeted eligible dates when we enter viral load sampling events. In terms of utilization, we use DHIS2 to display recent events and ensure the availability of individual information. We created a dashboard to show the number and the list of patients eligible for viral load tests within a specific period. We implemented a process where site data managers extract the list of eligible patients, and we also match this information with ongoing appointments together with the healthcare provider in charge of these patients. We provide the list of eligible patients at the beginning of each month, enabling healthcare providers to take monthly action based on the information. What analysis tools did we introduce? The PEPFAR/USAID-funded EWA project in Togo improved viral load testing performance and demand. First, the DHIS2 tracker helps us generate a list of patients eligible for viral load tests and assess demand on a weekly, monthly and yearly basis. The viral load eligibility dates and the patients' contact details from the DHIS2 tracker are used by healthcare providers to optimize communication and ensure the timely provision of viral load testing services. And finally, blood samples from the 29 PEPFAR-supported sites are collected two times a week. The most important result we obtained using the DHIS2 tracker in Togo is that, from January to December 2022, sites using the DHIS2 tracker had sampled and tested 97% of eligible patients, and 95% of those tested achieved viral load suppression. When we compare this to sites without the DHIS2 tracker, only 56% of eligible patients were found and tested, and 88% of those tested achieved viral load suppression. Due to this positive outcome, the national AIDS control program has recently expanded the implementation of the DHIS2 tracker to 50 non-PEPFAR/USAID-supported sites in Togo. In conclusion, we can say that viral load testing is a crucial measure of the success of antiretroviral therapy, and DHIS2 has given us an opportunity to increase demand for viral load testing among the patients who need tests. Using the DHIS2 tracker in this initial step of the viral load testing process is important because it allows us to easily identify eligible clients, to track missed appointments, and to align viral load sample collection with ARV (antiretroviral) refill dates for patients. The adoption of the DHIS2 tracker at treatment facilities has the potential to significantly enhance demand for viral load tests in our country. Finally, based on the promising results achieved in PEPFAR/USAID-supported sites, scaling out the DHIS2 tracker to all facilities appears to be worthwhile for the government. Thank you for your kind attention.

Thank you so much, that was super impressive. I know we have a lot of software team implementers who are excited to hear how features like working lists and e-registries help patients to get their viral load testing, reach their targets and stay in care. Thank you so much for your attention. We will close here; for any questions, I think we can find the presenters in the hallways this week. Have a good rest of your evening.