This is the session on DHIS2 and PEPFAR, and let's get started. If other people wander in, they can join us in progress. We have three presentations in this session. As you may know, DHIS2 is used in PEPFAR; there's a main global instance that collects all kinds of data called DATIM, but DHIS2 is apparently used in many other ways in PEPFAR, and none of the papers we're going to hear today are actually about data going into DATIM directly, so we're going to hear some other ways that DHIS2 is used in PEPFAR. The presentations will be by Lombay, who's going to talk about the SAFE system from USAID; by Kayla from FHI 360, who's going to talk about the EpiC DHIS2 system; and by Vlad and Alicia from ICF, who are going to talk about the digital health inventory and how that's being done in DHIS2. First, we'll have a remote presentation. I'm sorry that Lombay couldn't be here in person, some visa problems prevented that, but he will present virtually. So Lombay, you can go ahead and share your screen and start when ready. Okay, thank you very much, Jim. Let me just share my screen; let me know if you see it. Looks good. All right, that's good. Yeah, so my name is Lombay. I'm the data management and strategic information advisor for the USAID SAFE project, which is a project under JSI, which is John Snow, Inc., being implemented in three regions of Zambia. The abstract title focuses on the use of customized apps in improving data collection and monitoring, as well as reporting into DATIM, which is the PEPFAR system that we report into on a quarterly, semi-annual, and annual basis. Yeah, so just a brief overview of the SAFE project, which is being implemented in three regions of Zambia.
As you can see on the map there, the shaded regions are North-Western, Copperbelt, and Central provinces. The main mandate of the project is to reduce HIV mortality, morbidity, and transmission, while improving nutrition outcomes and family planning integration in the three regions. The project was initially a five-year project, which comes to an end this year, but we have an extension going up to 2024; that's the period of implementation. At the moment, we are supporting 302 health facilities in 24 districts across the three regions. At those 302 facilities we have strategic information assistants, who we can also refer to as data associates. These are the people who handle the data at the facilities, alongside the rest of the clinicians and other community health providers in these three regions. One of the mandates among our objectives is the 95-95-95, which is a UNAIDS objective: 95 percent of people should know their status; of those, 95 percent should be initiated on treatment; and of those, 95 percent should be virally suppressed. So we work within those same UNAIDS objectives. And at the facility itself, we don't focus on just one component. We look at different other aspects, like cervical cancer, family planning, and the pediatric cascades, and we also support some community activities such as index testing. When you put it all together, it links back to the three 95s on testing, initiation, and suppression rates. Because of that, we use all the indicators referenced in the MER. As most of us in the house who are familiar with PEPFAR are aware, PEPFAR releases the Monitoring, Evaluation, and Reporting (MER) reference guide every year, somewhere around September or October.
So this time around we have version 2.6, which came out in September of 2021. We use the same indicators that are in the MER; that's the data collected from the health facilities by the teams, the M&E team as well as the clinical team. Then we have some custom indicators as well that are collected at different intervals. We have indicators collected on a daily basis, some on a weekly basis, and some on a monthly, quarterly, or semi-annual basis. Depending on the reporting needs, we collect these indicators at those defined periods. As you've seen in the header, it's written "DHIS2 ECHO platform," so let me give a brief background on the name ECHO; of course, the platform is DHIS2. With ECHO, we were thinking of an echo, the bouncing back of sound. We looked at that and thought of a creative name where instead of bouncing back sound, we are bouncing back data. When data is collected from the facility using DHIS2, aggregated, and analysed, it's bounced back to the different providers at the health facilities, as well as the technical and clinical teams and everyone else who has an input into the data, so that we all speak the same language. That's how we came up with the term ECHO for our system. And on the PEPFAR MER: all PEPFAR-supported IPs in the country use the MER, and that's why we also adopted the MER for our reporting on the different indicators. Let me give you a brief background as well on where we were before migrating to DHIS2 and how we used to do the reporting. Before then, we were using what you could call an offline system, where data was collected in an Excel template at the facility level, and that Excel template was then submitted to the district for aggregation.
Then that same tool, now a different district-level tool, was submitted to the province for aggregation into one file, and that file was put into Microsoft Access; we were using Microsoft Access as a transactional database right there at the province. Then the cleaning was done, and when the province was happy and comfortable with their data, it was sent to the national office, where aggregation was done into SQL Server, which we were using as a data warehouse at the time. And then, depending on the different reports we have (the daily report, weekly, monthly, quarterly, and so on), you'd find all these different templates and all these different transactional databases being sent back and forth between the regions and the national office, and later aggregated into one in SQL Server. Finally, when time came for the reporting itself, such as the quarterly report as one of the PEPFAR reporting periods, we would go back to SQL Server, run some queries, and put the data into an Excel CSV file, which was later imported into DATIM. Before that, using the DATIM UIDs and the DATIM codes that we got from the exchange platform, we had mapped those codes into our SQL Server database, so it would be easy for us to generate the import file that we could use for reporting into DATIM. You find that this whole process, Excel to Access to SQL Server to finally a DATIM file, with all this back and forth, was time-consuming. By the time you're getting a report from one of the regions, which is about 700 kilometers from the national office, perhaps there's no proper internet connectivity at the time, and you find that the file can't actually be shared; even if you tried to share it through Google Drive, you'd have those challenges.
And the process of cleaning through all this, from Excel to Access to SQL Server, was also proving to be a challenge at the time. Then every time the MER is released in September, we have to go back and update the new codes. If an indicator has been dropped, that entire process of closing out some indicators, opening up other indicators, changing the frequency of reporting and all that, means you lose time and pick up errors: the mappings won't be correct, and some validations get missed at the time of doing the DATIM validation and the import. And when all that is done, the report has to be generated manually. You run the query, get the results into Excel, and then start filtering out what you don't need to come up with the import file that is needed for DATIM. And you find that at the time of doing that report, you only have a short window from the time we finish our data collection, which ends somewhere around the 10th of every month; in the end you only have about two weeks to do the cleaning and all the aggregation and everything else before we put it into DATIM, if you're doing the quarterly report. So we were forced to work extra hours, forced to burn the midnight oil to make sure the data was clean, because we collect a lot of data elements as a project. Even now, after moving to DHIS2, we still maintain most of them; we've got over 600 data elements that we collect on a monthly basis. You find that as you run through all those validations using SQL Server and Access, time is lost. It was quite labor-intensive.
So at the time of reporting, when you're done with the actual reporting, you then have to take a breather and take a few days off before you can get back to work. With that, DHIS2 was introduced to us in JSI. It was already running with other projects, but as a project we then thought of migrating to DHIS2, and of how we were going to improve our data collection and aggregation, as well as the import into DATIM. So we came up with our DHIS2 instance, as I mentioned earlier. This little diagram here shows a few things; let me see if I can get the laser to point a few things out. So right now, the way it is at the facility level, we have this data entry person, the strategic information assistant, who collects the data from the registers and the different EMR systems at the health facility, and they enter it directly into DATIM, sorry, directly into ECHO, which is our DHIS2. Then as they enter the data, and here we appreciate the good features that DHIS2 has, the M&E team, mostly at the district level and the provincial level, are able to see the data. They can quickly give feedback, they can easily run validations from there and provide feedback to the facility, and the team at the facility can go back, correct the data, and clean it right there. When they're done with their cleaning, they can view some visualizations and reports to see how the data is sitting, and then other key stakeholders and the technical team can also look at the data and see what's happening. So while everything is happening in collection, everyone is involved, from the facility all the way to the national office; everyone can see how the data looks and run the validations and all that.
So what we did to make our lives easy is we mapped all our validations to the DATIM validations for most of the indicators, especially the ones that are PEPFAR indicators. We mapped the DATIM validations so that as we run the validations against DATIM and import into DATIM, you find that it makes life easy. So how did we achieve this as a project? This is where I have this section showing the Manage Data for DATIM app. This is an app that we have developed and embedded into DHIS2; we call it the Manage Data for DATIM app. How we came about setting up the app: we got the UIDs, for example for the organisation units, and we got all the codes from DATIM as provided by the DATIM exchange team. We got all the org unit IDs, all the data element UIDs, and all the category option combos, the way they are set up in DATIM, and then we used those to map to our DHIS2 UIDs. So an indicator in DATIM, for example TX_CURR, which is the people currently on ART, is mapped the same way in our ECHO as the people currently on ART. We get the UID from our side, we get the UID from DATIM, and then we reference those two. That's how we managed to work out the Manage Data for DATIM app. To achieve this, we're currently using version 2.36, and all the mapping data is actually stored in the data store. We don't have any external storage of data, so everything is in the data store, and we're using Python scripts to help us do the mapping between the DATIM UIDs and DATIM codes and the ECHO UIDs and codes. We found that after introducing this app, and I can attest to this, the work that used to take one week for us to prepare the quarterly report (Quarter 3, which has very few indicators, used to take us about a week plus) we're now able to do in two to three days, three days maximum.
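The referencing of ECHO UIDs against DATIM UIDs that the presenter describes can be pictured as a lookup table kept in the DHIS2 data store. This is only an illustrative sketch, assuming invented UIDs and an invented data store key; the project's real mapping structure is not shown in the talk.

```python
# A minimal sketch of the ECHO-to-DATIM UID mapping described above.
# All UIDs here are invented placeholders; in the real app the mapping
# would be loaded from the DHIS2 data store, e.g.
# GET /api/dataStore/<namespace>/<key> (namespace and key assumed).

# (ECHO dataElement UID, ECHO categoryOptionCombo UID)
#   -> (DATIM dataElement UID, DATIM categoryOptionCombo UID)
ECHO_TO_DATIM = {
    ("echoTxCurrDe", "echoCocF1524"): ("DatimTxCurrDe", "DatimCocF1524"),
    ("echoTxNewDe", "echoCocM2529"): ("DatimTxNewDe", "DatimCocM2529"),
}

def translate(echo_de, echo_coc, mapping=ECHO_TO_DATIM):
    """Translate one ECHO (dataElement, categoryOptionCombo) pair to its
    DATIM equivalent. Raising KeyError surfaces unmapped pairs before the
    export runs, instead of silently dropping values."""
    return mapping[(echo_de, echo_coc)]
```

Keeping the table in the data store, as the presenter says, means a MER change is a front-end edit to one dictionary rather than a code change.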
That's if we're just taking our time, working maybe six hours in a day; if we push further, at least nine hours in a day, we find we're able to finish the entire Quarter 3 report in two days. For the larger reporting periods, the semi-annual and annual ones, it tends to take us about four days. So a period that used to take two weeks now takes us about a week or so, five days, and we are done with the reporting, because life has been made easy. All we do is click the button, the mapping is done right there in the background, and since the validations are already configured in our DHIS2 based on DATIM, we pull out the data and it generates the CSV file that we then use to validate against DATIM and import the data into DATIM. So that's one of the things that has helped us: the app customization, which has made it easy for us to map the categories and category combos. And every time there's a change in the MER, it's now easy for us; we just go in and change it from the front end, and the indicator will be mapped differently. The Manage Data for DATIM app also allows for data quality checks, with monitoring of validations through the app itself. We have about three intervals of validation: the form validations, the actual data quality checks, and then running the validations from the app and eventually against DATIM. And then we report the data first into the DATIM dev instance, and finally into the production instance, with support from the DATIM team for the upload.
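The CSV file the app generates for the DATIM import can be sketched using the standard DHIS2 data value set CSV layout (DATIM is a DHIS2 instance, so it accepts that format). The field values below are fabricated examples; only the header layout follows the documented DHIS2 CSV format.

```python
import csv
import io

# Column layout of the DHIS2/DATIM CSV data value set format.
DATIM_CSV_HEADER = ["dataelement", "period", "orgunit",
                    "categoryoptioncombo", "attributeoptioncombo", "value"]

def to_datim_csv(rows):
    """Serialize already-mapped data values (dicts with DATIM UIDs) into
    the DHIS2/DATIM CSV data value set layout, ready to validate against
    the dev instance and then import."""
    buf = io.StringIO()
    writer = csv.writer(buf, lineterminator="\n")
    writer.writerow(DATIM_CSV_HEADER)
    for r in rows:
        writer.writerow([r["de"], r["period"], r["ou"],
                         r["coc"], r["aoc"], r["value"]])
    return buf.getvalue()
```

Validating this file against the dev instance first, as the presenter describes, catches mapping errors before anything touches production.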
So what are some of the things we have picked up that we can call the results and conclusions? Using DHIS2 and ECHO end-to-end at the health facilities has made life easy for us when it comes to aggregation, analysis, and reporting at the facility level, and at the other end when it comes to DATIM reporting. Life has been made easy because before then, we would assemble a team of about 30 to 40 from the regions, bring them to one place, and start doing manual data entry; they would have to work one to two weeks to do the entries. Now we don't have to bring anyone to one central place. Wherever they are, they do their normal entry on a monthly basis, then we just run the app and the data is aggregated; within a few minutes we have the data ready for DATIM. We found that the time has been reduced by about 50 to 60 percent when it comes to preparing the DATIM reports, from 10 days to about 5 days or so, all by using the app. The app also helps us generate the reports that we need, depending on the frequency we're reporting on, and it helps us submit an error-free report. And definitely, what we can appreciate about the app is that we've also tried to make it easy to help us report on data outside DATIM, such as the high-frequency reporting (HFR). Sorry to interrupt, it's been more than 15 minutes, are you able to wrap up in a couple of sentences? Sure, not a problem, this is the last slide. All right, so we can say efficiency has improved, and the use of the app has actually improved our data quality. And yeah, like I said, this was my last slide. So that's what I can say about the app: it has really improved our work and it makes work easy for us as a project. Thank you.
I didn't realize it was 15 minutes. Thank you. Next up we have Kayla, who will talk to us about the EpiC system that FHI 360 has developed. Okay, while they're setting up the presentation, maybe I can just start by saying: Lombay, please send us your app. We definitely want to use it; it would definitely help our project as well. All right, that's good to hear. Yeah, we can chat, all of us. Okay, great. So my name is Kayla Stankvitz, I'm a technical advisor in health informatics and data science at FHI 360, and today I'm going to be talking about our experience using DHIS2 in the EpiC project, which works in 65 countries. So honestly, you can take all the background from the previous presentation and just apply it to many, many, many countries, and it actually works well as background to our project as well. Okay, but just briefly: we are a five-year PEPFAR project supporting HIV and also COVID-19, and we're focused on key populations, populations that are at higher risk of HIV. They might be female sex workers, men who have sex with men, or people who inject drugs. We also work in some generalized epidemic countries where we support the general population. We support the full range of HIV services, from testing to prevention and treatment, in over 30 countries, and more recently we've started supporting COVID-19 vaccination and oxygen systems in over 40 countries. That's where the 65 countries come in; we work in some countries on both HIV and COVID. So the reporting burden for our EpiC-supported sites is quite high. Many sites originally collected data on paper, and then they'd have to count up those numbers on a regular basis and report to Ministry of Health databases. They have to report monthly to our project database, which I'll talk a little bit about later, and then they would report quarterly to DATIM. So there was a very high data entry burden for our staff.
Through EpiC, we've tried to improve the data reporting and utilization of the sites that we work with, and DHIS2 has been a really integral part of that process. We use DHIS2 Tracker in, I think, 20 of our HIV countries now, and as I mentioned, both our project database and our funder's database are based on DHIS2 aggregate. For DHIS2 Tracker, we developed a metadata package, available on our website, that can be used for tracking all of the services I mentioned, from testing and prevention to treatment. I've given multiple hour-long talks on that system, so I can't cover it in 10 minutes, but I'm very happy to chat with anyone about it if anyone's interested. So this is what our aggregate DHIS2 system looks like for the project; it's called InfoLink. If anyone is familiar with DATIM, this looks a lot like DATIM, but what we do is further disaggregate all of our data by population type. If you think about DATIM reporting, I think maybe some people in the room know how burdensome it is to report quarterly to DATIM; we have further disaggregated our data, and we're reporting it monthly instead of quarterly. So this is a really, really big data entry burden for our teams, but it enables us to have really good data that we can act on. This is mostly focused on HIV, but I want to briefly mention that maintaining a DHIS2 aggregate system at the project level enabled us to adapt very, very quickly when we received COVID-19 funding; we were able to very quickly use this platform to collect COVID data as well. So as I mentioned, the reporting burden into InfoLink monthly is quite high, so we've actually used a tracker-to-aggregate data migration process to translate the data from our 20 Tracker countries into the aggregate data format to report into InfoLink.
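At its core, the tracker-to-aggregate migration mentioned here is a collapse of individual-level events into aggregate data values. A simplified sketch of that step, with all field names, UIDs, and population codes invented for illustration:

```python
from collections import Counter

def events_to_aggregate(events, period, data_element):
    """Collapse individual-level tracker events into one aggregate data
    value per (orgUnit, population type) for the given period; this is
    the shape an aggregate instance such as InfoLink expects."""
    counts = Counter(
        (e["orgUnit"], e["populationType"])
        for e in events
        if e["period"] == period
    )
    return [
        {"dataElement": data_element, "period": period, "orgUnit": ou,
         "categoryOptionCombo": pop, "value": n}
        for (ou, pop), n in sorted(counts.items())
    ]
```

In practice the events would come from the Tracker API and the output would be posted as a data value set; the fragile part the speaker alludes to is keeping the event-filter and disaggregation rules in sync with yearly reporting-requirement changes.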
I think if you're particularly interested in tracker-to-aggregate, you're probably not in this room, because there's a session about it going on right now, but I'm happy to talk about the process that we use as well, and I'm really excited to hear that this is being integrated into DHIS2 going forward. So the main thing I want to talk about is how we actually automate our reporting in the project, and how this has enabled us to improve the data quality and consistency of our reporting to our funder. We do this using Power BI. The challenge was that each of our countries that work in HIV, which as of right now is 34 countries, has to submit quarterly reporting slides to PEPFAR. They have to generate these on a quarterly basis, and the quality of these slides varied across countries: some countries had a lot of capacity to generate graphs and visuals, some had lower capacity. But there was also nothing preventing them from entering different data than what they were reporting to InfoLink. They could report to InfoLink and say that they tested a thousand people, and then write in their quarterly reporting slides that they tested 800, and there was nothing to automatically flag that as an issue. So what we did was automate the reporting process via DHIS2 and Power BI. We utilized the DHIS2 API to create what is technically a Power BI dashboard, but we export that dashboard to PowerPoint, and it automatically generates about 100 quarterly reporting slides for each of our countries. There are multiple benefits to this process, one of them being that we have one source for all data. Now, if you want to use the automatically generated slides, you have to enter your data into InfoLink, and you have to enter it correctly. So it actually motivates people to enter their data on a more timely basis and to correct their data entry.
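On the DHIS2 side, this kind of Power BI pipeline usually rests on the analytics API, whose JSON response carries column metadata in `headers` and data in `rows`. A small sketch of the flattening step into the tabular shape a BI tool ingests; the payload below is a fabricated example, not EpiC's actual query:

```python
def analytics_to_records(payload):
    """Flatten a DHIS2 /api/analytics.json response ('headers' describe
    the columns, 'rows' hold the data) into a list of dicts, the flat
    table a BI tool such as Power BI can consume directly."""
    columns = [h["name"] for h in payload["headers"]]
    return [dict(zip(columns, row)) for row in payload["rows"]]
```

In Power BI itself, the equivalent request would typically be issued from Power Query against an analytics URL with `dimension=dx:...`, `dimension=pe:...`, and `dimension=ou:...` parameters, with the dashboard then exported to PowerPoint as the presenter describes.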
It also really reduces the reporting burden, so people don't have to manually create their slides every quarter, and it has increased the quality and consistency of visuals across the project. This is just an example of what that dashboard looks like. When one of our teams logs in, they have to enter information such as the country they want to report for, the quarter they're reporting for, and which population types they want to highlight in the slides, and then the system automatically generates 100 slides. All the slides show here along the left, and it's as easy as pressing "export to PowerPoint" to get your slides in PowerPoint format. This is an example of a slide that's not showing any data; it's just a title page for the PowerPoint presentation. But here are some examples of what some of our slides look like. This is an example of the cumulative cascade showing our performance towards targets, so we can see, for some key PEPFAR indicators, what percentage of our targets we've hit, and see a consistent graph across all of our countries in each of our QPR, or quarterly program report, slide decks. Here's just another example of a slide that again shows the graphs that are automatically generated. So, lessons learned: DHIS2 has been a really effective platform for increasing timely access to data in our project. We work across a lot of countries, and DHIS2 has enabled us to get data from all of those countries in a timely manner and share it with the stakeholders that need that information. It also enabled us to rapidly deploy information systems during the COVID-19 pandemic. The DHIS2 tracker-to-aggregate integration process is possible, but, and I haven't had a lot of time to talk about it today, it hasn't been very simple for us to maintain, mainly because of some of the things the first presenter talked about: our funder updates our reporting requirements every year.
So every year we have to go into 20 different DHIS2 instances and update all that mapping, and we're looking for better ways to facilitate that. And finally, DHIS2 and Power BI combined have greatly improved our ability to use our data, but we acknowledge that Power BI isn't a good solution for every country. As a large international NGO it works very well for us, but as we transition some of these systems to local ownership, we're really interested in similar open source solutions, so if anyone has any experience with that, please feel free to reach out to me. Great, thank you. Thank you, Kayla. The final presentation will be from Vlad and Alicia, who are working with ICF on the digital health inventory. Okay. All right. So hello everyone. My name is Alicia Smith, and my colleague and I will be presenting on a new PEPFAR initiative called the Digital Health Inventory. Kayla went through EpiC a while ago; that's one of the PEPFAR-funded projects, and it's exactly the kind of thing that will need to be inventoried in the future once we have this tool available. So just a quick background on why we're building this tool. One of the challenges, or what PEPFAR is seeking to do, is to align digital health strategies, and they need to do that in order to have better health outcomes. It's hard to do because data needs continue to evolve; with the pandemic, you saw that we needed data on COVID just to see how we were responding. There are different data formats and systems across the reporting countries; we've heard about interoperability issues and data use issues in a lot of the sessions today. There's another problem with standardizing indicators for reporting. And as with any person who is lending you money or giving you money, you need to be accountable for how you're spending their money, so there's a high demand for accountability. And there's a high degree of fragmentation.
We realized that there are a lot of donors investing in the same digital systems, and it would be good if we had better coordination across these donors. One of the things that we have heard on numerous occasions is that we need granular data; that's the best way for us to have proper secondary use of data. And for those who work in the PEPFAR space, you've heard Mark use this phrase a lot: we need to collect data once and use it multiple times. So I've adopted it. I'm not sure what's happening here with the slides. Okay. What we did was look at the Principles of Donor Alignment and borrow some of the ideas, the ideology, from that. What donors want is to collaborate and align their investments with national digital health strategies. They want national planning to prioritize and incorporate digital goods; a lot of the work that PEPFAR does focuses on digital goods, or global goods. We want to be able to determine and quantify the long-term costs of any investments we're making. We want to track investments, along with the progress, the learning opportunities, and the successes of those investments. And we want to be able to strengthen donor technical skills as we come to understand how each donor is investing in these tools. So what will donors invest in? They'll invest in things that support the creation of, and focus on, national digital health strategies, policies, and regulatory frameworks. We're also looking at systems at an appropriate... okay, there we go. When the slide loads we can review it, but it's on the Principles of Donor Alignment. So again, what donors will invest in: anything that focuses on the creation and evolution of the country's national digital health strategies, policies, and regulatory frameworks.
We're also looking at systems that are on a trajectory of progress in the country's digital health maturity level, looking at sustainable country capacity, focusing on government implementation and global good adaptation, and also digital health global goods that are sustainable, scalable, accessible, and, again the big word, interoperable, to ensure that we're meeting those country priorities. And one of the things that we don't focus on enough is stakeholder sharing of information, ensuring that as a global community we're doing peer learning across these multiple systems that we're investing in. Those are the many reasons why PEPFAR is launching the Digital Health Inventory tool. We're referring to it as DHI, not to be confused with DHIS2, right? Just a shorter version. So this is a new data stream that has been outlined in the Country Operational Plan guidance for FY22. What the DHI aims to do is help PEPFAR understand where its digital health investments are going, collecting data so that digital health inventories can help inform planning, align investments across donors, lower the burden and increase the usability of national digital health inventories, and identify ways that we can scale tools and improve healthcare delivery. And again, going back, we're seeking to have better health outcomes. One of the things that we've done so far is develop the tool, and we have already piloted it and collaborated with several stakeholders, namely an interagency working group (IWG) that we're working with; we're also working with public health stakeholders, the Gates Foundation, and the Global Fund, and we've had extensive feedback and collaboration with multiple implementing partners, namely from Uganda, Vietnam, and Zimbabwe. So quickly, my colleague is just going to run through how we implement this in DHIS2; I just did the background. Thank you, Alicia.
And I recognize we have very little time left, so we'll just go through this quickly, and I'm not going to give a lot of technical detail, unfortunately, because that would take a lot more time. So I'll just do an overview of how we are doing this. We are using DHIS2. PEPFAR has DATIM, which is basically used for collecting most of the data streams that PEPFAR collects, be it the MER, SIMS, or anything else. All the users have accounts; they know how to use the platform. So creating a new platform for a new data stream would just not be a good idea. We were trying to figure out how to fit the DHI, with its complex qualitative questions, into DHIS2. Obviously, the aggregate data format does not fit. Event and Tracker didn't exactly meet our needs either, because our questions were too complex, and the requirements for how to display the forms were a bit complicated and convoluted as well. So we decided to use DHIS2 as a platform, where we would create a React application using the DHIS2 libraries; however, we would use the AWS serverless platform on the back end for storing the data and processing the requests coming through this application. We still take advantage of DHIS2 for user authentication and user authorization. We use the data store, where we keep some of the information that it's possible to store there, and the rest of it is in AWS, easily accessible. The systems team working on PEPFAR systems, mainly BAO Systems, has been very helpful putting together the tool set that allows this sort of throughput and connectivity to AWS. So with this custom app, you basically just log in to DHIS2, and then you have this app, and it provides you with a sort of alternate form that you wouldn't be able to get through native DHIS2 data types.
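One way to picture the split described here (DHIS2 for identity, AWS for storage) is a gate on the DHIS2 `/api/me` response before the app forwards a submission to the serverless back end. The group name and the decision rule below are invented for illustration; the real app's authorization rules are not described in the talk.

```python
def may_submit_dhi(me):
    """Decide from a DHIS2 /api/me response whether the signed-in user may
    submit a DHI entry. 'DHI Reporters' is a made-up placeholder group;
    'ALL' is the DHIS2 superuser authority."""
    group_names = {g.get("name") for g in me.get("userGroups", [])}
    is_superuser = "ALL" in me.get("authorities", [])
    return is_superuser or "DHI Reporters" in group_names
```

The appeal of this pattern is that account provisioning stays entirely inside the existing DATIM/DHIS2 user management, while the complex questionnaire payloads live in AWS where their shape is unconstrained by DHIS2 data types.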
We have custom data entry field types: checkboxes where you can select multiple options, with an "other" that can be extended with additional text; radio buttons; and dynamic questions, where if you select a particular answer, additional sub-questions appear. And we do on-the-fly logic validation, where we flag everything as you type, as opposed to the Data Entry app, where you would have to run the validation after you finish the data entry. So what we've done is pilot this in three countries, and those three countries provided over 80 entries. We've had follow-up focus groups with the three countries and gathered feedback from them. Based on their feedback, we've implemented bug fixes, enhanced the workflow, and revised and expanded some questions to address their needs. Based on that feedback we have version 1.1, and we're launching it on July 1st. It will be launched to 23 priority countries in PEPFAR; PEPFAR obviously has, I believe, over 70 countries, and for the rest of the countries it's optional: if they want to report, they can. It's just the priority countries that are required. Now, of course, collecting data is one thing, but then there's how you use this data. I think Jason Pickering likes to start from the back, where you start with the analysis question first: what do you do with this data? Because asking for the data is easy. So it'll allow us to facilitate alignment of donor data and investments. We'll be able to do national inventories and landscape analysis of what's out there, because so far PEPFAR has not done anything like this, where they would ask questions like "what are the systems that we've invested in?", identify scalable tools to improve healthcare delivery, improve the efficiency of programming, and reduce the redundancy of digital interventions.
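The dynamic sub-questions and as-you-type flagging can be modeled as a rules table: a trigger answer on one question makes certain sub-questions required. A simplified sketch of that validation logic, with all question IDs invented (the DHI's real questionnaire is not public in this talk):

```python
# rules: trigger question -> {trigger answer: [sub-questions it requires]}
RULES = {
    "usesOpenSource": {"yes": ["whichGlobalGood", "licenseType"]},
}

def missing_subquestions(answers, rules=RULES):
    """Return the sub-questions that are required by the current answers
    but not yet filled in. Running this on every keystroke gives the
    on-the-fly flagging described above, instead of a single validation
    pass at the end of data entry."""
    missing = []
    for question, by_answer in rules.items():
        for sub in by_answer.get(answers.get(question), []):
            if not answers.get(sub):
                missing.append(sub)
    return missing
```

In the React app the same rules table would drive both which sub-questions are rendered and which are flagged as incomplete, so display and validation cannot drift apart.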
It'll let us articulate the required digital functionality and synthesize evidence and research, and then it'll allow us to provide a regularly updated and broadly accessible landscape analysis. And that is it. Thank you very much. Thank you, everyone. That's the end of the time for the session. Some of the presenters might be able to stick around for a minute or two; I know I'm supposed to be somewhere else at 4:30, but thank you all for coming.