All right, I guess we can get started then. We have some more people entering the waiting room, but in the interest of time, I'll kick things off. So welcome, everyone, to our session on data use in eRegistries. My name is Brian O'Donnell. I'm an implementation advisor at the Norwegian Institute of Public Health (NIPH), and I'm also supporting UiO on some of their implementation activities for DHIS2 packages. I want to start the session by introducing a bit about our team at the Norwegian Institute of Public Health and the eRegistries initiative. We will be going into some of our more advanced Tracker use cases today: using Tracker for point-of-care clinical data entry, how we combine this implementation style with research to actually understand how these data can be used in practice, both at the clinical level and at a researcher's or supervisor's level, and also for digital client communication. I'm joined today by my colleague Akuba Dolphyne, who is also an implementation advisor at NIPH and will discuss some of the technical considerations for point-of-care data collection and designing with the user. We'll also hear from Dr. Binyam Bogale, who recently finished his PhD at the University of Bergen on digital client communication, specifically the research that was done in Palestine with their MCH eRegistry, so he'll talk a bit about that. Then Eleni Papadopoulou will talk about implementation and evaluation of interventions and procedures in clinical care, and about how we can use these eRegistry data for research. I'll talk a bit about how to build dashboards on quality of care. And finally, we'll have some summarizing words from the PI of these studies, Dr. Frederik Frøen, who will discuss the eRegistries initiative more broadly and sum up some thoughts on future directions for eRegistries.
So with that, Akuba, how about you take the lead? I see you're already sharing your screen, so maybe you could present. Right. Hello, everybody, and thanks for joining our session. My name is Akuba Dolphyne, and I'm presenting this together with a colleague, Mahima, who worked on this in Palestine but couldn't be here today, so I'm presenting on her behalf. The eRegistries initiative had a trial in the West Bank in Palestine, where we were working with the public health system, looking at their maternal and child health services. They have public primary health care clinics all over, and we looked at their clinical guidelines, their reporting routines, and their management by referral. We also focused on labor and delivery, where they had labor and delivery services in their hospitals. But we were not able to — Akuba, could I just interrupt you briefly? I don't think you're sharing your slides; you're still sharing the spreadsheet for configuration. You need to share in presentation mode. OK, sorry about that — let me stop sharing and try again. Is it working now? Yep. Great. Perfect. Sorry, everyone. So, as I was saying, we were working in the West Bank of Palestine.
And we developed the Maternal and Child Health digital registry that we call the eRegistry. Our trial, called eRegQual, was a cluster-randomized controlled trial whose objective was to look at the effectiveness of the eRegistry's clinical decision support, compared with paper-based records, on the quality of antenatal care. And we were looking at it through two lenses: the effectiveness of the eRegistry on processes of care, and on health outcomes. Processes of care include things like screening — we're looking at how the health workers actually behave. The health outcome is what happens at the end of the antenatal care: if you do proper screening, do we get better results in terms of children not being born prematurely, et cetera? So we recruited over this period, looked at 133 clinics, and eventually, through this process, analyzed over 7,000 ANC visits in each arm. One arm had the eRegistry, and one arm just continued with paper. These are the features we provided to the eRegistry arm. For data entry processes, the eRegistry had longitudinal electronic health records, unlike the paper records in the control arm, and a searchable database. Those of you who are familiar with DHIS2 Tracker know you also have the opportunity to identify data errors during data entry; we put in program rules to take care of that. We also made it possible for the system to automatically schedule client appointments based on gestational age. And on the clinical side, we gave clinical guidance — what we call clinical decision support — and management recommendations to the health worker based on the national guidelines. We would highlight clients who are at risk, and we also gave directions on when to refer. So this is actually what we were looking at: these are the paper records that came out of Palestine.
They had these several paper records, kept in huge registers all over their offices. Of course, you can't be carrying these up and down, and they definitely are not interactive. We compared that with the interactive clinical decision support of the eRegistry. Some features you see here include an unmanaged condition: the system tells you that there is something of concern with this client that you need to manage, and that you should enter new management to take care of it. It tells you that this is a high-risk pregnancy, what the expected delivery date is, the gestational age, and it highlights certain risks for you. It tells you, for example, that on the 20th of October this client was diagnosed with chronic hypertension at a gestational age of nine weeks, and then it tells you what to do based on that. So how would you develop such a program, with all these features of clinical decision support, et cetera? First of all, you would plot out the data entry workflow for the system. You have to review data from all the sources, like the notebooks and registers of all the health workers. You have to take notes of how they register new clients and how they find existing clients and add to their records. Remember, you're trying to create a searchable database that mimics what they already do; if you understand their workflow — how they register, how they find existing clients — you can build that into the search features of the database you develop. Where do they record data, and in what sequence? Which fields go unused in their registers? Because now that you're developing a new system, you have the opportunity to discard data points that are not used — we see that a lot in our health systems.
So when you then collect all the data points — some of the examples you'll see here come from Bangladesh, as I was working on the Bangladesh system, but the process was the same. In this situation, there were four different data sources that I looked at in Bangladesh: for example, an injectable family planning card, a couple register, a maternal and newborn card, a delivery form, et cetera. For each source, what are all the different data points recorded? You try to find synergies between them, so that you end up with consistent data points that people will use in your new electronic system. Then you should plot out the clinical workflow. For example, if you're looking at lab tests for checking anemia: which tests are they using — the hemoglobin test, the hematocrit, or clinical signs? And for each one, what are the values used to diagnose no anemia, moderate anemia, or severe anemia, and what actions will be taken? For this, of course, you have to work with a clinical expert, and this is what we did: we plotted these things out and presented them to the clinical experts in the country to make sure everything was in line with their guidelines, and also, as much as possible, in line with WHO guidelines. The next step, which people who do configuration will be familiar with, is to develop a spreadsheet for configuring DHIS2. For each data point, you specify what the data point is supposed to be. Does it have an 'i' (info) button? What is the data entry form? Is there a validation? For example, the national ID in this context had to be 13 or 17 digits, so you write a program rule to check that. Are there show/hide rules? In this situation, if the type of outcome was a live birth or a stillbirth, et cetera, then you would show this data point.
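Two of the checks just mentioned — the 13-or-17-digit national ID validation, and grading anemia from a hemoglobin result — can be sketched as small Python functions. This is only an illustration: the function names are ours, and the hemoglobin cut-offs below are the commonly cited WHO thresholds for pregnant women, which may differ from the values actually configured to match the national guidelines.

```python
def valid_national_id(value: str) -> bool:
    """Mirror of the program-rule check described in the talk:
    the national ID in this context had to be 13 or 17 digits."""
    return value.isdigit() and len(value) in (13, 17)


def anemia_grade(hemoglobin_g_dl: float) -> str:
    """Grade anemia from a hemoglobin result (g/dL).
    Cut-offs follow the WHO grading for pregnant women; the actual
    eRegistry configuration may use national-guideline values."""
    if hemoglobin_g_dl < 7.0:
        return "severe"
    if hemoglobin_g_dl < 10.0:
        return "moderate"
    if hemoglobin_g_dl < 11.0:
        return "mild"
    return "none"
```

In the real configuration these checks live in DHIS2 program rules; expressing them in plain code first, as here, makes the configuration spreadsheet unambiguous and testable.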
Do you need to do a referral, and is it an urgent or a non-urgent referral? All of this is written out. So it goes from that workflow with the clinical expert to a spreadsheet like this, which makes it very easy to be very clear about what the rules are — because, as we know, when we're doing configurations, you have to be very exact. It also makes it very easy to test the system: these are your requirements for the system. This is the way we have found works both for the DHIS2 expert and for the clinical expert or whoever will be testing the system. These are the results we got from Palestine. When we look at the process outcomes, screening and management, we saw that for diabetes and anemia there was some difference between the eRegistry clinics and the control clinics: the eRegistry clinics had a higher percentage of screening, because we guide the health worker during data entry to screen for diabetes and anemia. However, the difference may not have been significant. We also looked at the health outcomes of the pregnancy: moderate or severe anemia, large-for-gestational-age babies, severe hypertension, small-for-gestational-age babies that went undetected during ANC, and malpresentation at labor that was not detected during ANC — for example, the child was breech; you should have detected that and referred for care at a higher level. We see that there was not a significant difference in the health outcomes, unlike what we saw with the process outcomes of screening: 20.9% in the non-eRegistry clinics versus 21.7% in the eRegistry clinics. But one thing we did see, when we looked at a secondary outcome of attendance, was that attendance at the key ANC time points — the visits people are supposed to come to — was consistently low.
And then when we actually look across all ANC visits, we see something different. These numbers look kind of reasonable — about half the time people show up. But when you look at all ANC visits from 16 to 36 weeks, only 9% of people showed up for these visits at the non-eRegistry (control) clinics, and 8% at the eRegistry clinics. We feel this helps explain the differences we saw. So our interpretation of these results is that clinical decision support in ANC is feasible: we were able to implement this system in all the primary health care clinics in Palestine, and we think the quality of care can be improved with additional digital health interventions like the quality improvement dashboards Brian will be talking about in a bit. We also think that increased coverage of antenatal care attendance may improve the health outcomes: we did see some difference in the screenings, but with only 8% versus 9% timely attendance, you are not going to get the results you need in the health outcomes. This leads on to my colleague Binyam, who's going to talk about what we then did to address attendance using targeted client communication by SMS. Let me stop sharing, and Binyam can continue. Thanks a lot, Akuba, for that discussion of the early results of the Palestine trials. Binyam is now going to share a bit more about SMS use in the Palestinian eRegistry. I'd like to request that if people have questions during these sessions, please leave them in the chat; we will also have some time at the end for open discussion, so maybe we can hold all of the questions until then. Oh, sorry — we're just trying to keep a record of all of the conversation, so I've posted a link to the community of practice. If you can jump in there, that'd be great. Great, yeah.
Sorry — about the chat, I mean the community of practice. Thanks, Craig. No worries. Binyam, you can take it from here. Yeah, OK, thank you. I think you can see my screen, right? Yes, we can. OK, thank you. So I will be presenting from the perspective of how this rich data in the eRegistry system can actually be used to support clients with individualized messages, because these types of messages are known to be more effective than generic, untargeted messages in health communication. In our project, the approach was to use this rich data to tailor individual communication to pregnant women. Akuba already showed you the gap in attendance, and especially in the timeliness of attendance. Other cross-sectional research we conducted showed the same finding: if you take a common measure like ANC4+, which many researchers use as an indicator of adequate ANC, attendance is about 60%. In Palestine, they recommend five focused visits, and by that measure it's about 48%. But when we account for the timeliness of visits according to the national guidelines, it's only 6%; and if you adjust for the time when the pregnant woman actually joined the health service, it would be 30% — still much lower. So what's the evidence that this kind of communication works in improving attendance, especially regarding timeliness? Systematic reviews show mixed effectiveness in improving attendance, adherence to treatment, and some behavior change communication, but the interventions that are effective are often based on behavior change theories, designed with the users — a co-design or user-centered design concept — and tailored to individual needs as much as possible. So I will focus on these three concepts and how we tried to incorporate them in our development process.
I'm not saying these are the only parameters that make SMS or targeted client communication effective, but they are the prominent ones, so I will focus on them. We used a behavior change theory — we picked the health belief model because it suits our problem statement. The way we approached this problem, we used the health belief model to explore the perceptions of pregnant women and of health care providers regarding how pregnant women see the need for, and use of, health services in a timely manner for all the ANC visits. Our focus is only on women who have no risk factors or risk conditions — those treated in the basic group according to the WHO recommendation. Based on that assessment, we identified the health belief model constructs that are most important in this context; the three important ones were perceived susceptibility, perceived severity, and perceived benefits. Those are the targets of our messages and the SMS. And we used concepts from behavioral economics, like nudging and active choice, to frame the messages so that they are appealing and acceptable to the end users, the receivers of these messages. This figure just reflects how we tried to incorporate all the theories and frameworks in writing a text message. This deals with the content of the message: whether the content is acceptable and usable by the end users, and at the same time does not lead them into an adverse outcome or adverse effect because of communicating at the personal or individual level. We also used the concepts of the model of actionable feedback: the message should be communicated in a timely manner, individualized, customizable, and non-punitive. This is reflected in this example message, which is actually sent to a pregnant woman between weeks 18 and 22 without any diagnosed hypertensive conditions.
So this is targeting pregnant women's perception of their susceptibility to high blood pressure during pregnancy and what they can do about it. The advantage of the eRegistry — having this rich data — is very important for tailoring to individuals, because in prior research this was generally not possible: most were pilot projects where messages were created in an on-demand manner, without large amounts of information about individuals. The eRegistry solves this problem for us, because there is a long list of variables, and dynamic information about the pregnant woman is there too, which can be used to tailor the message. With this information, we created a library of messages for the different conditions and risk factors we selected to target, and stored them in that library. Individual information is inserted automatically through the program rules we wrote. For example, for this typical message, we have a condition where pregnant women in a given gestational-age window receive this message if their age is greater than 35, or they have hypertension, previous gestational diabetes, or high body mass index as risk factors. We then wrote the program rule to trigger the message and schedule it to be sent one week before the appointment. But we have other types of messages as well. One type is sent immediately at booking, the first visit — a welcoming message. Then there is the type just discussed, sent seven days before the visit in the appropriate gestational-age window, addressing the condition targeted for that window — in this case, high blood pressure or hypertension. Three days before, a message adds a risk factor for the condition targeted during that gestational-age window's visit. And a four-hours-before message is just a simple reminder for all scheduled visits.
Then there are the types of messages sent after the scheduled date has passed: missed-appointment reminders. If someone missed an appointment yesterday, for example, then today that person will receive a message saying you already missed that appointment, so you need to reschedule or contact the clinic. And if that doesn't happen, there is another type of message where we try to encourage people to get reconnected to the health system afterwards. Throughout this whole process we had close engagement with the users at all stages. From the beginning, we approached the national experts in a panel forum to identify and prioritize the target conditions. For understanding user perceptions, which I discussed above, we used qualitative research methods, and validating and refining the messages also involved the stakeholders. Finally, training the health care providers is very important, because they have to accept and show ownership of these messages — the clients will be communicating back to them. We provided them with the library of messages, they had been involved in developing the messages, and so they were ready to respond to any questions. We also trained them on the practicalities, such as how to sign women up to the SMS service when they encounter a woman for the first time. This is a preliminary result from our four-arm trial, just to show you how this targeted client communication affected the timeliness of attendance. These are the timely visits according to the national guideline. For the control group, without any individualized communication, the proportion is approximately the same as what Akuba showed, but it improved with the targeted client communication — at least proportion-wise, it is better: on average, 43% in the groups that received the targeted client communication and 35% in the control.
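The message-library, scheduling, and missed-appointment logic described above can be sketched in plain Python. This is a hedged illustration only: the eligibility condition, field names, and template text are invented for the example, and in the actual implementation this logic was expressed as DHIS2 program rules and scheduled messages rather than application code.

```python
from datetime import date, timedelta

# Illustrative message library: each entry pairs an eligibility condition
# with a template whose placeholders are filled from the client's record.
# These conditions and wordings are stand-ins, not the trial's own.
MESSAGE_LIBRARY = {
    "hypertension_risk": {
        "condition": lambda c: c["age"] > 35 or c["high_bmi"] or c["prior_gdm"],
        "template": ("Dear {name}, your next ANC visit is on {visit_date}. "
                     "Attending on time helps detect high blood pressure early."),
    },
}


def build_message(key, client):
    """Return the tailored message for this client, or None if the
    client does not meet the condition for this message type."""
    entry = MESSAGE_LIBRARY[key]
    if not entry["condition"](client):
        return None
    return entry["template"].format(**client)


def send_dates(appointment):
    """Pre-appointment send schedule: one week before, three days
    before, and the day of the visit. (The trial also sent a
    four-hours-before reminder, which needs a timestamp, not a date.)"""
    return {
        "week_before": appointment - timedelta(days=7),
        "three_days_before": appointment - timedelta(days=3),
        "day_of": appointment,
    }


def missed_yesterday(events, today):
    """Scheduled events due yesterday that were never attended --
    the clients who should get a missed-appointment reminder today."""
    yesterday = today - timedelta(days=1)
    return [e for e in events
            if e["status"] == "SCHEDULE" and e["due_date"] == yesterday]
```

The last function corresponds to the post-appointment reminders: a daily sweep over scheduled-but-unattended events, composing a "you missed your appointment, please reschedule" message for each.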
But we haven't yet analyzed this controlling for all the factors we are supposed to. Some limitations we identified throughout the design and development process: once a message has been scheduled, there is no way to update the scheduled date — for example, through a program rule; a scheduled message can only be controlled at the gateway when it is sent out. We also need further exploration of receiving text messages back, since two-way communication is supposed to be the gold standard in this type of communication. And finally, as I showed you, there is no functionality supporting sending or scheduling a message based on past appointment dates: if a person misses an appointment, it's not possible to use the DHIS2 system alone to send a message, so we had to work around this using R. We hope that is incorporated in the future. I think this is a good way of using existing data without any additional data collection, and it can also maximize the investment in an eRegistry type of approach. Yeah, thank you. Great, thank you so much, Binyam, for the explanation of the different SMS interventions that were done in Palestine and some of the challenges we've had with them. I'm going to share my screen — can you stop sharing yours? OK, yeah. Can I still share here? Just made you the host; see if that helps. OK, there it is. All right, sorry, folks. In the interest of time, I'm going to move through this pretty quickly. I hope you can see enough of my slides.
We've had a discussion of different ways that eRegistries data can help follow guidelines that are already instituted at the national level but maybe haven't filtered down to the clinical level. We're going to talk now about how to actually use the data that come through the eRegistries in quality improvement programs, with this MCH use case in Palestine as an example. So what are these eRegistries dashboards trying to accomplish? We've already touched on this a bit, but these are the central themes of the dashboards we will discuss. We want better screening and management for anemia during pregnancy, hypertensive disorders, and diabetes, and also, as Binyam was saying in the previous session, more timely and continuous attendance for antenatal and postpartum care. So there's a tight interrelationship between guidelines, clinical feedback, and reporting. The guidelines, as Akuba presented, are built into the program rules of the Tracker program, and now in our dashboards we actually want to understand adherence to those guidelines as well. This is about thinking through not just the numbers of individuals that have come through the system — are they going up or down over a month — but really understanding the quality of care over time: what is an individual patient's experience in the MCH system in Palestine? Having the shared client record of the eRegistry really helps to promote continuity and quality of care. When we did the background research on what works for a quality improvement dashboard intervention in a clinical context, we did a pretty extensive literature review: in contexts like the public ANC system of Palestine, what would actually work for a dashboard? We knew that presenting the information frequently, and giving some information both verbally and in writing, were important.
We also drew on specific theory-driven interventions, such as feedback intervention theory and the model of actionable feedback: you really want feedback that is timely, individualized, non-punitive, and customizable. What does that mean in practice? When we were designing these dashboards — as you can see here at the left; I apologize for the small screen — at the very top you can see "timely": this is real-time data, entered on a daily basis, and a different priority theme would show up every week, whether that's anemia, hypertension, or attendance. Clinicians would get notifications, or DHIS2 messages, that there was a new theme and new data to explore that week. So we tried to make it timely and show the last month's data. "Individualized" means at the clinic level. "Non-punitive" means there's no mandatory interaction with supervisors: a clinician can open up the dashboard themselves, without asking a higher-up for access or for help understanding what it might mean. It's also a way to show where a clinic stands relative to its peers within the district: comparing the performance of a clinic on something like the percentage of clients screened for anemia at their first visit — how does your clinic compare to the other facilities in your district? And then "customizable": we include specific details and comments, called action items here, to help clinicians understand their own performance. We basically use validation rules to generate text — auto-generated interpretations of where they fall within their district on certain metrics. So here you can see what kind of information was provided. I just mentioned anemia screening at booking.
And you can see your facility's average, your district's average, where you rank within the district, and the total number of facilities reporting these ANC metrics — all included to provide background on this metric — along with your denominator, the total number of booking visits you've had. We wanted to show a little bit of variance, so we included the last three months of data, averaged those over time, and used that to generate the facility's average: a rolling average. Here you can see two basic examples. One is a facility where things are going quite well, or at least pretty okay — they're three out of four, so not quite great, but pretty average. And here's an example where the facility is actually not doing that well: out of 11 facilities reporting in the district, they're number two, so they can say that they have some work to do with remembering to do anemia screening at the first visit. I'd normally walk through this diagram with the animation, sorry, but here you can see the process we had for developing these dashboards. How do we populate those figures? Coming up with these comparative figures for where your facility ranks within the district requires knowing both all of the other facilities' data in your district and your own facility's data, and it also requires averaging over the previous three months. When we first started, some of these features were not available in DHIS2, so we had to make some workarounds. Essentially it works like this: you have data elements for your numerator and your denominator — for example, a blood pressure measurement at the first visit between 15 and 17 weeks — then you put those program indicators into a super-indicator, and you run this over the previous three months.
And you can see here month one, month two, month three, and then you come out with your average over the previous three months. What we did is run these program indicators through a Python script to output where each facility ranks within its district. The validation rules and instructions would then either give positive reinforcement for a job well done — they're on track — or suggestions for improvement if they were at the lower end of the spectrum. We also walked through these action item examples with stakeholders in Palestine to really understand the barriers to improvement in certain areas, and what can actually be done at the clinic level. So you can see here an example of these action items: it might say that the number of women you referred for gestational diabetes is lower than your colleagues' — to improve, all women with results suggestive of diabetes need to be referred for testing, right? So suggesting referrals, asking for follow-up at a different level, and then: do not forget to record that referral as well. As I mentioned earlier, there were a number of custom features that, when we were developing this trial, we had to wait for core DHIS2 to develop, or for the eRegistries branch of DHIS2 to put into our implementation. When we first started out, some of these were already in good shape — we already had program indicators — but in particular we had to wait for the d2:count functions and the enrollment-type program indicators (this star here), which were really useful for doing longitudinal program indicators. We also had to build a widget for showing the validation rule actions, as they were called — the instructions that come with a validation rule after the analysis is run. Those are all brought into a widget on the dashboard and shared. So we had to build that.
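To make the pipeline concrete, here is a small Python sketch of the kind of logic that script performed: a rolling three-month average of a program indicator, ranking a facility within its district, and generating a non-punitive action item. All names, thresholds, and wording below are illustrative stand-ins, not the actual trial code; note in particular that this sketch ranks with 1 = highest value, which may differ from the convention on the real dashboard.

```python
def rolling_average(monthly_values):
    """Average of the last three months of a program indicator value."""
    last3 = monthly_values[-3:]
    return sum(last3) / len(last3)


def district_rank(facility, averages):
    """Rank a facility within its district by its rolling average
    (1 = highest value), plus the number of facilities reporting."""
    ordered = sorted(averages, key=averages.get, reverse=True)
    return ordered.index(facility) + 1, len(ordered)


def action_item(rank, total, metric):
    """Auto-generated, non-punitive feedback text, in the spirit of
    the dashboard's validation-rule 'instructions'. The wording and
    the top-third threshold are invented for this example."""
    if rank <= max(1, total // 3):
        return f"Well done -- your clinic is on track for {metric}."
    return (f"Your clinic ranks {rank} of {total} for {metric}; "
            f"remember to screen all women at the booking visit.")
```

In the trial, the numerators and denominators came from DHIS2 program indicators and the resulting text was displayed through a custom dashboard widget; the script's job was essentially the middle step shown here.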
And then the facility's district rank, as I said, required a Python script. When it comes to the final results — these are just two different views of the same types of information — we also captured, in another widget, each time a user actually opened their quality improvement dashboard, just to get a sense of how it's being used. And we can see, by cluster, that there seems to be a Pareto principle happening with dashboard use and data use. This is a bit of an interesting finding: four out of the 49 clusters accounted for 58% of all the QID views. That means there are a few districts — or a few facilities, rather — that are really keen to understand their numbers, and then lots of districts and facilities that simply didn't open them that frequently. Here you can see that most facilities viewed the dashboards on only five days over a six-month period. So this is something we need to go back to, to really understand what would drive opening the dashboard and using the data better. More results from this second trial will be coming in the coming weeks. I will stop sharing here and let Eleni take over the remainder of the session. Eleni, can you share your screen? Yeah. Brian, can you make me the host again quickly? Yeah. Brilliant. Can you see it? We can see your slides — maybe go to presentation mode. Yep. Thank you, Brian. Okay. My talk today is about the qualities and characteristics of eRegistries that make them valuable for research. I will summarize what Akuba, Binyam, and Brian talked about, but interpreted through the lens of research and how to answer scientific questions.
And I will also briefly present an example of a new study that we have engaged in in Uganda. These are the ten key digital features that are integrated in eRegistries, and each of them supports the collection of high-quality data for research. So Akuba talked about the clinical decision support at the point of care. eRegistries collect individual data at the point of care. In the case of an ANC, an antenatal care, registry, that would be the medical history of the woman, obstetric history and other characteristics, and it's collecting real-time and longitudinal data, meaning that the healthcare provider, or the woman who is the client, does not need to try to remember what was happening in the previous visit, and the healthcare provider has full management and tracking of the history of their client. And also, very importantly, what Akuba and Brian talked about was the ability to assess the quality of care through the registry. This is a unique feature, because attendance, screening and management are key indicators of the success of ANC. These components can frequently explain the failure of ANC to prevent mortality and morbidity among pregnant women and neonates, and often these indicators are the aim of interventions that are trying to improve mother and child health. In terms of research, again, the issue of missing data is very important and very frequent. By supporting the workflow and reminding about all the important indicators to be assessed in every visit, we can avoid this issue of missing data, and the issues of misreporting and misclassification. And this is shown in studies comparing the paper-based registry with the eRegistry, something like what Akuba presented. Then Binyam talked about the targeted client communication, the reminders of the appointments and the referrals.
And this feature can also effectively increase attendance to ANC. This strategy aims to change the behavior of the client, the pregnant woman, and improve the timely attendance to ANC independently of her health status or her socioeconomic status. And this results in better data for research, because, again, having missing information is an important issue here. It reduces our ability to generalize our research findings and reduces the quality of our evidence. So we need to use methods to collect data that are not going to result in biased estimates. For example, if, instead of an eRegistry approach, we were doing a survey, our collected data are prone to bias from who is going to participate in the survey, because surveys are usually voluntary. We know that women who are at high risk of developing a disease or complication during pregnancy, for example women of low socioeconomic status, are the ones who most probably are not going to participate in our study. So we have a lot of missing information in this sense. Using an eRegistry, though, that efficiently covers the population we're targeting can reduce this issue, and we have an estimate that is as representative as possible. At the aggregated level, this also results in a reliable estimate of the prevalence and the incidence of disease for this population, and the huge issue of the false denominator is also reduced. And then Brian talked about the clinical quality performance dashboards, and through this feature, together with the workflow support that Akuba described, we can improve the assessment of the health outcome or the health indicator of interest. And this will result in a better quality of the health outcomes and a better quality of our data.
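The survey-participation bias described here can be illustrated with a minimal simulation, with all rates hypothetical: if high-risk women are less likely to answer a voluntary survey, the survey underestimates the true prevalence of a complication, while a registry covering the whole target population does not.

```python
import random

random.seed(0)

# Hypothetical population: 20% of pregnant women are high risk; a
# complication occurs in 30% of high-risk and 5% of low-risk pregnancies.
population = [{"high_risk": random.random() < 0.20} for _ in range(100_000)]
for w in population:
    p = 0.30 if w["high_risk"] else 0.05
    w["complication"] = random.random() < p

def prevalence(women):
    return sum(w["complication"] for w in women) / len(women)

# eRegistry: full coverage of the target population.
registry_est = prevalence(population)

# Voluntary survey: high-risk women participate half as often
# (30% vs 60% participation, again hypothetical).
survey = [w for w in population
          if random.random() < (0.30 if w["high_risk"] else 0.60)]
survey_est = prevalence(survey)

print(f"registry estimate: {registry_est:.3f}, "
      f"survey estimate: {survey_est:.3f}")
```

With these numbers the true prevalence is about 10%, while the survey systematically lands closer to 8%, which is the kind of biased estimate the eRegistry approach is meant to avoid.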
Also, the features of the eRegistry related to the management and use of the data, for example the automated analysis that Brian explained and the visualization of the data, can empower the health professionals on data management, and they can themselves identify further uses of these data and of these resourceful registries. So to summarize, through the eRegistry approach to research, we can increase our sample size, we can have a representative sample and good coverage of the population we're targeting, we can assess time trends, and we believe that we have a good response from the participants, good quality of data and large flexibility in study designs. On the other hand, the costs of the study are reduced, the misreporting and misclassification issues are reduced, as are recall and selection bias and loss to follow-up, and the time between the data collection and the data delivery, because it's a digital solution, is potentially reduced as well. And this is an example of a project I mentioned before. This project, on evidence-based policies and health systems interventions for antenatal care, is in collaboration with Makerere University in Uganda, HISP Uganda and King's College London. In this project, yeah, this is the district we are aiming for, the Mukono district in Uganda. Some demographic characteristics: there are 30,000 births every year, 62 health facilities for antenatal care, the ANC4 coverage is 60%, and half of the women are delivering their babies in the facilities. What do we want to do there? WHO is asking whether the WHO ANC model with a minimum of eight contacts can impact the quality of ANC in LMICs, and what the effect is on health, values, accessibility, resources, feasibility and equity parameters. And this is what we want to answer through this study. What does it take to move from ANC4 to ANC8?
The knowledge gaps here are related to implementation research and health outcomes, and the methodology is an ANC eRegistry and a two-armed cluster randomized controlled trial. The first aim, related to implementation research, is that we want to identify the enabling factors, supporting interventions, environment and facilitation that can ensure a feasible, acceptable and effective transition from ANC4 to ANC8. This is also what WHO wants, and this we will do through this RCT using an ANC eRegistry. We believe that this will increase the fidelity to the intervention. The intervention here is ANC8, versus the control, that is, ANC4. And we can assess the timely attendance, the quality of health care provision, feasibility indicators, acceptability, satisfaction and performance indicators. For the second aim, we want to know whether this new model with a minimum of eight contacts can impact the quality of ANC, and what the effect is on health. The gap in knowledge that we aim to assess is related to the excess preterm mortality in middle-income settings, which is not explained by correlated outcomes. There are no head-to-head trials of ANC schedules, like ANC4 versus ANC8, no trials of low-risk pregnancies only, no trials with monitoring of fidelity to management, and there is also a lack of trials in low-income and rural settings. So our project is addressing these gaps in knowledge. And of course we want to assess the maternal and neonatal health outcomes between intervention and control, and of course the key issue of the quality of health care provision. To summarize, the goal of our project is to see what it takes to go from ANC4 to ANC8.
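The two-armed cluster randomized design mentioned here might be sketched as follows. The facility identifiers and the simple unstratified allocation are hypothetical; only the counts (62 ANC facilities, two arms) come from the talk, and a real trial would typically stratify or restrict the randomization.

```python
import random

random.seed(42)

# Hypothetical IDs for the district's 62 ANC facilities (clusters).
facilities = [f"facility_{i:02d}" for i in range(62)]

# Cluster randomization: each facility is allocated whole to one arm,
# so all clients at a facility receive the same ANC schedule.
shuffled = facilities[:]
random.shuffle(shuffled)
half = len(shuffled) // 2
arms = {f: ("ANC8" if i < half else "ANC4")
        for i, f in enumerate(shuffled)}

n_anc8 = sum(1 for a in arms.values() if a == "ANC8")
print(f"{n_anc8} facilities in the ANC8 arm, "
      f"{len(arms) - n_anc8} in the ANC4 arm")
```

Randomizing at the facility level, rather than per woman, is what makes it possible to deliver the ANC8 schedule and its eRegistry support consistently within each clinic.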
And through that we want to provide evidence that will feed directly into the WHO and Cochrane evidence summaries and guidelines, to improve, of course, the uptake and quality of ANC in the Mukono district in Uganda, to provide a scalable ANC eRegistry solution co-designed with users to fit national policies, guidelines and infrastructural context, and to provide policy guidance on effective implementation of an alternative ANC schedule. And last, to provide the DHIS2 metadata package for ANC that is embedded in the tools of DHIS2. Thank you. Okay, thanks a lot, Eleni. I know that was a lot of information to go through in a short time. I don't see any questions in the chat, but maybe we could open it up for anyone to ask questions, since this was a lot of information. Frederik also has some technical difficulties and won't be able to join us for the rest of the session, unfortunately. So we can proceed straight to Q&A with the remaining minutes. I was gonna say, Brian, we've probably got to call it there, I'm afraid, so many questions will have to go into the community of practice. I'm afraid we've got two minutes until the next round of sessions begins. I'm under orders to give everyone a few minutes, I'm afraid. But yeah, if we were in a conference hall in person, I would ask everyone to give the guys a round of applause. They really sped through that today. So thank you very much, everyone. We've got more sessions coming up, and just a word on what we've got going on. So next up, in the next round, we've got sessions on security, data quality, and a look at the DHIS2 design lab and DHIS2 research as well. And then later on this afternoon, this evening or this morning, depending on where you are in the world, we've also got the use case bazaar as well. So please do check that out. Thanks very much again to all the presenters today. That's the only thing that we've got in this room, so I'm going to close this room out and let you go and join the next sessions. Thanks very much, everyone.
Bye. Thanks, all. Thank you, bye. Yeah, bye.