 Thank you. So it's a great pleasure to welcome you all to this session on DHIS2 data in research. My name is Johan Sæbø, and I'm an associate professor at the University of Oslo. I've been working with DHIS2 implementations for almost 20 years, but as a researcher I'm now also very interested in how DHIS2 can support health services in both traditional and novel ways. So it was really good to see abstracts submitted on topics around this theme. We have put this session together based on input from the HISP network, and I'm very glad to see that people we don't work that closely with also have so much of interest to say about the use of DHIS2 and the data coming through systems running on DHIS2. We have four speakers today giving three presentations. We'll start by listening to Yangtze Sherpa presenting on improving the usage of the DHIS2 platform, sharing the community experience of Surkhet district, Nepal. Yangtze is followed by Emily Yelverton and Mitali Ayangar from DataKind with their presentation on program evaluation in resource-limited environments: a demonstration of a novel machine learning approach to deriving actionable insights from DHIS2 health data for health care intervention management. And the third speaker today is Louise Tina Day from the London School of Hygiene and Tropical Medicine on the Every Newborn Birth Indicators Research Tracking in Hospitals (EN-BIRTH) study. I will put in the chat a link to the community page for this session where you can post your questions. You can also post them in the chat, or raise your hand to ask your questions in person at the end of the session. We're going to run the three presentations in a row first and then open up for questions to all the presenters. So I'm very happy to give the word to Yangtze as the first presenter. Thank you, Yangtze. Thank you, Johan. So thank you, everyone. Good morning.
Good afternoon and good evening, everyone. First of all, I would like to thank the HISP group and team members for this great initiative: the HISP movement has gone beyond the software application to the broader socio-technical ecosystem of health information systems. So I will be focusing my presentation mostly on the aspects of the health information system which include people, competences, institutions and practices. Overall, it is the community experience of Surkhet district of Nepal. Let me begin with my introduction. I'm Yangtze Sherpa. I work for Abt Associates Inc. as a health information technical officer and am currently working in Lumbini province of Nepal. So let me begin with my presentation. This is the outline of my presentation. I will walk you through the background of the Nepal health system, linking it with the health information system and DHIS2, and then introduce the Strengthening Systems for Better Health (SSBH) activity and the approaches used for the usage of DHIS2. I will share some of the findings, relating them to the results from the community practice, along with the challenges and recommendations. This is the organogram of the Nepal health system. Currently, there are three tiers of government functioning in Nepal. The Ministry of Health and Population is the lead in the federal government, and the Ministry of Social Development in the provincial government. Likewise, under these governments are different local bodies, which function to provide basic health services. In terms of the health information system also, these three tiers of government are functioning, but the federal government is still the commanding authority providing technical guidance to the provincial and local governments. Nepal has transitioned into a federal structure since 2016. In relation to the information system, we are currently following the Nepal Health Sector Strategy 2016 to 2021.
And it spells out to improve the availability and use of evidence in decision-making processes at all levels, with some key outputs. There are also currently nine information systems. Of these nine information systems, we are working especially on strengthening HMIS, and the HMIS data is currently entered in the DHIS2 platform. So let me walk you through the history of DHIS2 in Nepal. It was first initiated in 2016 by the Ministry of Health and Population of Nepal, and from 2017 it was rolled out at the sub-national level. Between 2017 and 2019, all the data which had been entered in the legacy system was entered into DHIS2. From 2019, it was then rolled out to the local level and other health facilities. So what is the SSBH activity currently doing? Let me give you a brief overview of the activity. It is a USAID-funded program, a five-year cooperative agreement led by Abt Associates Inc. in partnership with three other organizations. The major counterparts are the government representatives from the three tiers of government. The main goal is improving access to and quality of maternal, child and reproductive health services, with a specific focus on newborn care. You can see there are three outcomes, and the cross-cutting priority is the generation and use of evidence. It is very critical to know the number of reporting units when it comes to generating and using evidence. In Surkhet, there are currently nine parent facilities. These facilities represent the local bodies: five municipalities and four rural municipalities. There are currently 118 health facilities, and also 24 private-sector facilities that have access to DHIS2. The activity is currently also working to integrate those private-sector facilities as DHIS2 reporting units. Looking at the overall number of reporting units, we found that the majority of Surkhet's reporting units contribute a large share of provincial reporting.
Surkhet contributes around 16% of total provincial reporting. So how is SSBH conducting the activity related to the usage of DHIS2? These are the different approaches. First, we conducted an assessment: a local-level capacity assessment was done, training needs were assessed, and on the basis of this assessment, customized technical assistance was prepared. Secondly, different kinds of capacity-building plans were implemented. And on the basis of the data quality outputs, like timeliness, data consistency and completeness, the local body was again supported with the implementation. The assessment was carried out from January to June 2019, and from 2019 onwards the implementation has been ongoing. These are the local-level capacity assessment findings relating to the information management and review system. There are altogether 11 domains that we assessed through interviews with the health section chiefs and representatives. Here we can see that only one of the local bodies has a legal and policy framework regarding the health information system. The other indicators, like capacity building, guideline formulation, conducting reviews and strengthening feedback mechanisms, and most of the health system and governance indicators, were found to be poor. But very interestingly, the reporting mechanism was already there when we started this assessment. This is the health facility level capacity assessment in relation to the information system. There are four domains we checked. Most of the health facilities have recording and reporting tools, but the other three domains need to be improved. Very few health facilities really check data quality; they do not conduct reviews or analyze service statistics, and the use of data and information for programming was also very limited. And this is the capacity-building plan. These are the five criteria we took into account.
Those are internet availability, DHIS2 reporting status, computer and laptop availability, trained HR in DHIS2, and electricity availability. On the basis of these, we identified which health workers from which health facilities needed training, and after the training was provided, we updated their sheet along with the updated health facility information. After these assessment findings, the activity prepared customized technical assistance emphasizing the use of DHIS2; there were also other customized technical assistance plans by the team. First, our priority was to emphasize training. Secondly, we also developed mentoring and on-site coaching in selected sites. Third, we linked DHIS2 reporting with RDQA (routine data quality assessment). Fourth, we held a dialogue with the public health service office and also with the local-level health section on the usage of evidence for planning and capacity building. And fifth, we held a dialogue at the local level to formulate health actions, a health act and policy. Even though this is a governance indicator, it was directly linked with formulating a monitoring and evaluation framework. So these activities were conducted to improve the use of DHIS2, and we found very encouraging results. Result number one: there was an increase in the health facility reporting rate. In 2019, most of the parent facilities from the local level, shown in the yellow section of the bar, were reporting on behalf of the health workers from the health facilities as end users. In May 2021, we found that those parent facilities were no longer reporting, and the facilities themselves were reporting as end users. We can conclude from this finding that the health workers had gained the capacity to enter data into the DHIS2 platform themselves. Result number two: there was an increase in the timely reporting rate. This is a very interesting figure. We found that the reporting rate was only 52.4%, but it has increased to 77.4%.
And as we progressed through the months of 2021, we can see that the reporting rate is currently 83.9%. This is the result of the interventions I mentioned: we did on-site coaching to track the reporting status using the DHIS2 platform, during the monthly meetings the reporting status was shared with all the health workers, and the feedback mechanism was also used for providing immediate feedback through the DHIS2 platform. We also used other social platforms like email, Messenger and Viber for immediate feedback. Result number three: there was improved data accuracy through RDQA, routine data quality assessment. We conducted this assessment to verify different indicators in 21 health facilities, and the data was corrected on the basis of the findings. The entries highlighted in pink are the data which needed to be corrected, and they were corrected at the health facility level. One good example that I'm happy to share is that there was also an improvement in the documentation of the partograph. The partograph is a clinical decision-making tool used by the nursing staff during delivery to save the newborn's and mother's lives. What we did: RDQA was conducted, wherein we verified the cases, and we found that there was no documentation drawn on the partograph by the nursing staff. So the activity supported on-site coaching to correctly plot the indications on the partograph, and the local-level decision makers also decided to release the delivery incentive only to those health facilities that filled in the partograph for each delivery case. The result was very astonishing. Partographs were initiated in 100% of delivery cases thereafter, when we followed up, and this is still ongoing. And from the health facility where we started this intervention, the decision makers also replicated this activity in other health facilities.
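As an aside on the reporting-rate figures just mentioned (52.4%, 77.4%, 83.9%): DHIS2 derives a dataset reporting rate from the actual reports received versus the reports expected for a period. A minimal sketch of that calculation, with illustrative numbers rather than Surkhet's real counts:

```python
# Hedged sketch: a DHIS2-style reporting rate is actual reports received
# divided by expected reports, expressed as a percentage. The counts here
# are illustrative only.
def reporting_rate(actual_reports: int, expected_reports: int) -> float:
    return 100.0 * actual_reports / expected_reports

# e.g. 93 of 120 expected monthly reports submitted
print(round(reporting_rate(93, 120), 1))  # 77.5
```

Tracking this figure month by month, as the team did during monthly meetings, is what makes the immediate-feedback loop described above possible.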
Result number four: there was an increase in skilled birth attendant (SBA) deliveries. In 2019/20 it was 20.4%, which increased to 40.5%. This result was achieved only after conducting the SBA training. In relation to this, I will show you the evidence sheet that we used. First, we retrieved this data from the DHIS2 platform, and then we checked which health facilities did not conduct SBA deliveries. There were four health facilities, but among these we chose only three, because Kanita CHU did not have a birthing center facility. So three health facilities were chosen, and the health workers from those health posts were given training. Very interestingly, we also observed a significant change: the number has increased, and on the right-hand side the number of non-SBA deliveries has decreased. I have shown in the bubble map that for those health posts where we conducted the SBA delivery training, the number of non-SBA deliveries also decreased. Some challenges that I'd like to share from using DHIS2: there are still untrained health workers in newly established health facilities, and there are still non-reporting health facilities. There is transfer of trained staff and mentors currently in Surkhet. Among the end users, there is a mindset that they only need to learn about data entry when they come for training. And besides that, among those who are already trained, there is a lack of use of the analytics features. A recommendation that we would like to put forward is that we need to prepare a capacity-building plan, and it needs to be kept updated. Frequent on-site coaching is also needed from the mentors for the health workers. A national-language user guide should be developed for using DHIS2 functions.
There is also a need to allocate an adequate budget to strengthen the health information system, not only from the local government but also from the provincial and federal governments. Our ultimate aim is to translate evidence into action, and the activity is continuously working on it. So far we have achieved some good practices that I've shared, and we are also hoping to share more good practices of using the DHIS2 platform in the coming days. I would like to thank the entire team of Strengthening Systems for Better Health. With this I come to the end of my presentation. The floor is open for discussion. Thank you. Again, I think we will go through all three presentations first, so please post your questions in the chat or on the community of practice page. Then, in the interest of time, we'll just move along to Emily and Mitali. Thank you for sharing. Thank you. Well, hello everyone, and again, good morning, good afternoon and good evening, depending on what time zone you're in. Thank you very much for joining us. Emily and I are really excited to be in this session and to tell you a little bit about what DataKind has been doing in this space. Emily, who is presenting this work, is a data scientist and DataKind's technical lead, and I'm Mitali, and I manage DataKind's portfolio of data science projects focused specifically on strengthening frontline health systems. In these brief 15 minutes, we're going to present a novel approach to evaluating the impact of health interventions, based on program data that's actually housed on DHIS2. We hope this presentation sparks a conversation, a curiosity or even a collaboration. We'll pop our contact details on at the end of this deck, and of course you can always reach out to us via Twitter and our website as well. And just to let you know, our colleague Caitlin is also in the audience.
So if you're popping in questions, she'll be there to help respond fairly quickly. DataKind has been in the business of bringing people together to co-create transformative data science, machine learning and AI solutions for social impact for nearly a decade. We consider it one of our chief responsibilities to stick around to make sure that these solutions are indeed adopted and impact is achieved. In the last few years, we've taken our vision of success further. We strive to innovate on how we might use data science and AI not just to solve individual or bespoke needs but to support entire sector-level problems. That's what's at the heart of DataKind's Impact Practices program, and the work we're presenting today is a part of that. We've been thinking about creating solutions that work for multiple actors within a sector, that exist as global goods, and that contribute to bringing about genuine, desired sector-wide change. We've been thinking about data maturity, data ethics and data science opportunities to strengthen frontline health systems for over two years now, and we're really excited to share this with you all. We start with the recognition that the frontline health sector is increasingly data mature and there is a large volume of digital data, in large part due to the widespread implementation of DHIS2, which has overcome many challenges of fragmented or inconsistent data collection processes in low-resource settings. But we still have to ask: what are the problems in frontline health that are "data-scienceable"? By asking a lot of people. DataKind engaged in over 100 conversations across the frontline health space, and we heard a spectrum of problems running the entire gamut of challenges.
On synthesis, we determined that the data science problems that need to be solved can be broken down into increasing access to digital data, increasing trust in the digital data, and increasing the use of the digital data, specifically to gain timely, reliable and actionable insights. It's particularly around that fourth area that we wanted to explore alternatives to traditional modes of intervention evaluation. Given the expansion of DHIS2, and the global community working on it to manage digital health information, we were able to explore computational methods for estimating intervention impact using data housed on DHIS2. That's how we've created some opportunities for continuous and more regular intervention evaluation at multiple levels within health systems. I'll hand over to Emily now to tell you more. Thank you, Mitali. We'll now turn to the technical part of the talk, in which I will discuss one method that can be used in a low-resource environment to attempt to estimate the impact of a particular treatment program or intervention. No one here is a stranger to the difficulties of executing an RCT: the considerations of study design, the time and money costs, the reality of fitting it in alongside day-to-day operations in a facility. There are huge barriers in low-resource settings, and that's not even considering the ethical implications of an RCT. After all, how do you decide from whom to withhold a potentially life-saving treatment? And when you're trying to measure the impact on an outcome that's relatively rare, the problem gets even trickier. All of this speaks to the need for an easier, less resource-intensive, but still scientifically valid way of evaluating an intervention's impact. No model is perfect, but synthetic controls is a relatively easy-to-use and easy-to-execute tool to include in your analytical arsenal.
It can also be run on a laptop and does not require big data infrastructure or parallel computing to implement. The difference-in-differences approach is a relatively straightforward and simple tool, frequently used in economics and health contexts, to answer a similar question: what is the impact of a treatment on a treated group? The treated group, represented here by the B points on the graph, follows a similar outcome trend to the control group A, which did not receive treatment. We also assume that the treated group would have continued to follow that trend in the absence of treatment. The difference between the treated pre-post change (B2 minus B1) and the control pre-post change (A2 minus A1), that difference of differences, is attributed to the estimate of the impact of the intervention on the outcome variable. However, difference-in-differences has a fairly strict requirement, the parallel trends I mentioned on the previous slide. In other words, in order for the method to yield the most reliable results, it requires that the outcome variable trends for both the treatment and the control groups were the same prior to treatment. And that's a very difficult assumption to meet in real-life data sets for a variety of reasons: poor data quality, missing data, and the on-the-ground reality of trying to implement an intervention in a health facility while also carrying out day-to-day operations. All of these factors and more are at play here. The synthetic controls method relaxes that parallel-trends assumption by forcing parallel trends through weighted averages of the control units. In this example from health economics, synthetic controls is used to try and determine whether or not the passage of a tax deduction led to an increase in kidney donation rates in New York State.
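The difference-in-differences arithmetic just described (treated pre-post change minus control pre-post change) can be sketched in a few lines. The outcome values below are illustrative, not from the talk's dataset:

```python
# Sketch of the difference-in-differences estimate described above.
# A = control group, B = treated group; subscripts 1/2 = pre/post.
a1, a2 = 0.050, 0.048   # control outcome rate, pre and post
b1, b2 = 0.051, 0.040   # treated outcome rate, pre and post

# (B2 - B1) - (A2 - A1): the change attributed to the intervention,
# under the parallel-trends assumption
did_estimate = (b2 - b1) - (a2 - a1)
print(round(did_estimate, 3))  # -0.009
```

The whole method rests on the parallel-trends assumption the speaker flags: if the control group would not have tracked the treated group absent treatment, this subtraction no longer isolates the intervention's effect.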
The dotted line represents "synthetic New York", in other words a control forced to follow the real New York's outcome trend prior to the treatment, the tax benefit. The difference between the trend of the synthetic counterpart and the trend of the real-life treated unit's outcome variable is then attributed to the effect of the intervention. In other words, the dotted line represents a projection of what the donation rate would have been in New York State had the deduction not been passed, while the solid line represents the actual donation rate after the legislation was passed. The result was an increase in the donation rate of about two and a half percent. Now we'll demo a use case specifically on DHIS2-like data. We will quickly cover data prep, the allocation of treatment, and model results. For this specific use case, I used the DHIS2 sample Sierra Leone dataset, which I installed locally on a Postgres database. If you have access to a particular DHIS2 implementation's web platform, it would be easiest to use the pivot tables feature to export monthly or quarterly data for specific data elements and your facilities of interest. However, if you are a comfortable coder, it's possible to use R or Python, or your analytical language of choice, to connect to the DHIS2 API and programmatically retrieve your data. The specifics of how to do this are out of scope for this particular talk, but if you're familiar with the API, this saves the step of manually retrieving and exporting your data and can help make your analysis more repeatable if you're interested in running it more than once to track results over time. Using the DHIS2 data model as a framework, we simulated four years' worth of sample data, from 2017 through 2020, for monthly live births in facilities using a Poisson distribution with mean parameter lambda equal to 100, and monthly occurrences of babies born with low birth weights, also Poisson with a mean of five.
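The simulation setup just described can be sketched as follows. This is my reconstruction of the stated parameters (monthly births ~ Poisson(100), low-birth-weight counts ~ Poisson(5), 2017 through 2020), not the speaker's actual code; the facility count and random seed are illustrative assumptions:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
months = pd.period_range("2017-01", "2020-12", freq="M")  # four years, monthly
n_facilities = 10  # assumption; the real sample dataset's facility count differs

rows = []
for fac in range(n_facilities):
    births = rng.poisson(lam=100, size=len(months))  # monthly live births
    lbw = rng.poisson(lam=5, size=len(months))       # monthly low-birth-weight counts
    rows.append(pd.DataFrame({"facility": fac, "period": months,
                              "births": births, "lbw": lbw}))
df = pd.concat(rows, ignore_index=True)

# Aggregate to quarters to smooth monthly noise in a relatively rare outcome
quarterly = (df.assign(quarter=df["period"].dt.asfreq("Q"))
               .groupby(["facility", "quarter"])[["births", "lbw"]].sum()
               .reset_index())
quarterly["lbw_rate"] = quarterly["lbw"] / quarterly["births"]
print(quarterly["lbw_rate"].mean())  # close to 0.05, i.e. the ~5% rate mentioned
```

The quarterly aggregation step mirrors what the speaker describes next: with an outcome averaging only five events per hundred births, monthly series are too noisy for trend comparison.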
We chose this because a lot of count data falls under this particular distribution. The end result is an average rate of low birth weights of about 5%. Data are then aggregated by quarter in an attempt to smooth out the monthly variation in births and low birth weight counts, which is common when working with relatively rare outcomes. In our example, note that while there's little difference in the rate of low birth weights prior to intervention (and this was done on purpose, since deliberately selecting facilities based on their outcome variables can have confounding effects), parallel trends are not observed here. After the start of the intervention, the treatment facilities' rate of low birth weights appears to be trending downward, and there's a clearer difference between the rate for control versus treatment facilities. So how much is that difference, and with what confidence can we say that this apparent decrease is due to the intervention? Here's a slide with some sample model results. In this example, the plot shows the average effect of the treatment on low birth weights across all treated facilities, post intervention start. On the x axis, time relative to treatment equal to zero indicates the start of the intervention. The y axis represents the estimated impact of the intervention on low birth weights. The outcome variable was transformed to its natural logarithm for input into the model, which is fairly standard for models whose outcome variables are rates or counts. So to express the estimated impact in terms of percentages, we reversed that transformation by exponentiating the model estimates. The results appear to show a steep drop of about 25% in the first quarter of the intervention, followed by an increase that's still an overall negative change, followed by a leveling off. The average negative change across the post-intervention period is about 16.6%. The gray shaded area represents the confidence interval.
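The back-transformation mentioned above is a one-liner: a model fit on log(outcome) yields estimates on the log scale, and exponentiating converts them to approximate percentage changes. The -0.18 figure below is an illustrative input chosen to mirror the roughly 16.6% average decrease reported in the talk, not an output of their model:

```python
import math

# A log-scale treatment effect (e.g. a coefficient from a model fit on
# the natural log of the low-birth-weight rate) ...
log_scale_estimate = -0.1815  # illustrative value

# ... exponentiated to recover the multiplicative / percentage change
pct_change = math.exp(log_scale_estimate) - 1
print(f"{pct_change:.1%}")  # about -16.6%
```

This is why log transforms are convenient for rate and count outcomes: effects become relative (percentage) changes, which are comparable across facilities with very different delivery volumes.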
Note that it includes zero, so a strict interpretation of these results implies that the decrease in low birth weight is not statistically significant yet. This does make sense, though; after all, interventions can take a long time to show conclusive results, particularly in health outcomes. However, it does appear that low birth weights are directionally improving in treated facilities. It's also worth taking a moment to discuss why there might be an apparent initial drop in adverse outcomes followed by a rebound. This could be for several reasons. It could be because of a drop-off in treatment adherence over time. It could be due to variations between facilities: patients in the treatment program started going to other facilities, or vice versa. It could also simply be due to better reporting: better training means facility staff are more able to recognize danger signs, and in the past the adverse incidents were actually going unrecognized. It's also possible to see results for individual facilities. Here's an example plot that can be generated from the model results. The red line represents the outcomes for one facility and the blue line represents another. Both facilities appear to see a decrease in the targeted outcome variable, but at different magnitudes. The blue facility sees an increase in worse outcomes upon intervention start, followed by a steep drop later on, while the red facility sees a steep drop at the outset of treatment followed by an increase, but the overall impact is still positive. In other words, the incidence of low birth weights appeared to fall. As of the sixth quarter post intervention, the blue facility had an overall average decrease in adverse outcomes of about 10%, while the red facility's average decrease was around 20%. That was a very quick few minutes and we covered a lot of ground, so in my last couple of minutes I would love to recap a few very important points from this talk.
The synthetic controls method, in summary, is a relatively lightweight way to test for the impact of an intervention. It relaxes some assumptions that are required for difference-in-differences, making it easier to use in cases where the outcome variable is uncommon. The overall average effects, as well as the individual treatment unit effects, can be extracted from the model results. But as always, careful examination of your baseline data and selection of treatment units to avoid potential confounding effects is critical. And most important here: local context matters. Models don't solve everything. Talk to providers on the ground and understand how your data is collected for best results. Providers know things that analysts don't. And last but not least, we do believe that this method improves on difference-in-differences and can be applied in environments with constrained resources, and we encourage everyone here to join us in continuing this research. If you would like to learn more, or are interested in recreating the sample analysis or trying different parameters, I have made my sample code available on GitHub, which you can see in the link in the slides. We will also try to make this presentation available following the conclusion of the conference, and if for whatever reason you cannot find the slides, please email me, Mitali or Caitlin and we will be happy to connect with you. So, again, thank you very much. It was a pleasure to be able to speak with you all today. We really appreciate the opportunity, and please feel free to reach out to me, Mitali or Caitlin if you have any follow-up questions. Thank you. Thank you very much, Emily, Mitali and Caitlin. Very interesting presentation. I have some questions on that, but we'll get back to that. I'll give the word to Louise then for the third and last presentation of this session. Thanks very much, Johan. And thanks to the co-presenters as well.
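For listeners who want to recreate the analysis against their own instance rather than the sample database, the programmatic retrieval the speakers mention uses the standard DHIS2 Web API analytics endpoint. A hedged sketch of building such a query; the base URL and UIDs below are placeholders, not a real deployment:

```python
from urllib.parse import urlencode

# Builds a DHIS2 analytics query URL. The /api/analytics.json endpoint and
# the dx/pe/ou dimension syntax follow the standard DHIS2 Web API; actual
# requests would also need authentication (basic auth or a personal token).
def analytics_url(base_url: str, data_element: str,
                  org_unit: str, periods: list[str]) -> str:
    params = [
        ("dimension", f"dx:{data_element}"),        # data element UID
        ("dimension", f"pe:{';'.join(periods)}"),   # periods, e.g. quarters
        ("dimension", f"ou:{org_unit}"),            # org unit UID
    ]
    return f"{base_url}/api/analytics.json?{urlencode(params)}"

# Placeholder UIDs for illustration only
url = analytics_url("https://example.org/dhis", "FTRrcoaog83",
                    "ImspTQPwCqd", ["2019Q1", "2019Q2"])
print(url)
```

Fetching `url` with an HTTP client and valid credentials would return the aggregated values as JSON, which is what makes the repeatable, re-runnable analysis the speaker describes practical.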
I completely agree: context matters. And so I hope that what we're going to present now will in some way link with both Yangtze's presentation about Nepal and also Emily and her colleagues' presentation. I'm going to talk about some research that wasn't directly on DHIS2 data, but that was about data that may be going into DHIS2. So a bit different from the previous two speakers, but I hope you can see some connection. I'm going to present the Every Newborn-BIRTH study: why and what was done, and what was found. I'm presenting on behalf of a large team, with all the names on this slide, under the leadership of Professor Joy Lawn, the principal investigator at the London School of Hygiene and Tropical Medicine. And if you look carefully somewhere in here, in the advisory group, you'll find Johan Sæbø; I'm also acknowledging his contribution to this work. Firstly, why was the EN-BIRTH study done? The study is about improving data for newborns to end preventable deaths: 2.4 million newborns die each year in the world, and there are more than 2 million stillbirths. In the Every Newborn Action Plan, strategic objective five is to improve measurement, and WHO has an ambitious measurement improvement roadmap that's currently being updated; EN-BIRTH links in with that roadmap to try and create evidence to overcome data gaps. We were interested in core indicators rising up through the center of the data pyramid, shown in yellow here: aggregated data from routine registers that might be feeding into electronic HIS such as DHIS2. And the core indicators, which are all on this slide (there are many of them, but there's increasing consensus that these are the important ones for newborns), cover impact, coverage and input. The EN-BIRTH study was looking particularly at these coverage, but also outcome, indicators for newborns born in facilities.
It was very much a collaborative design and project, with research partners from icddr,b in Bangladesh, the Ifakara Health Institute in Tanzania and Golden Community in Nepal, and the London School, as I've said, under Professor Joy Lawn's leadership, but linking also with the UN and other data-interested people. The aim of the study was to assess the validity of measurement of selected newborn and maternal health indicators in hospitals, to inform prioritization and selection for use in routine health information systems. We also looked at population-based survey as a comparison, because that's often used for national and global tracking currently. The protocol is in the public domain now, and as I've said already, it was a collaborative effort including data experts. What did we do? Four objectives: we wanted to look at numerator validity, denominator validity, content and quality of care, but also the barriers and enablers to data quality. We've been hearing through the DHIS2 conference this year, and I'm sure in previous years, how important data quality is, and we wanted to look specifically at what the barriers and enablers to that data quality are. Newborns are cared for in many places in hospitals. We looked at three of them: the labor and delivery ward, the kangaroo mother care ward or corner, and the neonatal ward. Today I'm just going to present some of the results and signpost you to where you can find other results if you're interested to dig deeper. Our research colleagues in Bangladesh at icddr,b designed a customized Android-based tablet application to collect the data in a time-stamped way. Our research colleagues in Tanzania led the qualitative work and also the kangaroo mother care indicator, and our research colleagues in Nepal led the neonatal resuscitation as well as the experience-of-care indicator work. So, starting with how accurate the numerators and denominators were: what did we do?
Well, we used an external gold standard to do criterion-validity work: we compared what was observed to happen with what was measured to happen, either in survey or in register. The research was carried out in five public hospitals providing comprehensive emergency obstetric and newborn care: two hospitals in Bangladesh, one in Nepal, and two in Tanzania. We think of it like a triangle. At the top, in gold, is the gold standard; the results I'm presenting today are just about observation, so the practice or intervention was observed by the research team. That was then compared to what was being written in the routine facility register, shown in blue here, the data that's usually aggregated around the time of birth to go up into DHIS2, and also compared against what the woman reported in the exit survey after discharge. What did we find? As I've said, the results are published in the public domain now: there's an overall validity paper and also a supplement with 14 papers, each linking to an individual indicator or care practice. I wanted to show you a picture of my colleagues who did the research, including the analysis; these are the lead authors on those 14 papers. As you can see, we looked at coverage and quality indicators, measurement systems, outcome indicators, and also experience of care. Probably the easiest way, if you want to look further at the results, is to go onto this website, because there are also nice videos of our researchers sharing what they learned as a result of doing this research. On the labor and delivery ward, what did we find? We observed more than 23,000 births, among which nearly 7,000 were caesarean sections. We looked at more indicators, but today I'm just presenting three: firstly, uterotonics to prevent postpartum hemorrhage; secondly, early initiation of breastfeeding; and thirdly, neonatal bag-mask ventilation.
In the figure, the gold-colored point is what was observed to happen. You can see for uterotonics very high coverage of that care practice. In purple we show what the survey captured: under-reporting. And in blue we show what the register captured: again under-reporting, but with wide confidence intervals. For breastfeeding, the coverage of breastfeeding in the first hour of life was very low, surprisingly low, and you can see that registers hugely overestimated that, as did survey. And for bag-mask ventilation, newborn resuscitation, which obviously only happens for the very small number of babies that don't breathe after they're born, registers captured that more accurately than survey. Interestingly, when you split by mode of delivery, we see that being born by caesarean section affects the care you get, but it also affects the measurement of that care. So it's something to bear in mind when we do our analyses: think about not pooling all births together, but maybe stratifying by mode of delivery, caesarean or vaginal birth. The second set of results I'm presenting today is about kangaroo mother care. We observed 840 mother-baby pairs, and I'm happy to say that very high rates of kangaroo mother care were observed in these kangaroo mother care corners and wards. Registers underestimated that slightly, and in survey, women could accurately report that they were giving kangaroo mother care to their babies. If we put all the results together, using validity ratios, that is ratios of survey-reported to observed coverage, where 1.0 is accurate and is what we want, and look across these five hospitals and also at the pooled results here, you can see that in some hospitals, for some of the indicators, there's good accuracy by survey or by register, but overall there's a lot of work to be done to improve data quality. This slide shows the survey results in the same way.
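As an aside for data-minded readers, the validity ratio used on these slides is straightforward to compute. Here is a minimal Python sketch, with purely illustrative numbers rather than study results:

```python
# Sketch of a validity ratio: measured (survey or register) coverage divided
# by observed coverage, where 1.0 indicates accurate measurement.
# All numbers below are hypothetical illustrations, not EN-BIRTH results.

def coverage(numerator: int, denominator: int) -> float:
    """Coverage of an intervention as a proportion of births."""
    return numerator / denominator

def validity_ratio(measured: float, observed: float) -> float:
    """Above 1.0 the data source overestimates coverage; below 1.0 it underestimates."""
    return measured / observed

# Illustrative case: early breastfeeding observed in 10% of births,
# but the register records it for 70% of births: a large overestimate.
observed = coverage(100, 1000)
register = coverage(700, 1000)
ratio = validity_ratio(register, observed)
print(f"register:observed ratio = {ratio:.1f}")
```

The same ratio works for survey-to-observed comparisons; the study's pooled figures additionally carry confidence intervals, which this sketch omits.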
I'm just blocking out the register results so you can see the overall pattern for survey; but we're really interested in routine data at this conference, so that's what we'll focus on now, looking at the routine register data for these interventions. For some hospitals and some indicators the accuracy is good, but as I've already said, breastfeeding has a long way to go in terms of data accuracy. If you look at the supplement, you'll see other papers that describe other interventions. In generalising our results, we acknowledge that survey is really important for estimating population-based contact coverage. But once you start looking at individual clinical interventions, it's very hard for women to report things that they may not have seen, or that are complex interventions; it's not really fair to be asking mothers what happened. By contrast, registers, which are filled in by health workers, are a really important data source for the more than 80% of global births that now take place in facilities. So registers can provide the data on the clinical interventions, but as you've seen, data quality really varies. There's work to be done to standardise register design, filling, and flow into national routine information systems, and that requires some implementation research. Looking very briefly at content and quality of care, our third objective, I'm just going to highlight birth weight here. Using those same ratios, I think you can see that for birth weight registers really outperformed survey: very high accuracy of birth weight in these registers, both for low-birth-weight and normal-birth-weight babies. This is really important because there are more than 20 million low-birth-weight babies born each year. There are well-known problems with birth weight heaping, but routine registers in these hospitals were very complete and very accurate. Most babies were weighed within an hour.
There was less heaping for digital than for analogue scales, although digital scales were only used for 15% of those 23,000 births. And we found more heaping at night time. So again, just as we heard earlier, context matters, and even context over the day-night period matters. Stillbirths were not always weighed, which again was an interesting finding. So routine data is accurate for birth weight, but we do need to invest in those digital scales. We also did a lot of work on the timing of these interventions. Bag-mask ventilation should be started within one minute, but you can see from this quality gap analysis that that was only achieved for a very small proportion of the babies who received the intervention; they just weren't receiving it quickly enough. And that highlights places where quality improvement can be focused. On to the barriers and enablers to routine reporting: why do the data have different quality in different contexts? There's a lot of worry about this lack of trust in data quality, and that's impeding use. These registers, which are the usual data source in these settings, have potential, but what's impeding the data use is the quality. As we've shown already, we had highly complete data, and implausible values were also rare. However, the accuracy varies. Just looking at uterotonics: high coverage of the intervention, but in those different hospitals, different capture in the register. Bear in mind that these two hospitals have exactly the same register, as do these two hospitals, but look at the difference in register data capture, really showing that the data environment and data culture affect capture and use. And when you pool them all together, you have wide confidence intervals. The next part was qualitative research: we interviewed health workers, and also our data collectors, with in-depth interviews and focus group discussions, to ask them what the barriers and enablers were to them filling in these registers.
So it was a multi-country, multi-site analysis of these barriers and enablers. The first finding was that in each country the register is different, and in the Nepal register the coverage-of-care indicators were actually not captured. Register design varied: some had specific columns, others non-specific columns such as "drugs given" where you could write any drug. But as I said, some registers didn't even have a column for the intervention, and the instructions and conventions varied. In addition, there were many other documents in which those interventions had to be captured. Some of these labor and delivery registers had 58 columns, and in other wards the registers were even more variable. Here's a picture of some of these registers, varying from highly structured ones, such as the 58-column register, to handwritten registers which I'd call user-designed: they could change each week, or each page, depending on who set them up. We found across all our settings that the same themes emerged, acting as either barriers or enablers, and they fell into three buckets: register design, register filling, and register use. Regarding standardization, nurses told us: "Sometimes I have to add columns to include data I know is important for the monthly reports, because the register doesn't have it and I would miss it, and that would be a challenge." Regarding time, a nurse told us: "In an eight-hour shift, if I have a large number of patients, I may spend more time on documentation than on attending the patients." This is a big issue: if the care is being affected by the documentation, that's important. And regarding register data use and feedback, nurses told us: "I haven't got any feedback from HMIS about documentation. There is a monthly meeting in the hospital with data people, but we don't usually participate in that meeting."
So, unlike surveys, which have set questions, set filling processes and standard training, registers are not standardized and there's very limited training on filling them in. What next? We can't wait for the data to be perfect; we need to start using register data, increasing those feedback loops and linking that to improved data quality and to standardized registers, thinking about caesarean section births, and understanding through proper research how we can improve data quality and use at the register level. We found that leaving "intervention not done" as a blank was very confusing when calculating completeness. There was clearly far too much duplication and burden on health workers, and interventions that involve an element of timing, such as "breastfed within an hour", had particularly poor accuracy. And again and again I keep mentioning the fact that we don't have a standardized register. We're involved now in phase two of this project, which is about linking register data up the data pyramid, with colleagues in Bangladesh and Tanzania participating in this research, together with Data for Impact and funded by USAID. It's a collaborative two-year feasibility implementation research project, capitalizing on the momentum of EN-BIRTH phase one, but really looking at indicator uptake in the system. We're hoping to design some tools, or perhaps a toolkit, to enable other high-burden settings to look at the data elements they need in order to have the right indicator measurement and drive that improvement of data quality in their countries. We've got five objectives, and I'm very happy to talk with people afterwards about what we're doing.
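The blank-cell ambiguity mentioned a moment ago (does a blank mean "not done" or "not recorded"?) is easy to see in a tiny sketch. This is a hypothetical Python illustration, not the study's analysis code; the column values are invented:

```python
# Hypothetical register column for one intervention; None is a blank cell.
rows = ["yes", "no", None, None]

# If blanks are read as "intervention not done", every cell carries a value,
# so completeness appears perfect:
as_not_done = ["no" if v is None else v for v in rows]
naive_completeness = sum(v is not None for v in as_not_done) / len(rows)

# If blanks are read as "not recorded", half the column is missing:
strict_completeness = sum(v is not None for v in rows) / len(rows)

print(naive_completeness)   # 1.0
print(strict_completeness)  # 0.5
```

The same column yields two very different completeness figures depending on how blanks are interpreted, which is exactly the confusion the team describes.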
EN-BIRTH phase one focused on data quality around this part of the data pyramid; in EN-BIRTH phase two we're really thinking about going up the pyramid, through all those steps of data aggregation and summary forms, into electronic information systems such as DHIS2, and thinking about that data flow and feedback, particularly for newborn health. We think the tools need to be based around three buckets: tools about using data, tools about mapping data, and tools about data quality. Because at the moment the 90 countries of ENAP, the Every Newborn Action Plan, can't report on the coverage of these high-impact, evidence-based interventions that newborns in their countries need. We really are dreamers that this table would turn green: that countries would know what care these newborns are receiving and be able to drive change towards improved quality and coverage of care. We all know that valid data alone will not save lives; it needs to be used by professionals, policymakers and governments to invest in health care for newborns and women. And whilst caesarean section rates continue to rise, we need to think about how that affects our measurement. But there is a lot of data being collected, and it's honouring to the midwives who collect that data to use it now, as we continue to try and improve data quality, overcoming those barriers, including increasing feedback between the HMIS levels and understanding what specifically can be done from an evidence perspective, which is what this session is about, to improve that data quality. So I just want to thank again my colleagues and collaborators in all these countries, thanking CIFF, the Children's Investment Fund Foundation, for funding phase one, and USAID for funding phase two. And yeah, I look forward to conversation and questions. Thanks so much. Thank you very much, Louise.
So I think we'll take five minutes for questions. I see there have been several questions in the chat for Yangtze, and she has answered them; feel free to post more questions, either here or on the community page, where we can keep them alive beyond this session, or raise your hand using the feature under Reactions in your Zoom toolbar. Any questions for any of the presentations? Yes, we have a hand from Elaine Byrne, please. Thank you. Thank you all for very interesting presentations. One question I'd like to ask is about sharing the data from DHIS2, particularly with organizations outside the health facility. Are there data sharing agreements? I know Emily mentioned that was on a trial data set, but maybe the others can respond on how you would access the data, or whether it's in partnership with people working there who have access to the data anyway. Thank you. Speaking specifically from the DataKind perspective: DataKind does have data use agreements with partners who are working in the field, either directly with the Ministry of Health or with program organizations. We have a restricted level of access to do the work we've been doing, and those agreements do have very limited use policies, such that we couldn't share those results publicly, which is why the demonstration our team put together was on the trial data, as a demonstration of the methods on a standard data set. Thank you. I see there's a question from Suleyman; I think he both raised his hand and wrote it down on the community page. Please, Suleyman. Okay, can you hear me? Can you hear me, please? Yes, yes. So, I have a question for Louise. I want to know how they organized the team for the study: who was the lead, the MOH or the university? And the second question is: did they set up another server for the survey, or was the same national instance used for the survey?
That's the question for Louise. And for the first presentation about Nepal: are the RQA forms implemented directly in DHIS2 in Nepal? Okay, thanks. Thank you. Do you want to go first, Louise? Thank you so much, Suleyman, great question. Yes, we linked with researchers in the countries, ICDDRB, the Ifakara Health Institute, and Golden Community in Nepal, and they worked really closely with the Ministry of Health. So there was very much that vision of working not just as standalone researchers, but really embedded in the real world, if you like. That was how we did it in both EN-BIRTH phase one and phase two. And yes, we did set up a separate server for this study; in EN-BIRTH phase two we're trying to link much more closely with DHIS2. For the survey, we based our questions on the typical DHS and MICS questions, but we had our own server, partly because we were also trying out questions that don't yet exist in those surveys, like kangaroo mother care; most of the interventions don't exist there, so we couldn't really use their platform because it wasn't set up for that. I hope that helps. Thanks. Thank you. And then, Yangtze, I think the last question goes to you, and we draw the line after that. Thank you, Suleyman, for your questions; I tried to address them in the chat box, but let me explain. Currently, DHIS2 already has a data quality function; there's one section there, so we can use that to find data quality issues. Currently in Nepal, RQA is conducted both online and offline; it depends on the provinces which mode they prefer.
So, in the DHIS2 platform the RQA is not currently embedded. The RQA tool itself used to cover only verification of the recording and reporting tools, but in the province where I work we also tried to address DHIS2 data entry, since that has issues too: during entry, users sometimes enter data that does not match the recording and reporting format. So we tried to include the DHIS2-entered data in the RQA form as well, in the offline version. Okay, thank you. Thank you. We are running out of time, so I'd just like to thank all the presenters: Yangtze Sherpa, Emily Yelverton, Mitali Ayangar, and Louise Tina Day, and of course all the participants. And since we have the community of practice page, I really encourage you to use it for any further questions, and I encourage the presenters to drop by that page and answer any additional questions. Thank you.