Hi, everyone. My name is Grant. I'm from the training and communications team. Welcome to the session on integrating tracker and aggregate data in DHIS2. I am not an expert on this at all, so I'm going to hand straight over to Olaf, who will get you going with the session now. So, Olaf, if you want to take it away for us.

Okay, sure. So I'll just start by sharing my screen here. I hope you can see the presentation. Good. So welcome to this session on the topic of linking tracker and aggregate data. My name is Olaf. I will be giving a brief introduction to this session, talking a bit about different approaches to integrating tracker and aggregate data in DHIS2, and some considerations: things to keep in mind if you have a tracker and an aggregate system and are planning to integrate them. Then I'll hand over to Pete from BAO, who will be presenting the Program Data Set Connector app for DHIS2, which is an app that, well, he can present the details later, but it facilitates the linking of tracker and aggregate data at the metadata level, essentially. And then Vlad Shioshvili from ICF will be presenting some workflows and tools developed by or for PEPFAR for linking tracker and aggregate data, and a bit around their strategy. Then hopefully we'll have time for some questions towards the end. I also want to mention that there is a tech lounge session in two hours' time on this topic, so if you have further questions, you can also join us there.

Okay, so I'll just start by talking a bit about different approaches within the DHIS2 world of linking tracker and aggregate data. I think many of us probably think of what I listed here as the third approach, actually extracting data from tracker and saving it as aggregate data values in DHIS2, as what we're talking about here. But I also want to mention briefly, to begin with, that there are other ways in which you can combine tracker and aggregate data.
So we have this perhaps obvious possibility, but I still wanted to highlight it, of drawing in data from both tracker and aggregate systems within the analysis tools of DHIS2. In the examples on the slide here, we have an example of using aggregate data for COVID vaccines combined with tracker data on adverse events; that's the first table on the right there. That's one way you can combine the two data sources. Below we have an example of a dashboard where there is a line listing visualization using the raw tracker data, combined with aggregate outputs based on the same data source. That's another way to do it. Of course, the big downside here is that it assumes your tracker and aggregate data are in the same DHIS2 instance, which is often not the case. Then there is this second opportunity, which I know quite a few implementations have been using, which is to use aggregate indicators for combining tracker and aggregate data. The most obvious example of this is when you have service data collected in tracker, like the example below here of BCG doses from an immunization tracker, and you combine that with denominator data such as population estimates, which is stored in the aggregate domain. Aggregate indicators let you define formulas based both on program indicators and on aggregate data elements. We also have the possibility of combining service data, as in the other example of ANC first visits, in situations where you have, for example, previously collected aggregate data and have now switched to tracker, and you want to combine the two longitudinally; you can do that by adding them up. Or you may have different geographical regions or types of facilities using tracker and aggregate, respectively. And then there is the third approach, which is what most of us are thinking of in this session: this idea of taking aggregate values out of tracker and saving them as aggregate data values, so in the example here, starting from your individual tracker cases.
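As a rough illustration of the second approach, here is what an aggregate indicator combining a tracker-derived numerator with an aggregate denominator effectively computes. This is a minimal sketch with made-up figures; in DHIS2 itself this would be an indicator formula referencing a program indicator and a data element, not Python code.

```python
# Sketch of the calculation an aggregate indicator performs when its
# numerator is a program indicator (tracker) and its denominator is an
# aggregate data element. All figures are illustrative, not real data.

def coverage(numerator_doses: int, population_under_1: int, factor: int = 100) -> float:
    """BCG coverage (%) = doses given (from tracker) / target population (aggregate)."""
    return factor * numerator_doses / population_under_1

# e.g. 850 BCG doses recorded in the immunization tracker, against a
# population estimate of 1000 children under one year of age:
print(coverage(850, 1000))  # -> 85.0
```

The point of the aggregate indicator type is that the numerator and denominator can live in different data domains, as long as both are in the same instance.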
So you take the aggregate numbers and save them as data values in aggregate data sets, essentially. Of course, often a combination of these approaches may be relevant. You might have different programs, or different phases of the implementation, where you want to do different things for combining the two. And it also depends a lot on how you have configured the tracker and aggregate systems, or whether you even have an aggregate reporting system. But let's focus on this third approach of saving tracker data as aggregate data values, and elaborate a bit on why this is useful in many cases: even if you only use DHIS2 for tracker, you may still want to produce aggregate data values. The first reason is that it allows you to combine separate reporting systems. If you have an HMIS with aggregate data, and you have tracker programs, and you want to be able to analyze that data across programs, you need to move your tracker data into the HMIS. Or you may have, as I mentioned, a phased approach where certain types of facilities use tracker and other types of facilities collect aggregate data, different regions in the country, etc. So there are some scenarios where you have two reporting systems and you want to make sure you have the data in one place for analysis. But there are also other reasons, which are separate from whether you're collecting aggregate data. One is that there is some additional flexibility in the analysis you can do on aggregate data values, particularly around the dimensionality of the data. So I added a couple of figures here towards the bottom, showing how a program indicator counting TB cases by age and sex looks in the Event Reports app. This is actually the pivot table, where you're not able to separate out the age and sex dimensions so that you can pivot and filter, etc.
To be able to do that, you need to take your program indicator values and save them as aggregate data values, to have the full flexibility of the Data Visualizer in terms of pivoting and filtering on these data dimensions. The last reason, which is perhaps the least obvious, is that we've seen, particularly in big tracker implementations, that some of the program indicator queries are quite heavy on the server; in some cases, these bigger queries basically take down the whole server. This is especially true for program indicators without any clearly defined time period, like HIV patients currently on treatment, where we need to look at all active enrollments, for example, or people ever given a COVID vaccine. So in these cases, taking those values at regular intervals and saving them as aggregate data values may reduce the load on the server quite a lot. Of course, there are challenges to this approach. It is not fully built into DHIS2: you don't have all the tools in DHIS2 to actually do this, so you need something outside of DHIS2 to move the data. Part of that is having a mapping between your tracker metadata and your aggregate metadata, and that can be cumbersome; that's what Pete will be talking about later. And as soon as you have two instances of DHIS2, you have the additional issue of keeping your organisation units, your facility lists, in sync across the instances. So, just to go through what is available in DHIS2 for doing this. Assuming here that we have our tracker data values in a DHIS2 instance, and we have our table with aggregate data values, which could be in the same instance or in a separate DHIS2 instance, the first thing we need to do is define our program indicators. For example, counting all the enrollments in an immunization program where the children have been given BCG and the age is less than one year.
Then the DHIS2 API allows you to export those program indicator values as aggregate data values, in what is called a data value set. And this can in turn be imported through the DHIS2 API and saved, linked to aggregate data elements. So this is the process and what DHIS2 has built in, but as you see on the right there, the actual export and import of the data is not built into DHIS2, of course; you need some form of script, app, or tool for doing that. The additional point is this issue of mapping the metadata: you need a way of identifying which program indicator corresponds to which data element and category option. There's some functionality for doing that in DHIS2, but there are also examples of apps out in the community that do the mapping outside of DHIS2 itself. If you have multiple instances of DHIS2, there's also the question of whether you want to first produce your aggregate values within the tracker instance and then move the aggregate data values to your aggregate instance, or do that directly; there are pros and cons to each. And as I mentioned, as soon as you have two instances that you're moving data between, the whole issue of organisation units also becomes critical. Now I'm moving on a bit to discuss some of the considerations, some of the issues to keep in mind, if you're setting up an integration between tracker and aggregate. I think I've touched upon most of these issues already around HMIS reporting, but thinking of the full data transfer process, there are a number of things you need to make a decision around. How often do you want to migrate data from tracker to aggregate: every day, every month, every quarter? How far back in time do you want to migrate data? How do you want to deal with updates? Often the tracker data capture is not actually done while you're seeing a patient or beneficiary; it's often done retroactively.
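As a sketch of the export step just described, the DHIS2 analytics API can return program indicator values in data value set form, which can then be posted to the data value sets endpoint of the target instance. The instance URL and program indicator UID below are hypothetical placeholders; only the endpoint path and `dimension` parameter shape follow the DHIS2 API.

```python
# Step 1: GET /api/analytics/dataValueSet.json with the program indicator
# as the dx dimension; this returns its values as an aggregate data value set.
# Step 2: POST that payload (with data element UIDs swapped in) to
# /api/dataValueSets on the target instance. Here we only build the URL.

from urllib.parse import urlencode

BASE = "https://tracker.example.org/api"   # hypothetical instance
PI_UID = "a1b2c3d4e5f"                     # hypothetical program indicator UID

params = [
    ("dimension", f"dx:{PI_UID}"),
    ("dimension", "pe:LAST_MONTH"),
    ("dimension", "ou:USER_ORGUNIT"),
]
export_url = f"{BASE}/analytics/dataValueSet.json?{urlencode(params)}"
print(export_url)
```

The actual HTTP calls, scheduling, and error handling are exactly the "script, app, or tool" piece that is not built into DHIS2.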
So even with tracker data, you might have a delay in reporting. How do you then deal with that when you have a separate system with timelines on the aggregate side, potentially with aggregate data being locked for edits after a certain period of time? So there are a number of governance issues that need to be decided around how to do the data transfer. There's also the related issue of data quality and validation. What do you do with data where you find errors in the tracker data and it has already been migrated to the HMIS? Do you correct them in the HMIS? Do you correct them in tracker and move the data again? And the whole issue of completeness and timeliness of reporting in the HMIS, where there isn't any corresponding concept in the tracker world: how do you deal with that? When planning this, it might be useful to plan for a transition period where the manual HMIS aggregation of data is done in parallel with the automated aggregation from tracker, so that you can do some comparisons and look at the discrepancies, of which there will probably always be some, and use that to identify potential data quality issues and decide when the automated tracker reporting can replace the manual HMIS aggregation. It's also important, for the reasons I talked about, that there is something outside of DHIS2 that has to be maintained: you need to maintain a mapping, and you need to keep your org units in sync. So when planning this, take into account that someone needs to do all this work; you need to have people with the technical skills to maintain this over time. The last point I want to make is that in the metadata packages that have been mentioned many times so far during the conference, some of this mapping from tracker to aggregate is built in as much as possible.
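A parallel-run comparison like the one described could be sketched as follows. The facility names, values, and tolerance threshold are all illustrative assumptions, not part of any DHIS2 tool; the idea is just to flag where manual and automated aggregates diverge beyond an agreed margin.

```python
# Sketch of a transition-period check: compare manually reported aggregate
# values against values auto-aggregated from tracker, and flag facilities
# whose relative discrepancy exceeds a tolerance. Data is illustrative.

def discrepancies(manual: dict, automated: dict, tolerance: float = 0.1) -> list:
    flagged = []
    for facility, reported in manual.items():
        derived = automated.get(facility, 0)
        denom = max(reported, derived, 1)
        if abs(reported - derived) / denom > tolerance:
            flagged.append(facility)
    return flagged

manual = {"Facility A": 100, "Facility B": 50, "Facility C": 80}
automated = {"Facility A": 98, "Facility B": 30, "Facility C": 79}
print(discrepancies(manual, automated))  # Facility B differs by 40%
```

Flagged facilities are where you would investigate data quality before retiring the manual reporting stream.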
So in the areas where there is both a tracker metadata package and an aggregate metadata package, the mapping between the two is included out of the box. So if you're using these metadata packages, for example the TB case-based package in tracker as well as the aggregate package, some of this work is included out of the box. And the guidance around these packages in some cases also includes recommendations related to some of the issues I talked about earlier: how far back to transfer data, etc. Okay, so that was what I wanted to say to start off the session. I'll hand over to you now, Pete. I hope you have access to share your screen.

Yeah. Can you guys see my screen? Great. Thanks. Hi, everyone. I'm going to be presenting on the Program Data Set Connector app, which addresses the challenge that Olaf was talking about of setting up the metadata to support the mapping; so it's an app to help link the tracker and aggregate data models. I'm Pete, a software engineer at BAO Systems, and I came up with the idea for this app and developed it at the start of this year. The presentation is going to go through an introduction to the app, a hopefully successful live demonstration, and then, if I have time, some future improvements. So what is the app? It's a DHIS2 web app, built using the app platform, to automate the process of configuring DHIS2, creating all the metadata that you need to set up this transfer of data from the tracker to the aggregate data model. It also allows you to define custom disaggregations on program indicators, set up a mapping for them, and use the disaggregations as true data dimensions in DHIS2, which should hopefully enhance tracker analytics a lot. To do that, the app uses the concept of category option filters to connect a program indicator disaggregation on the tracker side with an aggregate category option.
So how does it work? You don't have to specify a huge amount of things: you pick where you want to send the data, which is the data set and data element, and where the source of the data is going to come from, which is the program indicator. Then you specify how to break down this total data by each of the category options, using the category option filters. The app does all the work for you of creating the program indicators with the different breakdowns, and creating indicators that relate to those with the special DHIS2 properties attribute option combo and category option combo for data export. It also makes a custom attribute, assigned to the indicators, which holds the UID of the data element the data will be exported to. That means all of the configuration is built for you, and it's bundled into an indicator group which allows you to export all that data in one go fairly easily. So let's have a go at this live demo. Sorry, let me get over to my demo. Here we go. Okay, so the program indicator we're going to be looking at disaggregating using the app is the inpatient cases program indicator. I'm on a copy of the play server here; it has a few breakdowns already, but we're going to take that to the next level and break this down by gender, age, and weight, with a significantly larger number of categories. So the first thing that you need to do is define where the data is going to be sent, and that's going to be an aggregate data set. This is totally customizable: whatever you design the categories and category options to be is up to you, really, however you want to break the data down, and so long as you can filter for them on your program, you can create the disaggregations. So it's extremely flexible. Let's see how to actually set this up using the app. Here is the app screen. It's fairly simple, just a table with rows representing the mappings; you can do sorting and filtering as well. So let's set up a new mapping for this.
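To illustrate the idea of category option filters, here is a rough sketch of the kind of mapping the app maintains. The filter syntax below is modeled loosely on DHIS2 program indicator filter expressions, and the data element references and option names are hypothetical, not the app's actual storage format.

```python
# Sketch: each aggregate category option is paired with a program indicator
# filter fragment (a "category option filter"). Generating one disaggregated
# program indicator then means ANDing the filters for its option combination.

category_option_filters = {
    "Female":  "#{eventDataElement.gender} == 'Female'",
    "Male":    "#{eventDataElement.gender} == 'Male'",
    "0-19y":   "#{eventDataElement.age} >= 0 && #{eventDataElement.age} < 20",
    "20-39y":  "#{eventDataElement.age} >= 20 && #{eventDataElement.age} < 40",
}

def combined_filter(options: list, base_filter: str = "") -> str:
    """AND together the filters for one category option combination,
    preserving any filter the original program indicator already had."""
    parts = [f for f in ([base_filter] + [category_option_filters[o] for o in options]) if f]
    return " && ".join(f"({p})" for p in parts)

print(combined_filter(["Female", "0-19y"]))
```

The real app stores these filters as reusable metadata so that later mappings can load them automatically.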
As we saw, I'm going to select the data set and the data element I want to send the data to, and the program indicator where I'm going to get my data from. And you can see the app has automatically looked up all the relevant category options associated with the data set and data element, and it's created this table for us. This is where the user fills out the category option filters: basically, what would you add to the program indicator to break down this total value by the specified category option? So in this case, to break down inpatient cases by zero to 19, you would add a filter to check that the age data element in the event program is between zero and 20. But this works for tracker programs as well, as you can use attributes; anything that you would use in a normal program indicator filter you can put in here. And you'll notice these are populated automatically. The app saves these category option filters, so when you create other mappings it checks to see if the filters exist already and will automatically load them for you, so you don't have to replicate work. Okay, so we've added this new mapping to our app. Let's go ahead and generate the mapping. This will create all the metadata for us... which has worked, excellent. So let's go ahead and look at this metadata and talk through a bit of what's been made. We can see that 70 program indicators have been generated for us, which is what we expect, because there are two genders, five age groups, and seven weight groups, so two times five times seven: 70 program indicators have been made for us automatically, which is great. And then we can go ahead and have a look at one of these. So this is a copy of the original total program indicator, but we've got our category options in the name, plus some internal fields which are useful for cleanup.
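The count of 70 generated program indicators is just the cartesian product of the category options. The option names below are stand-ins for the demo's actual categories, but the arithmetic is the same.

```python
# Sketch of why 70 program indicators are generated: one per member of the
# cartesian product of the category options (2 genders x 5 age groups x
# 7 weight groups). Option names are illustrative.

from itertools import product

genders = ["Female", "Male"]
ages = ["0-19", "20-39", "40-59", "60-79", "80+"]
weights = ["<50", "50-59", "60-69", "70-79", "80-89", "90-99", ">=100"]

combos = list(product(genders, ages, weights))
print(len(combos))  # 2 * 5 * 7 = 70

# The app names each generated copy after its combination, e.g.:
print("Inpatient cases (" + ", ".join(combos[0]) + ")")
```

This also shows why the number of generated metadata objects grows quickly as categories are added.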
And then the key thing is the filter, to which the app has automatically appended additional filters to narrow down the data to the category option combo that's represented. This program indicator didn't actually have any filter originally, but if it had, that would be maintained. Next, the app makes indicators with a one-to-one mapping to the program indicators. That's because, at the time this was being developed, there was a small bug in DHIS2 which meant we couldn't use program indicators directly. But that's been fixed in 2.36, so hopefully down the line we'll just be working with program indicators; for now, we've got the indicators. This is a one-to-one mapping of the program indicator, but it has some specific fields populated for us to help us export the data into the right place. So we have this category option combo and attribute option combo for aggregate data export, and if we search for this, we can see that it aligns with these category options here: female, zero to 19, weight greater than 100. The attribute option combo relates to any disaggregations we apply on the data set; there are none in this case, so it's just the default. Then the app has made this custom attribute for us, called event aggregate mapping, and assigned it to the indicators. That holds the UID of the data element we want to send the data to, the aggregate data element. What these three fields mean is that when you export the data, instead of exporting an indicator value, it replaces the indicator UID with this data element UID. So the exported data is transformed from an indicator value to an aggregate data value with the correct disaggregations to go into our data set. And the last thing that the app makes is the indicator group, which we can see here. That's just a collection of all the generated indicators from the mapping, and it includes a handy URL in the name to get us started on exporting our data. Okay, so let's open that.
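The export-time substitution described here, swapping the indicator UID for the mapped data element UID and attaching the right option combos, can be sketched like this. All UIDs and field names are hypothetical stand-ins for the generated metadata.

```python
# Sketch of the export-time substitution: each generated indicator carries
# the target data element UID (via the "event aggregate mapping" custom
# attribute) plus the category/attribute option combos for aggregate export.

indicator = {
    "uid": "ind00000001",
    "value": 12,
    "aggregateExportCategoryOptionCombo": "coc00000001",  # e.g. Female / 0-19 / >=100
    "aggregateExportAttributeOptionCombo": "default0000",
    "eventAggregateMapping": "de000000001",               # target data element UID
}

def to_data_value(ind: dict, org_unit: str, period: str) -> dict:
    """Replace the indicator UID with the mapped data element UID, yielding
    a plain aggregate data value ready for import."""
    return {
        "dataElement": ind["eventAggregateMapping"],
        "categoryOptionCombo": ind["aggregateExportCategoryOptionCombo"],
        "attributeOptionCombo": ind["aggregateExportAttributeOptionCombo"],
        "orgUnit": org_unit,
        "period": period,
        "value": ind["value"],
    }

print(to_data_value(indicator, "ou000000001", "202106"))
```

The resulting dictionaries are exactly the rows of a data value set, which is why the exported file can be re-imported directly.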
So we then need to specify the period we want to export the data for; let's just go with last month for now. And which organisation unit we want to use; I'm going to go with Bo here. And great, we can see that we're getting our original program indicator, but broken down by all the requested disaggregations, and mapped from the indicator to these data elements with the correct breakdowns. So this is now just valid aggregate data, which we can save and then re-import. On to the next step: data import. Get our file, import... and that's imported. We can see that at the start of this video there was no data in here, but now we've just run our import. Let's go back. We can see the single program indicator has now been mapped to all these disaggregations where there are values. And like I said before, it's completely up to you what you want to create these disaggregations as; that's however you configure your categories and category options. We can then just quickly go into the Data Visualizer and see how this can be used to break down our program indicator. So first of all, let's get the original total program indicator, and we'll pick last month and Bo, because that's what we did the data transfer for. Then let's add our aggregate data element version, which we just created. You can see the total is the same, which is always promising. And the key thing is that with our aggregate version we now have full flexibility to break down the data by these disaggregations that we've created, gender, age, and weight, whereas with the original we obviously can't do this breakdown. So this has given us a much more useful representation of this program indicator than we had initially. I'm just going to load a pre-formatted version of this table, which is the easiest to look at. Great. So here's our program indicator, or the data element equivalent of it, broken down by weight, age, and gender. And we can already see a few interesting things from this example.
By looking at the gender totals, we can see that inpatient cases don't have a significant gender skew. You can see that across ages and across genders, less than 50 is the most common weight, so I might make some more category options to investigate that weight class further. And you can see that the most common age for inpatient cases is 40 to 59. So that's just a quick tour of how to use the app for transferring data. I'll go back to the presentation now and see if we can get this to advance. This was my backup if the live demo didn't work, but it seemed to go well. There are still lots of improvements planned for the app. Currently, the category option filters that you set up are shared between mappings, so I want to make those specific per mapping so they can differ, add extra category option filters in, improve feedback, and a few other points here, but I've run out of time. So I will finish there. Thank you very much for listening. That's my email if you want to reach out to me with any other questions. Thanks for your time.

Thanks a lot. And I think this app is in the app competition as well, isn't it? Yes, it's part of that; I'll be there again presenting it. Okay, so are you ready to take over?

Yeah, let me try to share my screen. I need to stop mine first. I might be able to boot you off. Great, it's telling me that I've booted you off. So... all right, do you guys see my screen? Yes, but it's blank. There it is. All right. So hi, everyone. Good morning, good afternoon, good evening. My name is Vlad. I am with ICF, and I lead the data exchange and interoperability group within PEPFAR systems. Some of you are probably aware of what DATIM is: DATIM is a PEPFAR version of DHIS2 where PEPFAR collects the HIV indicators, mainly in aggregate form, but there are also other work streams. It's used in many, many countries, which means that the data flows in from multiple countries.
And a lot of times this data originates from patient-level systems. It's not always DHIS2; it could be some other system. It could be OpenMRS, or it could be some other custom legacy system. And if you're lucky enough to not have seen PEPFAR's MER guidance, keep it that way. It's usually hundreds of pages, and it usually changes every year: the indicator reporting frequencies might change, the indicator definitions might change, the disaggregates might change. What that means is that when you have multiple countries reporting their data, they need to adapt, and they need to change their ETL systems, wherever they get their data from. So there is error-prone data entry, and there's a lot of manual data entry into DATIM. The current state is really that there are changes that need to be adopted in the field, there are errors, and patient-level data analysis is usually done at the single facility, and so on. I want to get to the demos, so I'm just going to cycle through the slides much quicker than you might be able to read. The goal of DASH is really to try to address this. It's a proof of concept; it's a suite of tools that PEPFAR is developing. It's not just DHIS2: we're aiming at data coming from any system. It could be coming from a tracker in DHIS2, but the end result is getting the data in aggregate form into DHIS2. So it sort of fits into the third model that Olaf was presenting, where you're extracting from the tracker, or the patient-level form, and importing into aggregate. But what we're trying to do is address it in a way where we maintain some control over these changes over time, create open source tools, and try to use standards along the way. So what we do is usually try to fit into the OpenHIE model.
And some of the stuff that I'm going to discuss probably fits into Bob Jolliffe's session, the interoperability one, but there's definitely a benefit here as well. So what we have developed is several tools that allow mapping separately, then transformation of this data, and then aggregation and generating an ADX message. For us, because we go by standards, ADX is the best way to represent the data and to import it into DHIS2. And it doesn't have to be DATIM, obviously. There are not many HMIS platforms supporting ADX other than DHIS2, but still, if you had another HMIS, you could potentially use it there as well. The input can be either CSV or JSON. So what I'm going to do next is try to do a very deep dive into small pieces of the components. This diagram is not fully representative of everything, because we do have multiple pieces; we have a facility registry that addresses the harmonization of the organisation units across systems, especially when you're dealing with multiple systems coming together into one DHIS2. So the mapping tool is the first piece there. All it does is allow you to do the mappings: it takes CSV-format or JSON-formatted data and allows you to map it to the minimum data set that you require in order to represent your indicator. So if I were to take an HIV indicator, I can map it to CSV and give it a name; oops, it doesn't matter what I give it. The minimum data set is stored as a FHIR Questionnaire resource on a FHIR server; we are trying to use FHIR as much as we can. As Bob mentioned in the previous session, if you were there, FHIR is challenging. It's not very easy to work with, but we are definitely trying our best. And what you see here is the set of fields that you would require in order to produce the aggregate indicator.
So you would need a patient ID in order to deduplicate, and you would need birth date, gender, location, and viral load count if you're doing an indicator that requires viral load, when they were started on treatment, and things like that. And what you can do, although I'm not going to do this fully, is use your CSV file to pre-populate the possible dropdown options. So I have my simple demo data file, which shows me all the columns that are available there. All I have to do is then just pick the ones that are in my system: date of birth would be DOB, gender would be gender. In addition to mapping the columns, you can also map the value sets. For gender, for example, it's not a given that your system is going to use the male, female, other, and unknown values that we would expect, right? In my file, well, I happen to know this, but you'd be able to upload it if you wanted to, I know it's M and F and O and U and so on. So I would save that, and once I finish this, I have the mappings established. Now, once I have the mappings established, I can take the CSV file and convert it to a FHIR representation, a common format that can be digested downstream by something that can aggregate this data. I have one that's completed, so I'm just going to use that one for the demonstration here. We are trying to make the mapping tool have an API endpoint for transformation, but it can also send the content to an endpoint if you have one. I'll quickly show you: this is the simple file that I have. It has very few records, you know, not to scare things off. So it has this CSV data with the columns that I identified, using the gender with the M, F, and U as the value set. Once I upload it, it's going to convert it to a JSON representation of a FHIR resource bundle. If you're not familiar with FHIR, a FHIR resource is basically a representation of an object, and it's very specific.
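The two mapping layers described, column mapping and value-set mapping, amount to something like the following. The column names, field names, and code lists here are illustrative assumptions, not the tool's actual configuration format.

```python
# Sketch of the two mapping layers: column names (CSV header -> minimum
# data set field) and value sets (local codes -> expected codes).

column_map = {"DOB": "birthDate", "gender": "gender", "pid": "patientId"}
gender_values = {"M": "male", "F": "female", "O": "other", "U": "unknown"}

def map_record(row: dict) -> dict:
    """Apply both mapping layers to one CSV row."""
    out = {}
    for col, field in column_map.items():
        value = row.get(col)
        if field == "gender":
            value = gender_values.get(value, "unknown")
        out[field] = value
    return out

print(map_record({"pid": "p-001", "DOB": "2001-03-15", "gender": "F"}))
```

Once every source file is mapped through the same minimum data set, the downstream aggregation step can treat all inputs identically.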
There's documentation on what a specific resource can have and how it should have it. A bundle is really just an array of those resources, and because my file had multiple records, it's really just this long list of individual resources. If I scroll down, it's not super large; it is verbose compared to CSV, but what we have now is a representation such that if you were to map five different CSV files to this one questionnaire, all the questionnaire responses after the mapping would look basically identical. Now, the next step after this is the next tool, something we call FHIR Engine. What FHIR Engine does is take this and, using aggregation directives, put it in an ADX format. I'm not using the full workflow here. In our full workflow we have OpenHIM involved, where things are set up as mediators and there are orchestrators and transformations going on along the way. So I'm just going to take a shortcut here and use Java code. What the Java code is going to do is take the JSON file that I just generated, well, the same file, I just have it on the file system, and run a transformation. That's going to take just a couple of seconds, but while that's happening, let me show you what the actual aggregation directives look like. We use OCL, the Open Concept Lab terminology service, which is part of the OpenHIE platform, to maintain these aggregation rules and directives, and also the metadata. You can see here something that is probably familiar to you: there's something that looks like a UID, there are names and codes and things like that. So what we have here are UIDs from DATIM for things like category options: under one, one to four, five to nine. But if I actually look at 10 to 14, for example, there are custom attributes associated with it.
There's the option where it actually says, and this is something similar to what Pete was doing in his presentation, that we calculate the age in years of the patient at the time the test was done, and check whether it's 10 or more and 14 or less; if so, that record will be assigned this category option. And then what happens is we basically aggregate all the data and put it in the right buckets. Now the ADX message is there. I don't know how many of you have seen ADX; as Bob also mentioned, DHIS2 does support ADX, and I know Jim Grace is on this call and he did a lot of work to make this happen, so thank you, Jim. This is the ADX message that represents the CSV file that we had. What you can see here is the data element; I'm using the code system because it's usually easier to read. The good thing about ADX is that you actually don't have to deal with category option combos: you can use the categories themselves and then specify the individual options. So in our case we have TX_CURR, it's a demo data element, but we have value one for someone who is 20 to 29, who is positive, and who has unknown sex. We also have one person with unknown sex who is 10 to 19, and then we have three people who are 10 to 19 who are male. And I know I'm glazing over a lot of stuff, like HIV status being positive only, which is implied. In the mappings in OCL we can specify that something is implied, so you don't actually have to specify it in your input file. If you remember the input file we showed, it didn't say anywhere that the person is HIV positive; because it's treatment data, it is implied. So this is really the demo I wanted to show. This file you would then import into DHIS2, and it would become part of the aggregate reporting module. And then let me just show you the very last slides.
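The aggregation step described, assigning each record to age and sex buckets and counting per bucket, can be sketched like this. The records mirror the counts mentioned in the demo, while the band boundaries and record fields are simplified assumptions; the real tool drives this from the OCL directives.

```python
# Sketch of the aggregation step: assign each patient-level record to age
# and sex buckets, then count records per bucket. Each bucket count becomes
# one ADX data value. Records are illustrative.

from collections import Counter

def age_band(age: int) -> str:
    if age < 10:
        return "0-9"
    if age < 20:
        return "10-19"
    if age < 30:
        return "20-29"
    return "30+"

records = [
    {"age": 25, "sex": "unknown"},
    {"age": 14, "sex": "unknown"},
    {"age": 12, "sex": "male"},
    {"age": 15, "sex": "male"},
    {"age": 18, "sex": "male"},
]

counts = Counter((age_band(r["age"]), r["sex"]) for r in records)
for (band, sex), value in sorted(counts.items()):
    print(f"TX_CURR  age={band}  sex={sex}  value={value}")
```

Note how the disaggregations stay as separate dimensions (age, sex), matching how ADX lets you specify categories individually rather than as pre-built category option combos.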
And I think I have a couple of minutes left. So this is, you know, what we're trying to do and where we're trying to go, right? What we're really trying to do is get to improved data quality through this, reduce the implementation costs and time burden, and use as many off-the-shelf technologies as possible. And, you know, all those goods that are to come. So that is really it, and I really thank you for your attention. That's my email if you want to reach out, or use the Community of Practice if you have any questions.

All right, thanks a lot for sharing. Thank you. Interesting to see the two approaches next to each other as well. So we have 10 minutes now for questions, if anyone wants to write in the chat or just unmute and ask questions to any of us. I'll unmute anyone that wants to ask a question, so if you raise your hand, I'll do my best to find you as quickly as possible. Maybe we start with a few that have come up in the chat. So one for Pete: is the code open source, and is it going into the App Hub?

Yes, to both of those. It has to be open source to be in the competition, so it's on GitHub; I can put a link in here. And yes, it's also going to be submitted to the App Hub. I think it's definitely ready to be submitted; maybe I'm just being too perfectionist about sorting a few things out, but it should be there fairly soon. Yes.

Then someone also wrote in the chat that there is already an app in the App Hub called MD Sync, which among other things helps with organisation unit sync. I don't know if you want to elaborate?

Thanks, Olaf. Maybe it's actually better if you can mention a little bit about the different apps you have come across and where we can find them easily. Because the Metadata Sync app, as you said, is one of the applications that allows you to synchronize organisation unit trees. It allows you to do mapping across two different DHIS2 instances.
It allows mapping of metadata within the same DHIS2 instance, and synchronizing metadata or data transfers across different instances with different metadata, using the app. But as you said, there are other applications that are also doing these processes. And I think for all of us who are trying to develop solutions for this process of transferring data, it would be really nice to have an idea of how many apps are out there and who is doing what, so that we avoid as much duplication as possible and try to build on each other's solutions.

Yeah, thanks. So I can also mention that we're working here in Oslo on the implementation guide on this topic of linking tracker and aggregate data. And as part of that, it will be mainly about what DHIS2 tries to support out of the box, but we'll also have a section where we list some tools out in the community that can help with either the whole process or parts of the process. So that's on the agenda, to publish that with some overview of resources useful in this area. And we're also discussing with the interoperability team to what extent we could also provide some tools for facilitating this data transfer from the Oslo side. That will probably not be apps, but some template scripts or similar that countries can adapt.

So, no raised hands or other questions? Yeah, if we don't have any more questions, we can call it there and give people a few minutes to take a break before the next session starts.

Yep. Brilliant. Cool. Thank you so much. If we were in a conference hall together, I'd get everyone to give a round of applause to everyone. Maybe next year, hopefully next year. So I'll just share my screen now so you can see what other sessions we've got coming up. So if you want to stay in this room, we've got the Enabling Innovation by Designing for Accessibility session. Then we've got the second part of DHIS2 for Education, and more on DHIS2 and the data warehouse.
There's LMIS, and NCDs, or non-communicable... non-communicable (I had to say this like three times today recording the update video)... non-communicable diseases and DHIS2. So we'll take a break now; give yourselves a few minutes to go and get a drink. And thank you very much. Thanks, everyone. Thanks.