So let's dive right in. A little bit of backstory first. I'm going to talk about LMIS just for the sake of illustration. For the last ten years or so, people have been asking: can DHIS2 be used as an LMIS? And up until just a couple of years ago, the answer was meh, kinda, sometimes, maybe, it really depends on the context, and we don't really know much about LMIS, so, yeah, maybe. And it turns out that really wasn't sufficient. Saying "use at your own risk" wasn't a message people could get behind. We couldn't get behind it, we couldn't build a roadmap and respond to it, and donors didn't like it. And at the end of the day, that's kind of the important thing. So what did we do? A few years ago, we started actually figuring out where DHIS2 ends when it comes to supply chain. When do you have to push things into another system? When does DHIS2 stop being sufficient? When should you use DHIS2 in supply chain? And then we hired Brno and George, and they came with a wealth of knowledge, and I think we got a lot more organized. Hopefully you see that now in the logistics space: we're a lot more organized, we have a lot more partners, and we have much clearer communication around what DHIS2 does for supply chain and what it doesn't. Which is why you have partners like Medexus physically here saying, this is where we pick up where DHIS2 leaves off. And I don't know about you, but I feel we're in a similar situation with the Master Facility List, or MFL, as we were in supply chain a few years ago. Can DHIS2 be used as a Master Facility List? Probably, to some extent. Where does it end? It really depends on how you're defining it. So what we're gonna do is... oh wait, this is not my most updated slide. Okay, I'm gonna just improvise for a second like I usually do.
So what we're gonna do is, I'm gonna start with a little bit of a definition. And luckily for us, the guy who wrote the definition, Jason Pickering, just came in. Yeah, you might want to leave now, Jason. Your name is literally on the paper. There are 14 functional requirements for a Master Facility List, and we're gonna go through those one by one and say: can DHIS2 do it or not? And I'm thinking this is gonna be highly debatable, to be honest. We're gonna see some very clear points where DHIS2 is not responding to these functional requirements, and that's where my fellow presenters come in. Vlad is going to talk about how they have managed to extend some functionality in DHIS2 for the PEPFAR use case to address some of the gaps: what they've managed to do, and what they're not able to do. And then, when you hit a wall and DHIS2 can't be stretched any further, that's when we have Nathan from TerraFrame, with GeoPrism, coming in as a complementary service, okay? So they're gonna present after me, once I get through some of the basic definition stuff. So what is an MFL? Hopefully you all know this; this definition is coming from WHO and OpenHIE. Essentially, it's the central repository for all geographic data, the facility registry: it's got all of your key facility information. I'm not gonna read the definition to you; since you're sitting here, I assume you know it to some extent, right? Okay, so let's just go through the functional requirements. Like I said, it's a little tedious, but I think we've never had a breakdown of what we can actually do and what we can't do. We've never had this conversation before. Okay, so functional requirement number one, and again, there are 14 of these: the system shall support the ability to create, define, and evolve the attributes and associated data dictionary for a registry. Do we have this in DHIS2? I should have covered up the other half of the slide.
Like, vote yes or no. Do we have it? Yes, no, yeah, sort of. That's what I said. So basically, yeah, sort of. We can update and manage org units and that kind of stuff, but do we have a metadata dictionary? No, not really. Do we kind of have some things that could arguably be called a metadata dictionary? Yeah, maybe. What do we not have? We don't have change logs, right? Org unit change logs. You can scrape them out of the server if you want, but there's no easy way to access changes over time to org units, visualize them, and incorporate them into your analytics or your reports and that kind of stuff. So that's a big, big limitation. Okay, moving on. The system shall support the ability to create, define, and maintain multiple organizational hierarchies of facilities related to geo-objects. Do we have this? I think the important thing here is maintaining multiple organizational hierarchies. Do we have that? Jason is like, kind of, sorta. I am more on the no side of this one. Again, I think it's fairly debatable, and it depends on how exactly you're defining a hierarchy, but in our definition of the concept of a hierarchy, we only have the one, right? Are we planning to do another one? Well, you've been asking for it forever and we've never done it. Are we going to do it? I can tell you right now, it's not on the roadmap. Lars says he has a way to do it; Lars has a lot of solutions that sometimes don't get implemented. But it's not currently planned, okay? So number three: the system shall support the collection of data following the minimal facility attributes. Do we have that? I think so. I think we can make facility attributes. Jason, what do you think? Still good? Okay. We're getting through it. The system shall support the ability to set up and manage user permissions: reading data, writing data, viewing data, and system administration. Do we have that? I think so. We have user management. Is it cumbersome? Yes. Is it difficult at times to manage a lot of users?
Definitely. Are we working on revamping the maintenance app to make it easier? Yeah, we are. But in broad strokes, I think we do have that. At minimum, a facility registry should support the ability to create roles and assign permissions to the roles, okay? Again, I think we have user roles, so that's pretty simple. The system should have a flexible, standards-based API: REST API, got that. The system should have the ability to pull and push data to other systems based upon defined criteria. Yeah, kinda, we do. We have an import-export app. We have the API. There have been a lot of interoperability sessions, and extensibility sessions as well, and I think we could do a lot more in this regard. But as they've defined it, with CSV for example, yeah, we can do CSV imports and exports. The system shall support the ability to do bulk imports. Yeah, not easily, at least in the core, but this is something where the community is really excelling. Every year, we have at least five apps submitted to the app competition that are made to import data into DHIS2, and some of them have won the app competition in previous years. So the community is putting a lot in there, and some of the default solutions that we say you should use are actually community apps; they're not made by UiO. But the core app is there. It works. Probably some of you have a job where you're nearly full-time just doing this, and we don't wanna take your job away from you, because then what would you do? So, the system should support the ability to search for facilities and attributes. Yeah, and I think this has gotten a little bit better with the facility layer that we have in the Maps app, and the org unit profile that we have there now. Almost there.
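To make the API and search requirements concrete, here is a minimal sketch of what a facility search against the DHIS2 Web API can look like. The instance URL is a placeholder, but the endpoint and the `filter`/`fields` query parameters follow the standard DHIS2 metadata API syntax:

```python
# Minimal sketch of a facility search against the DHIS2 Web API.
# The instance URL is a placeholder; the endpoint and the filter/fields
# query parameters are standard DHIS2 metadata API syntax.
from urllib.parse import urlencode

BASE = "https://dhis2.example.org"  # placeholder instance

def facility_search_url(name_fragment: str) -> str:
    """Build an org unit query for names containing the fragment."""
    params = urlencode({
        "filter": f"name:ilike:{name_fragment}",    # case-insensitive contains
        "fields": "id,displayName,level,geometry",  # keep the payload small
        "paging": "false",
    })
    return f"{BASE}/api/organisationUnits.json?{params}"

print(facility_search_url("clinic"))
```

An HTTP GET on that URL, with suitable credentials, returns the matching org units with their IDs, levels, and geometries.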
The system should support the ability to see a facility located on a map. Yes, you can see it in the Maps app. Although, as Vlad and I were just discussing, you can't really see it when you're creating the org unit, right, when you're making a facility. You can't see where it is. So we have lots of countries with lots of examples of making facilities where they reverse the lat and long, and the facility ends up in Paris, France, or they put in double zeros and it's out in the middle of the Atlantic Ocean. So I think we could do a lot more there for when you're actually creating the facility, but you can see where the facility is with the tools we already have in the Maps app. The system should allow public access to view data that is relevant to public health. Do we have that? Who says yes, who says no? A definitive yes. Any no's? Okay, not a no-yes, just a no-no. This is one that's really debatable. Technically yes, you can pull it out, you can push it out, but do we have good core solutions for this? Jason is saying yes. As the analytics product manager, I'm going to respectfully disagree with Jason for the first time in my life and say no, I don't think we do this very well. I think we could do a lot better, and what we do have, I don't think is terribly implementable by most of the people using DHIS2. So I think this is an area for improvement for sure. But is it so specific to master facility lists? It's a general improvement; it's not something we'd do specifically because we want to be a better master facility list. All right, number 12: the system should support facility data curation to manage site changes, closures, openings, service changes. I left this one blank because I was interested in what you guys think. Yes, no, not really? Any kind of soft maybes? Maybe. I think this is a tough one.
You know, we have opening and closing dates. It's really cumbersome, it's not terribly useful, and it doesn't come through in your analytics very easily. We make it very difficult for you system administrators to maintain when facilities are open and closed in the database. We make it very hard for you to assign data sets and programs to facilities, which then throws off your reporting rates, because you do bulk assignments, because that's the easiest thing to do in DHIS2. So facilities may not be opening and closing, but maybe they're changing services, you're adding new data sets to them, and that's a difficult and cumbersome operation. So to me this is a maybe, and it's keeping a lot of people sitting behind the maintenance app for too many hours a day. It's very cumbersome. It's an area we really need to improve. Well, thank you for that. I'm beating up on us, but I appreciate you being a little bit more generous than I am to myself. Okay, so: is DHIS2 a master facility list? What do you think? Okay, well, this is another area where Jason and I are in full agreement. I'm gonna say kind of. To some extent, yes. What are we missing? The two big things, just to recap: we don't have change logs. You can scrape them out of the database if you want, but they're not easily available. And we don't have any way to denote time in the hierarchy, right? So you can't rewind the clock and go back to a previous hierarchy. We've got the one hierarchy as it exists today, and you're gonna visualize data against that hierarchy for all of time, even if, when you captured that data, your hierarchy was very, very different. And countries are constantly changing, right? New districts, new facilities. So these are some of the limitations. And Lars just came in, which is really great timing, Lars. You might wanna leave.
Support for multiple org unit hierarchies: do we do it? No, we do not. Are we going to do it? We don't have it, and I'm not aware that we currently have it on the roadmap, unless Lars has a roadmap that we're not privy to. Will we get to it? Maybe. But will it actually address all these problems? Still no. Are we gonna fill those gaps? Maybe eventually, but not in the foreseeable future. So where does that leave us? And this is what I've been talking to a few folks about: DHIS2 is a good place to start, especially in the education sector, where they've had nothing. DHIS2 is a really good place to start. Just get all your facilities in there, get your schools in there, get yourself organized. Go as far as you can, but at some point you're gonna reach a maturity level where DHIS2, just the core itself, can't be stretched anymore. Which is when folks like Vlad and Nathan come in. So I'm gonna give it... oh yeah, I don't need to talk about that. And then the last thing I'm gonna say is: tell us what you want us to do. But I think Vlad's gonna illustrate that very nicely. All right. Okay. So, hi everyone. I'm Vlad Shioshvili. I work with ICF and I support PEPFAR; many of you have hopefully heard of it. And we have a system called DATIM. DATIM is a DHIS2 system. However, unlike most systems that you usually hear about, it's a global system that PEPFAR maintains for over 70 countries. It has over 110,000 sites in it. But it's not all sites in every country; it's really the sites that PEPFAR cares about, because we are an HIV program. So we don't want to know about every single facility in every country. Take India, for example: we wouldn't want to know every single facility in India, because it would just overwhelm our system. So because of that, each operating unit is tasked with maintaining their facility list within DATIM.
So those 70 countries usually have site admins associated with each country, and they're tasked with creating and updating the facilities within the system. Now, that creates some problems, because you have users who don't always think things through and accidentally create something at the wrong level. Sometimes there's more than one site admin, and they'll create multiples of the same facility, sometimes with exactly the same name under the same exact parent. And once the data is in, it's not easy to delete an organisation unit in DATIM. Until 2.37, there was no merge option in core. It's coming; Lars mentioned it in the opening, and we're very glad, but there's an asterisk for us, because it doesn't go far enough for us, and I'm gonna go over why. And it's not just merges: there are things like deletions, then relocations, and there's a divide between what you allow the site admins in the field to do and what is done by the core team that maintains DATIM. We're tasked with maintaining data integrity, making sure the data doesn't get lost. So with this introduction of merge in the core, we would want to make sure, for example, that site admins are not the ones to use it, and that we would still be the ones maintaining it and making sure nothing gets lost in the process. We've been running DATIM since 2014; that was a pilot year. And this problem started very much in the first year: there were org units that needed to be dealt with. Over the years, what we've done is create an Excel sheet that we give to the site admins, and we tell them: fill this out to tell us what you want changed. It is quite cumbersome. It covers four different operations, addition, relocation, merger, and deletion, and it can be done in bulk.
So what we do after they submit this is use a Python script to parse it into a control file, a CSV file, and we validate it to make sure everything is within the right country. If they want to merge, they give us both a donor site and a recipient site. If they're relocating, we check that they're relocating within the same country and at the right level, because this is not just at the facility level; it can be any level. And the biggest thing usually is redistricting. Kenya went through redistricting a couple of years ago, where they just shifted the geopolitical hierarchy: they introduced a new level in between, and then there was swapping and shifting across boundaries, and doing all of that manually is not something anyone would want to do. So: addition is pretty straightforward. It's something the site admins can do themselves through the maintenance app; they have access to it. But if there are a hundred new facilities being onboarded to provide PEPFAR support, they don't want to sit in the maintenance app and create a hundred facilities. So they just fill it out. That's the easiest one. Deletion is usually trickier. It doesn't happen a lot, because usually it's a merge, an accidental duplicate or something, but there are cases where something just needs to be deleted. It's not a simple closure; they want it deleted and removed. And you cannot just go in and delete it in DHIS2. You can try to delete it, but if there is something associated with it, that's going to prevent you from doing it. So Jason Pickering is one of the people who created the SQL code that we use; it goes in and scrubs every location where this org unit appears, including data and all the related tables. The update is usually more generic and flexible: relocation, rename, change of the code, change of the short name, things like that. Also pretty straightforward. And then there is the merger.
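The kind of checks described above could be sketched roughly like this. The field names and rules are illustrative, not the actual DATIM control-file script:

```python
# Hypothetical sketch of the kind of validation the control-file script
# performs. Field names and rules are illustrative, not the actual script.
VALID_OPS = {"add", "relocate", "merge", "delete"}

def validate_row(row, known_units):
    """Return a list of problems found in one control-file row.

    known_units maps an org unit UID to its country and hierarchy level.
    """
    errors = []
    op = row.get("operation", "").lower()
    if op not in VALID_OPS:
        errors.append(f"unknown operation: {op!r}")
    if op == "merge":
        donor = known_units.get(row.get("donor_uid"))
        recipient = known_units.get(row.get("recipient_uid"))
        if not donor or not recipient:
            errors.append("merge needs an existing donor and recipient")
        elif donor["country"] != recipient["country"]:
            errors.append("donor and recipient are in different countries")
    if op == "relocate":
        unit = known_units.get(row.get("uid"))
        parent = known_units.get(row.get("new_parent_uid"))
        if unit and parent:
            if unit["country"] != parent["country"]:
                errors.append("relocation must stay within the same country")
            if parent["level"] != unit["level"] - 1:
                errors.append("new parent is at the wrong level")
    return errors
```

Rows that come back with an empty error list would be safe to apply in bulk; the rest go back to the site admins.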
So when I saw the 2.37 introduction of merge, I had a look, and it has some of the features that we use, but there are some that we use that are not there. So you have a donor side and a recipient side, and sometimes there is an overlap of data; sometimes there isn't. If you're familiar with PEPFAR, you know that there is an additional dimension in PEPFAR data, which is the implementing partner, via a mechanism. So every data value has a mechanism. For a facility, when you have, for example, a hundred people tested, it can be reported by two different mechanisms, and those are not overlapping data; they remain separate data within that facility. But it does happen sometimes that you have two data points, one on one side and one on the other, at exactly the same facility, by the same partner, and they need to be merged together. And this merge can be done with one of six options that we allow users to select: sum, average, min, max, first, or last. I believe only sum and last are available in the 2.37 merge options. So this, for example, would be one of the things that we would either ask to be added, or we'll just continue using our approach until there's a way to do it. But then there are also the data types and how you deal with them, because not all the data PEPFAR collects is numeric. There are Boolean ones, and there are textual ones. Narrative text is usually the trickiest: do you join them together? Do you take the last one? It's usually the last one, because merging two text fields doesn't make a lot of sense, but that's what we have. So I'm gonna try to wrap up here. This was going to be a different presentation, about the MOH alignment that we do. It's an attempt by PEPFAR to harmonize the data with the ministries of health and see how close we are with the data that they collect.
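The six merge strategies just mentioned boil down to something like this. It's a sketch; treating "first" as the donor-side value and "last" as the recipient-side value is an assumption for the sake of the example:

```python
# Illustrative sketch of the six value-merge strategies described above.
# Treating "first" as the donor's value and "last" as the recipient's is
# an assumption for the sake of the example.
def merge_values(donor, recipient, strategy):
    """Combine two overlapping numeric values for one facility/mechanism."""
    strategies = {
        "sum":     lambda a, b: a + b,
        "average": lambda a, b: (a + b) / 2,
        "min":     min,
        "max":     max,
        "first":   lambda a, b: a,  # keep the donor-side value
        "last":    lambda a, b: b,  # keep the recipient-side value
    }
    return strategies[strategy](donor, recipient)
```

For Boolean or narrative-text values, only strategies like "first" or "last" make sense, which matches the "take the last one" rule described for text fields.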
And in the process we needed to get the data from ministries of health into DATIM, and we wanted to do it as seamlessly as possible, so that the mapping would be reduced to the minimum. What we did is use a custom attribute within DHIS2 to add an additional field to the org unit metadata object, where we capture the ID from the country's ministry of health system. In most cases it's just the UID of their DHIS2 system for a particular org unit. But when you have 6,000 or 7,000 facilities in a country, you cannot do this manually. Names don't always match. The hierarchy is not always identical. So we used a tool called GOFR, which IntraHealth developed; we had input and helped them, but they were the ones behind it. It allows you to basically use two DHIS2 instances, or a CSV input and a DHIS2 instance, I think, where it will analyze the proximity of the facility names and the hierarchy, and whatever it can resolve, it'll resolve on its own. Whatever it cannot, you just go in and manually select the facilities and say, these two are exactly the same. And then you just have the attribute in DHIS2 that says, okay, this facility also has this ID, and that's the ID in the MOH system. So these are the ways we've worked around the core features that DHIS2 provides, and we'll provide more feedback as we go along, and hopefully most of it will be incorporated, so that we don't have to use custom SQL scripts. Thank you. I think I've taken over. I have. Can you guys hear me? Good. Excellent. Well, hi, my name is Nathan McEachen. I'm the founder of TerraFrame. TerraFrame develops open source geospatial solutions that leverage something called spatial knowledge graphs, which I'll touch on briefly. So I'm here to talk about a couple of things.
One is this notion of a common geo registry, and how a common geo registry can enable interoperability across multiple information systems, including DHIS2, using common geographies: the same representation of geometries and geographies across multiple systems. And then I'm going to give a brief demo, I don't have a whole lot of time, of an implementation of the CGR concept called GeoPrism Registry, which is what we developed, and it's open source. So I don't have to tell you guys this, I can go through this pretty quickly, but in order to attain our public health goals, we need to integrate data by geography, data from multiple information systems, in order to get the big picture, identify where the problems are, and then also provide decision support. But the common problem is that each health information system in an ecosystem typically has its own copy of the geographies. Why is that a problem? It's a problem both in terms of technology and in terms of process, right? These silos lend themselves to having multiple copies of these locations that aren't necessarily authoritative. And unfortunately, it's a glaring omission in the field of GIS in general: if you look at the metadata standards in ISO, the time component isn't really specified, right? So you may have a shapefile of boundaries, but there's nothing in the metadata that says when it was valid. You may be looking at something that's valid between 2000 and 2008, but that's handled sort of outside the metadata specification. Also a common challenge: most information systems have a single hierarchy, but one hierarchy, one size, typically doesn't fit all. A health aggregate reporting system has a specific hierarchy need, which isn't necessarily the hierarchy that a master facility registry needs. And it's also not necessarily the administrative hierarchy.
So when you have these different information systems, let's say you've got your client registry, your health worker registry, your master facility registry, your aggregate reporting system, they all have their own individual hierarchies, but there are objects in common across them. Those objects aren't necessarily mastered by the authoritative source, and typically it's difficult to propagate change. For example, and it's been mentioned here already, if a district is renamed or a district splits, how does that change propagate to dependent hierarchies in other information systems? Even within a single information system, it can be very difficult. And how do you keep track of that over time? Another reason why it's very difficult to integrate data from these different systems is that the org units themselves aren't necessarily represented the same way: different IDs, not necessarily a standard way of spelling, not necessarily in the Latin character set. So from a machine-readability standpoint, this is not something where we can just do a string comparison to determine semantic equivalency between different locations. And this is also a really big challenge, the time component again, right? You may have operational data that was collected in 2020, but your geographies are from 2018, and in 2019 a district split. It's a general challenge: first of all, how do you capture the time component, so that you're aware this is actually going on? Because if you're not aware of what is going on, the time component of your geographical data is lost, and then when you do spatial analysis, it's not going to be correct. So a common geo registry is sort of an emerging concept. It's a mechanism for providing a single source of truth for geographical data. A CGR is responsible for managing org unit identity, classification, geometries, and relationships to other org units as they change over time. That's it, full stop.
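The "as they change over time" part can be sketched with a small model in which every attribute value carries its own validity period, so you can ask what a unit looked like on any given date. The names and structure here are illustrative, not GeoPrism's actual schema:

```python
# Sketch of the temporal idea behind a CGR: every attribute value carries
# its own validity period, so you can ask what a unit looked like on a date.
# Names and structure are illustrative, not GeoPrism's actual model.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class TemporalValue:
    value: object
    valid_from: date
    valid_to: date  # exclusive upper bound

@dataclass
class GeoObject:
    uid: str                                   # stable identity, never changes
    names: list = field(default_factory=list)  # list of TemporalValue

    def name_on(self, when: date):
        """Return the name that was valid on the given date, if any."""
        for v in self.names:
            if v.valid_from <= when < v.valid_to:
                return v.value
        return None

# Example: a renamed unit keeps one UID but two name versions.
unit = GeoObject("xyz123")
unit.names.append(TemporalValue("Anduin District", date(2000, 1, 1), date(2016, 1, 1)))
unit.names.append(TemporalValue("Anduin County", date(2016, 1, 1), date(9999, 1, 1)))
```

The same pattern applies to geometries and parent relationships: the UID is the single source of identity, and every other fact is versioned against time.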
So when it comes to a facility registry, the goal of the CGR is not to be that. In our implementation of the CGR concept, you can define something called a health facility type. You could add a field, an enumerated field, that lists all of the services provided by the health facility. You could add a field called number of beds. But that's not the role of a CGR, because those fields should be mastered in a facility registry. Rather, it's the objective of the CGR to provide these health information systems, in, say, an HIE ecosystem, with authoritative geometries and authoritative hierarchies for the correct time periods. So the idea is that, either within the CGR or external to it, this would enable integration of data by location, by providing multiple information systems with a single source of truth for how those locations are represented: one ID that's used across the ecosystem. So how do we accomplish this? Relational databases have a lot of great strengths, but they also have some significant weaknesses when it comes to hierarchies. I think I'm speaking to a crowd that understands the difficulty of trying to add org units below village to an org unit tree in any system that has a single hierarchy powered by a relational database. Relational databases do not handle hierarchies very well, but graph databases are fantastic for this. There are caveats, of course, but with a graph database there is almost no limit to how deep your hierarchy can be or how many nodes you have. It breaks down at some point, but rather than worrying about tens of thousands or hundreds of thousands of nodes, you're thinking in terms of tens of millions, hundreds of millions. What's also great about a graph database is that as you get more data, the graph tends not to slow down either. Okay, so GeoPrism Registry is the application my company has developed. It's open source.
It was funded through the Digital Solutions for Malaria Elimination initiative. It leverages this notion of a spatial knowledge graph to provide a single source of truth for geometries, relationships, object identity, and classification. What we're looking at here is sort of an internal representation, in a graph, of how GeoPrism would manage those multiple hierarchies that were depicted in the previous slide. The idea is that within the graph itself you have a single source of truth by having one object node per org unit. So even though a province may appear in multiple hierarchies, there really is only one representation of it in the graph, so that as that province is split, or its name changes for a particular time period, all dependent hierarchies are automatically updated with that change. Something else we've addressed is a mechanism for handling the impedance mismatch between type systems. In one system, you might have a health facility data type and that's it, just one type, one class. Another might have four. Well, how are they related to each other? With the graph database, you have a lot of flexibility for creating different types of relationships to manage that complexity. So the goal with GeoPrism Registry, or any CGR (ours happens to leverage a spatial knowledge graph, but technically it doesn't need to; that's just an implementation detail), is that any hierarchy that has dependencies on information mastered somewhere else is automatically updated. Does that make sense? Okay. We've also added support for what we call historical events. These are changes that go beyond just value changes, attribute changes, relationship changes. I'll take a step back here: one thing that the graph does model is an independent time component for every single attribute, geometry, and relationship.
The ID doesn't change; that's fixed. But the idea is that when you are labeling a time component for every change, the graph can actually produce historical views of what the data looked like. So we have a mechanism for publishing vector tile layers and for publishing tabular lists, which I'll show in the demo if we have time, which means I should probably hurry up. The idea is that we can produce historical views going back in time. Okay, and for more complicated things like splits, merges, combines, upgrades, reassigns (two counties going to three, for example), there is a mechanism for specifying, at a particular point in time, what became what. By the way, the data you'll see here is all fictitious, inspired by the Tolkien novels. So in this case, the new Lagone county was created, and it got part of its territory from Anduin and Eladone. We think we can model any permutation. We do have a synchronization configuration for DHIS2. Basically, using our terminology, you specify which geo object type maps to which org unit. A geo object is sort of a more abstract concept than an org unit, and can model many different things. But in full disclosure, this works well in very simple cases; in more complex cases, because people are using org units in multiple different ways, even though we have this tool, which is great, there are instances where, for integration, we probably need to go to the API to provide a lot more flexibility. For example, if an org unit level is being overloaded with both villages and health facilities, we don't really cover that scenario very well in the graphical tool. It's being used by the DPC in Laos: their health facility master list website is pulling from GeoPrism Registry, which is then updating an instance of DHIS2. For specific questions about that implementation, I'd be happy to talk to you. Okay, so do we have time for a demo? Five minutes? Demo or questions?
Okay, so I'll just have to breeze through this really quickly. We're in GeoPrism right now, and we're looking at the geo ontology definition: defining all of what we call geo object types. More importantly, I'll touch on the hierarchy component, the multiple hierarchy component. Here we're looking at an administrative hierarchy; here, a referral hierarchy; and here, a health geographic hierarchy. And there's something very significant here: this hierarchy leverages the administrative hierarchy. So Shire is a fictitious geopolitical entity. And this is also multi-tenant: the idea is that a ministry of health manages the relationship between health facilities and Shire, but they are not responsible for managing the relationship between Shire and County, because that's the ministry of home affairs' responsibility. So as those relationships between Shire and County change, those changes are automatically reflected in this hierarchy as a dependency. We also have a means of showing how different geo object types are related to each other. Okay, not much time here. We also have a mechanism, like I said before (oops, that's not where I wanted to be), for publishing lists and layers. Here we have all of the geo object types that are modeled, and when publishing a tabular historical snapshot, you can publish a table for a single date, on a frequency basis such as annual, or for intervals. And here we're looking at multiple lists for health posts. Now, again, our role is just to manage the geometry component, and even then the geometry might potentially be managed in an MFL, but it's the hierarchy here. So if we look at a list of health posts, we'll see we have hierarchy components from the referral hierarchy and also from the geographic hierarchy. And we've done an integration with the GOFR tool that Vlad mentioned, using mCSD from FHIR.
And just really quickly, because we're almost out of time, I'll show you what it looks like on a geo object. Here we're looking at an individual geo object slash org unit and, because everything has a time component, we're looking at what's called the periods of stability: looking at the object and how it changed through time, and these are the periods for which it was stable, right? So attributes, relationships to parents in multiple hierarchies, and also geometries all have an explicit time component. I can also view all time periods, and here you can see how the health facility name has been changed over time. And there's one really quick but key point here: we are preserving, or recording, the history, but we're also preserving the versioning of the history. So you might have a health facility master list for 2010 as it was understood in 2010, but then errors were corrected in 2016. So you have the 2010 list as it is understood today and the 2010 list as it was understood yesterday, which needs to be preserved because operational decisions were made against it. That's also something that GeoPrism manages. So I think we're probably ready to stop. So my question is this: in large countries like ours, which know about these things, we've been trying to configure an MFL in DHIS2, and one of the biggest problems we have is the departments. In one district office or one provincial office, you have multiple departments, and they each have their own hierarchy; I have been supporting one of them. With this tool we can try to configure that, but how can we get the data shared across to GeoPrism to visualize the actual data for that particular timeline? Is there an external connector, or how best can we collaborate to get that in? This question is for both of you.
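The "versioning of the history" point above is what temporal databases call bitemporal data: every fact carries both a valid date (when it was true on the ground) and a recorded date (when the system learned it). A minimal sketch, with all names and dates invented for illustration, shows how the 2010 list "as understood in 2010" and "as understood after the 2016 correction" can coexist:

```python
from datetime import date
from typing import Optional

# Bitemporal sketch: each fact = (valid_from, recorded_on, value).
# The 2016 row corrects what was true back in 2010, without erasing
# what the system believed before the correction.
facts = [
    (date(2010, 1, 1), date(2010, 1, 1), "Bree Health Post"),
    (date(2010, 1, 1), date(2016, 5, 1), "Bree Community Health Post"),  # correction
]

def name_as_understood(valid: date, known_by: date) -> Optional[str]:
    """Latest recorded fact that was valid at `valid` and known by `known_by`."""
    candidates = [f for f in facts if f[0] <= valid and f[1] <= known_by]
    return max(candidates, key=lambda f: f[1])[2] if candidates else None
```

Querying with `known_by` in 2010 reproduces the list as decision-makers saw it then; querying with a later `known_by` gives the corrected view, which is exactly the distinction the speaker says must be preserved.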
Yeah, so the dogmatic answer to that is that the role of the CGR is to facilitate that integration outside of the CGR tool; it's not meant to handle programmatic data. It's easier to integrate that data if geographies are represented the same way across multiple systems. Now, having said that, the underlying technology of GeoPrism does have an abstraction for associating programmatic data with locations over periods of time, so at the API level there is some support for that. It's just that, within this context and given the objectives of the funders who funded some of this work, handling programmatic data is not our mission. Yeah, thanks for this very nice presentation, very interesting. We do get a lot of these requests for multiple hierarchies and also for storing the snapshots, the changes to the hierarchy over time, right? And I think this software shows that it can be done in a very elegant way. Now, inside DHIS2, these things become a bit more complicated, because in DHIS2 almost everything is linked to the org units in some way: the data sets, the data, the programs, tracker, and so on, which are all linked to the org units. So it's not that straightforward to just change things easily, because it might break relationships to these entities. And I'm also a little bit concerned about introducing too much complexity within DHIS2, because you know that many countries struggle to maintain even a single hierarchy. So if you now add multiple hierarchies for multiple departments, and you also support multiple versions back in time, you have a matrix of hierarchies to support, basically, which may run into many. So asking all these departments, or a ministry of health, to support a very high number of hierarchies could also be a challenge and something that might be problematic.
So I think the question then is: do we start on this journey and implement all these features, the time aspect, the multiple hierarchies, inside DHIS2 with all the complexity that comes with it, or do we rely on integration with a tool like GeoPrism? I think John makes the point that linking back to the service delivery data and so on is always going to be a challenge, because one thing is storing this information, but the other is producing reports, because people will then ask: show me the reports for the hierarchy as it was back in 2016. And what if the hierarchy changed multiple times over the aggregation period? How do we calculate these numbers now? So we hear you loud and clear, we need to support this at some point, but I also want to be careful that we manage the complexity and don't create a system that's too complicated for the ministries to maintain. Yeah, I would say that managing this complexity for hierarchies alone is a lot; that's why we're also trying to stay away from the programmatic side of the data, because that's even more complex. And I think that's the final word. So just a quick reminder, please hustle back over to the main auditorium at, I think, 11:55; even if the presenter's not done talking, just leave, because we've got to start the app competition right at 12. And that's the coolest part of the week, so don't miss that. Please be back over there at 12. Thanks.