Okay, everyone, so I'm going to switch gears a little bit right now. We have spent most of the morning talking about different data types that are part of an RHIS. And what I'm going to talk about is thinking through all these data types and how we can look at the data quality of these various integrated components. I'm just going to start with some baseline principles here, just to make sure we're all on the same page. I'm going to describe some basic principles of data quality. I'll then identify some specific tools in DHIS2; I'll describe some of them, and I'll also demonstrate some of them, so you can get a better picture of what they actually look like. And I'll also describe some of the principles of integrating data quality within a health information system. All right, so a number of the data quality components that we are reviewing, or that we will be discussing, are based on the data quality assurance toolkit that WHO has developed. I've posted the link in the presentation. There are three different modules that are available. Many of the principles that we pull from, a lot of the features that we demonstrate, a lot of the items that we will discuss are based on these three items. So if anyone is familiar with these, then some of this might link back to that guidance. So when we're looking at data quality, there are different frequencies at which we can assess it, right, and you probably have some idea of this in your own context or in your own setting: how often do you actually look at the data? Now, where we typically run into problems is reviewing that data routinely. In many contexts that we typically work with, they will look at the data towards the end of the year. They'll run an annual analysis. At that point, they will review the data and provide feedback on its quality.
Another area where we typically have many challenges is sending that feedback down to the source of the data itself. So yesterday during the exercise, you learned a lot about how the source data can be incorrect based on how people are tallying things, for example. And often the discussion around this is that review happens mainly at the central or provincial levels, and there's not enough information fed back to the lower levels of the system that are responsible for reporting that data. So our job is actually to take all this different data that we're collecting and figure out innovative ways to present it back to individuals who can then interpret it; to try to make it a bit easier, basically. That way we can spread out the burden of this review of data, right? It shouldn't all be at the central level, because that makes it difficult to actually determine whether or not your statistics are correct. I've separated this out into a couple of different periods. We have a weekly or monthly review, which is hopefully being conducted at lower levels of the system; the idea would generally be to try to implement procedures that allow people at lower levels of the system to conduct that analysis routinely. Then the annual reviews, which can be a bit more time consuming: they might involve some other experts, you might bring in an outside expert or external advisor, or they might involve a central team of experts who are very well versed in these data quality principles. And then you might have these periodic reviews, which are often program specific. You might have a malaria review: someone might come in from outside, or you might have your own internal team, and they will conduct a thorough review, not just of the data, they will conduct service delivery reviews as well, but there will be some focus, of course, on the quality of that information for that program.
So we're going to try and break this down a little bit in terms of how we can feed into this process a little more routinely. Within those different types of reviews, we have different types of data quality review that we can perform. This includes verification surveys. This means that we're actually traveling down to the location, a facility or a district, and actually verifying their data. So there is some travel required, and speaking with some of you today and yesterday, I understand that some of you are already doing this: actually going physically to verify the data at a specific site, a district, or another physical location, right? We can also do what we call a desk review, where, at whichever level you're at, provincial, national, district, you're performing a joint review of the data without having to physically go to a site. So this would mean that the data is reported to you, you have it on hand, and you're able to actually review that data through some processes that you've defined. We also have some built-in tools. I put validation rules as an example, but we have some tools within DHIS2 that will allow you to review that data at the point of entry, or later on at another time. So there are also some tools to facilitate this process within the application itself. We're just going to discuss a couple of different metrics that we often use for data quality. There might be additional ones; I'm not saying the ones I'm presenting are the only ones, but these are ones we might typically assess. And in particular there are many tools in DHIS2 to support the review of these specific metrics. So as an example here on screen, and I apologize, the screenshot is a little difficult to see, but I will do a bit of a demo so you can see some of these as well. The first metric is data set completeness and timeliness.
I say data set, but this is just your form, your tally sheet from yesterday; essentially the same concept. Okay, and we apply this concept typically to aggregate data for the most part, right? We have case-based data as well, but it's a little less intuitive to apply the concept of completeness and timeliness when you're reporting every individual case. There are ways to do that, but it's more difficult to aggregate those measures. Okay. So, what we've noticed is that this measure in particular is okay in many places. It's improving in a lot more countries, especially as they implement electronic tools, whether that's DHIS2 or something else, regardless of what they're using. Sometimes this is okay. But what we want to emphasize is that this is not the only measure of quality. Right, a lot of people have the idea that, well, the data is being reported on time, I can see the data, therefore my data processes are okay. Another metric that we often use is consistency over time. This in particular is a very good one to use because, at very different levels, it can be very easy for people to interpret. You don't necessarily need a lot of training to have people identify values that are inconsistent with previous ones. There are often big spikes, or dips where data is missing, and these are very easy to spot. Right. We also have things like outlier analysis. This may refer to either internal consistency, where you might be comparing two variables you're collecting within your system, or external consistency, which Olaf mentioned earlier a little bit, right, where you're comparing a variable you're collecting with some outside source from another system, or maybe a survey or a census or something like this, to determine if these are compatible with each other. And then another important item, and Olaf had briefly discussed this as well, is population data.
This is often a very big challenge, where we have multiple sources of population data, whether it be ones that you calculate via your census and then compare with survey data or other sources, or even within the same system, when there are multiple populations. It then makes indicators very difficult to compare across programs. For example, if your malaria program is using a different denominator than your TB program, or any other program, it makes it more challenging to actually compare or contrast these measures, because the denominators are not consistent across these programs. Alright, so before I proceed, I just want you to log into Mentimeter here and we'll start this up. Okay, so just go to menti.com, either on your computer or your phone, and enter in this code. Okay, some of you are responding already, that's great, but I'll just keep this up so everyone can join and answer the question. You'll see the question on screen in a moment. Okay, so: is it difficult to train people on data quality? That's the question. I'll just give everyone a moment to respond. So I can see a lot of people gravitating towards the middle there. We'll discuss this in a moment; keep these results in mind as we go through some of these concepts. Okay, so we do have a couple saying yes, we have a distribution of responses here, but the majority are in this middle box where they've said it's somewhat difficult. Okay, and we can try to break that down a little bit and see why. A couple more responses coming in there, fairly consistent. So do me a favor and leave this open; I'll ask you a couple more questions as we go through the session. So just leave this Mentimeter open, I'll refer to it in a few minutes. But keep these in mind as we go through the rest of the presentation. Okay.
Okay, so we're going to play a little game, and it's very simple. I'm just going to present a couple of visualizations, and all I want you to do is think about whether any of the values look incorrect. So we're trying to find out if there are any outliers in the data that I'm presenting. First one here: what do you think? Have a look at the data. Is there anything wrong with any of the values? We're trying to spot any potential outliers or problems with the data that we're seeing here. Do you see anything? Yes, right. It's pretty obvious, right? Now, I was deliberately making that one easy. You can see these green points; they're really out of line with the rest of the data. Right. So that's our number of ANC first visits. It's just trending upwards, with these random spikes, right? So that was really easy to spot, so let's have a look at another one. What about this one? Is there anything wrong with this data? This is a dropout rate. Okay, so some of you might not be familiar with that term, which might make it difficult. But what do you think, is this okay? It's not okay, right? Well, what's wrong with the data? Okay, so for those of you who aren't familiar: these negative dropout rates at the beginning, these ones here. Right. So this is problematic. There are many reasons why, and we don't have to get into all of that now, but you can spot it very easily on this type of visualization. Right. What about here? Can we spot anything? It might be more difficult because of the slide. Okay, yes, so a lot of people said yes. Now just think about that for a moment and refer back to the previous question that I asked: is data quality hard to train on? A lot of you said somewhat, and I would tend to agree with that. But it is also our job to try and make these as interpretable as possible. We try to make it as easy as we can, because this means other people can do this, with no previous exposure.
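As a rough illustration of spotting those spikes programmatically, here is a minimal sketch of a consistency-over-time check: each month's value is compared to the average of the preceding months, and months that deviate by more than a chosen fraction are flagged. The series, the 50% threshold, and the function name are all invented for illustration, not a DHIS2 feature.

```python
# Hypothetical consistency-over-time check: flag months whose value
# deviates sharply from the mean of all earlier months.

def flag_inconsistent_months(values, threshold=0.5):
    """Return indices of months whose value differs from the mean of
    all previous months by more than `threshold` (as a fraction)."""
    flagged = []
    for i in range(1, len(values)):
        baseline = sum(values[:i]) / i  # mean of earlier months
        if baseline == 0:
            continue
        change = abs(values[i] - baseline) / baseline
        if change > threshold:
            flagged.append(i)
    return flagged

# Made-up monthly ANC first visits; month index 4 is an obvious spike.
anc_first_visits = [410, 395, 430, 405, 1250, 420]
print(flag_inconsistent_months(anc_first_visits))  # -> [4]
```

The point is only that a rule this simple already catches the kind of spike anyone can see on the chart, which is why these checks are easy to hand to facility-level users.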
I just showed you a couple of visualizations outside of DHIS2, and many of you were automatically able to engage with those visualizations and identify a potential issue with the data that I was showing you. Okay, so that is also part of what we're doing. We're bringing all this data together; you can imagine now if we have TB, malaria, and ANC data all in one place. And then we have to come up with ways to mitigate challenges that we've had in the past, because those won't go away just because we're creating an integrated system. Data quality issues won't just disappear. Right. They will still be there, except now we'll have them for many different programs, all in the same system. So it's also part of what we're doing to simplify these a little bit. Okay, so we try to keep it simple when possible. I know we can get caught up in the details. What is a normal distribution? What is a standard deviation? Okay, there are terms, when we refer to data quality, that can be difficult to train on a wide scale. But that doesn't mean we can't train others to help support this process. Right. So I'm not trying to dumb it down necessarily, or make it easy just for the sake of doing so. That's not what we're trying to accomplish. But we can make it a little bit easier, so we can make routines easier for others to use. And by doing so, it allows more people to reflect on their information more frequently. And that is something that I think we can try to achieve. So if we're training, in particular, facility users or users at other levels who may not have as much training in this area, who may not be as familiar with digital tools, who may not have as much background in public health or biostatistics concepts, right, the idea is we try to make this as simple as we can.
That doesn't mean we don't need to build capacity at other levels to really understand how to utilize these tools, what these values represent, how they're created. Right. So I have an X here across this normal distribution, and for a wide variety of users that can apply. But then there is, of course, that subset of users where you would want to explain exactly what a normal distribution is, because maybe they're coming up with routines for measuring your data quality, and then they need to learn that. So that's why I tend to agree with "somewhat" when I ask whether it is difficult to train data quality concepts. I would say somewhat, right, because yes, that national team, or whichever team is really responsible for implementing these concepts, does need to learn a bit of detail. But then there are things that we can do to make it nice and easy, right, where without any previous training, just from your own backgrounds, it's really easy to work with some of these visualizations. Okay, so let's look at another one. What do you think here? Is there anything wrong with the data? This one's a little tougher, right? This happens. This is an example that I've taken from a visualization at the national level. We see a little increase, and we're not really sure, just by looking at it, whether there is a problem or not. Some of you are thinking there's a fluctuation, but you can't really confirm whether that's problematic. So in some cases it can be more difficult to identify when these issues are happening. Right. In that case we have other tools, and I'm just showing you here: this is the value in one district, in a specific facility, for the same data element, where we see this increase of 33,000. Right, that's what's causing this increase. So sometimes you might need to dig a little bit deeper.
So I say be careful with sensitivity, because in particular, if you're only viewing values at the national level, then you might get outputs like this, and it might look okay at first. Right. But the moment you dig a little deeper and check the cause of those values, then it can be a problem, and you do need routines to identify that. And in particular, this is why it's very helpful to have more eyes on the data. Because if this same chart were recreated within that facility, they would automatically see this big spike in their data, and they would be able to identify it immediately. Right. So there are some ways that we can help people along when we utilize these features; it's not really just about the features themselves. I will describe those, but it's really about the routines that we try to implement to help people with this kind of thing, and what we focus on when we're showing them how to work with these features. It doesn't always have to be so complicated, and I'm going to keep hitting that message home. So, shifting gears a little bit, I'm just going to discuss a couple of the specific features that we have available to review data quality within the system. Here's an example of a scatter plot with various outliers. I think, once again, you can kind of see, right? I won't say anything, but have a look real quick. If I look at this without any prior idea of where the potential outliers are, it's a bit more challenging, right? But even still, a lot of people were pointing to this box over here on the side, these two red dots in particular. Right. So there are a lot of tools that we can use; this helps us. This does use some principles that we would need to understand, right? How do we actually capture what the outlier is? You need to understand the different methods that are available.
You set this up so it's not just spitting out nonsense. Right. So someone does need to understand those different methods that are used; in this case it's called the interquartile range (IQR). You do need to understand some of those concepts, someone does at some level, but then, as long as you've verified that what you're doing is correct, you can share those outputs with more people, and the more eyes you have on the data, the better I think the outcome will be. Okay, here's another one, looking at completeness and timeliness. Now, what do you think here, just looking at this: are there any problems, or are things looking okay? I mean, would you be happy with this? Just focus on one line; I know it's a lot, but have a look at the purple line, for example. It's going up and down and up and down and then up again, like this. Right. So sometimes there are other challenges with regards to this that we have to consider. What actually happened? What created this spike? Was it that more facilities all of a sudden came online, when they were supposed to be reporting the whole time and they hadn't been? So, when you think about the quality of your data, there are issues in the information that might be directly impacted by other implementation-related matters. So here, if you were working in this country, you might have some more context as to why that spike happened, or why there's this fluctuation, why the same facilities aren't reporting every month. I think in most contexts no one would really be happy with this fluctuation. Right. Maybe they're on vacation, maybe they're burnt out, maybe they decided not to submit, maybe they didn't want to sit there with the register. There are probably a number of reasons, right?
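To make the IQR method mentioned above concrete, here is a minimal sketch of interquartile-range outlier detection. The quartile interpolation is one common convention, and the monthly case counts are invented for illustration; this is not DHIS2's internal implementation.

```python
# Sketch of IQR-based outlier detection: values outside
# [Q1 - k*IQR, Q3 + k*IQR] are flagged (k = 1.5 is a common default).

def iqr_outliers(values, k=1.5):
    """Return values falling outside [Q1 - k*IQR, Q3 + k*IQR]."""
    s = sorted(values)
    n = len(s)

    def quartile(q):
        # simple linear-interpolation quartile
        pos = q * (n - 1)
        lo = int(pos)
        frac = pos - lo
        return s[lo] + (s[min(lo + 1, n - 1)] - s[lo]) * frac

    q1, q3 = quartile(0.25), quartile(0.75)
    iqr = q3 - q1
    low, high = q1 - k * iqr, q3 + k * iqr
    return [v for v in values if v < low or v > high]

# Made-up monthly case counts with one obvious bad entry.
monthly_cases = [120, 135, 128, 131, 119, 890, 125, 122]
print(iqr_outliers(monthly_cases))  # -> [890]
```

This is the kind of rule a central team verifies once, and everyone else simply reads the flagged values.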
So sometimes it's not just an entry error either, right? Some of these are pretty obvious: someone probably entered the wrong value. You probably didn't go from 400 HIV tests to 33,000 in one month. Right. But as to why people aren't submitting the data all of a sudden, maybe you will know, maybe you won't. It might be worth checking with the facility. So there are operational issues that are sometimes tied to this. As much as we have tools to help with this, these are mainly meant to help you identify potential concerns with the data. But that doesn't mean that you won't have other issues, operational issues potentially, that you might have to follow up on as well. For example, looking at reporting rates, looking at percentages instead of raw numbers. Okay, so a couple of notes; I mentioned these earlier, but I've written them down just so you can get a good sense. In DHIS2, when we're working with completeness and timeliness measures, these are based on the reporting period of the data that you are capturing. If you're capturing data every month, you're able to calculate completeness and timeliness measures every month; if you're capturing it every week, same thing, right, even if it's an epi week that starts on a Monday or something. Okay, same thing: it's based on the week in which you're reporting that data. And then also, as I mentioned briefly, we typically apply these measures to aggregated data. So if you're looking at things like case-based data, where you're reporting every case from a register, or every confirmed surveillance case, for example, there are many ways to collect that information, but it might be harder to apply measures of completeness and timeliness to that type of information.
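The reporting-rate idea described above can be sketched in a few lines: completeness is reports received divided by reports expected for the period, and timeliness additionally requires submission by a deadline. The facility names, dates, and deadline below are all hypothetical illustration values, not DHIS2's actual calculation code.

```python
# Sketch: completeness and timeliness for one monthly reporting period.
from datetime import date

expected_facilities = ["Facility A", "Facility B", "Facility C", "Facility D"]
deadline = date(2023, 2, 15)  # e.g. January reports due by Feb 15

# (facility, submission date) pairs actually received for January
received = [
    ("Facility A", date(2023, 2, 10)),
    ("Facility B", date(2023, 2, 20)),  # submitted, but late
    ("Facility D", date(2023, 2, 12)),
]

completeness = len(received) / len(expected_facilities) * 100
on_time = sum(1 for _, d in received if d <= deadline)
timeliness = on_time / len(expected_facilities) * 100
print(f"completeness {completeness:.0f}%, timeliness {timeliness:.0f}%")
# prints: completeness 75%, timeliness 50%
```

Note that both measures only make sense when you know the expected reporter list for the period, which is exactly why they fit aggregate data sets better than case-based reporting.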
So just keep this in mind, if you have a current system or a current process for reporting completeness, when you're trying to think about how you would get these measures set up within a DHIS2 system, for example. Validation rules are another feature within DHIS2 that we can utilize, both at the point of entering data, so the user entering data can confirm their data, and to analyze in bulk: district data, facility data, any level of data that you're capturing, essentially. So we can use it both for post-analysis of the data after the fact, after it's been entered, as well as during the data entry process. It serves a dual purpose. And this is mostly checking, I'd say for the most part, especially during data entry, internal consistency. So within the data that you are submitting, you can create fairly simple rules. For example, the number of confirmed malaria cases you report shouldn't be more than the number of malaria RDT tests you performed. Right. So you can go through and set up your own rules based on the logic of the data that you're collecting. And you can use this with both case-based data, because you can aggregate the case-based data, as well as aggregated data, to create these types of rules. And you can also do similar types of checks when you're entering your case-based data, if there are some validation or logical issues with the information that you are collecting. That's just an example of where you can enter more information. Typically with these validation rules, we also recommend that you add as much information as you can: a good description of why this is incorrect, and even a description of what to do if this is wrong. Who does what now, what's the next step? Because okay, I've identified that there's a problem; this breaks down the various items that are contributing to it.
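A validation rule of the kind just described boils down to a left side, an operator, and a right side, evaluated against a submitted form. The sketch below illustrates that structure with invented data element names; it is not the DHIS2 API, just the logic of one rule.

```python
# Hypothetical validation rule: left-side expression, operator,
# right-side expression, evaluated against one facility's submitted data.

def run_rule(left, operator, right, data):
    """Return True if the rule holds for this data, False if it is violated."""
    lhs, rhs = data[left], data[right]
    return {"<=": lhs <= rhs, ">=": lhs >= rhs, "==": lhs == rhs}[operator]

# Made-up monthly report: more confirmed cases than tests performed.
facility_report = {"malaria_confirmed_cases": 180, "malaria_rdt_tests": 150}

if not run_rule("malaria_confirmed_cases", "<=", "malaria_rdt_tests",
                facility_report):
    print("Validation failed: confirmed cases exceed RDT tests performed")
```

In practice each rule would also carry the description and follow-up instruction mentioned above, so the person who sees the failure knows what to do next.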
You can also combine multiple variables; for example, you can add five or six variables together to give you a particular output, and compare this with another variable or value. To complement this, we also have this idea of sending notifications. We have to be a little careful here, because you don't want to just keep sending notifications; it becomes burdensome for the person receiving them. But if you can be a little selective in terms of sending these out, you can say, well, this particular data quality issue now exists, based on the validation rules we've defined, so please follow up on this issue. You can send these via SMS and email at the moment within DHIS2. So you can send those outside of the application; maybe for someone like an M&E focal point who isn't logging in all the time, you can send these out and have them access the specific areas where there are problems. These two features in particular help a lot with this idea of automated checks, because you can schedule this to run once a week, once a month, whatever, to check your data and send out these notifications automatically when it detects there are problems in the data. And remember, you can also do this at the facility level, so the user can check their data as they're entering it. Right. So it's a combination of these two ideas. So we talked about outliers a little bit. There are quite a few different tools we can use to identify outliers. I showed some examples of simple charts. I showed an example of a table that highlights the value in red, which gives you a good idea based on what you defined. We also have some additional tools that allow you to identify outliers based on the Z-score and standard deviation, for those of you who are familiar with those terms.
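For those familiar with the terms, the Z-score method just mentioned can be sketched as follows. Note that with short series a single huge outlier drags the mean and standard deviation toward itself, so a lower threshold (here 2, an assumption for this illustration) catches what a threshold of 3 would miss; the HIV test numbers are invented, echoing the 400-to-33,000 example from earlier.

```python
# Sketch of Z-score outlier detection: flag values more than `threshold`
# standard deviations from the mean.
import statistics

def zscore_outliers(values, threshold=2.0):
    mean = statistics.mean(values)
    sd = statistics.stdev(values)
    return [v for v in values if sd and abs(v - mean) / sd > threshold]

# Made-up monthly HIV test counts with one obvious entry error.
hiv_tests = [400, 415, 390, 410, 405, 33000, 420, 395]
print(zscore_outliers(hiv_tests))  # -> [33000]
```

This is exactly why someone at some level needs to understand the method: the threshold and the sample size both change what gets flagged.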
Okay, we also have what we call predictive analytics. There are many use cases for these: for example, you could use them for surveillance thresholds, or you could use them to identify potential outliers or problems with the data that you've received. So as an example here, I have this table up, and you can probably tell there's an obvious outlier here, right? This one here is 25,000. But right now it's showing me all the values, and I have to look through it; it's not super intuitive or easy to identify. This one is very obvious, but in some cases it might be less obvious. What we can do is actually just extract the values that are problematic and give you a summary of those instead. It's a different way of presenting the data: in this case I'm just showing potential problematic values rather than showing all of my values. Now, maybe this is not something everyone should see necessarily, but it does help you to identify specific areas where you might need to follow up. We can use a specific formula that we set up to identify those values, extract them, and put them on a separate piece of information for people to review. Here's another example, and it's a little simpler to see. We have the red bars here, which are our number of ANC first visits, and then the green bars, which are the ANC 1 threshold, where we've defined some type of formula that allows us to calculate what that threshold might look like. And we're able to compare our actual reported value with our threshold value. So if there are any values that exceed that threshold, we're able to figure out whether something went wrong in that process, and we can also set up a comparison.
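The threshold comparison described above can be sketched like this. The formula (mean of past values plus two standard deviations) is one plausible choice, not necessarily the one used in the slide, and the ANC numbers are invented.

```python
# Sketch: compare a reported value against a threshold computed from history.
import statistics

def exceeds_threshold(history, reported):
    """Return (flagged, threshold) using mean + 2*stdev of past values."""
    mean = statistics.mean(history)
    sd = statistics.stdev(history)
    threshold = mean + 2 * sd
    return reported > threshold, threshold

# Made-up ANC first visits for the same month in previous years.
past_anc1 = [420, 435, 410, 445, 430]
flagged, threshold = exceeds_threshold(past_anc1, 1250)
print(flagged, round(threshold, 1))  # -> True 455.0
```

Scheduling this comparison on a routine basis is the idea behind letting the system, rather than a person, do the first pass over the data.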
So we could compare these values on a routine basis, we could schedule that comparison, and then the system could identify if there were any potential issues through that process. Okay, so within the DHIS2 platform, we also have this additional tool, the WHO Data Quality Tool. Olaf, at the back, will have a lot of information on this as well; I'm just going to show a little bit of it, so you can see what it looks like. There are four key measures identified within this tool. That is: completeness and timeliness, which I think we have all generally agreed on what that looks like; internal consistency, which is comparing the data within your system that you're collecting; external consistency, where you're comparing data within your system with data outside of your system, collected via another process perhaps; and the consistency of population estimates. So here's an example of what this tool looks like. We will actually talk a bit more about extensibility of the platform later on, but this is an example of how you can add more features and functionality to DHIS2 as a platform. It's not just relegated to what you get in the first place. Of course, we try to maintain and use the built-in features as much as possible, but there are specific things we may want to do outside of that. So this is an example of when we first access the tool. With this tool, what we do is basically tell it what we want to see. We tell it what kinds of measures are acceptable or not. We tell it how to identify a potential outlier in our data, and it comes pre-built with some suggestions for us to use. And we can modify those suggestions; we can say, you know, how many standard deviations away from our particular value do we consider an outlier, various parameters like this.
But it's all defined by the user, which is what we see here. So here's an example of completeness and timeliness measures. We have a couple of different programs: ANC, EPI or immunization, our routine HMIS collection form, maternity. Okay, and this goes back to this whole idea of integrating our data together in one system: we can also look at measures for all that data together. Right, we don't have to separate it out and do it all in separate pieces. For consistency over time, you would have seen some of this in the slides as well; we can create similar measures inside of DHIS2 too. But this is just so you get an idea of this tool and what some of the measures you can create look like. And this was the outliers table that I was pulling out and used in some of the examples. You can see here this red identification of the outliers; it's very easy to utilize and interpret. We have a number of pieces of guidance in terms of how you can train people quickly on how to interpret this and use it at the field level. Within this tool we also have a summary of different values and measures of data quality. It aggregates all those different values that we were talking about on those four dimensions that I identified earlier: completeness, timeliness, internal and external consistency, and our population estimates, and we can actually produce this kind of report. We're actually working on a small variation of this as well, to make one that's a little bit more flexible. But you can see here, for example, it outputs the various data based on what we've defined, and we can add comments, interpretations, things of that nature, and print it out. So it does help, right? Because, for example, if I access this dashboard, it's not something that has to be done annually. Right, that's the whole idea behind a lot of this.
You could just run this check every month. If I ran this at the facility and identified several types of data values that I think might be problematic, I could just get the facility user to run this type of report at the end of the reporting period, and they'd probably quickly be able to identify some of the values that are incorrect. So the tools can help facilitate some of these processes at the facility level. So can you head back to Mentimeter real quick? I'll just ask you another question. It's the same Mentimeter, if you kept it open. I've changed the question, so in case you do want to answer it, you can. And here's the question, for those of you who are wondering: we have clear procedures for managing data quality in our systems. So this could be DHIS2, this could be something else, just in general with your health data. Do you have a process outlined for reviewing that data? Okay, so we have quite a few yeses, which is surprising. I'm very curious to see what you have, but that's very good to see. Okay, so I'm going to keep it here. A lot of you said yes, so now I'm curious what you include in those processes. What should you include in a data quality procedure? For those of you who have ones written up, this should be very easy, right? Just take a moment to write some responses. Okay, so we have quite a few responses here, and they are varied, and I think it's quite a good representation. I've just listed some items that I might include, just so you can compare. I think some things were missed. What do you think? No one really talked about who's responsible for doing what in the system. You talked about the tools: we should look at outliers, we should have some specific business process. Right, that's all important.
But when we have procedures, we really need to identify who is responsible for what, when it should be done, and what the timeline is. What are the actual processes for changing these values? How do we follow up with facilities? All these little details matter, and I think this is one of the biggest challenges we have when we're looking at data quality. I mentioned a lot of interesting tools and a lot of different features you could utilize, and that is very good; as I mentioned, it makes things a lot easier and expands our ability to review data in the system. But without clear procedures for what to do when we actually identify those values, it becomes very difficult to improve the quality of the information, because we can identify the value but then don't know what to do next. So we just want to make sure this all ties together, especially if you're thinking about an integrated system design: as I said, you will have many programs' data contributing to the system, and the last thing you want is a system where you can't trust the values. I'm going to end there, because I think it's time for the break, but if there are any questions, feel free to grab me; I'm happy to discuss this with you further. Several of us are available and can discuss this area with you. I hope this was useful. If there's any feedback about the session, anything you want to change, we can do that. And if you're interested in learning more about the features themselves, I think we could arrange a little time to show you in more detail how they actually work inside DHIS2 and how you can review them. But we can take our break. Alice, do we have the photo now? Yeah. Okay, everyone stay in the room, please; we'll take that group photo.
Yeah, we can. Thanks so much for the presentation; really very insightful. I have a question on monitoring use of DHIS2. There have been many times when I've been asked to provide statistics on the use of DHIS2, like how many people are able to access it, and I have been able to retrieve that information. However, I face some challenges in aggregating that data so that I can split it out, for example: in District X, so many users were able to access DHIS2 at a particular time. I was wondering if there is a feature in DHIS2 that we can take advantage of to retrieve those kinds of statistics, because having these statistics only in aggregated form presents problems, especially for programs that are being implemented in selected districts. So that's what I'm looking for: to be guided on where to go in DHIS2 to retrieve that information, so that next time I don't face such challenges. Thank you. Sure. So there are two ways that would be supported; I think one gets you halfway there, and the other might need a little extra push. We have this feature in DHIS2 called usage analytics. It helps you identify how often people are accessing certain applications, what they're opening within those applications, and so on. So we have some tools that the implementation team have developed that let you know, for example, how many times certain charts are being opened, the usage of the actual items in the system. In terms of getting the extra information you're asking for, how many users per district are logging in, for how long, and so on, there may be some other ways to go about that: either through the API that we have, or through SQL, by directly accessing the database.
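Once user records with their organisation unit and last-login timestamp have been pulled (for example via the DHIS2 Web API or a direct database query, as the answer suggests), the per-district aggregation step is straightforward. The record shape below is a hypothetical flattening for illustration, not the literal API response format:

```python
# Count users active since a cutoff date, grouped by district.
# The record fields ("district", "last_login") are a hypothetical flattening
# of what a users query might return, not DHIS2's literal response format.
from collections import Counter
from datetime import date

users = [
    {"username": "amina", "district": "District X", "last_login": date(2023, 5, 2)},
    {"username": "ben",   "district": "District X", "last_login": date(2023, 1, 14)},
    {"username": "chipo", "district": "District Y", "last_login": date(2023, 5, 6)},
    {"username": "dumi",  "district": "District Y", "last_login": None},  # never logged in
]

cutoff = date(2023, 5, 1)
active_by_district = Counter(
    u["district"]
    for u in users
    if u["last_login"] is not None and u["last_login"] >= cutoff
)
print(dict(active_by_district))
```

The same grouping idea extends to session counts or durations per district if the underlying data source records them.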
So I think we could probably show you some of those, or point you to some resources you could have a look at in order to generate them. Between these different features, I think they might be able to assist you in some of these areas. Yeah. Let's all come up and do the group photo, because I don't want to take time from your break, but if you do have any more questions for me, I'm happy to answer them; just grab me and we can set up some time for that. Thank you.