Right. Thanks, everyone, for joining us. I'm sure people will continue to sign on, but we have a lot of content to cover, so I think we should go ahead and get started. I'm Mike Frost, the product manager for DHIS2 Tracker, based here at the University of Oslo. What we want to cover and put out into the community is the most recent guidance we have about how to set up large-scale Tracker instances and handle the analytics associated with them. We're putting this out in the context of the COVID vaccination efforts that have been ongoing, but this advice really applies to anything that reaches the kind of scale we're seeing.

We have quite a lot to cover, and we invite you to put questions forward. There is a thread in the Community of Practice where you can post questions, and we will respond to them there. That's a really good place to ask, because the answers are preserved for people who view this later. The webinar is being recorded and we'll post it online to share as well. Depending on how much time we have left after getting through the content, we'll also try to handle some of those questions live.

I'm posting the agenda here so you can see the topics we're going over. Some of these are fairly technical in nature; we really want this information passed on to system administrators, and we want to get it out as quickly as we can, because many countries are scaling up as we speak to support their COVID vaccination implementations.

To say a little about why this is such an important and relevant topic: we have been supporting a number of countries that, in 2021, were among the earliest low- and middle-income countries to receive the COVID vaccine and to rapidly scale their individual-level systems using Tracker. That comes with significant challenges, beyond even what they had been prepared for or were used to. Even if they were already DHIS2 countries, with their own teams, servers and ways of running their systems, this was an order of magnitude beyond what they were used to, for a number of reasons. With this kind of vaccine campaign you are capturing entire populations, not a subset of the population. There are often new sites and new users involved, not just the traditional ones, which means new equipment and hardware, and new work processes to cover the very broad reach of these vaccine programs. There is also intense political and public-health interest in these numbers: people want live numbers, they want to refresh dashboards all the time, and the information is being scoured by news organizations and by political parties in the country.
There is just a lot of pressure and intensity of focus on getting this information out. So what we have put together here are our most up-to-the-minute recommendations about ways to configure and run your systems so that you can scale to the degree we are seeing as a requirement.

Many countries have already started. This is just a quick glance: some 40-plus countries have either already scaled or are starting to, using the different tools for COVID vaccine programs. We were able to get numbers from those that have scaled the most to date, as of roughly the middle of December. Sri Lanka was at the top of the list, with over 20 million people enrolled in their system at that point, which was very close to being the entire population; over 10 million in Rwanda, which is quite a bit more now; and 7.9 million in Nigeria. Along with those numbers of people enrolled, you have a dramatically expanded user base and number of sites, and the data itself, which we're calling here the related events, is massive as well.

This is the reason we have come up with these recommendations, and the reason we have this webinar now. They have been added to our official documentation; the link is in the slide and is shared with you in the Community of Practice. We will continue to update these recommendations, so if you are going to be responsible for one of these large-scale systems, it is important to stay on top of the recommendations as they come out. We will also publish them through the Community of Practice and the monthly newsletter.

What we encourage as much as possible is to adopt these recommendations from the beginning of your implementation, not to wait until you get into trouble with performance and have the system failing. These systems are meant to be transactional and real-time, giving data that is useful at the moment, and when they go down it can have a dramatic impact on the services being provided and on the success of the vaccine program. Again, we're talking about this in the context of COVID vaccines because that is where we expect a lot of scaling in 2022, but the same recommendations are relevant for large-scale Tracker implementations of any kind.

With that as the background, I'm going to turn over to Bob to talk about some of the specific recommendations we have around servers and hosting. I'll stop sharing my screen, Bob, and let you take over.

Well, yeah. Are you seeing a slide that says server and hosting requirements?
We see it.

Excellent, let me push it full screen quickly. Sorry, that's not quite full screen, but it's good enough.

Okay. First of all, as people who know me know, I hate being very concrete when people ask what the server requirements are, because it really depends very much on what it is you are trying to do and what the scale is. But what I want to emphasize here is the absolutely most important requirement: if you are thinking of hosting a DHIS2 server to do pretty much anything, but particularly to run a very large-scale vaccination campaign, you need the right level of skill and experience to do it. If these are your first steps in learning how to install things, using the manual, using Linux, then you are probably not the right person to be administering a system that can be as important as this. In many cases, possibly in most cases, you should think of strengthening your team, probably through recruitment. If you are not able to do that, then you really need to consider going for a hosted, software-as-a-service type solution.

The other thing I want to raise here is not strictly a performance issue, but I don't think it's raised anywhere else, so I'll raise it anyway. The consequence of these very large Tracker databases, covering significant proportions of the population as Mike was saying, is that you are now responsible for a significant amount of personal data, and there are non-trivial security and privacy challenges related to that. I would say it is a pretty hard requirement: any organization thinking of hosting a DHIS2 Tracker vaccination server needs, at a minimum, somebody whose role it is to be responsible for security, a basic security plan, and a plan for how you are going to manage data privacy. There isn't time to go into the details of what that would look like in this presentation, but we will hopefully give more guidance around that in the near future.

Talking about concrete things: at the end of the day you do need to get a server of some sort, and what the specifications of that server should be will vary a lot depending on scale. These numbers were bandied around a little earlier in the month, 32 CPUs, 32 GB RAM, fast SSD disk, as sort of minimal requirements. I can tell you fairly authoritatively that most of the deployments Mike referred to on the slide three slides back, Sri Lanka for example, are using considerably more than this. I know the database server in Sri Lanka has 128 GB of RAM, for example, and that's just the database, not including Tomcat and so on. So in general, unless you are talking about a very small place, those figures are likely to be an underestimate.

I talk about one or more virtual machines because, for very simple systems, particularly the simple aggregate systems that were almost the only kind of DHIS2 deployment ten years ago, the most common thing was to deploy everything onto one virtual machine. Nowadays, if you want more scalability, better security, and better performance monitoring and control, you are likely going to break this down over a number of virtual machines, not just one.

So how big a machine do you need? I'm not helping you much here.
Just saying those numbers are probably too small doesn't get you far, so what I suggest as a way of going about this is: think about how many tracked entity instances you expect, essentially your population size, how many vaccination sites you are talking about, and how many users you are likely to have, and then cross-check against countries that have something similar and find out what they have used. In a sense, that works much better than some kind of rule of thumb. We don't have good rule-of-thumb calculations; all we have is examples of concrete experience. Something we might try to do is go back through the list Mike presented on the earlier slide and write down what resources are being used in those particular instances.

Whatever you pick in the end, be prepared to discover that you've got it completely wrong. However much you decide to provision for your virtual machine, it might well not be enough, or it might be too much and you could save money by making it smaller. The only way of knowing any of that, of course, is by monitoring the performance of your server, and I've got a couple of slides about that shortly.

A quick word about shared environments. As most of you know, it is now quite rare, although it still happens, to have fully owned physical hardware for these things. Generally you are going to use one or more virtual machines, either from a commercial global provider like Amazon, Linode or Azure, or from a kind of virtual private cloud in the national data centre, which is also quite common. Something to bear in mind in both of those cases is that when you are talking about high performance and looking for guarantees of high performance, you want as much as possible to avoid purchasing shared resources. Shared resources basically means that a particular machine might have 64 CPUs, and they've sold 48 of them to you and also sold 48 of them to somebody else, so you don't actually have 48; you're sharing them. I've got a couple of graphs on that.
I'll show you those in a bit. What happens when you have over-provisioning like that is that when your application really wants to work hard, the environment pushes back against it and throttles it, because it doesn't want you to upset the other machines you are sharing with. So whether this is with a commercial provider or something you are getting out of your national data centre, you want as much as possible to have guarantees about the underlying dedicated resources, not simply shared virtual CPUs you have been given.

If you are buying something off a commercial cloud, it is generally a good idea to buy as big as possible, because the bigger you buy, the less chance you are sharing with others, and it is then possible to containerize within that. If you are working with a local data centre, which might be within the ministry, or in many cases a national data centre providing enterprise-level services to government, usually running VMware in the back end, they are often going to sell you something which is very much over-provisioned. You've got to be able to test it and get back to your provider with evidence and say: look, you're telling me I've got this SSD disk, but I can see very clearly it is not performing like an SSD disk. Again, if you don't have good monitoring in place, you are not able to make that case.

Okay, versions. This is a slide that is designed to be out of date almost as soon as it's written, so what's important here is not the actual versions but what we are saying at the current moment in time. You should be using JDK 11 for anything that is version 2.35 or above; it used to be that you were stuck with JDK 8, but now you can, and should, be using 11. On PostgreSQL versions, people get different reports on which performs best between 12 and 13. I think it depends a little on the kind of load: some things work better in 13, others in 12, but those are the two most tested versions currently. And you definitely need to be on DHIS2 version 2.35 or above, and not just that, but on the latest patch release of it.

The important point is probably the last one I have there: you need a well-rehearsed plan for testing and deploying new patch releases rapidly. Particularly in this past year, where Tracker is suddenly being used at a scale it has never really been used at before, we have been discovering quite frequently little areas of optimization and improvement, often big improvements, and when those come out you want to be able to take advantage of them quickly. Sometimes they are performance related and sometimes security related, and we need to take security very seriously: if vulnerabilities are found in the software, our security team is generally very good at releasing mitigations or fixes quickly, but implementations are not always very quick to deploy them. So have a plan for testing and deploying patch releases, and if you have nothing better to do while waiting for a new patch, then practice, so that you know how long it takes and how much downtime, for example, you might need to schedule.
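As a small aside on keeping track of what you are actually running: one easy way to check the exact version and build of a live instance is the standard system info endpoint of the Web API. This is only a minimal sketch; the hostname and credentials below are placeholders, never real production credentials.

```
# Check the exact DHIS2 version and build revision of a running instance (placeholders for host/credentials).
curl -s -u admin:district "https://dhis2.example.org/api/system/info.json" \
  | python3 -c "import sys, json; i = json.load(sys.stdin); print(i.get('version'), i.get('revision'))"
```

Comparing that revision against the latest patch release notes is a quick way to see how far behind you are.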
Okay, that's about all I have to say for the moment about actual server requirements. I want to talk a little more, as I promised, about monitoring, and particularly performance monitoring. The kinds of questions you need to be able to answer when you are running a complicated mixture of services are: first of all, as we referred to earlier, do I have enough resources, or maybe too many? If I've just done something to improve matters, did it make a difference? Did it work, and by how much? If users in the field are reporting that something is slow, can I say what is slow, which part of the system is struggling? Is it coming from Tomcat?

Very often, if you don't have any kind of monitoring in place, what you get are very, very vague answers to these kinds of questions. People will simply say "my system is performing badly", or "it keeps crashing", or "it's so slow", and it can be very hard to pinpoint where you need to make an intervention, how successful your intervention has been, or whether you have provisioned things the way you should. You can't know the answer to any of these questions without measuring, and running a large Tracker server without good metrics is, I'd say, like driving around in the dark with the lights off.

Now, obviously there are many ways of measuring. There are simple tools on the command line that people might be familiar with, and there is quite a lot you can do with those, but I'm not talking so much about them here. I'm talking more about the kind of software that records and displays historical data about different aspects of your system, to give you a good overall picture of what is happening and what has been happening over time. Popular combinations include Prometheus and Grafana, which many of you might have heard of and which are very popular in very large deployments, and the ELK stack is something similar; Netdata is another option. The two pieces of software I'm going to show you here, Munin and Glowroot, are less complex to set up than those, but we have found them very useful in quite a lot of large installations for understanding answers to some of the questions I posed on the previous slide.

So for example, this is a graph from Munin which tells you what your CPU has been doing over the last 24 hours or so.
It is very clear, immediately, when you look at that graph, once you get used to looking at these things, that all this red and purple stuff on top spells something abnormal going on. We're hoping to put together a session at some point about interpreting some of these phenomena, but in this case what you can say about the red stuff is that it is what's called steal time. It basically means that the host your virtual machine is running on is throttling your CPU: you are trying to use the 40 CPUs you have, and as soon as you use anything more than 20 it starts to push back at you. The purple stuff is even scarier. That is I/O wait, and it means that your CPU, instead of doing any useful work, is busy waiting on the disk. That can be really important to know, because we just said earlier that you need a fast disk, and your CPU waiting on the disk means your disk is not fast enough for you.

These kinds of graphs, again from Munin, show you that sort of thing very quickly as well. Look at a graph like this and you can see that our main disk is at 100% utilization; it is busy 100% of the time. I'm not going to get any more performance out of it, so this is telling me I probably need a better disk.

As another example, this is a server that was running fine all week, and then something very weird happened on Saturday. The nice thing about having a graph like this is that you can see that something happened. If you didn't have any kind of graph, all you might know is that users started saying on Saturday afternoon that something was slow. What actually happened here, if I recall correctly, was a long-running PostgreSQL query which blocked some other transactions and caused the database to get locked up. Having that insight is really important.

Those were just a couple of examples from Munin. The other piece of software I mentioned, which has proved really useful in most of our big implementations, is this thing called Glowroot, which is essentially a web-based Java profiler. It allows you to get quite detailed and useful information really quickly. You just open up the page, and at first glance the first thing you notice is that 41 percent of the time is being spent on this particular API call. So immediately you know that if this server is going to be optimized at all, you probably want to start by looking at the thing that is using 41 percent of the time, rather than going off trying to make other things faster which are not really being used very much.

That is another example, and it's the kind of graph you love to see. What you are seeing here are the response times for, I can't remember which request this was, but it was basically taking 20 to 30 seconds on average, and then we managed to deploy a fix via a new WAR file which addressed a performance optimization. You can see that from about 8:45 things suddenly calmed right down and we were getting much, much better performance. Having these kinds of tools is invaluable; it is very hard to get your system performing well, and to know that it is performing well, without any kind of visibility like this.
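The presenters don't walk through a Glowroot installation here, so this is just a minimal sketch under the assumption of a standard Tomcat-based deployment. The paths are placeholders, and as far as I recall the Glowroot UI defaults to localhost port 4000, so you would normally reach it over an SSH tunnel or put it behind an authenticated reverse proxy rather than exposing it directly.

```
# Minimal Glowroot setup sketch for a Tomcat-hosted DHIS2 instance (paths are placeholders).
# 1. Unpack the Glowroot distribution somewhere readable by the Tomcat user, e.g. /opt/glowroot
# 2. Add the agent to Tomcat's JVM options, for example in $CATALINA_HOME/bin/setenv.sh:
export JAVA_OPTS="$JAVA_OPTS -javaagent:/opt/glowroot/glowroot.jar"
# 3. Restart Tomcat; by default the Glowroot UI then listens on http://localhost:4000.
#    Keep it bound to localhost and reach it over an SSH tunnel, or protect it behind a reverse proxy.
```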
Okay, just one last slide from me, really a summary. Going to scale with complex systems like this requires monitoring. It is not an optional add-on; you need to be able to do it, and you have choices over which tools to use. Importantly, you need to know when you are in trouble, and when you are in trouble you need to know where the pain is coming from. You need to be able to see whether a change you make has worked or not. And, really importantly, you need to be able to report useful evidence back to developers: to say, look, there is something not right, and here we can show you exactly what is not right, rather than just saying "this thing is slow". Similarly, if you need to complain to your hosting provider that their disk is slow, it is useful to be able to give them graphs showing that it really is slow.

On the point of involving developers and technical assistance, maybe from the global team or from others: very often it is not possible, or legal, or appropriate, or even a good idea to give outsiders access to your national database. It is usually simply not the right thing to do. But it can be a very useful, and very simple, thing to give access to your monitoring system. If you are having a problem and want some outside support, one of the things we know has been very effective is being able to say: I've got Glowroot running here, would you mind logging in and having a look, and seeing if you can help us interpret what the trouble is? So having web-based monitoring systems in place is actually a really good way of getting technical support.

All right, sorry that was a bit rushed, but that is it from me. I'll look in the chat for questions and try to answer as we go along. Scott, are you just going to steal this from me, or do I have to go?

I've just taken it from you, Bob. All right, hello everyone, I'm Scott Russpatrick, the DHIS2 analytics product manager here at the University of Oslo, and I am now going to take us through some of the analytics problems, and some of the solutions we are seeing, with these very large-scale Tracker programs, specifically around COVID vaccination.

So what are the actual problems? Many of you have unfortunately run into them. We are seeing slow-performing dashboards, or dashboards that never load at all. We see line-list tables and different types of charts, analytics that take a long time to load or do not load. We sometimes see the analytics tables failing when you try to kick them off. And in the very worst case, and this happens very, very rarely, we see that servers can crash because of overloaded analytics queries. Again, very uncommon, but it has happened, and we are trying to be transparent and honest about it: it is a possibility.

So what causes these? That is probably more important to understand. Very large-scale Tracker implementations with inadequate server specs or expertise, just as Bob was pointing out, are the most typical cause of these analytics issues: you are trying to run very large analytics for very large Tracker programs on underpowered server infrastructure, and as Bob pointed out, it is extremely important to monitor this and address it. The other principal cause is very heavy analytics requests. For example, in DHIS2 you can quite easily make a map that shows every single tracked entity in the country, if you have them geolocated.
So I can turn that on, and if you are Sri Lanka, that could be 20 million tracked entities on the same map. Obviously that is a massive request to the server, and it is going to take time; it will be very slow to load, if it loads at all.

The next one is that we see a lot of use of the Event Reports application to produce line lists of tracked entities, meaning patients, or events, for COVID vaccination and for Tracker in general. Any event report line list that is over 100 rows for events, or over 50 rows for enrollments, is going to be a heavy analytics request. Another one is visualizations: any chart or pivot table that looks at a very long period of data. If you go past 12 months or so, and you are trying to pull up data for, say, your entire COVID vaccination program, in which you have tens of millions of people enrolled, that is a lot of data to pull out and produce in analytics, so we see issues there as well.

And the biggest culprit, if we can point a finger at one particular thing, is enrollment-type program indicators. Being COVID-specific, we are really talking here about indicators such as dropout rates. The analytics you see on this slide are looking at the various COVID vaccination indicators as they were initially defined with WHO in DHIS2, and the dropout indicators are by far the least performant; they take the longest to produce their values. The reason is that those dropout indicators were originally defined as enrollment-type, so they process all tracked entities in the vaccination program, look at basically all the events in the program, and try to identify a very specific subset of them. It is a very heavy calculation: it looks at a lot of data and there is a lot to compute on the fly for analytics. You can see that the other types of indicators defined for the COVID vaccination package perform significantly better.

The same goes for the chart on the bottom, which illustrates that when you query analytics for a long period, again up to 12 months, you are once more asking for a lot of data, and all of this is calculated on the fly in DHIS2. These program indicators are evaluated in real time: when you click update on your chart, it starts processing the data to produce the value for you. If you put more data in, it takes longer to process, and that is what this chart is illustrating: by adding more periods, longer durations, you are putting more data into the calculation.
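To make the idea of a "heavy request" a bit more concrete, here is a rough sketch of two event analytics line-list queries against the Web API. The host, program UID and org unit UIDs are placeholders and the exact parameters depend on your own metadata, but the shape of the request is the point: period length, org unit scope and page size multiply the work the server has to do.

```
# Illustrative only (placeholders throughout): a heavy line-list request,
# twelve months of data for a national-level org unit with large pages.
curl -u admin:district "https://dhis2.example.org/api/analytics/events/query/PROGRAM_UID.json?dimension=pe:LAST_12_MONTHS&dimension=ou:NATIONAL_OU_UID&pageSize=500"

# A much lighter variant of the same idea: one month, one district, a small page.
curl -u admin:district "https://dhis2.example.org/api/analytics/events/query/PROGRAM_UID.json?dimension=pe:THIS_MONTH&dimension=ou:DISTRICT_OU_UID&pageSize=50"
```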
So Let's talk about some solutions now The very first solution principally and foremost is to make sure that you have adequate resources and support for your servers as bob has mentioned You know in the links that we provided one of those links is to a Guidelines that we that we produced to help with tracker performance uh specifically for analytics and in those guidelines we have a A server checklist as bob was mentioning as well that you need to go through and make sure that you've ticked all those boxes And yet you are able to respond to that entire checklist That's probably going to help with your analytics issues A couple of other things that are more controllable specifically on the dhi's two side The first one is do not run your analytics tables during high usage periods throughout the day So do not run your analytics tables say at noon or in the afternoon when you have a lot of users going in entering data for uh Covax or any other type of tracker program that you're that you're monitoring that you have in your dhi's two instance If there's a lot of traffic on the data entry side That means more server required more server resources are being used up there And then when you try to run your analytics tables, which is very intensive on the server resources Then you're just going to start competing with yourself essentially, right? It's not going to be a good situation You need to make sure that you're running your analytics tables at low usage periods Typically we see this as best practice as the middle or in the very early morning Before before a lot of users have logged in on the day and gotten work started The other one is please use event type program indicators As much as you possibly can so things like And and and rebecca later in the presentation is going to talk a little bit about how we've updated our covax packages to be more dependent on event type program indicators, but we do see that these are more performant They have by the nature of them. They are processing a small amount of data making fewer connections And and have uh and typically result in Uh better performing analytics and on the fly calculations again enrollment type indicators can be very resource intensive and slow For these very large track the other one is um And this is a best practice that has actually emerged from the countries that we've been supporting through this is do not have uh dashboards with program indicator Based analytics as the default landing page after logging. So of course you all log in to dhi's too. The first thing you see is a dash Right, even if you're not using that dashboard, even if you just go straight from that dashboard to Data entry or the capture app or some somewhere else You have sent those analytics requests dhi's too is trying to load that dashboard for you And what that means is every single person that's doing that there's not even paying attention to that dashboard Is consuming server resources? um Just by the nature of that landing page being there So what do we advise as best practice is to set a informational dashboard as the landing page that has that's primarily populated with things like text items or hyperlinks Here you see an example from shalomka where they're this is their landing dashboard for their covax instance And you can see that it's just a couple of text boxes some useful information to the user, right? 
This is a very low-intensity dashboard in terms of resources. It will load very quickly every time, regardless of how many times you look at it: it is not doing any calculations, it is not pulling any data, it is just displaying some text. You can do this very easily by giving the dashboard a name that starts with asterisks, like "** Notice"; the asterisks push the dashboard to the front of your saved dashboard list, as you see in the Sri Lanka example. This is best practice now, and in fact in 2.38 we have a feature coming out that will let you set a default landing dashboard in DHIS2 itself. But if you are using DHIS2 today, up to 2.37, you have to configure this dashboard yourself. It is quite simple, and I highly encourage everyone to do it.

Some additional solutions. For these very large-scale Tracker programs we recommend limiting the sharing of dashboards with large program indicators, especially enrollment-type program indicators, to only those users that really need to see the information: not just anyone who might be curious about it, but the users you know will make critical decisions based on that data. The reason is that the fewer users hitting these dashboards, the fewer requests are sent to the server, and the more performant your DHIS2 will generally be. You can do this in two ways. You can limit the users who can see the dashboard, but you can also restrict the organisation units the dashboard shows. If a district health officer only needs to be concerned with their own vaccination numbers for their job, then only show them their own district; you don't need to show them the entire country. Showing data for the entire country means a lot more data goes into the calculations, which is more taxing. Restricting the org units is easily done by selecting relative org unit assignments for the analytics you put onto the dashboard.

Another couple of things: if you can, try to view these dashboards, and set up your routine data use meetings and various decision and planning events, outside the peak hours for the vaccination itself. If you know vaccination mainly happens in the morning, set your planning meetings in the afternoon, or vice versa, as best you can, so that the folks using DHIS2 to enter data are not disturbed by the very heavy analytics requests you will be sending to produce these dashboards.

All right, moving on: a couple of technical points on caching. What we recommend is to make sure that in your DHIS2 config the analytics cache expiration is set to at least 3600 (the value is in seconds, so one hour; large implementations often set it in the range of six to ten hours), that in the System Settings app your caching strategy is set to at least "cache until 6 AM tomorrow", and that you set your cacheability to private, to avoid some additional issues.
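As a rough sketch of where those settings live: the expiration key goes in dhis.conf (the property name below is the one I believe is documented, but do check it against the system administration guide for your version), while the cache strategy and cacheability are set in the System Settings app rather than in the config file.

```
# dhis.conf - analytics cache expiration, in seconds (3600 = 1 hour; large instances often use more)
analytics.cache.expiration = 3600

# Then, in the System Settings app (not in dhis.conf):
#   Cache strategy: "Cache until 6 AM tomorrow"
#   Cacheability:   Private
```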
What does this mean? It essentially means that when a user logs on and looks at the dashboard, that dashboard will load once and then be cached in their web browser, so that when they navigate away from it and come back, it uses what is already cached instead of hitting the server and pulling all of that data again. This is more performant for the user: they only wait for the dashboard data to be populated from the server once, and every time they go back to it, it just updates with any new information rather than pulling all the associated data again. This is definitely best practice, and we have seen quite a number of countries that do not have this caching strategy set. It is extremely important; it will limit the amount of traffic and can improve the performance of your servers.

The last request on this slide is a hard recommendation to make, because we know how popular this feature is, but we recommend that for these very large-scale Tracker instances you turn off continuous analytics. Continuous analytics is a feature released in, I believe, 2.35 or 2.36 that updates your analytics in real time as new data comes in. The problem is that, for these very large-scale Tracker programs, it has been shown to form a bit of a bottleneck; it is not as performant as we would like. We are of course working on the core software to improve this, but as it stands right now, if you are using DHIS2 today, it is recommended that you turn it off for these large-scale Tracker deployments. If you are using aggregate data it is fine, you can go as big as you want with aggregate data and continuous analytics, but for Tracker data and these kinds of program indicators, we recommend you do not use continuous analytics. Again, we are working on improving it in the core product, but it is a complicated thing and it will take a bit of time.

Then I want to give a couple of last-resort solutions. I am making these recommendations under the assumption that you have already gone through the server checklist, you have done everything you can so that your server resources are there, and you have gone through all the other analytics recommendations I made, and those have not improved the situation. So these are last-resort options.
The first one is to remove Tracker analytics access for all non-critical users. What does that mean? It means that if someone does not have a job where viewing Tracker data is critically important, you can remove their access to see Tracker analytics entirely. Not ideal, but it means far fewer users are actually hitting the server to pull those analytics.

The next point is that you can be a bit more creative about this. If you have the technical skills available, you can build things like SQL views, HTML reports or R Shiny apps that produce these very heavy indicators more performantly than the standard DHIS2 analytics will; I'll show a rough sketch of what such a SQL view could look like in a moment. This is a path we are starting to explore more seriously now. We have a team working on establishing some standard SQL views, and a team exploring R Shiny app options, and hopefully we will come out with more updated guidance specifically on this point. But there is a lot of expertise out there; looking at the folks on this call, plenty of people have the technical ability to produce these SQL views, or other ways of pulling the specific indicators you want out of DHIS2 and viewing them in a more performant way.

The next one is a brand-new suggestion that we have not fully vetted yet. It is something we have just started to explore, but it looks like it could be promising, so I'm going to put it out there, with the disclaimer that if you do not have the technical expertise available to you, specifically expert server managers and the server resources, do not consider this. If you do have those, you can consider setting up a separate DHIS2 instance on a separate server that routinely pulls the data being captured in your production instance, and using that separate instance just to produce your analytics. We have seen some fairly dramatic performance improvements when there is a server dedicated just to analytics. So that may be a possibility, if you have the technical expertise to do it; it is not something you just want to play around with.

The last resort here, and this is very much a last resort, is that you can set the default landing application for all users to something that is not analytics, for example data entry or the Capture app.
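To give a flavour of the SQL view idea mentioned above, here is a minimal sketch. Everything in it is hypothetical: the analytics table name, the column names and the option code are placeholders that you would need to replace with whatever your own instance actually generates, and you would normally register something like this as a SQL View in the Maintenance app so it can be read back over the API.

```
-- Hypothetical sketch only: a pre-computed "second doses given per district per month" view.
-- Table and column names are placeholders; inspect your own analytics tables before adapting this.
CREATE VIEW covid_second_doses_by_district_month AS
SELECT uidlevel2 AS district_uid,        -- placeholder: district-level org unit column
       monthly   AS period,              -- placeholder: monthly period column
       count(*)  AS second_doses
FROM   analytics_event_xxxxxxxxxxx      -- placeholder: the analytics event table for your program
WHERE  "deUidxxxxxxx" = 'DOSE2'          -- placeholder: dose-number data element column and option code
GROUP  BY uidlevel2, monthly;
```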
Setting the landing app that way essentially means that if a user wants to see data in DHIS2 analytics, they have to deliberately click over to it, and that will limit the traffic going to analytics as well.

Okay. Again, I want to make sure we understand the context: all of these recommendations are for very large-scale Tracker programs. This is not DHIS2 in general; in general, DHIS2 analytics performs quite well. This is just for the massive, large-scale Tracker programs Mike was referring to earlier.

The last point to make here is: please do not suffer in silence. Later in the presentation Olav will give you some guidance on how you can communicate with us, but when you run into these problems, please tell us about them, so we can help catalogue them and potentially work with you to understand the issue and maybe find a solution. Even more important than that, there are a lot of folks out there working on these shared issues, and we really want to hear about the solutions you have. A lot of the recommendations I have just given you were discovered by various countries and DHIS2 implementers who had already addressed the situation in some of these ways, and we have collected them and are presenting them to you now. This is not an exhaustive list; there are many people coming up with other solutions and innovations in this respect, and it is important that you share those with us so we can all benefit from them. We are a global community, and it takes a global effort to tackle programs this big.

With that, I will hand it off to Marcus to take us through the Tracker side. Over to you, Marcus.

Thanks, Scott. Can you see my screen? I'll just go ahead. When one of these big Tracker implementations starts struggling, and goes down or otherwise becomes hard to use or unusable, that is always a product of many things compounding. When we get involved, it is all too often already trouble: there is usually a server that is completely overloaded, and we try to help. In many cases, the first thing we do is make sure there is monitoring, so we can get some visibility into the server. So I just want to reiterate: we do need monitoring to be able to help you. And when we work on a problem with a country, we often try many things at once: we work on the server, we look at the hardware, we look at reducing the stress from analytics, and we look at making changes to the program to alleviate some of the pressure and make sure we have the most performant system we can.

So what I will do is go through six learnings we have made from real-world, very high-pressure situations. These six learnings have helped greatly in many cases and should be considered for all high-volume Tracker implementations. It might not be that you can, or should, implement all of them, for various reasons, but you should consider all of them. They are all in the document that was shared at the very beginning, and this is again a strong recommendation to go and look at that document: it is short and concise and contains valuable, hard-earned information.
It's very short and concise and contains valuable information hard-earned information So, um, the first First, um issue slash solution. I will look at video is the id generation and this is Something we have discovered that is that the random pattern If you're generating ids using the gesture and using the random pattern, this is very heavy Um, we recommend that anyone using the random pattern Migrates to a sequential pattern Uh, if you're going to migrate pattern, it it's um important to have a plan for Making sure that your ids will not overlap with your new pattern. So In my example here, we have simply added a prefix of a So that your new id pattern would be prefixed in some way if if if it's generated with the new New strategy of sequential Um, we um would also want to mention that um Earlier this year the sequential was also very very slow But we have greatly improved the performance of the sequential a generated patterns So one thing to reiterate from bob's presentation is that you should keep up to date with the latest point version If you're not able to use the later or latest point version then sequential will also be a problem You should also upgrade this this point version the next recommendation Learning i'll go through is the um that the standard working lists in the tracker programs are generally heavy If we have a closer look here with our magnifying glass the Default list if you open any tracker program is the any enrollment status list. So this is a essentially all enrollments in that program Um in in your list, of course, it's paged but still this might not be a list that is very useful For the clinician or whoever is working on records um, they will You should really ask yourself whether this list will bring any value to the users um one first aid Measurement that you can make if your server is struggling or if you realize this list is not Useful is to simply turn it off and that's done in the in the tracker maintenance And it's called display front page list. So if you turn this off Then it will look like this a user opening their child program We'll see the search form directly and we'll have to search for the record before working on it One downside is that if you turn off the front page list This will also be disabled in android which might not be desirable in all cases So this is something to keep an eye on if you turn the list off completely um, if you don't want to turn it off completely then building targeted lists is Is the recommendation Make targeted working lists that contains the Relevant tracking the instances for the use cases that your users will see in the day-to-day work Make useful shorter lists. This is a screenshot from the norwegian Norwegian COVID instance and To help a little bit with the translation and show some examples of of this instance The first list here is the indexes due for follow-up today. So this is a very central part of anyone working with COVID Tracing in Norway The other one is the where The list of notifications not sent. 
The next learning I will go through is that the standard working lists in Tracker programs are generally heavy. If we take a closer look with our magnifying glass, the default list when you open any Tracker program is the "any enrollment status" list, essentially all enrollments in that program. It is paged, of course, but it still might not be a list that is very useful for the clinician, or whoever is working on records. You should really ask yourself whether this list brings any value to your users.

One first-aid measure you can take, if your server is struggling or you realize this list is not useful, is simply to turn it off. That is done in the Tracker maintenance settings and is called "display front page list". If you turn it off, then a user opening their child programme, say, will see the search form directly and will have to search for the record before working on it. One downside is that if you turn off the front page list, it is also disabled in Android, which might not be desirable in all cases, so that is something to keep an eye on if you turn the list off completely.

If you do not want to turn it off completely, then building targeted lists is the recommendation: make targeted working lists that contain the relevant tracked entity instances for the use cases your users actually have in their day-to-day work. Make useful, shorter lists. This is a screenshot from the Norwegian COVID instance, and to help a little with the translation and show some examples: the first list here is the index cases due for follow-up today, which is a very central part of the work for anyone doing COVID tracing in Norway. The next is the list of notifications not sent, which is a specific work task that someone on the team might have. We have another one for assigned tasks, meaning the COVID cases with an assignment to the current user, and another for unassigned tasks. As you can see there are others, but these are examples of lists that are short and serve specific use cases.

My next slide is about database indexes, because we know that searching on non-unique tracked entity attributes is heavy; text comparisons are heavy. If you look at the search form for the child programme, the unique ID at the top is fast, so any attribute that can be unique is a really big advantage if it is unique. The non-unique items, like last name and first name, are much heavier, and you should add btree indexes for these attributes. This is described very concisely in the linked document as well, including the SQL to add the indexes. They are fairly quick to add and they have a very big effect. These indexes should be added on the most commonly searched tracked entity attributes.
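For reference, this is roughly the shape of such an index. The table and column names below are, as far as I know, the standard ones in the DHIS2 schema, but please take the exact statements from the performance guide linked earlier rather than from this sketch, and test on a copy of your database first; the index name and the attribute ID are placeholders.

```
-- Sketch of a btree index to speed up searches on one frequently searched, non-unique attribute.
-- 12345 is a placeholder for the internal id of the tracked entity attribute (e.g. first name);
-- take the exact recommended statements from the tracker performance guide for your DHIS2 version.
CREATE INDEX in_teav_firstname_lower
    ON trackedentityattributevalue USING btree (lower(value))
    WHERE trackedentityattributeid = 12345;
```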
Another small mitigation concerns very broad searches, which we see sometimes and which are a bit of a bad habit on the users' part. If they try a very broad search, like putting just the first name "A" and clicking search, it produces a very long list of results: not useful, and also heavy to load. So we recommend you have a look at the setting called "maximum number of tracked entity instances to return in search", under the program details in Maintenance, and set this number to something sensible, for example 10. That produces a user experience where, if someone searches too broadly and gets more than 10 results, they see a dialog like this, telling them to be more specific in their search. This maximum should be set both for the program and for the tracked entity type. It is easy to do, and effective at avoiding this bad pattern of users searching too broadly.

My last point to bring up is the blessing and the curse of the API: custom API queries. In some cases, custom scripts, apps and integration middleware are exactly what you need to make targeted use of your capacity and do everything in the most efficient way possible. But what we see sometimes is that these custom scripts also carry extra risk of being heavy and inefficient. That may be partly because the scripts are less tested: by now we have good procedures for performance testing, we have battle-tested the apps, and we have fixed many of the inefficient queries made by the Tracker Capture and Capture apps, for example, and by the dashboards. But when someone writes a new query, it might not be as well tested.

Some pitfalls we have seen that can have a big impact: skip paging, where it is tempting to turn off paging because it is easier to program, but it puts a high load on the server; the page count parameter, where even if you use paging, the server still has to count all the records in the database to determine the number of pages, so that is also something to avoid; and when you compare tracked entity attribute values in the API, "like" is much heavier than "equals", so you should always use equals if you can. So check for pitfalls: when you write an API query, look for problems and look for the most efficient way to make the query. But you also really need to monitor your system to see whether any problems in your live system are coming from one of the custom queries. Have monitoring in place, and keep an extra eye on custom queries.
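As an illustration of those pitfalls on the older tracked entity instances endpoint: the host, org unit, program and attribute UIDs below are all placeholders, and you should check the Web API documentation for your version for the exact parameter names.

```
# Heavy: paging switched off, total counting on, and a LIKE filter on a text attribute (placeholders throughout).
curl -u admin:district "https://dhis2.example.org/api/trackedEntityInstances.json?ou=OU_UID&program=PROGRAM_UID&skipPaging=true&totalPages=true&filter=ATTRIBUTE_UID:LIKE:anders"

# Much lighter: a small page, no total page count, and an exact-match filter on a unique value.
curl -u admin:district "https://dhis2.example.org/api/trackedEntityInstances.json?ou=OU_UID&program=PROGRAM_UID&pageSize=50&filter=ATTRIBUTE_UID:EQ:19830512"
```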
That was my last slide, so I will hand it over to Jaime on the Android team.

Jaime, you're muted. Oh, sorry. Can you hear me now? Yes? Thank you. So thank you very much, Marcus. For those of you who don't know me, I'm Jaime, part of the Android team, and we are aware that many of the COVID vaccination implementations are using Android devices. So in this set of slides I am going to cover some considerations that we believe you should take into account, because they can really improve the performance of your server. This is a checklist of the things I am going to cover, and when you do this analysis on your own system, what I would like is for you to be able to tick them all.

A very quick introduction to what I am going to talk about. What I would like you to remember when you do this assessment is that Android devices behave like a foresighted health worker who goes out into the field and decides to take everything with them, because they do not know what they will need when doing the work: all the forms they might need, all the labels they might need to attach to those forms, all the equipment, and so on. This is the key concept to remember, because Android behaves the same way, and when I say Android, I mean the application and the device interchangeably. Android will download as much as it might need, and the key concept is "might need": it has to be able to perform the work when it goes offline. Going back to the similarity with the worker going into the field, they cannot know whether they will be taking care of 10 patients, 100 patients or 1,000 patients, so they take everything with them. Android works the same way, and this concept applies to many of the things I am going to say now.

In terms of user access, one of the things we really recommend is that you have different users for web and for Android, and ideally even different users for the different use cases or org units these health workers are going to visit. Examples can be found in the document listed at the beginning, but basically, if you have one worker who is going to a specific hospital or a specific health post, you want to limit the amount of information that will be downloaded. So try to target and minimize the number of organisation units, programs and data sets you assign to that user, because, going back to my explanation at the beginning, at synchronization Android is going to tell the server: give me all the information, the data, the things I might need to perform my work, and it will take all of those things. Reducing them can have a huge impact on the server. And remember, this is one device; the moment you have many more devices, hundreds or thousands, these requests are multiplied accordingly, and the server has to handle all of that processing. At the bottom of all these slides you will see links that take you to the detailed recommendations, which you can read afterwards.

Another thing is auto-generated values, which are very common. Marcus has already covered random versus sequential, so I will not repeat that, but what I want you to remember is, again, much the same: with Android you will be downloading many things from the server, and if you are using reserved values, because you are using unique IDs or something like that, those get downloaded to Android too. Every time the Android app connects, it tells the server: I may be going offline, give me a hundred values (that is the default). It takes all those values, and again this multiplies across all your devices, so you may need to adjust it; the official documentation and the blog post give some examples.

Another thing I want to mention here is that it is important to understand how Android behaves when there are dates in these patterns. Today is the 21st of January, so if the pattern for your generated values includes the month, Android will download a hundred generated values for this month, January, and will only be able to use them until February, because in February they are marked as expired. So knowing how your implementation works, whether these health workers go into the field for long offline periods or are online the whole time, plays a role. For example, if today were not the 21st but the 31st of January, and you download a hundred or a thousand values because you think your workers will be out in the field without connectivity for a long time, then the next day, the 1st of February, all those values are marked as expired, the Android application cannot proceed, and it has to request more values from the server. Multiply that across several devices and you have the same problem over and over again. So when using dates like this, and there is some information on this in the post, I would say the worst case for offline work is using days in the pattern, then months, and then years. That applies to offline use; if you are working online the whole time, you will be loading the server every time you ask for these values, but at least you are not stuck in the field with expired values.
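To make that concrete, this is roughly the request each device ends up making when it tops up its reserved values. The host, credentials and attribute UID are placeholders, and the endpoint shown is the one I understand to be documented for generating and reserving values for a generated tracked entity attribute; check the Web API docs for your version.

```
# Each syncing device effectively asks the server to generate and reserve a batch of values
# for the generated (unique ID) attribute. Placeholders: host, credentials and ATTRIBUTE_UID.
curl -u admin:district "https://dhis2.example.org/api/trackedEntityAttributes/ATTRIBUTE_UID/generateAndReserve?numberToReserve=100"
# Multiply this by every device in the field, every time it synchronizes, and the cost adds up;
# the batch size can be tuned per Android user in the Android Settings web app mentioned next.
```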
As for sequential versus random, I will not go over it again, I think Marcus covered that pretty well before, but with random, depending on the version, you might see a huge performance impact.

So far I have been telling you how Android behaves. The good thing is that you can install an application on your server, the Android Settings web app, which allows you to tweak the little things I have been telling you about: the number of values to download, for specific users, for specific programs, and so on, as well as control the metadata and data sync process. I will not cover all of it because I don't think we have much time, but it is all very well documented, and in the document from the beginning we have included some examples. It is very difficult for us to understand every implementation, but we have given examples along the lines of: if you are working in an online environment with these characteristics, you probably want to set this setting to this value. Another good thing about this application is that one of the last items in its menu is testing for a specific user: you can put the user there, run the simulation, and see how much data that user would be downloading, which helps you understand whether it is a very heavy process or not.

I think I have a couple more slides. This one is about mobile device management, something we have been recommending for a while; there is a whole guideline that covers it. I wanted to include it here because, despite not impacting performance directly, it is important to know how it works and how it can help you. Mobile device management is basically a tool you can use to manage your devices. Not many implementations have it at the moment, but it can help with several of the points on this slide; let's focus on these ones. One of the things we have seen happen, and it usually comes with problems, is that every so often there is a new version of the application, and because we publish it on Google Play, your devices will probably auto-update. Then you are running a version you have not properly tested: maybe the new application is not working properly in your setup, or it includes things you were not able to test, or that you could not train your users on.
So somehow it might impact performance, and that's the reason we're mentioning it here: with mobile device management you can control the version of the app you deploy to your devices. Another thing that is not really about performance, but which may matter as well, is that with an MDM you can locate and track those devices, remotely wipe them, and so on. You could also limit, sorry, let me roll back: imagine you have an implementation with thousands of devices and all of these devices are hitting your server at the same time. Some graphs of this were shown at the beginning of the session. Imagine you see those peaks always happening at night; that means all your devices are synchronizing during the night. With the MDM you could play with this and stagger the load: say that devices one to one hundred push their information to the server at this time, the next group at another time, and so on, so you can balance things a bit.

However, we are aware that this is usually a costly solution and not many implementations can take advantage of it. If you cannot do this, at least one thing we think you should do is disable the auto-update of your DHIS2 Android application. By default, most Google Android devices have this option enabled, so when you install an application, in this case the DHIS2 application, it will come with auto-update turned on. This means what I was telling you before: sooner or later we will publish a new version of the app and all your devices will download it. That might have an impact on your implementation because you have not had the time to test it, or it might unfortunately come with a bug that affects how your devices perform and, in turn, the load on your server. So disabling auto-update is a good recommendation: it gives you time to test first, and once you have verified that the new version of the application works well with your setup, you can tell your users it's time to update, so they open the Play Store and click update, and that's it. Usually, once you have disabled auto-update, the setting persists across updates, so with a new version you will not have to do it again. We are aware that many implementations have users on their own devices that you do not control, but if you have manuals on how to install the application, it's good to include a small note saying: once you have installed the application, make sure you have disabled auto-updates.

And yes, this is the summary of the things I've been talking about. As I was saying before, try to remember that Android is going to behave like a far-sighted user that downloads everything before going into the field. Please keep that in mind when you do your analysis, then come back to this checklist and say: OK, I've done this, this, and this. I know the MDM might not be something you can apply, but at least make sure you can cross off the rest, and should you have any problem, don't hesitate to contact us in the community. That's all for me; I think now it's time for Rebecca. Rebecca, the floor is yours. Thank you. Hi.
So I'll just turn my camera on to say hello to everyone. I'm going to review a little bit of what we've done with the metadata packages to adopt some of the recommendations that have been made by the various product managers today, and then also talk a little bit about implementation strategies and how we can make the best use of the tracker and aggregate data models.

I'll start by reviewing the two main package resources we have for the COVID vaccine delivery use case. We have the core aggregate package as well as the electronic immunization registry (EIR) tracker package, and we actually designed these from the beginning thinking that components of these packages would likely be deployed together, and also adapted as they are implemented within a country. In particular, the COVID vaccine core aggregate package has a component that allows for daily reporting. Generally this is based on tally sheets of vaccines delivered through the many different sites in a campaign style, and it was designed very much on the basis of the successes in places like Bangladesh and Uganda, which had done this daily real-time monitoring at very large scale with campaign-style vaccine delivery for measles-rubella campaigns. So we feel pretty confident that this is a design that should work in a lot of different country contexts, even where the capacity and infrastructure for running tracker in real time and at national scale are perhaps still advancing. Next slide, please.

In terms of what we did to update this package, we applied a lot of the optimizations that Scott, Marcus, and, a little later, Olaf are talking about, in order to optimize the configuration. Some of those are adaptations to the definitions of the program indicators; there is a program indicator file, for example, that you can download and look at as a reference, or install on top of an existing configuration, and then go in, look at those program indicators, and make the comparisons with your own. One of the biggest changes we made in response to the analytics performance issues was to include a dashboard that supports the really key use case of monitoring the daily progress of the campaign. In some countries where tracker is really up to speed, they're able to capture all of that individual-level electronic data almost in real time, but sometimes the actual analytics have trouble keeping up. We worked through this and realized that we can serve the same kinds of dashboards, much lighter and at much less risk of crashing any large-scale system, through the aggregate data model, and the way we do that is by mapping these program indicators to a set of target data elements. Olaf will tell you more about the tools you can use to work out this approach, which essentially gives you a more performant daily monitoring dashboard that is served by the aggregate data model but is consuming the underlying tracker data. The last piece is that quite a bit of work had to be done to map those program indicators to that aggregate-style dashboard; that work has largely been done for you and is also contained within that program indicator file.
So even if you have a customization or adaptation of the tracker program, there are components of this package that you can use as a reference: look through the documentation to identify some of the key changes and try to apply them to your own configuration. And some components of these files are non-breaking changes you can take as they are; you can adopt the aggregate components in your system as well as the updated program indicators. Next slide, please.

One of the really key issues we wanted to talk about is being able to assess, within your country context, whether your tracker implementation is really ready for you to use the tracker data as the source for your real-time monitoring. A lot of people already understand these points, and they're not prescriptive, just things to think about: are all of your vaccination sites equipped with an adequate number of usable, up-to-date devices? Do they have stable internet connectivity? Are there a sufficient number of trained data personnel across all of these sites? If you don't have those items in place, it is unlikely that you will get all of that individual-level data in real time, so you might need to start thinking about some parallel processes. This tends to happen in many countries: we're scaling up very quickly for COVID-19 vaccines, but we know that with large-scale tracker programs it can take time for the human resources and infrastructure to catch up, so please do have a plan. You can think about monitoring the lag time and the completeness of data between the tally sheets and your tracker registry to start assessing how ready the country implementation is. Also think about the server hosting and monitoring functions: are they well staffed? Do you have the human resources, knowledge, and capacity you need to keep making these real-time tweaks? If not, that's okay; that's why DHIS2 has a really flexible data model, and we think there are opportunities to still achieve real-time monitoring goals while you scale up the tracker system. Next slide, please.

I will close with a couple of examples to set up what Olaf is going to share with you about the use of the aggregate data model alongside tracker. In this first scenario, we assume you're in a country where tracker is really functioning quite well at scale.
A place like Sri Lanka, which was mentioned earlier, where nearly all of the population is captured as TEIs, might be an example. In this case you might take the tracker program, with some of that data feeding your daily campaign monitoring indicators, and simply transform that tracker data into aggregate form to serve it to users in a more performant way. But we also know that there is generally some use of the aggregate data model alongside, in terms of basic daily stock reporting from the sites as well as capturing target population data. Next slide, please.

The second scenario we wanted to remind you about, which is very common, is that there might be parallel reporting happening while the EIR tracker program scales up. In this scenario you might have your COVID EIR tracker sitting in a separate instance, running at partial scale: maybe it covers some large urban centers but not all of your rural geographies, and maybe data entry is lagging by a few days. That's probably okay; you can still take advantage of SMS reminders, generating certificates, and data triangulation, but maybe that tracker data is not complete enough to use as your source for real-time monitoring. In that case it is equally feasible for vaccine sites to submit their daily reports, those tally sheets, in parallel, and to let a lot of that analysis happen on the aggregate side. That can reduce some of the issues you would otherwise have with analytics performance on your tracker database. So with that, I do encourage you to look at some of the new resources we've put up on the metadata package downloads page, and really think about how you're ensuring that the DHIS2 design in your country is appropriate for the level of infrastructure and operational resources available. Over to you, Olaf. Thank you.

Thanks, Rebecca. I'm Olaf, part of the implementation team in Oslo, and I'll be talking here at the end about this integration of the tracker and aggregate data models. As Scott has already covered, we've seen that using program indicators in a lot of dashboards and analytics is very heavy on these large-scale instances, and we've also seen that running analytics on the same type of data through the aggregate data model can be up to 100 times faster. So the idea with integrating the tracker and aggregate data models is to produce key information for users on their dashboards in a way that is less taxing on the server and the application. As we've already mentioned, this is about taking the key program indicators you have in your tracker program and mapping them to aggregate data elements, then generating aggregate values, for example every day, from those program indicators, saving them as aggregate data values, and using those to drive simple aggregate-based dashboards. As many of you know, there is no built-in support within DHIS2 itself for this sort of transformation from program indicators to aggregate data elements, but what we do have is some guidance, which we have a link to there; a small sketch of what one transfer step can look like follows below.
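To illustrate the basic idea, here is a minimal sketch of one transfer step: read a program indicator value from the analytics API and save it as an aggregate data value through the data value sets API. This is not the interoperability team's tool, just an illustration; the UIDs, period, base URL and credentials are placeholders, and it assumes you have already decided which aggregate data element the program indicator maps to.

```python
# Minimal, illustrative tracker-to-aggregate step: one program indicator,
# one org unit, one daily period. Placeholder UIDs and credentials throughout.
import requests

BASE_URL = "https://dhis2.example.org"
AUTH = ("admin", "district")

PROGRAM_INDICATOR = "piDosesGiven"    # placeholder: tracker program indicator UID
TARGET_DATA_ELEMENT = "deDosesGiven"  # placeholder: aggregate data element it maps to
ORG_UNIT = "ouDistrictA"              # placeholder org unit UID
PERIOD = "20220120"                   # daily period (e.g. yesterday, for a daily job)

# 1. Query tracker analytics for the program indicator value.
analytics = requests.get(
    f"{BASE_URL}/api/analytics.json",
    params={"dimension": [f"dx:{PROGRAM_INDICATOR}", f"ou:{ORG_UNIT}", f"pe:{PERIOD}"]},
    auth=AUTH,
).json()

# Use the header metadata to locate the value column instead of assuming order.
value_idx = next(i for i, h in enumerate(analytics["headers"]) if h["name"] == "value")
rows = analytics.get("rows", [])
value = rows[0][value_idx] if rows else "0"

# 2. Import that value as an aggregate data value against the mapped data element.
payload = {"dataValues": [{
    "dataElement": TARGET_DATA_ELEMENT,
    "orgUnit": ORG_UNIT,
    "period": PERIOD,
    "value": value,
}]}
resp = requests.post(f"{BASE_URL}/api/dataValueSets", json=payload, auth=AUTH)
print(resp.json())
```

In practice you would run something like this on a schedule, after the tracker analytics have been generated, and loop over all mapped program indicators, org units and periods.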
That guidance is in the implementation guidance section of our documentation site, and it explains the steps you need to go through if you want to develop a custom solution for moving this data from tracker to aggregate. The interoperability team is also working on some scripts and tools that you can use as a starting point for this kind of automated transfer of data from tracker to aggregate, and that will be available quite soon. We're doing some testing now on the performance and making sure the data is accurate, so that what you get from the program indicators is what you get through the aggregate data model, and so on. This tool the interoperability team is working on requires, as I mentioned, that you have a mapping between program indicators and data elements; in the metadata packages we provide, this mapping is already done, as Rebecca mentioned.

A key thing to highlight here is that this tracker-to-aggregate transfer is closely linked to the analytics processes in DHIS2. Doing the transfer requires that the analytics scheduling you usually run is turned off, and analytics generation instead becomes part of the tracker-to-aggregate process: you first generate your tracker analytics, then you move your data from the program indicators into aggregate data values, and then you run the aggregate analytics.

For the script the interoperability team is working on, the key point is that it breaks the task of transferring data into smaller tasks, which is more scalable. The guidance explains how to extract and how to import the data, but when you reach the scale that many countries now have, in terms of the number of events and the number of data values, this can't really be done as one operation that pulls all the tracker data out and puts it all into the aggregate data model. So the script helps break the process down into smaller batches and execute them in parallel to be more efficient; there is a rough sketch of that batching idea below. The team has a working version of this now, but they're still doing some tests, trying to find the right parameters to make it as fast and stable as possible.
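Purely as an illustration of the batching idea, and not the interoperability team's actual tool, here is a sketch that splits the transfer by org unit and runs a few batches concurrently. It uses the same placeholder UIDs and credentials as before and assumes that district-level org units are a sensible batch boundary for your data volume.

```python
# Rough sketch of batching the tracker-to-aggregate transfer by org unit and
# running batches in parallel. Placeholder names throughout; tune for your setup.
from concurrent.futures import ThreadPoolExecutor
import requests

BASE_URL = "https://dhis2.example.org"
AUTH = ("admin", "district")
PROGRAM_INDICATOR = "piDosesGiven"    # placeholder program indicator UID
TARGET_DATA_ELEMENT = "deDosesGiven"  # placeholder aggregate data element UID
PERIOD = "20220120"
DISTRICTS = ["ouDistrictA", "ouDistrictB", "ouDistrictC"]  # placeholder org unit UIDs

def transfer_one_district(org_unit: str) -> dict:
    """Pull the program indicator values for one district and import them."""
    analytics = requests.get(
        f"{BASE_URL}/api/analytics.json",
        params={"dimension": [f"dx:{PROGRAM_INDICATOR}",
                              f"ou:{org_unit}",
                              f"pe:{PERIOD}"]},
        auth=AUTH,
    ).json()
    value_idx = next(i for i, h in enumerate(analytics["headers"]) if h["name"] == "value")
    data_values = [
        {"dataElement": TARGET_DATA_ELEMENT, "orgUnit": org_unit,
         "period": PERIOD, "value": row[value_idx]}
        for row in analytics.get("rows", [])
    ]
    if not data_values:
        return {"orgUnit": org_unit, "imported": 0}
    resp = requests.post(f"{BASE_URL}/api/dataValueSets",
                         json={"dataValues": data_values}, auth=AUTH)
    return {"orgUnit": org_unit, "status": resp.status_code}

# Run a handful of batches in parallel; keep max_workers modest so the transfer
# itself does not overload the server.
with ThreadPoolExecutor(max_workers=4) as pool:
    for result in pool.map(transfer_one_district, DISTRICTS):
        print(result)
```

Splitting by org unit (or by period) keeps each analytics query and each import small, and the number of parallel workers is something you would tune against what your server can tolerate.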
The last thing I wanted to end on is what Mike promised at the start: to say a bit about how we can support you if you're starting to see performance issues. The first thing, of course, is to look at the guidance we've shared here in the presentation and linked to a couple of times already. I really encourage you to look at this before you start having problems; do it as soon as you start thinking of implementing a large-scale tracker system, so that you're aware of all the potential issues and solutions. We also made a small self-assessment, or checklist, which is essentially based on the guidance; the idea is to help you tick off all the boxes and make sure you've actually done everything you can, based on the guidance we have, to ensure good performance. And we have set up an email address through which you can reach the relevant people in the team here in Oslo who can help with performance issues, to help troubleshoot and advise if you're having trouble. So even if you have the WhatsApp number or email address of individuals in the team, please use this email address for performance issues, so that we can coordinate how we support countries and make sure that what we learn in one instance can be applied elsewhere. That's what I wanted to cover, so I'll give the word back to Mike.

Great, thank you. Yes, we've been handling a lot of questions as they come in, either through the chat or through that link to the community of practice, so hopefully you're monitoring that and some of your questions have been answered there. One thing I would say is to remember that this is meant to be very timely information. Some of the recommendations we have, for example about real-time analytics, or about not using the random or sequential patterns for generating unique IDs, concern things that we know right now do not perform at this very large scale, but that doesn't mean it will always be that way. We're also learning a lot from the implementations that come along and are slating additions and improvements for software development. So again, we're planning and hoping to be very communicative with you all as a community, sending you information as things change and as we identify other performance challenges. We'll continue to share this information, we'll be updating the documentation you've seen, and we'll of course be releasing new patches and releases of DHIS2, which will continue to contain improvements. But we really wanted to get this information out to you all now, as we know that many programs are scaling or are slated to start scaling in the coming months.