All right, everyone. Good afternoon and welcome to the fun part. This is the most fun day of the year, obviously, where we're going to talk about all the nice and cool things we have been building in DHIS2 over the last year. In this presentation, we're going to try to be a little bit interactive, and we're going to have a lot of questions. You can go to the community of practice, find the annual conference community there, look for the "What's new in DHIS2" thread, and ask questions there. You can also type questions right into the Zoom chat, and we will try to moderate and gather some of them. Scott Russpatrick is joining me for this session and will be moderating the questions, and we will try to answer some of them in the last 10 to 15 minutes. If we don't get time, we will of course try to answer those questions directly in the community later. All right, with that, let's jump straight to it. Today we're going to talk about all the cool new things within analytics, Tracker, and the platform. We're not going to touch on Android; there's going to be a session for Android later in the day, so you can stay tuned for that. In this session, we're going to focus mostly on what's new in the 2.33, 2.34, and 2.35 releases. The 2.35 release is coming out in about two weeks, and I'm going to be very adventurous and demonstrate from the development branch of the software today, so that we can have a quick look at all the new goodies coming out in a couple of weeks. So let's take a quick step back and look at what came out in 2.34. One of the biggest things we did in 2.34 was pivot tables in the Data Visualizer app. Previously, we had a separate app for pivot tables, and in 2.34 we managed to integrate that straight into the Data Visualizer app. The benefit there is, of course, that there's now just one app to deal with.
We don't have two apps that people have to learn; there's just one. The pivot table is essentially integrated as a new visualization type directly within the Data Visualizer, meaning that you can easily switch back and forth between the different types of visualizations. The pivot table is also a lot more scalable now: you can load at least three times more data into the pivot table, to avoid the kinds of out-of-memory errors and browser crashes we have seen previously. And this also comes with what we think is a very important little feature that we call dimension recommendations. Let's have a quick look and see how this works. We can start by going to the Data Visualizer application and, as usual, pick a couple of data elements. By default, this is now rendered as a chart. The nice thing is that we can now go to the visualization type selector in the top left and just select the pivot table. So the pivot table is just another visualization type, together with column, stacked column, bar, area, and all the other visualizations, which is quite nice because everything is integrated in one place. We can go here and pick some more organisation units if we want, and click update. We can drag the org units down here, reload, and so forth. So this is now just another visualization type within the Data Visualizer. If you look closely, you will see these nice little green dots here on the left side. The green indication is what we call the dimension recommendations: based on the data elements you have selected, it recommends which dimensions are relevant for those data elements. This should resolve some of the problems we see where people select data elements and then don't really understand which categories and group sets align with those data elements. So that should be a helpful feature. All right.
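Behind the scenes, visualizations like this fetch their data from the analytics endpoint. As a rough sketch, building such a request might look like this in Python; the UIDs below are illustrative placeholders, not identifiers from any particular database:

```python
from urllib.parse import urlencode

def analytics_url(base, data_items, periods, org_units):
    """Build an /api/analytics query with dx, pe and ou dimensions."""
    params = [
        ("dimension", "dx:" + ";".join(data_items)),
        ("dimension", "pe:" + ";".join(periods)),
        ("dimension", "ou:" + ";".join(org_units)),
    ]
    return f"{base}/api/analytics.json?{urlencode(params)}"

url = analytics_url(
    "https://play.dhis2.org/demo",
    ["fbfJHSPpUQD", "cYeuwXTCPkU"],  # two data element UIDs (placeholders)
    ["LAST_12_MONTHS"],
    ["ImspTQPwCqd"],                 # a root org unit UID (placeholder)
)
print(url)
```

The same query shape serves charts, pivot tables, and maps; only the rendering differs on the client.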
In 2.35 we actually did a lot of new improvements on the visualization side, so there are a lot of new options available now for charts and visualizations. The first one is what we call dual-axis charts. Many people have come to us and said there's a problem when you try to combine things which have different units or scales. One very good example would be when we try to combine data elements and indicators, because obviously percentages and raw numbers don't look very nice on the same chart. Let's have a quick look and see how this works. First of all, I'm going to select a couple of indicators and create a quick chart. This looks nice so far. Then I'm going to pick a couple of data elements, so we have ANC 1, 2, 3. Now we have both indicators and raw data, and this doesn't look so good anymore: we have the percentages down here and the raw data up here, and the chart is not very useful. To handle this, we can now go to the options and then the series dialog. We have a new series dialog within the options, where I can select some of the data items I've picked and put them on another axis. We can see here that we have axes 1, 2, 3, and 4, so we now support up to four axes. I pick a couple of these, and we can see that the chart is immediately more useful. We can now look at both data elements and indicators in the same chart like this. We have a secondary axis over here on the right, together with the regular first axis on the left side, and the colors also align nicely with the axes. Another very popular feature request was what we call combination charts: the ability to combine different types of visualizations in the same chart, because you sometimes would like to have both lines and columns, for instance, in the same chart.
To demonstrate this, we're just going to continue with the same example. If we go to Options and back to Series, we can now see that for each series, or data item, there's also a visualization type. So we can say that for these data items I would like to use columns, and for these I would like to use a line. So far we only support columns and lines, but we plan to add more in the future. I click Update, and we can now see that we can combine both columns and lines in the same chart, which is quite nice. This is also very flexible in the sense that you can select exactly which data item you would like to have as a line and which one as a column. All right. And we have already touched on this: there's a series management dialog, which we just looked at, which is quite handy, where we can select both the axis to put the data item on and the visualization type. All of this can be controlled in the same chart. All right. Another very popular request was the ability to have two categories on the same chart, because sometimes you would like to have more than just one category. You could have, for instance, two categories and want them displayed in the same chart. If you look very closely at the screenshot, you will see that we have facilities, or actually districts, on the lower category down here, and then we have quarters up here on the upper category. So districts down here and quarters up here. Let's see how we can do this. It's actually very simple, because we have this nice drag-and-drop area up here that makes it very easy to arrange things. Let's build a quick chart: we can select some indicators, select, let's say, the last four quarters, and then pick, let's say, three different districts like this. And now I can very easily drag my districts down to the category, and click Update.
And it shows me a very nice chart where I have the quarters up here and then the different districts down here on the lower category axis. The cool thing is that I can also, of course, continue to add filters. So if I would like to look at facility ownership, say public facilities, I can add this as a filter, and then, of course, I'm only going to show public facilities, like this. So we can add more information to the same chart. All right, moving on. We also added the ability to have multiple color sets per chart. Many people have come to us and said they would like to customize the colors they see in the chart. So far we have essentially had one color set, and there was a need for more; we have now added, I think, five or six color sets. Let's see how it looks. I'm going to reopen the chart and go to Options, then Style. Down here we now have a heading called Color Set, with default, bright, dark, gray, and so on. We also support patterns, as well as colorblind-optimized color sets. This is to support people with visual impairments or color blindness and make the charts easier to read. It's also useful for printing: sometimes when you print, it's easier to have these kinds of patterns or higher-contrast colors. If I change to bright, you can see that everything changes to a brighter color set. We're also open to adding more, of course, so if you have specific needs for specific color sets, we're open to including more of these going forward. OK, another one for visualization: text styling. This is very common and something you see in most BI tools out there today, so we added it as well. We now have the ability to style the text, both for the chart title, the subtitle, the legend, the horizontal axis, and the vertical axis. If you look closely here, you can see that the title is bigger than it usually is, we have italics on the subtitle, and so forth.
Again, this is very easy to use. We can open the chart and go to Options, then Style. Under Axes, there's also the ability to customize the vertical and horizontal axes. Under Style, we can change the legend key; we can say, OK, I want a bigger legend key. For the title, I would like to make it extra large and bold, and I can also change the color if I want. For the subtitle, we can add a custom subtitle and make it large and italic. And when we reload, we can see that the title is bigger, bold, and italic. We plan to improve this even further by the time of the 2.35 release. All right, we also made improvements to gauge charts. The gauge chart is maybe a little bit underutilized, but we have tried to make it more useful now by adding a few things such as support for legends, as well as a target line and a baseline. What I mean by legends is essentially that the color will change based on the value. As you know, indicators in DHIS2 can have legend sets, meaning that we can have red for poor performance, green for high performance, orange like this one for medium performance, and so forth. If you enable legends, you can now see that the color of the gauge chart changes as the value changes. We can also set the baseline and the target line, and as you can see here on the slide, the different lines become visible on the gauge chart. This is a very effective way of making the gauge chart more useful: if you have key performance indicators, you can actually display them now as gauge charts. All right, switching gears a bit. This has been there for a while now, but I think in 2.33 or 2.34 we added the ability to add filters to a dashboard. This was also a top request that came from many different people. And this is nice because it allows you to do cross-dashboard-item filters: the filter applies to everything within the dashboard.
We also removed the restriction of having only relative periods and what we call user organisation units: you can now also use fixed periods and explicitly select organisation units. Let me show you quickly how that works. If we go back to the dashboard, we can navigate quickly back here. When the dashboard loads, we can see there's a new button up here called Add Filter, so we can decide to add a filter here. The cool thing is that while in the first release we just had period and org units, we now also have what we call your dimensions, which are essentially custom dimensions. So any type of dimension can now be used as a filter for the dashboard. If I would like to apply, let's say, a period filter, I can simply come here and select both fixed periods and relative periods, so both of them. I can also go to org units and select org units to override the filter, and I can, of course, switch between user org units and explicit org units. I can also select dynamic dimensions. So let's say, again, I only want to look at public facilities: I click confirm, and we get this little box up here that indicates that public facilities have been selected as a filter. The cool thing now is that the dashboard is going to re-render with data coming only from public facilities. And of course you can add any number of filters to the dashboard. All right, another big one was what we call single-value dashboard items. This is quite popular; you see it in many other tools. Especially for key performance indicators, people would like to have one big number straight on the dashboard to show the key metrics for their country or organization. The way we implemented this was essentially to introduce a new visualization type in the Data Visualizer. So in the visualizer, we can go here.
It's easy to set up: you just pick one indicator and then say, OK, I would like to have a single value as the visualization type. I click update and it renders like this. I can save it as a favorite and put it on the dashboard like any other visualization. So it's just another visualization type that you can include straight on the dashboard. All right, another very popular request has been the ability to print dashboards. People build beautiful dashboards in DHIS2 and then would like to download them to PDF so they can print them or distribute soft copies over email or Slack or whatever they want to use. So if you come to the dashboard now, there's a print button that will open it in a print-friendly format, and then you can use the native print functionality of the browser to print it to PDF. Again, very simple to use. We go back to the dashboard, we can maybe remove this filter, and we say print. Then maybe you need to let the dashboard load first, just a few seconds. We say print, and then you can choose between dashboard layout or one item per page. Dashboard layout will try to follow the layout as you have it on the dashboard, with potentially multiple items on the same line, while one item per page will do as it says and put one dashboard item on each page of the printout. From there it's easy to save as PDF in the browser and download it to your local computer. All right, switching gears a little bit to talk about the aggregation logic in the system. Another very popular request has been the ability to compare time periods in indicators. Up to now, indicators have really only worked on what we call the aggregation period, and by aggregation period I mean the period that comes into the analytics API.
So if someone requests data for, let's say, quarter three of 2019, the system will calculate the indicator and return data for the Q3 2019 period. In 2.35, we now introduce what we call period offsets. Period offsets essentially mean the ability to go back or forward a number of periods relative to the aggregation period, so they allow you to compare time periods within an indicator formula. This means that when you define the indicator, you can do as you see here on the slide: you can reference a data element or an operand and then append a period offset of minus one, minus two, minus three, or plus one, two, three. This essentially rewinds or fast-forwards relative to the aggregation period that came in with the query. One example where this can be very useful is logistics, where we can now easily calculate consumption of commodities. For instance, we can create an indicator that looks at stock levels: we can ask, what was my stock level three months ago? How much was it two months ago? And then compare that with how much stock on hand I have this month. By doing that, we can easily calculate consumption, and from there we can calculate stock on hand in terms of months: for how many months do I have stock on hand before I get a stockout? Other use cases would be trends and differences between this period and the last. We can easily calculate how we performed this month versus last month, and then present some kind of indication of whether things are getting better or worse. Combined with, say, legends and charts, we can very easily display whether we are improving or declining. Another use case would be progress against targets.
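The offset arithmetic behind the stock-consumption example can be illustrated with a small Python sketch; the period keys and figures are made up, and this only simulates the idea, not the actual DHIS2 expression engine:

```python
def offset_period(period, offset):
    """Shift a YYYYMM period key by `offset` months (negative = back)."""
    year, month = int(period[:4]), int(period[4:]) + offset
    year += (month - 1) // 12
    month = (month - 1) % 12 + 1
    return f"{year}{month:02d}"

# Illustrative monthly stock-on-hand values keyed by period
stock_on_hand = {"201912": 140, "202001": 120, "202002": 95, "202003": 70}

def consumption(period):
    """Consumption = previous period's stock minus this period's stock,
    i.e. a period offset of -1 compared against offset 0."""
    return stock_on_hand[offset_period(period, -1)] - stock_on_hand[period]

print(consumption("202003"))  # stock consumed during March 2020
```

In a real indicator, the same comparison would be written directly in the indicator expression with the period offset modifier, and the analytics engine would resolve the shifted periods at query time.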
If you enter targets for future periods, we can then easily compare current performance against the targets entered, for instance, for next year. So we think this can really help DHIS2 and make it more useful, especially within LMIS and logistics. All right. Another big one that we did in 2.34 was what we call real-time analytics, also known as continuous analytics. Historically, DHIS2 has had a weakness in the sense that very often the data you enter does not become available for analysis until the next day. The normal way of doing things is that people enter data during the day, the analytics tables run during the night, and the data becomes available for analysis the next morning. In 2.34, we have a new solution for this that we call continuous analytics, also known as real-time analytics. This functionality does not rebuild the entire analytics tables; instead, it only looks at the latest data that has been entered. So instead of doing a complete update of the analytics tables, it only processes the data entered since the last time analytics ran. This can be configured in seconds: you can configure the number of seconds to wait between these updates. And since it only looks at the diff, the difference since the last run, it's much quicker. We think that on small systems the delay between data entry and analytics can now get down to seconds; on larger systems it's probably down to a few minutes, so we have seen two, three, four, five minutes, something like that. In any case, it's going to make the gap between data entry and analytics much, much shorter. This can be configured as a new job in the Scheduler app: if you go to the Scheduler, there will be a new job type called continuous analytics table.
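Conceptually, the job alternates between frequent delta updates and one full rebuild per day. Here is a toy Python model of that scheduling decision; the class and method names are invented for illustration, since the real job is configured in the Scheduler app rather than written by hand:

```python
import datetime as dt

class ContinuousAnalyticsJob:
    """Toy model: frequent delta updates, one full rebuild per day
    at a configured hour."""

    def __init__(self, delay=10, full_update_hour=1):
        self.delay = delay                    # seconds to wait between delta runs
        self.full_update_hour = full_update_hour
        self.last_full = None                 # date of the last full rebuild

    def plan(self, now):
        """Return 'full' once per day at/after the configured hour,
        otherwise 'delta' (process only data changed since last run)."""
        due = self.last_full is None or (
            now.date() > self.last_full and now.hour >= self.full_update_hour)
        if due:
            self.last_full = now.date()
            return "full"
        return "delta"

job = ContinuousAnalyticsJob(delay=10, full_update_hour=1)
```

The delta runs are cheap because they only touch changed data, which is why the delay can safely be set to seconds.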
You probably need to disable your existing analytics table job, and then you can create a new one with the job type called continuous analytics table. As you can see here, you can give it a name and a delay in seconds, which refers to the wait between the different runs of this job. Then there's something called full update hour of day, which means what time to run the full update: it still runs a full update once per day, and you can schedule this to run at 1 a.m., 2 a.m., midnight, whatever. All right. Switching gears a bit, let's move over to the Maps application. When it comes to maps, we also have a lot of exciting improvements. The maps team has been doing really, really good work, and we have tons of improvements in the Maps application. First, a couple of them: we have much faster rendering now. We're using WebGL, a technology that gives you hardware-accelerated graphics, so it uses the hardware of your machine much better than the previous plain JavaScript libraries. This has resulted in much faster rendering of maps: they load and render much faster than before. We also have much more seamless zooming. When you zoom in, you don't have these huge jumps between zoom levels; the steps are much smaller now, which appears seamless. We also added support for Bing Maps as the base map. We had a little bit of trouble with Google Maps, as they now require credit cards and it's not free anymore, so we decided to switch over to Bing Maps. We now have full integration of Bing as the base map, with both satellite imagery and road maps. We also added support for full-screen mode: if you press F11 now, you will see that we go into a full-screen mode, which is very useful for things like presentations.
We also plan to add this to the Data Visualizer app. So let's head over to the Maps application and see how things have improved over there. We can go to Maps. I created a little favorite here that I can open, just to demonstrate the performance gain. As you can see, things are loading quite fast: that was a map with, I think, 30 or 40 chiefdoms and more than 1,000 facilities, loading in less than a second. So things are loading fast, and scrolling is also much smoother. As you can see, we don't have these huge gaps; it's very smooth and nice. We can change the Bing base map: we are looking at the Bing road view, as they call it. We have the Bing dark view, which can be nice for certain types of maps, and we have Bing aerial, meaning satellite imagery, also available with labels. So this is nice and fast. One major improvement coming out in 2.35 is what we call bubble maps. Previously, when it comes to thematic maps, we essentially only had the ability to display values using colors, by which I mean coloring the different polygons of the map, such as districts and chiefdoms. For data which is not percentages or coverage, it's sometimes more useful to look at bubbles to indicate the size or number, rather than just the color, because it gives you a stronger visual sense of the situation. Let's have a quick look and see how this works. I'm just going to load an existing map that I have, and change to a data element; I'm just going to pick one here. As you can see, these maps are sometimes not that great, right? If you have some very high values and some low values, it becomes kind of skewed like this and doesn't give you the full picture. We can of course come here and change from equal intervals to equal counts, which makes it a bit better.
But now I can also go to style and say, OK, I would like to change to a bubble map as opposed to a choropleth map. Choropleth essentially means colorizing the polygons; the bubble map will instead display the values as bubbles, and now we get a completely different visualization of the map. I can also go to style and change the low radius and the high radius. If I bump up the high radius a little bit, say to 30, I now get a nice visualization that indicates where we have the highest number of cases or visits or deaths or whatever we are looking at. All right, another major improvement is what we call the event data table. We already have a data table for thematic maps; we now have it for event maps as well. Let me quickly show you how that works. We can go down to the layers and build an event map. We select a program, say the Inpatient Morbidity and Mortality program, and zoom in, like this. Then we can also come here and say, OK, I would like to show the data table. Now we get a data table that lists the underlying events for this map, and you can, of course, filter on it like we can with the data tables for other types of maps. OK, no-data handling. We can now display "no data" better than we could before. In the style section, you can now explicitly select the color you would like to use for no data. Previously, we just didn't display anything for no data, which made the map look very incomplete. We also have non-overlapping labels, which essentially means the labels don't look messy anymore. Before, if you had a facility layer, the labels were very often rendered on top of each other, making it more or less impossible to read the names. We now have much more intelligent rendering of the labels so that they don't conflict and are not overlapping anymore. We also have an event status filter.
If you come to the event layer, there's now a filter for the status of the event. Going back to this one, there's an event status filter here, so we can filter on active, completed, scheduled, overdue, skipped, and so forth. We talked about WebGL and map performance: since we're using WebGL, things are loading a lot faster than before, and we can load a lot more geographical features on the map than we could previously. Donut charts for map clusters is another one; there are a lot of goodies here for events, as you can see. We can now render event clusters as donut charts, as opposed to just single-color points. Let me show you how this works very quickly. We can hide this one, go to style, and say style by data element. I can say, OK, I would like to style by gender, for instance. I get the colors that I can style by, and I click update. As you can see, the map now renders clusters as pies or donuts, as opposed to individual single-color points. This is quite nice because now we can read more out of the data: we can look at the distribution between genders, between age groups, between different types of data elements, and so forth, and display essentially more information on the map, making it a more telling map than before. OK, we can also rotate the map: if you press control and drag, we can now rotate the map. You can try this out yourself. Split map views is a nice one; I have to show you this one too. Instead of displaying just one map at a time, we can now render multiple time periods on the same screen. Let me go back here and reopen, sorry, not the chart, but the map. As you can see, we have one map here for last year. I can go in and edit, go to periods, and instead of saying this year, I can say I would like to look at the last six months.
That gives me the option under displayed periods to show it as split maps, so I can split the map into multiple views. When I click update, this renders as six different maps, so we can now easily see all of the time periods for the last six months on the same screen and see how things have evolved over time. Talking about display periods, we also now have a timeline feature. This is quite cool: we can essentially show the map as a video, as a timeline, to visualize the evolution of the data over time. By selecting timeline, I get this nice little YouTube-ish play button down here, and I can click play. As you can see, we have March, April, May; this is the timeline coming down here. When I click play, it shows me the different maps on a timeline, like this. Quite cool. All right, multiple filters. We also added the ability to add multiple filters: if you go into the filter tab in the layer dialog, you will see that we can have any number of filters now. Before, I think you could only have one filter, but now any number of filters can be applied to a map. All right, I think that concludes the part where we talk about frontend analytics. A lot of exciting updates, as you can see. I'm sure we have a lot of questions, and as mentioned, you can ask them in Slack, in Zoom, or in the community of practice, and we will try to answer some of them later and some of them right now. Now, talking a little bit about the backend or admin side of things, we did something which we consider very important in 2.34, where we introduced what we call the server-side analytics cache. We have seen that a lot of implementations are getting a lot of data now: the data volume is growing, and there are more and more requests.
We have seen more and more requests pounding the analytics API, and there's a need for faster responses. So what we did was introduce what we call the analytics cache on the server side. We already have analytics caching on the client side, where the web browser caches the data, but from 2.34 we have the ability to cache data on the backend as well. This really helps, because now multiple clients can essentially use the same cache, so the cache hit ratio goes up quite a bit, and we have seen dramatic improvements in instances that use this. So we really recommend people to enable it. Another thing is that this cache is security-enabled. Previously, people have set up caching through nginx by adding a header and a cache directive in nginx. The problem there is that nginx sits in front of Tomcat, meaning that people can guess the URL and get served the data straight from nginx. This cache instead sits behind the DHIS2 security layer, meaning that the cache is security-enabled and security is as good as before. This is linked to the system setting called cache strategy, so the only thing you need to do is go to the system settings, find the cache strategy setting, and enable caching there; you can set it to a couple of hours, until next morning, up to two weeks, as you prefer. It's integrated with the system setting, so there's nothing else you need to do, and we strongly recommend enabling at least some caching if you have a busy system. Okay, let's move on to talk about Tracker and events. In Tracker, there have also been a lot of improvements, especially in the new Capture application. One major change that we did in the system and the data model for 2.34 was to allow assignment of events to users. We have seen a number of use cases where people would like to associate users with events.
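As a sketch, an event payload carrying such an assignment might look roughly like this; the `assignedUser` field name follows my reading of the event API and may differ in your version, and all UIDs here are made-up placeholders:

```python
import json

def build_event(program, org_unit, event_date, assigned_user, data_values):
    """Assemble a minimal event payload with a user assignment (sketch)."""
    return {
        "program": program,
        "orgUnit": org_unit,
        "eventDate": event_date,
        "status": "SCHEDULE",             # a to-do event, not yet completed
        "assignedUser": assigned_user,    # the responsible DHIS2 user
        "dataValues": [
            {"dataElement": de, "value": v} for de, v in data_values.items()
        ],
    }

event = build_event("progUid0001", "orgUnitUid1", "2020-06-01",
                    "userUid0001", {"deUid000001": "true"})
payload = json.dumps(event)  # the body one would POST to /api/events
```

The Capture app's Assignee dialog does the equivalent of setting that field, and saved event filters can then show each user their own assigned work.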
One example would be a malaria spraying campaign, where a household needs to be sprayed and a certain team or person is assigned that task. What we can do here is essentially assign an event to a DHIS2 user. In the Capture application, there's now an Assignee dialog where you can assign a particular event to a user within DHIS2. We also added the ability to save event filters. If you look in the Capture application, there's now the ability to have lists of events; some people call them working lists, essentially to-do lists of activities or tasks for people to complete. So what you can do now is go to Capture, make a filter using some of the columns, and then, when you're happy, save it as a saved filter and share it with people. We have the same sharing paradigm for these filters as we have for metadata, so this should be familiar stuff. Essentially, you can now save a filter and then share it with a user in the system. Program rules have also gotten a lot of improvements. We know that program rules can be a little bit brittle: sometimes they break, and it's a little bit hard to know exactly what to use in the rules. So we added validation for program rules: if you try to reference, let's say, a variable that doesn't exist, or create some criteria where the element doesn't exist, then the system is going to complain before runtime. We can validate this upfront, as opposed to waiting for someone to use the rule and then have it break. So: upfront validation of program rules.
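A toy version of that up-front check: scan a rule expression for variable references and reject the rule if any are undefined. The `#{variableName}` style matches program rule expressions, but the function and variable names here are invented for illustration:

```python
import re

# Known program rule variables in this toy example (real ones are
# defined in the Maintenance app)
known_variables = {"hemoglobin", "gestationalAge"}

def missing_variables(expression):
    """Return the #{...} references in a rule expression that are
    not among the defined variables."""
    referenced = re.findall(r"#\{([^}]+)\}", expression)
    return [v for v in referenced if v not in known_variables]

errors = missing_variables("#{hemoglobin} < 7 && #{visitNumber} > 1")
# 'visitNumber' is undefined, so this rule would be flagged before saving
```

Catching this at save time is the point: the rule author gets the error, instead of a data entry clerk hitting a broken rule in the field.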
When it comes to performance, this is also becoming a major and very important aspect of the system. The amount of event data keeps growing, and to some extent the previous event importer hadn't really kept up with the performance requirements. So in 2.35 we spent a lot of time improving the performance of the event importer. The whole thing has been rewritten from scratch and is a lot faster than it used to be. In an unofficial test we have seen it run more than three times faster than before, and concurrency is a lot better too: you can have many more concurrent requests — users sending events at the same time — than before. We have now tested with more than 100 users sending data concurrently to the API, and the API holds up. So a lot of work has gone into event performance. We are also working on making the tracker importer much faster, and we really hope to have a new version of that available for 2.36. Performance is a high priority for us.

We also made a lot of improvements around user privacy. As some of you might have noticed, DHIS2 has been a little too liberal about exposing user information to regular users: non-admin users could see more user information than what people consider good privacy. So in 2.35 we added a lot of restrictions on the user endpoints, and it is no longer possible for non-admin users to view full user information. Essentially, we have made a new user API that returns very limited information, which remains open.
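The concurrency test described above can be sketched as follows. The send function here is a stub standing in for an HTTP POST to a test instance, so this only illustrates the shape of such a test, not a real benchmark.

```python
# Sketch of a concurrency test: many "users" posting events at the
# same time. send_event is a stub; in a real test it would POST an
# event payload to the events API of a test instance.
from concurrent.futures import ThreadPoolExecutor

def send_event(user_id):
    # Stub standing in for an HTTP POST; returns a fake import summary.
    return {"user": user_id, "status": "SUCCESS"}

def run_concurrent_import(n_users):
    # One worker per simulated user, all firing concurrently.
    with ThreadPoolExecutor(max_workers=n_users) as pool:
        results = list(pool.map(send_event, range(n_users)))
    return sum(1 for r in results if r["status"] == "SUCCESS")

print(run_concurrent_import(100))  # -> 100
```

Replacing the stub with a real HTTP call (plus timing) is what turns this into the kind of load test mentioned in the talk.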
And then we have protected the full user API, and the user part of the metadata API, behind the "view user" authority, meaning you can now lock down user information much better by not granting that authority to end users — and by end users I mean people who are not user managers or administrators. So what I am saying here is essentially that full user information in the system is now restricted to privileged users.

Okay, back to sysadmin topics. We have also done a lot of improvements when it comes to application monitoring. We have introduced a new endpoint that exposes application and monitoring data. This is based on Prometheus, a very popular open source monitoring tool that is freely available to everyone, and we also recommend Grafana, a very popular visualization tool that sits on top of Prometheus. So we now have monitoring APIs exposing metrics that anyone can pull — or scrape, as it's called. This includes memory data, CPU information, and information about the APIs: which endpoints are most frequently used, which take the longest time, and so on. This is of course very helpful for admins looking for bottlenecks in the application. If you are interested, you can read more in the sysadmin documentation, where this is written up at length. Some examples, as we can see here, would be JVM monitoring — uptime, start time, heap use, non-heap use — and API monitoring, where we can look at the different endpoints and see which take a long time, which are failing, and which are misbehaving. Again, this can be very helpful for sysadmins.

When it comes to Tracker, countries are trying to be compliant — to comply, essentially, with the regulations out there.
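Prometheus scrapes metrics in a simple line-based text format, so it is easy to see what such an endpoint exposes. The metric names below are illustrative, not the exact names DHIS2 emits; a minimal parse of that format looks like this:

```python
# Parsing Prometheus text-exposition output. The sample below mimics
# the JVM metrics discussed above; real metric names may differ.
sample = """\
# HELP jvm_uptime_seconds JVM uptime
# TYPE jvm_uptime_seconds gauge
jvm_uptime_seconds 18345.2
jvm_memory_heap_used_bytes 7.2e8
"""

def parse_metrics(text):
    metrics = {}
    for line in text.splitlines():
        if not line or line.startswith("#"):
            continue  # skip HELP/TYPE metadata lines
        name, value = line.rsplit(" ", 1)  # value is the last token
        metrics[name] = float(value)
    return metrics

print(parse_metrics(sample)["jvm_uptime_seconds"])  # -> 18345.2
```

In a real setup you would not parse this yourself — Prometheus scrapes the endpoint on a schedule and Grafana queries Prometheus — but the format makes it clear why any tool can consume these metrics.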
Very central in a lot of the regulation we see in countries is the ability to audit the system. In this case, auditing essentially means the ability to go back and see what has happened: who read certain data, when personal information was entered into the system, when it was changed, when it was removed, who looked at it, and so on. To cater for this, we have implemented a new audit solution that covers everything from tracker data to aggregate data and metadata changes.

This new solution is based on a message queue called ActiveMQ Artemis, which is nice because it allows other systems to plug in as well. If you have a custom need, you can write a consumer, connect to the Artemis queue, and build your own solution around it that meets your requirements. You can enable and disable this in dhis.conf, of course. By default it writes audit records to the DHIS2 database, but you can also configure it to write to the log file. Again, the nice thing is that since Artemis is a well-known open source message queue, it is easy to plug in your own solution if you have very specific audit needs. So this should help meet government regulation in different countries.

Continuing with Tracker, we now have a lot of functionality around sending and scheduling messages. We have for a long time been able to send messages for a program on a routine basis. What's new in 2.34 is the ability to send messages based on program rules and the conditions within them. You can essentially build a program rule expression whose action sends a message, and the message can be sent immediately or scheduled to be sent at a later stage.
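The "plug in your own consumer" idea can be sketched conceptually. A real integration would connect to the Artemis broker over a protocol such as STOMP or AMQP; here a standard-library queue stands in for the broker, and the record fields are assumptions chosen to illustrate the who/what/when shape of an audit trail.

```python
# Conceptual stand-in for consuming audit records from a queue.
# queue.Queue replaces the Artemis broker; record fields are illustrative.
import queue

audit_queue = queue.Queue()
audit_queue.put({"user": "admin", "type": "READ",
                 "klass": "TrackedEntityInstance", "uid": "a1"})
audit_queue.put({"user": "clerk", "type": "UPDATE",
                 "klass": "DataValue", "uid": "b2"})

def drain(q):
    # A custom consumer: pull every pending audit record off the queue.
    records = []
    while not q.empty():
        records.append(q.get())
    return records

# Example custom logic: pick out read-access events for a privacy report.
reads = [r for r in drain(audit_queue) if r["type"] == "READ"]
print(len(reads))  # -> 1
```

The point is architectural: because the audit stream flows through an open message queue, a consumer like this can route records to any store or report you need, independently of what DHIS2 writes to its own database or logs.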
It is based on message templates and lets you customize the text and content of these messages. This is useful for many different scenarios and use cases. One would be to send a message to the relevant people for a positive malaria test; another would be to schedule a reminder for anemic women in antenatal care, and so forth. So this provides more flexibility, more configurability, and more fine-grained control over when and whether to send messages.

All right, moving over to the platform side of things — and I can see we are running a bit late, so we need to speed up a little. The platform team has also done a lot of work over the last year. In 2.35 we are coming out with a new SMS configuration app. The previous app was starting to look rather dated — very dated, even — and the new version has a fresh look and feel, built on the new React technology and the new app platform we are building. It lets you set up commands, look at received messages, send messages, and do pretty much everything the previous app did. The previous app was called Mobile Configuration; it will probably be phased out in 2.36 and is basically superseded by the new app, which is focused on SMS configuration.

The Import/Export app has also been completely rewritten. It contains more or less the same functionality as the previous one, but with a new, user-friendly UI that is easier to use and more attractive than before. It supports data import and export, metadata import and export, tracked entity instance export, and so forth.
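Going back to the program-rule driven messages for a moment, a sketch of what such a rule might look like as metadata. The action type and field names are assumptions drawn from the general shape of the program rule model — consult the metadata API documentation for your version for the exact schema — and all UIDs are placeholders.

```python
# Rough shape of a program rule whose action schedules a message,
# e.g. an antenatal-care reminder. Field names and the d2 expression
# are illustrative assumptions, and UIDs are placeholders.
rule = {
    "name": "Remind anemic ANC clients",
    "program": {"id": "PROGRAM_UID"},
    "condition": "#{hemoglobin} < 11",  # rule fires when this evaluates true
    "programRuleActions": [
        {
            "programRuleActionType": "SCHEDULEMESSAGE",
            "templateUid": "TEMPLATE_UID",           # the message template to use
            "data": "d2:addDays(V{event_date}, '7')",  # send one week after the visit
        }
    ],
}
print(rule["programRuleActions"][0]["programRuleActionType"])  # -> SCHEDULEMESSAGE
```

The condition decides *whether* to message, the action's date expression decides *when* — which is exactly the fine-grained control described above.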
We are trying to cover more of the API's features in the new app. This is the event import screen, as you can see here.

Okay, a little bit about deployment. In 2.35 we will slowly introduce a larger change to how we deploy and release DHIS2. As all of you know, up to now we have had six-monthly releases, where essentially all the apps and the backend are released together every six months. We would like a bit more flexibility and agility in the way we release, because sometimes we have improvements in a certain app and we don't really want to wait up to six more months before they can be released — we would like to make those apps available more quickly. So we are now slowly shifting towards releasing the web applications — the frontend applications — on a monthly or six-weekly cadence, while keeping the release schedule for the backend or API software. The backend will continue to be released every six months, while the apps will start being released monthly.

This has a few consequences. First of all, we now plan to deploy all our apps — not just third-party apps, but also the core DHIS2 applications — to the DHIS2 App Hub, and we will publish the monthly app releases there. We will probably continue to release the full bundle every six months, so people who would basically like to keep things as they are today will also be accommodated. But once we release a monthly version of an app to the App Hub, it will be possible to override the bundled core app with it.
The consequence is that if you are running DHIS2 and would like the latest features of, say, the dashboard or the visualizer, you can decide to take just the new version of that app without upgrading everything else — because we know upgrading everything has a cost: you need to retrain people, test everything, stop your server, accept downtime, and migrate, and you might not want to do that very often. This essentially lets you take the latest and greatest of some of the apps while staying on the same backend version. So, just to repeat, this is nice because it gives you faster access to new features — you don't have to wait six months or a year for a new feature, and you don't have to upgrade everything to receive it. And of course, if there is a bug somewhere in an app, you can choose to receive the bug fix for just that app without upgrading everything else. As an example, if you install DHIS2 2.35 in October and a new release of the visualizer app comes out later, you can choose to download just the visualizer app and remain on the same overall version.

Continuing with the sysadmin topics, we also have horizontal scaling now with DHIS2. As some of you will have noticed, we now have the ability to run DHIS2 in a cluster, meaning we can have multiple Tomcats — multiple web servers — serving the same application. This has been in use for quite some time now, especially by PEPFAR, and South Africa has also used it for a while, and it really has had great effects: we haven't really seen any downtime since we introduced multiple application servers serving the same DHIS2 instance. This allows you to deal with more users and more data, and it means less downtime.
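The high-availability idea behind the clustering can be illustrated with a toy load balancer that cycles requests across backends and skips any that are down. Real deployments use a proper load balancer such as nginx or haproxy, as covered in the sysadmin guide; this is only the concept.

```python
# Toy illustration of high availability: round-robin routing across
# several Tomcat backends, skipping unhealthy ones so service continues
# as long as at least one backend is up.
from itertools import cycle

class LoadBalancer:
    def __init__(self, backends):
        self.backends = backends
        self.healthy = set(backends)
        self._ring = cycle(backends)  # round-robin order

    def mark_down(self, backend):
        self.healthy.discard(backend)

    def route(self):
        # Try each backend at most once per request.
        for _ in range(len(self.backends)):
            b = next(self._ring)
            if b in self.healthy:
                return b
        raise RuntimeError("no healthy backends")

lb = LoadBalancer(["tomcat-1", "tomcat-2"])
lb.mark_down("tomcat-1")   # simulate one server going down
print(lb.route())          # -> tomcat-2 (requests keep being served)
```

The takeaway is exactly the claim in the talk: with multiple application servers behind a balancer, losing one server degrades capacity rather than causing downtime.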
You can now have what is called high availability: if one server goes down, other servers take over. And of course, this is well documented in the sysadmin guide, so if you go there you can read more about it.

Security is an ongoing effort, and we have done a lot of work on it over the last year. We have received very detailed, high-quality pen tests from different organizations, which we are very happy and thankful for. We also finally had the opportunity to hire a dedicated security engineer, which has given us the chance to do a lot of security fixes, and there have been a lot of fixes and improvements when it comes to security in DHIS2.

All right, with that we have about nine minutes left, so I think we'll pause there and see what questions have come up so far. Scott, if you want to take over, feel free.

Great, thanks Lars. I actually learned a lot about DHIS2 in that session as well. There have been a few questions. One topic with quite a few questions in the CoP is dashboard filters. A few folks seem to have been a little confused about how dashboard filters carry through user permissions. Just to make sure everybody understands: if a user is assigned not to the national level but to, say, a district or state level for data entry and data view, then that is the highest level they will see in the org unit selection of the dashboard filter. The dashboard filter does not override the user's settings or permissions; it carries them through. Just to make that clear.

Lars, there was also another question here about being able to integrate multiple DHIS2 databases and view all the data on a single dashboard. Do you have any thoughts on that? You're muted, Lars. — Sorry. So the question was about integrating multiple instances of DHIS2?
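The dashboard-filter behaviour Scott describes amounts to capping the selectable org units at the user's assigned units plus their subtree. A small sketch, with a made-up hierarchy, of that scoping rule:

```python
# Sketch of the permission rule: a user assigned at district level can
# only pick their own district and the units below it in a dashboard
# filter -- never the national level or sibling districts.
tree = {
    "National": ["District A", "District B"],
    "District A": ["Facility A1", "Facility A2"],
    "District B": ["Facility B1"],
}

def visible_org_units(user_org_units):
    # Breadth-first walk from the user's assigned units downwards.
    seen = []
    frontier = list(user_org_units)
    while frontier:
        ou = frontier.pop(0)
        seen.append(ou)
        frontier.extend(tree.get(ou, []))
    return seen

print(visible_org_units(["District A"]))
# -> ['District A', 'Facility A1', 'Facility A2']
```

Note what is absent from the result: "National" and District B's facilities — the filter carries the user's scope through rather than overriding it.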
Yes, and being able to view all of that data on one dashboard — operationally, how can that be done?

Yeah, that's a good question, and there is no magic answer to it. Your best option right now would simply be to integrate those instances, either into one of them or into a common shared instance. There is no way for the dashboard to reach outside the DHIS2 instance it is linked to and look for data; it only looks for data in its own instance. So what I would recommend is that you either integrate one instance into the other, or set up a third instance that acts as a portal or overall view. In that case you need to set up data elements that match the other side, and then exchange data between the instances so that the data you would like to visualize is brought over. That is the situation right now.

Okay, great, thank you. There is quite a lot of interest in continuous analytics — two questions on that, and I think these are the last two we have time for. What are the downsides, if any, of using continuous analytics, and how do continuous analytics and server-side caching interact?

Those are excellent questions. On the first one: there aren't really many downsides. The only impact is a bit more load on your server, so if you want to run continuous analytics we recommend bumping up your server spec a little — a bit more CPU, a bit more RAM, and make sure you have a fast disk — so you can handle the increased load. The load isn't that much; it is much less than trying to run the full analytics job during the day. Just make sure you don't have an under-specced server.
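The portal-instance approach suggested here boils down to mapping data element identifiers between the two instances and pushing translated data value sets across. A hedged sketch of that translation step, with placeholder UIDs throughout:

```python
# Sketch of cross-instance data exchange: translate source-instance
# data element UIDs to the portal instance's UIDs and build a payload
# to push. UID_MAP and all UIDs are placeholders for illustration.
UID_MAP = {"srcDeA": "dstDeA", "srcDeB": "dstDeB"}

def translate(data_values):
    return {
        "dataValues": [
            {"dataElement": UID_MAP[dv["dataElement"]],
             "period": dv["period"],
             "orgUnit": dv["orgUnit"],
             "value": dv["value"]}
            for dv in data_values
            if dv["dataElement"] in UID_MAP  # drop elements with no mapping
        ]
    }

payload = translate([{"dataElement": "srcDeA", "period": "202001",
                      "orgUnit": "OU1", "value": "10"}])
print(len(payload["dataValues"]))  # -> 1
```

The hard part in practice is maintaining the mapping — the matched data elements Lars mentions — not the mechanical exchange itself.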
The other question was about caching and continuous analytics. That's an excellent question, and the answer is that we don't really have any intelligence there right now. I think the question was really: how can we make sure the frequency of the analytics job doesn't collide with the cache? Right now, the cache strategy system setting offers fixed options — 15 minutes, 30 minutes, one hour, until 6 a.m. the next morning, or two weeks. We could imagine a smarter option that says "cache until the continuous analytics table job runs again", so that if the job runs every 10 minutes, you cache for 10 minutes. Those kinds of more intelligent caching strategies we can definitely look into — I like that suggestion. We don't support it right now, so for now you need to keep these things roughly in sync: if the analytics job runs every 15 minutes, set the cache to 15 minutes. That won't be perfect, but I would say it is good enough for now.

Okay, thanks — I think that's all the time we have. I'll stay on the community of practice and keep answering the questions that anyone has coming through, but I guess we hand it back over to Max now.
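The "keep them in sync" advice can be expressed as a one-line rule: pick the largest cache option that does not exceed the analytics refresh interval. A minimal sketch, using an illustrative subset of the fixed options mentioned above:

```python
# Picking a cache TTL that stays in sync with the continuous analytics
# job: the largest configured option not exceeding the job interval.
CACHE_OPTIONS_MIN = [15, 30, 60]  # illustrative subset of the settings

def pick_cache_ttl(analytics_interval_min):
    candidates = [o for o in CACHE_OPTIONS_MIN if o <= analytics_interval_min]
    return max(candidates) if candidates else None  # None: no safe option

print(pick_cache_ttl(15))  # -> 15
print(pick_cache_ttl(45))  # -> 30
```

As Lars notes, this is only an approximation — a truly collision-free setup would need the "cache until the table job runs again" behaviour, which does not exist yet.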