Hello from the DHIS2 product management team. We're here today to talk about the release of DHIS2 2.39 and Android Capture 2.7. We're looking forward to getting started in just a moment; I'll let Max or Grant let us know when we're ready to go. Okay, sounds like we're ready. My name is Austin McGee, I'm the Deputy Tech Lead for DHIS2, and I'll let my colleagues introduce themselves one at a time before we kick off the demo for 2.39. Who wants to start? My name is Lars Oland, and I'm co-tech leading with Austin here at DHIS2. I'm Marta Villa, and I'm the Product Manager for the Android, or mobile, team. Over to Scott. Hello everyone, I'm Scott Russpatrick, the DHIS2 Analytics Product Manager. Hey everyone, I'm Markus Bekken, the Tracker Software Team Lead. Great, and with that I will turn it over to Marta, who will start us off with the Android 2.7 release. We're really excited to share with you the latest version of DHIS2 core as well as Android Capture. So Marta, I will turn it over to you. I'm going to turn off my video so that you can focus on the functionality being shown. Thank you, Austin. The order we will follow is: I will start with Android, Markus will follow with Tracker, Scott with Analytics, and then Lars will present the platform. So, starting with Android. In this release we have dedicated a lot of resources to bug fixing. Those fixes have actually already shipped as part of the patch releases 2.6.1 and 2.6.2, so functionally this is a small release, as you will see when I go through the new changes, because we did a lot of maintenance and bug fixing.
Two things about that work are probably interesting for this community. First, for those of you that use the SMS reporting integrated into the Android app: before, you had to go to GitHub to get the APK that actually had the SMS functionality, because we had some issues with permissions in Google Play. We have fixed that, so the app that you download from Google Play now includes the SMS functionality. Am I not sharing the screen? Max, can you confirm if you can see my screen? No? I forgot, I stopped it for the video. No problem, that was probably the ugliest slide I have. Can you see it now? Okay, so you only missed this one. Second, we are working on something that is not visible yet but has a lot of work behind it: a new LMIS module integration with a dedicated data entry flow for the stock management use case, which will probably be released in the next version. Most of the changes that can be seen and appreciated through the user interface, or really all of them, are there to improve the user experience and the usability of the application. The first one is the sync process. I hope you can see my phone now. Yes? So when you log in to the Android app, let me do it; this is an Android demo user, so it has very little data and metadata. The login process that most of you know happens in this blue screen. This is how it was before, and it has two steps: we download the metadata and then we download the data. Everything was done in this splash screen, where we displayed a loading banner. The change is that now, once your configuration is downloaded, we open the app and show the user the programs that are there.
And then we download the data program by program, or dataset by dataset, and we display what is happening in each program with a loading banner that will appear in a minute. Another change in the sync process: before, every time you closed the app and opened it again, even without logging out, the app would sync everything, which made loading a bit slow. Something is going on with the server, because this is taking too long. Now we only sync everything at first login or when the user forces a full sync. This is how the screen looks while syncing: this is the loading banner per program, and this is what tells you that this program is ready, with its data and everything on it. Okay, I know what is going on, this is not the right user. Another change we have made is in terms of accessibility. We had a lot of places where the icons and buttons were a bit small, so we reviewed the whole user interface. As you can see in this example, this is not visible to the user, but when they tap on small icons, the touch area that reacts is now bigger, making the app easier to use. I'm going to need a second here because I am logging in again. Other user experience changes are in the TEI dashboard. This is the tracked entity instance dashboard, and in the old app it looked like this, as you are going to see in a minute. So this is the previous version. You can see here the button to open the attributes, the details of the tracked entity instance, and this other button that is used to share the TEI using QR codes. Editing the details of your tracked entity instance is very common and widely used, but sharing the tracked entity instance is not, yet we had that button here in the middle of the screen.
The way this works now, and this is the actual login that I wanted to show before, is that we have given all the space to the open button and included the tracked entity type name in it to make it more intuitive: "Open malaria entity details". We have moved the share option to a secondary menu because it's less used. Another change here is that the new event button is always visible. In the previous version of the app, to create an event, you first had to open the stage so that users could see how many events were already there, and then create. That was two clicks to create an event, because we wanted to make sure they could see the list of previous events and not create stages that aren't required. However, we got feedback that we could improve on those two clicks. Now, if you already have events, the option to create an event is right there. Before, when you were on this screen, it was not clear that you could actually create an event, because the option was hidden and you could not see the icon. Now the icon is always exposed, and when you click, if there are events, we expand and show them to you. With one click you see the events and you can create a new one. That's the main change. Now moving to the forms, same thing; I'm comparing the old and new versions all the time, because these are very small changes that are more difficult to appreciate if you don't see both. This is an event entry form. Before, when I entered the last value in a section, what the user was supposed to do to continue working was to press here and open the next section. That was not always intuitive, and some users were blocked when they reached this point. The way to address this was to add this button here: at the bottom, after the last data element of every section, there is a next button, which does the same thing.
When I press it, it collapses this section and opens the next one, so the user is no longer left wondering how to move to the next section; the option is always there. "Remove hints from fields" means that this hint, this help label saying "enter positive integer", previously did not disappear when you actually entered the value. Now, when I enter 56 kilos, the text is no longer there. Before, you see, we had the value and the hint would kind of move above the text, but it stayed there. It's a small detail, but that screen was a bit overloaded with information, while this one is cleaner. So that was another change. Then, almost last, in the data sets: the tables now apply the theme of the data set, which was not the case before. Let's open a data set; back to the home screen, open the data set, see the table, and do the same in the previous version. I'm connecting to the same server with the two apps. The tables were always blue, so we were losing the color contextualization, but now they actually get the color assigned to the data set at configuration. The scrolling has also improved; it's more robust and smooth now. And the data entry field we display now shows the context: this breadcrumb-like label tells the user at all times which data element they are actually entering values for. This one is a text field, which is why this keyboard is shown. Before, the app just opened the keyboard without context, like this, so I could move around, but I had to actually look at the table to know what I was entering. And last are the legends. We apply legends to the tables, and this is not new, but the previous format displayed the color assigned to the legend across the whole background of the cell, which sometimes looked good and sometimes didn't, depending on the user's selection.
Playing with contrast is not always enough, and we were also not making use of the whole space of the screen. This is how the legends look in 2.6, and this is how they look in 2.7, the version we are releasing. We keep the true color that the system admin has assigned to the legend in this section, but for the background we always apply a light version of it, so we don't have problems with the text color against the background color. And I think that is mainly it for Android. For tracker, I'm going to pass it over to Markus now; he will share his screen, so I'm going to stop sharing. Thank you, Marta. Let's see, sharing my screen now. All right, I'm assuming you can all see my screen. Yeah, thanks. On the tracker team, we have some exciting news on the Capture app. It is starting to take shape, and we are closing in on feature parity. The Capture app is also being continuously released, and I'm going to get a little bit into that. We have been testing the Capture app in the field somewhat. As you know, the tracker functionality in the Capture app has been in beta for a release, and we have gotten feedback that has already resulted in better workflows, and we hope to continue improving it with you. I'm going to get into some of the improvements we have already made. We also have some program rule enhancements I will talk about. And lastly, I will get into some features that we have built API support for, meaning that the backend is ready; now that we have a continuously released Capture app, we will soon have the frontend parts of these changes ready as well. These are relationships, referral functionality, and program stage working lists, which are on the way and will soon be released through continuous delivery.
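As an aside, the legend background behaviour Marta described — keeping the legend's true colour as an accent while rendering the cell background in a light version of it — can be sketched roughly like this. This is an illustration only, assuming simple RGB blending toward white; the app's actual algorithm may differ.

```python
def lighten(hex_color: str, factor: float = 0.8) -> str:
    """Blend a #RRGGBB colour toward white.

    factor=0 keeps the original colour, factor=1 gives pure white.
    A pale background keeps dark text readable regardless of the
    legend colour the system admin chose.
    """
    r, g, b = (int(hex_color[i:i + 2], 16) for i in (1, 3, 5))
    blend = lambda c: round(c + (255 - c) * factor)
    return "#{:02X}{:02X}{:02X}".format(blend(r), blend(g), blend(b))

# A saturated legend colour (deep red) becomes a pale pink background,
# while the original colour can still be shown as a small accent strip.
background = lighten("#C62828")
```

The same idea applies to any legend colour: the accent keeps the admin's configured colour, the background is derived from it.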
Okay, so the first item I'm going to talk about is continuous delivery, because it affects the other items and the way we deliver them. 2.39 is the first release to be bundled with a continuously released version of the Capture app. If you are running 2.38, you will also be able to install the Capture app from the App Hub and update your version, which means that if we look at the App Hub here, it will look the same whether you open it in 2.38 or 2.39. You will likely see that one version of Capture is bundled already; with 2.39, it will be version 113.7. Opening this entry in the App Management app, you will be able to upgrade to the latest version and see which version you have installed, so I'm doing that there. This means that almost all of the features I'm presenting today will also be available in 2.38: if you install the newest Capture app on 2.38, you will get access to the same features, with one or two exceptions that I'll get into at the very end. Okay, so the first feature I'm going to demo today is a Capture parity feature: we can now enroll one tracked entity instance into multiple programs. If I open my Capture app here, you can see at the very top of the screen we still have the opt-in to use the new Capture app for tracker functionality, with "use the new enrollment dashboard" visible at the top. The reason I see this message is that I'm logged in with an admin user, and if I choose to opt in, this will be the choice for all my users: they will all see the new tracker functionality in the Capture app. I'm going to do that now. The page also contains an option for opting out again, if you want to use the old Tracker Capture app for your tracker programs. So, the first feature I was going to show you is the functionality for enrolling one tracked entity instance in multiple programs, which was always possible in the old Tracker Capture app.
It is one of the things we have built to achieve parity with the old app. It is now possible to have one tracked entity instance, for example, go through the child program and then later grow up and become pregnant, and this is the example I'll show you right now. I'm going to select one of the enrolled people in the child program. Then I will switch the program in the top selection, or context selector, and say that I want to report in the RMNCH tracker. She is not enrolled in that program, so I have to click this button and enroll her into the program. You have all the features you're used to, with the existing attributes prefilled. I can fill in the missing ones, for example the date of birth; oops, I set it a bit far, maybe '78. And I can enroll this same TEI into the RMNCH program. As you can see, the functionality for going directly to the first stage has also been implemented, so I'm taken directly to the form for entering the first visit data. Okay, so completing this. The next feature I'm going to have a look at is the option for displaying the front page list for a tracker program. As we know, the front page list is not always very useful: when you open a tracker program and see a list of all your tracked entity instances, this is not always what you want. We also know that this query is a heavy one that is good to avoid if it's not what the user wants. Now, by unchecking "display front page list", you will be taken to a different default page for searching, and I'm going to show you this now. I'm going to deselect it here first. Now you see that when I select the WHO RMNCH tracker program and the Ngelehun org unit, I'm taken directly to the search page. This looks almost like the search page we had, with one addition to the right.
The user can now choose between searching for a tracked entity instance or looking in one of these two saved working lists to the right. This gives the user all the flexibility, while at the same time avoiding unnecessary or heavy calls to show a list of, for example, all tracked entity instances in a program, where you would normally search or select one of the existing working lists first. We have also made some improvements to the way filters are stored when you work with them. If I open one of these working lists, for example "Women registered this week", I might be looking for some specific person, or someone with a birthday that was erroneously entered. I might have a recollection that I put in some TEI and forgot to select which year she was born, so I can make a filter here and say that I want everyone with a birthday this year. I see that in the first list I opened, in Ngelehun, this was not where I made the mistake. But if I now switch to Njandama, my filter is still selected. It's not yet saved or updated as a working list, but my filter is there, and I find that yes, there is indeed one person in Njandama with a birthday this year, and we should probably go in and fix that by updating to the correct year. The filter selections are kept over many operations, and we think this is going to be useful when you want to apply the same filter many times in different org units, for example. We have also improved the workflow of searching in all programs, or expanding your search. If I go back to the search form here and type, for example, "A", we are first presented with tracked entity instances that are already enrolled in the WHO RMNCH tracker, which we were working in.
If we don't find the person we are looking for, we now have a "search in all programs" button at the bottom, and clicking it shows the results in the list below, instead of switching the context like before and losing both the program context you're in and the search terms you had entered. Okay, we have also made some improvements to scheduled events. If I open my patient and open the dashboard, there is a quick actions widget at the top, where we seek to add useful shortcuts as we learn about the use cases. If I use the "schedule an event" button, I will be presented with a list of program stages that can be scheduled. Selecting an antenatal care visit opens the new scheduling page, showing the suggested appointment date and also counting how many events are already scheduled in this clinic for that day. If I have to change it, for example because December 29 is not the clinic day, I will be informed that I have selected a date after or before the suggested one, but I will be allowed to reschedule based on needs on the ground. When you have scheduled an event and open it again, if I later come back to this record and open the scheduled event, I'm taken directly to the data entry screen, where I can enter the event date and fill out the event details. Then one smaller improvement that we also think can be very useful: after deselecting an org unit, or when selecting a new one, you will see a highlighted version of the previously selected org unit. This allows you to go through org units systematically if you're entering data org unit by org unit, and to see which one you previously had selected. I'm quickly demonstrating that with an event program: I can start in Ngelehun and enter all my events there. After I finish my work, I might deselect this and go to another clinic.
I'm reminded that Ngelehun was the previous org unit I was working in, so changing it is easier. Also, if a user deselected the org unit by mistake, it's easy to see where you were working and where you left off. Then a few items that are not going to be demoed but deserve a mention: we have now completed program rule engine parity and added support for some variables that were not supported so far. They are listed here, and we now support the same variables everywhere. We have also made enhancements to the way program rules are parsed, both in the Tracker Capture app and in the Capture app: nested complex expressions are now parsable, removing some limitations that existed in more advanced expressions before. And very last, I'm going to talk about some features that are not yet available in the Capture app if you install it, but for which we have already built the backend support; we will be releasing them in the coming weeks. As the backend is already ready, running a 2.39 version of DHIS2 will allow you to install each feature when it becomes ready on the App Hub. The first one is relationship handling, or relationship fields for display, where we have built a new configuration model in which you can decide, relationship by relationship, which fields to include when displaying relationships in the Capture app. This is not released yet, but the backend has been ready since 2.38, and we are about to release this functionality in the Capture app. The last step, which is not ready yet, is adding this to the Maintenance UI as well, allowing the user to control it. Another feature that is built in 2.39 and will soon be released is tracker program stage working lists. It will now be possible to make a working list based on events in a tracker program, which allows combining data element values and tracked entity attributes in one list.
It will allow filtering and sorting based on both data element and tracked entity attribute values, and these lists are stored and operate like other working lists. This is one of the first features that will have what we call feature toggling, meaning that even if you use the latest version of the Capture app, your DHIS2 backend will have to be on version 2.39 for this feature to be visible to you. If you install the latest version of the Capture app on 2.38, you will not see the functionality for these working lists, as they require changes that were done in the 2.39 backend. The last feature that deserves a mention, because it is soon being released and the backend is ready, is referrals. We built backend support in 2.39 for linking two program stages together to form a referral, where you could for example envision a lab sample being ordered and referred to a lab; the lab would see the order, fill in the lab response, and send it back. We have also built for and planned further use cases, and as mentioned, the API support is ready in 2.39. We are soon releasing it in the latest Capture app, and this functionality will be available to you if you're running a 2.39 backend or newer. The last thing I'm going to mention is some technical enhancements; they are not really visible, but we want to mention them as well. That is the refactoring we have done on the context selector in the top menu here. It's not so easy to see, but we have done some refactors and we are in the process of harmonizing this with the data entry app, so that they will look very similar. We have also replaced the component for the organization unit here, so that we have the same components everywhere. For all tracker queries, we have also made sure that the queries going to the backend are done on the new tracker endpoints, as you might have heard.
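The feature toggling Markus describes — hiding a feature unless the backend is new enough — amounts to a version comparison in the app. A minimal sketch, assuming dotted numeric version strings like "2.39" or "2.39.1" (a hypothetical helper, not the Capture app's actual code):

```python
def supports(server_version: str, minimum: str = "2.39") -> bool:
    """Return True if the DHIS2 server version meets a feature's minimum.

    Compares dotted numeric components, so '2.39.1' satisfies a
    minimum of '2.39', while '2.38.2' does not.
    """
    parse = lambda v: [int(p) for p in v.split(".")]
    a, b = parse(server_version), parse(minimum)
    length = max(len(a), len(b))
    a += [0] * (length - len(a))  # pad shorter version with zeros
    b += [0] * (length - len(b))
    return a >= b

# e.g. program stage working lists stay hidden on a 2.38 backend:
show_working_lists = supports("2.38.2")
```

With a check like this, the continuously released app can ship one build that enables each feature only where the backend supports it.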
The tracker endpoints have been re-implemented to be more maintainable and more performant, and we have now switched to using the new endpoints in the later versions of the Capture app for all calls to tracker; we see a good effect from that. We have refactored components to make them easier to bundle as plugins. This is also an invisible change, and it will become more visible later when we get further into the work on plugins, how they're installed and how they can be reused, but we are working on packaging these components so that they are reusable as plugins. We have enhanced several sides of the metadata download to make it faster and to skip downloading data that is not needed. We have also made a new feature for a maximum limit on the number of records that can be returned from the API: it's possible to set in the DHIS2 config file how many records the API endpoints will return at most, to avoid the most extreme queries hitting the API. So that is all the tracker features, and I'm going to hand it over to Scott. Hello everyone, I'm going to take us through some of the new analytics features. Lars is going to follow me and go over some additional functionality that supports analytics but is more properly placed in the platform product stream. Just two quick points before we really jump in. The first is that the Line Listing app is now on continuous delivery, so just as Markus was pointing out, we are able to continuously release it with new improvements and bug fixes, as long as no backend functionality is required for the corresponding frontend change. This hopefully means that we'll be a little more responsive to your requests for the Line Listing app, since we can make improvements continuously and make them available to all of you; the newest features won't be tied to the 2.39 release.
The next point is that we have made a lot of good progress on new plugins for the dashboard. This has been a collaborative effort between the analytics team and the platform team, and it means that the dashboard plugins will be more secure, more performant, and allow for better offline caching. Unfortunately, we did not complete the line listing plugin for this initial release of 2.39, but we hope to get it in as soon as possible. Now, jumping right into the features: in the Line Listing app we have added a new time dimension, the scheduled date. You see it there highlighted on the slide, with the arrow pointing to it. You can use this scheduled date time dimension just like any of the other time dimensions: you can add it to columns or filter by it. We also have scheduled event status available in the Line Listing app. If you look where the blue arrow on the left is pointing, you see event status; when you open that, you can choose active, completed, or scheduled. If you choose scheduled, then you'll only see events that are scheduled. This allows you to see events scheduled for today, tomorrow, or next week, hopefully a very pragmatic and useful tool for users at lower levels to see what's coming up in their work routines. One of the oldest and longest-requested features that we are very happy to finally put in is legends: applying legend colors to both text and cells in the Line Listing application. You can see an example of that here on the screen, where you just go into the options menu, like you do in the Data Visualizer application, and apply a legend. You can apply the preset legend to all data items, or you can apply a predefined legend, which is associated with either the data element or the indicator that you've turned on in the line list. This is very similar to how legends are applied to pivot tables in Data Visualizer.
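Under the hood, a line list of scheduled events corresponds to a query against the analytics events API with the scheduled date as the time field. A rough sketch of how such a request URL might be assembled — the parameter names (`timeField`, `eventStatus`) and the demo-server IDs here are assumptions to verify against your DHIS2 version's API documentation:

```python
from urllib.parse import urlencode

def scheduled_events_query(base_url: str, program: str,
                           org_unit: str, period: str) -> str:
    """Build an analytics events query for scheduled events.

    Mirrors what a 'scheduled this week' line list might ask the
    server for; parameter names are assumptions, not a spec.
    """
    params = {
        "dimension": f"ou:{org_unit},pe:{period}",
        "timeField": "SCHEDULED_DATE",  # use scheduled date as the time dimension
        "eventStatus": "SCHEDULE",      # only events still in scheduled status
        "outputType": "EVENT",
    }
    return f"{base_url}/api/analytics/events/query/{program}.json?{urlencode(params)}"

# Hypothetical demo-style identifiers for illustration:
url = scheduled_events_query("https://play.dhis2.org/demo",
                             "IpHINAT79UW", "ImspTQPwCqd", "THIS_WEEK")
```

Swapping the period for `TODAY` or `NEXT_WEEK` gives the "what's coming up" views Scott mentions.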
A couple of quick points on maps. We improved the interpretations component in the Maps app; it's now the same as in Data Visualizer and Line Listing. In the thematic layers, we also have an option for viewing only completed events. We made some significant performance improvements to the Google Earth Engine aggregations: those population layers, which have become very popular in a relatively short period of time, will now load faster. We had gotten feedback that some of them were taking too long to load, and luckily we were able to make performance improvements, so you'll now see those detailed population layers coming through Google Earth Engine loading faster in the Maps app. We also managed to backport that to 2.36, so even if you're not using the latest version of DHIS2, as long as you're on 2.36 or newer you'll still see those performance improvements. That was my very short but sweet analytics update; I'll hand it off to Lars. All right, thanks, Scott. Let me take a minute to share my screen. This is Lars; we're going to talk a little bit about the platform side of the application now, and we'll also touch on the analytics side of things. On the platform team, we have been working on many new apps and improvements for 2.39. The most notable new app is the new Data Entry application. As some of you know, the old Data Entry application has been with us for more than a decade now. It's been a tried and tested app used for many years, but it was about time for us to rebuild it and give you a more modern and better experience. The new app is implemented on the new technology stack that we have at the DHIS2 team: the new React-based technology and the new look and feel that we have for most of the apps, giving you a better and more modern experience.
We will talk shortly about some of the improvements we have made. I also want to say that this is not a revolution: it keeps a lot of the same principles and workflows that you are used to, so there shouldn't be a need for massive retraining or changing the way you do things. It works in pretty much the same way, except that we have made a number of notable improvements. Let's talk about some of those. The first one is always-visible data set and period selection. We have a visual indication of available org units in the hierarchy. We have a details panel on the side with comments, min/max limits, history, and an audit log. Data validation is more prominent than before, and we also have seamless offline support. Let's dive into some of these points. First of all, you can see that we have an always-visible top bar, which always shows which selections you have made. In the previous app, the problem was that if you scrolled down, you would lose sight of your selections. Let's dive in and have a look at what the new app looks like. First of all, the new app is released as a beta version, which means we do expect some minor glitches, though it's been through good testing; it's fairly stable and should be good to use. With the new flow, you basically start by selecting a data set. This is actually new: in the previous app, you started by selecting an org unit, while here we select a data set first. We're going to go ahead and select the child health data set. In the next step, we select the org unit, a clinic, and then we go on to select the month; this is a monthly form, so we select a month, and that brings up the form. As you've probably seen, this is conceptually very similar to what we had before. It just looks better, with a much more modern style. As usual, you can go here and enter data, you can tab along, and the data will automatically save in the backend.
As before, the data just saves in the background; there's nothing you need to do in terms of pressing a save button or anything like that. Even if you scroll down, the selection at the top remains, so it's always possible to see the form, the org unit, and the period that you have selected. This should remove some confusion around what you're actually entering data for. Another thing is what we call the org unit filter by data set, which has actually been a major request for a long time. The new feature here is that when you select a data set, the org unit hierarchy essentially grays out the org units to which the data set is not assigned. Many people tell us that if a data entry person or an M&E person is working in, let's say, HIV or TB, they don't want to be able to select facilities which don't offer those services or have those data sets assigned. We think this is going to help a lot with usability in terms of finding the right org units. Going back to the demo: when you select a data set, you can now see that the org units that don't have this data set assigned are grayed out. It's very easy to see at a glance which facilities have this data set assigned, and there's no need to click on an org unit that doesn't. We also have a details side panel. In the old app, this was a modal dialog that blocked data entry while you looked at things like the history, the audit log, and the min/max settings. Now we have it as an always-visible side panel, which allows you to look at the history and the audit log while you enter data. That should also help make this information more accessible. Let's have a look. When you enter data, you can click the "view details" button to bring up the side panel.
Here you can see we have information about the data element: the description, code, data element ID, and category option combo ID, which is going to be helpful for implementers. You can mark the value for follow-up by clicking this button. There's a comment field where you can put in a comment about the value. You can go and set min/max limits: if you want to constrain the absolute min and max values for this field, you can go ahead and save the limits. There's also a nice history chart which shows the last 13 values, so you can check the previous values to figure out whether the new value is reasonable or not. Finally, we have a new audit log section where you can basically look at the changes to this value over time. You can see when, by whom, and what changes have taken place for this particular value, in an easily readable interface like this. Again, you can also tab along the values, and this side panel updates as you move through the form.

We also made validation more prominent in the UI. Before, validation ran in a modal dialog that would show up and block everything. Now it's possible to run validation while you enter data, and the results show up in the right side panel. We also grouped things by priority, so you can now see which are high, medium, and low priority violations in the form. This helps emphasize the most significant validation problems and allows you to focus on the most important validations to fix. Once again, let's have a look. We can go ahead and click run validation, and that will run validation on this form. As we can see, there are two medium priority alerts: you can see that PCV2 doses given cannot be higher than PCV1 doses given, and so on. Again, you can also just keep entering data and keep navigating the form; you can click run validation again to rerun validation, all while entering data in the form.
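The priority grouping described above can be sketched in miniature. This is a toy model for illustration only, not the DHIS2 validation API; the rule names, importance levels, and numbers are made up:

```python
from dataclasses import dataclass

@dataclass
class Rule:
    name: str
    importance: str      # "HIGH", "MEDIUM" or "LOW"
    check: callable      # returns True when the entered data is valid

def run_validation(values, rules):
    """Return the violated rules, grouped by importance, highest first."""
    violations = [r for r in rules if not r.check(values)]
    order = {"HIGH": 0, "MEDIUM": 1, "LOW": 2}
    return sorted(violations, key=lambda r: order[r.importance])

rules = [
    Rule("PCV2 <= PCV1", "MEDIUM", lambda v: v["pcv2"] <= v["pcv1"]),
    Rule("BCG <= live births", "HIGH", lambda v: v["bcg"] <= v["births"]),
]
entered = {"pcv1": 40, "pcv2": 55, "bcg": 10, "births": 30}
for rule in run_validation(entered, rules):
    print(rule.importance, rule.name)   # only the PCV rule is violated
```

Sorting the violations by importance is what lets the side panel surface the high-priority problems first.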
In the future, we also want to run this automatically in a more intelligent way, so that users can be notified about validation problems even without clicking the validation button. We know it's a problem that many people don't click run validation at all, so we want to make this more automatic and more intelligent in the coming releases.

We also have seamless offline support and sync. If you remember, in the old app you basically had to click a button every time you wanted to upload data that had been stored locally on your device while you were offline. In the new app, the sync process is automatic, so there's no button to click. You will just get this nice little icon in the bottom right, which tells you that the data is now synced and everything is saved on the server. If you have data stored locally, it will tell you that as well. Once again, there's no need to click any buttons to push the data up to the server. This should help: we know that some people forgot to click update or save, and this should make the whole process smoother.

All right, so that was the data entry app. We are going to switch gears a little bit. We also made improvements to the User Group Manager. Unfortunately, in the last release it wasn't possible to add users inside the User Group Manager; that was a problem. Now we have put it back and also made it more scalable. The problem is that user groups can get very large: the number of users can get into the thousands, and the old user interface wasn't very scalable. Many people reported that their browser crashed if they tried to load or save a user group. Now we have a much more scalable UI where you can add people to a group individually and remove them individually, and then save without needing to load all of the users in the group onto the screen. So we put back the ability to save users in the group, and it's also more scalable.
It can handle large user groups in a better way. Switching gears again, from the UI to the back end. In 2.38, we made significant improvements in terms of how you run clusters. Now in 2.39, we've made it even easier. Just a bit of terminology first: when we talk about nodes and clusters, it essentially means deploying multiple servers or containers to provide a single instance of DHIS2. We know that many people still run DHIS2 on a single server or container. We want to make it easy to run it across multiple servers or containers; we know it was a bit difficult before, but now it should be quite simple to do. We also encourage people to utilize more servers to get better scalability and availability for DHIS2.

In 2.39, nodes can now be added and removed dynamically. You can plug in a server or container and take it out without touching the configuration of the other nodes. Once again, you can just add a server when you need it, say when you have a spike in usage, and then take it out again when you want to reduce capacity, without touching the other servers. This is very helpful, especially in orchestration frameworks like Kubernetes, where you don't really want to touch, or even know about, the other members of the cluster. We're now using Redis, an in-memory data store, to hold what we call the cache invalidation messages that are passed between the nodes: when something happens on one server, the others will be automatically refreshed and won't show stale data. This is pretty internal to DHIS2, but it simplifies things quite a bit. Redis is a requirement, but once you set it up, it's very easy to keep going; very minimal configuration is needed. Everything is in the DHIS2 sysadmin guide, and we recommend that your sysadmins go there and check it out. Switching gears again, we're now going to talk a bit about integration and data exchange.
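Before moving on, the cache-invalidation messaging described above can be sketched in-process. In DHIS2 the channel is Redis pub/sub; here a plain Python broker stands in for Redis so the idea can be seen without a server, and all class and key names are illustrative:

```python
class Broker:
    """Stands in for Redis pub/sub: fan out messages to subscribers."""
    def __init__(self):
        self.subscribers = []
    def publish(self, message):
        for handler in self.subscribers:
            handler(message)

class Node:
    """One DHIS2 server in the cluster, with its own local cache."""
    def __init__(self, name, broker):
        self.name, self.cache, self.broker = name, {}, broker
        broker.subscribers.append(self.on_invalidate)
    def read(self, key, load):
        if key not in self.cache:
            self.cache[key] = load()   # cache miss: load from the database
        return self.cache[key]
    def write(self, key, value):
        self.broker.publish(key)       # tell every node the key is stale
        self.cache[key] = value        # then keep the fresh value locally
    def on_invalidate(self, key):
        self.cache.pop(key, None)      # next read reloads fresh data

broker = Broker()
a, b = Node("a", broker), Node("b", broker)
b.read("settings", load=lambda: "v1")       # b caches the old value
a.write("settings", "v2")                    # b's stale copy is dropped
print(b.read("settings", load=lambda: "v2")) # b reloads and sees "v2"
```

Because every node only talks to the broker, nodes can join and leave without the others knowing about them, which is the property that makes dynamic cluster membership easy.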
In 2.39, we're introducing a new service called the aggregate data exchange service. This is essentially a service that allows for data exchange, either from one DHIS2 instance to another or within a single DHIS2 instance. It's a built-in solution that hopefully will replace a ton of scripts that we know exist out there, and it provides an integrated experience for moving data between DHIS2 instances and also within one instance, typically from individual-level data over to aggregate data. Once again, a data exchange is essentially a transfer of data from what we refer to as a source instance of DHIS2 to a target instance. In the source, we are basically utilizing the analytics API: data is aggregated in the source instance using the analytics engine, which means you can now use data elements, indicators, program indicators, reporting rates, and so on. Pretty much everything you can do in the analytics favorites or through the API, you can now also do in the source instance. The data is exported in the data value set format, so out of the source DHIS2 we get data in the raw data value set format, or what some people call the aggregate data value format. This is quite powerful because it allows you to do data transformation and aggregation in the source instance: if you would like to use an indicator to compute a number, or if you would like to aggregate over time or up the hierarchy before exchanging data with another instance, you can now do that easily. The request format is identical to the analytics favorites and API, so it should be familiar to many. On the target side, we then import the data as raw data: it looks like raw data, and you can treat it, store it, look at it, and use it as you would with any kind of raw aggregate data. This can be run ad hoc using the API or using the new web app that we're going to talk about in a minute, or it can be run as a scheduled job.
If you want this to run at a specific interval, such as 2am every night, you can now also do that as a scheduled job using the job scheduler application. So what can this be used for? Let's have a look at some of the use cases for this new solution. I have laid out four different use cases.

The first one would be a DHIS2 HMIS instance moving data over to a data portal instance. We know that many countries and organizations now have dedicated instances that act as a kind of data warehouse or data portal, collecting data from multiple DHIS2 instances. Instead of writing a custom script, you can now use this to move data from your HMIS over to your data portal instance, which could be public facing, or at least be integrating data from multiple data sources.

The second one would be moving data from a DHIS2 Tracker instance to an HMIS instance. We do in many cases recommend that you run a separate instance of DHIS2 if you're dealing with confidential and sensitive individual data. At the end of the day, you often would like to move at least a summary or aggregate of those data over to the HMIS, so that you can use them as part of your integrated analysis in dashboards, combining different data sources as we like to do with DHIS2. You can now set this up, typically by defining program indicators on the source side and then moving them over as aggregate numbers in the HMIS instance.

The third use case is what we call precomputation of program indicators to aggregate data. We know that program indicators can be slow to load in real time, so if you need to compute data based on individual-level data, put it on a dashboard, and have it load fast, it can be a good idea to precompute those data into aggregate data elements and then use those data elements in the dashboard.

And finally, we can now also automatically move data from national HMIS systems over to global donors.
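Putting the pieces above together, an exchange definition is just a JSON document with a source side (an analytics-style request) and a target side. The sketch below assembles such a body as a Python dict; the field names follow my reading of the 2.39 documentation for `/api/aggregateDataExchanges`, and the UIDs, URL, and credentials are placeholders, so verify the exact shape against the official docs:

```python
import json

# Hedged sketch of an aggregate data exchange definition. All IDs and
# the target URL are illustrative placeholders, not real values.
exchange = {
    "name": "HMIS to donor portal",
    "source": {
        "requests": [{
            "name": "Monthly ANC indicators",
            "dx": ["Uvn6LCg7dVU"],      # placeholder indicator UID
            "pe": ["LAST_MONTH"],       # aggregated by the analytics engine
            "ou": ["ImspTQPwCqd"],      # placeholder org unit UID
            "outputIdScheme": "CODE",   # export the data value set by code
        }],
    },
    "target": {
        "type": "EXTERNAL",             # or INTERNAL for the same instance
        "api": {"url": "https://portal.example.org",
                "username": "sync-user", "password": "..."},
        "request": {"idScheme": "CODE"},  # import on the target by code
    },
}
print(json.dumps(exchange, indent=2)[:80])
```

The source request reads like an analytics favorite, which matches the point above that the source side is powered by the analytics engine, while the target side receives it as a plain raw data import.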
We're working with the Global Fund on a project where we would like to enable some of the reporting and submission of data that countries do, directly through DHIS2, without having to go through out-of-band approaches like Excel or manual entry: essentially, reporting straight from the country HMIS.

Okay, we also built a new web app for this, which is intended for manual submission of data. So let's have a quick look. You go to the apps menu and search for data exchange; it has a green icon, as you can see. We can then go and select one of the data exchanges (you can have many of them), and this will now load the data. This is derived from the definition of the source. We can see that we get one table per org unit, with the periods on the columns and the data items on the rows. We can then go ahead and review the data, and then do a manual submission to the target instance. This could be, say, a country submitting data to a donor, or someone submitting data to a regional portal, and so on.

Okay, so moving on. Let's switch gears a bit: we're going to go back into analytics territory and talk about geometry and how we can import and store multiple geometries. In 2.39 we introduced a new metadata attribute value type called GeoJSON. We're going to talk about GeoJSON in a minute, but it is essentially a format for geospatial data based on JSON that allows you to store things like polygons, points, line strings, and all those geospatial data types. The standard metadata attribute system in DHIS2 can now be used for GeoJSON and geospatial data. This allows us to support multiple geometries per org unit: we're no longer constrained to just a single geometry for an org unit.
This is helpful for things like catchment areas: for instance, if you would like to define a catchment area for a facility, and not just a point for the facility, we can do that. If you would like to have both the geographical boundaries of a district as well as the administrative boundaries for the district, we can also do that and display them in maps now. So this is quite powerful and allows you a lot of flexibility around creating and storing geospatial data and displaying maps.

We also built a new importer for geometry data based on GeoJSON. This is a new importer that we hope will phase out the previous GML-based importer. GML is a much more heavyweight format based on XML; some of you have probably tried setting up geometry in DHIS2 before, and it's not for the faint of heart, it's a complicated business. The new importer is much more lightweight. It's based on the GeoJSON format, which is now supported by pretty much all the tools out there, like Mapbox, ArcGIS, QGIS, and so on, and which also allows for moving spatial data between these different systems. So it makes setting up maps much easier, and it also simplifies integration with existing geo tools because we're now conforming to what most of these systems use.

GeoJSON, as I mentioned, is a format based on JSON. It's quite lightweight, but it's also very extensible, so if you need to add more data, you can do that. This example is from geojson.org. At the most basic level, the GeoJSON format has a type, which is typically a feature. Then you have a geometry, which could be a point, polygon, or line string, and which contains the coordinates, and then there is a flexible set of properties that can be used. In the importer for DHIS2, we read one of the properties and map it to the name or the code, and then we get the geometry type, the feature type, and the coordinates from the geometry object. So it should be fairly easy to use, and we're looking for feedback on the solution, of course.
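To make the structure just described concrete, here is a minimal GeoJSON feature of the kind the importer could consume: a catchment-area polygon matched to an org unit through a property. The coordinates and the `code` value are made up for illustration:

```python
import json

# A catchment-area polygon. The importer maps one entry in "properties"
# (here "code") to the org unit's name or code; the geometry supplies
# the feature type and coordinates.
feature = {
    "type": "Feature",
    "geometry": {
        "type": "Polygon",
        "coordinates": [[            # one outer ring, [longitude, latitude]
            [-11.80, 8.10], [-11.70, 8.10],
            [-11.70, 8.20], [-11.80, 8.20],
            [-11.80, 8.10],          # the ring closes on its first point
        ]],
    },
    "properties": {"code": "OU_CHC_001"},  # made-up org unit code
}
collection = {"type": "FeatureCollection", "features": [feature]}
print(json.dumps(collection)[:60])
```

Because the properties object is free-form, the same file can carry whatever identifier your implementation uses for matching, which is why the importer lets you choose the property to match on.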
You will find this by going to the Import/Export app and clicking on the org unit geometry import. You can select the file, and you can match the GeoJSON to the correct property in the properties object. You can also link it to different attributes of type GeoJSON: so if you would like to link this to, let's say, a catchment area or geographical boundaries, you can now easily do that using this drop-down.

Okay. So continuing on the geo theme, we also have a very exciting new solution for importing dynamically generated population data straight into DHIS2. This is a new solution built on top of Google Earth Engine. We have been supporting Google Earth Engine for quite a while; it's essentially a vast catalog of geospatial data made available by Google. In 2.39, we can now generate and import population data dynamically for DHIS2 org units. This is calculated in real time in Earth Engine based on the WorldPop data sets. It essentially works by uploading the DHIS2 org units with their associated geometries; Earth Engine then looks at those geometries and calculates the population data in real time from the WorldPop data sets. And once Earth Engine is done computing those numbers, we can import them back into DHIS2 as raw data, as normal data elements and data values. This is quite powerful because, of course, it then allows for use in indicators, in analytics, in dashboards, in maps, and every other wonderful thing you can do with DHIS2. So once again, if you don't have reliable population data, or you would like to have a second set of population data, you can now automatically calculate this in Earth Engine. This is found in the Import/Export app under the Earth Engine import item. So we can have a super quick look at this. We go to the Earth Engine import, and we can select data sets.
We support the population and the age-group-disaggregated population data sets, and we do plan to add more data. Let's load this one; it takes a couple of seconds because it goes to Earth Engine over the internet. You can select a year, so whatever census or population data set you would like to use. We can then say that we would like to import this at the chiefdom level or the district level. And we can say use associated geometry, which means: do you want to use the default geometry for the org unit, or do you want to use one of those attributes of GeoJSON type that we talked about? We can say none. And then we can decide which data elements to use for this, so we can say we would like to import this data into the "Population from Earth Engine" data element, which is kind of a made-up name; you can have multiple sets of population data now in DHIS2. We can preview; this takes a while, so I'm not going to click through it now, but you can preview and then import the data. This is how the preview looks, and then finally you click import.

All right, so shifting gears again: this next one is coming from the integration team. One of the new cool things we've made available is the DHIS2-to-RapidPro integration. We have now built a native integration between DHIS2 and RapidPro. You can check it out on GitHub; it's open source, of course, under the DHIS2 GitHub account. This integration supports synchronization of RapidPro contacts, as they call them, with DHIS2 users. It allows you to transfer aggregate reports, or aggregate data, from RapidPro to DHIS2, and it also supports reminders to RapidPro contacts when aggregate reports are overdue. We also plan to build this out in the future; this is a good start. We do think RapidPro is a great system: it has support for complex workflows based on SMS, which DHIS2 does not support.
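To give a feel for the contact synchronization idea, here is a rough sketch of matching RapidPro contacts (which carry phone numbers as `tel:` URNs, as in RapidPro's `/api/v2/contacts.json` responses) against DHIS2 users. Matching by phone number is an assumption made for illustration here, not necessarily how the integration actually pairs them, and all names and numbers are made up:

```python
def phone(urns):
    """Pick the first telephone URN, e.g. 'tel:+23276000000'."""
    for urn in urns:
        if urn.startswith("tel:"):
            return urn[len("tel:"):]
    return None

def match_contacts(contacts, users):
    """Pair each RapidPro contact with the DHIS2 user sharing its phone."""
    by_phone = {u["phoneNumber"]: u for u in users}
    return [(c["name"], by_phone[phone(c["urns"])]["username"])
            for c in contacts if phone(c["urns"]) in by_phone]

contacts = [{"name": "Amara", "urns": ["tel:+23276000000"]}]
users = [{"username": "amara_k", "phoneNumber": "+23276000000"}]
print(match_contacts(contacts, users))   # [('Amara', 'amara_k')]
```

A pairing like this is what would let the integration route overdue-report reminders to the right RapidPro contact for each DHIS2 user.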
So if you're interested in RapidPro, or already running RapidPro, we're very much interested in hearing from you. You can contact us and we will put you in touch with our interoperability team. We do think this is a great step in the direction of tighter integration between DHIS2 and RapidPro. All right. We also made a whole bunch of incremental improvements to the API. We'll put the list in the show notes so you can go and check it out; there are a lot of interesting minor improvements there. And that's it. Thanks for listening.