Okay, everyone. So welcome to the last day in our session on the DHIS2 roadmap. Today we're going to present some of the things we have on the roadmap for the coming two to three years. That means we're not really going to talk about specific releases at this point; we're going to talk longer term and look at what we're going to do over the next two to three years. We're also not going to touch on specific JIRA issues or go into a lot of detail — we're going to stick to high-level areas and focus on the bigger things we want to work on in the coming years. This is also not a complete list, so if you don't see your favorite request in this presentation, there's no need to panic. You should panic maybe a little bit, but not too much. There will also be other, more detailed things that we can't squeeze into this session. The prioritization process is still to be done — we haven't really done the prioritization — so the order of things in this presentation doesn't have any meaning. It's just an unordered list of things that we will work on. Before we start, let's talk a little bit about the sources that we have for the roadmap — the different input channels we have for gathering requirements. What you see here are basically the main channels; there are of course other, more informal channels as well. The main ones would be the HISP groups in different parts of the world: West Africa, South Africa, East Africa, Southeast Asia, and so on. That also relates to the countries: we talk directly to countries, ministries of health, ministries of education, and so forth. Donors, of course, also have their say, as some of the requirements we get from donors are obligations in some of the countries we work in.
The University of Oslo also plays a role, of course: the management team and the different projects that we engage in at the University of Oslo also impact the roadmap. The DHIS2 product manager team is important too — we have a team of product managers here at the University of Oslo who manage and groom the backlog, which of course also impacts the roadmap. And finally, we look at the Community of Practice, the forum, for all the great feedback that we receive from all of you out there. All right, so in this session we are organizing ourselves by product. We're going to start with platform. Platform is really the new term for what we previously called apps; the platform team focuses on the foundational parts of DHIS2 as well as certain apps such as data quality, data administration, and maintenance. Then we have analytics, then tracker and Android, and finally we're going to talk about the future of the WHO health data toolkits and the metadata packages. All right, since I have the luxury of starting this presentation as the product manager for platform, we're going to start by talking a little bit about the needs for platform. One thing that keeps coming up quite a lot is the ability for DHIS2 to better support what we call an MFL — a master facility list, also called a facility registry. This has been a classic topic that's been with us for a long time, and we have done bits and pieces over the years, but we haven't really succeeded in creating a comprehensive solution for a facility registry. So this is high on the list now, and we'll really try to make a push for it in the coming months and years. One challenge with the MFL is that it means something different from person to person and organization to organization, and the requirements vary quite a bit.
But we have tried to define some of the requirements that we need to support, and we will try to be concrete and get started on concrete features. One of them is what we call the org unit profile, meaning an extended data profile for org units. The profile can include things like demographic information; it could be photos, contact information, or what the South Africans have called semi-permanent data for many years — number of beds, do we have power, do we have internet, and so forth. Modeling health services is another big thing. By that I mean the ability to model, store, and visualize the different health services that a facility provides. Up to now, services have mainly been equated with data sets in DHIS2, but we often see that a data set can comprise many services, so we do need better support for this. Another important point is what we call org unit analytics. Org unit analytics refers to the ability to do more statistics, analytics, and visualization around the organisation units themselves. We plan to operationalize that in the form of what we will call org unit indicators. Org unit indicators will be a new type of indicator that can include different criteria on org units — things like: how many org units do we have of this specific type, how many have power, how many have internet, how many have running water, and so on. As soon as we have this, we also need to enable it in maps, dashboards, visualizations, and everything else. We also think the Maps application is going to be essential in this regard, allowing for visualization of organisation units. The next one is a classic — I'm almost embarrassed to mention this one.
It's been around so long that I think it's a single-digit JIRA number. It's been with us for many, many years: the ability to support multiple org unit hierarchies. The background here is of course that in a single country there might be more than just the health hierarchy. There might be schools; different types of health programs might have different hierarchies; there might be villages, communities, private facilities, and so on. Often you will see the need for more than one hierarchy in a database. This is a very large job that spans the entire system, so we haven't been able to complete it yet — but at least it's on the list. Having open and closed periods for org units is another big one. Right now we have an opening date and a closed date for org units, but we see that in many cases facilities open and close periodically — they might be open during the summer and closed in the winter, or open and close many times for other reasons — and we don't really support that in DHIS2 today, particularly not in analytics. Propose-and-accept for org units, meaning more of a workflow around org unit creation, is another big one. For DHIS2 to work as a facility registry, sometimes there has to be more of a process around creating org units, as opposed to just someone creating one without any process or workflow around it. We would need something like a propose-and-accept stage in the creation of org units, as well as audits and so on. And finally, we believe that branding is actually an important piece here. We have looked at competing facility registry software, and we believe we actually have quite a lot of the functionality already: we have an org unit data model, a lot of maps functionality, installations, APIs — but we don't really brand it as an MFL.
So we also believe that by branding this better and making it easier to perceive DHIS2 as a facility registry, we can gain a lot. All right, shifting gears. Data quality is another big umbrella term that we will try to make a serious push on in the coming years. This relates both to the built-in functionality of DHIS2 and to the WHO Data Quality app that we already have — an app built in collaboration with WHO, which of course also needs to be improved and worked on. Again, we don't fully know at this point what we're going to do, but we do know that we need to move some of the logic in this app from the presentation layer — the web layer — back to the API and back-end services for better performance. We know that a lot of processing goes on in the front end of this app, which makes it slow on bigger databases, so moving some of that computation back to the back-end API is going to be important. We also already have quite a bit of data quality functionality in terms of validation rules, outlier detection, and so on, and a lot of this work will be about surfacing that in dashboards and in the visualizer app — displaying the data quality information and making it more accessible to people, because it's a little bit hidden right now. We also know that we need to improve detection of outliers and gaps. One example: right now, the outlier detection just lists all outliers without any kind of priority. I think what people really want is the ability to look at the significant outliers — and by significant outliers, I mean the outliers that actually contribute to skewing the data at a higher level. For example, if you have a facility with a 50% spike, but the spike really just means going from two to three cases, then that one extra case is not really going to affect the aggregate number at higher levels.
But if you have another site that goes from 10,000 cases to 13,000 cases, that's a smaller spike percentage-wise, but it will contribute a lot more to the national data. So being smarter there and looking at which outliers actually affect the aggregate data is going to be important. We also want to adopt the concepts of data consistency and completeness from the WHO app — looking at the consistency of data over time, whether we see spikes and differences over time. This is a screenshot from the WHO app, where we have a couple of things: there's completeness, there's consistency over time, and there are scatter plots showing the percentage-wise distribution of data over time, which can of course help in finding bad data and outliers. Okay. And this is also a classic — there are a lot of classics. Multi-calendar support has also been around for a long time. We know that we need to support the different calendars used in some of the countries running DHIS2 that do not use the standard Gregorian ISO calendar. The most important ones would be the Ethiopian calendar, the Nepali calendar, and the Solar Hijri calendar, which is used mainly in Afghanistan and Iran, I think. There are other calendars as well, but we believe these three are the most important ones. When it comes to calendars, we do have some support — for instance, we support conversion of dates in the back end. We actually used to have a date picker that worked, but that one got lost in translation when we switched from jQuery to React: there was an existing date picker in the jQuery JavaScript framework, but when we migrated to React, there was no counterpart. So that one was lost, and we need to make a new one. Calendar support is actually more complicated than it sounds.
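To make the "significant outliers" idea from the data quality discussion above concrete, here is a minimal sketch — purely illustrative, not DHIS2's actual outlier algorithm — that ranks outliers by their absolute contribution to the aggregate rather than by their percentage spike:

```python
# Illustrative sketch: rank outliers by how much they move the aggregate,
# not by their percentage spike. All names here are hypothetical.
def significant_outliers(series, top_n=10):
    """series: list of (facility, baseline, observed) tuples."""
    scored = []
    for facility, baseline, observed in series:
        pct = (observed - baseline) / baseline * 100 if baseline else float("inf")
        absolute = observed - baseline  # contribution to the aggregate total
        scored.append((facility, pct, absolute))
    # Sort by absolute contribution: largest effect on national totals first
    return sorted(scored, key=lambda row: abs(row[2]), reverse=True)[:top_n]

data = [
    ("Small Clinic", 2, 3),                 # +50% spike, but only +1 case
    ("District Hospital", 10_000, 13_000),  # +30% spike, but +3,000 cases
]
for facility, pct, absolute in significant_outliers(data):
    print(f"{facility}: {pct:+.0f}% ({absolute:+d} cases)")
```

With this ranking, the district hospital's +3,000 cases sorts above the small clinic's +1 case, even though the clinic's percentage spike is larger — exactly the prioritization described in the talk.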
On the calendar side, there's also a lot of work that needs to take place in the back end in terms of analytics — when you do partitioning over time, when you do tracker analytics and compare data, when you generate cohorts, and so on. There's a lot of complexity to take into account, and that's basically why we haven't been able to do it. But we now know that we need to make a push and prioritize this. All right, moving on. Combining data entry into a single app also seems to be important. We know that users very often do data entry across all three of the different data models in DHIS2. We know that in several places, people do backlog entry of paper forms, and the paper forms have been modeled in the system using all three models — some data is aggregate, some data is events, and some data is tracker. Right now people have to navigate across all those apps just to enter their backlog of paper forms, and we understand that that is confusing. We need to provide a more streamlined user experience, where people can essentially do all of their data entry in one place, in one app. There's also talk of integrating both aggregate and event data on the same form. We don't have a plan to do that yet, but of course that's also an interesting requirement. In terms of planning for this: right now we're working on the new tracker web app, as you probably know — or actually, we're working on integrating the Tracker Capture app into the existing Capture app. That work is ongoing, and we hope to be done with it for 2.36. After that, we plan to move on and integrate the aggregate Data Entry app into the Capture app as well. The end goal is to have all data entry taking place in the Capture application. Okay, another big topic is public access to data.
We know that public web portals are becoming increasingly popular: a lot of organizations and countries want public portals where they expose some of the data they have sitting in DHIS2. We also know that certain people — typically higher-level managers, executives, and so on — don't really like to log into DHIS2 and navigate around. They prefer to get a link or an email, or just be sent some information that they can easily access. So data should be easily accessible without a login, and we need to make that easier than it is today. There are at least two use cases here. One is public access, meaning no authentication and no login at all, where people can just view the data without any authentication. The other is what we call a secure link, where people can be sent a link with a secure code embedded in the URL — via email, SMS, Slack, or whatever you prefer — and simply click on it. We also need to work on giving third-party apps and third-party platforms easier access to DHIS2. We actually support some of this already — we have support for token-based authentication — but we need to make it easier. We also want to support OpenID Connect. OpenID Connect is an identity layer built on top of OAuth 2 that makes it easy to get access to resources on behalf of other people using something called an authorization server, and it seems to be the most modern and standard authentication layer today. A lot of this is actually done for the coming 2.35: we're coming out with OpenID Connect in 2.35, so far with Google and soon Azure AD.
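To illustrate the "secure link" idea mentioned above, here is one common design, sketched in Python: a dashboard URL carrying an expiring, HMAC-signed token. This is a hypothetical illustration of the general pattern, not DHIS2's actual implementation; all names (`SECRET`, `/shared`, the query parameters) are made up.

```python
# Hypothetical sketch of a "secure link": an expiring, HMAC-signed URL that
# grants view access without a login. NOT DHIS2's implementation.
import hashlib
import hmac
import time
from urllib.parse import urlencode

SECRET = b"server-side-secret"  # would live in server config, never in the URL

def make_secure_link(base_url, dashboard_id, ttl_seconds=86_400):
    """Build a shareable URL that expires after ttl_seconds."""
    expires = int(time.time()) + ttl_seconds
    payload = f"{dashboard_id}:{expires}".encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    query = urlencode({"dashboard": dashboard_id, "expires": expires, "sig": sig})
    return f"{base_url}/shared?{query}"

def verify(dashboard_id, expires, sig):
    """Server-side check: signature must match and the link must not be expired."""
    payload = f"{dashboard_id}:{expires}".encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig) and int(expires) > time.time()
```

The server validates the signature and expiry before rendering the dashboard, so the link can safely be sent over email or SMS, and tampering with the dashboard ID or expiry invalidates it.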
And we're also continuously working on adding more providers for OpenID Connect. Another thing that comes up quite a lot is metadata translation. As you all know, we do support translation of metadata in DHIS2: for example, you can go to data elements and translate the name, the short name, the description, and the form name. But there are more entities, and more properties, that need to be translated. By an entity I mean something like a program rule action, a predictor, or an SMS command; by properties I mean things like the numerator and denominator descriptions for indicators, program enrollment date labels, incident date labels, program stage event date labels, and so on. The point is that all of these properties are exposed in the user interface — when people use tracker, for instance, they will see these labels — and as a result you need to be able to translate those properties as well, so we can provide a localized experience. This is very important for the metadata packages, where we distribute packages in a number of languages; it enables us to maintain one package, as opposed to having to fork it and create multiple packages, one per language. And of course this is important for multi-locale systems, both in countries with multiple languages — say Tanzania, with Kiswahili and English, or South Africa, with a number of languages — as well as for global systems run by donors, NGOs, and so on. Okay, another big topic that's been around for a long time is interoperability. Interoperability means the ability to have multiple systems appear as one, or to provide shared value across multiple systems.
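Stepping back to the metadata translations just discussed: DHIS2 metadata objects carry a `translations` array of `{property, locale, value}` entries. The sketch below builds such a payload. Treat the property keys and the endpoint mentioned in the comment as illustrative — the exact set of translatable properties and the API shape vary by DHIS2 version, so check the docs for your version.

```python
# Sketch of the payload shape DHIS2 uses for metadata translations.
# Property keys and the exact endpoint are version-dependent; illustrative only.
import json

def build_translations(entries):
    """entries: list of (property, locale, value) tuples."""
    return {
        "translations": [
            {"property": prop, "locale": locale, "value": value}
            for prop, locale, value in entries
        ]
    }

payload = build_translations([
    ("NAME", "fr", "Consultations externes"),
    ("SHORT_NAME", "fr", "Consult. ext."),
    # Hypothetical keys for the program labels mentioned above, e.g.
    # ("ENROLLMENT_DATE_LABEL", "fr", "Date d'inscription"),
])
# A client would then send this to something like
# /api/dataElements/{uid}/translations (endpoint shape depends on version).
print(json.dumps(payload, indent=2))
```

Maintaining translations as data like this is what lets one metadata package serve many locales instead of forking the package per language.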
There are actually a lot of connectors to DHIS2 out there: if you look closely, you will see that a lot of the popular software in the international development space already supports interoperability with DHIS2. We also want to make a push for a more standards-based data exchange approach. We are actively supporting and working with the FHIR standard. FHIR, as most of you know, is really a standards framework that allows you to build standards-based data exchanges. FHIR isn't a turnkey solution, though: it requires you to define a profile, which explicitly defines which data should be exchanged per implementation — or per use case, if you want. And FHIR can have a lot of downstream implications on your system, because it has a lot of different pieces that need to be supported in software that participates in exchanges. But we're working on building a toolkit now that is supposed to make it easier to build these data exchanges. We would like to provide the building blocks, so that people can hopefully create their own data exchanges with a little bit of technical effort. The main use case would be to connect systems like lab systems, EMR systems, logistics systems, human resource systems, and so on. The other part of this is working on guides and documentation. For this to be successful, we need to continuously work on our guides and documentation — the API docs, the developer docs, and so on — so that people find intuitive, good docs when they try to integrate with DHIS2. We're also working on developer advocacy, meaning we will assign time for our own developers to work with third-party developers and provide guidance.
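To give a feel for what a FHIR-profile-driven exchange pins down, here is a small sketch mapping a DHIS2 tracked entity's attributes onto a FHIR `Patient` resource. The attribute UIDs are hypothetical placeholders, and a real profile would constrain much more (identifiers, extensions, required fields) — this just shows the shape of the mapping.

```python
# Illustrative mapping from a DHIS2 tracked entity instance to a FHIR Patient
# resource. The attribute UIDs below are hypothetical placeholders.
FIRST_NAME_ATTR = "attrFirstName"
LAST_NAME_ATTR = "attrLastName"
DOB_ATTR = "attrBirthDate"

def tracked_entity_to_fhir_patient(tei):
    """tei: a dict shaped like a DHIS2 tracked entity instance payload."""
    attrs = {a["attribute"]: a["value"] for a in tei.get("attributes", [])}
    return {
        "resourceType": "Patient",
        # Carry the DHIS2 identifier so the record can be traced back
        "identifier": [{"system": "urn:dhis2:tei",
                        "value": tei["trackedEntityInstance"]}],
        "name": [{
            "family": attrs.get(LAST_NAME_ATTR),
            "given": [attrs.get(FIRST_NAME_ATTR)],
        }],
        "birthDate": attrs.get(DOB_ATTR),  # FHIR expects ISO-8601 dates
    }

tei = {
    "trackedEntityInstance": "aB3cD4eF5gH",
    "attributes": [
        {"attribute": FIRST_NAME_ATTR, "value": "Ada"},
        {"attribute": LAST_NAME_ATTR, "value": "Okello"},
        {"attribute": DOB_ATTR, "value": "1990-04-12"},
    ],
}
patient = tracked_entity_to_fhir_patient(tei)
```

This is exactly the kind of per-use-case decision — which attribute maps to which FHIR element — that a FHIR profile makes explicit, and that the toolkit mentioned above would provide building blocks for.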
Okay, another big one is the Maintenance app. The Maintenance app is clearly in need of an overhaul, and we need to make several improvements to it. First of all, I think we need to do an internal rewrite: the code base has become tired over the years — it's been through many refactors and mixes paradigms — and it's time to do some internal restructuring before we can add more features. We don't fully know at this point what we need to do; we still need to collect user stories and requirements and go through the design phase. We do know that we need to work on more natural workflows, making it easier to set things up in one place, as opposed to having to go to 15 different places before you can, for instance, create a data set. Bulk sharing comes up quite a bit — if you want to share many objects at the same time, that needs to be easier. We also hear a lot that we need logical grouping of objects. I think what that means is that when you have 10,000 data elements without a great naming convention, it can be quite overwhelming to try to understand what the elements in your database are. So making it easier to get an overview when you have a massive amount of metadata will be important. And of course, in the platform team, we also need to work on platform improvements — that's part of our job. We will spend a lot of time and effort on app platform strengthening and advocacy toward other developers, teams, and organizations, as well as on training. We have launched a training program now, led by Austin, with regular academies — dev training sessions, if you want — where people can listen in and learn about the app platform and the different libraries and components we provide. We are also planning to hire a developer advocate to strengthen this effort from our side.
The core apps are getting more complex over time. We keep adding features, and an inevitable consequence is that the apps get more complex — one example is of course the Tracker Capture application. They're also generic: with DHIS2 we strive to make generic systems, meaning they can be used for a lot of different use cases, but that comes at a cost — they don't always work perfectly for very specific use cases. Based on that realization, we also need to make it very easy to build local custom apps. This week we've seen a number of examples of how well people are doing with local apps: we've seen the Sri Lanka experience during COVID-19, where people were able to build nice custom apps pretty quickly and were quite successful with that, and we've also seen that Uganda built applications that have really been helpful in their COVID-19 response. So we believe we really need to make it easy for people to build apps. It shouldn't be very complex, and people shouldn't have to start from scratch. You should be able to reuse the different components that we have, at multiple levels. By multiple levels I mean you could customize and reuse lower-level components like UI components, or use higher-level components such as lists of events, data entry forms, or geometries. We would also like to build extension points into apps — an example would be being able to fork one piece of Tracker Capture without forking the entire app. So forking only parts of an app is also something we're working on. In terms of strengthening the platform, that of course also means working on our own frameworks: the app platform, the UI library, and so on.
And we do this, of course, with third-party developers in mind. We want to allow third-party developers to use the same frameworks and libraries that we use internally, and that way make it cheaper and more effective to build apps. We're working on our web APIs, of course, building out more features — we always expose new functionality through the web APIs. We're putting out increasingly better documentation: we released a new documentation portal this year, which I think is quite nice and was well received. We're building learning resources such as API documentation, tutorials, and the design system, and we have a new developer portal out now. We want to provide recordings of some of the developer training sessions, and so on. All of this goes into capacity building, where we would like to train third-party developers, train other organizations, form partnerships, and so on. And I think this is my second-to-last slide: app modernization, which is a big one. We are continuously working on modernizing our own apps. As some of you know, the web layer still uses some inconsistent and outdated technology. We're talking about apps such as the old Approval app, the aggregate Data Entry app, the mobile configuration app, and also the Event Reports and Event Visualizer apps, which are built on outdated technologies and stacks. In 2.35 we're actually coming out with a new SMS app, which replaces the mobile configuration app. Rewriting those apps onto the new stack — the app platform, React, and so on — is a high priority. As I already said, we want to streamline the front-end applications: converting the old server-side Struts applications, as well as JavaScript modules that use outdated technology like Ext JS (which is no longer supported, or has an incompatible license), into React applications. We also want to use the app platform for all of our apps.
The app platform is our internal platform for building web apps, and we want to use it for all of our 35 or 40 DHIS2 applications. Finally, we're also working on what we call continuous delivery and releases of apps. What that means is that starting from 2.35, we are making it possible to release web applications on a more continuous basis. We plan to release these apps every month, or every six weeks, so that you can take some of the new apps and use them directly without having to upgrade your entire DHIS2 instance. So what are the benefits of all this? The benefit of the modernization is a more rapid and streamlined development process: we won't have to maintain 15 different frameworks on the front end; we can standardize on a few technologies and stacks, and that way develop our own applications much faster. Getting rid of old code is also important for security, because being on the latest and greatest is more secure than being on old stacks. With continuous releases, we also want to allow the front end and the back end to be released and deployed separately — you can have your front end deployed in one place and your back end deployed somewhere else, each upgraded independently. We also believe continuous releases will give you early access to new features: you don't have to wait six months for a new feature to become available; you can take the monthly release of a web app — say the visualizer — and get access to new features quickly.
And of course, continuous releases of apps will also allow you to upgrade individual apps independently: if you want a bug fix or a feature, you don't have to upgrade your entire DHIS2 instance and retest everything — you can upgrade just the one app and get exactly the fix or feature you're looking for. Finally, we're also working on stability, performance, and security — those are recurring themes. Bug fixing is always a high priority, we're continuously making performance improvements, and of course we're always looking out for security fixes. Okay, with that — I talked a little bit long — I'll stop there and hand it over to Scott. Thanks, Lars — lots of promises. I'm glad we're recording. I'm actually going to repeat quite a lot of what Lars has already said, and I think the reason is that a lot of our priorities are cross-cutting across all the product streams, because they are priorities for the entire platform. But I'm going to dive a little bit deeper into the analytics priorities specifically. I'm also going to show you a lot of pretty pictures of what we can do with 2.35 — things you'll actually see in 2.35, in case you missed Monday's session. We can make some really cool new analytics types, like multi-category charts and bubble maps. All right, so what are our biggest priorities? Again, these priorities — just like in Lars's presentation — are not in any particular order, nor am I going to tell you exactly what we're going to do for 2.36; that prioritization process is still ongoing. But I will outline the larger, longer-term priorities. Some of these priorities are features, like what I'm about to mention; some of them are also about process, which I'd like to clue you into, to make sure you're aware of it and to give you some guidance on how you can engage as well.
The first feature priority is the ability to get data out of DHIS2 — again, Lars made reference to this. We fully appreciate that the vast majority of people who need the data stored in DHIS2 probably do not have login credentials. There are lots of different health officials, donors, and implementing partners — lots of folks who really need data to drive their decision-making but sometimes just lack access to DHIS2. I think Norah Stoops actually pointed this out yesterday in her presentation: needing DHIS2 login credentials can be a huge barrier. So we appreciate that we need to get the data out, and the way to do that is to send the data to where people are actually spending their time. Most of you don't check your DHIS2 dashboards on a daily, or probably even weekly, basis; most users check them on a quarterly or monthly basis, in alignment with their planning cycles. But data needs to get out to people more quickly if we really want to drive data use. So where are people spending their time? They're spending it in their email, on their cell phones, and on websites that don't require login credentials. That's what we want to focus on: getting the data to where people are spending their time. Specifically, in their inboxes — Lars pointed this out as well — with the new push analysis and a good rendering engine to push analytics out of DHIS2 directly to people's inboxes. That goes hand in hand with the idea of a summary or weekly digest email — something PSI has really been spearheading, with some good case studies around its utility. It seems like a really good approach for getting data, on a periodic basis, to where people are spending their time: in their email. And then, of course, automatic dashboard emails. The other place is on cell phones.
We have a legacy Android application for dashboards, but it's very limited in terms of functionality. It is time to develop a new mobile-friendly dashboard application, and we're going to use a technology called progressive web apps to do it. That means it will basically be a shrunken-down, mobile-friendly version of the web Dashboard app that will work on iOS as well as Android. That's going to be a big focus: making sure we can get dashboards onto people's cell phones, into the palms of their hands, so they can take them wherever they need to go and share and communicate data. Here you're actually seeing a picture of one of the latest and greatest features we put out in 2.35: the dashboard print layout. This is something we're really excited about. For years, people have been asking for some kind of report-building application, or an easier way to just print out dashboards and pass them around and share them in meetings. This gives you the ability to build a dashboard the way you would have built a standard report, and then print that dashboard in a printer-friendly layout, so you can share it around in meetings, post it up on walls — whatever it is you like to do with your hard copies of analytics. The next thing I want to point out is that we appreciate that people want to present data directly from the dashboard. One of the requests we get quite often is: is there a presentation view of a dashboard? We've got some low-hanging fruit we can work on to address this. The screenshot — or rather the short video — you're seeing here is the ability to expand a map dashboard item to full screen. You can watch it now: I click on the full-screen button — that's in 2.35 — and then I can zoom in and do my normal map interactions.
Of course, I can't edit the map like I could in the Maps application, but at least I can present it there, and we find that this can be a really effective way of communicating and presenting to a large audience directly from the dashboard. We would like to expand that functionality to all the other dashboard items: Data Visualizer, Event Reports, and so on. Lars also mentioned working on web portals. There are two streams of thought here. The first is that it may be necessary for the core DHIS2 development team to actually build a web portal. There have been a lot of community innovations around building web portals, and two of them are going to be presenting in the web app competition in the next session, which is going to be really cool to see. So we want to support and strengthen those community efforts, while also appreciating that there may be a need for a core web portal application. And then one really exciting thing going on is that we're having a lot of interesting conversations with some really large players in the field, like Google, Esri, and WorldPop. They want to plug into DHIS2. They want to be able to get data out of DHIS2 and put it into their platforms, and we know those platforms have some pretty incredible web portal and analytics features. These are things we want to encourage and promote, and we're excited to potentially work with these partners to get data out of DHIS2 and to where people can actually use it. That moves me on to the next big feature I want to talk about: line listing tracked entities, or patients, across multiple programs. We hear this again and again. Right now in Event Reports, as many of you are familiar with and probably struggling with, unfortunately, all you can do is line list a tracked entity across multiple stages.
But we appreciate that as people expand their use of DHIS2 to monitor individual, patient-level data, you have one patient in multiple programs, and you want to be able to tie that patient record together and view that patient's data across those programs. This far outstrips the functionality of the current Event Reports app. In fact, the current Event Reports app is really old, it's extremely ugly, and the user experience is pretty terrible, as you all can well attest to. We're aware of it as well. It's just not an application we can build new functionality on top of. So where we're going right now is that we appreciate the need to build a brand-new line listing application, one that will enable us to build line lists off all kinds of different attributes and relationships associated with tracker data, such as tracked entity type relationships, with a lot more flexibility around periods. In the screenshot you actually see, and this is coming from Brian O'Donnell's work with the Norwegian Institute of Public Health on COVID tracking, chart types being built on top of a line list, in this case contact tracing. So there are chart types unique to line lists as well, and these need to be enabled and built up in the core. This is going to be a really big push over the next couple of releases: to really nail down this use case and build it out. And we really want as much community feedback in the process as we can possibly get, so we'll be sending out mockups and communicating around this quite a lot. Lars also had a slide on performance and stability, and I'm sure all the other product managers will probably mention it at some point as well. We are in a constant battle, and balance, between adding new features and making sure that the applications are performant and stable enough to receive those new features.
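The gap described here can be made concrete with the existing event analytics "query" endpoint, which is what Event Reports uses today: each request is scoped to a single program, so a per-patient line list across programs currently means one call per program, joined client-side. A sketch of building such a query URL; the program, org unit, and data element UIDs are illustrative placeholders:

```python
from urllib.parse import urlencode

def event_query_url(base_url, program_uid, org_unit, period, data_elements):
    """Build an event analytics line-list (query) URL for ONE program.

    Cross-program line listing is what the planned new app is meant to
    add; today you would call this once per program and merge results
    yourself on a shared identifier.
    """
    params = [
        ("dimension", f"ou:{org_unit}"),
        ("dimension", f"pe:{period}"),
    ] + [("dimension", de) for de in data_elements]
    return (f"{base_url.rstrip('/')}/api/analytics/events/query/"
            f"{program_uid}.json?{urlencode(params)}")

# Placeholder UIDs for illustration.
url = event_query_url("https://play.dhis2.org/demo", "IpHINAT79UW",
                      "ImspTQPwCqd", "LAST_12_MONTHS",
                      ["A03MvHHogjR.UXz7xuGCEhU"])
```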
A good example of this is the Pivot Table application. In 2.34 we merged the pivot table into the Data Visualizer application, and the main reason we did that was not just that the Pivot Table app was getting old and had a poor user experience, but that it wasn't performant anymore. We knew we had to do a refactor, we had to write a new backend, and we wanted to put it into a place where we could continue to develop its functionality, which was the Data Visualizer application. So we are in a constant battle with performance and stability. One of the big things we've done on the analytics team is a pretty significant revamp of our development process, to give us a lot more time for testing, identifying bugs, and fixing those bugs well in advance of anything being released. Lars mentioned this as well: we're coming to a point where we can do continuous app releases, and the continuous app release model will give us even more time and ability to do better testing and performance improvements in the apps. This is a lot of invisible work that takes up a huge amount of our development time. You don't necessarily see all the new functionality, and you may not even appreciate the performance improvements, unless maybe you're in Bangladesh, where you have tens of thousands of org units and tens of thousands of users. But these are all cases that we are making sure are performant and stable in the analytics apps. Okay, a little bit about the process now. One thing I want to highlight about the analytics development process is that, although we are still well connected to many use cases, and I talk to many of you here on a regular basis about your use cases,
we want to be able to focus specifically on some lower-level analytics users. By that I mean district-level users, and maybe even lower, like community health workers or clinicians in rural facilities. We want to make sure we're producing the tools that allow them to use analytics and make data-driven decisions as well. To do that, we're going to refocus some of our prioritization process and communication strategies to identify three to five key use cases from around the world, where we have district-level and facility-level users who really want to use data in their routine processes. We want to work with them, develop features with them, and study those use cases quite thoroughly, with help from our research partners, our UX designers, our developers, and myself as product manager, to really understand each use case and make sure that DHIS2 is well equipped to support their needs. We've already identified one use case in Rwanda, where we've started a WhatsApp chat group and have some really great communication going, and even some features coming out in 2.35 to support their needs. If you think you might qualify as one of these use cases, someone trying to drive data use at the lowest level, please don't hesitate to reach out to me, and I'll be happy to discuss whether yours can become one of the case studies we follow very closely. One of the last points I want to make is around the communication, training, and resources that we on the analytics team are going to be putting out. We have come to appreciate that just having click-by-click tutorials on how to use the applications is dramatically insufficient, and that we need more comprehensive guides to topics like data quality.
For example, we've been working for the last several months on an entirely new, very practical and pragmatic data quality guide that walks you through not just how to click through the data quality features, but how to really harness their full potential to improve data quality in your DHIS2. We want to do the same thing for data use, and we want guidance for the various other applications. Even for third-party analytics applications being developed, we want to make sure there are user manuals as well. The next point is training. We recently launched an academy just for GIS, for mapping; that's a picture there from India, and Bjorn and Austin led that one. This is a great example of how, if you're a power analytics user and you want to go to the next level, we want to provide the training and resources for you to do that. We also have a data quality academy coming up in October that is really pushing that envelope in terms of capacity and skill sets. We're making short videos to give everyone guidance on the latest and greatest features. We're also really excited that there are a lot of community initiatives going on to make data use guides and tutorials, specifically around some of these WHO packages. Arthur Heywood in Zambia has been working really hard on some great videos on data use for the WHO immunization program that align with our WHO packages as well. Data use is still analytics, and we want to make sure that these community efforts can be elevated and communicated out to the rest of the world. And finally, as Lars pointed out, the ultimate goal is to have a very clear long-term roadmap.
We want to give you the knowledge necessary, especially if you're a third-party app developer, to say: OK, DHIS2 analytics is going to be doing this over the next 12 to 18 months, or two years, and what I need is not on that roadmap, so I need to develop my own application. It's important for you to know that. I get asked all the time: are you planning to do this, and if you're not, should I build an application? We want to make sure that that communication is clear. So that's all I had. I believe I'm handing it over to Mike now to take us through Tracker. Thanks, Scott. I hope everybody is as excited as I am to see how much tracker is represented in the platform and analytics priorities. I won't cover all of that again here; I'll see if I can be pretty quick going through these things. But I first wanted to give you a sense of what we mean when we talk about the Tracker product, because it's a little misleading: we're covering quite a few different areas here. It's really a look at individual-level data in DHIS2, which includes not only the existing Tracker Capture front end and back end, but also the new Capture app, which already covers events and will soon cover tracker. There have already been some questions about that on the CoP, and we'll take a look in just a minute so you can get a sense of what it looks like and what the timeline might be. We of course work very closely with the analytics team on the new tracker analytics features, which take a lot of effort, and from the tracker team we support a lot of the WHO packages, which you've seen represented throughout the week. These are all great opportunities for us, because they give us a chance to learn from the different parts of DHIS2, but also from the users and from the requirements coming from the field.
So what we're presenting here in this overall roadmap session is a lot of information gathered from user input, from field-based research, from you and the community of practice, and from our own project support. I've tried to group it into some specific priority areas: stability and performance for large-scale implementations; data quality and handling; analytics; usability and end-user tools; and global standards and interoperability. As for the top use cases we're seeing, I shared a little of this on Monday when we talked about tracker, but two that I think are up and coming, and are demanding more functionality, are the education use case, for EMIS, and the logistics use case. A significant amount of work has already gone into both, both in using the tools we already have and in identifying the additional functionality we're going to need. We've talked a lot already about stability and performance, so I won't repeat all of that, but one thing really worth looking forward to, for example in the new Capture app (the third bullet point you see there), is handling intermittent internet connectivity better in the web application; I hear a lot of requests for that. Because of the way the existing Tracker Capture app is built, that has always been really difficult to achieve, but in the new Capture app specific attention has been paid to it. We handle data in a whole different way, so you can count on tracker being available even when the internet is gone, syncing after the fact, which should really make a difference. We also want to give you new tools for monitoring and managing your system.
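The capture-offline-and-sync-later behaviour described here boils down to a queue-and-flush pattern. A minimal in-memory sketch (a real app persists the queue on disk and handles per-item conflicts; the `send` callable stands in for an upload to the server):

```python
class OfflineEventQueue:
    """Sketch of the queue-and-sync-later pattern: events captured
    while offline are parked locally and flushed in order once
    connectivity returns. Illustration only, not the Capture app's
    actual implementation.
    """

    def __init__(self, send):
        self._send = send          # callable that uploads one payload
        self._pending = []

    def capture(self, payload, online):
        if online:
            self._send(payload)    # direct upload when connected
        else:
            self._pending.append(payload)  # park it for later sync

    def sync(self):
        """Flush parked payloads in order; stop at the first failure."""
        while self._pending:
            try:
                self._send(self._pending[0])
            except OSError:
                break              # still offline, retry on next sync
            self._pending.pop(0)
        return len(self._pending)  # items still waiting

uploaded = []
q = OfflineEventQueue(uploaded.append)
q.capture({"event": "e1"}, online=False)
q.capture({"event": "e2"}, online=False)
remaining = q.sync()               # connectivity is back; flush both
```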
Part of the reason is that tracker implementations end up being quite a bit larger than your existing, aggregate kind of DHIS2 system. Just to give you a sense of this, I was talking with Ghana earlier to get some numbers from their implementation over the last year or two: they've registered six and a half million events, entered by 9,000 users, covering 250,000 patients, all very recently, and in fact that doesn't even cover all of their tracker work. So it ends up being quite a lot to stay on top of if you're running this kind of day-to-day transactional system. We also want to give you better guidance around these things, and do more rigorous testing on our side at the kinds of numbers and scale you are hoping to achieve. So there's a lot of emphasis there. I already shared in the previous session that a lot of work went into this for 2.35; it will continue to be a big priority for us, and you'll see changes related to it in every release. We know that with that volume of data coming in, duplicates end up being a really important topic if you want to aggregate upwards and have confidence in the data you're getting from individual-level records. Also, for accurate data on quality of care and provision of care, you really need to know that there is one person per record in the system. We've done quite a bit of work on this, and I'll show you a little of it in a moment. There are requests as well around ID generation. We know that most of the countries where we work don't have strong national IDs, so they have to use some form of generated ID. Many countries have already developed their own patterns and algorithms that they want DHIS2 to be able to match, so that's something we're really going to put some emphasis on.
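DHIS2 can already generate tracked entity attribute values from configurable text patterns; the country-specific algorithms mentioned here often add a check digit on top, so that data entry typos are caught at the point of capture. A sketch of one such scheme, using a prefix, a zero-padded sequence, and a Luhn check digit; the scheme and the "GH" prefix are hypothetical, not DHIS2's actual text-pattern engine:

```python
def luhn_check_digit(payload_digits):
    """Compute a Luhn check digit for a numeric payload string."""
    total = 0
    for i, ch in enumerate(reversed(payload_digits)):
        d = int(ch)
        if i % 2 == 0:        # double every second digit, rightmost first
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return (10 - total % 10) % 10

def generate_id(prefix, sequence, width=6):
    """Hypothetical ID scheme: prefix + padded sequence + check digit."""
    body = str(sequence).zfill(width)
    return f"{prefix}{body}{luhn_check_digit(body)}"

def is_valid(identifier, prefix):
    """Verify the prefix and the Luhn check digit of a generated ID."""
    digits = identifier[len(prefix):]
    return (identifier.startswith(prefix)
            and luhn_check_digit(digits[:-1]) == int(digits[-1]))

patient_id = generate_id("GH", 42)
```

Matching an existing national algorithm would mean reimplementing exactly this kind of logic server-side, which is why configurable generation patterns matter.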
You'll also continue to see improvements to the tools for diagnosing and correcting the most common errors in program rules and program indicators: better information, better error reports coming through, helping you identify where things are going wrong. And then there's an entirely new service for data import into tracker. I wanted to give you a sense that these things are not just vaporware. A lot of work has already gone into this; I'm showing you a screenshot here from Jira of all the design work and functionality that has already been started around deduplication, and we hope to make significant progress on it in the near future. The data import work is actually very nearly done. This has been over a year's worth of work to create a next-generation tracker importer, something that can stay with us as tracker grows and as DHIS2 continues for the long term. So you will see a very significant change in the way we do data import, which has been one of the most requested tools for DHIS2 tracker. On analytics, again, I won't spend a ton of time, because Scott showed some of it, but one area we haven't mentioned much is the tracker-to-aggregate linkage. We really want it to be a seamless experience for the underlying individual-level data to be aggregated upward and sent into the HMIS. There are many ways to do this right now, but some involve customized scripting, and some involve individual data being pushed into a separate dashboard. We don't want you to have to do these things manually on your own; we want to make it a lot easier, so we'll be putting effort into that. On relationships, you know that we've rebuilt the relationship model entirely in the last couple of years.
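The tracker-to-aggregate linkage mentioned above is, at its core, counting individual events into period/org unit cells and importing them as aggregate data values, which is what today's custom scripts do. A simplified sketch; the data element and data set UIDs are placeholders, and real mappings usually involve program indicators and category combinations:

```python
from collections import Counter

def events_to_data_value_set(events, data_element, data_set):
    """Aggregate individual events into a dataValueSet-style payload.

    Counts events per (orgUnit, month) and emits the structure the
    aggregate data value import expects. Illustration of the pattern
    only; UID mappings are implementation-specific.
    """
    counts = Counter(
        (e["orgUnit"], e["eventDate"][:7].replace("-", ""))  # YYYYMM
        for e in events
    )
    return {
        "dataSet": data_set,
        "dataValues": [
            {"dataElement": data_element, "orgUnit": ou,
             "period": period, "value": str(n)}
            for (ou, period), n in sorted(counts.items())
        ],
    }

events = [
    {"orgUnit": "OU1", "eventDate": "2020-09-03"},
    {"orgUnit": "OU1", "eventDate": "2020-09-21"},
    {"orgUnit": "OU2", "eventDate": "2020-10-02"},
]
payload = events_to_data_value_set(events, "deCasesUID", "dsMonthlyUID")
```

Making this seamless in the core would remove the need for each implementation to maintain a script of this shape.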
We would really like the analytics to match those relationships, which will help with things like cohort analysis, and with the kinds of analytics Scott was showing around line listing: being able to follow relationships between events, between outbreaks, between programs, and, as Scott mentioned, the entire overhaul of event analytics. In particular, on usability and end-user tools: we are very aware that tracker is not always the easiest experience for end users, and end users in the tracker sense often means people at the lowest levels of the health system: health workers, nurses, people working in small clinics and in the periphery. We need tracker to be really intuitive and easier for them to use. We also know that up at the higher levels, where people manage the system, it's quite complex to set up a tracker program, and that we have a very confusing process for assigning rights and ensuring that sharing settings are correct. We need to do a lot more in the Maintenance app to improve configuration. There are many requests around end-user tools to improve work processes and workflows. This fits very well into our overall ideology: data entry is not meant to be an additional task for these end users; the tools should match their work processes and make life easier. That means including things like task management and working lists, allowing alerts to be automated on complex, multiple criteria for programs like disease surveillance, and improving what we offer in terms of tracked entity instance charting and summaries across programs; I'll show you a little of what that is going to look like. We're also getting requests for client-facing portals. This is of course part of the global effort to put people more in charge of their own data and their own health care.
We really want to enable people to log in and see what their patient records look like. This has been a growing challenge: tracker was originally, the way I think of it, very program-centric in its design, and we're moving towards a more client-centric, or tracked entity instance-centric, tracker, which will let us support functionality like centralized lab services and other shared services. And then there's the ever-present request for line-list-style data entry and editing. This one, to be honest, is one of the more challenging pieces of functionality we're going to tackle. I can't promise it for the coming year, because it will take us some time to figure out and map properly, but we know it really is required by a lot of use cases, to let people enter and change data very quickly in large batches and groups. I want to pause here for a moment and look at what's coming in the new Capture app, just to give you a sense of the kinds of things we've been building. I'll use a few moments to show you what the Capture app looks like as a tracker user. I'm in a tracker program at my specific org unit, and I already have a list of favorite working lists: those that are overdue or high risk, those that are receiving their final vaccinations. These are preconfigured, can be shared across users and user groups, and really help people manage their workflow better. We have a search here that is much more intuitive and will help people find the patients they're looking for correctly. You might more quickly see here a list of potential duplicates, because really they shouldn't all be sharing a single identifier, and we will have tools for handling that.
Within a patient record, you're going to start seeing much better summaries of who they are, the programs they're enrolled in, and where they have active enrollments. When you go into those enrollments, you get a better summary up front of the relevant data, the indicators that may be associated with them, and their relationships, and you can perform quick actions right there, such as entering a new event immediately within a given stage, or making a referral, sending someone off to a lab service, whether as a one-time or a permanent referral. So again, much of this functionality has already been developed, and we expect to release it in the Capture app in the new year. We'll share more information as we get closer to the actual release, but I wanted to give a sense of what we've been focusing on for the new tracker within the Capture app. And then I'll just finish on the last topic: global standards and interoperability. A lot of effort goes into making sure we can exchange patient-level and individual-level data with corresponding systems. We've already mentioned HL7 FHIR as one that we place a high priority on. We're working on exchanging data with some of the most requested systems, like GeneXpert for TB and OpenLMIS for logistics, and we'll be producing a lot more guidance on how to do these things, particularly with an emphasis on security and privacy. So with that, I will stop talking, and I think I'll hand it over to Marta. Okay, thank you, Mike. So let's go quickly through Android: what's on the roadmap for Android? First of all, I would like to talk about releases. One of the things we want to focus on is stabilizing the release process. The more implementations we see, the more we understand how difficult it is for you to update the apps in the field.
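Exchanging patient-level data over FHIR, as mentioned above, essentially means mapping DHIS2 tracked entity attributes onto FHIR resources such as `Patient`. A minimal sketch of such a mapping, assuming FHIR R4; the attribute UIDs and the `attribute_map` are hypothetical, and real mappings also cover identifier systems, addresses, and terminology bindings:

```python
def tei_to_fhir_patient(tei, attribute_map):
    """Map a DHIS2 tracked entity instance to a FHIR R4 Patient dict.

    attribute_map translates logical field names to DHIS2 attribute
    UIDs. Illustration of the mapping pattern only.
    """
    attrs = {a["attribute"]: a["value"] for a in tei["attributes"]}

    def get(field):
        return attrs.get(attribute_map.get(field, ""))

    return {
        "resourceType": "Patient",
        "identifier": [{"value": tei["trackedEntityInstance"]}],
        "name": [{"family": get("family"), "given": [get("given")]}],
        "gender": (get("gender") or "").lower() or "unknown",
        "birthDate": get("birthDate"),
    }

# Hypothetical tracked entity instance and attribute UIDs.
tei = {
    "trackedEntityInstance": "tei123",
    "attributes": [
        {"attribute": "uidFirst", "value": "Ama"},
        {"attribute": "uidLast", "value": "Mensah"},
        {"attribute": "uidSex", "value": "Female"},
        {"attribute": "uidDob", "value": "1990-05-14"},
    ],
}
patient = tei_to_fhir_patient(tei, {
    "given": "uidFirst", "family": "uidLast",
    "gender": "uidSex", "birthDate": "uidDob",
})
```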
So during 2020 we aligned the releases of the app with the releases of the DHIS2 backend. 2.34 was released together with version 2.1 of the app, and for 2.35, to be released next week or the week after, we will release app version 2.3 with 2.35 compatibility. However, we still see that four app releases per year is a lot, and for 2021 we are going to reduce that to two releases per year; let's see how that works for you and how the app is adopted under this model. So we will release together with the DHIS2 backend: in April we will have 2.36 of DHIS2 and 2.4 of the app, and in October we will have 2.37 together with 2.5. In between there will be minor releases and patch versions, of course, for fixing the bugs you find in the field. Something we are not very happy to announce is that the end of support for Android 4.4 is getting closer. We have been trying to avoid this as much as possible, but we are getting to a point at which some of the libraries used by these Android versions are going to be deprecated by Google. There is nothing we can do, and at some point during this year, probably in the second half or the last quarter, we will have to stop supporting 4.4. We are announcing this for the first time now so that you have time to react. We will try to postpone it, but we will have to stop supporting 4.4 at some point during 2021. I'm going to take a few minutes to talk about the next version, the one coming in the next few days, because I didn't talk about it on Monday; it's quite short. In the next version, 2.3, we have focused mainly on improving the user experience, so most changes are in the user interface. As you can see here, we have new cards. The new cards were already implemented elsewhere, and now they have been added for events, and for the events in the tracked entity instance dashboard. What is new?
They look cleaner and they make better use of the screen, and the good thing is that we can expand or collapse the card. The user can then see the name of the data element and its value directly, without opening the event. And in the tracked entity instance dashboard, we can see the list of events with the values we want to follow up on. In this case, for instance, we see the weight of the baby and the breastfeeding mode directly on the events; we don't need to open each event to see this data. This was a request from the community, and it finally lands in the next version. Another improvement is in error messages: when we tell the user that mandatory fields are missing or something failed, we now give the name of the field that has to be fixed. Another improvement is the data entry form. On one side you can see how the data entry form looks in the current version, and on the other how it will look in the next version. We have improved the rendering of all value types; the ones you see pictured here are rendered in a better way, and we have removed all those icons that were not actually adding information and were just taking up screen space. And the last thing, which is not only for Android but also for web: we have reviewed and extended the DHIS2 icon library in the next version. Some icons have been redesigned; the old ones are shown on top, with the new ones below, providing a more neutral icon library, and we have added new domains like education, environment, and COVID-19, based on requests from the community. You will find these on the server from 2.35. Now, what's on the long-term roadmap? I'm moving now to the long-term priorities we know we have to focus on in the next one to two years. As with the rest of my colleagues, we don't have a finalized plan, so these are not in any particular order. I want to mention two areas where we always work.
I don't think I need to elaborate much, but as already said for the DHIS2 platform and the rest of the products, we always work on performance, stability, and security, and not independently; we work together with the other product teams. I just want to make sure we don't forget this, even if we don't put it up front as a fancy feature. The same goes for user experience: we have an ongoing process of improving the user experience based on your feedback, your Jira issues, your comments, and our requirements gathering. For example, this is the new tracked entity instance dashboard, which is already designed and which we will implement as soon as we can. It aligns with tracker and the effort of bringing the tracker programs to the tracked entity instance, to the person, let's say, to the patient, so you can navigate the information across programs in an intuitive and friendly way. Other improvements we know we have to make, and which have been requested: applying legends in data entry, so the user can identify potentially abnormal values at the moment of entry; rendering a signature field in the data entry form; and adding a search field to long data entry forms. All of these, we are quite sure, can come during next year. A big pending task is local analytics. What do we mean? Local analytics means simple visualizations, tables or charts, that can be generated offline from the data collected on the device. It is not a dashboard, and it does not get information from the analytics on the server. It's a simple visualization at the program level, data set level, or tracked entity type level. The aim is to help users analyze the data they have collected on the device.
So the idea is to have series of data elements, program indicators, or attributes, aggregated and rendered either by time or by org unit. We are defining the user interface for this now. Another area where we want to improve, and know we have to, is supporting implementations. We cannot replace a mobile device management solution, but we want to be as helpful as possible in managing implementations. In that sense, we are planning to extend the functionality in the Android Settings web app by adding a centralized error log for the administrator, and more configuration options for your programs so you can adjust the user experience to your configuration. We want to facilitate control over the app version you have deployed in the field; we cannot fight the Google update engine, but we will try to help there. We have been asked for more flexible sync periods, and we are also implementing an import/export function so that a user in the field can securely export the database and share it with the administrator for troubleshooting, if that's something you want. On maps: we are very happy with how the maps look now. The two pictures you see there show what you already have in the app, but we know we have to extend that, and we are thinking of adding more map layers. We have to be able to display data elements and attributes of feature type, whether coordinates or polygons; add a layer with the user's location; and add the possibility of displaying the user's organisation units on the map. We have also recently been asked to link with Google Maps navigation. We are not going to integrate navigation into the app, but we can easily hand off a "go here" to navigation, so that a health worker can find their way, for instance, to one of those org units. Another request, and this is quite high in the pipeline, is offline multi-user, or multi-server, support.
What does that mean? Right now, if you log into the Android app with one user, it downloads everything; if you then log in as another user, the second user's data will overwrite the first's. So if you go offline, you can only use the one user that was last logged in online. However, we know that in some cases one person works against more than one server, or two users share the same device. To support that, we are adding multi-session support for working offline, meaning that on the device you will be able to choose your user, and change users, while offline. We also have to keep following the evolution of the DHIS2 web features: we have to implement "break the glass", working lists, tracker and event relationships, and the assignment of events, and we still have things to add in data sets; for instance, the indicators added to a data set are still not shown. So this is also something we know we have to work on. And then there are new use cases, and this is the most undefined area we have. Use cases are coming up very strongly, like facility assessment, where you do an assessment and need to give the user feedback offline, in the moment, so we need some interactive user interfaces. Education, as you have seen in some sessions this week, has specific requirements in terms of data entry and data navigation. And then there is what we have called data-driven task lists. The idea in this domain is that we have to help health workers and managers turn data into prioritized actions on a daily basis. This is still very undefined; we think we will do it through assigned events, working lists, and notifications, putting everything together in one place. But it is something we really have to define before we can put it into a proper plan, although we know we will tackle it on the one-to-two-year roadmap.
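The offline multi-user problem described above is essentially a storage-namespacing problem: instead of one local database that each login overwrites, the device keeps one namespace per (server, user) pair and switches between them without network access. A minimal in-memory sketch of that idea (illustration only, not the actual Android SDK design):

```python
class DeviceStore:
    """Sketch of offline multi-user support: one namespace per
    (server, user) pair, so switching accounts offline never
    overwrites another account's local data.
    """

    def __init__(self):
        self._db = {}        # (server, user) -> that account's records
        self._active = None

    def login(self, server, user):
        key = (server, user)
        self._db.setdefault(key, [])   # keep existing data if present
        self._active = key

    def save(self, record):
        self._db[self._active].append(record)

    def records(self):
        return list(self._db[self._active])

store = DeviceStore()
store.login("https://hmis.example.org", "nurse_a")
store.save({"event": "e1"})
store.login("https://hmis.example.org", "nurse_b")   # switch offline
store.save({"event": "e2"})
store.login("https://hmis.example.org", "nurse_a")   # data still there
```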
So that's all from Android. I'm going to leave you with Rebecca to share the plans for the WHO health data toolkit. Thank you. Thanks, Marta. So we got a couple of questions on what is actually coming up in the toolkit, that is, the WHO health data toolkit. And for whatever reason, I need to share again. I'll just quickly outline some of the new metadata packages that were released this year, in 2020. We have two main types of packages. First, the aggregate packages, which tend to be more HMIS M&E standard indicators, and those come with dashboards and analytics. So a dashboard and analytics package has the dashboard and the indicators that can be mapped to an existing HMIS if the data elements and data sets already exist. But all of those also come with a complete set of data elements and data sets for those countries who are starting from scratch. We've expanded these aggregate packages: we had started with HIV, TB, malaria and the immunization program, and we have now added reproductive, maternal, newborn, child and adolescent health (RMNCAH) with UNICEF, our partners. There's of course the COVID-19 aggregate model. We have just released vaccine-preventable disease surveillance, so this is kind of the weekly IDSR; that's new on our side, so we'll be adding some more around that. And then we actually had a bit of an interesting case come up, which is a set of Global Fund indicators for emergency reporting to monitor disruption of service delivery. This is for those with an HMIS and for Global Fund partners to be able to monitor some of these indicators in this time of COVID, and it's actually quite interesting because the Global Fund indicators are generally common across partners. We've also been expanding the tracker packages. So we had a series of them for COVID-19: the points of entry, the case-based surveillance, the contact tracing.
We released a TB case surveillance package about a month ago, and we also have a rapid mortality surveillance package. Rather than cause of death, this is more about all-cause deaths, and we call it a bit of a prototype, a work in progress, so it's being piloted a bit. And then we have also released some content updates to the cause-of-death package, that is, the mortality package that follows the medical certificate of cause of death: we added the COVID-19 codes there and expanded it with a French translation. I just want to give a little note about how we support DHIS2 versions, because this, of course, is more of a health content product. It's the metadata itself, primarily; it's not necessarily the software. But in general, for aggregate packages, when they're released we support the current DHIS2 version and then minus two. Now, some of these were developed previously, so they might go back as far as 2.29 in the case of HIV, TB and malaria, but that gives you a sense of how we support these over time. We do maintain updates of these every time there's a new DHIS2 version release. For tracker, we've limited that window of support because the tracker product is changing so rapidly. We are a little bit behind because of COVID: we released the COVID packages on 2.33 and have maintained support for 2.33. But in general, you can expect that the tracker packages are going to be released on the most current version of DHIS2, and from there we will continue to update them for new versions. There are a few more in the pipeline. By the end of 2020 (it's a little bit of an intimidating list), we are working on a community health package that's aggregate. So this is indicators and dashboards primarily, along with data elements and data sets, across at least 10 or 12 different health areas that could be captured in the community. We're also working on an HMIS-LMIS package.
The point of this is setting some standards for being able to bring key logistics data in from the health facility level, and possibly from a complementary electronic LMIS system, for HIV, TB, malaria and immunization programs. So this is really about integrated data analysis: bringing the health service delivery data together with some key points from your logistics data into combined dashboards. We're expanding into a couple of new health areas: one is rehabilitation and the other is noncommunicable diseases. These are being worked on in terms of requirements right now. In terms of immunization tracker, we actually have quite a long list, but a lot of these are quite advanced in their development, and we're working with our content partners to finalize those requirements. An immunization registry will be coming online in the next couple of weeks. It'll soon be followed by an adverse-events-following-immunization tracker and an optional birth notification component that can be used in combination with your registry. That's for health facilities to be able to notify CRVS systems of births when children show up for their vaccines. We will be looking at immunization campaigns, which is becoming very relevant: I think we all hope that there will be a COVID-19 vaccine, and then that's going to need a campaign. This is really built off some early learning from Uganda. Coming very soon is vaccine-preventable disease case-based surveillance. I mentioned in some other sessions that this was designed with CDC and WHO content experts for vaccine-preventable diseases; it was really the base of our COVID-19 case-based surveillance packages. But this one is more integrated: it covers about nine diseases at the moment and can be expanded to cover more in a single integrated case-based program.
The first coming is HIV case surveillance, then TB drug-resistance surveillance monitoring, which is another module that can go hand in hand with your TB case surveillance. We're looking at malaria case surveillance for elimination settings, and entomology surveillance, so tracking mosquitoes, basically, as another part of malaria programs. Finally, we're looking at a COVID-19 logistics model that actually uses tracker for the flexibility of its data model. In 2021 we actually have quite a few more: we'll be expanding into hepatitis, adding data quality dashboards, and re-looking at some of our nutrition packages, trying to finalize that one. And we'll have some new trackers as well: ILI (influenza-like illness) surveillance and SARI, under a new partnership with CDC; this really goes hand in hand with the COVID-19 response. We'll be looking at integrating COVID-19 into the case-based surveillance tracker for VPDs, so bringing this all together. Then we have an NCD registry for hypertension, and we're expanding our logistics support, using that tracker model to get more logistics reporting from the health facility level. And lastly, we do support content updates, so there are a couple in the pipeline. When the WHO releases new strategic information guidelines, often this means that we then update the content in those packages, and there are a few of those content updates that we know are in the pipeline. The last thing I'll note is that we have these metadata packages, but we know there's a lot of work to be done around building out the toolkit to support implementation and training, and actually helping countries to adopt these standards, get the data in, and actually have people using them. In that regard, we have started looking at digital and user training templates; we have several of those for the COVID-19 packages.
We're looking into package-specific implementation guidance, for example being able to look at immunization programs as a whole and some issues that are specific there. We continue to update our training databases, and we have more of a mid- to long-term plan to actually overhaul Trainingland to get a better demo database and training database that's more realistic and will help with those data use trainings. We are looking into building out interoperability guidance. A lot of these metadata standards for the WHO packages we see as kind of a target: this is what should be reported, but we know that other tools are out there. A good example might be community health information systems: countries might be using all kinds of mobile tools to collect individual-level data, but then what do they need to be able to push and integrate into the HMIS in a standardized way? There are a few implementation things around being able to create templates, so metadata templates for how to manage user groups and user roles that actually match typical use cases we would see in the field, such as laboratory users or an HIV program manager. And lastly, we are working with WHO to standardize a case profile. You've seen a lot of case-based surveillance, and now we're working on these tracked entity attributes as a bit of a standardized library that can be reused and eventually support cross-program analytics. The last thing I'll say, and this is a bit more mid to long term, is that we are looking into ways we can actually integrate global standards like FHIR and SNOMED with the metadata that's coming out of the standard packages. As Lars mentioned, this is very context-specific. The one piece that's a little software-related is some of the custom web apps that we support as part of the broader WHO toolkit. Lars already talked about those, but just to add them to the radar: we have the WHO data quality app.
It'll have a bit of a refactor, but we'll also be doing some updates to match the new data quality review guidelines, so that's more around the content. And we do still support the immunization analysis app, the scorecard app and the bottleneck analysis app for our immunization programs, and these will be refactored. We also routinely assess the functionality in these custom web apps, and gradually the product managers ultimately make decisions about which of those features to support in the core DHIS2 software roadmap. So that's actually it for me. I've just left some links and resources here, because we have built out our websites and documentation portals to reorganize where we keep these resources, and this will be available on Sched. And with that, that's my short presentation, and I think we have a few minutes for questions. Thanks very much. Okay, I think there are a few questions on the Community of Practice, if any of you would like to take a look there and maybe pick one you think would be interesting to answer live. Sure, maybe I can. I did rush through mine a bit, so maybe just to mention, on the Capture app functionality that I was showing about tracker: this is something we've been developing for more than a year now, the tracker functionality in the Capture app, so there's quite a lot that has been done. There was a question about when this will be released. We're trying a bit of a new release strategy with the new Capture app, where we want to do a bit of a soft release with a project or two first, where we get a really comprehensive understanding of where there might still be errors or gaps. So this is something we are working on as of, kind of, the beginning of January and going forward. Our commitment is to have some version of the Capture app released by 2.36 with at least a subset of the functionality.
But we will be more transparent about those release dates, and we'll get a little bit more information out there as soon as we can, so just in the interest of transparency, trying to say when we will get that out there. So you should expect to see a Capture app release with tracker functionality in 2021, and we'll get more specific about that as we get a little bit closer to it. I do see one question on the COP about logistics and how the reporting is evolving, so I don't know, Scott, if you wanted to briefly summarize a little bit of what direction we're going with logistics. Right, so we did have a session on this yesterday, actually. Sorry, you may want to turn your camera on, Scott. Yeah, I certainly don't want to deny folks the option of looking at me. Right, so we did have a session on logistics yesterday, so if you want to see a general overview and strategy of the University of Oslo / core DHIS2 approach to logistics and the supply chain use case, please have a look at that presentation; it's still available and you can download it. In summary, logistics is essentially becoming a large focus of our feature set, as well as of the support that we give out to the community. We appreciate that we really can't have a fully functional HMIS, in which we're able to do proper bottleneck analysis and root cause analysis, without also having logistics data integrated into our indicators and analytics. To that end, we are going down a pretty exciting path to have some full-time staff here at the University of Oslo focusing on logistics specifically, giving guidance on the feature sets, as well as potentially some developers focusing on building out the features and functionality to support logistics, especially lowest-level data capture and integrated analytics with logistics data. As Rebecca pointed out, there are several different models that we're exploring, and one of them is quite promising right now.
One that the International Red Cross is currently piloting in their field hospitals in Yemen is to actually use the tracker model to have slightly more transactional logistics data, even utilizing the latest barcode-scanning features and building out product catalogs, to have kind of a full overview. One thing that I do want to point out about the supply chain is that DHIS2 is certainly not trying to be a full-blown LMIS. What I mean by that is that the DHIS2 data model is not conducive to warehouse management, or to being a full-on ERP. But we are working more closely with some product platforms out there, like OpenLMIS and especially Medexus, that do fulfill those specific, unique feature sets for warehouse management and ERPs. And we're excited about some projects we've got ongoing with them to bring our platforms a bit closer and to have more seamless interoperability, or even potentially some integration between the platforms, so that countries don't have to struggle so much with these interoperability layers that have been very problematic throughout the world.
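As a rough illustration of the tracker-based logistics model described above, a single stock transaction (an issue or a receipt) could be captured as one tracker event, with the scanned product code and the quantity as data values. The payload shape below follows the general structure of DHIS2's events API, but every UID, data element and code in this sketch is a hypothetical placeholder; the actual pilot configuration will differ.

```python
import json

def stock_transaction_event(program_stage_uid, org_unit_uid, event_date,
                            product_de_uid, product_code,
                            qty_de_uid, quantity):
    """Build one event payload for one stock transaction.
    All UIDs here are illustrative placeholders, not real metadata."""
    return {
        "programStage": program_stage_uid,
        "orgUnit": org_unit_uid,
        "eventDate": event_date,
        "dataValues": [
            # Product identifier, e.g. captured via the app's barcode scanner
            {"dataElement": product_de_uid, "value": product_code},
            # Quantity moved in this transaction
            {"dataElement": qty_de_uid, "value": str(quantity)},
        ],
    }

event = stock_transaction_event(
    "stageUid0x", "orgUnitUid0x", "2020-11-01",
    "deProductUid", "GTIN-0123456789", "deQtyUid0x", 40,
)
print(json.dumps(event, indent=2))
```

Such events would then be posted to the server and aggregated into stock indicators, which is what lets logistics data sit alongside service delivery data in the same analytics and dashboards.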