Good day, everyone. I'm Damien, from one of MSF's operational centres, in Brussels. Along with members of the other operational centres — Alejandro, Jamie, Ramon, Rosario and Chris — I'll be talking about solving offline challenges in the low-resource settings we find ourselves in while supporting our projects in faraway places. As we're part of different OCs, as we call them, we have different approaches we've used to try to deal with this issue, and throughout the slides we'll go through what each OC has done, what the challenges and solutions are, and what still remains to be sorted out.

We'll go through DHIS2 and how we use it within MSF, across the five different OCs; the common challenges; the architecture we use, both at HQ and out in the field with the mini servers at the locations; and the challenges and use cases. Then data and metadata synchronization, which is a big thing we come across a lot of the time. Alejandro will discuss the reactive vaccination campaign app he worked on with a third-party development company, and Chris will discuss Praxis, which is an intermediary interface between the browser and DHIS2. Then we'll go through some conclusions.

It's MSF — I'm sure most of you know what we do. We're 50 years old, same age as me. Today we have approximately 65,000 staff. Most of them are national staff working in the projects in the field locations, some work in the headquarters of the operational centres, and there's also what we call expats — international staff who work in the field. I won't go into the project locations and types of context, but it's quite varied and we spread ourselves around quite a lot.

This is some data, which you all love. It's from 2020 — we're still processing and reporting on the 2021 data for the activity reports. We had 850,000 patients admitted and, I think it was, eight million outpatient consultations as well. We have various avenues of field medical activity. We do some operational research and some collaborations with ministries of health — sometimes we help out in hospitals, looking after, say, an inpatient division while the ministry of health looks after the other areas. And we also run vaccination campaigns, both reactive outbreak responses and planned ones.

Here are our operational centres. We have five: Amsterdam, Paris, Geneva, Barcelona — which also covers Athens — and Brussels. We also have one in development that should come into existence in West and Central Africa, and another in progress in Latin America, so this time next year we hope to have seven operational centres. As you can see it's quite European-based, but we're going through a field recentralisation initiative: over the years we hope to move the centralised aspects of MSF out to the field locations, or to non-European locations anyway — the HQs too.

As I said, having different operational centres means we have different IT teams, different priorities, different skill sets and, of course, different health referents and epidemiologists. We have various guidelines that the clinical staff in the field follow and are trained on, to make sure they do things properly.
And we're trying to align the general indicators — the standard indicators — with what we have in DHIS2, so that the data produced is the best and most accurate reflection of the work done there. With DHIS2 we have different configurations and technical implementations, which we'll discuss further. One thing shown in green there is that we align on major versions: with the testing and upgrade scenarios we're going, at the moment, for 2.37.7, and we're all going through that. We've combined our resources and done various rounds of testing, and that should be ready to go soon.

Here's another table of data. As you can see, our base versions of DHIS2 are quite old. The columns show full-time employees; start dates — we started with DHIS2 between 2015 and 2017; the number of offline servers; the data values, which are quite large; active users; and the custom apps. In total we have about 15 custom apps that we've developed through various companies. I'll leave that there for a bit.

Now, mission challenges. Of course there's the poor internet connectivity, which is a common scenario for us. The operational hierarchy: we get requests for frequent changes that projects think are a good idea, but we then have to discuss the operational utility with them, and that can take some time. Limited IT skills in projects: we work with the HQ ICT team and the field ICT team on maintenance of the decentralized servers we have. Varying data literacy levels among staff — some good, some bad — and also high turnover: the international staff in particular often come for three, six, nine or twelve months and then leave, so all the training and effort we put into building their capacity largely disappears when they go. And the nature of our activities requires offline solutions, as we'll work through. Now I'll pass over to Ramon.

Thank you. So, thank you all for being here. Just a reminder: you can ask questions at the end. We represent different OCs, as Damien said, so please, at the end we'll have time to answer any questions.

So: we have missions in different places — how do we make DHIS2 available in those missions? You can have a mission with an internet connection via satellite, or maybe 4G or fibre. This is the first idea: you have a cloud server, your DHIS2 instance is online, and the clients connect through your internet connection. I don't know how many of you have this kind of setup — you can raise your hand. Okay, I expected a vast majority. This is the easy way.

But there are times when your mission is, for example, a refugee camp on the border of two countries in the middle of nowhere. You probably have a satellite connection, probably shared across the whole mission, so option A may not be the best. One solution for this is to have a replica of your server that you install physically in the mission: a laptop or a box — a real server — running exactly the same instance, but working offline. We call that an offline field server. Your clients don't need to be connected to the internet; they connect just through Wi-Fi. And if you have internet during the night, the server can synchronize data and metadata in the background.
Another option is for when, for example, you need to do operations outside the mission, or it's not clear that you can keep a server there. We call it a mobile server. The concept is the same, except that it's mobile: you take the server with you and enter the data there, and the difference is that you don't synchronize on a regular basis but whenever you have internet to do it. A fourth option: maybe you want to stay with the logic of field servers, but you have a good connection in some projects or missions, so you can host the field server online. The advantage is that you don't have it physically there, so you don't have to maintain a physical machine in the mission. And a fifth option: you can develop an application that you install on the client, which synchronizes with the server but which the client uses offline. That's Praxis, which Chris will present later. Each section of MSF has a mix of all these options, but in general we each use two or three of these models together.

So, what are the challenges of having field servers, online or offline? Well, you may want to update your forms from time to time: you update a form on the cloud server and you want it to be available in the field. Or maybe you want to delete something, and have it deleted everywhere. I see people nodding — yes, we have this problem.

Just an example. Yesterday I deleted an org unit on our cloud server and said, let's see what actually happens in the background. To synchronize — I forgot to say — we use the standard functionality called metadata sync in DHIS2. So what really happens when you delete an org unit? The organisation unit groups get updated — you deleted an org unit, so it's good that the references to this deleted organisation unit are updated — and the data sets no longer include your org unit. But there's a big problem here, I don't know if you see it: there's no reference to the org unit itself. The org unit you deleted on your cloud server won't be deleted on the local servers. And this happens with org units and with all the metadata. It's a logical problem: you cannot update something that does not exist anymore. So we're facing this kind of problem.

The other kind of problem we face is synchronization bugs. Maybe you are familiar with screens like this. There are special kinds of metadata that work together — I'm thinking of option sets and options: an option set cannot exist without its options, and the inverse as well. So you update an option in an option set, or remove it, or change the order, and there are many such changes that simply won't work anymore. Which brings me to another issue: sometimes the logs are not verbose enough. You get a null pointer exception — null, okay, how can I figure out what is failing there? Well, we can play the guess-and-try game: I take all my metadata manually and try importing only the maps. Okay, that works. Now let's add the data elements. Okay, that works. Let's add the option sets — ah, that one fails, so maybe it's something inside the option sets? I've even heard someone here tell me: what we did was develop a script that plays this guess-and-try game automatically.
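To make that guess-and-try game concrete, here is a minimal sketch of such a script, assuming a DHIS2-style /api/metadata endpoint that accepts importMode=VALIDATE dry runs. The base URL and credentials are placeholders, and it assumes a single offending object with no cross-references between the slices it tests.

```typescript
// Minimal sketch of a "guess and try" metadata check, assuming a DHIS2-style
// /api/metadata endpoint that accepts importMode=VALIDATE for dry runs.
// BASE_URL and the credentials are placeholders, not real values.
const BASE_URL = "https://cloud.example.org";
const AUTH = "Basic " + Buffer.from("admin:district").toString("base64");

type MetadataPayload = Record<string, Array<Record<string, unknown>>>;

// Dry-run import of one payload slice; returns true if validation passes.
async function validates(payload: MetadataPayload): Promise<boolean> {
  const res = await fetch(`${BASE_URL}/api/metadata?importMode=VALIDATE`, {
    method: "POST",
    headers: { "Content-Type": "application/json", Authorization: AUTH },
    body: JSON.stringify(payload),
  });
  if (!res.ok) return false;
  const report = (await res.json()) as { status?: string };
  return report.status === "OK";
}

// Try each metadata type on its own, then bisect the failing type to narrow
// the problem down to a single object (assumes one offending object and no
// dependencies between the objects being split apart).
async function findFailingObjects(full: MetadataPayload): Promise<void> {
  for (const [type, objects] of Object.entries(full)) {
    if (await validates({ [type]: objects })) continue;
    console.log(`Type ${type} fails validation, bisecting...`);
    let suspects = objects;
    while (suspects.length > 1) {
      const half = Math.ceil(suspects.length / 2);
      const first = suspects.slice(0, half);
      // If the first half passes, the problem is in the second half.
      suspects = (await validates({ [type]: first })) ? suspects.slice(half) : first;
    }
    console.log(`Offending ${type} object:`, suspects[0]?.["id"]);
  }
}
```

Even automated, the metadata objects are interdependent, so slicing them apart like this is fragile — it remains a workaround rather than a fix.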
So we can take a coffee and the script tells us the solution. Okay, that's better, but still, it's a problem — it's a big problem. Well, you can ask for support, and it's true that sometimes the support is very good: you report a problem and you get a patch within some weeks or months, and that's quite okay. But sometimes that's not the case. Here's an example from a colleague who reported a bug with the steps — one, two, three, four, five — to reproduce the problem. He reported it in August 2020 and the answer came in December 2021, a year and a half later. And the answer was: well, this version is no longer supported, I'm sorry, we are closing the ticket. Okay. Well, thank you, guys.

So what are the consequences? Sometimes patients, and our people working in the field, cannot wait a year and a half. I know this is free support, but meanwhile we have to move to ad hoc solutions. For example, Barcelona developed an app to synchronize metadata, because the standard sync was not working and they needed to synchronize the field servers. Or, when I was at WHO with Nacho, with EyeSeeTea, we developed a metadata sync app to do the same kind of thing, to solve common issues. Or, at this conference, I heard of someone developing scripts to do the checks that DHIS2 itself should do. So what we are asking for is to get the standard functionality working. It's a very good product, and it's hard to make it work in every case, but it would be good.

The other kind of challenge we have is synchronizing data, which is a bit different. For example, you decide to change a field in your form — age, say — from decimal to integer, or to text (not a good idea, by the way), or from text to integer. Your data will remain in the database, but you lose it from your analytics; you won't be able to exploit that data anymore. So we have to pay attention to these kinds of changes, and it would be good if DHIS2 could offer a mapping or a migration of the data.

Then there are data quality issues. Again, if we remove a patient on our cloud server, or on the field server — which is where it usually happens — maybe this deletion is not propagated to the other servers, so we start having inconsistencies between the data. There are also human errors: if, in this architecture, you decide to give people access to both the offline server and the cloud server — not a good idea — you will have errors for sure. So we are starting to develop data quality scripts that do very basic things: you count the patients by org unit and by program on the field server, you do the same on the cloud server, you inject this data into the cloud, and you can compare. So these are the kinds of issues we are facing, and Alejandro will now present one of these ad hoc solutions. I'm available for questions afterwards.

Hello, my name is Alejandro. I'm here representing MSF Spain. What we are going to present now is one of the use cases we had, in relation to reactive vaccination campaigns. The challenge, in terms of data needs, was that we had configured DHIS2 with a fixed dataset whose disaggregation was based on age groups following the standards, but that was not meeting the criteria required at field level. So you will be wondering why we went for that kind of design.
The truth is that it was really complicated to cover this with different datasets, because there were potentially too many combinations. Some information was not available until the last minute, like vaccine availability or the types of population in the areas where we were going for the intervention. An additional challenge was that we needed to configure it really quickly: it's a reactive campaign, so when the plan was approved we had one week before the intervention team left for a remote area, possibly with low connectivity. So we needed to have it ready really quickly. And even the hierarchy was sometimes not that clear: maybe they knew the overall area, but not the exact vaccination points where the campaign was going to take place.

So what they were using in the end was an Excel spreadsheet. It was really customized to what they needed, they were doing data entry on a daily basis, and it gave them the analysis they had requested. From time to time that was compiled into a dataset and entered into DHIS2. But since the two data models were basically not compatible, that led to retrospective data entry, on many occasions incomplete data, some data manipulation — because they had to convert from one model to the other — and poor data quality.

So we wanted to try using DHIS2 as the source of data, to keep using DHIS2 for data collection. To do so, we needed to support the campaign management, and we needed to provide at least the same level of analysis they had in Excel. As a result, if they were using it on a daily basis to make decisions, we would get better quality and completeness of data. We considered different options, and in the end we thought the simplest approach was a custom web app within DHIS2 that we call VaxiApp. It was developed by EyeSeeTea — they are over there, so you can consult them if you are interested. It's simple in the sense that users have a single interface to access all the functionality the application provides, basically organised around campaign creation, data entry and data analysis. We'll see that now in a little more detail.

In terms of campaign creation: that's the initial dataset they had at the beginning, with all the antigens in the columns and the fixed age groups. It was replaced by a process within the app in which the user provides generic information — the name of the campaign, the days on which the campaign will take place, the vaccination teams, and other information like the org units where the vaccination is going to take place. Then the main logic is in the selection of the antigens: they can select only the antigens that will be used in that specific campaign and, within each antigen, the information they need. The choice is not completely free: we know some information is mandatory, so some fields are labelled as such, while other pieces of information are optional. There are more options than shown — I just cropped the image there — for example gender and displacement status, disaggregations that might be needed on one occasion but not in all of them.
Another interesting thing is the age groups. By default we provide the standard age groups, but they can unselect some of them if they are not needed, and on some occasions they can also subdivide one of the age groups into smaller groups. What we wanted with that is to keep at least some kind of transversal analysis across campaigns. So it's not a completely open choice, but at least we are giving them more flexibility, which is what they had requested from us. When the user finishes that process, what they get is a custom dataset, created according to the selections they previously made.

So what happens after campaign creation? In the menu of the app there will be a new line for the campaign they just created, with different options: on one side data entry, and then data analysis. Data entry is divided in two. On one side they can set the target population, again with a specific process within the app: they encode the age distribution in the region and then the total population, and that can be done globally; if they need to specify it further, by health area say, they have the option to do that as well. Then, on the data entry part, they see the normal DHIS2 data entry, but within the same environment of the app — we do that using iframes. One thing we do to facilitate the task a little is pre-selecting the organisation unit — the first one — which they can then change if they have to. And we provide some validation rules to improve data quality a little.

When it comes to data analysis, again the app goes through the standard DHIS2 dashboard functionality, within the same interface of the app, using iframes. This dashboard, which includes different graphs and tables, is also created during campaign creation, so it saves some configuration time as well. Here they have everything they need to follow the campaign: vaccination coverage and so forth.

Another interesting aspect I wanted to highlight is that there are two roles: the campaign manager has access to campaign creation, while the normal user has access to data entry and data analysis. All the configuration is stored either in datasets or in constants. That was a key requirement for us because, as Ramon was explaining, we have local servers, so we need this to work with the metadata sync functionality, and some elements, like the data store, cannot be synced to local servers. So we need to store the configuration in pieces of metadata that can be synchronized to local servers. The only thing that is not synced is the dashboards, but they are created again the first time the app is opened on the local server. And it's modelled in a way that is scalable: all the metadata behind it is created automatically by a script driven by a configuration file. So if there is a need to include another antigen, to modify the elements in the dashboard, or to change the logic a little, that's easy to do. In terms of ensuring a quick deployment, some aspects were solved through processes rather than through the app; the app was on the configuration side.
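Before turning to those deployment processes, here is a minimal sketch of the configuration-driven metadata generation Alejandro describes: building a custom data set from a small campaign configuration and posting it to a DHIS2-style /api/metadata endpoint. The server URL, UIDs and names are hypothetical, and the real app models far more than this (category combinations, constants, validation rules, dashboards).

```typescript
// Minimal sketch of config-driven data set creation, loosely inspired by the
// approach described for VaxiApp. Names, UIDs and the server URL are
// placeholders; the real app generates much more metadata from its config file.
const BASE_URL = "https://vaccination.example.org";
const AUTH = "Basic " + Buffer.from("admin:district").toString("base64");

interface CampaignConfig {
  name: string;
  orgUnitIds: string[];            // vaccination points chosen by the user
  antigenDataElementIds: string[]; // data elements for the selected antigens
}

// Build a DHIS2 dataSet object from the campaign configuration.
function buildDataSet(cfg: CampaignConfig) {
  return {
    dataSets: [
      {
        name: `Reactive campaign - ${cfg.name}`,
        shortName: cfg.name.slice(0, 50),
        periodType: "Daily", // campaigns are followed day by day
        dataSetElements: cfg.antigenDataElementIds.map((id) => ({
          dataElement: { id },
        })),
        organisationUnits: cfg.orgUnitIds.map((id) => ({ id })),
      },
    ],
  };
}

// Post the generated metadata so it can later reach field servers
// through the normal metadata sync.
async function createCampaignDataSet(cfg: CampaignConfig): Promise<void> {
  const res = await fetch(`${BASE_URL}/api/metadata`, {
    method: "POST",
    headers: { "Content-Type": "application/json", Authorization: AUTH },
    body: JSON.stringify(buildDataSet(cfg)),
  });
  console.log("Import status:", res.status, await res.text());
}

// Example usage with made-up identifiers.
createCampaignDataSet({
  name: "Measles North District 2022",
  orgUnitIds: ["OuSiteAaa01", "OuSiteAaa02"],
  antigenDataElementIds: ["DeMeaslesD1", "DeMeaslesD2"],
});
```

Because everything ends up as ordinary datasets and constants, the configuration can travel to the field servers through the normal metadata sync, which is the constraint Alejandro highlights.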
Coming back to deployment: what we did through processes is, for example, having laptops with DHIS2 pre-installed — at least one laptop for the emergency teams that carry out the vaccination interventions — and also having generic hierarchies in case things are not really clear at the very beginning, so they can at least start encoding there and we modify the names whenever we have that information.

In terms of use, it was made operational last year: it was used in five campaigns in 2021 and eight campaigns during this year, 2022. The feedback we have is positive, and we have seen that most of the data, with a few exceptions, has been encoded during the campaign rather than retrospectively, which is a good sign. They have even started requesting new functionality. One of the things they wanted was to use it for preventive vaccination campaigns as well. That's happening already, although in the analysis we need to improve a little how they can differentiate between data coming from reactive campaigns and from preventive campaigns. We were also asked to incorporate the encoding of information that is not related to the vaccination itself but is normally done alongside it, like nutrition surveillance or provision of vitamin A, and to have more flexible hierarchies in order to exploit the analysis a little better: right now there is only one organisation unit level with a regional configuration, and they want a second level in order to group the analysis by, for instance, axes.

One of the challenges when using web apps — and I guess this is general — is that when you update from one version to another, bugs appear in functionality that is supposed to work, like analytics of aggregation type last value, or greyed fields. If that's not fixed, what we have to do is work on workarounds, and that's not the best, because in the end it's not the standard. But those are the challenges we have when working with custom apps.

A last slide to present something that is maybe a bit outside the topic of the reactive vaccination app, although related. In MSF we normally use operational hierarchies, but cases like vaccination campaigns, community activities, surveillance and others have started to bring regional hierarchies into consideration as well. In our case — and this is not completely solved, it's something we have to pay attention to, at least in MSF Spain — what we are doing is modelling those as projects in the end. What happens is that the hierarchy starts to grow: we end up with one mission, for instance, with a few projects that are still active but a lot of branching, and projects that are already closed. And we are not really realising all the potential it might have — for example, if we do an intervention twice in the same area, having the historical values of the previous activity. I don't know if anyone else has that use case in your organisations, but we would be interested in knowing how others are trying to solve these kinds of challenges. And that's it for me; I pass it over to Chris.

Hi there — thank you. My name is Chris, I'm from the operational centre of Paris, and I'm going to talk about our custom solution, Praxis.
Praxis was developed to answer the need for routine data collection in our section, and this includes data for our inpatient, outpatient and multiple other activities. We were looking for a solution that would span the health domains we work in and meet several key needs. A lot of these needs are things my colleagues have already talked about, but what we were looking for with Praxis was, first, a system that is accessible and usable offline and/or in situations with poor internet connectivity. We also wanted a system that would give us simple, decentralized customization of lower-level organisation units and selection of data sets, so that we could decentralize some of this configuration and customization to the project level. And we wanted a simple and intuitive user interface that really reflects the different kinds of user roles we have — again, responding to some of our challenges with high staff turnover, and just the ability to quickly deploy a system and have it usable and available for people. I'll talk about each of these in a little more detail and how Praxis is meeting these needs.

For the first one, this is an overview of the architecture we have with Praxis. We have a single DHIS2 instance that we use as our data warehouse and where we analyse data. From DHIS2 we push data and metadata down to each of our Praxis instances. What Praxis is, is a progressive web app that can be installed in a browser, and it fully replaces the need to log into DHIS2: you can log in to enter data, to approve data, and also to see some reports that are pushed back. The sync is two-way: we send metadata, configuration and data from the analytics tables down to Praxis, and Praxis sends back the data entered. This works because Praxis uses IndexedDB to store data. When Praxis is online, it syncs much as multiple DHIS2 instances would sync between each other. But when it's offline, the functionality of Praxis doesn't go away: we can still log in, save data and submit data, and it gets added to a queue to be synced when there is connectivity. Reports are also available, and all of the metadata is available as well. Obviously we need a connection to grab updates, but that only needs to happen infrequently.

Then, in terms of the decentralized configuration I was talking about: this is the project admin side of Praxis. On the left you can see the lower portion of our org unit hierarchy, where we have the project, then the facilities, and within the facilities the different services. Through Praxis we can set this up: within a project we can add new facilities and add new activities to facilities, giving the possibility for this decentralized configuration and setup to happen quickly. We also have the ability to add or assign data sets to our different areas. Data sets can be either aggregate or programs — this is a screenshot of an aggregate data set selection — and when we add a data set, we give the users the ability to select exactly what from that data set they want to collect. So Praxis gives us the ability to mask our data sets.
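Stepping back to the sync mechanism for a moment, here is a minimal sketch of the offline-queue pattern Chris describes — saving entries locally in IndexedDB and flushing them when connectivity returns — assuming a browser environment; the store name and the data value set endpoint are illustrative, not Praxis's actual implementation.

```typescript
// Minimal sketch of the offline queue pattern described for Praxis:
// entries are saved locally in IndexedDB and flushed to the server when
// connectivity returns. Store and endpoint names are illustrative only.

interface QueuedUpload {
  id?: number;     // auto-incremented key
  payload: object; // e.g. a data value set ready for upload
  queuedAt: string;
}

function openQueueDb(): Promise<IDBDatabase> {
  return new Promise((resolve, reject) => {
    const req = indexedDB.open("offline-sync", 1);
    req.onupgradeneeded = () =>
      req.result.createObjectStore("uploads", { keyPath: "id", autoIncrement: true });
    req.onsuccess = () => resolve(req.result);
    req.onerror = () => reject(req.error);
  });
}

// Always enqueue locally first, so saving works with or without a network.
async function enqueue(payload: object): Promise<void> {
  const db = await openQueueDb();
  const tx = db.transaction("uploads", "readwrite");
  tx.objectStore("uploads").add({ payload, queuedAt: new Date().toISOString() } as QueuedUpload);
  await new Promise<void>((res, rej) => {
    tx.oncomplete = () => res();
    tx.onerror = () => rej(tx.error);
  });
}

// Flush the queue when the browser reports connectivity.
async function flushQueue(): Promise<void> {
  if (!navigator.onLine) return;
  const db = await openQueueDb();
  const items: QueuedUpload[] = await new Promise((res, rej) => {
    const req = db.transaction("uploads").objectStore("uploads").getAll();
    req.onsuccess = () => res(req.result);
    req.onerror = () => rej(req.error);
  });
  for (const item of items) {
    const ok = await fetch("/api/dataValueSets", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(item.payload),
    }).then((r) => r.ok).catch(() => false);
    if (!ok) break; // still offline or server error: keep the rest queued
    db.transaction("uploads", "readwrite").objectStore("uploads").delete(item.id as number);
  }
}

window.addEventListener("online", () => void flushQueue());
```

The same idea extends to approvals and other writes: nothing is lost while offline, and the queue drains in order once the browser reports it is back online.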
So, coming back to the data sets, there can be some customization of exactly what we're collecting. An example is diagnoses: we have an OPD data set with hundreds of diagnoses set up in it. However, it's not necessarily very useful or user-friendly to have hundreds of diagnoses appearing for a project that mainly wants to track a handful of key ones. So during data set selection, only the core diagnoses appropriate for that context are selected, and the rest are hidden from the user.

In terms of usability, as I said before, Praxis is set up with four main user roles in mind: data entry, two levels of approval, and observers. For data entry, when you log into Praxis you have a homepage that directs you to which weeks of data are waiting for submission and, if data has been submitted, where it is sitting for approval. Data entry itself is quite simple; it shouldn't be anything surprising, as it is built off DHIS2. For our approval users it's a very similar interface: they can see which data is ready for approval, review it, and approve it or send it back if necessary.

Finally, all users have access to reports. These are standard reports set up for each dataset, and beyond, because in some cases we also combine data across our datasets. Reports are available at what we call the service level — the organisation unit level where we collect the data — and for the two levels above that: the facility level and the project level. Every Praxis instance is tied to a specific organisation unit when it is downloaded, so a particular Praxis instance only sees the data from the project where it has been installed. It pulls down the same report everywhere, but only shows the data for that particular organisation unit. The reports are standard across all of our projects, but again customized based on what has been selected and what data is being entered. Reports are always available to users; they are updated daily if there is a connection, and if not, you can see an updated date, so you always know the last time you were connected and had your data refreshed.

That's what's on this slide, so I'm going to wrap up and bring us to some questions now. In conclusion, you can see that across all of the sections we have very different setups and we're using DHIS2 extensively: lots of data sets, lots of programs, and, if you remember one of the data slides Damien showed at the beginning, lots and lots of data values going into our systems on a monthly basis. But we're also facing some very common problems given the contexts where we work: number one, poor internet connectivity; the operational hierarchy is always changing, with new facilities being added quite frequently or activities changing depending on the needs; and within our projects there is a variety of skill sets, but we often need a solution that is going to be easy to implement, as people are busy and always changing. So we've presented just a couple of examples of how we've addressed these challenges.
First, using the core functionality of DHIS2 that is there, and then some developments and other solutions that we place on top of or within DHIS2. But, as Alejandro said quite well, we do face a lot of challenges, especially when it comes time to upgrade. Upgrading when we have a lot of custom solutions built in creates a heavy process for us, and if we're facing blocking issues it can be quite difficult to make sure we stay on the latest version, especially if we have to wait a year and a half for a solution. That's also why we end up finding other solutions and workarounds.

I think now we can take questions, but we also have some questions back to the community: we're looking for ideas on how we can engage with others in the community, not only to continue sharing our use cases and the solutions we come up with, but also to connect on troubleshooting some of the offline challenges we have, and to push for more offline functionality within DHIS2 and better supported sync strategies. Thank you. Does someone want to run the mic?

Thank you all for the presentation. One of the things I got from it is that you have invested heavily in providing an offline solution, both financially and in terms of effort. Is there anything organizationally that is preventing MSF from adopting something like the Android tablets in the field, since that comes with an offline solution? And if yes, is there anything that would make it workable? Or are you locked into your current solution in a way that is actually quite difficult to walk back?

Thanks for the question. First, it depends, because as you said we are different OCs, so we have different strategies. And I think it's also in the question — one of the differences with the Android app, which we are using as well for other things. We just presented three use cases, but we are also using the Android app in some contexts. If you want to give all the users the capacity to analyse their data with more flexibility, that cannot be done in Android: Android is for capturing the data and synchronizing it to the server. If you want to give them the flexibility to do analysis, then you need a local server. That's why we started to use these strategies. We didn't mention Android because there were already a lot of topics, but, for instance, in Geneva we have a special case for Ukraine where we are deploying Android tablets. As was said, that serves very well to report data, but they cannot analyse it there. So yes, we are using that too.

I can add something to that, having watched this from many sides. When all this started, Android was not there for data entry, and analysis is only available now, so it's normal that they are not using it. In the case of OCP, they are configuring in the field — they are changing metadata, Praxis is changing metadata. And when they say offline server, it means that analytics are running in the project without internet and they play with the data visualizer. So it's difficult to replace that with Android. Maybe one day. Or maybe you'll have internet by then.

Thank you for this information. I'm curious about the offline scenarios: can you talk about your support processes at the infrastructure level? The DHIS2 configuration is probably understood, but who is supporting the physical hardware and OS layer up to the application layer?
Do you have ICT folks who are very knowledgeable? Is everything containerized? I'm just curious how you support these in the field. Thank you.

We have a dedicated ICT team in HQ, plus a field ICT team that is based in HQ but does flying visits to our projects, and we also have regional ICT offices as well. The way we do it: we have mini servers — small form factor boxes, as we call them — that we set up on the network of the office or the local health facility. We set up SSH access to log in and configure them, and also data and metadata sync back to us. We do have Docker containers as well. In the grander scheme we also have what we call field network kits: a kit that has everything you need for a new project to start up in terms of ICT — network, firewalls and so forth. At the moment we're working on incorporating DHIS2, as well as OpenMRS, our EMR, into these kits so they're there and ready once the project starts and is ready to start encoding data.

In terms of the question about engaging between organizations, it would be helpful to share information like: what type of hardware and Android devices are you using? What are your findings on long-term use over time? What are the requirements on those hardware devices, both from the mobile-user standpoint and for your servers? What makes you decide to use an offline server versus the mobile kit, which might just be a laptop? And what are some of the pros and cons you've found between those choices in the field, in those environments?

Some quick pros and cons. The pro is that the field has a server right there; the con is that the data has to be synced before we can access it — you have to have the sync working correctly, and also access from our HQ to the local server. Some projects don't need that: we have some, in our Turkey and Lebanon offices for instance, where the internet connection is quite good, so in terms of maintenance and support it's minimal, because they connect to what we call the HQ field server in the cloud, which we've designed as a replacement for the remote servers.

On engaging more, it's maybe beyond what we can answer here, because in the end, as you said, we have these different setups, and I don't know if there is a dedicated channel in the community. One of the things we struggle with is a high workload, so what we should do is dedicate more time to sharing: going to the channels that already exist in the community, attending these sessions and engaging with you.

Hi, I'm Kil — I'm a bit nervous, having been the release manager for DHIS2. I have a question around releases. I understand, of course, that it's a real challenge to stay up to date, but I'm curious about the timescales it takes for you to keep up to date and in sync, and also about the impact of, for example, the way we now do hot fixes — the additional, very small, urgent fixes that we put on some of our releases. For example, on the 2.37.7 that you're moving to, we may have hot fixes if we need them, for security reasons and so on. So do you have the ability to move fast enough?

Not always. We are now on 2.33 and we're planning to move to 2.37.
That requires more or less, I don't know, maybe three months or more of testing. Normally with a new version we see things that are still blocking points for us; we try to put everything in JIRA and to establish what the really blocking points are, and if the next release solves them, we need to retest again, so it takes time. In the past we were doing that once a year. Now, since we are also trying to align across sections, this last one is taking a bit more time, but I would say we will try to update at least once every year to year and a half, given the effort it also takes to go to the places and update all the local servers. Maybe in 2.38, with the decoupling that is happening between the apps and the core, updating for some hot fixes could become an option. But for now we said we were aligning on 2.37, so we'll see.

So you typically upgrade major versions, rather than keeping up to date with the latest patches — or is that not possible?

With minor versions, we only move if we need a specific fix; normally the major version we are on should already be okay, so we don't go through the minor versions routinely. A minor version should just contain corrections, but we have had experiences where that was not the case, so we proceed with care and do the full testing. If it's really urgent we do it, but it wasn't working in some cases in the past. I know a lot has now been put into testing DHIS2, so maybe we have to set aside that past experience and try again. It also depends on the setup — there are five different setups — but we have the same problem. Our colleagues, medical and IT, don't want to hear about upgrading to a minor version, because we have had painful experiences in the past: okay, there's a minor version implementing new functionality, that's great, but it's quite risky — we have to put all our teams on testing the whole system for a minor version. So in practice it's a pity, because we cannot take the minor versions, when the correct way would be to always keep up to date with them. We hope that with this new way of thinking, of upgrading individual apps, maybe that will become easier, so we are looking forward to that moment.

But in general, you take the latest minor version available when you are testing, right? You don't go beyond that major?

So, yeah.

Hi guys, thanks for the presentation. There is something I was missing, and I think it's more about the process than the tools. I know, for example, that in Barcelona you do these metadata changes every four months or so. I'd like to know more about this: what happens when somebody wants to change metadata in the field? Do they have to make a request to headquarters, which is approved by the people there? And then how do you push the changes — the whole process of updating metadata in a specific project, more or less?

Okay, I think the mic is on. Well, I think we all have different processes and experiences in place for this.
On the OCP side, while we try to align with major updates once a year, the way our data model is set up — if you remember from the earlier slide — is that we have very few data sets compared to the other OCs, because we have one data set for everyone: one OPD for everyone. Making changes to that requires a request to us, which is then approved and pushed down. But for things like the diagnosis lists, what we have in Praxis gives the project the ability to change what they've selected as their activities adapt or change. So we've countered the fact that we have a somewhat more rigid structure, with very few data sets that everyone has to use, by giving people flexibility through our app to adapt and respond to how their activities change over time.

I think it's really a separate topic, about governance — there have been some really nice presentations about that as well. We don't make the changes right away; they have to be approved. There is a medical department that takes into account all the requests coming from the field, and what we don't want to do is release things that change the data collection process at field level too frequently, because otherwise they have to reprint all the sheets, so we want to limit that. Normally we try to do about three waves of changes per year, more or less, unless there is something major — for instance a project that needs a new program to be created; that can follow another process, because it only impacts the projects that are going to use that program. But if it's something that is already in use, then it has to come in those waves with the corresponding approvals.

Thank you very much for sharing. The infrastructure you showed is very impressive, and there must be a lot of small details and experiences you have encountered along the way. I'm wondering: is there any public documentation people could look at to learn from your experiences, maybe reproduce them with their own setups — some public repositories that could be leveraged? That would also be another channel through which people could give you feedback and maybe help you out, as you were asking. Where could I look for some public references?

Okay, we have a GitHub site where we develop scripts and so on to maintain and update the remote servers, with Docker images. If you want to contact me later, I can make sure you get access.

Well, I work with you guys as well, so I know the challenges with all these offline, decentralized systems. Another challenge I found — and I'm not up to speed on what you've been doing in the last couple of years — is security. How do you ensure that the data the field is hosting, in a server room or somewhere, is protected and encrypted? Do you have any measures or policies about security? Because it's easy, I guess, to protect everything when you have it somewhere in the cloud, but in the field it's a bit more vulnerable to people stealing things or whatever. I'm asking as a friend.

So my answer is going to be very Praxis-focused, on how we're doing it in fact.
First of all, with our HMIS we are not collecting any identifiable information, so that's one level of security: we don't collect identifiable information, even through our software programs. And then, number two, the way Praxis works is that we have a product key that is shared when we install, and that adds a level of security: only someone with that product key can install Praxis and get access to the specific org unit specified in the product key. We are also able to update the product key for Praxis, so we can reset it when we need to; we have a process in place for that.

When it comes to tracker data, and any data that can be considered sensitive: of course we run processes like privacy impact assessments, and not just for the tool but for the whole process, because it starts before the tool, even with data collection on paper. That is done and applied. Basically, we try not to move data to the cloud if it's not needed, to collect the minimum information, and to encrypt files if we do have to send them to the cloud when that is needed. And in the case of OCBA, this year we are running a security assessment that includes all the IT systems and applications, so there may be more recommendations coming, but that's ongoing now.

And for security on our laptops, we have a sticker that says: please don't touch. Now, in Geneva we are not set up with mini servers like OCB, but with the maxi boxes, what is called FIWI, and we are now virtualizing our servers, so they are more difficult to target for attackers. In any case, the local servers are not accessible from the internet; they can only be accessed from the local network. And the personal data is not transferred to the cloud server; it is only used in the mission, in the project. The tracker data is synchronized, but the personal data is stripped and not transferred there. Thank you.