Yes, the topic for today is long-term tracker maintenance, and we have split this into two halves, you can say. In the first half we are going to talk about general maintenance and how to keep the system from deteriorating over time. And in the other half we will talk about one very important part of maintaining a system in most settings, and that is change management. In essence, change management is about updating, adding new functionality, fixing bugs, and this kind of thing. I'm Markus Bekken from the University of Oslo. I'm leading the tracker dev team there. And with me today I have Prosper Behumbiize from Uganda. I tried to add Prosper to the public agenda as you saw on the screen earlier, but I failed; Prosper is surely coming, though, and he's going to support me in this session today. So we have seen this house many times this week, and this house is sort of the analogy for different aspects of a tracker implementation. And imagine the house looking like this when you are first releasing: you have finished piloting and you release it. This is your house, all freshly painted and looking nice. This session is about how to keep the house nice and avoid the house ending up rusting, decaying and abandoned as we can see here. And I wanted to start off this morning with a little bit of feedback from you, from the early birds that have already joined us. I want to ask you a question and I want you to type a response back to me. It will help us guide the topics a little bit towards which parts of the system you are interested in, or which topics are important to you. And the question I want to ask, and now you have to listen very carefully, is: what types of challenges do you think will arise over time? Or what types of challenges do you know will arise over time in a tracker implementation? So what are the problems that do not show up the first week, but maybe after a month or after a year? What types of problems do you think about when I ask that question?
And I'm going to share a different screen now for a second. Let's see. I'm going to try at least. "Markus, I don't want my tracker program to look like that rundown house." No, let's not do that. Let's try to manage that. But the first thing I have to manage is to... Okay, stop share. There, I found the button. And now you should be able to see a Mentimeter screen. You all see the question, right? What challenges do you think or know will arise over time in a tracker implementation? This is not your first Menti of the week, so you probably know the drill. Go to www.menti.com and add the code 18 19 49 39, and then you will be able to answer this question. It might be what challenges worry you over time. And as you send them in, they should show up here as a word cloud. Please, please go ahead and type in some answers. First ones are in. Thanks. I brought the quadruple espresso to the session, so if I'm starting to talk very fast, please let me know in the chat. Yeah, by the way, a practical thing in these sessions: please type in your questions as they occur to you. I will try to monitor the questions coming in as we go through the session here. I also plan to take a break twice during the session and answer a few questions live. And the ones we don't get to live, we will get to later. So please just post them as they occur to you in the questions channel. So I'm seeing performance, scale. I think this has been very much highlighted and worked on in the COVID context. I think that's been an interesting upgrade with COVID. So I think that's great. Performance seems to be one of the main challenges. And we will get a little bit into that, and performance and scale come a little bit together, because when you scale up, there are some parts of the system that might become less performant over time. So we will get a little bit into how to design so that we avoid that as much as we can, and also a little bit into how to manage the performance of the system over time.
And I see people have been listening: there are considerations on privacy and security. We've talked about that. Yes. And someone also wrote user turnover and capacity building. This is related, and I like to see this brought up. I think that's a highlight of this session: the fact that we have technology, but that's only one part of the problem. The other part is really the implementation. How do you work together as a team? How do you keep it functioning? I see metadata. I see new requirements, which is also about updating and maintaining the program over time. There will be changes to the metadata. There will be new requirements coming in. Yeah, Pamod highlighted that yesterday in one of his presentations. He said the only constant in this COVID situation is that there are no constants. It's always changing. Yep. Seems the cloud is stabilizing, which probably means that most people have entered their concerns. And data storage space, yeah, that's an interesting one. We will try to get a little bit into data storage space. One thing that I can mention right off the bat is that in addition to the normal payload data (if you enter more and more patients, there will be more and more data, of course), we have the audit logs, which are something to keep an eye on for data storage space. And audits should be managed. You should keep the audits you need, but it might also be a good idea to clean up audits, or turn them off for programs that don't need them. System monitoring, I see here, that's good. It looks like we will get into many or most of these concerns, and I think we are ready to move on. It seems like the following slides will hit pretty well what you were asking for, and I will also try to emphasize the things we have seen here. Scale and performance and sustainability are the winners.
When it comes to sustainability, I have not focused so much in this presentation on the funding and the stakeholder buy-in and that part of sustainability. I have focused more on the technical sustainability of the system, so just to prepare a little bit for that. And some of the things that I see in this cloud are also touched on in later sessions this week. So make sure you come to the later sessions, for example for interoperability issues, and also for a bit on scale and performance; this is being touched on in the hosting session later this week. When it comes to security and privacy, we also have the access management session on Monday. So I see topics here that we will cover later, and we'll keep this cloud with us as we go into the sessions later next week as well. All right, I will stop sharing here. Thanks all for your contributions. And we'll go back to the house, and talk about how to avoid the problems that we just saw in the cloud there. So, some common challenges. This mostly crosscuts the ones we had in the word cloud already. The software might be out of date. And by this we mean that when you install it, it's new and everything is up to date. But as time passes, the operating system, Tomcat, Postgres, Java, will become old. There will be security fixes, and there will be improvements in performance in this software, and it's important to have some routine for updating it. The other part is DHIS2 and the Android app itself. The day you install it, it's the best version that we have, but over time we are doing maintenance and development of the software, and it will be necessary to update after a while. Another sort of problem you get is what I call accumulated garbage. It could be described in many ways; it was maybe referred to by some of you as metadata, or updates, or new functional demands. What will happen over time is that sometimes you will get user accounts that you no longer need.
Especially super users. Super user access to help debug is an especially touchy subject. We will get a little bit into how to go over and clean this up. The other part is unused data elements and indicators, which might make it harder to maintain your system. We have problems with turnover. That might be that some parts of your system are very well known to some of your staff: there might be knowledge about how the system is set up and working that is only in the heads of some of your staff. And if you have turnover, this is information that is often lost, or at least partially lost. Another problem with turnover is new people coming in: they might have new conventions, and they might do things differently than the people that were there before them. And one last big group of challenges that might come over time comes from accumulated data. Some of the scale and performance issues we talked about here can also be kind of referred to as accumulated data. On day one your system will have no data, and you will start entering records. Over time there might be both logs and also data that is being used day to day that gets accumulated, and this might affect your system in different ways. So, to pick up the first point of the software being out of date: I'm mentioning the operating system, Tomcat, Postgres and Java first here because these are maybe the ones easiest to forget, and to fail to have an upgrade plan for. They are invisible unless there is a problem; you never think of them. And you as a manager need to make sure that you have a routine for checking or thinking about this from time to time. There might also be parts of this that have automatic updates, like your operating system. But there should be a routine to go and look at the versions of everything and have a plan for upgrading from time to time. On the DHIS2 software and Android app, we know that main releases are released every six months approximately.
And these main releases are bundles of new functionality and fixes, and they are a bigger operation if you want to upgrade the main release. But I want to put special emphasis on the point releases every six weeks. When we go in to help or when we are supporting someone, we very often see systems that are on older point release versions. So if you were running 2.35.3 at the beginning of the week, because 2.35.3 was the newest version we had, it should be something to think about when we released 2.35.4, which we did two days ago. As you see here on the slide, 2.35.4 was released on the 25th of May. You can read about the fixes and updates that were in 2.35.4, and you should really consider upgrading on these point releases. Maybe not every point release, but you should have a clear plan for how often you upgrade. On your main version there will be a six-week cycle for point releases, approximately, which means that the next one is coming out in one and a half months, approximately. So, on the upgrading of this infrastructure: Prosper will come back more to testing later. I just wanted to briefly mention that when you upgrade the infrastructure, you should do a small regression test for sure, to make sure that your system is still responding. And upgrading a point version, from 2.35.3 to 2.35.4, requires, I would say, a medium regression test. These point versions have no new features, but there might be fixes, there might even be database schema updates in a point version, and you should do a regression test when you do an upgrade of the point version. When it comes to the full upgrade of a new main version, then that's a different game. You would need a full regression test, you would need a rollback plan, you would need an Android upgrade potentially. In the future we will also have individual app upgrades that might be necessary, when we start delivering apps in their own release cycles.
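To make the point-release check concrete, here is a small sketch of the kind of routine check you could script. The running version would normally come from the DHIS2 Web API's `/api/system/info` endpoint, which returns JSON including a `version` field; treat the exact payload shape as something to verify on your own instance. The comparison itself is plain version arithmetic.

```python
# Sketch: flag when a newer point release exists in the same main version line.
# The running version would normally come from GET <base>/api/system/info
# (JSON with a "version" field; verify the shape on your instance).

def parse_version(v):
    """'2.35.3' -> (2, 35, 3); tolerates a leading 'v'."""
    return tuple(int(p) for p in v.lstrip("v").split("."))

def point_upgrade_due(running, latest):
    """True when 'latest' is a newer point release of the same main version."""
    r, l = parse_version(running), parse_version(latest)
    return r[:2] == l[:2] and r < l

print(point_upgrade_due("2.35.3", "2.35.4"))  # → True (same line, newer point)
print(point_upgrade_due("2.34.6", "2.35.1"))  # → False (different main version)
```

A check like this could run weekly from cron and notify the team, leaving the decision of when to actually upgrade, and the regression test that goes with it, to the maintenance plan.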
As you know, new versions of the DHIS2 core apps are being delivered on the same six-monthly release cycle as the DHIS2 main war file. But in the future this will not be so. The apps will be compatible with several main versions, and upgrading an app will be a separate decision from upgrading the DHIS2 backend war file. So this will be something to think about in the future. To get a little bit into the problem of accumulated garbage, and some of the things that we might call accumulated garbage: one of them is user accounts. We should make sure we have roll-on routines, but in addition, to avoid garbage, you need roll-off routines. So when someone quits their job, it's very important to clean up their access. This should always happen, but then routinely you should also check whether these roll-off routines are actually working, and I would be very surprised if there are not some users that get forgotten or somehow not rolled off properly. There are some tools in the system that you should make sure someone looks at from time to time, to look at people who haven't logged in for a long time. This user, for example, hasn't been in the system for a long time. Of course, the people that do not log into your system might not be the ones most important to get rid of, but you have some tools and some help in the user management app. This is not something you can trust, though, because if you have a really malicious user that will log in and misuse their access, they will log in and you won't see them on this list. And these users you would need to properly roll off from the beginning. Delegating some local management here might also be a good idea: give the local authorities, or the authorities closer to the users, responsibility for making sure that in their district they know everyone who has user access.
And the other big point here, and this is something I see very, very often, and I'm sorry if I'm pointing the finger at someone in the room right now: when we are asked to help, from Oslo, we very often get the super user account. In my case, what I'm usually doing is logging into the system to help with supporting or debugging something, debugging a program rule or whatever it might be. I almost never need super user access. What I would need, if I was going to log in, is access to some test facility, hopefully on a test server if that's possible, and almost never super user access. So don't give out super user access, and if you do, please delete it again. And in your own organization there should be a very limited number of people with super user. It's almost always possible to give a more targeted role access to the users. If you give super user, that person can do everything. They can look at all the data, they can tamper with everything. This should only be your most trusted employees. And super user access is something to think about from the very top. The number of super users should be monitored, and you should know which super users are in the system. That also brings up another topic. I'm not really getting into it now, we'll get more into it later in a different session, but if you give someone access to your system, they should sign an NDA with you. And we will try to do that from the University of Oslo; we'll try to have a ready one that we will give to you. If you ask me to log into your system, I will send you this document. It will have my signature on it; I'll have it on my desktop and send it to you, and you need to sign it and send it back, or else I won't log in. That's a bit of a tangent, so I'm continuing to the next part of accumulated garbage we might see in the system, and this is unused data elements and indicators.
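Picking up the point about monitoring the number of super users: one way to sketch that check is to pull the user roles and look for the ALL authority, which is what makes a role a super user role. `/api/userRoles` is a real endpoint; the exact payload shape in the sample below is an assumption to verify on your instance, and the sample role names are made up.

```python
def superuser_roles(payload):
    """Given a payload shaped like /api/userRoles.json?fields=id,name,authorities
    (shape assumed; verify on your instance), return the names of roles
    granting the ALL authority, i.e. super user roles."""
    return [role["name"]
            for role in payload.get("userRoles", [])
            if "ALL" in role.get("authorities", [])]

# Hypothetical sample payload for illustration:
sample = {"userRoles": [
    {"id": "r1", "name": "Superuser", "authorities": ["ALL"]},
    {"id": "r2", "name": "Data entry clerk", "authorities": ["F_DATAVALUE_ADD"]},
]}
print(superuser_roles(sample))  # → ['Superuser']
```

A second query for users holding those roles would then give the list to review against the small set of trusted employees who should actually have super user.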
So, just looking at the Sierra Leone demo database and searching for "name", you will find several first names and several last names, and it's very hard to know which one is actually being used in different places of the system. As you add more programs, as you add more data, and as you make changes, this problem will become bigger and bigger, and it has to be managed. This is very hard to manage, though. It has to be managed in many different ways, and one of the ways is that you should routinely do a walkthrough of your metadata and make sure to get rid of the trash. For data elements, for program indicators, for indicators: it's super confusing if you try to build a dashboard, search for indicators, and find many indicators with very similar names. So metadata cleaning is the recommended action, and this is something I would say you should aim for doing maybe once or twice a year, not more. Also, if you're getting into changes, there might be a need for data migration to avoid unused data values in your system. Okay, so another known challenge that we have to manage is turnover. Just to illustrate one of the problems with turnover: the design of your program and how it works might be one of the things you lose when you lose one of your staff. I put design ideas here, and in my example I'm showing the gestational age field in one of the trackers I was once involved in. If the description of how this works is in someone's head, it will be super hard to just search for gestational age and start looking at the program rules for how this field is calculated. We can see that the gestational age is clearly wrong, it's super high, and unless you have a proper design documented, for example like this, explaining how this calculation is done, it might be nearly impossible for someone to fix the problem or update this calculation. So our recommendation here is to make sure your system design is documented, and to keep the design document updated.
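For the metadata walkthrough, a small helper like the one below can narrow down the list of cleanup candidates by diffing all configured data elements against those actually seen in data. How you obtain the "used" set (a SQL view, an analytics export) is left open here, and all names in the sample are made up; always review candidates by hand, since a data element with no values may still be referenced by program rules or indicators.

```python
def unused_candidates(all_data_elements, used_uids):
    """all_data_elements: list of {'id': ..., 'name': ...} dicts, e.g. from
    /api/dataElements.json?fields=id,name&paging=false.
    used_uids: set of UIDs that actually occur in stored data values.
    Returns cleanup *candidates* only; review each one before deleting."""
    return [de for de in all_data_elements if de["id"] not in used_uids]

# Illustrative, made-up metadata:
elements = [
    {"id": "deA11111111", "name": "First name"},
    {"id": "deB22222222", "name": "First Name"},  # near-duplicate from convention drift
    {"id": "deC33333333", "name": "Gestational age"},
]
used = {"deA11111111", "deC33333333"}
print([de["name"] for de in unused_candidates(elements, used)])  # → ['First Name']
```

Run once or twice a year, a diff like this turns the vague "get rid of the trash" task into a concrete, reviewable list.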
In more general terms, the system knowledge, the knowledge about how the system works, about which scheduled jobs are running and what problems you once had, is also something that all too often is in people's heads. I put some points here. In your implementations, you have to make sure you get a culture for noting things down and documenting how your system works, with the scheduled jobs. When you make a hard-earned learning, if you have a problem and solve it somehow with a workaround, document it and keep it somewhere safe. And document operating procedures, so that if there is something you know you have to do from time to time, please make sure that is written down in case the person that usually does it leaves. Another problem with turnover, as mentioned in the beginning, is convention changes. Convention changes might look very innocent: when we looked at the data elements earlier, we saw that first name was written with capitalization in one case and without capitalization in another. That's not a big problem, but it's not a good culture for the people working on your maintenance team to not be very strict with these conventions. When it comes to user groups, the problem is no longer just an annoyance; it might be a real problem. In the Sierra Leone database we see this real problem: some of the user groups are made especially for giving access to some data sets, and then we have some others that might look like "family planning program coordinators", for example. What is that? And "administrators African HQ". We should have a very uniform way of making these names, and make sure that it is as apparent to anyone what the group is as it can be from the group name. The only way to do this is to make sure we write down our conventions, so that people rolling off and rolling on will not bring their own ideas: they will look at the existing conventions, and bring their own ideas on top by changing the conventions mindfully, as a deliberate task. And then, one of the very big points that was coming from your cloud as well, and I can see a question or two coming in on it too: accumulated data. It's natural that your tracker system will become bigger and bigger over time, and unless you have specific cleaning routines, you have to expect that your tracker instance will grow forever. One of the things that will make it grow forever is the audit history. If you turn on the audit history, even for read access, then you will be very well protected in case there is a problem, and in case you have to go back and look at what actually happened back in time. If there is a loss of data, for example, you will have a very good overview of what happened. But a data read audit is also very space consuming, so depending on what parts of the audit you actually turn on, you might need to have routines for cleaning out this audit table. Okay, and then another challenge that we have seen very practically over time in some places, and I think Pamod touched on it yesterday: as we get more tracker data into the system, it's very possible to build indicators that will become heavier and heavier over time. We have seen this in many countries. We have seen it in Ghana, for example, where the indicators for calculating people attending care were looking at more and more data, as more and more data got accumulated in the HIV system. This is not something that is always avoidable, but it's always something you can manage. One way of managing it is to make sure you don't have a super user that logs in and does a calculation on your entire country implementation. So if your super users have access to the same dashboards as the clinic or district users, that might be a red flag to look into.
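As a sketch of what an audit cleaning routine could look like: the snippet below only builds the SQL, it does not run it. The table names and the `created` column are assumptions based on the DHIS2 2.3x Postgres schema; verify them against your own database, take a backup first, and run any delete inside a transaction in a low-traffic window.

```python
# Audit tables that tend to grow; names assumed from the DHIS2 2.3x schema —
# verify against your own database before using.
AUDIT_TABLES = [
    "trackedentitydatavalueaudit",
    "trackedentityattributevalueaudit",
]

def purge_statement(table, keep_days=365):
    """DELETE statement removing audit rows older than keep_days.
    Generate and review; run inside a transaction, after a backup."""
    return (f"DELETE FROM {table} "
            f"WHERE created < now() - interval '{keep_days} days';")

# Keep two years of audit history, purge the rest:
for table in AUDIT_TABLES:
    print(purge_statement(table, keep_days=730))
```

The retention period is a policy decision: keep the audits you need, as noted above, and only purge where your data protection and accountability requirements allow it.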
A cumulative value is usually also heavier than other types of indicators, because the number of data items it needs to look at for tracker is going to get bigger and bigger as time passes. Another part where we have a very known challenge is the working lists. If you're using the standard working lists in tracker, if you have not put any thought into this list, then tracker is delivered with the front page list that we see here. For those who don't know tracker, we are now looking at a list of all the active enrollments in my program. Depending on your workflow, this might be a list that is forever growing, and after the first week it's no longer a list that's actually useful to anyone. It might be a good idea to turn off this list and ask everyone that looks for data to search for the record they want to see. Or you should make a custom list: you should think about and design what is useful for the user. Maybe you only want to see the records that have a scheduled event today; maybe you only want to see the records with some filter placed on them, so that you have a working list that is meant for the user, to find very quickly the record they're working on. If you don't have such a list, it might be best to turn off this working list here.
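As one sketch of the "scheduled today" idea: instead of listing every active enrollment, you could query only the events due today. The function below just composes the URL; `/api/events` is a real endpoint and `SCHEDULE` is a real event status, but the due-date parameter names vary between DHIS2 versions, so treat `dueDateStart`/`dueDateEnd` (and the placeholder UIDs) as assumptions to check against your version's Web API docs.

```python
def due_today_events_url(base_url, program, org_unit, day):
    """Compose a query for events scheduled on a given day, as a leaner
    alternative to the default 'all active enrollments' front page list.
    Parameter names dueDateStart/dueDateEnd are assumptions — verify them
    against your DHIS2 version's /api/events documentation."""
    return (f"{base_url}/api/events.json"
            f"?program={program}&orgUnit={org_unit}"
            f"&status=SCHEDULE"
            f"&dueDateStart={day}&dueDateEnd={day}")

# Placeholder UIDs for illustration only:
print(due_today_events_url("https://dhis.example.org",
                           "progUid00001", "ouUid0000001", "2021-05-27"))
```

The design point stands regardless of the exact parameters: a working list should be bounded by something (a date, a status, a filter) so it stays useful after the first week.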
We also have unknown challenges, which by their nature are unknown, and although I'm telling you about the known challenges here, there is no way of making sure that your setup does not have a bottleneck somewhere that will get worse and worse over time as more data comes in. So the last point here on accumulated data, and the real management that you can do over time, is to make sure you have a well developed monitoring mechanism. For this monitoring mechanism, I'm showing a screenshot of Glowroot, which is running on many servers and can be useful in monitoring response times and seeing problems before the users see them. There is also a guide in the documentation on how to set up Grafana, and there are many different ways of monitoring your server. The most important thing from your perspective is to make sure that you have a team that sets up monitoring that will show you the problems before the users see them. If you hear about the problems from the users, it's usually too late, or it might be so bad that it puts very much pressure on the team trying to support, both in the country and, when we get involved, from the central level. It's too often too late to start supporting, or the system might already be down, and that puts a lot of pressure on the people working on this. We would much rather try to find the problems earlier. On the central level we're also getting better at testing and performance testing, but on your server, you need to monitor it to make sure that you know it's doing fine. The system can be configured in a million ways, and it's not possible for us to test all of them; you have to make sure you monitor your server. So, just a quick recap of the things we have been through here, and then we'll do a short break for some questions. The software-out-of-date problem should be managed with software update routines, and this is something you have to initiate as a manager, because if you
don't, it won't happen. Much of this is a problem accumulated over time; it's not really a functional need that drives you to upgrade. It's your job as a manager to make sure this is planned for and considered from time to time. For the accumulated garbage, we will get into how to avoid it in the next part of the session, but the cleaning routines are super important, because it's not possible to avoid everything. You will need roll-off and deactivation routines, and routine cleaning of the metadata. To manage turnover, you need the designs, you need to have things documented, you need standard operating procedures. And when it comes to accumulated data, there are some design decisions you can make to minimize the problem, but in your maintenance process you need to make sure you have a monitoring step, a monitoring routine, to look at your server health and to raise an alarm at an early stage if there are problems coming. Okay, so the next part of the session is the maintenance and change management process. But before that, there have been some questions in the chat. One of them was what version you should be working on for tracker, and I see that some people are going to start on version 35. I would say start with 35 or 36. 36 was recently released, but the testing for 36 was better than for previous versions, so I wouldn't be very hesitant about starting at 36 either. 35 is fairly new and a good version. One thing to mention on the versions of DHIS2 is that one of the most important things at the moment is to start at either the newest version of 34, the newest version of 35 or the newest version of 36. That would be my main recommendation, and the reason is that all these three versions have recently been upgraded to become, in some cases, an order of magnitude faster. The earlier point
versions, like 2.34.3, were much slower, and we have fixed many bottlenecks over the last year working with many of the implementations around the world. So stay at the latest point version; that's the most important comment I can give. There was a question on the audit history, and in the hosting session we'll talk a little bit about the audit config, so come back for that. As a manager, the most important decision is how much audit you need to turn on, or whether you want to turn something off. The question that was raised in the channel was whether you are able to identify which user has entered tracker data: not only modified it, but entered it in the first instance. And there is an easy answer, Mohaber: that is stored in the database. It's not shown in an elegant way in the user interface, but it's stored in the database, so you can get it from there. This is something being considered for the new capture app that we're working on: to have the first user visible somewhere as well, so that you can see which user entered the data in the first place. That was kind of missed in the last app there.
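For the "who entered it first" question, here is a sketch of what digging that out of the database could look like. The table and column names (`trackedentitydatavalueaudit`, `audittype`, `modifiedby`) are assumptions based on the DHIS2 2.3x schema, and a CREATE audit row only exists if auditing was enabled; verify against your own schema before relying on this. The `storedby` column on the data value table is another place to look.

```python
# Sketch: SQL to find the user who first entered a given tracker data value.
# Table/column names are assumptions from the DHIS2 2.3x Postgres schema;
# a 'CREATE' audit row only exists if auditing was on when the value was saved.
FIRST_ENTRY_SQL = """
SELECT modifiedby, created
FROM trackedentitydatavalueaudit
WHERE programstageinstanceid = %(psi)s
  AND dataelementid = %(de)s
  AND audittype = 'CREATE'
ORDER BY created
LIMIT 1;
"""

def first_entry_query(psi_id, de_id):
    """Pair the SQL with its parameters, e.g. for psycopg2's cursor.execute."""
    return FIRST_ENTRY_SQL, {"psi": psi_id, "de": de_id}
```

Using parameter placeholders rather than string interpolation keeps the query safe to reuse in any support script.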
Abdul is asking, saying that one of the main challenges was to update the form, moving data elements for example from one stage to another, and we will get more into this sort of problem in the next part of the session, so I might save this for later. And Nirmal Dakal is asking about the largest known tracker implementation to date, in terms of tracked entities, visits and so forth. I think that Sri Lanka is one of the bigger ones that we mentioned yesterday, with 17 million TEIs, I think. In some ways the Bangladesh instance is bigger, but I don't have the numbers right here; maybe I can get help from some of my colleagues to dig up these numbers and reply to Nirmal in the chat. All right, thanks. So with that I will go into the next part here, and this is the development process versus the maintenance process. You know that development is something that, well, happens at the beginning, right? You start by designing your system, you make all your data elements, you have a sheet for your indicators, you build your dashboards, you make everything ready, you test it and you release it. The first time you do these three steps, you usually work without anyone using the system. Well, you might be under time pressure, but you would have your development guys working on this, and after testing, and after you're happy with it, you release it. The way you release it is that you open it to the public for the first time. And after you have started your instance for the first time, then the process of maintenance starts. I'm going to talk about the development and maintenance process. During maintenance you have to maybe make a change, like Nirmal is asking: after the users have started using the system, you will need to add a field, you will need to add a new program or change something, as Nirmal was saying, like moving one data element from one stage to another. This is also something you need to develop, test and release. And then, maybe just after you finish developing
the first thing, and when you have started testing it, there is something else that you need to start to develop, test and release. This part of the process is what I would call the change management and maintenance process. This can be new functionality, like we mentioned: new stages, new indicators. It can be bug fixes. Maybe your COVID surveillance guidelines changed; change is the only constant, Pamod told us yesterday, and that is so true for COVID. It has been a nightmare supporting some places, because the surveillance guidelines changed as we learned about the virus. When you do these kinds of changes, and this is a point for later, you have to think about retraining: do the users actually need to retrain now, after these changes? For COVID this has been very, very relevant. Maybe there are training materials we need to upgrade. Maybe there is data migration; in the case of Nirmal, that was one of the problems: you need to move some data from one stage to another. You can't just move the data element, you have to move the data as well. Oh, this is something you've all been waiting for: the word of the day. So make a note of this. "Android tracker" is the word of the day. You all got it. And with that I will hand the word to Prosper, to talk a little bit about testing, as you note down your word of the day. Prosper will unmute. Hello, Prosper. Hi Markus, I'm here. Yeah, thanks Markus, and thanks for that opening, laying out what challenges we would face and clearly outlining how we can get ahead of some of them. This particular session really looks at testing, both your configurations and your final product, before you do the release. It also serves the purpose of documentation: the way we've seen it, if you have a well-documented test plan, it should be able to inform your documentation of some of the changes that happen and some of the new features. And it also serves in
implementations where, you know, you are sharing the configurations, where multiple users are considering the configurations, so you all have to come to a common document that you can all refer to. So for configuration testing, we're really looking from the point of the metadata itself: what data elements you're going to have, what attributes you're going to have, what type they are, and to the point where we look at the program itself once it's been created: testing the different stages and the behavior in those stages, but most importantly also looking at the program rules, which are very key for our implementations, and the program indicators. We've used this particular use case in our support to the Ministry of Health and Wellness in Botswana, to build a tracker program for nutrition, tracking children from birth all the way to 18 years. This document has served very many purposes: one, to help us document what we are capturing and when it's being captured; secondly, to be able to use this document for training, because when we talk about sustainability, as Markus was sharing, you need to have a document that people can keep referring to for how the program is documented. So you will find this kind of document very key for even training the users who you are trying to get familiar with your program implementation. It's pretty flexible: you can add in more columns, you can add in more information that you need to capture, but what will be most important is to have, you know, documentation of the overall implementation. And for this, we're looking to see that you can document your attributes, your stages, and the program rules, as I indicated. So we do see that you could come up with a document that clearly specifies the data element or attribute, what type it is, and any special behavior of that particular attribute or
that element in terms of what we call others you can see like for example for age we are auto-cultuating it or the feeling and and so somebody who'll come in future to look at your documentation and and your program will know that the age is auto-cultivated we look at the logical considerations these are the skip projects that you most of you will have in your tools or the logic patterns for your for program flow you need to document them properly and and and so somebody who is using this document and testing your configurations is able to know what exactly you're trying to achieve we also could look at a program rule documentation and this is basically spelling out what a different program rule as a particular attribute or other element is supposed to do and a little bit put that logic and they also have a value the program variable because you know program rules use variables program variables so you can also specify these program variables again to help you document your work your your implementation so that if I'm looking for a different program rules and I'm going to change it I can look at this reference document and be able to to go to that program and I see how it's configured but most importantly this document will be used by your team to test the configuration so we have a status here some of you can even add a column that can you know exactly describe what the what the test outcome has been and then a status here was whether the team has come back and is able to to review and and clear that issue so for for this particular we're using comments it's a good dog shit that we're using comments but you can have another column that can specify a person who tested what they're able to do so and you can have color coding that we have down there to to quickly communicate what is happening on each of these of these tests so some of them you'll see that we have discussed it and it's it's putting the system and training testing some would be that you are free yeah 
Some of the items you will see we have discussed and they are in the system and being tested; some have been discussed with the team but have not yet been added to the system; for some you may need more clarification from the program implementers, the non-technical people who will tell you how the program rules should behave or how the data needs to be collected; and during the testing you will of course also find some things that need to be deleted, and you can highlight those. So this is a living document that can be used over time, both for testing and for documenting your configuration. The example down here is a team at the Ministry of Health and Wellness in Botswana doing some testing and giving feedback on this kind of document, and it has also been used for training, as I shared.

Next. So we have looked at testing your configuration, but towards the release of the implementation you also need to come up with a test plan, and this looks at testing the completed tracker or event program. At this point you are probably moving from the developers, or the digital implementers, to the end users, and it is always good to use people who do not have a lot of knowledge about the program rules and so on. In your end-user test manual or test plan you need to clearly explain the functionality that is supposed to be tested, give the testers expectations, and when the expectations are not met, allow them to document that. This is one of the manuals we developed for testing a tracker program for school learners in education in the Kingdom of Eswatini. You can see we have clearly stated the objectives of what is being tested and the requirements for the testing; if you go to test number six, you describe what they should test, what is required, and then the steps they have to take to test that particular functionality, and you give them the expected results, so that if they do not get those results they can document that they were not able to arrive at them. That helps you refine your configuration, or look at things like user settings and user roles.

Down in the corner here you have the test results and resolution, where for each test you allow the users to go over it, document the issue they found, and recommend what can be done; as the implementer you can also add your own recommendations, and then you have the action column. This again helps you monitor what has been resolved and whether you are meeting all the requirements. The test plan can always be drawn from the requirements for the system: if you have a requirement, it has to be tested, and if the expected output is not met you can go back and see how to resolve it. You can also keep these documents on Google Docs to share, so that you can all work together, since in most implementations we find ourselves working as a team; you need a centralized testing plan, both for end-user testing and for configuration testing.

Next, and last: in tracker implementations we also came to look at the implementation in terms of the devices to be used, so it is very important, even before the release or deployment, to test the gadgets you are going to use. In most cases, particularly for COVID where we have used tracker so much, we have relied on tablets as the main mode of data capture, plus some laptops and desktops. It is one thing for partners to come and start dumping tablets on you that cannot meet the requirements, so we advise that you also take time to test the different devices. This ranges from tablets to phones, in terms of size, capacity, and what accessories you can have on those devices. For Botswana, as you can see in the corner, we had the opportunity to test different devices, from four- or five-inch-screen phones to seven- and ten-inch tablets to a Chromebook; you can see the gentleman there trying out the Chromebook for data entry in the clinic. For the tablets and phones we tried adding keyboards to see if that worked well, and tested it with the different health workers. For this it is really important that you use the end users and take the devices to the environment of the clinic; don't just test from your town office and assume it is going to work. It is good to go out in the field, where you have no connectivity, where you have no space, where there are so many clients being served, and test the performance there too. So take time to test all the different devices, so that you can recommend the appropriate devices for your implementation.

So again, this summarizes what we wanted to share with you around testing. It will help you in terms of documentation, in terms of managing the different implementers who are supporting the configuration, managing updates both to the program and to DHIS2 itself, and also cleaning up your metadata, because if you have documented all your metadata you will see a lot of redundancy that you can clean up over time. Thank you very much, and I will turn it back to Marcus.

Thanks a lot, Prosper.
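As an aside, the configuration-testing document described here can be bootstrapped from the metadata itself. The sketch below is a minimal illustration, not a DHIS2 tool: it assumes a metadata export in the standard DHIS2 JSON format (top-level keys such as `trackedEntityAttributes`, `dataElements` and `programRules`), so verify those key names against your own export before relying on it.

```python
import csv
import json
import sys


def build_checklist(metadata):
    """Turn a DHIS2-style metadata export (a dict) into checklist rows,
    one per attribute, data element and program rule, with a status
    column for the testing team to fill in."""
    rows = []
    for attr in metadata.get("trackedEntityAttributes", []):
        rows.append({"item": attr["name"], "kind": "attribute",
                     "detail": attr.get("valueType", ""),
                     "status": "NOT TESTED", "comment": ""})
    for de in metadata.get("dataElements", []):
        rows.append({"item": de["name"], "kind": "data element",
                     "detail": de.get("valueType", ""),
                     "status": "NOT TESTED", "comment": ""})
    for rule in metadata.get("programRules", []):
        # For rules, document the condition (the logic to be verified).
        rows.append({"item": rule["name"], "kind": "program rule",
                     "detail": rule.get("condition", ""),
                     "status": "NOT TESTED", "comment": ""})
    return rows


if __name__ == "__main__":
    # Usage: python build_checklist.py metadata.json > checklist.csv
    with open(sys.argv[1]) as fh:
        exported = json.load(fh)
    writer = csv.DictWriter(
        sys.stdout,
        fieldnames=["item", "kind", "detail", "status", "comment"])
    writer.writeheader()
    writer.writerows(build_checklist(exported))
```

The resulting CSV can be uploaded to Google Docs as the shared, living testing document, with testers updating the status and comment columns as they go.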
There was a question in the chat while you were presenting about whether test plans will be available for the generic packages, and we will get back to that, Nirmal. But maybe we can ask whether the documents you just shared could be made available as a sort of reference for how to do it? "Oh yes, these documents are pretty open. I will pass them on to the team to share with the participants, the two documents, that is the configuration testing and the end-user testing." Then we might put them somewhere together with the presentation, so they are available to the audience at a later stage. Okay, thanks, Prosper.

So after the change has been developed and tested, you need to release it. Something to think about with extra emphasis when you are doing releases on an already existing, already running system is user training; it is so easy to forget. The users know how to use the system, and when you change it, the change might be so small that you don't need to retrain; you might just need to update the reference material a little. But for bigger changes you definitely need new training. You need to think about retraining, and this needs to happen before you even start releasing the change to the users; if it suddenly shows up in the user interface and people don't know how to interact with it, you will get confused users and bad data.

Communication up front is also something that is so easy to forget while we sit and develop. These are the kinds of questions you need to think about as a manager; it is your job to think ahead, about everything the developers are not thinking about while they develop the change for you. The users need to know about the deployments, especially if there is downtime, if the system is being taken down, but also because the system will change at some point and they need to know about the changes up front.

When all the surrounding things are okay, you can start thinking about the detailed deployment plan. I put the deployment plan inside the maintenance arrow again to signify that this is a release happening on an already running system, where data has already been collected and where users are using the system to do their jobs. And there is one additional plan you need to make, not only the release-day deployment plan: a rollback plan. The rollback plan will help you in case something goes wrong during the deployment. I put extra emphasis on this because it is so easy to forget, and it is something that all the solution-oriented technical people on your team will probably not think about until it is too late. It is your job as a manager to make sure the rollback plan is there and that it is developed together with the release plan. If you are the manager of this, this is your job.

I'll give you a free basic release deployment plan here. The very basic plan can be: one, close the production environment for all end users, block the IPs, make sure no one gets into the system; two, back up the database; three, follow any detailed steps, which can for example be updating metadata, updating the DHIS2 WAR file, or database data migration (for example, if you are moving a data element from one program stage to another, you probably also need to move the data, so you need to run a script to move it); and four, verify the changes, which is the next step, and this is crucially done before you reopen; you'll see why in a second.
Verifying the changes is something you do after everything is updated and before you let your users back in. In the production environment it is best that this is not a very big test; it is probably a very small test, just making sure that the changes you already tested somewhere else are okay. So as part of your release process you do a verification step while the system is down, and then you reopen it at the very end.

The reason to strictly block out your users during steps two, three and four is that this allows you a very basic rollback plan: you restore the old database that you backed up in step two, you restore the old version of DHIS2 and the code, and then, after a quick verification, you reopen the system to the users. The important thing is to have a rollback plan, and these are the two basic plans; it doesn't need to be much more than this, except for step three, where you would usually have more detailed steps that are very specific from case to case. That is the main focus of your technical team: making sure you have all the detailed steps for the upgrade that is needed.

Having such a rollback plan is extra important when there are database changes. Database changes come with major version upgrades, and sometimes with point releases as well. And of course, if you build your own data migration scripts, this rollback is extra important in case you made a mistake in your script. It is your script, it is not tested by anyone else, so it is your own responsibility that the script works, and you should make sure you have a rollback plan in case something goes wrong, in case your script deletes all the data values in the database, or moves all the data values into one stage and breaks everything.
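The release-and-rollback discipline described here can be sketched as a small runner. This is an illustrative sketch only; the step names and functions are hypothetical placeholders, and in a real deployment they would wrap things like your firewall rule, a database backup, the WAR deployment and your migration scripts.

```python
def run_release(steps, rollback, verify):
    """Run each (name, action) release step in order. If any step or the
    final verification fails, run the rollback steps and re-raise so the
    failure is never silently swallowed."""
    completed = []
    try:
        for name, action in steps:
            action()
            completed.append(name)
        # Crucially, verify while users are still blocked out.
        if not verify():
            raise RuntimeError("post-deployment verification failed")
    except Exception:
        # Basic rollback plan: e.g. restore the database backed up in
        # step two and redeploy the previous application version.
        for name, action in rollback:
            action()
        raise
    return completed  # only now is it safe to reopen to users
```

A dry run with logging placeholders shows the behaviour: if `verify` returns False, the rollback actions run and the error propagates; if everything passes, the completed step names come back and you can reopen the system.

```python
log = []
run_release(
    steps=[("close", lambda: log.append("close")),
           ("backup", lambda: log.append("backup")),
           ("deploy", lambda: log.append("deploy"))],
    rollback=[("restore-db", lambda: log.append("restore-db"))],
    verify=lambda: True,
)
```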
Just to re-emphasize why you want to block your users out while doing the upgrade: in your rollback plan we restore an old version of the database that was backed up earlier, and if you restore that old version while users were adding data, those users lose their data. That is why it is important to block the users. If you plan for downtime, if you have communicated the downtime up front, it is usually never a problem to have the system down for a while.

At the end here I am going to show you an example process that we followed in another project. That project was set up with three servers, which I will use in this example. The first time around, we developed everything on the production server, oops, and we tested on the production server and released onto the production server. But once production is running, we are in maintenance mode, and when working on changes we would not make them in production; we would work on the development server, to the left here. All program updates, data elements, moving a data element from one stage to another, that was first done on the development server, and it was system-tested there. When we were happy with the changes, we exported the metadata. In this case it was a simple setup, so we exported all the metadata except users and imported it into the test environment, to the right here. This import was done on an environment that was otherwise the same as production, so it had the same metadata as production, and we could test that the metadata import worked; and in this environment we could also spend some more time testing that our changes were okay.

So when there was a metadata update and we came to the day of the release, that was the day we put it into production, and the detailed release to production would mostly be to import the same metadata file that we had taken into test. This is sort of a minimum picture of what we recommend when you are working on a tracker implementation that is already running and in maintenance mode: if you are making a change, you need some setup like this, so you can develop in a different place. As you know, when you develop you will make mistakes, some big and some small, and you will test and fix and make sure the change is okay before you take it to the other environments. We had a very similar process for updating the DHIS2 code itself: we would deploy the new WAR file first into the development environment, test it there, look at how it affected the database, test that existing data was fine, and look at any new functionality; then we would deploy that same file to test, and, when we were ready, to production.

I am nearing the end here, and I am a few minutes over, so I will go very quickly through this; you can read through these recommendations, and we have touched on them all earlier in the session. Clean your system of old metadata. Keep the infrastructure updated, follow point releases, and consider the larger upgrade projects across major versions; the versions on this slide are from a while ago now, and you should at least stay on the maintained versions of DHIS2, which is 2.34 and up at the moment. Keep backups: routinely make sure there are backups, and take backups when you release, as we discussed. Make sure the system is designed and documented, and also design and document the changes you make. And do not develop on the production server.
If there was one thing I wanted to teach you today, it is that you should not develop directly on the production server if you are doing a tracker implementation of any scale, well, any scale bigger than just a handful of users. The last point is that you can consider splitting the system into separate instances. We see very different needs for configuration management in aggregate systems compared to tracker, for example; and especially if a tracker is used by people every day, so that the tracker being down is a practical problem for thousands of people, you should probably have it on a separate instance, with a very strict regime for changes and for access; that might not be as necessary in an aggregate system.

I am going to leave you with this checklist, to make sure that you as a manager have these documents and these plans in place. They will help you avoid the house decaying and make sure that the house stays nice. The first points are cross-cutting for all your maintenance; the last points are the ones you need for each change you want to make to the system, so those are your change management points. And with that I will hand it back to Kim. Thanks a lot for attending; please post your questions, and reach out on Slack and on the community, and we will stay in touch. Good luck with your tracker implementation management.