In this session we'll be talking about monitoring and maintaining DHIS2 implementations over time. Health information systems are complex systems that evolve and grow over time, and this requires ongoing maintenance. Information requirements change, and the scope of implementations often grows as new health programs are added. There are new users that need training and existing users that require retraining. And there is IT equipment and infrastructure that has to be maintained and replaced over time. These foundational maintenance activities are often not prioritized when new plans are made for health information system strengthening. There is often limited funding, and the responsibility for these maintenance tasks is often distributed across several units in the Ministry of Health, or delegated to the health management information system unit, which often has limited resources to prioritize this kind of maintenance work.

The learning objective of this session is to identify which areas of DHIS2 implementations require ongoing maintenance, and to understand why it's important to plan and budget for these maintenance activities to keep a well-functioning DHIS2 system in place. The outline of the session is to first look at maintenance activities related to capacity building. We'll then look at infrastructure and equipment, then metadata maintenance, as well as regular auditing of the system.

We'll start by looking at the capacity building component. There are several things that make it important to plan for ongoing capacity building activities to maintain a good implementation of DHIS2. First of all, there are new staff coming on board continuously within the health sector and among the users of DHIS2, and there are existing staff who require refresher training. Over time, the type of information that is included in the DHIS2 platform also changes.
There will be new dashboards, new reporting forms, new case-based programs, which means that knowledge of the system becomes outdated, requiring refresher trainings for all users. This is in particular the case for the core team maintaining DHIS2, as they also need to be able to follow up on new features, and configure and implement them to make them available across the board. Because of this, it's important that countries have a funded capacity building plan that covers training both of the end users at the national, sub-national, health facility and even community levels, as well as the core team of administrators of the system.

In cases where there is no continuous capacity building, the end result is often that the core team of system administrators designs and builds a system that does not work well, and that users at the sub-national and national levels are not able to use the information that the system provides. In other words, the absence of a continued capacity building plan can result in inexperienced, poorly trained system administrators who build a system that is not fit for purpose, and in end users, who are supposed to use the information to improve their work within the health sector, being unable to actually leverage the information that the platform provides.

For the core team of system administrators, the primary way of building capacity is through DHIS2 academies and through various collaborations within the HISP network. This includes workshops on specific thematic areas as well as specialized in-country training when required. For end users, there are a number of different modalities: dedicated training workshops, on-the-job supervision, and training and capacity building built into other routine activities such as review meetings. We also provide an online DHIS2 learning platform which is freely available to users within countries, and it's possible for countries to set up localized online courses to provide training.
Finally, there are various job aids that can be developed, for example videos, instruction manuals, etc. For end user training, the scale is often so big that it's necessary to have a cascading approach, where you train a team of trainers who can then go and do the final end user training. Capacity building is discussed in more detail in the capacity building session of the course.

Now we'll talk about infrastructure and equipment. DHIS2 is a web-based platform. This means that there are essentially two equipment and infrastructure components that need to be in place: there needs to be a central server where the platform itself is hosted, and end users need to have devices with internet connectivity. Server and hosting work is an ongoing task, with costs that increase over time as the system grows both in scope and in scale. For implementations that rely on physical hardware that is self-hosted, there needs to be a plan to replace this equipment over time. For those relying on cloud-based hosting, there is an ongoing, typically monthly, hosting cost that needs to be budgeted for. In addition to the actual physical servers, there is software that needs to be maintained and kept up to date, which in turn requires specialized staff who are trained to do this work.

In terms of end user devices, the basic requirement is that users have some form of computer, tablet or phone, and internet connectivity. This also has a running cost, both in terms of replacing these devices as they age or break down, and in terms of providing internet subscriptions or airtime. This is again a cost that often increases over time, as DHIS2 typically scales downward to reach lower levels of users, increasing the number of devices and internet subscriptions that are required.
So a country may start by targeting district level users, but as they are able to accommodate end users at the facility or even community levels, the number of devices needed will typically increase many fold.

We'll now have a look at metadata maintenance. Metadata, in the context of DHIS2, refers in essence to how the system is configured. The metadata in DHIS2 is the reporting objects, such as data sets, i.e. reporting forms; it's the health indicators and the way they are configured; it's the dashboards and visualizations that the end users see. All of this is what we refer to as metadata within DHIS2. This metadata changes and expands over time. When a new health program is included within the DHIS2 platform, when there are changes to the reporting forms or the required analytics, or when a WHO metadata package is introduced into the system, there will be changes to the metadata as a consequence. Also, as DHIS2 evolves, there are new software updates with new features, and the metadata often changes implicitly just by using a newer version of the system: the system itself will change the metadata to accommodate the new features.

Just to give an idea of the scale we're talking about in a national DHIS2 implementation, here are some examples from a real national DHIS2 system. In terms of data elements, the basic variables used for data collection, we may find 10-15 thousand; several thousand indicators used for analytics; dozens and dozens of what we call categories, which are disaggregations into, for example, age and sex groups; and over a thousand public visualizations, i.e. charts and maps. This means that in a system such as this, every user, as they try to find the relevant information that they need, will have to sift through thousands and thousands of variables.
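As a side note, the scale described above can be inspected directly from a metadata export. The sketch below is illustrative only: it assumes the general JSON shape of a DHIS2 metadata export (lists of objects keyed by type, as produced by the `/api/metadata.json` endpoint), and the tiny sample payload is entirely hypothetical — real exports contain thousands of objects per type.

```python
import json

# Hypothetical excerpt of a DHIS2-style metadata export
# (real exports from /api/metadata.json are far larger).
metadata_json = """
{
  "dataElements": [{"id": "de1", "name": "ANC 1st visit"},
                   {"id": "de2", "name": "ANC 2nd visit"}],
  "indicators": [{"id": "in1", "name": "ANC 1 coverage"}],
  "categories": [{"id": "ca1", "name": "Age and sex"}],
  "visualizations": [{"id": "vi1", "name": "ANC coverage by district"}]
}
"""

def metadata_counts(export: dict) -> dict:
    """Count objects per metadata type in a metadata export."""
    return {key: len(objs) for key, objs in export.items()
            if isinstance(objs, list)}

counts = metadata_counts(json.loads(metadata_json))
print(counts)
# {'dataElements': 2, 'indicators': 1, 'categories': 1, 'visualizations': 1}
```

Running a count like this periodically, and tracking how the totals grow between reviews, is one simple way to make the scale of the metadata visible to the core team.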
This is often the result of a system where the metadata is not well managed: where there are duplications, where the sharing settings within the system are not used so that users only see the relevant metadata, and so on. An important reason this happens is that this kind of metadata maintenance is often deprioritized by the core team, who have a lot of responsibilities and are typically pressured into working on new content and new features rather than maintaining what is already there.

As I touched upon, having poorly organized metadata in the system is problematic for several reasons. It makes it difficult for end users to find and use the information that is in the system. It makes it difficult for the system administrators to manage and introduce changes. It can also lead to data quality problems, typically in cases where there are duplications of metadata and different users may enter data into similar, but not the same, variables, for example. And it can lead to more fundamental technical problems that make it difficult to upgrade DHIS2 to new software versions.

To ensure good maintenance of metadata within DHIS2 over time, it's necessary to have a well-trained core team of DHIS2 administrators who manage the metadata. This team needs to have SOPs guiding how metadata changes are done over time. There need to be regular reviews and assessments of the metadata. And last but not least, the core team needs to have enough time and resources to actually prioritize working on maintenance on a weekly or monthly basis.

Finally, we'll talk about auditing of the system to strengthen maintenance over time. Assessments of DHIS2 and the health information system are often recommended before big planned activities: before the initial implementation of DHIS2, before DHIS2 is used for case-based surveillance for the first time, etc.
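The duplication problem described above is something a core team can check for programmatically during a metadata review. As a minimal sketch — the objects below are hypothetical, standing in for what a listing endpoint such as `/api/dataElements.json` would return — one common heuristic is to flag objects whose names differ only in case or whitespace:

```python
from collections import defaultdict

def find_duplicate_names(objects):
    """Group metadata objects whose names differ only in case or
    whitespace, a common symptom of duplicated metadata."""
    groups = defaultdict(list)
    for obj in objects:
        # normalize: lowercase and collapse runs of whitespace
        key = " ".join(obj["name"].lower().split())
        groups[key].append(obj["id"])
    return {name: ids for name, ids in groups.items() if len(ids) > 1}

# Hypothetical data elements for illustration
data_elements = [
    {"id": "de1", "name": "Malaria cases"},
    {"id": "de2", "name": "malaria  cases"},  # near-duplicate
    {"id": "de3", "name": "TB cases"},
]
print(find_duplicate_names(data_elements))
# {'malaria cases': ['de1', 'de2']}
```

A check like this only surfaces candidates for review; whether two similarly named data elements really are duplicates is a decision for the team that knows the reporting forms.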
But it's important to also think of assessments and audits as something that is useful to do on a routine basis, as part of the ongoing maintenance of the system. In the global DHIS2 team, we have worked on several different audit and assessment tools that DHIS2 implementers can use to assess their implementation and identify areas that need strengthening. We have what we call the maturity profile, which is an assessment that is meant to be done relatively quickly as a desk review, assessing the overall state of the DHIS2 implementation and identifying strong and weak areas to help guide planning of strengthening activities. We have a security audit tool that can be used for assessing the system against key security measures, such as whether there is proper encryption, backups, staff responsible for security, etc. And we have a metadata assessment, which assesses the configuration of DHIS2, identifying both areas where there are errors in the way the metadata is configured, as well as giving advice on areas where there is room for rationalizing the way the system is set up, for example if there are duplications of indicators.

Audits by independent government bodies, for example the Auditor General, can also be useful tools for strengthening the maintenance of the system, as they require the system to be well maintained and any changes to be documented. These routine assessments can be formal, as when done by the Auditor General, or informal, done by the team itself on their own implementation. It's also possible to have external HIS groups or consultants come in and do these assessments, to have some form of independence in the way they're done.

So to summarize: DHIS2 implementations, along with the overall health information system, evolve and change over time.
The scale is typically growing, the scope is growing, the content of the system is changing, and there are new functionality and new users. To have a DHIS2 implementation that is well functioning and sustainable over time, it's critical to have a plan and a budget for the key activities related to maintenance of the system. This includes capacity building, server and hosting, equipment and infrastructure, metadata maintenance, and having regular assessments and audits. These are areas that are often left out of plans and budgets for DHIS2 implementations and get deprioritized in favor of more exciting things, such as a new implementation of a case-based system or a new application. But overall, they're critical for a well-functioning DHIS2.