that we have a session on security and privacy. So the idea here is to give you an overview of some of the key things to consider in terms of security and privacy when implementing DHIS2 or other systems, and some practical approaches to making sure you end up with a system that is as secure as possible. I'll start with a general introduction. Health data is, of course, one of the types of data that can be the most sensitive when we think of privacy and confidentiality. And what we're seeing in many places is that there isn't really any good legislation in place that has taken into account that this health data is now digital. Most countries will have some sort of legislation around data privacy and confidentiality in general, but very often it is not suitable for digital information. And especially when we're talking about individual data, there can be accidental issues where data is disclosed or misused, but there can also be intentional attempts to get hold of this data. There are different things we can do, in terms of technology, policies, and established practices, to make these information systems as secure as possible. But it needs to be something you actually plan for from the beginning if you want a secure system that takes personal privacy into account. So there are some very general principles around data privacy. First of all, don't collect information you don't really need: don't collect more data than is necessary for whatever purpose you're collecting it. The data should only be handled by people who have actually been authorized and trained to handle it; they should know the limitations on what they can do with the data. And the data shouldn't be kept longer than is necessary.
And the people you're collecting data about in the digital system should be informed about this and actually consent to the data being collected. In general, we're talking about both aggregate and individual-level data. We often focus on the individual-level data, since that's where we have the most sensitive data with personal identifiers, et cetera, so there are of course higher requirements for securing that data than the aggregate data. But even aggregate data can be sensitive. It can include indicators that the ministry does not necessarily want everyone to have access to. So there should be policies in place that specify who should have access to what data, and when, even for the aggregate information. As I said initially, there are often gaps in the legislation around digital information. So when you're planning a DHIS2 implementation, it's key that you actually look at what legislation is in place that could be applicable to DHIS2 or other information systems. And if there is no updated legislation around digital health, it doesn't mean that you can do whatever you want. It means the opposite: then you really have to think through these security and privacy issues yourself, because there is no policy in place that guides what you're allowed to do. So then it's up to the ministry and the team implementing the system to come up with their own policies. And then there are things like the GDPR. I don't know if you're familiar with the GDPR; this EU-wide legislation around data privacy is of course not applicable to everyone, but it has a lot of concepts that can be useful when you're defining your own national policies. So that was the very general part about why this is important, and the legal aspects. Practically, when you're planning an implementation, we really recommend having a security management plan.
So having some sort of statement from the ministry saying this is the scope of this plan, it covers our information systems within the ministry, et cetera, and then appointing one person as security manager. I'll come back to this type of role later, but that's essentially someone, ideally not a technical person working on the system, but someone at the managerial level, who is responsible for the security of the information systems. And if there is a management group meeting regularly, security should be on the agenda as a standing item. The last point here is that end users of the system also need to have security awareness as part of what they're being taught. They shouldn't just be taught how to enter data; they should also be taught how to manage their devices, their usernames and passwords, et cetera. So security needs to be part of training curriculums as well. When you're planning an implementation, you should, like I said, look at what legislation is in place that could be applicable to the system you're implementing, and then see what your plan is to make sure that what you're implementing is actually within what is allowed by that legislation; in other words, have a plan that ensures you are actually compliant with the laws. If there is no legislation, that's not something the DHIS2 team can come up with just like that; the legal process can take several years. But what you could do from within the ministry is at least put together a group that works on policies specifically for the information system in the interim, while you're waiting for legislation to be in place. In any case, we recommend that you do what is called a data privacy impact assessment, especially if you're planning on implementing Tracker or case-based data. Part of the implementation should be to make this DPIA, as it's called, which describes the process of data collection and management for the implementation.
You should describe what information is collected and justify why it is necessary and proportional: is the risk, the privacy people give up by providing this information, proportional to the benefits to the health system of collecting it? This is something you should start early in the process. And when you change the system, you should also update this assessment to make sure it's in line with what you're currently doing, not just what you planned in the planning phase. Of course, DHIS2 can be set up in a good way, and it can be set up in a bad way; we're seeing examples of both. If it's not set up in a good way, that has implications not just for how the system functions, in terms of how you can do data analysis and how convenient it is to collect data; it also has consequences for security and privacy. You can think of it as several layers. One is the overall architecture and the way you set up your hosting and your server, which has an impact on security. If the server is not secured and you allow people to log into it, they could access the database, for example. Then it doesn't help that you have good policies for end users to change their passwords, and good policies for closing accounts of people who are leaving, et cetera. Then, of course, there is the access control within the system itself: making sure you have a good process so that new users only have access to the parts of the system they should have access to. We'll come back to this. And there is the system design: how you set up your data collection in an integrated way. That actually also has an impact on security, because if you structure your system correctly, you make sure you're not collecting the same data in multiple systems in parallel, for example. That reduces the amount of sensitive data you need to collect in different systems.
So if you have a plan for how you design your system, that reduces the amount of data being collected across all these different systems, and there is less data to worry about getting into the wrong hands. And this is something that needs to be ongoing and planned for in the long term, with some sort of process in place to review security regularly. Going a bit more into specific areas: I think we mentioned yesterday as well that very often security issues are based on people, on social and human elements. There was an assessment from a couple of years ago which estimated that 85% of all data breaches were due to some sort of human issue, not a technology issue. So it wasn't the server or the software having a bug that people could exploit; it was people not having good passwords, or getting an email and wrongly thinking they should share their credentials, et cetera. This is why having security awareness as part of end user training is very important, both for the end users and for the core team and at the higher levels. For those who want to learn more about specific topics here, there is a YouTube playlist that some of our security colleagues have put together, which covers best practices around passwords, et cetera. At the organizational level, we do think, like I said, that you need to identify someone at the managerial level who is the security manager or security officer, responsible for security in the organization. In addition, a security engineer who is more technically involved in reviewing and advising on how to set up the system in a secure way. So one is more the oversight role and the data owner; the security engineer advises the core team on security best practices, monitors for incidents, monitors the server, et cetera. So there are two levels.
And it's important that the security manager is not just someone in the core DHIS2 team that you tell: you are the security manager, and if someone gets our data, it's your responsibility. It needs to be someone at a higher level who has that role. Otherwise, what we see is that they can identify all these issues and things to be addressed, but if they're at the lowest level of the hierarchy within the ministry, it's very difficult to actually get this taken seriously and get resources put in to address the issues. No matter how good your processes and your configuration are, there could always be security incidents. So you should always have an incident response plan. If something happens, whether it's someone hacking into the server or an administrator who has accidentally shared their password with someone, there should be a plan: what do we do now, what are the steps we go through? And this is something people should be aware of, so that when there is an issue, they know who to report to, and the person responsible can go through the steps in the incident response plan. In the presentations, you will find a couple of examples of what these plans can look like. In terms of the technology, there are two obvious things when we talk about security. The first one is authentication. Authentication is how you prove that you are who you claim to be. That's the password you use to prove that I'm the one who owns this user account, for example. Or if you have integrations where some scripts or tools need to access the system, they also need some sort of authentication mechanism, whether it's a username and password or the access tokens that we've started recommending now. Since authentication is often about passwords, that's where you see, like I mentioned earlier, the kind of phishing attacks by email.
People trying to get you to put your username and password into fake websites so that they can use them, or simply guessing passwords through brute force attacks. And again, it comes back to training. You need to make sure that the people with access to the system know that if they see an email asking them for a username and password, they should delete the email, not give out the password. So authentication is about proving who you are; authorization is controlling what you actually have access to once you've been authenticated. There should be policies in place saying what kind of user should have access to what within the system, at all the different levels. At the national level, a department manager should have a certain access: they should have access to the dashboards and analysis tools, but probably not to the individual records, for example. So at each level, you need to specify what the roles within the health system are and what access each of them should have. While you need this, it's important to keep in mind that the more complicated you make it, the more likely it is that someone will make mistakes and give the wrong person the wrong access, because they need to choose between 20 different user roles and 20 different user groups, et cetera. A common thing we see in many places is that there might be routines for adding new people when they join the health service or when the system is rolled out to new health facilities, but a process for actually disabling their accounts when they leave is often lacking. That's also important: when people stop working in a health facility, they should no longer have access to the data in that health facility. Specifically on access control: in DHIS2, it's controlled through three things. Every user has one or more user roles. The user roles give you access to functionality, to performing actions within the system.
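As an aside, both halves, authentication (here with one of the personal access tokens mentioned above, rather than a password) and authorization (what the authenticated account is allowed to do), show up when a script talks to the DHIS2 Web API. This is a minimal sketch, not a definitive recipe: the server URL and token are placeholders, and the field names assume the `/api/me` resource of the Web API.

```python
# Sketch: a script authenticating to the DHIS2 Web API with a personal
# access token (PAT), then inspecting what the account is authorized to do.
# BASE_URL and TOKEN are placeholders -- never hard-code real credentials.
import json
import urllib.request

BASE_URL = "https://dhis2.example.org"    # hypothetical instance
TOKEN = "d2pat_XXXXXXXXXXXXXXXXXXXXXXXX"  # placeholder token

def build_request(path: str) -> urllib.request.Request:
    """Authentication: prove who the script is via the ApiToken header."""
    return urllib.request.Request(
        BASE_URL + path,
        headers={"Authorization": f"ApiToken {TOKEN}"},
    )

def summarize_access(me: dict) -> str:
    """Authorization: summarize what the account can do, and where."""
    authorities = me.get("authorities", [])
    org_units = [ou["name"] for ou in me.get("organisationUnits", [])]
    return f"{len(authorities)} authorities, assigned to: {', '.join(org_units)}"

# Against a live server, the script would then do something like:
#   with urllib.request.urlopen(build_request(
#           "/api/me?fields=authorities,organisationUnits[name]")) as resp:
#       print(summarize_access(json.load(resp)))
```

Using a token rather than a shared password means the integration's access can be revoked independently of any person's account, which matters when staff leave.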
So an example of what goes into a user role can be that this person is allowed to add data values, or enroll new patients, or delete health facilities from the organisation unit hierarchy. The second level is data access. This controls access to specific parts of the configuration: specific reporting forms, specific tracker programs for collecting individual-level data, specific dashboards, indicators, et cetera. So if I'm a user and I need access to report monthly immunization data, I need a user role that gives me access to the data entry application and the authority to add data values to the system; in addition, I need access to the immunization reporting form. Both need to be there for me to be able to add data. The third thing is that users are assigned to organisation units, and that also limits what they have access to when it comes to data and individuals. If I'm working in one health facility, I will not be able to add or edit data in another health facility that I'm not assigned to. That's how we control the geographical access of the users. One of the biggest risks that we see in terms of security is that there are often poor routines for doing backups of the system. There are tools and recommendations on how this should be done, but it's not always implemented. I think we talked about it yesterday: you need to have not just a backup, but a backup that is somewhere other than your main server. You need off-site backups. They need to be automated; you can't rely on someone remembering, oh, it's Saturday, I need to take a backup. It needs to happen by itself. And you need to check that the backups are actually working. So don't just confirm that there is a file going to the off-site backup server; check that those backups can actually be used.
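The three backup requirements above, automated, off-site, and verified, can be sketched as a small job. This assumes a common setup where DHIS2 stores its data in a PostgreSQL database named "dhis2" and an off-site host is reachable over SSH; the database name, host, and paths are all illustrative.

```python
# Sketch of an automated DHIS2 backup job: dump, copy off-site, verify.
# All names, hosts, and paths are illustrative placeholders.
import datetime

DB_NAME = "dhis2"                      # assumed database name
OFFSITE = "backup@backup.example.org"  # hypothetical off-site host

def build_backup_commands(today: datetime.date) -> list[list[str]]:
    """Compose the three steps: dump, copy off-site, verify the dump."""
    dump_file = f"/var/backups/dhis2-{today.isoformat()}.dump"
    return [
        # 1. Dump the database in PostgreSQL's compressed custom format.
        ["pg_dump", "--format=custom", "--file", dump_file, DB_NAME],
        # 2. Copy the dump off-site: never keep the only copy on the server.
        ["scp", dump_file, f"{OFFSITE}:backups/"],
        # 3. Check the dump is readable: pg_restore --list parses the
        #    archive's table of contents without touching any database.
        ["pg_restore", "--list", dump_file],
    ]

# Run nightly from cron or a systemd timer, e.g.:
#   import subprocess
#   for cmd in build_backup_commands(datetime.date.today()):
#       subprocess.run(cmd, check=True)  # fail loudly so failures are noticed
```

Note that `pg_restore --list` only confirms the file is a readable archive; the stronger check, as the speaker says, is periodically restoring the dump into a scratch database and confirming the system actually comes up from it.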
So we have examples where the whole server room where DHIS2 was hosted caught fire, for example. The server just burned up, and if they hadn't had off-site backups, they wouldn't have any of their data anymore. We also have examples where people thought they had backups, but the backup files didn't actually work because they had never been tested: they were saving these files, but they were corrupted and couldn't be used to restore the system. Very quickly as well, something that can be a bit painful for the team managing the system, but is really recommended: first of all, monitor your own security practices, for example through self-assessments, but also have some sort of external audit from time to time, where someone else comes in, looks at the system, and sees from the outside whether it is secure. One thing we have discussed specifically for DHIS2 is to have something more like a peer review, because we have the HISP network in many countries supporting countries; we could collaborate there on reviewing how the different groups, regions, and countries are doing things, to see if there are things we can learn from each other. So, like I said, have some sort of auditing, but also make sure of the internal side: not just having someone come from the outside to audit, but having the system set up to provide audit logs. So if someone deletes something, you are actually able to figure out who that was and why, and take appropriate actions. Similarly, logging on the server if there are attempts to break in, and other incidents on the back end. Like I said, we have a security team as part of the global DHIS2 team, so they are of course responsible on the global side.
If anyone identifies some sort of security issue with the DHIS2 platform itself, you can contact them and they will help figure out how it can be addressed. We also have a process whereby, if vulnerabilities or security issues are identified, the countries using DHIS2 get notified before anything is published, so that you actually have time to patch the software, et cetera, before those things are published on the website and available for everyone. The key there, of course, is that we don't want to announce to the world that there is an issue before countries have had time to address it. This is what our security team sees as the common issues related to security in DHIS2. Backups I already mentioned: actually having off-site backups that you test is very important, and not always in place. Not having audit logs and security logs for the system. Not having proper password policies for users: you can specify, as I'm sure you're used to from other systems, that passwords need capital letters, need to be a certain length, et cetera, but these functions are not always used in DHIS2. Shared user accounts: saying, okay, everyone working in this health facility can use the same username and password. That's an issue and is not recommended, because then you're not able to see who has been doing what in the system and, for accountability, trace back what happened and why. This is also linked, for example, to Tracker, the case-based system, where you should be able to see which individual has actually accessed the record of a particular person. If you're using shared accounts, that's not possible; you just see that someone from that health facility accessed the record. Installing security updates is another thing that is not always happening as fast as it should.
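The password rules just mentioned, minimum length, capital letters, and so on, are the kind of checks DHIS2 can enforce through its settings. As a sketch of what such a policy amounts to in code (the thresholds below are illustrative, not DHIS2's exact defaults):

```python
# Sketch of a password policy check. The specific rules and thresholds
# are illustrative examples of what a national policy might require.
import re

def password_problems(password: str, min_length: int = 8) -> list[str]:
    """Return a list of policy violations; an empty list means acceptable."""
    problems = []
    if len(password) < min_length:
        problems.append(f"shorter than {min_length} characters")
    if not re.search(r"[A-Z]", password):
        problems.append("no uppercase letter")
    if not re.search(r"[a-z]", password):
        problems.append("no lowercase letter")
    if not re.search(r"[0-9]", password):
        problems.append("no digit")
    return problems

print(password_problems("secret"))          # fails several rules
print(password_problems("Str0ngPassw0rd"))  # → []
```

The point is not the exact rules but that the system enforces them automatically, rather than relying on training alone.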
So for DHIS2, we'll talk about software development and releases tomorrow, but we have major releases one or two times a year. Then we have smaller releases with minor functionality, and then we have the patch releases, which address small issues, often security related. Those should be installed immediately. For the big releases, you really need a process for testing and making sure everything works, because there could be changed functionality; the user interface could be slightly different, so you need to inform users. But the security patch releases should ideally be applied right away. Then there is monitoring of security events. And, like I said many times, include security awareness as part of the training of users at all levels. That's the summary of common issues. Again, some of the same things here, so I won't go through every item, but we also have a security checklist that the security team has developed. That's something I encourage you to look at; you have the slides, and it's also on our documentation site. Just think through whether you're actually doing these different things. Are you actually budgeting for security? Do you have security as part of your training curriculum, et cetera?