Yeah, okay. Okay, the meeting is being recorded. This session is planned for those who are interested in security. The previous one was a bit messy because of the number of people. Yeah, I know. It is, yeah. Okay, but it's recorded here on this microphone as well. Yeah, okay, good. We'll see; at least we did a sound check and it was working.

That one was a general session, and this one is an opportunity not only to stick to the agenda and the slides that I have. What we typically do in the security sessions is try to make them as practically useful as possible and discuss your real-life cases as well: questions and topics to raise that are not on the list. I have quite a lot of slides, but there is no obligation to go through all of them, and we can stop at any time. Having a really small audience gives us the flexibility to discuss or dive into the topics that are actually interesting. To set the scope: it's everything related to cybersecurity, everything related to privacy, and other compliance efforts, standards, certification, and so on. Maybe a bit of risk management too, although that's a bit artificial.

This is the plan. I have another document for this session where you can also put comments and answers, and once you open it, you will also see the link to the previous one. I have also updated some answers to the questions I had. There were some topics we didn't cover that carried over to this session; it may mean switching between two files, but they are kept separate. And we'll stop at some point and take a break.

To give a bit of context: we'll talk about the HS2 security features first, just to recap what is in place and what you can use. I'm pretty sure most of you are aware of that. This is the first part. The second part is more technical.
It covers the most important steps for implementers to take on security, a kind of high-level checklist of what to do once you have an existing system or plan a new installation. And the third part is even more technical: typical security misconfigurations. We can go through them, or we can run a Q&A session around these topics as well. That's it.

So we'll start. Although many of these things are now listed on the website, I will put some accents on what is not written there, because it is more implementation specific.

Okay, we start with access control. This is key functionality, and it feels natural to everyone: we have users, we have admins, and we would like to restrict access to certain features. We can have quite a complicated organisational structure, and we can have different data regions, different data types, metadata, and other things that require quite fine-grained access control. The functionality is in place, but as soon as you start configuring it, one of the biggest challenges is that the more detailed access control you'd like to have, the more difficult it is to maintain. As your system grows, users come and leave, and you assign different groups, roles, and permissions. At some point, especially after two, three, four years, it becomes quite hard to maintain this setup. Improving that is an in-product feature question on one side; on the other side, it should be possible to make the administration easier. So the recommendation outside of the system is to have an access control policy: not a public policy for everyone, but a set of internal rules on how you grant permissions, how you assign permissions to different categories of users.
For example, you can say that once we create a new responsibility, or once we create a new type of activity, we always create a new role for it, to ensure that one type of function within the organization always has a corresponding role in the system. Or: when a person gets a new title, there is always exactly one group of permissions for that role. Or: if the title changes, through a promotion or someone moving from one department to another, we remove all the previous roles and assign the new ones. These are basic rules, and they may seem pretty straightforward, but once the system grows, they help to keep the setup tidy and clean. As long as you don't have that, it becomes quite hard to maintain. On our side, we are thinking about making a web app to validate permissions internally and give advice on what is assigned incorrectly. I have a very, very early draft of the application, and it still requires two or maybe three months to develop the rest and share it for test use. But the immediate recommendation is: keep things tidy.

Right, the second one, which everyone knows, is LDAP authentication. It's also core functionality. We support most of the existing LDAP-compatible directories; you all know them: Active Directory, Azure, and OpenLDAP. Many people think of LDAP as really good; it was invented 35 or 40 years ago, and it is a de facto standard. But we also need to understand what an LDAP connection between the HS2 server and the LDAP server means: the user enters the password into the HS2, then this password is sent to the LDAP server and validated there. As long as we trust all the components, that is okay, but in any case the password in clear form passes through the HS2 instance.
For new setups, we recommend using OAuth authentication or any other method that doesn't require submitting a password to the HS2, some kind of external authenticator. Ten years ago, LDAP was perfect. Nowadays it is still a standard, still the default solution, but where you can use external single sign-on or similar authentication, it's better to use it, and to plan new systems so that they don't rely on LDAP directly. For us nowadays it's a backend service, simply because the LDAP implementation processes passwords in clear form, and if anything goes wrong with the HS2 server, it can expose the passwords for the backend accounts.

The next famous thing is two-factor authentication, which is standard everywhere, and everyone loves or hates it depending on their personal experience. Many of you use Google Authenticator, but any compatible app works: if your organization uses Microsoft Authenticator or any third-party app, you are not obliged to use the Google one. For new devices, you can use any app where you can scan a QR code, and it should be compatible across all kinds of platforms and phones. Not all of you know that there are also multiple open-source authenticator applications for feature phones. If you use legacy feature devices, there are apps that can generate two-factor codes on them; the only requirement is Java or J2ME support, and then you can scan the code or enter the code from the legacy device as well. You just install this kind of app the same way as games. One of them is called FreeOTP; there are several others, and if you need more details we can discuss it further.

We've also had several requests demanding an email-based second factor, and we are deferring on this because it's a bit problematic: email delivery is not always reliable. Do you know what is the most important thing for two-factor authentication to work?
Do you know what is the most important thing for two-factor authentication to work in general, and what breaks two-factor if it is not in place? It's time synchronization. Both your phone and the server should have the same time, because they rely on the same time-based code generator. If the server thinks it is one time and your device has a different time, then the code, which is valid for 30 or 60 seconds, expires on the server's side before you can use it, because the two sides don't agree on time. Many people complain that two-factor doesn't work, and often it's just because the server doesn't have accurate time, or the client device's clock is not synced for whatever reason. So this is the first thing to check, and it leads to one of the general security recommendations: before running any system, ensure that you have a reliable time source and that all your servers have synchronized time, for authentication purposes, for correct cryptographic operations, and for ensuring that your log records always carry accurate timestamps.

[Question from the audience, partly inaudible:] For two-factor authentication, you also support the Microsoft authenticator, but there was a request from the government to support SMS-based codes. Can SMS-based OTP be added?

We got this request. I mentioned that we have an email-based authentication request; SMS is not there yet, but it is in the backlog. I'm not certain about the timeline; we are going to discuss the security roadmap next week.
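To make the time-synchronization point concrete, here is a minimal sketch of the RFC 6238 TOTP algorithm that these authenticator apps implement. The secret and timestamps are made up for illustration; the point is that server and client compute the code from the current time, so a drifted clock produces a different code.

```python
import hmac
import hashlib
import struct

def totp(secret: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the time-step counter."""
    counter = unix_time // step                      # both sides must agree on this
    msg = struct.pack(">Q", counter)
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

secret = b"supersecretkey!!"        # hypothetical shared secret
server_time = 1_700_000_000        # server clock
client_time = server_time + 90     # client clock drifted by 90 seconds

print(totp(secret, server_time))   # code the server expects
print(totp(secret, client_time))   # code the drifted client shows
# With 90 s of drift the two sides are three 30 s steps apart, so the codes
# will almost certainly differ and the login fails despite a correct secret.
```

Real servers usually accept one step of skew either way, which is why a few seconds of drift are tolerated but minutes are not.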
Yeah, could be. But thanks for the reminder. I've shared the link; I'll scroll back, there is one more file with a QR code. If you can put it as a reminder there: I'll remember it myself, but if we can promote this feature a bit and write it down in the comments, that would be great.

Right, okay, single sign-on. I mentioned earlier that it is the preferred way to do authentication if you have supporting infrastructure. The main idea of single sign-on is that you don't transfer the password between any parties: only the authentication server receives the password, in contrast to LDAP, and it is neither stored in nor transferred to any kind of third-party system. Also, with a proper setup, users log in once and keep the session running for all the apps that authenticate through the same system. We know of many organizations using Okta or Keycloak, as examples of commercial and open-source solutions, and they say it improved their experience a lot: people don't need to remember multiple passwords, they don't need to log in that often, and it is easier to block users in one place rather than going through all the systems and checking each one. So it's heavily recommended. If you don't have it in your organization, you can, for example, kick off a single sign-on implementation using one of the available solutions and encourage other system owners to use it as well.

A relatively new thing is personal access tokens. If you would like to do some automation, they are a replacement for passwords: you can use tokens to create automated integrations and access different data based on your permissions, but without the need to hard-code passwords anywhere.
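A minimal sketch of what this looks like in practice. The endpoint URL and the `ApiToken` header scheme below are assumptions for illustration, not confirmed defaults, so check your instance's API documentation for the exact form; the point is that the token comes from the environment, not from a password hard-coded in the script.

```python
import os
import urllib.request

# Hypothetical HS2-style API base URL; replace with your installation's.
BASE_URL = "https://hs2.example.org/api"

# Token is read from the environment, never written into the script itself.
token = os.environ.get("HS2_API_TOKEN", "dummy-token-for-illustration")

req = urllib.request.Request(
    f"{BASE_URL}/me",
    headers={"Authorization": f"ApiToken {token}"},  # assumed header scheme
)
print(req.full_url)
print(req.get_header("Authorization"))
```

An integration job would then call `urllib.request.urlopen(req)`; rotating the token means changing one environment variable, not editing code.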
This is still quite a new feature, and it still requires extra security reviews and considerations, because it theoretically allows logging into the system in another, non-trivial, somewhat hidden way. But it is still quite useful, and it simplifies the day-to-day integration experience.

User impersonation: this is a new thing, also made by request. It allows you to run reports or other functionality as a different user. There are multiple ways it may be used, but it is essentially acting as another user, like su in Linux. This is quite a new addition.

Yes, I'll say it again. If you would like to run specifically configured reports in batch mode, or, for example, when some user is on vacation and you don't want to share credentials, you can allow someone else to run a report on behalf of that user. If there are settings specific to a user, you can connect to his or her working environment and run the reports specific to that user. Of course, as an admin you can perform certain things anyway, but with impersonation you pick up all the environment settings and setups of the user. It is restricted to the admin level only; the idea was to bring in the user's whole working environment and setup.

Auditing and logging: also core functionality, the user activity log and the system event log. We know that not many of you use a custom logging format for system events, but it is possible to configure different types of event logging for the system, meaning the HS2 and Tomcat application itself, not the user activity, audit, or change logs, which are separate. With the settings available, you can edit the log4j configuration file and introduce extra logging parameters.
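As an illustration of the kind of change meant here, a log4j2-style properties fragment adding a dedicated rolling file for security-relevant events. The appender name, file paths, and logger package are placeholders, not shipped defaults; check your installation's actual log4j configuration before copying anything.

```properties
# Illustrative log4j2 fragment; names and paths are examples, not defaults.
appender.security.type = RollingFile
appender.security.name = SecurityLog
appender.security.fileName = /var/log/hs2/security.log
appender.security.filePattern = /var/log/hs2/security-%d{yyyy-MM-dd}.log.gz
appender.security.layout.type = PatternLayout
appender.security.layout.pattern = %d{ISO8601} %-5p [%t] %c: %m%n
appender.security.policies.type = Policies
appender.security.policies.time.type = TimeBasedTriggeringPolicy

# Route a security-related package (placeholder name) to the new file.
logger.security.name = org.example.security
logger.security.level = info
logger.security.appenderRef.security.ref = SecurityLog
```

Keeping security events in their own rotated file makes the "can I trace this action back?" check discussed later much quicker.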
[Question from the audience, paraphrased:] One thing to consider about logging: we had an incident involving sensitive data, for example content exposed publicly, and we had to track down the few users who were actually involved. The problem is that the application log doesn't keep the client address; we could only find it in the Nginx log. Nginx can track the IPs, but once requests pass through to the application, the log has no way to show the specific device or IP. If there were a way to track the specific device or IP, we could pinpoint the specific person responsible.

Yeah, typically... we will probably discuss this a bit later, but since it has been asked, now is probably the right time. If you look at the whole HS2 setup, you have a Tomcat web server that runs a WAR file, the application, in the container; you have a database; and you can put Nginx or Apache or any other reverse proxy in front. When a user connects to the HS2 server, they connect through the web proxy. The web proxy knows the source IP of the user: Nginx takes the request from the user and forwards it to Tomcat. In the default setup, Tomcat sees all requests as coming from Nginx, so it can't distinguish one user from another. From the HS2's, or Tomcat's, perspective, all users come from the IP address of the Nginx, which may even be on the same machine, and that makes distinguishing users very difficult. There are different ways to approach this. On one side, there could be an update to the application to log some data per user, like a unique session ID. The other is a setting on Nginx that passes the client IP address through to the backend server: there are special headers that can be configured to be sent to the backend, so the real IP of a user appears in the HS2 system logs.
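For reference, here is the standard way to pass the client IP through the proxy. This is a sketch assuming Nginx in front of Tomcat on the same host; the exact blocks depend on your virtual host layout.

```nginx
# Nginx: in the server/location block that proxies to Tomcat.
location / {
    proxy_pass         http://127.0.0.1:8080;
    proxy_set_header   Host              $host;
    proxy_set_header   X-Real-IP         $remote_addr;
    proxy_set_header   X-Forwarded-For   $proxy_add_x_forwarded_for;
    proxy_set_header   X-Forwarded-Proto $scheme;
}
```

```xml
<!-- Tomcat server.xml, inside the <Host> element: trust the proxy's
     X-Forwarded-For header so access logs and the application see
     the real client IP instead of 127.0.0.1. -->
<Valve className="org.apache.catalina.valves.RemoteIpValve"
       remoteIpHeader="X-Forwarded-For"
       internalProxies="127\.0\.0\.1"
       protocolHeader="X-Forwarded-Proto" />
```

Without the Tomcat-side valve, the header is sent but ignored, which is the usual reason this setup "doesn't work" on first try.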
If you need guidance on how to configure that, we can do it in the break, or I can write it down after the session in the file. It is pretty standard, but it is not enabled by default, and depending on the host or virtual host configuration there may be some differences in how to do it. It's possible, and I think that to some extent it is a quick and reliable fix.

We always talk about backups, and the idea is that you should back up not only the database but also the system configuration, because in the case of data loss it's important to restore not only the data but also the same system configuration: back up the HS2 config, the operating system environment, the Postgres settings, and so on. Now we have a repository of HS2 tools that can help with automated setup; for the system configuration it in fact replaces a backup, because you can always reinstall the system from that setup. The second part is scripts to perform a regular backup of the database and its restoration. One important thing: once you make a backup, it's always a good idea to verify that you can actually restore from it. There have been cases where the backup was not usable, and going through this procedure in emergency mode is very, very painful: you are under stress, you never know whether it will succeed, you are short of time, and your users are waiting. So it is recommended to do a test restore, to try how it works on a test system and see if you can actually restore from the backups you have. It is not guaranteed from a procedural, operational point of view, so it's really worth testing.
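The backup-and-test-restore routine can be sketched like this, assuming a Postgres database behind the system. The database name, paths, and tool flags are placeholders for illustration; the sketch only builds the commands, and a real cron job would execute them with `subprocess.run(cmd, check=True)`.

```python
import datetime
import shlex

def build_backup_command(db: str, out_dir: str, when: datetime.date) -> list[str]:
    """Build a pg_dump command producing a timestamped custom-format dump."""
    dump_file = f"{out_dir}/{db}-{when.isoformat()}.dump"
    return ["pg_dump", "--format=custom", "--file", dump_file, db]

def build_restore_command(dump_file: str, test_db: str) -> list[str]:
    """Restore the dump into a scratch database to prove the backup is usable."""
    return ["pg_restore", "--clean", "--dbname", test_db, dump_file]

cmd = build_backup_command("hs2", "/var/backups/hs2", datetime.date(2024, 5, 1))
print(shlex.join(cmd))
# Periodically run the restore command against a throwaway database:
# a backup you have never restored is, operationally, not a backup.
```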
The next thing that we started using last year and expanded this year is virtual patching. We briefly discussed it earlier today: if you have an HS2 server running behind a reverse proxy and there is a well-known or newly discovered security vulnerability, you can partially, and in some cases fully, mitigate it by installing a special patch into the Nginx or Apache configuration. If you just run a plain Tomcat, we can handle these patches on the Tomcat level itself, adding configuration snippets to the container. So you can have a patch deployed immediately while an upgrade is being prepared, closing the hole before you move to the new version. It is very vulnerability-dependent: some vulnerabilities can't be fixed by virtual patching, but in many cases it's very useful, because it helps to restrict the insecure functionality, eliminate the problem, or introduce a temporary fix before there is a long-term solution.

Any other questions so far? Yes, please. [Question:] Can we rate-limit logins, or limit the number of logins? Sometimes there is continuous password guessing over many attempts. Can we limit the number of login attempts, or restrict access by IP address for a specific region? Not within the application itself, but by using either a reverse proxy or Tomcat functionality.
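As a sketch of the reverse-proxy approach just mentioned, the fragment below shows both ideas in Nginx: a virtual patch blocking a hypothetical vulnerable endpoint, and rate limiting on a login path. The endpoint paths, zone name, and rates are made-up examples, not defaults from any real advisory.

```nginx
# Virtual patch (hypothetical): block a vulnerable API path until the
# proper upgrade is deployed.
location ~* ^/api/vulnerableEndpoint {
    return 403;
}

# Rate limiting: allow each client IP 5 login attempts per minute.
# limit_req_zone belongs in the http{} block; limit_req in the location.
limit_req_zone $binary_remote_addr zone=login:10m rate=5r/m;

location /api/auth/login {
    limit_req zone=login burst=5 nodelay;
    proxy_pass http://127.0.0.1:8080;
}
```

Note the caveat that comes up later in the discussion: many users behind one NAT address share the same `$binary_remote_addr` bucket, so test the limits carefully before enforcing them.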
Rate limiting is a relatively expensive operation, and your Tomcat is generally busy processing legitimate user requests. If it also has to handle a denial-of-service attack or brute forcing, it will be distracted and will consume more and more resources pushing back on the malicious requests. It is still possible to do it on the Tomcat level: there are filters and rate-limiting valves available in Tomcat starting from version 8, so if you're on Tomcat 8 or 9, you have the option to configure it internally. That's one approach. But the recommendation is not to do it on the Tomcat level, but to employ a reverse proxy for this purpose, with Nginx or Apache in front doing it for you. Standard rate limiting in Nginx works pretty well, and request throttling works pretty well too. If you use Lua scripts and the additional functionality in Nginx, you can query a geolocation database and restrict responses per region, checking the region or country for each request and blocking users based on IP address detection. So this is possible. Lua scripting is a bit more involved, but if you try ChatGPT, it can give you basic recommendations on how to configure at least Nginx for this purpose. The idea is the same: you pre-process the request and reject it if the user doesn't match your criteria or exceeds the limits. At the same time, it needs a heads-up and a lot of testing, because if you have a lot of users sitting behind one IP, behind network address translation, it may cut them all off. So you either need different buckets, or you need to be very, very careful. [Comment from the audience:] In our case we have a similar issue, so actually we do it on the firewall:
we use a Cisco firewall and configure it so that, for example, if too many requests come from one source, that source is blocked, say until the next day, and then we check whether it should stay blocked.

Yeah, that's a good approach for blocking abusive sources. There is a related issue: if you, for example, enter an incorrect password multiple times, the account gets suspended for some time. It is well-known behavior, and it is still implemented in all Microsoft systems: for Windows and Active Directory, typically, if you have 10 unsuccessful attempts, your account is blocked for one hour. From my perspective it's an old-school, let's say legacy, even bad approach, because it makes it easy to lock out legitimate users. If you have either a malicious actor who knows the usernames, or a misbehaving application, you can easily cut off your legitimate users before they can actually do anything, and that may be a bigger threat than the brute forcing itself.

So, we continue, and the next thing to discuss is a security checklist that, from my perspective, should start every implementation. Many of you have heard about the OWASP Top 10 for secure application development; this is similar, ten important things for HS2 implementers to know about security, and we will go through them one by one in a bit more detail.

The first thing is not very technical: it's about defining roles and responsibilities, to ensure that you know who is doing what, and that you have clearly defined who is responsible for security, for incident follow-up, for communication, and for other critical things. After an incident has happened, or while you are under attack, it can be hard to find out who is actually doing what. The main purpose of preparing in advance is, first, to ensure that people have roles assigned, and second, that you can train, inform, and educate people on
what to do in case of an incident. There are two sides here: it also ensures that the important things people should do in their teams are not overlooked, and that all security-related roles and responsibilities are clearly assigned. It can be as simple as one person responsible for everything security-related, or more elaborate: someone for privacy, someone for technical security, someone for crisis communications, and so on. It depends on the use case; if you're interested, we can dive in and discuss the different scenarios as well.

The second thing, often overlooked and not a trivial task, especially if your organization is quite mature: create an inventory of assets. It's about knowing your attack surface and what can be attacked in general, to ensure there are no surprises: no forgotten systems, no assets that nobody owns or maintains. It can apply to servers, workstations, anything connected to the network; it can be about user accounts; it can be about whatever has potential value and can become compromised.

The most trivial and the most unfollowed advice: back up the data. We meet this all the time. It was my recommendation number nine, but I moved it to number three because people still don't do backups, or the backup is half a year old, or it's not usable, or it can't be restored. Once you have an inventory of assets, you can understand what you should back up and what not. Very straightforward, but still: it is important to maintain the security of the backup data at the same level as you maintain the security of the main system. One very typical attack scenario that happens in real life: an attacker gets onto the local network; you have an HS2 installation running, secured according to the best standards; and then someone else in your team, who has no clue about the HS2, runs a backup server, and
there are no security practices there. The attacker can then compromise the backup server, extract the backups, decrypt the HS2 admin passwords from them, and log into the main system as admin. This happens when responsibility is not clearly defined or is split between different people. Attacking test, backup, or any kind of secondary system is a very popular way of privilege escalation: if you can't go through the front door, you find a back door, and it's quite often open.

As we mentioned during the first session today, authentication still remains the biggest problem, starting with guessable passwords and default passwords. We always say: please change the default admin credentials (admin / district) as soon as possible, and we still find production instances in real life that don't follow this advice. Then multi-factor authentication, single sign-on, not using passwords for SSH but keys instead: the general standard security requirements apply here.

Number five is about access rights and permissions, and we discussed it a bit already: it's important to have internal guidelines on how you do access control and how you maintain it, to keep things clean and tidy. It is typically accompanied by a regular access rights review. Once you define who should have which rights, and who the main actors are, the groups of users, their responsibilities and roles, it is much easier to do a regular check-up against those internal guidelines. Often there are things that get overlooked: temporary accounts, API permissions, API keys, and all kinds of non-interactive permissions. There are plenty of such things; that's why the inventory is important.

And here is a short break and recap. We talk about the inventory because the inventory helps to create and maintain an overview of the potential attack surface; we introduce some rules to simplify our life for
access control; and then we harden the systems based on the inventory we have and on the roles, the importance of the permissions, and the users in the system. These things reinforce each other, and none of them works without the others. If you don't have an inventory, you can't build good access control, and you can't harden the system, because you don't know what you have. If you don't have clear roles and responsibilities, you can't act in case of an incident, and you don't know who should have what kind of access. It is deeply interconnected; you can't do just one piece and think that you're secure.

We continue. Number six is network configuration. We've been in touch with several teams maintaining the HS2 where those who do system or application administration have no clue about what's going on at the network level; networking is a standalone, self-contained thing for them. I would say that even if the network is not under your control, it's good to have a regular sync-up with the networking team, to understand what kind of security measures they have and whether you need to work together to make your system secure. Also, many of you use a VPN for accessing the organization's network. In recent years a lot of new VPN-related technologies have appeared; for example, many people started using WireGuard, a lightweight alternative to classical VPNs. It allows you to do fantastic things without reducing performance: easily protecting systems, making smooth connections to different systems from mobile devices, web services, laptops, and desktops, and connecting systems over the internet in a much more secure, cheap, and efficient way. So I would recommend critically revising what you have on the network and seeing whether the same problem or task can be re-engineered in a
much more efficient way compared to what we did five or ten years ago.

Also mentioned today during the first session: software updates, not only for the HS2 but for the whole system stack. Vulnerable software is one of the biggest problems, and for basic security maintenance it's quite easy to keep things automatically updated. Most systems have built-in tools: for example, unattended-upgrades on Linux, Windows Update services and WSUS on Windows; all mobile devices typically can update themselves. So you can, relatively easily compared to the old days, maintain at least a baseline security standard. I think that if you have automated updates for the server stack switched on, you reduce the probability of security incidents by something like 50 to 60 percent, and it comes almost for free: the number of critical vulnerabilities being published is huge, and it is much easier to install updates than to mitigate incidents individually. The price of attacks becomes higher and higher on one side, and more and more people are interested in finding new vulnerabilities and selling them to criminal actors or using them for bad purposes. It means this market is growing, more vulnerabilities are found every day, and without automated updates it is practically impossible to keep a large fleet of servers secure.

You all know this one, of course: collect and check event logs. Even if they don't cover everything, having at least some logs and looking into them from time to time is good practice. What we typically suggest: once you have your HS2 logging set up, or you use the default setup, connect to the HS2, log in, perform some actions, for example change your password, and log out. Then go into the logs and see if you can detect your own actions: can you trace back to them and see who it was that changed the password, or is there any indication of who
changed the password. Ask questions: who has changed passwords in the system during the last 24 hours? Who has logged in during this time? Who was using admin privileges or impersonation? And so on. This type of check helps to ensure that you have an up-to-date configuration and a reliable way to investigate in case of incidents, or to find potential problems early.

Number nine is getting a second opinion, and sometimes it is a formal requirement, as we also discussed: penetration testing, external penetration testing. Security is a non-trivial domain, and we always recommend talking to peers, colleagues, your network, and to us as well. With any security-related question, you can just write to security@ghs.org, and whatever is related to the security of the HS2 or implementation support, we will probably be capable of answering within a reasonable timeline. And as we said today, attackers are advancing their skills; they're becoming more professional and use more and more modern tools. In the same way as we upgrade our computers with more RAM and CPU power, the same applies to our brains: it's worth subscribing to some news feeds to learn about security, listening to podcasts, Twitter, whatever you prefer, following security experts online, and talking to colleagues, asking for security advice and keeping up with changes in the industry. It helps quite a lot. Do you have any questions?

Some time ago we made a list of the most typical security misconfigurations. During training, we offered implementers an HS2 setup with some vulnerabilities in place and said: this is a random setup made by someone, can you find the most typical and most dangerous vulnerabilities in it? Eventually we came up with a list of the most common misconfigurations, the things that may be problematic from
the security perspective. I think it's a good exercise to check your setup against this list, and more importantly, as we go through it we will focus not only on what's bad but on how to fix it and what security implications these vulnerabilities may have. We start by talking about passwords again, a famous, or infamous, topic. Everyone knows that we should use multi-factor authentication, but if you can't do that you have different options, so let's go a bit deeper and understand how these mechanics work. Traditionally we consider the password a secret, and the attacker doesn't know it. So what can the attacker do? He can try brute forcing the password, and nowadays that means trying random passwords, applying every single character combination, starting from two-character passwords, then three, four, five, six and so on. If you do that against the default HS2 setup, or any sensibly configured one, the attempts will be stopped by the rate limiting feature, or, if someone has implemented rate limiting on the reverse proxy, the user will be stopped there. So the attacker, as we also mentioned, will then try not to go through the front door but through some kind of back door. If there is a service that is authenticated using the same password, he can do some reconnaissance and try to find another system in the same domain or the same organization that is not protected with rate limiting, or find a user with a weak password, or find a way to guess more passwords through different systems. So even if rate limiting on HS2 is in place, or even if you have two-factor authentication, somebody on some other system may be using the same username and password without such protection, and if you can't attack HS2 directly to brute force the credentials, you
can do it through a different service. Another opportunity is to go through the API directly: if there is an API endpoint used for some integration and someone finds it and tries to brute force it, APIs sometimes have more lax rules for rate limiting, because they're intended for high-intensity automated use, so sometimes it is possible to brute force credentials through an API endpoint directly. We also reiterate that it's important to know your attack surface and what other systems are present, and to think not only in terms of HS2 security but to look into the other systems that are on the same network and may be weaker than yours. For those who use SSH keys for authentication: it's better not to keep them in files but to use trusted platform modules. Every modern laptop has a built-in TPM that is used at least for BitLocker disk encryption on Windows, and it can additionally store security keys. So it can be used as a password store, as a certificate or private key store, and for various types of authentication. Most laptops do have one. In the old days, up until very recent times, the TPM was a separate chip on the motherboard; nowadays, at least for AMD processors, it is built into the processor itself, and once you buy a computer that was released or assembled within the last six to twelve months, it will, with something like a 90 percent guarantee, have a built-in TPM regardless of the model, price and so on. What started as a premium feature is now an essential one, because it's involved in hard disk encryption and in various trusted operations within the operating system kernel. So you can use it for storing keys; typically if you look on the driver disk or the website of the motherboard or the
device vendor, you will see some drivers and utilities for using the TPM and keeping keys there. The main problem here is that if an attacker is able to get onto your device and control it, he can extract keys that are stored in the file system, but he can't do that if they're stored in secure memory or on an external key. One more consideration, and I think this is gradually changing: newer security standards also require that all admins, or people who have maintenance or admin access to a system, must use hardware tokens or keys to authenticate themselves; just a password, or even multi-factor with a mobile authenticator, is not enough. We also discussed the lack of security updates and how to deal with it. Related to updates, there are different strategies, and I think everything should be done a bit cautiously, because things can go wrong with updates too. On Windows, you know that Microsoft releases updates once a month, unless there are emergency updates; it's called Patch Tuesday, one day per month when they release all the updates for their operating systems. It means that if you have automatic updates enabled, after that day all systems get updates within 24 to 72 hours. So for your Windows systems it is important to remember this date and check from time to time that the updates work properly and don't cause issues in your setups. There have been cases when automated updates stopped working, or updates received through the automated update tool broke some compatibility, typically with printers, networking services and other things. Nothing really severe, but at least it is important to keep this schedule in mind and also to subscribe to a Microsoft vulnerability news feed for your laptops or servers. On Linux systems, automated upgrades, especially if you run Debian or Ubuntu, are quite solid and quite reliable, and
typically, once configured, you don't need to do anything. Updates are released as soon as a vulnerability is discovered, and you typically receive them within 24 hours. What is important to remember is that on many Linux systems an automatically installed update may also upgrade some configuration files. To ensure that upgrades go smoothly, you should check from time to time that there are no breaking changes in the updates, and you should follow the operating system's standards as much as possible, keeping all settings in one place and tidy rather than spread across the system, so as not to create situations where the automated upgrade mechanism is surprised by incompatible settings. Here we should make a quick detour on general system hardening. One of the very popular questions is: we have our Linux server and we would like to harden it using the CIS benchmarks or other available standards; does it help? The answer is yes and no. Yes, because these recommendations are generally good: they are battle-proven, they are compliant with security requirements in multiple countries, and they really do improve security. At the same time, every change that comes with these hardening tools or recommendations is a kind of deviation from the standard setup. Imagine that the developers of a system test the standard setup, and maybe some configuration options on top of it; if you introduce too many changes into the security setup, it can lead to unpredicted behavior that is not the standard, probably not well-tested path, and you really need to understand, dig and research, as always with open source, into what you're doing and whether this kind of security hardening will bring more harm than benefit. So I'm a big fan of hardening
systems, but I would really discourage you from just running all kinds of checklists without first checking what they do and understanding the impact. This is just a heads-up; it's why the most secure configuration is usually the default one plus some essential recommended security settings. And the last part of this: if you go to the website of Postgres, Tomcat, the Linux operating systems and many other products, including HS2, there is either a hardening guide, a security guide or security recommendations explaining what the system developers consider the baseline security minimum on top of a default setup. Two things here. Encryption is a kind of default thing now, but we still meet setups where people connect to HS2 over plain HTTP. It is not enforced by default because you need to acquire certificates from somewhere, Let's Encrypt or a vendor. The problem is not only about sniffing the content but also about modifying it: there have been cases in certain countries where, at the mobile network operator level, malicious content was injected because the infrastructure was not reliable, and attackers were able to monitor and modify the traffic coming from the mobile operator's network. HTTPS guarantees that security settings are applied, that the data within your connection is untampered with, and that nobody can modify it on the fly. Also, if you have both HTTP and HTTPS entry points, it's important to check that they are consistent if you have to use both; people often, for example, change security settings on the HTTPS side and don't do it for HTTP, so there is more relaxed access control, or a lack of access control, on HTTP, and this brings inconsistency and potential security issues. The same, by the way, applies if you run both IPv4 and IPv6.
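To make the HTTPS-only point concrete: with nginx in front of HS2, the usual pattern is to answer plain HTTP only with a redirect, so the two entry points stay consistent by construction. A minimal sketch, not a complete config; the hostname and certificate paths are placeholders, not taken from any real setup:

```nginx
server {
    listen 80;
    listen [::]:80;                  # don't forget the IPv6 listener
    server_name hs2.example.org;
    return 301 https://$host$request_uri;   # nothing is served over HTTP
}

server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name hs2.example.org;
    ssl_certificate     /etc/ssl/hs2.example.org/fullchain.pem;
    ssl_certificate_key /etc/ssl/hs2.example.org/privkey.pem;
    # ...proxy configuration for the application goes here...
}
```

With this shape there is no separate HTTP site whose access-control settings can drift out of sync with the HTTPS one.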
Half of implementers forget that when configuring IPv6, security and the firewall should also cover IPv6, and so on; that's one more thing to remember. This is what we discussed today: if the client IP address is not passed to the backend due to misconfigured HTTP headers, we are not able to detect and distinguish users, and incorrect or inaccurate information goes into the HS2 log files. Depending on which header we are talking about there are different types of issues, but at least X-Forwarded-For should be configured, and if you lack the X-Forwarded-For and X-Forwarded-Proto headers on the reverse proxy, traffic can be attributed the wrong way or security can be degraded on the connection. Running Tomcat as a dedicated user is a must, an important requirement: if there are other users that share a group with Tomcat, or there is a user that is used for different access than Tomcat, it means that user, and the group, also has access to the HS2 data, which might not be desired. One user, one group per service is the default convention for system services on Linux, and if you don't change the default setup it should be fine. Also, it seems like a secondary problem, but restrict access to the database: if you run HS2 across multiple servers, with the database on one server and the reverse proxy and web application server on others, and you don't restrict network connections, and there is a lack of brute-force protection or authorization, it means at least one of these components can probably be attacked more easily than the others and becomes an additional attack vector. The same applies to weak database passwords: if there's a chance that someone can connect to the database directly, we always need to ensure that the password is strong, because brute forcing passwords on the database is typically much easier, and nobody thinks about this problem until they have to.
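The header point above typically boils down to a few `proxy_set_header` lines in the reverse proxy configuration. A sketch for nginx; the upstream address is a placeholder for wherever your Tomcat listens:

```nginx
location / {
    proxy_pass http://localhost:8080;
    # Pass the real client address and the original Host and scheme to
    # the backend, so the application logs the user's IP rather than
    # the proxy's, and knows the request originally came in over HTTPS.
    proxy_set_header X-Real-IP         $remote_addr;
    proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header Host              $host;
}
```

`$proxy_add_x_forwarded_for` appends the client address to any existing X-Forwarded-For value, which is the behavior you want when there is more than one proxy in the chain.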
The same goes for unencrypted connections to the back end: you can have TLS between your users and your data center, but within the data center the connections are not encrypted. If you don't control the whole environment and you're not sure who is on your network, it's probably a good idea to encrypt the connections between HS2 and the database, and between the reverse proxy and Tomcat. One more thing about using the most recent software: these two pictures show the default HTTPS/TLS setup for Ubuntu 18 and Ubuntu 22. Both systems have long-term support, so they should be secure in their default configuration if you just download them and set them up with default settings. If you do this on Ubuntu 18, run HS2 behind an nginx proxy, configure a certificate and test it for security with this website, you will get a grade B, with several security weaknesses in place. If you do it on the most recent Ubuntu 22 LTS, you will get a good grade A; you can also get A+ for new systems. This is just to demonstrate that modern software has more protection built in than old software. It's not necessarily about upgrading everything, but if you can use the new versions across your infrastructure it's better to do so, unless there are performance or other considerations that block an upgrade, and remember that LTS support eventually ends.
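If you can't move to a newer distribution right away, you can still override the distro's TLS defaults in the proxy configuration. A sketch of a common modern baseline for nginx; a hedged example, not a complete policy, and you should check that your nginx/OpenSSL build actually supports TLS 1.3 before enabling it:

```nginx
# Allow only TLS 1.2 and 1.3; the older protocols enabled by old
# distro defaults are what drag the grade down.
ssl_protocols TLSv1.2 TLSv1.3;
ssl_prefer_server_ciphers off;
# HSTS: a browser that has seen this header will refuse plain HTTP
# for this host for the given period.
add_header Strict-Transport-Security "max-age=63072000" always;
```

This is roughly the kind of delta that separates the default grade on an old LTS from the default grade on a current one.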
The last important thing here: learn about vulnerabilities early. I mentioned that all major vendors have vulnerability news feeds, and there are shared feeds; we also send notifications to our implementers. So you can, for example, subscribe to the disclosure mailing lists that notify about vulnerabilities specifically for Ubuntu, maybe Postgres, and others, and we have our own security notification list where we provide early notification of potential vulnerabilities. As we are quite a secure application, we don't spam you; we send it once or twice per year. That's all for this presentation, and if you have any questions or use cases to discuss, we can do it right away. Yes: I think you have the previous value and the new value in the logs, so you can see what data was changed. Maybe it doesn't apply to all operations, but if you have the full audit flow you can see which user changed what data. It might be hard to tell exactly which operation it was, but at least you will see it. It is not enabled by default; the full flow needs to be enabled separately, because previously the audit went to the database, which was an extra load on top of the natural operations, but now it goes to a file, which is more lightweight in terms of performance. From the previous session there was a request to discuss one more thing, so I will share it again. We proposed this some time ago, more specifically last year, but that version didn't get really good feedback, so now we are doing it once again. We have created a data access agreement, which is a legal but still very affordable, easy-to-understand agreement between any organization that uses HS2 and contractors, HISP groups or supporters, in essence anyone who can access the systems and do maintenance work, consulting and other things. The idea is that we would like to put some legal framework and safeguards around
access to the systems and ensure that both the organization and the contractor are bound by some terms. This is a template, something you can use and adapt in your organizations; it was legally reviewed as well, so it can be changed and improved, but at least it contains a bare minimum. We highlighted in yellow the things that you will clearly want to change for your case, and I suggest going through the agreement to see what it says. In the last weeks I've had some requests about it, and I think it's worth looking into. So this is a data access agreement between some organization and a consultant in the broad sense, and we suggest that it be an individual agreement between an organization and the consultant, because responsibility for access is individual, and depending on the contractor's or consultant's location there can be quite a lot of cases. From the organization's perspective, you would like to know the specific person who has done something in your system. In addition to any regular support contract, you would like to have an agreement that focuses on security, privacy and the general use of data; it's not a support agreement where you have fees, an SLA and other things, but a data access agreement with a focus on security and privacy, which is why it's between an organization and a person. We say that we follow the data privacy laws of both jurisdictions, and it's important not to list them: the consultant may be living in the United States while the organization is in Europe or anywhere in Asia, but we are saying that both parties follow the laws and both sets of laws apply, which is the most common situation. And despite the fact that the definition of personal data differs between countries, it still makes sense to include as much as possible. We introduce some initial definitions of what's happening, and the agreement has a strong focus on personal
data, because under almost any personal data protection act, when personal data is processed, both the organization and the consultant have certain obligations in case of a data breach, so this kind of language protects us. Then you list the purposes, what the consultant is actually doing, the tasks, and in any form the activities that will be performed; you can list as many as you want, and we took some common examples and use cases. The period of access is self-explanatory; we always suggest granting access on a limited-time basis. It can be as long as you want, but there should be a time window for access. Then there are two lists: things that are allowed and things that are not allowed, listed explicitly. Both lists can be empty, but typically you can put into the agreement that you don't allow the consultant to reboot the system, or to do some sensitive operation without your consent or without extra procedures. This is useful for cases where, for example, the consultant, trying to fix something, makes it worse, and it also guarantees that both parties have read and agreed on certain procedures for how the data is accessed. Then there is the cessation-of-access clause, which explains what should happen once access stops: that the credentials are revoked and the data is returned, including any physical media used for keeping it. More important is that there is no resharing of the data: once a party has been given access, further resharing with colleagues or friends within the organization is not covered by the agreement, and for any further resharing explicit permission must be obtained from the data owner. The data is confidential, of course. These things are largely covered by the privacy laws, but we don't focus on privacy itself, just on the regular day-to-day security routines. One more useful thing is number 15: the data security of the consultant's laptop, to ensure that people who have access don't decrease the
security of the target system: requiring strong passwords, encryption, having antivirus installed, and so on. Also, if an incident has happened and the consultant becomes aware of it, they should report it as soon as possible. That's it. As I mentioned, it's a heavily revised version of what we had before, and it's publicly available. I think we need to put a license on it to ensure that you can freely copy, modify and reuse it, but before we do that and make an official release: if you have a need for this kind of agreement internally, I suggest making a copy. The link to the agreement is in the Q&A document for the first session; you can download it and use it, and if you have any feedback we can discuss and improve it. That's all from my side for today, so if you have any further questions I can answer. Yes: what about access to the server, where you share access with someone to help you? Let's do a show of hands: who has ever shared access to their system, or to the database, with another organization or a consultant? Keep your hand up. Now put your hand down if you had something like an agreement in place when you shared that access. Everybody else here could probably use this. I think it's something that's often overlooked but it's very important, even just to go through the process and think: what am I doing here, what am I giving access to, and who's getting access to what? And then you need to have some record of that: this was shared with this person, they agreed that this is how they're going to use the data, they're not going to sell it to someone else, all of those things. But I think it's very important. Yes, it's in the Q&A
document; it's linked there, and you can access it online from the document we shared for the previous session. Once we get the conference materials we'll provide the link, and if you still don't get it you can write to security at dhs2.org and we'll share a copy at once. I have one question, and this might be for Austin. We share HS2 personal data with another agency. The idea is that they need to check whether a person is receiving any services, and against that they pay out money. The problem is that when we give them API access, even though we can put restrictions in the data agreement, we cannot technically restrict certain things: they only need a few attributes, the national ID or registration number, maybe the name, to verify whether the person is receiving a service or not, but they can also access other data. Currently, in version 39, there is no such functionality: if I give a certain user API access, we give it as a group tied to a certain program, and sometimes a program doesn't stay around long, so we cannot limit access to the specific data elements. So is that possible in the future? Because right now the safeguard is essentially manual. Good question, and it depends a little bit on how you model your system. There are ways to restrict access, and in some cases, if the information is not segregated, for example if the program stages don't separate the sensitive parts, whoever can see the program can get some insight into the coverage data as well. But there are other ways to go about
it as well, right? You can take a step back: you can export the data, filter it, and give them only the data they need, put it in a CSV file, or give them access to a view, so instead of direct access they get only the values you generate out of your database. So there are other ways to go about it as well. And finally, I know we're working on a review of the authorities that are in the system, and probably also the different types of objects that can be shared under granular access control, so that's definitely something to look at. Just following up on how access can be shared: in Laos, during COVID, we had to share data with Laokay YC, a company working under a ministry, a private company which has all the mobile data and personal information. The Ministry of Health was giving the COVID vaccine, and they wanted the data to be used in their app: they could search in HS2 based on the ID number and other fields and then download the COVID certificate data, so that people could download their COVID certificate in the app. The way it worked was backend-to-backend services, so they were not accessing HS2 directly; we gave them the ability to search on these three fields via the backend services, with a token and all of those things, and we did not give them access to HS2 itself. So again, when you do integrations with other parties, you also need to make sure the security model is right beforehand. What we used to do was create a username and password with read-only access and give it to them; that
we have stopped now; we only use the token-based approach, so you can fix whether it is system-to-system or a person. It's better to switch away from the older implementation where we gave them a username and password to log into the system. So I guess we are moving in a good direction, but these agreements should be in place. There is technical documentation, and in the background the developers are managing it, but there is no actual document saying: these are the things we have agreed on. So I think it's a very nice thing to implement alongside the technical controls. We actually do the same thing: we limit reviewers' access because of this restriction, but for certification we have full integration. Nowadays we get a lot of requests, on top of the ministry's oversight, so if there is an official review and they have signed a data agreement like this, we have to give access; but the problem is we don't want to give more than necessary, and we don't want to have to develop something custom each time. What they want is to filter on the area they're already interested in, so that money cannot be paid out twice; there is real logic behind it at the end, but that was the hard part. Ideally we could restrict the data so that a user not in the group cannot see it; the user group mechanism is there, and user-specific sharing as well, but there are some issues, so maybe that actually doesn't work for this case. One approach: if you are required to share only particular data, you can create a dedicated view or query for that, expose it through the API, and use roles and sharing on top of it. In one case there is a report group in the system, and
there is another, separate group at a different level. One last thing: you have to make sure you know how the data is organized, and then you can be specific; otherwise you can also use filters on exactly the attributes and values you need, so in the backend query you get only what you want. In our case we use this with file-based exchange, and we have a biometrics system that uses the data, so we have to look at data security specifically; some partners will give you only the one data element in the solution even if you ask for others. So we have different tools, and for a particular case you can grant only that one piece securely; the other issue is what people then do with the information, because if I don't control it, then it's all outside. Sometimes we have projects where partners have file servers; they use the data for matching, maybe for a month or so, and then the project moves on, but they normally forget that they still have real data in those files, and even to keep that data there has to be some DSA or sharing agreement covering their business. Sometimes I see that people don't want to deal with it, because by that time they are not going to be here anymore, so they feel it isn't their problem, and you need to have something in place for that. My comment on the access control cases we just discussed: I think we probably need a library of typical solutions for how to do access control based on use cases, and this is where we really need your feedback, because, for example, John shared their experience, you have your own setup, and probably we could build some templates, or recommend or advise against particular approaches, what to do and what not to do,
because it's a complicated area, and once you start, it typically develops quite organically. Once you have an access control model with granted permissions, it is a bit too late to redo all the things that are working, so building these rules using best practice from the start can save quite a lot of time and make further maintenance easier. Yes, please. Actually, I am upgrading these systems from version to version, and I have to go through security audits for certificates; different agencies use different tools for testing the applications, so we are getting lots of findings and fixing them in our in-house developments, and my question is about future versions. One question is a captcha on the login screen: there is two-factor authentication, but the auditors ask about a captcha. There is a captcha on self-registration, but not on login. When you use a JavaScript-based captcha it doesn't pass the security review, so you either use Google's captcha or you have to justify everything, but in HS2 there is no captcha on login. Also, there is no concurrent login control: suppose one user is logged in in one browser; you can log in at the same time in Mozilla, and in some security regimes there should be options for this. Another is the error page: suppose a user types a wrong URL, the system shows socket information, some exceptions, maybe JavaScript errors. This also means we had to create another error page ourselves, but in HS2 there is none out of the box, so this is a requirement. And one more: suppose a user changes their password; after the change, the password is already updated in the back end, but the user can keep using the system with the session from the previous password. I have some comments on SQL injection too: we have tested the back end; suppose we are
entering something in the username and password fields where the password is actually an injection attempt on login; we can block that internally in our in-house development, but in HS2 there is an issue, so I think in a future version it could be addressed. One more thing: suppose we have created a limited user, a data entry operator, and we have not shown them certain modules, like import/export access, but an intelligent user can open another tab, and even though they have no access in the menu, they can easily open these modules. This is dangerous: if you are testing with users who have no access to import/export, they can still open the import app in another tab, and when an agency audits with their tools, this kind of thing is flagged as a violation that you cannot pass. My first immediate comment, before Austin answers: I remember there were quite a few issues mentioned; could you please send them to us in an email or a form? I've heard several important ideas to follow up on, and it would be great to sync up after this session and write them down, because there are many of them and I could misinterpret them. I have something to comment, but Austin, you want to go first? Okay, the immediate one on captcha: I don't think we have it everywhere; probably we don't have it at all. Regarding the second one, which is about prohibiting logins from other browsers once you have a session: I think it's not a missing feature, it's how the internet works, from our perspective. You can theoretically restrict access to one IP address, but it will be very hard to restrict it reliably to one browser, or especially to one session, because you need to rely on something on the client side, and a sophisticated client can always, for example, take a cookie from your Chrome browser, put it into Mozilla and continue using
the system in the same way. So there is no reliable way to prohibit this, and more importantly, once you prohibit it, for example by limiting to one IP address, one user agent or one browser, it becomes much easier to lock the user out: your browser crashes, you reopen the page and try to log in, you get a new session, but from the server's perspective your previous session is still running, which means you are locked out until that session expires, and if your session doesn't expire for a day, you can't log in for a long time. So there is a lot of complicated mechanics here. We actually have something for this: there is a current-session control in the default login which allows you, for example, when you log in from one browser and then log in from another, to log out the previous session rather than leaving it locked for a long time.