Jan Lieskovsky and Martin Preisler with their presentation. Good morning and welcome. My name is Martin Preisler. I work at Red Hat, and today with my colleague Jan Lieskovsky we'll be talking about how you can use SCAP to secure the cloud. There was a talk yesterday by Josh Bressers called "Security: Everything is on fire." So just in case you weren't scared enough by that, we will try again. Everything indeed is on fire, and there are several types of fires; we will discuss how to fight two of them. One is software flaws: having unpatched vulnerabilities in the software you have in production deployment. The second is configuration flaws: having your services configured in such a way that allows attackers to attack your infrastructure.

So let's first talk about vulnerabilities. There are several kinds of vulnerabilities. The first ones I would like to talk about are undiscovered vulnerabilities. We can all agree that these are pretty bad, but they are actually the better type of vulnerability, because they are undiscovered, so it's relatively expensive to exploit them: the details aren't public, and somebody has to invest a lot of effort to find these vulnerabilities and exploit them. What is much worse are known vulnerabilities: a vulnerability that has an assigned ID, is well known in the community, and for which details of how to attack are available in public. Some of these vulnerabilities are so bad that they've even received fancy nicknames lately, like Shellshock, POODLE, and VENOM. And probably the most famous vulnerability, Heartbleed, has even received a logo. So we can all agree that vulnerabilities are dangerous and a bad thing to have in our infrastructure. We cannot do anything about the unknown vulnerabilities, but let's try to do something about the known ones: let's find some way to prevent having known vulnerabilities in our infrastructure.
But this is getting increasingly difficult, because in today's day and age we have single-purpose containers and single-purpose virtual machines in our infrastructure. That means there are many different types; we usually no longer have that one golden image that we can use and check, and this diversity of containers is making it more difficult to audit our machines. So we need some sort of automation, some tool that we can run on our infrastructure, and this tool needs to explore and do everything for us automatically.

I'd like to present such a feature in Atomic. It's fairly new, called Atomic Scan. You give Atomic Scan a container or a container image, and it explores it for CVEs. It can scan one container or one container image by ID or by image name, or it can even scan multiple images, which I will explain later. The output can be a summary of the results: in this example, we can see a container that has zero critical vulnerabilities, zero important, zero medium, and zero low-severity vulnerabilities. When we use --detail as a command-line parameter, Atomic Scan gives us detailed information about these vulnerabilities, including links to their descriptions. And if we use --containers or --images, we can scan all the containers or all the images we have in our infrastructure.

So this looks like a pretty nice, magical tool, but as a security guy, I want to understand how this works before I trust it, right? So let's very briefly discuss how and why it works. The first thing that happens when this command is run is that we look into the container and detect the operating system version, be it RHEL 6, RHEL 7, Fedora, CentOS, or something else. Then, according to that version, we need to find the CVE database, the so-called CVE feed, from the vendor.
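As a sketch of that workflow — the flag names are the ones from the talk, so verify them against `atomic scan --help` on your own Atomic Host before relying on them — a session could look like this:

```shell
# Print each command, and only execute it when the `atomic` tool is
# actually installed, so this sketch is safe to run on any machine.
run() { echo "+ $*"; command -v "$1" >/dev/null 2>&1 && "$@"; return 0; }

run atomic scan my-container        # scan one container/image by name or ID
run atomic scan --images            # scan every local image
run atomic scan --containers       # scan every container on the host
run atomic scan --detail my-image   # per-CVE details, with links
```

The container and image names here are placeholders; substitute the IDs or names that `docker ps` and `docker images` report on your host.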
For example, if it's RHEL 7, we need to download the RHEL 7 CVE feed from Red Hat. Once we do that, we load it with a tool called OpenSCAP, which we will describe later. OpenSCAP looks into the container and compares the versions of the software in the container with the versions in the CVE database. So if the vendor is giving you data that some version ranges are vulnerable to some vulnerabilities, OpenSCAP will report them.

Here's how it works architecturally in Atomic Host. We have the host OS with the atomic command. On the host OS we have a super-privileged container where all the functionality is encapsulated. Only this privileged container has OpenSCAP and OpenSCAP Daemon, the two projects that power this functionality. When you call atomic scan, a D-Bus call is issued to the super-privileged container, and the super-privileged container looks at the other containers on the host OS to explore them.

Okay, but security is a much broader term than just vulnerabilities, which is all we've discussed so far. We need to discuss the other important part of security, which is getting the configuration right, so-called hardening. So let's start by explaining what a security policy is. A security policy is usually some sort of book or a very big binder that's human-readable, and the auditors carry it to the companies they audit. It contains a list of rules to follow, with human-readable text about each of them: usually a description of what the rule is about, some rationale for why you should comply with it, some guidance on how the auditor can check whether you are actually in compliance, and of course some guidance on how to fix the issue if you're not compliant. As I said, this is usually a PDF, a spreadsheet, some other human-readable text, or even a big book.
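Coming back to the version comparison at the core of the CVE scan described a moment ago: here is a deliberately simplified stand-in of my own. Real OpenSCAP evaluates full RPM epoch-version-release semantics from OVAL definitions, while this sketch approximates the ordering with GNU `sort -V`:

```shell
# Succeeds when the installed version sorts strictly before the version
# in which the CVE was fixed, i.e. when the package is still vulnerable.
is_vulnerable() {
  installed=$1; fixed_in=$2
  [ "$installed" != "$fixed_in" ] &&
    [ "$(printf '%s\n%s\n' "$installed" "$fixed_in" | sort -V | head -n1)" = "$installed" ]
}

is_vulnerable 1.0.1e 1.0.1g && echo "openssl 1.0.1e: vulnerable"
is_vulnerable 1.0.1g 1.0.1g || echo "openssl 1.0.1g: fixed"
```

The 1.0.1e/1.0.1g pair mirrors the Heartbleed fix in OpenSSL; any version strings would do for the illustration.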
It's not really important to read this text; it's just to illustrate. This is a real-world security policy called PCI DSS, from the payment card industry. You can see that these are human-readable rules, and there's no way we are going to read and evaluate this for all the containers in our infrastructure. So we need something to make this automated, and I'd like to introduce a standard that does just that, called SCAP. It's a NIST standard, from the National Institute of Standards and Technology, a US standards organization. Very simply said, it's a set of data formats for taking these human-readable policies and making machine-readable guidance from them.

This is an example of a SCAP security policy. You can see that it's more structured, and instead of having just human-readable instructions, it also has some bash snippets that you can execute to check or fix the situation. Other than that, it contains the same information: the descriptions, the guidance, everything the human-readable security policies have.

There are two types of SCAP security policies, two types of use cases. We've just discussed the first, which is to detect CVE vulnerabilities like Heartbleed, Shellshock, and so on. Now we're going to discuss security compliance, which is to ensure that your infrastructure is set up according to some security policy. When we're doing security compliance, we're asking different questions than with vulnerability assessment: whether root can log in over SSH, whether SELinux is enabled, whether /tmp is mounted on a separate partition with noexec, whether we're running any obsolete insecure services, and so on. And the standard alone is not enough; we need an implementation to use the standard. OpenSCAP is just that implementation, and it's a project that we're working on at Red Hat.
OpenSCAP is a SCAP 1.2 implementation started in 2009. It's NIST-certified, and it's a library and a command-line tool, but a GUI front-end is also available for easier use, and I'd like to demonstrate this GUI front-end now. Let's take an example: let's scan a single Fedora 23 machine with OpenSCAP and SCAP Workbench, using the common profile from SCAP Security Guide, which Jan will introduce later.

The first step is to install the tools: we need two packages, scap-security-guide and scap-workbench. After that, we just start SCAP Workbench. When SCAP Workbench starts, you'll see this screen. Let's ignore customization and profiles for now and focus on the rules. SCAP Workbench shows you the titles of the rules it will check, and when you click Scan and wait roughly three or four minutes in this case, SCAP Workbench gives you the results with no other interaction necessary: fail and pass results for all these rules.

But what if you need to find out why you actually failed one of these rules? You can click Show Report, and an HTML report opens in your browser with a breakdown of the results. In this case it looks terrible: 27 rules passed, but 46 rules failed. Keep in mind that this was a freshly installed Fedora and a pretty strict security policy. To explore why a particular rule failed, we can go into the report, find the rule we're interested in, and click it to see its description. In this case it's a rule about password minimum length, so about password policy, and it enforces that users have passwords at least 12 characters long. We can read the description, and when we scroll down, the report explains why this rule is failing.
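The check behind a rule like this boils down to reading a single value from /etc/login.defs and comparing it with the policy minimum. A self-contained sketch of that idea (using a temporary sample file rather than the real /etc/login.defs, so it is safe to run anywhere):

```shell
# Extract the PASS_MIN_LEN value from a login.defs-style file.
pass_min_len() { awk '$1 == "PASS_MIN_LEN" { print $2 }' "$1"; }

sample=$(mktemp)
printf 'PASS_MAX_DAYS 99999\nPASS_MIN_LEN 5\n' > "$sample"

len=$(pass_min_len "$sample")
if [ "$len" -ge 12 ]; then
  echo "pass"
else
  echo "fail: PASS_MIN_LEN is $len, policy expects at least 12"
fi
rm -f "$sample"
```

The real rule in the SCAP content does the same kind of comparison, just expressed as an OVAL check rather than a shell one-liner.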
So in this case, it says that in /etc/login.defs the password minimum length value is 5, whereas it was expecting 12. Now I would like to give the stage to Jan, who will explain more about the different security policies.

Hello, my name is Jan Lieskovsky. I work as a software engineer at Red Hat, on the same team as Martin, and the primary area of my focus is security standards and security policies. We try to simplify the application of security standards in different software products. I came here to present why you should be interested in security policies, which of them are already available, and what the next steps will be.

So why the need for security policies? There are multiple aspects; one might want to answer these questions. I have collected a couple of them, so let's have a look at what we have got. The first point is that Linux distributions are intended to be multi-purpose. One would naturally expect different security requirements for a classroom workstation, different ones for a high-performance computing server, and yet different ones for laptops that are expected to be used in airports. Then there are the high-level security standards, for example the Payment Card Industry Data Security Standard, to mention one. The observation here is that they are often expressed in abstract language: they are universal in the sense that they should be applicable to a variety of different operating systems and software products, and they need to be that universal. In contrast to that, there are the concrete operating-system or software details, the concrete steps you need to perform to be aligned with the standard. As Martin already said, due to the difficulty and complexity of those standards, there might be a desire to have these things automated. Which brings us to SCAP Security Guide.
As Martin briefly mentioned, SCAP Security Guide is basically a collection of security policies expressed in the SCAP protocol format. It's intended to be suitable both for humans, in the sense that you can see what the policy will actually perform when you scan the machine, and for machines: the policies are expressed in a way a machine can automatically process. As we mentioned, there might be a desire for automation, so SCAP Security Guide provides all the content necessary to fully automate the process of scanning and correcting a system. It's a community project: if you're interested, you can contribute, and if you find a bug, you can file an issue. It's open source, and all the content we create is released into the public domain.

This slide presents some policies that are already available. There are two concepts to mention: one is the security policy, and the other is the security profile. To use a simple analogy, we could map a security policy to a library, a library of technical literature, while a profile would be more specialized and cover a more specialized subject, like mathematics or physics. For example, here we can see that currently we ship security policies for RHEL, Fedora, the derivative operating systems, and also for Debian 8. What's interesting is that we are no longer focused only on operating systems: there are also products or software components that are known to have a lot of security flaws and therefore require our attention to create security policies for them. Firefox and Java are the two main examples of policies you might be interested in. Of course, if this picture is missing some product you would like to see here, it's easy to start contributing: file bugs and feature requests, provide a justification for why we should be interested in your product, and we can have a talk and cooperate together. So, security policies.
As with any other human activity, there are two sides of the coin, the bad news and the good news. The bad news is, like Martin said, that you need to install a separate package when you want to use them. The good news is that in our solutions we often provide the security policies already included, so you don't need to take care of additional installation of these policies; they are already included there. For example, this is an announcement about Red Hat CloudForms / ManageIQ. The interesting point here: it's from the 8th of November last year, and they describe the enhancement in the security area; as you can see, they explain why they are doing it, and they added support for the Security Technical Implementation Guide and the SCAP protocol.

We can meet security policies also on localhost; this is the case Martin already introduced, basically the very same tool. We can meet them even during installation of the operating system. It's not necessary to split the task into two steps, install the operating system and then enforce the security policy; it's possible to merge these two steps into one. Here is an example of a tool called the OSCAP Anaconda Add-on, a graphical front-end for the installer. For those who don't like graphical installations, and I understand that for thousands of systems you probably wouldn't use the graphical interface, there is the kickstart equivalent, so you can use this snippet in your kickstart files and scripting engines. The important part is just the profile line: just by changing the profile name, you can install a system compliant with a different policy.

As another example: Firefox, as I mentioned, is one of the tools carrying quite a lot of security flaws. This is an example of where a user might want to harden a Firefox instance more; for example, they might want to disable the Secure Sockets Layer and replace it with Transport Layer Security.
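Returning to the kickstart route for a moment: the snippet in question looks roughly like this. The addon name and keys are from the OSCAP Anaconda Add-on documentation as I recall it, and the profile ID is just an example, so check both against your installer's documentation:

```
%addon org_fedora_oscap
    content-type = scap-security-guide
    profile = xccdf_org.ssgproject.content_profile_pci-dss
%end
```

Swapping the profile line for a different profile ID is all it takes to install a system compliant with a different policy.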
A note here: these rules aren't interconnected in any way. You don't need to use all of them; it's enough to specify just one of them, or any subset. There is no relation; this is just an example. Another example from the Firefox policy: it might be interesting to enable certificate validation. Maybe you all remember those warnings that a certificate is untrusted, asking if you want to accept an exception. Yet another example: if you are bothered by pop-up windows showing up, you might want to enable this rule.

The natural question is how such a policy is created. So I will speak a bit about the motivation for customizing a policy. Here is an example of a PCI DSS rule requiring a certain password length and a certain number of character categories to be present in the password; we can see they require seven characters and two categories. Of course, this doesn't need to be sufficient for every individual company, so they might want to strengthen, or vice versa weaken, the existing policy: strengthen in the sense that they might require four categories, or fewer categories in the case of weakening it. Another use case for policy customization is to create your own policy completely from scratch.

Here is an example, again using SCAP Workbench, of how we can customize a policy. What's interesting for us is basically just the Customize button, and here is how we would do it: at the top, we would use the select-all/deselect-all button and keep just those rules we would like to use. In this example, I have created a picture for the first case of the Firefox policy: disabling the Secure Sockets Layer and enabling Transport Layer Security. You might be interested in more details on how to customize the policies; we understand that this is a complex topic and it's difficult to customize them, so we have created a dedicated portal for all the tools and policies related to OpenSCAP.
There you can find a tutorial focusing in more detail on how to customize a security policy, but also a lot of additional information about the tools we presented here today, about security policies, et cetera, so check it out. The question might be: we have presented a lot of tools and a lot of policies, so is there anything left for the future? A lot of things have been done, but is there anything we still need to do? The answer is: for sure. To speak about it in more detail, we want the policies and also the tools to be integrated with more technologies, to mention some examples: Docker, OpenShift, OpenStack, Red Hat Enterprise Virtualization. So if you are interested in these technologies and the topic of security policies sounds like fun to you, we can have a talk and cooperate in the future. That's it for my part. Thank you.

Okay, so Jan has talked about how you can get the content for your infrastructure; let's now talk about how to really deploy it. We've used GUI tools, which is useful in some use cases, but sometimes you don't even have an Xorg server available, so you need command-line tools. I won't explain this in great detail, but just keep in mind that everything we've shown in the GUI tools can also be accomplished with the command-line oscap tool. We also have convenience wrappers to scan containers and VMs. The advantage of those is that you don't need to install any tools in the virtual machine or container: you can scan them from the outside, which is what security people usually like; they don't like to install new software in their production environments.

Okay, so we've made it much easier. We've come from the book, from the binder, which is human-readable text, and we now have a machine-readable policy.
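To give the command-line route a concrete shape: the data-stream path and profile ID below are examples that vary per distribution, and the wrapper names are the ones I know from the OpenSCAP project, so verify them against the openscap-utils man pages:

```shell
ds=/usr/share/xml/scap/ssg/content/ssg-fedora-ds.xml
profile=xccdf_org.ssgproject.content_profile_common

# Printed rather than executed: the content file only exists once the
# scap-security-guide package is installed.
echo "oscap xccdf eval --profile $profile --report report.html $ds"
echo "oscap-docker image-cve my-image   # CVE scan of an image, from the outside"
echo "oscap-vm image /path/to/guest.qcow2 xccdf eval --profile $profile $ds"
```

`oscap xccdf eval` is the GUI-equivalent scan; the oscap-docker and oscap-vm wrappers are the "scan from the outside" convenience tools mentioned above.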
But we've only done one-off scanning so far: the system administrators still have to load the policy manually and scan each machine manually, and this is not enough, because if something needs to be done manually, people will ignore it over time and just stop doing the compliance checks. So let's now discuss how we can scan some resource continuously, for example scanning a container every Sunday around midnight.

For that, I'd like to introduce a new project in the OpenSCAP world called OpenSCAP Daemon. Instead of being a tool, it's a service. When it runs, it provides a D-Bus interface. There's a command-line tool available for interaction, but it just issues the D-Bus commands to the Daemon. The central concept in the Daemon is called a task, and a task is about evaluation of some resource. A task usually contains the content, the profile, which resource you want to scan, and when you want to scan it, so some sort of schedule. Tasks can be evaluated on demand or on a schedule, as I said.

Another goal when creating this project was to make it a bit more interactive, because as you could see with the oscap tool, you need to type in very long profile IDs, and this is a big pain even for me. So when creating the tool, we made it more interactive: when you're creating a task, the Daemon gives you a list of probable choices. For example, here I'm creating a task to scan a remote machine every Friday, and when I press Enter after setting the target, the Daemon lists the SCAP Security Guide contents that are available, and I just need to type in a number. In this case I'm scanning a RHEL 7 machine, I believe, so I'll type in nine. After pressing Enter again, the Daemon figures out which profiles are available, and again lets me just type a number to pick the profile. So I think from the usability perspective this is a big improvement over typing the IDs manually.
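In command form, the interactive flow just described looks roughly like this. The subcommand names are as I recall them from the OpenSCAP Daemon project and may have changed, so verify with `oscapd-cli --help`:

```shell
cli=oscapd-cli
echo "$cli task-create -i   # interactive wizard: target, content, profile, schedule"
echo "$cli task             # list the defined tasks and their schedules"
echo "$cli task 1 run       # evaluate task 1 right now, outside its schedule"
```

Every one of these is just a thin client issuing D-Bus calls to the running daemon.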
On this slide, I've made a screenshot of a task overview. I've created a task for everything the Daemon can scan: the local bare-metal machine, a Docker container, a Docker image, a virtual machine that's either running or shut down, a storage image, and a remote machine. So it's a showcase of what the Daemon can do. We really tried to unify the scanning interface in the Daemon so that you can scan all these resources in the same way.

After you've created the tasks and they're scheduled and running, you need to get the results, and this slide demonstrates how to do that. You can get an overview of all the results. The Daemon only stores the machine-readable results; the HTML reports we've shown are always generated on demand and thrown away immediately. For debugging, you can also get the standard output and standard error of all the runs.

So I've very briefly introduced the Daemon for continuous scanning, but keep in mind that it's a pretty new project and it's designed for smaller deployments. If you have a deployment with hundreds of machines or more, you probably want to look into Foreman, which we think is more production-ready and more suitable for these large deployments. I'd like to show you how it works in Foreman. The concepts should be familiar by now: you're again creating a policy, choosing the content, choosing the schedule, and choosing what to scan. After the compliance policies are created, you can see the results from all the machines and view the reports. The reports are also generated on demand: the same familiar reports as in Workbench and the Daemon, integrated into the Foreman interface. Thanks for your attention. This is all. Are there any questions? Yes?

So the question was: is there anything in the Daemon or in Foreman that can perform some sort of action when a machine or container is out of compliance?
There are fixes in the SCAP content, so you can run remediation and remediate the situation. So in case some service is installed that shouldn't be there, like rsh, it runs yum remove rsh and stops the service. We have some plans to, for example, undeploy containers that are vulnerable, but these are just plans for now. Are there any other questions? Yes?

So the question was: are we scanning package data for the known CVEs in vulnerability scanning? Yes, we are only scanning for known CVEs. As far as I know, there's no way to scan for unknown CVEs, and we're only scanning the RPM package versions, so we're comparing version ranges with the versions of the packages that are installed. Any other questions? Yes?

So the question was: in SCAP Workbench, if I don't have a browser installed, can I view the results? What you could do is save the machine-readable results and open them in vi, but that's very inconvenient. You could also save the HTML report and open it with links or some other text-mode browser, but I guess that's, again, a browser. Okay, so the point is, if I'm hardening a system, I probably don't have a browser. In this case, I recommend using SCAP Workbench from some machine where you have a desktop system, and scanning the remote machine from there, so you can do all the browsing on that machine. Any other questions? Yes?

So the question was: does Workbench support running the remediations for you on a remote machine? I'll see if I have a screenshot of this somewhere in here. There's a Remediate checkbox on the bottom right. If you check that and click Scan, it authenticates, runs the scan, and for failed results it then runs the remediations automatically on the machine. That's a pretty good feature request, but we don't support that yet in SCAP Workbench.
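On the command line, the same remediation can be requested with a single flag. A sketch with placeholder content paths (the data-stream file and profile ID vary per distribution):

```shell
ds=/usr/share/xml/scap/ssg/content/ssg-rhel7-ds.xml
profile=xccdf_org.ssgproject.content_profile_common

# --remediate evaluates the profile and then runs the bundled fix
# scripts for every rule that failed. Printed, not executed, here.
echo "oscap xccdf eval --remediate --profile $profile $ds"
```

Because the fixes change the system (removing packages, rewriting configuration), this is something to try on a test machine before pointing it at production.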
If I could step in here, I would say that you can initially scan the system, realize that there are those ten failures, create a custom policy selecting just those three rules, and run the remediations against just those three rules.

So the question was: can we also scan according to some events instead of just a fixed schedule? We are looking into these options, but right now we can only scan according to a schedule. Any other questions? Yes?

That's a very good question. So the question was: if we have a security policy that disables root access over SSH, how do we scan remotely? We have something called oscap-ssh, which is a wrapper script. It can do SSH and then sudo on the machine, scan, and return the results back. So that's one option you can use. You can also scan containers and VMs from the outside, so even if you don't have any SSH access to them, you can scan them. SCAP Workbench has a feature request for this, but it's not implemented yet: I would like to implement the same sudo support, so you can log in as a normal user that's in the wheel group and then sudo once you're logged in. Any other questions? Yes?

So the question was: have we done any research on the performance impact of the scanning? To be honest, it really depends on the content. There's some content that's very demanding: for example, if you're checking that all the setuid binaries come from RPMs, you need to check basically every file on the file system. My colleagues who implemented OVAL have done some optimizations, so they cut some of the branches and make it a bit more performant, but it really depends on the content. We try to make the content fast, but for some policies we really need to check all files, so in that case it's slow, because there's no way to make it fast. Yes?
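The oscap-ssh wrapper mentioned a moment ago takes the user@host, the SSH port, and then ordinary oscap arguments. The host and content path below are placeholders, and the argument order is from the openscap-utils man page as I remember it:

```shell
host=root@192.0.2.10   # placeholder target machine

# Runs oscap over SSH on the target and copies the results back.
# Printed rather than executed, since the target here is fictional.
echo "oscap-ssh $host 22 xccdf eval \
  --profile xccdf_org.ssgproject.content_profile_common \
  --report report.html /usr/share/xml/scap/ssg/content/ssg-rhel7-ds.xml"
```

The content has to exist on the target machine; oscap-ssh only transports the invocation and the results, not the data stream itself.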
So the question was: for the CVE scanning, are we using public CVE databases, or do we integrate the CVE data into the policies? You can do both. In Atomic Scan, for example, we download the public CVE feed and use it directly. In SCAP Security Guide, in this example it's not visible, but sometimes there are rules asking whether there are vulnerabilities, whether the software is actually up to date, and there are basically two approaches; we use the official CVE/OVAL content from the distributions. This content is available not just for Red Hat but also for other distributions, like SUSE, etc. So where the CVE data is available, we are using the official content.

So the question was: from the time a vulnerability is published to the time it's available in the policy, how long is that? Well, that's a question I cannot reliably answer; it's a pretty hard question, really, and it depends on a lot of factors. For example, some vulnerabilities that cannot really be exploited may never show up in the CVE databases; usually the vendors only include vulnerabilities that customers or users care about. The primary factor is the severity of the security flaw: for the important ones, the period in which they are processed is shorter. For critical flaws it would be very short, and low-severity flaws might not even get into the policy at all.

Thanks for your attention. We are out of time, but we'll be around. Thank you for the questions.

There are a lot of things done and a lot of things ahead of us, and it might seem confusing why there are all of these tools, why each piece of functionality is separated in some way.
People would rather have one tool doing everything, but the modularity gives us possibilities: separate pieces of functionality that we can integrate. It's the same with all the different tools that we have at Red Hat. Who takes care of this information, which is public but is integrated in the...? To put the question more academically, I guess: there are several problems; it needs some sort of cooperation with the people who wrote this... The way it works is, Red Hat publishes an update on the data along with some metadata, like which vulnerabilities it fixes and so on. And when it's public, we have some sort of script...