Well, hello and welcome again to another OpenShift Commons briefing. This time we are going to do some talking about security practices in OpenShift, and we're really happy to have with us Amadeus, who has a rather large deployment of OpenShift and a lot of experience with it. Nenad from Amadeus and Diogenes from Red Hat are going to give us a bit of an in-depth dive into their best practices. So I'm going to let Diogenes, who I work with at Red Hat, take it away and introduce himself and the topic. So go for it. Thanks very much, Diane. My name is Diogenes Rettori. I'm one of the product managers for OpenShift, and I'm happy to have my friend Nenad here from Amadeus. We spoke together at Summit last year and now this year as well, so it's been a very, very healthy relationship for both of us so far. This will be pretty much Nenad's show; I'll have a couple of moments where I'll be interrupting him for a few things here and there. But I'd like to say that this is the Amadeus experience in securing OpenShift: being the size of company that they are, dealing with different numbers of people from groups and organizations, from different lines of business. It's good to hear the things that they had to go through, the hoops, the people they had to convince. So hopefully this will be as useful for you as it was for me. So Nenad, please take it away, and I'll be interrupting you momentarily. Thanks, Diogenes. So let me just start with who we are. At Amadeus, we provide IT services for the travel industry, which basically means reservations. When you book a flight to go somewhere, you want to look at prices, make a reservation, you want to have your luggage with you, you want to board the plane, and all these things around flights or hotels; that's the kind of services that we provide. We are mostly a business-to-business company. We operate a couple of large e-commerce websites for airlines.
We do payment processing and other services, and we have actually been using OpenShift since version 3, for about two years now, pretty much since the beginning; we were one of the first customers. We have been using it in our own data centers and in public clouds, and we had some questions around security. So let's first talk about security and why we do security in IT, and hopefully we don't do it like in the picture here; that's how marketing sees security. We do IT security to protect our assets, to protect our computing capacity and data. We do it because we have to protect personal information about our customers and about the users of our customers. In Europe, we have the GDPR, the General Data Protection Regulation, which comes into force with some strong rules around security, and as we do e-commerce and payment processing, we also have to be compliant with industry standards, PCI DSS, for example. So all these things, when you come to a new platform like OpenShift, make you ask some questions. During my presentation, in the upper right corner you will be seeing some icons. These icons tell you who the slide concerns. It starts with the people who are usually the most impacted by security, which are the sysadmins and DevOps. They have the helmet to protect themselves, and sometimes to protect themselves from the next guy, which is the developer, who does not always take care about these things, unfortunately. So we want to educate him. And of course we have our security and compliance people, who take care of all these things about compliance and who react if there are issues; we want to make them happy and to never have any security problems. So how do we do it? First, OpenShift and containers: a lot of things change, and many of the old rules are not applicable. As a company, we have existed for 30 years, and we have learned a lot about IT security.
Things change while the risks remain out there. In our case, we have been migrating a lot of our products to OpenShift over the last two years, and one message that I like to pass on is that the best way to address security in a company, and to have a good relationship between these three different kinds of people, is to bring everyone on board, to start talking from the beginning, and to be honest about what this new thing can do and cannot do and what the new risks are, and then to address them jointly. So far for the introduction; let's now really move to what we do with security and OpenShift. We have to run OpenShift somewhere, so we have to put infrastructure in place. And of course there is the OpenShift architecture, which I will not be explaining, because I assume that anyone attending this briefing already knows this slide quite well; I think it has been shown in a lot of presentations. The thing we want is to deploy OpenShift somewhere. As I said, we deploy it either on private cloud or in public cloud; we usually use OpenStack when it's running on our hardware, or we use public cloud providers. And how do we deploy there? First, and this is something that concerns, as you can see in the icon above, your sysadmins and your DevOps people: we always first build the VM images that we want to deploy. Everything the image needs is already prepared. We always do this from mirrored repositories and mirrored registries; we never download something from internet-available and internet-exposed repositories. Once we create these images, we can also do a scan, for example using OpenSCAP, to check that all the patches have been applied and the image is up to date. And once the images have been built for a given cloud provider, they are shipped to it and used there.
Network design: historically there is this approach, especially for internet-exposed services, where you separate your application into different zones. You have a DMZ, and then you have layered protection for web servers and application servers (there may be several layers like that) and eventually the database, and all these things run in different networks with firewalls in between. As we are now running in a cloud environment, these things change. When we use OpenStack, we use things like security groups from the cloud; the providers have different solutions to isolate the traffic. Of course, there is sometimes a need for sysadmins to access installations, so there is access control through a bastion server. There is also surely the need for an upgrade policy: how do we keep this platform up to date? Basically, we have two approaches. The first approach is that some of our products are what we call boxes, which means the platform, OpenShift, and the application running inside it are one single thing. If we want to roll out a new version, be it a new version of OpenShift or a new version of the application, we will simply build a new instance, and when it is up and running, we will bring down the old instance. This allows a fairly easy way of upgrading; however, it has the cost of having, at a certain point in time, two instances running. Another approach, for more long-living platforms, is to do a rolling upgrade: for example, adding new nodes inside the OpenShift cluster, and once they are up and running, moving the old workloads off the existing nodes and taking those out of the cluster. In principle, we are rebuilding machines; we are not upgrading machines in place. The frequency of the upgrade policy, weekly or monthly, depends on the kind of platform this is running on. Our internal OpenShift architecture, when we run it on our premises, looks kind of like this; it may be different when we run it in the public cloud.
This is a variation of the standard reference architecture for OpenShift on OpenStack. We use it a little bit differently because we have strict separation of internal networks and external networks. In this slide, the boxes with dashed borders represent security groups in OpenStack. For example, the worker nodes and the infrastructure nodes, which share the software-defined network, live in one security group so they can communicate between themselves, but no one outside can join this part. The infrastructure nodes are accessible from a kind of DMZ, through firewalls or other web protection you might have in front of them, or they could possibly be accessible directly from the internet; that is not how we work, though, we always have firewalls in front. Then on the right-hand side we have the people who work on and maintain this thing. First, we have the DevOps people, who might need to install this platform or react to incidents; they access it through the bastion server, or they access the OpenShift masters, either the console or the command line, over HTTPS. And then we have the developers, who in principle rarely access OpenShift directly, especially production OpenShift in our case. They do their CI/CD development with our internal platforms, creating their artifacts, taking their Docker images, and pushing them into an internal repository where they are validated. Only once those things have really been validated are they replicated into the production environment, from where the production environments can load them. This is a way of protecting production OpenShift environments from loading non-validated or non-tested applications. So now we have OpenShift, it's there, it's running, and the first thing would be to log in.
Here I would ask Diogenes to log in, but I will not this time, because Diogenes would simply type developer/developer, wouldn't he? That's because, from my perspective, and I've been a developer in my life, security for me used to be only the login, but there are multiple aspects of security which we'll cover. Some people end up thinking that security is only RBAC, or security is only content, but it's a huge, wide array of topics. And to your point, Red Hat has recently published a white paper called Ten Layers of Container Security, which talks about lots of concerns when using and developing containers, spanning network security, container security, Linux process security, content security, and multiple other areas. So as a developer, he might log in with credentials here, and he might also often run the command that makes him effectively root of OpenShift, with all the capabilities, which is not the correct way of doing things. We want to have the RBAC that Diogenes mentioned; we want to have some control of who logs in and who can do what. If it is a developer on his local machine, he might get something like this once he logs in. But if we're running something in production, we will have more types of users, and our approach to authorizing and identifying users in production is based on something like LDAP; as a company, that's how we manage authentication. There is also the possibility of creating specific accounts for OpenShift using different sources of truth for authentication, but LDAP is a common way and already contains sufficient information to allow us to give different roles to different users. So, for example, in a given project I'm the admin and I have the rights; usually our approach is that DevOps have those kinds of rights.
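A minimal sketch of what such an LDAP identity provider can look like in an OpenShift 3.x master-config.yaml; the host names, bind DN, and attribute choices here are illustrative, not Amadeus's actual configuration:

```yaml
oauthConfig:
  identityProviders:
  - name: corporate_ldap          # illustrative name
    challenge: true
    login: true
    mappingMethod: claim
    provider:
      apiVersion: v1
      kind: LDAPPasswordIdentityProvider
      attributes:
        id: ["dn"]
        preferredUsername: ["uid"]
        name: ["cn"]
        email: ["mail"]
      bindDN: "cn=openshift-svc,ou=services,dc=example,dc=com"
      bindPassword: "<kept in a protected file in practice>"
      insecure: false
      ca: /etc/origin/master/ldap-ca.crt
      url: "ldaps://ldap.example.com/ou=users,dc=example,dc=com?uid"
```

Roles are then granted per project, for example `oc adm policy add-role-to-user view alice -n myproject` for a developer who should only be able to inspect what is running.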
Then we might have support people who can perform some actions, and we have developers; to them we give view rights. They cannot modify, but they can have a look at what's going on and use that to investigate an issue or something that happens in production. So much for RBAC; let's now move to some nice things that exist in OpenShift around security, which we really like to use. We have, for example, OpenShift Secrets, which is a great way of decoupling sensitive information from application code. It's a way to manage and distribute everything which is considered operationally confidential: passwords, credentials, certificates. It is not something to be used for business-sensitive data like credit cards; that has to be handled differently. But for things that are operationally necessary for applications, it's really great, because it separates the management of that information from the applications. It allows secure delivery of this information to the nodes: if it's present as a file, it will always be in memory on the OpenShift node, so it never goes to disk, and if someone somehow gets hold of the disk, they will not get hold of the Secrets. It allows access either through environment variables or through mounted files, which are the usual ways of accessing such Secrets. It also separates concerns: we have a developer on the left-hand side who describes his pod and says he needs an environment variable based on some Secret. Then a DevOps or security person can create this Secret on the master, and at runtime the two are linked together. It's a really great thing that facilitates working this way. However, there are some less great things.
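As a sketch, here is a Secret and a pod consuming it both ways, through an environment variable and through a tmpfs-backed volume mount; the names, image, and values are made up:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
data:
  password: czNjcjN0              # base64 of "s3cr3t"
---
apiVersion: v1
kind: Pod
metadata:
  name: demo-app
spec:
  containers:
  - name: app
    image: registry.example.com/demo-app:1.0   # illustrative image
    env:
    - name: DB_PASSWORD            # exposed as an environment variable
      valueFrom:
        secretKeyRef:
          name: db-credentials
          key: password
    volumeMounts:
    - name: creds
      mountPath: /etc/app/creds    # exposed as files, kept in memory on the node
      readOnly: true
  volumes:
  - name: creds
    secret:
      secretName: db-credentials
```

This is also where the separation of concerns shows: the pod only references `db-credentials` by name, so the person who creates the Secret on the master can be someone other than the developer.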
This is the thing when you're talking with security people and you have to tell them, and it's a tricky thing for me: effectively, today Secrets are stored in clear, or almost in clear, in the etcd database on the master. They are also present in a temporary file system in memory on the nodes, and they are accessible through the Kubernetes API. It would be really nice if we could use an HSM, or we could look at something like Vault as a software-based store of Secrets; it would be a really nice feature to have in OpenShift or Kubernetes to be able to offload this part of security onto such a device. So, Diogenes, can you tell me a little bit about what's going on with vaults and support for such things? Yeah, sure. It's important to point out that Secrets were introduced into Kubernetes even before RBAC; so, for some perspective, imagine that there was no concept of RBAC in Kubernetes, which means anyone could access any object anywhere, and that means that Secrets were essentially openly available to anyone who could see them. So we are working with the upstream Kubernetes community on Secrets encryption for OpenShift, and I would like to point this out, and I'm sharing it in the chat right now, and I know Nenad has it there: all the product management that we do for OpenShift is open to the public. At any point in time, you can go to this URL and search whether there is any task or any engineering work being done related to Secrets encryption or to vaults. If you search, for example, for Vault, you see that there is a target for a later OpenShift 3.x release to integrate with Vault solutions. If you search for Encrypt Secrets like Nenad did, you see we have a card. So it's important that you keep following this link that I gave, and also our Trello board, to see what things are happening.
So, for example, this link mentions that we have committed to 3.6 the encryption of Secrets at rest, and what it means. Thanks, Diogenes. Okay, so those things are not there yet, and we have, for example, things like PCI DSS compliance, which requires Secrets or credentials to be encrypted. So how do we address it? How can we do our move to OpenShift if we don't have such things? There's actually a lot of discussion around this: is it really an issue to have the Secrets stored in etcd like that? Because etcd is anyway one of the main parts of your infrastructure, and if someone compromises it, you probably have other, maybe even bigger issues, because they can compromise your master and then your cluster; the fact that they can steal this kind of secret is secondary. Then you can also encrypt the disk where etcd runs, which protects the data at rest: if someone steals your disk, they cannot retrieve the Secrets. Of course, if someone compromises the running system, they will still be able to access them. So those are the approaches that are possible with etcd. Another possibility is to not use Secrets at all and to store them in Vault: don't use what comes with OpenShift, and put an encryption service in place instead, with a couple of patterns. If you store things in Vault, you might have a sidecar running inside your pod that is responsible for connecting to Vault and giving you the secrets, or you might have an init container which reads the secrets and creates a temporary file that you can then use. Another thing we can do is secrets-as-a-service, which is basically a service running inside OpenShift that you connect to to get your secrets; basically this becomes a vault, or a facade in front of Vault. Also, in terms of compliance, there is the possibility of a compensating control, which basically means that if you can put in place something which is as good a protection, or even better, than the required one, that might be enough for compliance.
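The init-container pattern can be sketched like this; the helper image, command, and Vault path are hypothetical, and the in-memory emptyDir keeps the fetched secret off the node's disk:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-vault-init
spec:
  initContainers:
  - name: fetch-secrets
    image: registry.example.com/vault-client:1.0    # hypothetical helper image
    command: ["sh", "-c",
      "vault read -field=password secret/demo/db > /secrets/db-password"]
    volumeMounts:
    - name: secrets
      mountPath: /secrets
  containers:
  - name: app
    image: registry.example.com/demo-app:1.0
    volumeMounts:
    - name: secrets
      mountPath: /secrets
      readOnly: true
  volumes:
  - name: secrets
    emptyDir:
      medium: Memory     # tmpfs: the secret file never touches disk
```

The sidecar variant is the same idea, but with the Vault client running alongside the application for the pod's whole lifetime, so it can also renew leases.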
And to the first point, I said that you can access Secrets through the API. Luckily, OpenShift provides an audit log of what happened. You can see that someone tried to access a Secret: here, I tried to access a Secret and I didn't have the right, so I get the 401 error. This is something that can be activated on the OpenShift masters; it is not on by default, but when activated, you can monitor the OpenShift logs, and when you see a trigger like this 401, meaning someone tried to access Secrets, you can raise an alarm and check whether that person has the rights or not. And this works for processes too; you can check what they are accessing, not only persons. This doesn't replace auditd, though. Here we're no longer talking about OpenShift but about a part of Linux. What is auditd? It's an audit daemon that can monitor, for example, any system call and provide logs about it. You can write these logs either to files or ship them to syslog and then off to an alerting system, which can raise an alert. It works on a set of rules. The rules can be: if someone tries to access a given file, log something; or if someone or some process tries to make a certain system call, also do something. So you can create rules along these lines: someone tried to access a given file, there's an alert, and we look at whether the person had the right or not, or you can have rule checks that decide those things automatically. And it's something that applies, for example, to the OpenShift masters, to monitor whether someone is really accessing etcd. Here is just an example of a rule that could be put in place: basically, whenever someone tries to open any of the files belonging to the etcd database, we log something via auditd. Auditd can generate a lot of logs, depending on what kind of system calls you listen to.
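To make the alerting idea concrete, here is a small sketch that scans audit-log lines for denied accesses to Secrets. The one-line log format is a simplification (real audit logs spread request and response over entries correlated by an ID, and the exact format varies by version), so treat the field names as assumptions:

```python
import re

# Hypothetical, simplified audit-log line format; real formats vary by version.
SECRET_ACCESS = re.compile(r'uri="[^"]*/secrets/(?P<name>[^"/]+)"')
RESPONSE = re.compile(r'response="(?P<code>\d+)"')

def suspicious_secret_accesses(lines):
    """Return (secret_name, code) pairs for denied secret accesses (401/403)."""
    alerts = []
    for line in lines:
        m = SECRET_ACCESS.search(line)
        r = RESPONSE.search(line)
        if m and r and r.group("code") in ("401", "403"):
            alerts.append((m.group("name"), r.group("code")))
    return alerts

log = [
    'AUDIT: user="dev1" uri="/api/v1/namespaces/prod/secrets/db-creds" response="401"',
    'AUDIT: user="ops1" uri="/api/v1/namespaces/prod/pods" response="200"',
]
print(suspicious_secret_accesses(log))  # [('db-creds', '401')]
```

In practice, a check like this would feed an alerting system rather than print, and would cross-reference whether the user was supposed to have the right.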
We are, for example, listening just to open; someone may decide to listen on read too, if they are willing to handle a huge amount of data. And we can put a similar thing on the nodes. On the nodes, we are not interested in any access to etcd, because there is no etcd there, nor in any access to secrets, because there is a lot of access to the different secrets generated by Kubernetes, certificates and a lot of different stuff that is present there. We just want to monitor some secrets that we consider important. And how do we do it? Any of these secrets' mounts will have a certain prefix or infix in its path, and whenever there is a new mount with this prefix, we start monitoring it, and whenever it goes down and is unmounted, we stop monitoring. Then we have rules which say: okay, someone tried to access this secret, and this user or this process actually has the right to read it; or: no, this is not right, let's raise an alarm and investigate. This at least gives you the possibility to react very quickly to the fact that your secrets may have been compromised. Speaking of auditd, there are other interesting features, because you can monitor a lot of different things. And the OpenSCAP tool can be very useful there, because, for example, if you are talking PCI DSS compliance, there is a profile suggested for it which can be used as a blueprint for the auditd rules that you would put in place on your nodes. And there are a lot of different profiles which are applicable. It's like a teaching tool, I would say; maybe not all the rules are really applicable as they are, but a lot of them are very interesting ones.
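For illustration, rules of this kind can be expressed in auditd's watch syntax; the paths below are examples, and the secret mount path on a node contains the pod UID, which is why those watches have to be added and removed dynamically as mounts appear and disappear:

```
# Master: log any read access to etcd's data files
-w /var/lib/etcd/ -p r -k etcd-read

# Node: watch one important secret's tmpfs mount (illustrative path;
# <pod-uid> changes per pod, so a small watcher process manages these rules)
-w /var/lib/origin/openshift.local.volumes/pods/<pod-uid>/volumes/kubernetes.io~secret/important-secret -p r -k secret-read
```

The `-k` key then makes it easy to filter these events out of the audit stream with `ausearch -k etcd-read` or in whatever alerting system the logs are shipped to.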
There's another thing which we find very interesting in OpenShift, which we have looked into but not yet got to use, though we will probably start using it, or something similar, at some point. We have secured communication coming into our cluster, and our clusters are protected, they are on our premises and physically protected, but it would be really nice if we could add a layer on top and also encrypt the information inside our clusters, between services, instead of using plain HTTP or unencrypted MongoDB communication or whatever there is. It would be really nice if we could always communicate using TLS as the transport protocol. That's why we like service serving certificates. It's a feature that OpenShift provides: if we annotate a service with the annotation I show there, OpenShift will generate a certificate automatically, sign it with its own CA, and furthermore provide the CA certificate to any client services that might use it. We can use this easily to establish trust between the two and to establish secured communication between consumer and service. Now we move to containers. We all know the talk about containers: there are a lot of nice things about them, how easy it is to ship them into production, and you use them everywhere. Developers generally love them, which is great, and we want to empower developers, who want to be as agile as possible. So it might happen from time to time that developers really want to be agile, and they want to expose a web service. A web service will listen on port 80, because that's the normal port for the web. And of course I cannot start a service which listens on port 80 on Linux unless it runs as root or a privileged user, because it's a port below 1024.
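The annotation in question looks like this (service name and ports are illustrative); OpenShift then generates a signed key pair into the named secret, which the pod can mount for its TLS listener:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: backend
  annotations:
    # Ask OpenShift to generate and sign a serving certificate for this service
    service.alpha.openshift.io/serving-cert-secret-name: backend-tls
spec:
  selector:
    app: backend
  ports:
  - port: 443
    targetPort: 8443
```

Clients inside the cluster can then verify connections to `backend` against the cluster's service-signing CA certificate, giving TLS between services without managing certificates by hand.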
And then sometimes they might say, okay, I want to work with the Apache web server, and let's take version 2.4.12, which is, I think, quite old, like several years. Or they go to Docker Hub and they search and they find this cool, super-powerful base image from, I don't know, Neverland somewhere, which is called Black Hat, and they just pull it and run it in production. Who knows what this image actually does? So we try to educate them, to make them learn. All of these things are easily solvable in OpenShift. Our approach is basically that we don't allow root on anything which is business load or functional load; those never have root access. One way of addressing that is that OpenShift by default, when it runs an application, runs it with an arbitrary user ID. That often comes up as a problem for developers, because they take some image, and this image requires a specific user. There's often no particular reason for that requirement; it's just like that, because some permissions are given to a specific user, and that's how it works. This is fairly easy to address in OpenShift, because the arbitrary user ID that OpenShift uses actually belongs to the root group. The root group has no specific rights; it's just called root. So a good practice for your images, whenever you need to set permissions for anything that your application needs to access, is to give the rights to the root group as well as to the user; as the arbitrary user belongs to the root group, it will be able to read anything there. The second thing is running on port 80, the example of Apache: even the base images come like that, but it's extremely easy to change. We can listen on a port like 10080 inside our container, in which case our container is not a privileged container, and we can map this to port 80 using standard Kubernetes and OpenShift services.
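A sketch of that mapping: the container listens on an unprivileged port, and the Service presents it as port 80 to the rest of the cluster (names are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
  ports:
  - name: http
    port: 80            # what consumers of the service see
    targetPort: 10080   # unprivileged port the container actually listens on
```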
So we can still keep our logic that, okay, it's a web service on port 80, or it's a website, while having sufficiently protected our container from the risk of using privileges which are not necessary. And there are a few other possibilities in OpenShift. There is the possibility to use security context constraints: maybe we have a need for a privileged container, or we want to access something on the host machine, or to have a given specific user, and security context constraints allow us to give those rights to some of the pods, adding capabilities. Or we can go in the other direction with seccomp, secure computing, where we can decide to restrict even more and say, okay, our pods are not allowed to ask for the time; we would consider that very restrictive, but it allows us to restrict calling different system calls. So there are these two approaches, one going further and giving more permissions, one going towards giving fewer permissions to the applications. And of course, we don't want to pull things from untrusted sources on the internet. So in Amadeus, all images come from our internal registry, and we often use the Red Hat images as base images; we mirror, for example, Red Hat's repository into our own. If images cannot be mirrored from a trusted source, we rebuild them internally from source code on our internal infrastructure, and our internal infrastructure never accesses Docker Hub directly from the build machines, so all builds must be done from images which are already present and already trusted. And in production, we only give access to images that have been validated, and then we mirror them. One other possibility is to run security scans across the images. There are actually two tools from Red Hat for this, image-inspector and oscap-docker, and there are other tools which do these things as well.
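As an illustration of both directions, a custom SCC could look roughly like this; the name and choices are made up, modeled on the built-in restricted SCC plus a seccomp restriction:

```yaml
kind: SecurityContextConstraints
apiVersion: v1
metadata:
  name: restricted-seccomp       # illustrative custom SCC
allowPrivilegedContainer: false  # no privileged pods
allowHostDirVolumePlugin: false  # no host directory access
runAsUser:
  type: MustRunAsRange           # arbitrary non-root UIDs from the project range
seLinuxContext:
  type: MustRunAs
seccompProfiles:
- docker/default                 # limit the system calls pods may make
users: []
groups:
- system:authenticated
```

Going the other way, granting more rights, is typically done by binding a more permissive SCC to a specific service account rather than loosening the default for everyone.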
We are using a Jenkins pipeline, so one step in the pipeline could be validation of whether the images have security vulnerabilities or not. It's not something that we consider blocking today, as we have experience with images that are marked with security vulnerabilities because, say, there's a JDK 1.7 inside, but we never use that JDK 1.7; it's more for informational purposes. But I think there are some things coming to help us on the Red Hat side as well, isn't it? Thanks very much. So what you're seeing here on the screen is the recently launched container catalog at Red Hat. It has a few interesting things. We launched this idea of a health index, which tells, from a Red Hat perspective, how healthy we think the content of that image is. Here's the example of the JBoss EAP 7 image, and you see that from our perspective we classify it as a B. For a B, Red Hat still considers it to be a healthy container. So if you click on a specific tag, can you please click on a tag? Yeah. Do you want the security one? Yeah. So if you click on a specific tag, it will tell you why we think that image is a B. It will tell you, for example, that this image has one important vulnerability that has not been fixed for at least 30 days. Our health index says that there should be no critical vulnerabilities in the image for more than seven days, and no important vulnerabilities for more than 30 days. For the content we create, ship, and package, we own the lifecycle. That means that customers like Amadeus can trust that our images will always be safe. An interesting point is that having Red Hat owning the images and shipping them through the lifecycle is a key point for business. I was just recently reading a thread about how a lot of people use the Busybox image to build their own containers, and the version that was up there until a few days ago even had malware inside the image.
So just imagine the type of impact that having malware inside a container image would have on your company. So, Diogenes, just this morning one of the ISVs, Rocket.Chat, put theirs into the catalog; is that checkable using the same thing? Yes, it is. It's the exact same catalog that's used for our own software; it's also used for any of our ISV partner images. Anyone can go to access.redhat.com/containers and search for the images that are available there, and they follow and go through the same level of updates and lifecycle management that we apply to our own images. Totally cool. Thanks. Finally, some final thoughts about how we guide ourselves when doing security. First, we are quite confident we can manage the platform so that it is secured against known container vulnerabilities; there are risks, and we are sure that we can actually manage those. It definitely does not solve application vulnerabilities, but it can help with things like secrets management and things like certificates. There is also the question of multi-tenancy. True multi-tenancy would be applications that are completely isolated, something which OpenShift supports; we find it quite a complex solution and are still working on it, trying to figure it out. And we always start with the principle of least privilege: granting new capabilities to applications only as needed, not starting with privileged containers at the beginning. There are a couple of things that we still miss, and actually many of those are on the roadmap or even implemented. Encryption for secrets: clearly we see that it's coming. Network policies, to manage who has access to what inside the cluster: that is, I think, already part of the latest version of OpenShift. It would be really nice to have image inspection integrated more with the platform itself, as it currently works as an outside tool.
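Network policies of the kind mentioned can be sketched like this, using the beta NetworkPolicy API of that era (the labels are illustrative): only frontend pods may reach the database pods in the project.

```yaml
apiVersion: extensions/v1beta1   # the beta API available at the time
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-db
spec:
  podSelector:
    matchLabels:
      app: database              # the policy applies to the database pods
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend          # only frontend pods may connect
```

This replaces the coarse old "different networks with firewalls in between" zoning with label-based rules inside the software-defined network.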
It would be nice to have it there. RBAC is a great thing; maybe some out-of-the-box new groups, like a security manager role that can see what has been deployed and manage secrets, all these things would be really nice to have in OpenShift. Anyway, as I said, we already have something with which we can operate in a fairly secure manner. And that would be it. Thank you. Awesome. Well, I saw bits and pieces of this at Red Hat Summit, the video and things, so I'm really appreciative of the time you guys have taken to come and give us this briefing. About the secrets stuff in the chat: I was talking with Diogenes, and it should be coming out in Kubernetes 1.7, which, if I read my release cycle right, is actually the end of June. So how long do you think it would take you, if it gets pulled in, to incorporate that into what you're doing at Amadeus with the next release of OpenShift, and to take advantage of it? So from an OpenShift roadmap perspective, we are often behind the Kubernetes release; it varies between 8 to 10 weeks. So it should be coming in OpenShift 3.7, around 8 to 10 weeks after that. Yeah, because we do a lot more around stability and checking than a few others. So that will probably give you the encryption of the secrets that you're looking for, hopefully, and we can have you back on to talk about other things that are now working, and maybe some new things that you're doing. The other thing that I wanted to say is that I really wanted to thank Amadeus for all the contributions you guys have made to OpenShift Origin; the give-back and collaboration with you guys has been pretty awesome. They're a shining example of collaborating with customers and getting contributors into the fold. So this has been very great and I really appreciate it. I'm not seeing any questions though, so you've done an incredibly thorough job. Oh, here's one: how are you preventing images and containers from taking all the compute resources of a node?
Are you implementing resource limits in Kubernetes/OpenShift, and/or do you monitor customers for abuse cases? Okay, yes, we implement resource limits; that's one way. And separately, of course, we do monitor what the applications do, because there are other interests as well: do we want to scale up, and what are really the needs of the applications? In our case, the customers are currently internal customers, so we have quite good control over what they do. But indeed, the principal approach is putting resource limits in place, and then monitoring is more about seeing the overall picture of what happens. And let me take the second question, which is what we are using for monitoring. We have an internal monitoring stack that we developed ourselves, because we already operate something; we have a stack we developed that we use for monitoring. It's not based on Hawkular or on OpenShift monitoring at the moment. Are you using Prometheus at all, or thinking about it? We are looking at it. That was last week. Perfect. They have in-house monitoring, yes. Perfect. All right, going once, going twice... and oh, here comes another question. Do you do application security, user authentication, and authorization via OpenShift mechanisms? Or is it inside the application, i.e., external to OpenShift? For our customers, we have our own authentication and authorization solution, which we have been using for decades. So we don't rely on OpenShift mechanisms or SSO; we have our own solution, which we actually can run in OpenShift. And they're still asking more questions about monitoring: what is it based on, how custom is it, what are you using? What we run for OpenShift is pretty much something we built for OpenShift. We do use some Nagios for infrastructure things, but that's more for the legacy and infrastructure stuff; for OpenShift it's something we developed bespoke internally, based on open source technologies.
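To make the resource-limits answer above concrete, here is a minimal sketch of the two pieces usually involved: per-container requests and limits on the pod, and a project-level LimitRange that fills in defaults when a team forgets to set them. All names and values here are illustrative assumptions, not Amadeus's actual settings.

```yaml
# Per-container requests and limits (values illustrative).
apiVersion: v1
kind: Pod
metadata:
  name: demo-app
spec:
  containers:
  - name: app
    image: registry.example.com/demo-app:latest  # hypothetical image
    resources:
      requests:
        cpu: 100m      # what the scheduler reserves for the container
        memory: 128Mi
      limits:
        cpu: 500m      # CPU is throttled above this
        memory: 256Mi  # the container is OOM-killed above this
---
# Project-wide defaults for containers that do not set their own.
apiVersion: v1
kind: LimitRange
metadata:
  name: project-defaults
spec:
  limits:
  - type: Container
    default:           # applied as the limit when none is given
      cpu: 500m
      memory: 256Mi
    defaultRequest:    # applied as the request when none is given
      cpu: 100m
      memory: 128Mi
```

A ResourceQuota on the project can then cap the total across all pods, which is what keeps any single tenant from consuming a whole node.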
Prometheus is one thing that we are looking into. We're going to use Grafana for dashboards, and we already have some ways of collecting metrics which were developed in-house and which we are using. No, we don't have a GitHub link for that solution; it's not something we have open sourced. We have open sourced other things and we contribute to Kubernetes, but not this solution. Yes. So, lots of people are asking, and I think monitoring has become one of the hot topics within the Commons briefings. Next week we're doing another one: Robust Perception's Brian Brazil, the gentleman who spoke on Prometheus, is going to come back and talk about high-level perspectives on monitoring, the different solutions, and what monitoring actually means. And I think we'll be doing a lot of monitoring talks, because there are lots of approaches and everybody has their own special sauce. It would be wonderful if there was one way or one stack that we could recommend, and I think some of the pieces you mentioned, Grafana and others, are parts of everybody's puzzle. I can talk a little bit about what we have in mind from a product management perspective; so far this is upstream work. We're doing upstream work with Prometheus, so there should be a formal integration for Prometheus in OpenShift upstream, meaning OpenShift Origin. You may have seen the announcements around Jaeger, which is for distributed tracing; we're working upstream on that as well. Another area we're working on, somewhat related to monitoring, is the Istio announcement, a service mesh that has distributed tracing and overall monitoring built in. So there are lots of fronts where we're tackling this upstream, and some of it will be coming to the product. The Jaeger team, Yuri from Uber and a couple of Red Hatters, are going to give a Commons briefing in the next month or so.
So as I said, there will be lots of briefings on OpenTracing and Prometheus coming down the pike soon, because we know it's important to everybody. And those are exactly the things that we are interested in: OpenTracing and Jaeger will be very interesting to look into, and Prometheus integration is something we're looking at ourselves; if it comes with the product, that will also be a great thing to have. I'm looking forward to those Commons briefings. Well, they're all on the events calendar, so if you go to commons.openshift.org/events.html you'll get there, and there's a whole slew of things coming up. We're going to two days a week, Wednesdays and Thursdays, because there's so much coming down the pike, and I think we're almost at our 300th member organization, so we've got lots of people who want to talk about lots of things and different aspects. It's going to get busy. All of the stuff that we talked about today, including the PDF of the slides and the video, will be up on blog.openshift.com shortly, usually by Monday or Tuesday of the next week at the latest. And you can find all of the others, almost 75, or I think this is the 76th briefing I've done so far, on the RH OpenShift channel on YouTube, and there's a playlist for them. So stay tuned, there'll be more next week, and keep it coming, because it's been great working with Amadeus, having folks join us, ask these questions, and push the product forward even further. It's all about you guys. So thanks very much, and thanks again to Amadeus and Diogenes for taking the time today. And thank you, Diane, and thank you, community, for the product.