All right, folks, I'm gonna start my little bit right now. Welcome if you're here in the chat. We're two minutes after the hour and we've got a whole ton of people coming in through reception. I just wanted to welcome you to this OpenShift Commons Gathering that we do at KubeCon every time, and we're thrilled that you're here and sharing some of your KubeCon time with us. Today is really about unlocking the potential of OpenShift Commons and making sure that you get the most out of the day. I'll be here on the main stage moderating all day long, and Stu Miniman will be in track two as my co-moderator for that track. So we're just gonna motor through here. What we're trying to do today is take open source, open communities and open collaboration to the next level by creating connections that will hopefully drive continuous innovation into your organizations and into the open source projects that we're all working on together in the CNCF and in other foundations and other spaces. And OpenShift Commons, if you haven't joined us yet, we'd love it if you would. Our goals are pretty simple: to promote peer-to-peer interactions through events like this gathering, the weekly briefings and all the interactive groups that we have. There's a pretty active mailing list and Slack channel. And of course, we love it when you contribute code to any of the projects that are part of the OpenShift ecosystem, because it's really about creating connected communities. It's no longer about just trying to get your code into our systems. It's about building connections across all of the OpenShift communities, including the upstream ones, and between stakeholders, and Commons is really the place for your organization, if you're using OpenShift or working on the projects, to accelerate your success. To do this, we all act as resources for each other, share best practices, and create forums like this one today to share information across all of the different organizations, projects and product groups that are part of Red Hat and our partner organizations. We have over 600 member organizations already in the OpenShift Commons, and if you'd like to join, the URL's there at the bottom; it's really easy to find us at commons.openshift.org. Today's agenda is really for end users, by end users, and it's gonna cover three main areas: release updates, roadmaps and beyond. Those are gonna make up the bulk of our first couple of talks, which will have some deep dives into the latest release of OpenShift. Then we're gonna have a peek at the future with Clayton Coleman and Joe Fernandez around the future of Kubernetes as a control plane. But the bulk of what we're gonna talk about today are case studies from folks like Anthem, Discover Financial and Rodian Schwartz talking about their implementations of Kubernetes and OpenShift and the different workloads that they're running on them. We have a lot of great talks by end users giving us insights into what they're doing in the upstream. You're gonna see some deep dives into the RHEL container tools today, and you'll also see some interesting talks about OKD, the OpenShift open source Kubernetes distribution. We have a great talk from Anthem about their use of SPIFFE and their new initiative around Health OS. And we even have a great group of folks over at ODC Nord who are gonna be talking about Submariner. And we do have a code of conduct.
We're following the KubeCon EU code of conduct, the same one we use for all of our co-located events. We're really dedicated to providing a harassment-free experience for all of the participants in our events, whether they're in person or virtual. So please be kind and be professional, and follow whatever guidelines your employer has governing appropriate workplace behavior, and the applicable laws. That's really the key for today. So we have an interesting setup today. We're using a platform that may be new to some of you; it's called Hopin. The first main-stage session will be this welcome that you're hearing from me. I'm Diane Mueller and I'm the Director of Community Development here at Red Hat for the Cloud Platform BU and OpenShift, and one of the co-chairs of the OKD working group. And we've got two great talks, one by a group of product managers talking about the latest release, on the main stage. And then what we're gonna try and do is answer your questions with the speakers via chat and the Q&A tab in Hopin. And if you need support, please go to reception; there are a few folks waiting for you there. And then after we finish that, we'll pause for a minute. You'll see my face come back on again, and then we'll be streaming from the main stage everything that's listed in track one. And then in track two, you'll find my cohort and colleague and fellow moderator, Stu Miniman, who you may know from theCUBE, and who will have lots of Star Wars jokes because it is May the 4th today. And then we have a number of other wonderful case studies as well there. So that's what we're really gonna do today, and we're a few minutes off already. So let's get started. I'm gonna try and kick off the first of our talks with a few of our product managers, so stay tuned while I get ready to queue that one up. Hey Rob, welcome. Good morning. I guess good afternoon. Good afternoon wherever you are. Hmm, this isn't working. All right, folks, I'm just gonna restart that. ...what's going on in the OpenShift universe for this year. We're also joined by Jamie and Nina who are gonna pop in and out. Okay. All right, folks, we're having a few technical difficulties, so hang on. We're gonna restart and see if we can do this. I'm gonna share my screen again. Welcome to the world of multiple videos. Share audio button, that's what we were missing. All right, hi everyone. Welcome to the OpenShift roadmap update for 2021. Thank you for joining us in OpenShift Commons. I am Rob Szumski. I'm joined here with Michael Elder. You wanna say hey? Hey everybody. And today we're gonna talk about what's going on in the OpenShift universe for this year. We're also joined by Jamie and Nina, who are gonna pop in and out and give us some demos of some cool stuff while we talk about that universe. So I wanna roughly set up our conversation to be around something that we just announced in May of this year, which is OpenShift Platform Plus. And that really is bringing the OpenShift platform up a layer into the multi-cluster arena, realizing that you need multi-cluster security, you need a global registry, and you need multi-cluster management to be successful in today's hybrid cloud world. And so we're not gonna focus on the actual product today, but we're gonna talk about all the different pieces of that, what's going on in those upstream communities, and some cool features along the way. So I wanna talk about this idea of standardized tools, because you're not just running one or two clusters; you're gonna run maybe a hundred clusters.
And getting tools and policy and management down to those clusters is extremely important, because today's applications are more spread out and distributed than ever. Clusters are more connected than ever, and that means that you've gotta get that fabric orchestrated correctly. So we're gonna talk about all this stuff, some of the tools that are available upstream in both Kubernetes and other ecosystems. It should be a pretty interesting day. And you're gonna see these two icons as we go through. The parachute, that's when we're gonna zoom all the way down into a single cluster, because what's happening on a disk of an operating system, or a policy that makes its way all the way down into the nitty gritty, is just as important as zooming all the way back up into that multi-cluster world with the rocket ship. And the work that we do for fast packets for maybe a telco workload is beneficial to your application even if you really are up at the multi-cluster layer. So you gotta build all these capabilities up, and they're all important up and down. So let's jump into it. Security, what's happening upstream? Two main things. First, with version 1.21 of Kubernetes, pod security policies are deprecated. Now this doesn't mean that they're gonna be removed yet; I think that's gonna happen in 1.25. But a few things to note here: the replacement for this is probably gonna be a little bit more simple than what you've been used to, and so users with complex needs might like some of the features of the Open Policy Agent. We're gonna talk about how that fits into some of our tools later today, and you can run that today on OpenShift just fine. And then of course, OpenShift SCCs, that's security context constraints, are unaffected by this. This is kind of what pod security policies were modeled after in the beginning. So we kind of had your back before, we've got your back now, and we'll have your back in the future too. That's one of the cool parts about innovating inside of OpenShift directly. The last big change is user namespaces. This is a kernel-level construct that's made its way into Linux, and together with SELinux, it helps protect your namespaces from each other on the cluster. Now I don't wanna get into too many of the technical specifics, but in the CRI, which is the container runtime interface for Kube, this is now there, and so this is the default for talking to runtimes. And CRI-O, which is what we use in OpenShift, can do the user ID mapping in and out of the container, which is what actually does that user namespacing. So we're waiting on Kubernetes to roll that out, and then that's gonna make its way down into OpenShift as well. The proposal there, the KEP in upstream parlance, is still moving forward; some of our engineers are pushing that. So that's one to take a look at when it comes down. All right, other big news in the upstream arena and the Red Hat world is Advanced Cluster Security. This is based on our acquisition of StackRox. We're super excited to have these folks join the Red Hat family and build this into OpenShift and OpenShift Platform Plus. So we're starting the open source process right now for all the StackRox tools. And we're really excited about this because it's the most Kubernetes-native security product out there. We're gonna talk about some of the cool features and see a quick demo of it in action.
But it's really about this idea that you've gotta secure your entire supply chain, from the shift-left part where you're actually building code and building containers, to when they're actually deployed, and then after they're running. Malicious things can still happen even if you pass a container scan, for example. And so what this does is use a combination of watching host and cluster state. There's some cool technology here: if you've heard of eBPF, this is kind of the Swiss Army knife for doing host probing into what the kernel is doing. So we take advantage of that, plus, at the Kube layer, admission controllers, looking at the audit log and some other cluster events, as well as your typical image scanning. So we're gonna see a really cool demo of that in a second. So let's parachute on down. What does this look like in a single cluster? I talked about runtime security as one part of your threat matrix here, and this is a new capability for OpenShift, completely new. What this looks like is, let's say we've got a pod and it's got Python in it and we've got an application, and there's a vulnerability that gives you some local execution rights inside of that pod. If you're gonna move laterally around in this environment, what you need is some tools. So you might pull down a Netcat binary and then start using it to wreak havoc. Here you can see how that actually works, because you're gonna go down into the kubelet when you're running your Python, but you're also gonna run that Netcat, and an eBPF probe from a security sensor that's installed is actually gonna block that pod from executing that Netcat. So this means that, A, that binary can't be run if it matches your policy, but it's also gonna protect network traffic moving laterally as somebody tries to exploit other applications that might be in this environment. And so we're gonna look at a demo, and I'm gonna hand it over to Jamie for more. Hi team, welcome to Red Hat Advanced Cluster Security, or as we call it, ACS. To set some context, this is a cyber kill chain. It ranges from reconnaissance, which is understanding your victim and the system architecture, through to acting on your objectives when you're trying to attack. In the world of security, the reasons can be anything from denial of service, stealing data, or installing crypto miners for financial gain, to really just because you can. The US Council of Economic Advisers estimated in 2018 that malicious cyber activity cost the US economy between $57 billion and $109 billion in 2016. Since 2016, it's only increased. So let's think about this in terms of an onion, or defense in depth. Security should always have layers to it. There are many opportunities to spot attackers along each part of the kill chain. Today, we're gonna focus on reconnaissance through exploitation. First in the cyber kill chain is reconnaissance: we're going to find out everything we need to know about the application to start to exploit it. Then we're going to search for vulnerabilities to weaponize, create a back door, and deliver our exploit to get it up and running. Afterwards, attackers will frequently look to establish a foothold by installing malware, attempting to escalate privileges, and establishing a command-and-control server. This is ultimately going to get them to the core objective, which can really vary based on their needs. Containerization as a movement has really helped the security team.
Containerization has isolated our attack surface. It's taken post-exploitation activity and made it harder to move through the environment, because our containers are self-isolated. This has only helped the security community, and now it's time for this community to take advantage of it. So today, I wanna walk you through the anatomy of an attack from a defender's perspective and that of an attacker: as a defender looks to investigate an issue, protect against reconnaissance through delivery, and monitor for exploitation. I also wanna show you an attack as it could happen in the wild. So let's get started. I'm gonna transition over to Advanced Cluster Security here, and as an incident responder, I wanna check the violations. Now, I'm gonna search right away for visa processor, and as I look at visa processor, I start to get concerned. I can immediately see that a shell was spawned by a Java application in the environment, and this concerns me. So I wanna investigate further; I click on it and I can see the java command and the bash command were executed with several different arguments. I go down, I start looking at those arguments, and I can see really clearly here the container ID, the user ID, and that this user is escalating their privileges. They are installing Netcat and then they're using Netcat to initiate a reverse shell from their environment to shell.attacker.com. Well, at this point, I'm reasonably confident that we've been attacked. I'm starting to initiate my incident response procedures and I've lit up the SOC. And for those of you who don't know, the SOC is the security operations center that responds to these types of incidents. But right now, as I'm initiating this procedure, I wanna get more context. I wanna see: how is it that someone has a reverse shell in my environment? So I go over to the deployment and I check out the container configuration, and immediately as I look at this, something jumps out at me. There's a warning that the CVE data may be inaccurate: this base image is literally so old that the source is no longer providing updates. You can see here, we're on WN8. This is no longer maintained. We can also look and see, oh my gosh, this is Apache Struts. So one of the top riskiest components is Struts, which was a headline vulnerability in 2017; this was the cause of the Equifax breach. And I know that someone is spawning a shell from Java. So at this point, I'm reasonably confident that someone is exploiting Struts in order to get into my environment, and I need to address this. Let me switch my persona to the developer for a second. So I scroll down; I have a ton of image findings. I can see the CVE, I can see how long it's been in my environment, whether it's fixable, and its CVSS score. And this is really cool information, but as a developer, this isn't enough for me. So I'm gonna go over to my Dockerfile and I'm gonna check out where these CVEs are. As I look here, I can see the components associated with my image. I can see that there are several CVEs; there are 137 CVEs in what appears to be my base image. And in order to address these, I'm gonna have to upgrade my base image, and whatever CVEs are addressed in that upgrade, I will resolve here. I'm also gonna check out my RUN instructions, because this is where my application is. And I can see here, if I were to upgrade curl, then I could address several CVEs in curl, and all I need to do is modify this RUN instruction.
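As a rough illustration of the kind of Dockerfile change being described here, a sketch might look like the following. The registry, image names, and versions are placeholders, not the ones from the demo; the two fixes are the base-image bump and the package upgrade inside the RUN instruction:

```dockerfile
# Hypothetical example: image names and tags are placeholders.
# 1. Bump the base image to a maintained release so OS-level CVEs are fixed upstream again.
FROM registry.example.com/base/java-runtime:11-maintained

# 2. Upgrade the vulnerable package the scan called out (curl here) in the same
#    RUN instruction that installs application dependencies.
RUN apt-get update \
 && apt-get install -y --only-upgrade curl \
 && rm -rf /var/lib/apt/lists/*

COPY target/app.jar /opt/app/app.jar
CMD ["java", "-jar", "/opt/app/app.jar"]
```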
So this is really cool; it helps me target where in my Dockerfile I need to address these issues. But it's still not enough. What do I upgrade my component to? So now I'm gonna switch over to the component screen, and you can see Struts highlighted here. Apache Struts has 10 fixable vulnerabilities, and 38 vulnerabilities overall. And in order to fix those 10 fixable vulnerabilities, I have to upgrade to version 2.3.29. You can see its top CVSS score is 10. That is terrifying, and that's a critical vulnerability. You can see the location of Apache Struts, and you can even see this component's used in four different deployments. This might not be the only place that I need to address this. One of the cool things about this view is that you can see Struts is a Java vulnerability, so this is a language-level indication, and the remainder are packages installed on our base operating system. So that's really cool. I know exactly how to address this, but it's not in my traditional manner of addressing this. So Red Hat Advanced Cluster Security provides developers an easy tool to look at this information in their CI or on their local host and understand that this is something they need to address. It gives them the context to address it, so we can shift left and improve our return on investment in our vulnerability management program, which is ultimately going to address the need to prevent reconnaissance and potential weaponization. So that's really cool, but it's time to go back to our story about Struts. It's time to switch to an attacker's perspective. What happens when someone is trying to exploit one of my applications? So here, I have a website running. Now, it's not always going to be as easy as saying, this is an image that's vulnerable to Shellshock, please exploit it. But sometimes it really can get that easy. So I'm going to conduct some reconnaissance. I'm gonna go over to the developer tools, I'm going to check out this website a little bit more, and I'm gonna refresh the page. I can see, oh, my server header says that this is based on Apache 2.2.22. Easy, guys; let's go to Exploit-DB. We're gonna search for Apache, if I can spell, and we're going to look through all of the vulnerabilities known to be associated with Apache. Now I can start selectively looking at things that are known to be vulnerable in the environment and start to understand. So if I click on the top one, you can see this is available in Metasploit, which is a tool that attackers commonly use. I can easily just download this exploit and start to test whether this functions within my environment. One of the really cool things about this is you can see the code, live. So we have pre-baked exploit code on the internet, publicly available in a database called Exploit-DB. And this has shifted the paradigm of how attackers attack, because no longer do you have to be technically elite in order to exploit major vulnerabilities. All you really have to do is know how to copy-paste and read code, and be able to modify it somewhat to meet your own needs. Now that's really cool, but back to Shellshock. Here I wanna show you an exploit of the Shellshock vulnerability. Now I'm a good attacker; I need to let these users know that their site is vulnerable. So I'm gonna go deep into some subreddits, I'm gonna find a good meme, and I'm gonna exploit them to let them know that I care. Now, I could have installed a crypto miner, I could start to escalate my privileges, but I'm gonna be kind.
I'm just gonna deface their site today. So let me go to the terminal and execute this command. You can see here, all I'm doing is defining an environment variable through the user agent header. I am exploiting Shellshock by issuing some trailing commands, echoing for legibility, and I'm catting the index.html file. So at this point, I've got a shell. I could begin to move throughout the environment, but I'm just gonna have a little bit of fun. I've already gone on some subreddits and I'm going to deface their site. I refresh the page and you can see quite easily: I is in your computer, I am stealing your data, we have pwnage, let's go. So that was really easy, and in reality, it can be that easy in the real world. So let's switch back to a defender's perspective. Now, if we go to ACS, we can see that a violation occurred, really easily, by clicking on the violations. And the top one here is what I just did. It's an unauthorized process execution running in the shellshock deployment. And if I look here, I can start to see, oh, this person is putting cat pictures from Reddit in my environment. That's really cool. Now I can initiate my response, but you know what? If I had this in blocking mode, it would have been even easier. Advanced Cluster Security takes a different approach to blocking: we can kill this pod. So if this deployment is running with replicas, then immediately, once this activity occurs, I can kill this pod. My attacker's defacement of my website is then reverted back to what is normal. And I have alerted my security team, I've fixed my website, and if someone used this to install a potential backdoor in my container, well, I've removed that. That's really cool. I've instantaneously addressed several issues. So let's pretend for a second that I wasn't just defacing a site. A lot of you are looking in the audience right now and saying, Shellshock is old news. Someone out there is saying, Jamie, that came out in 2014, this is super old, why are you even showing us this? Well, to me, this came out three months after I started in cybersecurity. This was my introduction to major vulnerabilities. So I figure, why not make it my own? This is a real-world example of something that happened to me. So this, and server-side request forgery attacks, could easily lead to a container compromise the exact same way I showed you today. If that container is privileged, then look out, because as an attacker, I'm gonna go right for some authentication credentials. I am going to try to escalate privileges from there, and maybe I can even get my hands on a nice old kubeconfig. And if I get my hands on a kubeconfig, then I've got authenticated access to the Kube API, and I can exec into a pod to start to move laterally and maybe even install a backdoor or some malware. So let's try that here. Well, I have access to the Kubernetes API. I'm going to do an oc exec -it into the shellshock deployment and I'm gonna run the bash command. Wait a few seconds, and what is this? You can see here, I'm immediately blocked by an Advanced Cluster Security policy. Now that's really cool, because I've effectively told the attacker: no, you can't install your malware, you can't exec into a pod, you can't get access to my resources through the Kubernetes API. Now, because containers are ephemeral, the ability to exec into a pod generally isn't needed. It is sometimes, for troubleshooting purposes, but in general it won't be.
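Complementary to the ACS enforcement just shown, plain Kubernetes RBAC can also withhold exec rights from most users. A minimal sketch (the role and namespace names are made up) is a read-only role that deliberately omits the pods/exec subresource:

```yaml
# Users bound to this role can inspect pods and logs but cannot `oc exec` into them,
# because no rule grants the create verb on the pods/exec subresource.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader-no-exec
  namespace: payments
rules:
- apiGroups: [""]
  resources: ["pods", "pods/log"]
  verbs: ["get", "list", "watch"]
```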
And by being able to block and monitor exec commands, you should be able to set policies to monitor access to your container infrastructure. You can also set up an exception process at the deployment level to allow legitimate troubleshooting activity. So let's switch back to the UI for a second. You can see here at the top that this is an enforced action. My security team has been alerted that this was blocked, and we can dig in to understand more: hey, is this a compromised credential, or has one of my employees made a mistake? So, a few last notes about the Advanced Cluster Security advantage. If I go to our policy sets, we come with over 676 out-of-the-box policies with reasonable security controls to help you scale security in your OpenShift environment, and customers use these every day to improve how they operate within the Kubernetes and OpenShift ecosystem. We also help you understand and prioritize risk in your environment based on a holistic risk management strategy. So if I click on risk here, you can see our top priority is visa processor. As you look at visa processor, you can see this is vulnerable to Apache Struts, it's probably been deployed in an emergency, it's got secrets in its environment variables, and a ton of vulnerabilities. And if I scroll down, you can see it's privileged. So I could go use this container, get to the host operating system, and see how I can begin to move laterally. Now, I really hope you learned something about the attacker's perspective and the defender's. We're really excited to be part of the Red Hat team, and if you're interested in finding out more about Advanced Cluster Security, or wanna request a demo, feel free to reach out to me or our team. Thanks, Rob, and back over to you. All right, thanks, Jamie, that was great. All right, so let's zoom back up into the multi-cluster arena for security. Here's our diagram again. You can see that we've got our multi-cluster tools, and of course, the policies that we just took a look at, we don't want them on just one cluster; we wanna get them down to all of our clusters. And it's not just maybe an execution-blocking policy, but it's network policy, it's CIS compliance. We're gonna hear a lot more about this in the open cluster management arena; they've got a bunch of tools for this too. And then we're talking about hundreds of clusters here. They might be out on telephone poles and other places like that, so these are remote sites; they can be physically attacked. So we've got our File Integrity Operator and some other things to look at that host state and make sure that someone hasn't jumped on there and done something malicious. And then of course, all the way down at the single-cluster layer, there is that node layer where we're installing these sensors and these agents and the probes that actually make all of this happen. And so it's really exciting to be able to have one place to enforce all of this across all of your clusters. Here's what it looks like in the user interface when you're looking at network traffic. This is a key part. Obviously you're gonna have applications that are talking to each other, and so you can get a good idea of who should be talking to each other, who shouldn't be talking to each other, and then enforce policies around that.
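As a sketch of the kind of rule that falls out of that exercise, a plain Kubernetes NetworkPolicy that only lets a frontend reach a payment service might look roughly like this (the namespace, labels and port are hypothetical):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-payments
  namespace: payments
spec:
  # Applies to the payment-processor pods in this namespace.
  podSelector:
    matchLabels:
      app: payment-processor
  policyTypes:
  - Ingress
  ingress:
  # Only pods labeled app=frontend may connect, and only on port 8443.
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 8443
```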
Another cool thing that the team has built is recommendation engines: hey, we're looking at your traffic, and this is a policy we think you should have based on how these applications normally function, and then block things when they're out of bounds of that. Here you can see on the right-hand side a list of some of the policies. So a bunch of stuff: maybe you decide that you never want folks to run curl in an image, because that's how you can pull down content, so you could block that. So there's a mix of best practices, some widespread things like looking for Heartbleed and other exploits like that, as well as maybe you just care about any CVE over a certain threshold, where you wanna block it or do something else; sometimes you just wanna audit it instead. So a bunch of really cool stuff; look for that coming in the open source as well as in OpenShift. All right, let's jump over to applications, and I'm gonna hand it over to Michael. All right, thank you, Rob. So I wanna talk about what's going on in the upstream around general fleet management. About a year ago, we brought in Red Hat Advanced Cluster Management, and we've been on the effort to open source all the components of that. So there's a project that is relatively new, relatively young, open-cluster-management.io. This is where you'll find all of the open source capabilities from the Red Hat Advanced Cluster Management product. This is really focused on bringing together some technologies that are helpful in managing the fleet and also creating some novel, new technologies that help glue together all the parts. In particular: simplifying the lifecycle for provisioning OpenShift clusters running on hyperscaler clouds, in the data center, on virtualization or on bare metal; simplifying the process of how we deliver and configure the fleet and then also audit for compliance, does it meet all of our expectations; it does provide some integrated capability with GitOps, but we've also been working on bringing in Argo as a provider of GitOps as well; and then a focus on an inventory of what clusters are in the fleet, how do I dynamically place policies or application content across them and validate that it's running correctly. If you look on operatorhub.io, you'll find the cluster manager and klusterlet operators. These are sort of the core building blocks. They enable us to have a cluster become a hub; we'll see kind of a picture of what that looks like here on the next slide. And then an agent, all of which runs as Kubernetes-native pods. And as I said, this brings together a number of open projects as well. So in this architecture view, what we're really looking at is a view of the hub cluster, and you can see aspects like the API server that the hub cluster exposes. Virtually everything exposed at the API layer is expressed as a Kubernetes CRD, so you can read and write those like you would any Kubernetes API. There's also a search index, which uses a data store to understand all of the parts of any cluster in the fleet and allows you to search that and even look at relationships and find how things are connected across the fleet as well. On the managed cluster, you can see the agent, which is that object called the klusterlet, and then there are additional add-on parts of the agent that enable features like search, application delivery, policy management, observability and health management. And here you can see the links to the operators on OperatorHub.io.
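For a concrete feel of that Kubernetes-native API surface, a cluster under management shows up on the hub as a CRD instance. A minimal sketch, with a made-up cluster name, might look like:

```yaml
apiVersion: cluster.open-cluster-management.io/v1
kind: ManagedCluster
metadata:
  name: prod-us-east
  labels:
    environment: prod          # labels like this can later drive placement of policies
spec:
  # Tells the hub to accept the klusterlet agent registering from this cluster.
  hubAcceptsClient: true
```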
Now, when we think about cluster lifecycle capabilities, a lot of times we're still thinking about a bare metal host that is in my data center. We're thinking about a virtual machine; we're thinking about something in the cloud. As we see more edge scenarios where computing capacity is pushed further away from the data center, we have to think about how we lifecycle those clusters and those machines. So something that will be available in the near term as a tech preview capability will allow us to take a piece of bare metal hardware that connects back to a control layer, and we're able to provide an ISO to actually boot that host. So think of this as a technician installing a piece of hardware in a cell tower, in some small offsite data center, maybe even in a vehicle, and scanning a barcode lets them know, okay, here's the identity of this machine, this host, and then begin a process to link it: either turn it into a single-node OpenShift cluster, where the control plane and the workloads are actually running on the same host, or turn it into a very small cluster. This really is going to be a powerful way to deliver computing capacity wherever it's needed. And then each of these clusters again connects back into that control plane provided by the Advanced Cluster Management for Kubernetes capability, which is backed by this open cluster management project that I spoke of. Now, when we think about delivering configuration on the next slide, we really think about how we express this, again, with a Kubernetes-native CRD. And here's an example that's really important if you're doing any kind of networking-sensitive capability, particularly for things like a 5G-style workload. You want a particular operator to be deployed on a cluster, and you want a particular configuration for that operator to be available. SR-IOV enables some really powerful capabilities from a networking aspect within the cluster. On the right is a set of YAML, a policy definition, that says I want a certain configuration to be enforced: validate whether it's present, and if it's not present, automatically create it. And this allows you not only to work with the SR-IOV example, but to deliver any of the operators, like the File Integrity Operator or the CIS compliance operator, along with your own configuration, like certain roles or role bindings, OAuth providers, identity providers, networking, storage, et cetera. And so this is what you see as a policy that's targeting a cluster. If we take a step back and sort of elevate above the single-cluster view, then what we're really looking at is a control plane provided by open cluster management that is communicating with multiple clusters in the fleet and assigning configuration. There's a concept here that we won't really talk about much, a placement rule, that says match this policy to these members of the fleet. And that control plane is actually delivering those policies and configuration down and validating: do they meet my desired state? Now, if I think about this in a slightly different view and I start to think about cluster lifecycle, on the next slide, what we'll see is there's really this hub that is the control point.
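Before zooming out, here is a rough sketch of the shape of those policy objects, assuming the ACM/open cluster management policy APIs; the names, namespace and the specific object being enforced are made up for illustration, not the exact policy from the slide:

```yaml
# A policy that must-have a namespace on every cluster labeled environment=prod,
# wired to those clusters through a placement rule and a placement binding.
apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  name: policy-sriov-namespace
  namespace: rhacm-policies
spec:
  remediationAction: enforce     # "inform" would only audit instead of creating it
  disabled: false
  policy-templates:
  - objectDefinition:
      apiVersion: policy.open-cluster-management.io/v1
      kind: ConfigurationPolicy
      metadata:
        name: sriov-namespace
      spec:
        remediationAction: enforce
        severity: medium
        object-templates:
        - complianceType: musthave
          objectDefinition:
            apiVersion: v1
            kind: Namespace
            metadata:
              name: openshift-sriov-network-operator
---
apiVersion: apps.open-cluster-management.io/v1
kind: PlacementRule
metadata:
  name: placement-prod-clusters
  namespace: rhacm-policies
spec:
  clusterSelector:
    matchLabels:
      environment: prod
---
apiVersion: policy.open-cluster-management.io/v1
kind: PlacementBinding
metadata:
  name: binding-sriov-namespace
  namespace: rhacm-policies
placementRef:
  name: placement-prod-clusters
  kind: PlacementRule
  apiGroup: apps.open-cluster-management.io
subjects:
- name: policy-sriov-namespace
  kind: Policy
  apiGroup: policy.open-cluster-management.io
```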
That hub allows us to manage clusters that are running on hyperscalers, whether I provision a Kubernetes or an OpenShift, along with allowing us to actually provision and create OpenShift on the hyperscaler clouds and even on bare metal and virtualized platforms as well. And so what this really means now is that this hub allows us to have one central view of our cluster inventory, regardless of where that OpenShift cluster is running. This is a key aspect of the broader vision around open hybrid cloud. Now, what can I do once I have a cluster that's under management? So that means that I've got the cluster manager operator deployed and running on one of my clusters, and then I've got the klusterlet agent running on any cluster that I've either imported or provisioned. And so on this slide, what we see is that I can deliver a set of governance and compliance capabilities across any number of clusters in the fleet. This is where we're integrating the compliance operator, which comes out of OpenShift, along with other community efforts like the Open Policy Agent. There are also capabilities to integrate Falco and have it delivered, and we'll see new capabilities over time with Advanced Cluster Security. What this means now is that I have one central location, through the hub, that allows me to manage an entire inventory, an entire fleet. So this is where we think about elevating the view. And on the next slide, I wanna think about what that looks like to an end user, to an administrator, to a security person, to a developer, even if I'm just validating how these clusters are configured. What I'm really seeing is this concept of different policies that are applied, and here we see an aspect of categorization and controls. So I can link a particular technical control and say that this is relevant for data standard XYZ, whether that's something like PCI-DSS or NIST 800-53, or even an internal data standard that is specific to your organization. For each of these, you can see examples where either I'm simply auditing: is this policy in effect on a target cluster? In other words, is this operator deployed and running? Is this operator configured? Do I have the roles and role bindings that I want in that environment? So I can either audit, and in one case here, you can actually see there's also an enforcement behavior. I can use this to do anything from configuring the cluster, and I can also use this governance engine to drive upgrade behavior across the fleet, for OpenShift clusters that are either connected to a source of images in a public way or that have a disconnected registry as well. So with that, Rob, why don't you take us through what's going on in the networking layers? So there's a bunch of cool stuff happening in the upstream networking arena, and we're gonna look at three things that come together to make some next-generation capabilities happen for us. The first is Submariner. This is a project for cross-cluster connectivity, and this is basically some IPsec tunnels and other stitching together of a bunch of clusters so they can talk to each other. What's cool about this is you'll be able to do service discovery and other Kube-isms directly across that boundary. So it's really gonna be easy to stretch applications across two clusters, maybe do some failover, because the first step is being able to talk to the rest of those members, to sync data, whatever they need to do.
ACM is gonna actually orchestrate this for us in the OpenShift world, and so that's gonna be a really great capability when used with our next one, which is Istio federation. This is the ability to connect multiple service meshes together. Now if you remember, in the OpenShift world not everybody has to use a service mesh; it's an opt-in on a per-namespace basis. And so if you've got namespaces running on multiple clusters, you can connect those together, again over that same bridge. This is a little bit more powerful because you have a little bit more control over exactly how things are federated. And then you'll be able to stretch that identity between pods across it, which is what everybody knows and loves about the service mesh, among a bunch of other things. And then last, we've got a new API for Ingress: the Gateway API. You might have known this upstream as Ingress v2; that name has switched to Gateway. And this is a more expressive API than what we have today. If you think about Ingress today, it's a very coarse-grained rule-matching thing, and then it's up to each Ingress implementation to decide how it handles sticky sessions and cookies and things like that. So this is gonna have a much more expressive rule set, and that means that OpenShift in particular will have a swappable way to implement it. So if you wanna use MetalLB on a bare metal environment to fulfill this need, you can do that, or use some of the other OpenShift router components in other environments, paired with maybe an Amazon ELB or a load balancer in Azure. So it's really exciting to be able to meet everybody's needs, and we'll be tracking that upstream as it gets underway. Remember some of the use cases for this: it's kind of better than a stretch cluster. We've got some customers today that like to have a cluster stretched between two data centers that are maybe a few milliseconds apart. This is just a better scenario, because you'll have more failure domains instead of stretching it across. And then just easier HA, if everybody can talk to each other and share identity. This means that you can move dependencies from one cluster to another, maybe with those other teams not even knowing about it, which is really exciting. And then lastly, to securely connect to shared central services: if you've got a secret vault that's maybe run globally for your entire organization, or you've got a global registry like Quay, it kind of makes sense to make a little bit of that management easier if everybody can talk to each other. So let's parachute down into what networking on a single cluster looks like. If you remember, we have our Multus CNI; this is what allows you to map multiple network interfaces into a single pod. And the cool thing about this is it's all driven by Kube. Networking is typically done, you would think, at a host level, and so up-leveling it a little bit into the Kube layer means that you can use all these other tools that we talked about to do policy enforcement, to push down those policies, to mutate things maybe when they're not working the way that they should be, and ultimately make it really easy for the developers of those applications to use those capabilities. So we're gonna build on that when we talk about our networking story. Let's take a look at another, more specific SR-IOV example that we talked about earlier, and I wanna talk about this in terms of two personas.
So cluster admins are gonna be able to configure this hardware. This moves packet processing into userland, which is really cool because you basically get line-rate speed into a pod. And this is useful for things like decoding a radio stream, or if you've got a really high-traffic API or something like that, you can get line-rate performance. And so cluster admins can configure this on nodes, maybe a certain node pool, and then, coming in with that Ingress implementation for the Gateway API, have it hooked up really, really fast. Admins specify a bunch of config, but you as a developer that maybe just wants to start processing packets, you're just opting into this with an annotation and saying, hey, I wanna get a new interface inside of my pod. I don't really care about all the other stuff; some cluster admin dealt with that. So it really frees you up to just get your job done with your packet processing. So, really cool. And again, we're doing this all in a single cluster, but you'll probably wanna do this on multiple clusters, so let's take a look at that multi-cluster arena. We're gonna shift gears a little bit to the Submariner project, and I wanna just give you a little bit more detail about what this looks like when you have cross-cluster service discovery. Submariner is going to introduce a cluster set. What this is, is a group of clusters that have a high degree of mutual trust, which basically means the same namespace on cluster one is the same as on cluster two; it's owned by the same folks. And so what that allows you to do is then start exporting services between namespaces on different clusters, and there are two CRDs. Again, we're programming the Kubernetes layer instead of anything higher or lower, which is great, and that makes it so that you are opting in to sharing something; you're not gonna share everything. And then for the ServiceImport CRD, this is where it gets really powerful: you can consume that via the Kubernetes API, it's just another Kubernetes object. But what you really wanna do is probably just use DNS, just like regular Kubernetes service discovery, and that's what you can do. So if you look at this DNS address, you'll see that instead of cluster.local it's clusterset.local. And what this means is you can now refer to the global set of all of these pods with one DNS name. You can imagine this becomes extremely useful for building your applications, and so that's just one more powerful thing that you can do when you orchestrate all your clusters together. And then last, we talked about the service mesh running over the same IPsec tunnels; this is kind of what that looks like in practice. All the way down on your single cluster, you've got all your sidecars, which are forcing traffic through your service mesh, and then it's going over this bridge. And remember, all of this can be tuned and deployed through the multi-cluster policies and networking that's coming down from both ACM and ACS. All right, over to you Michael, you wanna talk us through applications? Absolutely. This slide is sort of the where do you come from, where do you go, right? So in this case, Argo is focused on the ability to deliver capability, deliver applications and other objects into the cluster. One of the neat new things that's come out is the operator-centric deployment. It makes it really simple to get started with this in OpenShift; that's the OpenShift GitOps operator.
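Circling back to the Submariner service discovery Rob just described, a rough sketch of opting a service into cross-cluster discovery looks like this (the service and namespace names are made up); the ServiceExport must sit in the same namespace and carry the same name as the Service it exports:

```yaml
# On the cluster that owns the service: export it to the rest of the cluster set.
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceExport
metadata:
  name: orders        # must match the name of the Service being exported
  namespace: shop
---
# On a consuming cluster, an application simply resolves the clusterset DNS name,
# e.g. orders.shop.svc.clusterset.local, instead of the usual cluster.local name.
```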
That GitOps operator is also something that can be configured and driven across the fleet using Advanced Cluster Management for Kubernetes. With Quay, we're really looking at how we make it easy to source the images that are gonna run in the environment: a central place for image scanning, with integration for scanning from Advanced Cluster Security, to allow us to ensure that the images which make up the applications running on the fleet are properly secured. And an upgrade from Python 2 to 3 was just recently completed for that project as well. So if we think about applications within a single cluster, I can get started, particularly as a developer or a small development team, when I wanna use a Git-centric approach. I can install the OpenShift GitOps operator, and from there I can see my repo getting delivered into that particular cluster. Now, if I zoom out a little bit, I can also think about Argo helping to deliver capabilities across individual namespaces on the cluster, the cluster itself, or multiple clusters. So I don't know if you wanna switch to the next slide there, Rob, for me. Thank you, sir. So in this view, what we're seeing is you've got an inventory of clusters provided by ACM, or Open Cluster Management. We have been building integration to allow us to feed that understanding of the cluster inventory to Argo, and then we can use the Application object from Argo to deliver applications and content across clusters in the fleet. The images for those applications are getting sourced from Quay, and that's where we can ensure, with a DevSecOps approach using Advanced Cluster Security, that the images we're delivering have passed all of our security scans and aren't carrying any CVEs. Now, within Argo, I can see a view of this. We can also zoom out a little bit further and think about the view across the fleet within Advanced Cluster Management for Kubernetes. And here, what I'm looking at is actually a view that represents a topology of the objects in my world. In the middle, I've got the set of clusters, and I can see a list: foxtrot-ap-northeast, foxtrot-gcp, foxtrot-us-west. So that's an OpenShift cluster that was provisioned in the Tokyo data center for AWS, another one provisioned in the Northern Europe region on GCP, and another one provisioned in Northern California on AWS us-west-1. But my application is getting delivered to all three clusters, and now I can get a central view of what makes up the application. I can see things like the deployments, the routes, the services, et cetera. And what's also kind of neat, something that we're not gonna go into a lot of detail on here: where we have steps that aren't programmable in Kubernetes, that require some additional automation outside of the cluster, we can even bring in Ansible to drive that capability. A typical use case we find is, when I deliver a change through GitOps or otherwise, I wanna automatically open a service ticket. And so we can actually have an Ansible job drive the creation of the service ticket, which allows us to track that the change occurred. Or maybe, as in this example as well, I wanna drive the configuration of a load balancer that sits in front of the clusters, and maybe that load balancer isn't programmable through an operator just yet. So I can use that capability.
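To make the Argo side of this concrete, the Application object that points a cluster at a Git repo might look something like the following sketch; the repo URL, paths and namespaces are placeholders, not the demo environment:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: shop-frontend
  namespace: openshift-gitops        # where the GitOps operator runs Argo CD by default
spec:
  project: default
  source:
    repoURL: https://git.example.com/org/shop-config.git
    targetRevision: main
    path: overlays/prod
  destination:
    server: https://kubernetes.default.svc   # deploy into the local cluster
    namespace: shop
  syncPolicy:
    automated:
      prune: true        # remove resources that were deleted from Git
      selfHeal: true     # re-apply if someone changes things out-of-band
```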
So here, what we've talked about is how we use GitOps as a methodology, how we ensure the security around the images that we're delivering for applications, how we can simplify getting started with a single cluster using that OpenShift GitOps capability, and how we can scale that out to many clusters, integrating what we do for ACM, what we do for Argo, and what we do for Quay. But let's talk about the next layer of abstraction. So Rob, can you tell us a little bit about what's going on with functions upstream? Yes, absolutely. So as Michael mentioned, let's go even one layer higher. You've heard the word serverless before, and then maybe functions as a service. Let's talk about both of those and what they mean. Knative is the serverless platform for Kubernetes, and it's the one that we're backing upstream, and it's got a few different components: Serving, Eventing, and I have Builds in parentheses because that's not part of the upstream, but it is part of OpenShift and it's a big part of using a serverless platform. If you think about it, Serving is about connecting a running container with a request for that container, and this is what allows you to have this model in between where you can do scale-to-zero. This is a new capability that's only really possible in this model, and instead of just scaling to zero, you can also scale up on certain events, or trigger different types of applications to do things based on events. That might be: when a new image is uploaded, go resize it and store it here, that kind of thing, or serve a web request; it doesn't matter what it is. So embracing Knative is a new way of gaining infrastructure scaling properties, but it's using all the same primitives that you're already building with. You've got an application with traffic funneling into it somehow, maybe reacting to a few things. But that differs from functions as a service, and this is what I'll call, in quotes, the true serverless. This is that developer-focused model: I want to just write a function of code and I want it to run and do this thing. I don't really care about how it's scheduled, how it runs; I just want someone else to figure that out and I just wanna run my code. That's what gives you really powerful developer agility, because that's all you care about. But that means that this is a new programming model. It's not just fitting into that slice of maybe how you built your Django app or something like that; it really is a different way of programming. And so here at Red Hat, we think about the developer workflow as an inner loop and an outer loop. The inner loop is you on a laptop writing code, writing some tests, and checking that into source control, and then entering that outer loop, which is all your integration testing, getting it into the wider app and platform, and then ultimately deployed. And so what that means is we need new tools intended for this environment, like CLIs that are terminal-friendly but for building functions. Knative has the kn CLI, and that's a really great tool for this. We also need new CI/CD tools. You need to be able to test and generate synthetic events for the things your application is programmed against, and those can be hard to simulate with a Jenkins job or something like that; it just doesn't work. And then, because these functions are running all over the place, you need ways to aggregate the logs and look at errors.
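To make the Serving side concrete, a minimal Knative Service that scales to zero when idle might look roughly like this sketch; the names, namespace and image are placeholders:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: image-resizer
  namespace: demo
spec:
  template:
    metadata:
      annotations:
        # Allow the revision to scale all the way down when no requests arrive,
        # and cap how far it can scale up under load.
        autoscaling.knative.dev/minScale: "0"
        autoscaling.knative.dev/maxScale: "10"
    spec:
      containers:
      - image: quay.io/example/image-resizer:latest
        ports:
        - containerPort: 8080
```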
With functions running all over the place, it starts to resemble a mobile application that you have deployed out on hundreds of phones; the error cases are roughly the same. And so we just need new tools for this. Nina is gonna talk us through the current state for functions as a service, and where that's going, in a quick demo. Thank you, Rob. Hello, everyone. This is Nina, and in this demo, I'll explain what it means when we say that we see serverless as the deployment model and serverless functions as the programming model. We will also see how we can leverage Knative eventing along the way. For writing serverless functions in the local environment, we need the kn CLI and Docker installed. We would also need an account on a public registry such as Quay or docker.io. I am in the directory where I will be creating my function; as you can see, there is nothing here. So to create my function, I can run kn func create, where the language prompt is for the runtime, and we'll go with Quarkus, one of the hottest runtimes. Or I could use the confirm prompt for an interactive experience. It's asking which project path I want; I want the current one. Notice the default runtime: this comes with prepackaged runtimes, and we want Quarkus. The default invocation format for these serverless functions is HTTP, but we want to use CloudEvents, so I am going to choose events, and the function has been created. The function command has created the directory structure for me and some files. If we go in there, you can see we have three classes; Function.java is where we would be writing our code, the business logic that we want. Now let's make this function do something, such as translating an English word into Spanish. So I will bring in my prepared code. We also need a bunch of imports; in the interest of time, I have copy-pasted those too. Now, with the kn CLI, you can build an OCI image. The build command is going to use the project path to create an OCI image, and it asks me for the registry, which I give it, to build the function image. As you can see, our function image has been built. Now to test it locally, all we need to do is run the function image we just built. We'll see it start up the runtime, and it took just a few seconds. To test it, we are going to send a CloudEvent: the message we are sending is hello, and the translated result is hola. So now that we know our function is running successfully, it's time to deploy it on our cluster. I log into my cluster, I am in the project namespace where I want my function to be deployed, and all I need to do is run the deploy command. It is going to build my image again just to make sure that we have the latest, then push it to the registry, and then deploy it to the OpenShift cluster. Meanwhile, on our cluster, we will create an event source that is going to send CloudEvents to our function so that our function can translate them. I am going to use a PingSource to create the event source; we are going to say the message is hello, set a schedule, and let's do this. You can see the source here; wait till I get my function deployed. This is our OCP console, so if I look at my topology view, our function is coming up here. There we go.
If we go into our app again, there's the event source, and it is sending events to the function. Here we can see the logs. You can see that it started up, and our PingSource hadn't started sending events yet; there were no events received, so it was terminated. But then an event came in, so it's starting again. So as you can see, it does the autoscaling for you, and here we go: we are connected to the Knative event source, we are getting a CloudEvent, and our result is the translation of the word hello. And that is all for today. I will stop sharing, and back to Rob. All right, thank you Nina, that was awesome. So wrapping it up, we looked at functions and how they work, but let's up-level it again. Let's zoom up into the multi-cluster layer. And as I mentioned, you'll see that we've got some layers here, because you really don't care about this. In a functions-as-a-service user experience, you're maybe in an IDE, you're working with your CLI, and you don't really care about the concept of clusters. You just want it to work. You just want to run five or six copies of your thing, or if you receive five or six events, you want to spin up some copies of it. You don't really care which nodes it's running on, all that stuff. It's really a different and more powerful way of thinking about your infrastructure. But as we talked about, it does require new tools. So we've got a talk coming up after this where we're going to hear about this in more detail, where maybe I don't want to see any of these cluster details but I want the same goodness of OpenShift under the hood, so stay tuned for that. All right, so I hope you enjoyed our talk, where we parachuted down to the building blocks inside of a cluster, and then rocketed all the way up and saw how important they are to the fabric at the top of it. Getting things down to clusters, taking care of security at every single layer, taking care of policy at every single layer. And so these building blocks at the bottom, maybe you've never heard of FIPS mode for encryption; if you haven't, good on you. But this is the type of stuff that matters deep inside the operating system, and it impacts us because we want all of our clusters to be compliant. And that's just one example of many. We talked about fast packet processing; mapping GPUs is another use case where all those building blocks span all these different layers, and it's really, really critical for being successful in a hybrid cloud world. So, I hope you enjoyed it. Thank you, and we look forward to seeing you in the OpenShift community moving forward. Thanks. Well, hello, everybody, and thanks for listening. Hope you enjoyed this morning's talk despite the technical hiccups. All of the links will be posted after the sessions are done, and hopefully on YouTube some of the tiny fonts from the demos will be a little bit more legible. I'm going to queue up our next talk with Clayton Coleman and Joe Fernandez on Kubernetes as a control plane. Folks are going to hang around in the chat to continue to answer your questions. So, without further ado, because of timing, I'm going to start that talk up, but keep asking your questions and join us in the chat here. Thanks, guys. Thank you. And it did it again. It did not ask me about the... All right, folks, I'm just going to figure out what's wrong with the sound again. Bear with me. Welcome to our session.
Kubernetes as the control plane for the hybrid cloud. This is going to be a more in-depth version of a similar keynote that Clayton is doing at KubeCon this year, related to some work we're thinking about in the upstream Kubernetes community. We wanted to give you a bit more context and connect it more deeply to the problems we hear from customers and how OpenShift and Kubernetes are evolving to address those. My name is Joe Fernandez. I'm the PM of the Core Platforms Business Unit here at Red Hat. And my name is Clayton Coleman. I'm architect of Hybrid Cloud at Red Hat, and I've been focused on Kubernetes and OpenShift for a very long time. If you've seen some of my previous KubeCon talks, I've focused on how boring Kubernetes should be, and needed to be, in order for us all to be successful. After a pretty crazy year, I feel like this is the perfect time to talk about things that excite me. Joe agreed that these are exciting ideas. That doesn't mean we're going to do them, but they're a way for us to think about where we want the future of our community and our project to go, and how we can deliver the most value for application teams and operations teams at the same time, which is really what Kubernetes has been about from the beginning. Before we get into the details, let's just talk briefly about Red Hat's Open Hybrid Cloud strategy, what it's all about, and how OpenShift and Kubernetes enable it. So Red Hat's strategy is Open Hybrid Cloud. It's something that we've been talking about for many years, and really our focus is on two key things. First, how do we enable enterprise customers to build and manage a hybrid collection of apps and services that span from traditional architectures to cloud native, to data analytics, AI and ML, integration and beyond. And then second, how do we enable those apps to run anywhere across a hybrid infrastructure, spanning from the data center to multiple public clouds and out to the edge? So OpenShift is our hybrid cloud platform. It's built on a foundation of Kubernetes and Red Hat Enterprise Linux, but provides a comprehensive platform that enables enterprise customers to build, deploy and manage applications wherever they want. If you attended our OpenShift roadmap session at the Commons Gathering or any of the other venues, you saw that a lot of our recent work has been around adding new and better capabilities for managing multiple OpenShift clusters across multiple environments. The features we're developing to help customers manage OpenShift and their applications across a hybrid environment are the same ones we rely on ourselves to deliver OpenShift as a managed cloud service. So as you can see from this slide, OpenShift is available as a fully managed cloud service across all the major public clouds. And we also deliver it as a self-managed software solution that you can deploy and manage yourself wherever you want to run it. But either way, Kubernetes is at the core of the platform. Yeah, and I mean, seven years ago we began this project, working in the community on a broad and expansive vision for how containers could help make application teams more successful. It was a really simple idea: orchestration of containers with a declarative API model. The API model is about intent, right? That's saying what you want and then making the machines go realize it, because we have other things to do. We have to go write those apps. We have to debug those apps. We don't want to be there telling machines what to do every day.
The machines can do that for themselves. We heard clearly in the early phases from early adopters that we needed to bring new concepts in. Declarative APIs are really powerful; can we bring new concepts in alongside the ones that are already there? That's been successful beyond our wildest dreams. And today, seven years later, a huge number of organizations, companies and individuals run services successfully on top of Kubernetes in a way that standardizes deployment. And so we need to ask: what can we do to move Kubernetes forward? Let's talk about the evolution of Kubernetes over those last seven years and how it's evolved to address customer needs. And this is just one way to look at it; I looked at it in terms of these three phases. So in phase one of the Kubernetes journey, the Kubernetes API and the core primitives and declarative resource controllers that are part of it allowed users to orchestrate an expanding number of application workloads. And we saw this with customers and partners alike. Yeah, and this evolution of Kubernetes has been driven by people putting it into use, finding gaps, and helping us identify where the project as a group could go. So Amadeus is one of our earliest Kubernetes customers. They started using replication controllers for their long-running services, and this is in the days before Deployments. They realized they also needed a solution for batch jobs. At the heart of Kubernetes we had anticipated this, but through that community collaboration Amadeus was actually able to help drive those features. This was at the very beginning of Kubernetes, and those features today exist because of that collaboration in the community. And also, before I forget, Couchbase, one of our key partners, was also wanting to deploy databases in containers. This was a hugely controversial topic for the first five years of Kubernetes, and it was really people who were willing to believe that this was a better way to standardize deployments for all their applications, people who put the time in with the community to make sure these were reliable. StatefulSets, which themselves have gone through a long evolution, are there to support workloads that need to be predictable over a long period of time, and it was through those kinds of collaborations that this was possible. In the second phase, which you see here in the middle, we needed to expand beyond the Kubernetes API, right? And so operators, and the custom resources and custom resource definitions which powered those operators, allowed users to extend the Kubernetes API to manage more complex workloads and day-two operations by adding customized automation specific to each component or each service. Yeah, and early in the development of OpenShift, we made the decision in the Kubernetes community that we wanted to have a small, compact core functionality that wasn't a platform as a service but was about running applications. And obviously the scope of applications is practically unbounded. Working on OpenShift, we wanted to contribute these concepts and build them in, but we had no way to do that within Kubernetes itself. And so, over time, in partnership with a lot of folks in the community, that led to custom resource definitions and common controller logic, which have enabled and empowered a huge amount of extensibility over the years.
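As a reminder of what that extension mechanism looks like in practice, a custom resource definition is just another object you apply to the API server; the group, kind and schema below are purely illustrative:

```yaml
# A minimal CustomResourceDefinition registering a new "Database" API type
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: databases.example.com        # must be <plural>.<group>
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: databases
    singular: database
    kind: Database
  versions:
  - name: v1alpha1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              engine:
                type: string
              replicas:
                type: integer
```

A controller watching Database objects then supplies the automation that turns that declared intent into reality, which is the operator pattern discussed next.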
Custom resources let us put the config for the cluster on the cluster. That's something the Kubernetes project itself uses as well. So this idea that everything can be extended and you can bring in new concepts cleanly really is a key part of Kubernetes. CoreOS brought in, kind of formalized, and helped settle the pattern, which is that it's not just the API and it's not just the controller, but the two together, and it's called the operator pattern. The operator pattern is really about hiding complexity, whether it's for deployment or for extending Kubernetes. Through the work we've done, that's been integrated back into OpenShift, and we've seen a huge uptake in the broad ecosystem of people extending Kubernetes with their own concepts. And we think it can go further. Yeah. So now, in this third phase, we're thinking in terms of lots of applications spanning many clusters, and those clusters spanning many different environments. And so really what we're trying to explore now, and we'll do that in this session, is: how can we better leverage the Kubernetes API to manage services across multiple clusters, with those clusters running across different clouds, data centers, and edge environments? Yeah. And today, in a sense, we're continuing that extension pattern. We're bringing in new concepts. Depending on how you approach the problems, some folks are building this out from their cloud consoles, projects like Arc or Anthos. Within Red Hat, we've been thinking about this as: what if you had a hub cluster that was a little bit of a management cluster for all your other clusters? What are the extensions you want to add to create new clusters, or the ability to run integrations that ensure policy is synchronized across them? And so it's pretty natural for us to think about how we can add those new concepts to make multi-cluster easy. But as we've started down that path, we know that that's not enough, right? There are always better ways to subdivide work. And so, some of the learnings from the very early days of Kubernetes, adding new concepts, concepts that we never got around to, going in and building operators, the broad ecosystem of people plugging into Kube, and then this multi-cluster idea: we started to look at all of this and we're exploring how we can take some of those ideas and compose them in novel ways. And this is really early. Think about it as, well, I'm not going to call it a project, and it's definitely not a product; we're calling it kind of a prototype. It's a way to think about these ideas together that can help us look at the same problems we've been having in new and different ways. So, I mentioned Kubernetes standardizes deployment. We've kind of asked, what can we do to improve security? If you've got all these clusters all over the place, there's a lot of duplication. How do you look for opportunities to separate out control planes from data planes, and how can you improve resiliency and operational flexibility? If you've got to install more and more stuff into the same cluster, you start to run into limits. I think to a lot of people today, Kubernetes the container orchestrator and Kubernetes the declarative API may seem inextricably linked. This is the simplest architectural slide that Joe and I could find of all of these concepts, and there is a huge amount of detail hidden here.
But when you think about these pieces, most of us think these are all part of Kube. We wanted to come into this and say, well, what if we changed direction? What if it wasn't about all these other pieces, but about Kubernetes, the API? What would it look like without pods or services, without nodes or kubelets, without controllers or schedulers? So I like to call this talk, somebody came up with this today, "Nodes? Where we're going, we don't need nodes." That's my Doc Brown impression, and that's about as good as it's going to get. So while I start sharing, Joe, if you can tee me up. Sure, let me just go back here. So while Clayton is sharing his command line, I'm going to show you a preview of some really early stuff that we've been playing with around these concepts. You'll see this again during the KubeCon keynote, if you're able to attend that, but we figured we'd go through it here a little more slowly, as a behind-the-scenes look, and take questions as we go. So hopefully, yes, we're seeing Clayton's command line, and we'll take it away, Clayton. Yeah, and just to continue my Doc Brown joke, we've gone back in time, so you're seeing the future from the past. Just pretend the KubeCon talk has already happened; you're getting a deep dive, and I promise you won't miss anything. So what would Kubernetes look like without pods? That's the first question. So here's a pretty standard command line. What if the server told me it doesn't know what pods are? Okay, that's an interesting idea. What can I do without pods? I tried to boil down the list of all the resources in Kubernetes. Namespaces, so you can subdivide your work and different teams can collaborate on similar but not identical things. You have RBAC, so you can protect your resources. Secrets and config maps and CRDs, which let you extend the API and put generic data there. This is what we're calling a prototype of a Kube-like control plane: the Kubernetes API without pods, containers, or nodes, with extensibility and client tooling that works today. You seem to be using kubectl here. We didn't have to throw away any of our tools; we could just take everything we have and move forward. Well, let me stop you there. If I understand you correctly, you're essentially talking to the Kubernetes API server with kubectl as you normally would, but rather than having it deploy containers to a particular cluster, you're going to use the interfaces it already has to create this notion of a hybrid cloud control plane. Pure control plane, Kube focused. It's the heart of the API. I can create and update resources, but the only resources there are the resources that help me. There's nothing to do with running workloads; it's just what I need to integrate anything. And so, to follow up on your question, Joe, what can we do with this? There's a ton of integrations out there today that integrate stuff into a Kube cluster but don't actually live on that cluster. Cloud resource operators, for example; I've got a couple of examples scattered in here. We actually shortened some of the names so you can read them, because everybody gets really long with their names. But I can create buckets or topics. I can create functions. These are all features that exist today in various operators and extensions.
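As a rough sketch (the server output is abbreviated and illustrative), talking to the stripped-down control plane Clayton describes might look like this:

```console
$ kubectl get pods
error: the server doesn't have a resource type "pods"

$ kubectl api-resources
NAME                        SHORTNAMES   APIVERSION                NAMESPACED   KIND
configmaps                  cm           v1                        true         ConfigMap
namespaces                  ns           v1                        false        Namespace
secrets                                  v1                        true         Secret
serviceaccounts             sa           v1                        true         ServiceAccount
customresourcedefinitions   crd,crds     apiextensions.k8s.io/v1   false        CustomResourceDefinition
```

The point is that the ordinary tooling still works; only the workload-related types are gone.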
Sometimes you're dealing with multiple clusters and you need an integration that lets them work together. Say I want this cluster to expose this piece, and then I'll go to my other cluster and install that CRD. And one of the challenges is they all require you to know which cluster owns it. So if I had a database and it was installed on this cluster — I screwed up in the demo; even in recorded sessions, demos are still unforgiving — that database, I would still have to know where it lives. And so we asked the question: well, a control plane could be the place where it all lives. And that means I don't have to think about which cluster is secure and OK to install an extension on. I could run the control plane, and then my clusters are separate. So you're basically taking all those Kubernetes primitives that we described earlier, that came out of the first two phases of the project's evolution — users, roles, namespaces, controllers, and so forth — and really applying them in a different way. You're now applying them to manage services, or to usage that spans clusters and spans the users and applications that run across those clusters. Absolutely. And it's the basics of Kubernetes, but we don't think about them because we're always talking about services and pods. So I showed maybe ten examples getting installed here. And I think one of the challenges, and we see this today everywhere, is: I installed one extension, and I installed another extension, and I installed a third extension. The more I add, the more concepts I have to keep track of. So teasing apart those problems, so we can talk about them in different ways, helps with a lot of things. If I'm the security team or the infrastructure team, I may not want to know about higher-level integrations like this. That's not my job; that's not my role. And so if we can tease those apart, what are the things that would help us tease them apart? One real challenge is multiple teams sharing a single Kubernetes cluster, which is something we've been exploring for a long time. OpenShift has spent a ton of time and effort adding tenancy to Kubernetes, keeping teams apart, making stuff secure. There are a lot of different trade-offs. There's no perfect security; there's just what is right for someone at the time. And for better or worse, a single cluster, I think, is still one of the strongest boundaries we have. So if we're imagining a world where we have more clusters, and the cluster is a strong boundary, it led to a real question that I think is super exciting: if instead of just having lots of clusters, we could make getting one more cluster really, really cheap, would we still need all those big physical clusters? And Joe, we talk about this all the time; it's a key challenge customers have. Yeah, definitely. You're highlighting a challenge we've been talking to customers about for years, not just OpenShift customers but Kubernetes users in general, which is: how do you manage tenancy across your various developers and teams? We saw this from the earliest days of OpenShift 3. Customers would start with a single cluster and a small team, and then inevitably that would grow. And we did a lot of work around multi-tenancy within the cluster to address that, right? If you saw where Red Hat invested our resources, it was into evolving features like namespaces, quotas, and role-based access controls.
And then even additional concepts beyond Kubernetes, things like the multi-tenant OpenShift SDN to segregate application traffic. We then worked on network policies and more. So despite these capabilities, customers always found requirements that would call for creating yet one more cluster, right? All the tenancy in the world doesn't eliminate the need for multiple clusters. And as the number of OpenShift clusters grew, so did the need for multi-cluster management, which we already discussed earlier; that's really what's driven our roadmap recently around bringing in better multi-cluster management capabilities. But it sounds like what you're talking about here is: how can we make it easier to just ask Kubernetes itself to give us clusters when we want them, and then make them available for what we need them for? Is that right? Yeah, absolutely. And this is a little bit mind-bending, and there are a lot of things we're still exploring. But there's a hard limit in Kubernetes: if you want to tease apart all your different extensions, and people still need to run in that environment, you still have to install the CRDs together. There's no tenancy for CRDs. So it was really obvious: OK, we've got this control plane that's really stripped down, and we're adding CRDs to it to make it something people can work with together. So I'm going to show you here. I'm connected to my local control plane prototype, and as you can see, the URL that we're connected to from kubectl is my local server. We're going to switch to a different context; the prototype generates a kubeconfig file that actually points to two different clusters. I've called the second one "user". When you look at that config entry, what you'll see is that the URL is different. So it's the same server, but I've got two clusters. There's the first cluster, the one I showed you earlier and installed CRDs into. But in this second cluster, if I call get databases, it tells me it doesn't have any databases. And that's because the different clusters see different CRDs. If I call kubectl get crds, there are no CRDs. So in a sense, the database from the other cluster is invisible; the new tenant can't even interact with it. Those are pretty hard security boundaries. So two different teams are talking to the same server. Under the covers, maybe some stuff is being shared, but to each individual team it looks like two completely different clusters. There are a lot of possibilities in here. Imagine instead of one cluster with thousands of services, what if we had thousands of little clusters, each running one service? What could we change, development-wise and operations-wise, that starts to split that problem up? Yeah, it's pretty interesting. It also aligns with some of the work we've been doing lately around smaller clusters. We've seen that with customers who want to run Kubernetes at the edge, which we've been trying to enable with OpenShift, whether that's a three-node cluster configuration, distributed worker nodes, or even single-node clusters. We're also doing some of that work with the IBM Cloud team around a project we've called HyperShift. HyperShift is a project that allows you to deploy Kubernetes in a managed control plane model. What that means is you have a single management cluster, and it's running control planes for a bunch of other clusters.
And then the end-user clusters are literally just the nodes they bring to the party, essentially. So a cluster can be just their one node, and they get assigned a control plane. So lots of interesting concepts have been coming up lately around: how do we make these clusters smaller? How do we get more of them? And it's interesting to think about in this context, when you're saying let's flip the view and think about thousands of applications, thousands of services, each in their own little cluster, versus putting them together. Does that... Yeah. There are a bunch of advantages. You split your control plane and your data plane, and you keep all your high-level logic on the control side. So if you could have a control plane for applications, you don't need a ton of stuff. And actually, I'm going to show a couple of examples here. But to even get to the point where we could do this — well, I want to have thousands of applications, but if I can't bring all my existing applications along, it's going to take a while, and that would be really painful. So a big idea for this prototype is: how can we bring as much of Kubernetes forward as possible without having to change everything? So what if we could connect our control plane to existing clusters? You can turn the kube API server, the lightweight control plane, back into Kubernetes the orchestrator. So I've got a little CRD here, and it's just a pointer to a cluster. It's got a kubeconfig as well, and a secret in there that I'm not showing. And then I'm going to apply that to the control plane. And so that's... Oh, okay. It looks like one of those bugs. So it applied the resource and it's created. Now I've got this created; I'm going to go ahead and create a second one, because if you just have one cluster, it's not a very good demo. So we'll do two clusters, but you're going to get the same error again. Yep. So it went and created the clusters; there are still some bugs. I said it was a prototype, right? So by installing this cluster, what have I done? Well, what I've done is imported all of that cluster's resources as CRDs. I didn't need to implement that; the other cluster is handling it. So just like we added CRDs for external integrations, what if Kube itself was an external integration? So now you can see Deployments are in this list; actually, let's scroll back up. Yep, Deployments are up here. And so I have a Deployment, and I asked for 15 replicas, and it's connected to the two local clusters. I just run kubectl apply with the Deployment against the control plane, and it goes and creates it. So if I call get deployments now, you'll see it. If I wait a moment — and this local setup is kind of fast — I see those resources get created. So I had a simple controller here that split it up, right? We created the one, and then it was like: hey, what if instead of just running the Deployment, we had a little controller on the control plane that would split Deployments up and run them on individual clusters? And some of this is pretty prototypey; we're still just hacking it together. But the idea would be that you define your app, you take the apps that work today, and you stay at the high-level details. Instead of getting down into the weeds, we want to focus on deployments and services and integrations with databases. I don't want to be dealing with things at this low level.
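Piecing together the demo Clayton just ran, the interaction might look roughly like the sketch below. The context names, the Cluster kind, its API group, and the replica-splitting behavior are illustrative of the prototype's ideas, not a published API.

```bash
# Each logical cluster is just a different path on the same server,
# surfaced as a separate context in the generated kubeconfig
kubectl config use-context user
kubectl get crds                      # empty: this tenant sees none of the other tenant's CRDs

# Back on the first logical cluster, register an existing physical cluster;
# the kind and group here are hypothetical
kubectl config use-context admin
cat <<'EOF' | kubectl apply -f -
apiVersion: cluster.example.dev/v1alpha1
kind: Cluster
metadata:
  name: east
spec:
  kubeconfigSecret: east-kubeconfig   # Secret holding the target cluster's kubeconfig
EOF

# Registering a cluster imports its resource types (Deployments, Services, ...) as CRDs,
# so an ordinary Deployment asking for 15 replicas can be applied to the control plane,
# and a small controller could then split those replicas across the registered clusters
kubectl apply -f deployment.yaml
kubectl get deployments
```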
It's like we've come full circle, from Kube to the API and then back to Kube. But maybe what's different this time around? This is kind of the big idea; that's why the prototype was so exciting. What if we just didn't add pods back? What would it mean to have a Kube without pods? We've been doing this for seven years. I've got this fully running application, with these other clusters out there doing the hard work. Maybe seven years into Kubernetes, that fourth step in the slide Joe showed is that we should be thinking about applications. Pods, nodes, clusters — those are details. Let's think about applications, services, how I glue things together. To me, that's hybrid. It's connecting all the different things, not just pods and nodes, but all of my application in any cloud, in any environment, on-premise, hosted, service or not, and trying to pull that together. I think some of these ideas could be pretty instrumental in getting to that point. That sounds awesome. That's how you start turning Kubernetes into a hybrid cloud control plane, right? How can people learn more about this? Where can they go? Tomorrow — and I think this session is the day before KubeCon — when my talk goes out, we'll publish the repository. It's on github.com. This is a prototype. KCP is an arbitrarily chosen acronym that has nothing to do with Kubernetes or control planes; it just looks that way. We're thinking about this as the seed of a bunch of ideas, and we want to see how they go. We're not too opinionated at this point. Our goal, really, as Joe said, is to bring these ideas together and move the conversation forward. So I'm excited to be here. I hope everybody loves these concepts; please reach out to us. And I hope everyone here has a great KubeCon. Please watch my much more compact talk, now that we've given you the insider's preview of it. Awesome. Well, thanks, everybody. Thanks for joining us. Hey, folks. Welcome back. We have Joe and Clayton here, and I just want to thank them for joining us — you can turn your cameras on. And you had a bit of a 10-second delay there, so let's just see. There we go. Hey, Joe. Hey, Clayton. I'm not seeing any questions, so I think you've blown everybody's minds. And I'm really looking forward to seeing you folks on the stage at KubeCon tomorrow. We're running a little bit behind. Joe and Clayton, if there are any final words you'd like to say — since you did pre-record this, has the world changed in the past week? Is there anything you'd like to add now? Can y'all hear me? Yep. Yeah. So there's not much that's changed this last week. What we're trying to do — and I mentioned this in the chat — is that the repo is not live yet, and the repo is very much about the prototype. There'll be some extra docs in there that go through these concepts. This is a very early stage. We wanted to start with kind of a "where could we go" and show a bunch of ideas together without getting locked in. This isn't a project yet, and it's definitely not a product. But for Joe and myself, as we were talking about this, we think it's really important: people build platforms on top of Kubernetes to run their enterprises. That's what hybrid cloud is. And there are certainly other pieces of that world that aren't Kubernetes.
For us, bringing all this together means giving people tools that move things up a level. Someone in the chat called this stateless clusters, and I'd actually like to call it serverless clusters, right? We want to get to a point where, when you build applications, you can put a layer between you and the physical infrastructure, whether that's cloud infrastructure, on-premise infrastructure, edge, or your local laptop. So we are definitely trying to create a center of gravity for bringing all of this together. Yeah, I'd just add a couple of things. First, in the roadmap session, you heard a lot about Advanced Cluster Management for Kubernetes from Michael Elder and Rob Zomsky and team. We think that's a great solution to manage a multi-cluster environment today, cross-cloud and cross-environment. What we're working on with this KCP prototype, and some of the things Clayton has been experimenting with upstream with our team, is how we can take it further and do more around the Kubernetes API itself to better enable working across these types of hybrid environments. There's also a question on HyperShift, which I mentioned. HyperShift is separate; it's not dependent on KCP or related to it, but for folks who are interested, I dropped the GitHub link. It's basically a deployment model where the Kubernetes control planes run on a central management cluster, right? So you have an OpenShift cluster with namespaces, and in each of those namespaces would be a control plane for a separate end-user cluster. If you have 100 namespaces, you'd have 100 control planes, and the end-user clusters are essentially just the worker nodes that get spun up and then claim, or get connected to, a control plane. We're evolving that on its own. The IBM Cloud team uses it; that's how they manage OpenShift on IBM Cloud, and I think other hyperscalers leverage similar approaches for centralizing and managing control planes. So it's not dependent on KCP. It's something we're already using for OpenShift on IBM Cloud, and we're going to be bringing it to other environments, but it could potentially come together with some of the concepts Clayton described in this KCP prototype. And yeah, I'm interested to see how that all comes together. There are a couple of other questions in the chat, and we are running a little behind on time due to some of the technical snafus here, but suffice to say Clayton and Joe will be around in the background answering questions, and we're going to kick off another talk here. Clayton and Joe, I think you can both see the Q&A, so if you want to answer those questions, I will queue up our next talk, which is with Discover Financial. They're going to talk about some of the work they've been doing to streamline the OpenShift developer experience by building an enterprise Helm chart repository. I did mention in my opening gambit that we're going to hear a lot from end users on how they're using different pieces and parts of different upstream projects, and this is quite an interesting one coming up too. So thanks, Joe and Clayton, for taking the time to come and chat with us. Much appreciated. Everybody's looking forward to the five minutes of fame that Clayton is going to get on the KubeCon keynote stage shortly. We'll make this video available as soon as today's gathering is over and tweet it out, so you can watch it and pepper him with questions on the KubeCon stage as well.
So please feel free to share the link as soon as this event is over. With that, I'm going to see if I can get my other speakers in here, and I'm going to queue up their talk while I'm doing this. So hang tight and here we go. Try this again. Share the audio. We're talking about streamlining the OpenShift developer experience by building an enterprise Helm chart repository. My name is Austin Dewey. I'm a senior consultant at Red Hat. I've been with Red Hat for about four years, and I've been working with Discover for about two on their OpenShift adoption. Sachin, do you want to introduce yourself? Yeah, hi. My name is Sachin Chowdh. I'm a principal system architect at Discover Financial Services. I've been with Discover for three years, and I started with the OpenShift team a year ago, working on the Kubernetes and OpenShift platform. All right. So as you are probably aware, OpenShift provides many benefits for developers and application teams, such as application auto-scaling and self-healing applications, and it has an S2I (Source-to-Image) system to enable easy builds within the cluster. There are all kinds of benefits, all kinds of reasons to adopt OpenShift. But of course, any time you move to a new platform, it takes some time to actually ramp up on it and become proficient. There are four high-level reasons. One is new workflows, new ways of doing things on the platform, and new tools to get there, such as the oc CLI and the Helm CLI, which we'll learn more about later. There's new lingo: what are containers, what is DevOps? So there are all kinds of new terms and phrases to learn. And then new concepts, and that goes hand-in-hand with the lingo: concepts around best practices for containers, for deploying applications in OpenShift, and so forth. There are many different ways to approach application development on OpenShift. You might have asked yourself one of these questions, or you might have asked yourself all of them, right? Which tools will we use? There are a lot of tools out there in the community; how do you know which tools are the right tools for you? And going off of that, how will you actually build and deploy your applications? Once you have a process in place manually, are you going to automate it? How are you going to provide reusable components for other teams? There are a lot of different app teams in your organization; you don't want them all to have to reinvent the wheel. You want to provide something for them so they can hit the ground running sooner and become productive on the platform as soon as possible. Two more here. How do people know what they're doing is the recommended solution? When they're new to the platform, how do they know what they're doing is actually best practice? And what OpenShift resources do we need to configure? If you're familiar with OpenShift and Kubernetes, you know there are a lot of resources: deployments, services, routes. How do you know you're using the proper resources for your use case? So as a member of an OpenShift operations team, or as a member of a DevOps team, we feel it's your job to provide a set of common tooling and processes for application teams. By doing that, you decrease the amount of training and overhead application teams need to get acquainted with the environment.
You establish a direct and trusted approach maintained by the platform team. When people have questions, they can come to you as the subject matter experts, as the maintainers of the tooling they use to develop on the platform. Like I was saying, they have a centralized location for getting support. They have a set of reusable components, and that goes from team to team: you have a set of components that you can distribute throughout the organization. And then an out-of-the-box working solution: they don't have to develop this on their own, they can use what you've already provided, what you've already tested and vetted, to make them more productive. So the solution we came up with at Discover is to build an enterprise Helm chart repository. You can see a few different Helm charts here: ImageBuild, Node.js, and so on. We'll get to each of these in detail in a little bit. First, I want to talk a little bit about Helm — a little Helm 101 before we dive into the nitty-gritty technical details. So a quick review of Helm. Helm is known as the Kubernetes package manager. It's named that because it's very similar to an operating system package manager: if I say yum install ansible, I just expect Ansible to be installed on my computer. Helm works in much the same way with Kubernetes. A quick glance at Helm: I left a couple of links there about the project. It reached graduated status with the CNCF last year, which is very exciting, and it has a highly active development community — there are some stats you can see there. So, Helm 101: Helm creates a wrapper called a chart around OCP resources. What you can see here is a Helm chart wrapping around a Deployment, a Service, a ConfigMap, and a Route: four common resources that you might need to deploy for an OpenShift application. And then a user, instead of having to go in and create each of these resources by hand, can just run one command, helm install, and it's going to go and create each of those resources for them. A Helm chart is written by a subject matter expert, by a member of your operations team or your DevOps team. And this is what they would see. The benefit is that the YAML definitions are dynamically generated. You can see there in red: those are placeholders for parameters. So the number of replicas you want in your Deployment, the user can specify that. The user can specify the image they want in their container, the resources, and so on. So an operator, the human operator, writes this YAML, and the user gets the benefit of not having to write all of the YAML there in black; just the red is replaced by their input. Expanding on that, users configure their installation using values. There's an example values file there, and you can see, going back to what I was saying earlier, the number of replicas, the image, resources, and so on. To install it, they run one simple command, helm install: they give it a name, so my-app, point it to the Helm chart they're using, so our Spring Boot Helm chart, and then --values (that should be two dashes there) values.yaml to reference those parameters. And there you go: it gives you those resources without you having to write the hundreds of lines of YAML you would normally need to configure them. So at this point, I'm going to hand it over to Sachin.
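As a concrete illustration of the flow Austin just described — the chart name, image, and values here are made up for the example:

```yaml
# values.yaml -- the handful of lines a user supplies instead of hundreds of lines of raw manifests
replicas: 3
image: image-registry.openshift-image-registry.svc:5000/my-project/my-app:1.0
resources:
  requests:
    cpu: 100m
    memory: 256Mi
```

```bash
# Renders and creates the Deployment, Service, ConfigMap and Route defined in the chart
helm install my-app ./spring-boot --values values.yaml
```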
He's going to go into more detail on the DFS Helm chart repository. I'll stop sharing, Sachin, and then you can take it away. Thanks, Austin, for explaining Helm, the Kubernetes package manager. So at DFS, Discover Financial Services, we have a Helm chart repository where we maintain our Helm charts. These Helm charts are maintained by the OpenShift platform team. The source code for the charts is located in GitHub. We package the charts and archive them into Artifactory. We maintain good versioning of all the changes, with change logs, so an AppDev team can pin to a specific version of a chart and avoid breaking changes for their application. The Helm charts provide a common set of components for all the application teams, pointing toward the solution prescribed by the enterprise platform team. We started by building the image build chart for building the application image, which translates into an OpenShift BuildConfig. Then we have a specific deployment chart for each technology, like a Node.js chart for Node.js applications and a Spring Boot chart for Spring Boot applications, and then there's a generic deployment chart, which is agnostic to the language or framework. And then there's a set of common templates shared across all of our charts, which we abstract into a library chart. The way we've structured this is that if an application developer wants to work with the Helm CLI, they add this Helm repository using the helm repo add command. Once they've added the Helm chart repository on their local workstation, they can work with a specific Helm chart and issue helm install or upgrade for releases based on that chart — in this example, the Spring Boot Helm chart — providing a set of values. There's also a way to do this through the OpenShift UI, which we're going to explore in the future; the UI provides a nice way to configure a Helm chart, and application teams can use the charts through that OpenShift UI as well. So, running an application on OpenShift has two high-level steps: build and deploy. We target each with a specific Helm chart. Building the application means building the application image and pushing it to the OpenShift registry; deployment pulls from the OpenShift registry, deploys onto OpenShift, and creates the set of resources on the OpenShift platform. The way it works is that the AppDev team reviews the image build chart — in our case, that's the chart that builds the image. They read the documentation, and we provide samples for the AppDev team, so they clone those sample values into their local repository and then issue the helm install command, and that starts building the application image. There's a set of flags they can enable so the build starts automatically, or they can trigger the build manually. Once the build is done, it has created the artifact, the image, which gets stored. The next step is deployment. Based on the technology the application is built on, they can use either the Spring Boot, Node.js, or generic deployment chart.
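In practice, the commands Sachin describes would look something like this; the repository URL and chart names are placeholders:

```bash
# Add the enterprise chart repository (hosted in Artifactory) once per workstation
helm repo add dfs-charts https://artifactory.example.com/artifactory/helm-charts
helm repo update

# Install or upgrade a release, pinning to a specific chart version to avoid breaking changes
helm upgrade --install my-service dfs-charts/spring-boot \
  --version 1.4.2 \
  --values values.yaml
```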
So they'll review the chart documentation. We provide example demo applications so they can refer to those values files, and then they issue the helm install command with the technology-specific Helm chart — in this example, Spring Boot — and deploy that application image to the OpenShift platform. It also works with a CI/CD pipeline. What I explained in the previous two steps is the manual way of using the Helm CLI from a local workstation, but if they have a Jenkins-style pipeline, they can plug in these two stages: a build stage where they use the image build chart to build the application image, followed by a deploy stage where they deploy the application to the OpenShift platform. Now, creating an enterprise Helm chart repository has a specific set of requirements. First, the codebase: we maintain the Helm chart codebase in GitHub. It follows a specific structure; in our case there's a series of Helm charts, and we maintain those in the GitHub repo along with the readme and the pipeline. And in terms of storing the Helm charts, we are using Artifactory; in our case one can use Artifactory, or an image registry, as the Helm chart repository. There are some high-level design considerations when building Helm charts. Consider the user experience: design the charts to require as few values inputs as possible from the user. You can abstract some of the values away so the user provides minimal input. You can abstract service integrations as well — for example, if they're connecting to an external service via a proxy or to a database, you can abstract that in the Helm values. And try to design the chart to be as flexible as possible, so users can provide custom input too, in terms of init containers or sidecar containers. You have to make sure you are versioning the charts properly so there are no breaking changes: if an application is using a specific version of a chart and you upgrade it, make sure the versions are maintained so it doesn't break the application. Document each chart's intended use, so its usage is clear; if it's a generic chart, make sure you define the requirements for it. Simplify internal references, such as container registry URLs and enterprise-controlled image values, wherever possible. And provide examples and demo projects so the community can learn faster and make use of the Helm charts. We can also make Helm chart development easier with a library chart: we can abstract the common code into a library chart so it can be used by the other charts. In this example, we define a library chart that captures the boilerplate, reusable code, and then we include that library chart in the specific charts. So if you're building a generic deployment chart, you include that library chart, and if you're building the Spring Boot or Node.js chart, you include a reference to the library chart there as well. And since we are building the Helm charts themselves, we run the Helm charts through a CI/CD process too: there's a CI pipeline that tests, packages, and releases the Helm charts for each specific version of changes.
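A minimal sketch of the library-chart pattern Sachin mentioned a moment ago, using Helm named templates; the chart and template names are illustrative:

```yaml
# dfs-library/templates/_labels.tpl -- boilerplate defined once in a chart of type "library"
{{- define "dfs-library.labels" -}}
app.kubernetes.io/name: {{ .Chart.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}
```

```yaml
# spring-boot/templates/deployment.yaml -- an application chart declares dfs-library as a
# dependency in its Chart.yaml and then reuses the named template
metadata:
  name: {{ .Release.Name }}
  labels:
    {{- include "dfs-library.labels" . | nindent 4 }}
```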
So we are using the ct tool, the chart-testing tool, which gives us a set of steps to follow: ct lint lists the updated charts and lints them, ct install installs the charts to test them, helm package packages the chart, and then we push it to the Artifactory repository using a curl command. Thank you, and let us know if there are any questions. I see no questions. I think that's amazing — there we go, let's stop that there. Thank you both for the talk. Now I'm just going to pause it there; I love this automated system we have here. One of the things that's really interesting to me, because there are operators and Helm and all of it, is the way you've taken this use of the enterprise Helm repository to the next level. And I'm wondering, Sachin, what the impact has been on your development teams going forward, being able to leverage all of that? Sure. Thanks, Diane. So initially, since it's a new tool and a new technology, there was a learning curve for them. But since we provided the example demos, it was really helpful for the AppDev community to refer to the demos and try them out. And we provide a set of step-by-step guides, so they were using those steps to make use of the Helm charts we have built. It made build and deployment easier for the AppDev teams, so they can focus more on the business aspect, the logic in the code, while build and deployment are very much simplified. And since we release Helm chart updates frequently, we notify them when we release, so they're aware of the upcoming changes; if they want to use those features, they can refer to the change log or readme file and apply those updates to their application. Any feedback from your point of view? I know you're embedded at Discover. How has that been going for you, and where are you taking them next? Yeah, it's been going really great. And I think, to add on to your previous question, what I really like about this setup is: think of all the different Kubernetes and OpenShift resources there are, and how much configuration actually goes into creating a deployment, a config map, a service, a route. There are a lot of different config options in there. And with the Helm chart, a user can simply provide maybe 10 lines of YAML to create what ends up being thousands of lines of config down the road. I think that is a huge part of the value we're providing, and that's what's allowing teams to jump onto the platform and actually start being productive so quickly from day one. And like Sachin was saying, we do provide demos, and we're always available to provide support for those teams. So yeah, I've been really excited about the work we've been doing; I think Discover and their application teams are also very excited. As far as what's next, we're going to keep steering this whole thing forward. They've got a lot of apps they're looking to onboard, and we plan to use the Helm chart repository to get them there. Well, thank you guys so much for joining us today. We're really looking forward to hearing even more. We've heard about how you're getting your apps on there, but maybe next time you come we can talk about what the workloads are at Discover Financial — the actual implementation of some of these applications.
I think there are some really interesting things you guys are doing on top of OpenShift too. So it's wonderful to hear how it's getting done now, and the next stage is to see what exciting things you actually deploy, and if we can get you to talk about those. I know you had to jump through a number of hoops to be allowed to talk, and we totally appreciate you taking the time and sharing your story. So Sachin and Austin, thanks for getting up in the morning, wherever you were. And we'll take you to our next talk now, which is from another long-time OpenShift group. Amadeus is going to come on now and talk about end-to-end OCP 4 cluster deployment automation in their IT context. I'm going to stage that now — bear with me while I shift over to that session. And thanks again, Sachin and Austin; we'll get the other speakers popped in here shortly. We're running a little behind time, so bear with me, everybody, but we're doing it. And again, great talks from end users coming now. So hang tight, folks. Thanks a lot, everybody. So hello. Thank you for attending our talk on end-to-end OpenShift 4 cluster deployment automation in the Amadeus IT context. My name is Maria Alejandra Manueli. I'm an OpenShift Red Hat consultant working on the Ironman Amadeus project with my colleagues, Vincent Bronikowski and Tiwa Castan. I will let them introduce themselves. Hello. I'm Vincent Bronikowski, an OpenShift Red Hat consultant, and I work with Maria and Tiwa on this project. Hello. My name is Tiwa Castan. I am a senior SRE working at Amadeus, specialized in cloud platform migration and operation. I've been working with OpenShift and Kubernetes since 2015, and I've been working closely with the Red Hat team all along this project. So before digging into the project itself, let's start by introducing Amadeus in a few words, for those who don't know what Amadeus does. Amadeus is a technology company dedicated to the global travel industry. We're present in almost 200 countries, with a worldwide team of more than 19,000 people. Our solutions help improve the business performance of travel agencies, corporations, airlines, airports, hotels and more. So why this project? Let's go back a few years to understand where Amadeus comes from in its cloud journey. We started our cloud journey in 2014, even before the very first version of OpenShift 3. At the same moment, we also started our partnership with Red Hat, working closely with them on that topic. In the following years, we at Amadeus gained quite a solid maturity in the use of OpenShift, deploying and operating dozens of clusters, mostly in our private cloud at first. Those private deployments are part of an internal project called Ironman — Ironman because its purpose is to provide a kind of super IaaS and PaaS on-premise, for the long run. Those deployments are currently running on two Red Hat products: OpenShift 3 (we're currently on 3.11) and OpenStack. In parallel, we also started to extend our capacity in the public cloud, on both Google and Amazon. Then, since mid-2019, we started deploying OpenShift 4 in the public cloud, mostly on Azure. And we were quite impressed to see how the installation, the upgrades, and the overall management of the platform had been transformed, simplifying our OpenShift operating model for both day 1 and day 2. As you all know, the major event of the past year has been the COVID-19 crisis, and Amadeus, being in the travel industry business, has been impacted.
So we could not grow our Ironman private cloud as intended, which became quite a blocker for Amadeus's migration to the cloud, as there was still the need to further cloudify our operating model and applications. So in 2020, the low usage of our classic infrastructure brought the opportunity to repurpose several hundreds of servers to create a new IaaS cloud. And due to the use of older hardware, this stack has been called Ironman Lite. The purpose of the Ironman Lite project was to leverage that existing, underused hardware and provide new cloud platforms at minimal cost with an excellent operational model, thanks to OpenShift 4, and so continue Amadeus's migration to the cloud. So let's talk about Amadeus's technical requirements for this project. In Amadeus, we now have quite some experience deploying OpenShift 4 in the public cloud, mainly on Azure. But deploying it on-premise was a completely different challenge, and that's why we requested the expertise of Red Hat to help us with this task, as the core of this project is built around two Red Hat products: OpenShift 4 (it was 4.6 at the time of the project) and OpenStack 16. We had some precise requirements. First, we wanted a deployment and operating model for our private cloud as close as possible to the public one on Azure, so there would be a single way for our SRE teams to manage our clusters. Second point: for our deployment on-premise, we have no direct internet access, so we needed a fully disconnected installation mode where all the artifacts are fetched from internal repositories. Third, we also wanted to use an IPI installation — IPI stands for Installer-Provisioned Infrastructure. This is to have the full cluster infrastructure provisioning self-managed by the OpenShift operators, thanks to the Machine and MachineSet OpenShift resources. This frees us from the burden of managing the infrastructure ourselves and easily enables great features like cluster autoscaling. Then, fourth point, we wanted to use Calico as the SDN. We use it a lot to enforce proper network security in Amadeus, and we wanted to leverage Calico features like global network sets to represent external CIDR blocks, and global network policies to enforce rules at a global cluster level; those features do not exist with other CNIs. Finally, the idea of this project was to be able to create a full cluster with a single command consuming a single config file as input, reusing some automation already built internally by Amadeus to deploy clusters on Azure, so we don't have to struggle to recreate clusters and we get a kind of cluster-as-a-service model. Okay, I will now let Vincent explain in more detail the project and the automation that was built based on those requirements from Amadeus.
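To make those requirements concrete: a disconnected IPI install on OpenStack is driven by a single install-config.yaml, roughly along the lines of the sketch below. The values are illustrative, not Amadeus's actual configuration.

```yaml
apiVersion: v1
baseDomain: example.internal
metadata:
  name: ironman-lite-cluster
platform:
  openstack:
    cloud: openstack                  # entry in clouds.yaml
    externalNetwork: external
controlPlane:
  name: master
  replicas: 3
compute:
- name: worker
  replicas: 3
networking:
  networkType: Calico                 # non-default CNI, supplied via additional manifests
imageContentSources:                  # redirect image pulls to an internal mirror registry
- mirrors:
  - registry.example.internal:5000/ocp4/openshift4
  source: quay.io/openshift-release-dev/ocp-release
pullSecret: '...'
```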
Thanks, Tiwa. So the main thing we want to share with you in this talk is the experience of collaboration between a client and Red Hat in a consulting project. We started this project with a week of Navigate workshops. Red Hat Navigate is a tried and tested framework that helps our customers identify obstacles and align with their business goals to deliver successful solutions. The Navigate framework was delivered through a series of workshops covering a set of considerations for OpenShift, and each day we had different workshops on different subjects. The project was delivered with Red Hat as the principal leader, in collaboration with Amadeus, and the Red Hat team constantly worked as an autonomous team within Amadeus. The delivery went very well, given the maturity of Amadeus in its OpenShift adoption journey. I will talk a little bit about the implementation and delivery of the platform Amadeus required. Based on the prerequisites that Tiwa described, we deployed on-premise clusters spread across three availability zones, to prevent outages if some servers went down; it was the same for the storage, which was using Cinder in different availability zones. So everything was well dispatched across the infrastructure. And as a prerequisite, everything is deployed in the IPI fashion, so you don't have to provision machines manually when scaling up the clusters. I will now talk about the big challenges we had during the implementation of this project. The first one was the Calico integration. It was a big challenge because the use case of having Calico as the SDN on OpenShift on OpenStack has basically no documentation, so we had to do a lot of work and research to understand how this component could work in this specific integration. We had to do a lot of back and forth and testing to understand how Calico worked and how to make it viable for Amadeus's needs. In addition, after getting Calico well configured, we had to configure and understand the Tigera operator, which is the component that deploys it automatically when creating clusters. In short, we had to dive into the code of that component to understand what was happening, make it work, and make Amadeus happy with the Calico SDN. The next challenge was having OpenStack on-premise versus a cloud provider solution. Since Amadeus wanted an excellent operational model, it was quite complex to start from a simple deployment on on-premise OpenStack and iterate through all the prerequisites OpenStack can have — the storage, the network configuration, and so on — and how to deploy, for example, multi-AZ clusters. So it was complicated to get an on-premise environment working as well as a cloud provider. For example, some really specific features weren't working as intended, and we had to exchange with the engineering teams to get everything fixed for the next releases of OpenShift on OpenStack. The last big challenge was being reactive to all of Amadeus's assets and needs, as the integrations were quite involved. On top of everything, we then had to integrate with all the tools Amadeus already possesses. Since they are really mature, they have, for example, a tool called the Amadeus wrapper that helps them inject manifests when creating clusters. So all the work we had done previously on the cluster, we then had to integrate with the Amadeus wrapper and all their tooling. And since Amadeus is really mature, when they had questions or needs — because this project was quite new — we had to come back quickly with tailored solutions that were fast and elegant, so they'd be happy and everything would work properly. I will now let Maria talk about the automation tool, the technical side, and how it has been implemented at Amadeus. Thanks. Thank you very much, Vincent.
So while performing the Navigate workshops, we do this exercise where we ask the client to identify business priorities. We get them to vote on their top three, and then we go deep into the explanation of why these were chosen. From those, the main business priority identified was operational excellence, and then efficiency. At the heart of this conversation was the topic of automation. We can see on the screen an extract of this exercise where automation was brought up several times. You could definitely see that this was at the top of the organization's mind: every discussion of processes involved the question, can this be automated? This wasn't something new for Amadeus either. They had already advanced quite a lot on the automation topic. As Vincent and Thibault mentioned, there was the existence of what they called a wrapper, which leveraged the OpenShift installer to add manifests — like, for example, the Calico manifests or the Tigera operator — during the install, as well as an automation tool based on Terraform to automate the installation of OpenShift clusters in Azure. So the easy adoption of automation by Amadeus made it so we could work side by side to create a process that allowed us to spin up clusters in one click. Automation became very quickly one of the top priorities of this project, and the OpenShift installation was automated from start to end. So how did we do this? Let's dig a little deeper on the technical side. The automation tool was based on two technologies: Heat orchestration templates and Terraform. First, let's talk about Heat templates. As mentioned, the IaaS was deployed with OpenStack, and then OpenShift was installed on top of OpenStack to create a private cloud solution with infrastructure-as-a-service and platform-as-a-service capabilities. Heat templates describe the OpenStack infrastructure for a cloud application in a text file, such as the one you see on this slide. This text file is leveraged to create a stack of infrastructure resources such as networks, subnets, the bastion server and others. The tight integration of Heat templates as the orchestration technology for OpenStack made it an obvious choice as the technology to use. This allowed us to deploy all the OpenStack prerequisites to install OpenShift on top of OpenStack. So now that we know the bastion server was automatically deployed by Heat templates, you may be wondering how we actually configured the bastion server. For this, we used cloud-init. Cloud-init is an industry standard that identifies the cloud it is running on during boot, reads any provided metadata, and configures the system accordingly. This allowed us to configure the bastion server automatically at boot time. So the moment the bastion server is spun up, repositories are configured — as Thibault mentioned, it's a disconnected install, so we needed to configure the internal repositories of Amadeus. Then packages are installed: clients such as the oc OpenShift client and the Swift client, along with the OpenShift installer. Then the installation objects, such as configuration files, are downloaded — in this case from a Swift container. And finally, with the runcmd resource, cloud-init launches the cluster installation.
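Maria's bastion bootstrap maps quite directly onto standard cloud-init modules: repository configuration, package installation, object downloads, and a final runcmd that kicks off the installer. Here is a minimal sketch of that kind of user-data, generated as a Python dict and dumped to YAML; the repository URL, package names, container and paths are purely illustrative, not the actual Amadeus setup.

```python
import yaml  # PyYAML

user_data = {
    # Internal mirrors only: this is a fully disconnected install (hypothetical URL).
    "yum_repos": {
        "internal-rpms": {
            "name": "Internal RPM mirror",
            "baseurl": "https://repo.example.internal/rhel8/",
            "enabled": True,
            "gpgcheck": False,
        }
    },
    # CLI tooling needed on the bastion (illustrative package names).
    "packages": ["openshift-clients", "python3-swiftclient"],
    "runcmd": [
        # Fetch the installer and the rendered install-config from Swift,
        # then launch the cluster installation (all paths illustrative).
        "swift download install-objects openshift-install -o /usr/local/bin/openshift-install",
        "chmod +x /usr/local/bin/openshift-install",
        "swift download install-objects install-config.yaml -o /root/cluster/install-config.yaml",
        "/usr/local/bin/openshift-install create cluster --dir /root/cluster --log-level info",
    ],
}

print("#cloud-config")
print(yaml.safe_dump(user_data, sort_keys=False))
```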
So where does Terraform come in? Terraform is the glue that brings all of this together. To give a little context, Terraform is an open source infrastructure-as-code tool that includes an OpenStack provider, which allows us to create OpenStack resources — for example, leveraging Heat templates to create an orchestration stack. Terraform also includes an Azure provider, which is why it was used as the tool to create the automation for Azure clusters. So how is Terraform the glue? Well, there were three main things that we needed from this automation tool. We needed to deploy the Heat templates in a sequence — for example, we need to first deploy the Heat template with the project where OpenShift will be installed, and then everything else that's deployed in the project. Then we needed to create the installation objects and installation configuration files from different templates. And we needed the ability to parameterize these files and objects. All of this was made possible by Terraform. So at the end, the automation looks something like the diagram you see on the screen. If we go a little deeper into the diagram: Terraform creates an orchestration stack — the one you see as the stack tenant on the diagram — that deploys the project where OpenShift will be installed, sets up the user role assignments, and configures the quotas for this project. Then Terraform creates the floating IPs for the ingress and for the API, as well as the Swift container that will store the installation objects. The installation objects are created from a Terraform template, and the floating IPs, cluster parameters and project names are added accordingly. And then finally, an orchestration stack — the one you see as the stack server — is created, which deploys the OpenStack prerequisites and the bastion server, and finally launches the OpenShift cluster installation. As you can see here, this stack deploys the networks, subnets, router, the floating IP for the server, the ports, and also the cloud-config resources that will configure the bastion server. So this automation tool allowed Amadeus to deploy clusters in a repeatable way and on demand, serving as cluster-as-a-service.
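The "single config file in, full cluster out" idea rests on exactly the parameterization Maria describes: one small cluster definition gets rendered into the install-config and the other installation objects before anything is provisioned. The toy sketch below shows that rendering step in Python; the field names and values are illustrative placeholders (the real install-config fields depend on the OpenShift and OpenStack versions), and in the actual project this templating was done with Terraform rather than a script.

```python
from string import Template

# The single, human-edited input file for one cluster (hypothetical values).
cluster = {
    "name": "ironman-lite-01",
    "base_domain": "example.internal",
    "api_fip": "192.0.2.10",
    "ingress_fip": "192.0.2.11",
}

INSTALL_CONFIG = Template("""\
apiVersion: v1
baseDomain: ${base_domain}
metadata:
  name: ${name}
networking:
  networkType: Calico
platform:
  openstack:
    # Floating IPs created by Terraform earlier in the sequence
    # (field names are placeholders; check your OpenShift version).
    apiFloatingIP: ${api_fip}
    ingressFloatingIP: ${ingress_fip}
""")

with open("install-config.yaml", "w") as f:
    f.write(INSTALL_CONFIG.substitute(cluster))
```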
This already made the project really, really interesting. But in my opinion, there's something even less tangible than the technology that made this project interesting, and it's the fact that this part of the project was looking to standardize processes within Amadeus. This required creating a synergy between Red Hat and Amadeus. While creating this tool, there was always the conversation of whether it would be compatible with what was already done, or with the existing infrastructure and workflows. This goes along really well with the Red Hat principle of putting the client first and adapting what you do to your client's needs. So now we'll go on to the benefits and wins of this project. Okay, so the first point — and I think it was maybe the main goal of this project — is that we are now able in Amadeus to provision production-ready clusters using OpenShift 4 in our private cloud. And this allows us to move traffic from our legacy infrastructure and support thousands of transactions per second on our new on-premise cloud platforms. The second point, which is also a consequence of the first one, is that this project enables Amadeus to continue its cloud migration — first from the legacy infrastructure, but also from the existing OpenShift 3 platforms — leveraging all the great features of OpenShift 4 for the on-premise deployment. So one of the main benefits of this project as well is the automation of the OpenShift clusters, which allows us to create clusters in a repeatable way, on demand, using one single configuration file. Last but not least, the maintenance of a good relationship with the client — Amadeus, a faithful and long-time client — which matters especially during these challenging times. Thank you everybody for attending our talk. And I want to give a special thank you to one of our colleagues, Steve Odomai, who worked with us throughout this project; it wouldn't have been possible without him. Now we'll be answering questions in the chat. Well, hello everybody, and welcome back. And thank you all, and especially Thibault from Amadeus, for joining us here today. Much appreciated — it's always a pleasure to have Amadeus sharing what they're doing. And there have been a number of questions coming in through the chat and the Q&A. Shingo was asking: do the services at Amadeus have infrastructure stress tests, like chaos engineering, before release? And what are suitable tools for stress testing in a private cloud? Would one of you like to take that one? Yes. Can you hear me? Yes, definitely. Okay, perfect. So yes, we are definitely doing some tests. I wouldn't say that the testing is industrialized yet. We are mainly testing the storage part — we know that the storage can have a lot of impact, especially where etcd is stored. So there are tools like fio that are used to test the performance of the storage. Then apart from that, there is more functional testing for the applications themselves, to test their response, because it's important to test the infrastructure but also the applications running on it. Those are more functional tests, handled by dedicated teams in Amadeus. And Vishy is asking: how long have you been running this on-prem cluster, and how painful is it to upgrade the cluster, particularly for Calico, given the implementation had challenges? Is there some feedback, Vincent, Maria or Thibault, that you have on that? So the installation for this project was done last year in November, so it's been running since November. And for the upgrade with Calico, it has been as with any other SDN, so it really hasn't been a problem once it's installed. It's just during the installation that we had to verify the integration of Calico with OpenStack, and there are several requirements that Calico has in order to perform the installation. But once installed, the upgrades have been done without a problem. It's not automated, though — that's the key piece of it. I'm not seeing any other questions coming in from the chat or in the Q&A; I think you've answered them all. So if you don't mind hanging around in the chat and seeing if things come up — we are, as always at gatherings, running a little bit behind. So I'm going to thank you, Thibault and Vincent and Maria, for taking the time and joining us here. Luckily, I think it's all in your time zones — I can see light coming in the window behind Thibault, so that's pretty good, because here on the West Coast it's still pitch black. So I'm totally appreciative of you taking the time today, and I'm looking forward to hearing what comes up next on your roadmaps too. So stay tuned — they'll be in the chat and you can ask questions there if you have them.
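A quick aside on the storage check mentioned in that answer: fio is commonly used to measure synchronous write latency on the disks that back etcd, since etcd is very sensitive to fsync performance. A hedged sketch of one such invocation driven from a script — the directory and parameters are illustrative, and the fio and etcd documentation are the reference for what actually matters in a given environment:

```python
import subprocess

# Rough etcd-style disk check: small synchronous writes with fdatasync,
# run against the directory where etcd data would live (illustrative path).
subprocess.run(
    [
        "fio",
        "--name=etcd-disk-check",
        "--rw=write",
        "--ioengine=sync",
        "--fdatasync=1",          # etcd cares about fsync/fdatasync latency
        "--bs=2300",              # roughly etcd's typical write size
        "--size=22m",
        "--directory=/var/lib/etcd-test",
    ],
    check=True,
)
```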
And now we're going to queue up a talk from Solius Smanthus from SIX Digital Exchange, with a couple of Red Hatters, Radu Domno and Marcel Harri. So we'll start that one off now. And thank you all for joining us — stay tuned. And here we go, trying once again to start the broadcast correctly. I'm going to add these people; it just takes a minute to get everybody set up here, so hang on. Goodbye, Thibault. And Vincent, it's letting me remove you. Here we go — we've got two out of three going; that's not bad. So here, let me share the screen and kick off the video and see if I can do this. Share the audio. Cost reduction. There we go. All right, we're having just a little technical difficulty again, so hang on. We're going to start again and see if we can do this right. Welcome to our talk on cost and resource management on CI/CD infrastructure using OpenShift. So a few words about ourselves. My name is Solius Smanthus; I work as an SRE at SDX. My name is Marcel Harri; I work as a cloud architect in the services team at Red Hat. I mainly focus on customers within the Alps region and their own infrastructure and OpenShift workloads. Hello, my name is Radu Domno; I work as a cloud consultant in the services team, like Marcel, in the Alps region. I'm focusing mostly on OpenShift and the cloud native ecosystem. Okay, thank you guys. So we all work at SDX at the moment. This is a SIX Group company providing financial market infrastructure services in Switzerland. There are over 200 employees and we build DLT-based solutions. We're part of a blockchain network — primarily we use R3 Corda, but as you can see, we also partner with Hyperledger and the Enterprise Ethereum Alliance. So we build a number of products where we provide trading infrastructure. I will give the word to Solius to continue. Thank you, Radu. So just to finish, a few takeaways. Autoscaling, as you can see, works, but it requires proper configuration and some tuning. GitLab turns out to be a very flexible framework, which allows us to configure how software is built for developers. And we found that container-based hybrid infrastructure suits this kind of work very well. So thanks a lot, thanks for tuning in, and see you in the next one. So hello everybody and welcome back. And as everyone can probably figure out, we're running about 20 minutes behind schedule, which is pretty normal for a gathering. But I really wanted to take a moment and thank Solius and Marcel and Radu for joining us today. There have been some questions in the chat, and I'm checking the Q&A to see if anybody's come up with anything. Vishy is asking: why can't you just taint one node exclusively for this purpose, instead of this priority scheduling? Prove that one out and release it into the wild. I'm wondering if that's something Marcel or Radu could answer. Radu, you want to take it over? Oops, Radu, you've got yourself on mute. Let me unmute you. I'll unmute for everyone. There you go. Well, keep trying — hit the unmute button. Okay. So the question was about the pod priority. Yeah, so basically by tainting a node, you would not allow any workload to be provisioned there, and then it would not be part of the standard Kubernetes scheduler's selection. Thus, it would not count into the autoscaler's decision of whether or not it needs to add new nodes to the machine set. And so you need a situation where there is workload that cannot be scheduled, and then when a new node comes in, that workload gets scheduled onto it.
So also, if you have a taint, you would need to remove that taint at the moment when all the other nodes are full, and this is not something that the cluster autoscaler works on. It basically looks at whether the Kube scheduler is able to fulfill all the requested resources, and if not, it adds new machines to the machine set — or removes some if there are more than enough resources available. I think that probably answers his question. I'm looking to see if he has any follow-up on that, Vishy, in the chat. No — I thought maybe, Solius, if you had a few words; someone's asking another question in chat. Here we go: how do you handle secrets for jobs? That's a question for Solius. And Solius, you need to unmute as well. There you go, see if I can unmute for you. And you're still muted. All right, I think you have to click in, Solius, and unmute yourself. Maybe not. Maybe we'll have to pause and answer that question by actually typing in the chat. And thank you very much for the presentation, you guys. If you can hang out in the chat and answer any further questions, we'll queue up our next group of speakers here. Thank you very much for taking the time to join us today and answer the questions, and thanks, everybody, for your patience with the Hopin platform. We are running about 20 minutes behind, at least. And next up we have Bank of Oklahoma (BOK Financial), who are going to talk about modernizing a banking application using the IBM CloudPak for Automation on OpenShift. So if you're up for that — we're going to stage that now and get it running. So take care, Solius, Marcel and Radu. Thanks, and we'll talk to you all soon. Thank you. Bye. Bye. Thank you. We're going to try that again. Hello, all. I'm Faisal Kader. I'm a specialist solution architect on the sales team that manages this account. I specialize in OpenShift and related Red Hat technologies, and I'm Kubernetes and OpenShift certified. I worked closely with Lance Preston, the infrastructure architect from BOKF, in setting up the different OpenShift clusters all the way to production. On these clusters, IBM CloudPak for Automation, with its FileNet product, is hosted. I worked closely with him, helping him navigate all the challenges with the infrastructure, and provided the necessary enablement to help him complete this effort. Working together, we managed to tackle all the roadblocks, which has led to this great success story for the bank and Red Hat. I'll hand it over to Lance to share his experience. Hello, my name is Lance Preston. I am an infrastructure architect with BOK Financial Corporation, which is headquartered in Tulsa, Oklahoma. I'm here today to talk about banking app modernization via the IBM CloudPak for Automation on the OpenShift Container Platform. We started our app modernization journey in the fall of 2019, when we began planning to replace our legacy FileNet platform. I believe I'm sharing that information on the screen now; I'll talk about what our legacy platform looks like here in just a minute. Basically, at that time, what we needed was a solution that was going to be supported for the foreseeable future, as well as being more easily scalable, and that could support the needs of our business, which included adding some additional products within that content management platform provided by IBM. The fall of 2019 is when we really began talking seriously with IBM and Red Hat about building out a new environment for our content management platform.
It was at that time that they suggested that instead of just upgrading the components in our legacy platform, we look at going with containers and Kubernetes, and specifically the OpenShift platform. I'll talk a little bit about what our legacy environment looks like. We basically had the IBM FileNet P8 platform, which was version 5.2. We were running IBM Content Navigator separately as a front end to FileNet, for our users to access the documents and the content within FileNet. This was all running within WebSphere Application Server Network Deployment edition, versions 7.0 and 8.5 — we had a combination of both — and these were connecting to an Oracle database that was clustered within PowerHA. All of these were running on our IBM Power8 hardware with the AIX 7.2 operating system. I will show you what that looked like from a visual perspective here; I'm not sure if I can get that any larger. Basically, all of these are AIX LPARs on our IBM Power platform: our Oracle database, and the LPARs for the Content Platform Engine, which is where the majority of the content is served up. The content is stored on our network storage platform on site — this is all on-premise, by the way. Then the Content Navigator applications, which were load balanced and would front-end the Content Platform Engine applications. We also have a legacy piece called the Application Engine, which is used by a couple of other lines of business. All of these are WebSphere servers: the Application Engines were running WebSphere Network Deployment edition version 7.0, the CPE servers were WebSphere ND 8.5, and the Content Navigator was also WebSphere ND 8.5. These are all separate AIX LPARs in our on-prem Power8 environment. Growing and expanding this is a bit of a chore. We can obviously add additional resources to each of these servers as needed, but if we need to stand up another server to scale out the environment, that's obviously a lot of work: we have to build another AIX LPAR, we have to install WebSphere, we have to join it into the existing cluster and get the applications deployed to it. The solution that we came up with, with IBM and Red Hat, was to deploy what we consider our modern environment, which is basically the IBM CloudPak for Automation. We are currently at release 20.0.3 of the CloudPak for Automation platform. That is running in a Red Hat OpenShift 4.6 environment on our on-prem VMware vSphere 7 platform. All of our Red Hat OpenShift nodes are essentially Red Hat Enterprise Linux CoreOS VMs running in our on-prem VMware platform. We have the IBM CloudPak for Automation deployed to that environment. We are still running the Content Platform Engine, which is the IBM FileNet P8 product, but it is the later version, 5.5. It also is still front-ended by the IBM Content Navigator, which they refer to as the BAN application within the CloudPak for Automation. But these are now running on WebSphere Liberty within the containers in this environment. Here's a visual of what that looks like. All of these environments are still connecting to an Oracle database on-prem that still resides on our AIX platform on the IBM Power8 hardware. This now looks like this, essentially: we have our on-prem VMware cluster, which is a couple of dozen servers, at least, in our production environment. Those VMs are our Red Hat CoreOS nodes. We've got the three master nodes.
We currently have four worker nodes that are all running Red Hat OpenShift 4.6.16, I believe, which is the version we're currently at. These are all on-prem VMs in our VMware environment on our enterprise SAN. We have a management node that is used to deploy the applications; that's where we run the oc or kubectl command line interface for the cluster. We do also use the OpenShift GUI for management of that cluster from time to time, but a lot of the tasks that we do, we do from the command line through this management node. This is all still front-ended through our on-premise load balancer appliances, and then all of the CloudPak for Automation applications — those containers and pods — reside within this cluster. They run on various nodes within this cluster, and the persistent storage for those applications, as well as our content within the FileNet platform, is still on our NAS platform, our network-attached storage. And then those environments still connect to our on-prem Oracle database instances, which are clustered together in a PowerHA environment running on our AIX 7.2 platform. And again, this is all on-premise. So it took us quite a while to actually get to this point, to get this environment stood up. There was a pretty steep learning curve from our perspective, but we did receive a lot of help from both Red Hat and IBM — Red Hat specifically on getting the OpenShift cluster stood up and working, and any problems we had with that, and then of course IBM with the CloudPak for Automation packages that we got deployed. We struggled with it a little bit, but we were able to overcome any issues or difficulties with assistance from both Red Hat and IBM, and we now have a test/dev cluster stood up, as well as a production and a disaster recovery cluster. This image is depicting our production cluster. But now that we have this in place, scaling out these applications is much easier — we can just scale up the deployment and immediately have additional instances of the application serving our customers — and upgrading the cluster itself is relatively straightforward; it's a very simple process. And then upgrading the CloudPak for Automation packages themselves, that bundle, is a much easier process than on the legacy platform. When we used to have to do an upgrade for FileNet or for Content Navigator, that was pretty much an entire weekend: several employees working together for the majority of a weekend, which is when we could take the downtime for the application, having it offline for hours or half a day at a time, in order to do the upgrades, test them out, and make sure everything was working before we went back into production Monday morning. Now that upgrade process can occur within a couple of hours at most, generally less than an hour, and it can be done with the applications online. We just deploy the new application — apply a new configuration file, a YAML file — and the upgrade happens by itself in the background, with the old version of the application still available until the new application is fully deployed and those containers come online. It's just like flipping a switch, and backing it back out is just as simple. Much easier in the new environment than the legacy platform.
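The day-two operations Lance describes boil down to a couple of OpenShift commands. A rough sketch of what "scale up the deployment" and "apply a new configuration file" might look like when scripted — the namespace, deployment name and file name are hypothetical, and in practice the CloudPak operator reconciles its own custom resource rather than a bare deployment:

```python
import subprocess

NAMESPACE = "cp4a"                      # hypothetical project name
DEPLOYMENT = "content-platform-engine"  # hypothetical deployment name

# Add capacity by scaling the deployment; new pods come up behind the route.
subprocess.run(
    ["oc", "scale", f"deployment/{DEPLOYMENT}", "--replicas=4", "-n", NAMESPACE],
    check=True,
)

# Roll out an upgrade by applying an updated YAML; the rollout replaces pods
# gradually while the old version keeps serving until the new one is ready.
subprocess.run(
    ["oc", "apply", "-f", "cp4a-upgrade.yaml", "-n", NAMESPACE],
    check=True,
)
```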
I can't really speak to the performance of the new environment yet. While we do have this production environment built out, we are still in the process of loading content and data into the FileNet platform to serve to our customers. So they currently are still using the legacy platform, but as we migrate that content to the new platform and make it available, more users will eventually cut over, and at that point we will have a much better idea of the load and the performance that the new environment can provide. We're optimistic — we don't have any reason to believe the new environment won't perform as well as, if not better than, the legacy platform — but we have yet to have any concrete data on that. That's basically what I wanted to present to you, so thank you for your time. I appreciate you paying attention, I hope you find this information useful, and have a great day. Thank you. Thank you, Lance, for sharing this wonderful insight on this topic, and thank you all for listening. Lance, myself and my associates are available to take any questions in the chat. All right then. Lance, thanks again — really thrilled. It's really nice to see someone using the IBM CloudPaks in this manner and helping migrate some of these legacy workloads. Do you have any words to add now that you've seen this? I've seen a couple of questions in the chat, but I think they've all been answered, Faisal. So thanks again for this talk. Any final words? Faisal, can you unmute yourself? I can't see your video, but I know — yeah. So this has been a tremendous experience so far. Is there anything, Lance, that could have made it a little easier for you during this journey? Anything top of mind right now? Yeah. So specifically, I think it would have been much more beneficial to take some of the OpenShift training classes ahead of time, right before the project started, so that we had a little bit more experience and knowledge of the platform prior to going down that road. We pretty much learned it from scratch and on the job as we went along. So I think having a couple of training classes under our belt prior to starting the process definitely would have been helpful. Since we've started the journey, I have taken a couple of training classes, and they've all been very useful and very beneficial. So I think having that knowledge a year ago definitely would have been helpful before we started the project. All right, well, that's a good plug for some of the OpenShift training. There are lots of opportunities online with TryOpenShift.com and LearnOpenShift.com, as well as all the great classes and courses that are available. So thanks for that tip, Lance, and Faisal, thank you for organizing today's talk — I really appreciate it. If you can stay on in the chat, we'll queue up the next and final talk, from Anthem, on their Health OS system. And I know — everybody, thanks for bearing with us; we're barreling through this content today to get to the end of our journey, so you can all continue on yours with KubeCon this week. So the next talk is coming from a group that has been doing some really interesting work with SPIFFE and other projects. The talk is called Health OS: Enabling Standards-Based Healthcare Interoperability with Cloud Native and Zero Trust. That's a lot of words, but I've got to queue up that talk now. Thank you guys for joining us, and let's bring in the next group of speakers. So thanks again, and we'll look forward to seeing you at the rest of KubeCon today.
My name is Bobby Samuel, and I've got Frederick Kautz here with me, and we're going to talk to you today about Health OS and enabling standards-based healthcare interoperability using cloud native and zero trust. So first of all, I'm Bobby. I work at Anthem; I lead up the Health OS development as well as Precision Insights. Frederick, would you like to introduce yourself? Hello, I'm Frederick Kautz. I am a Director of Software Engineering at Sharecare, and I collaborate with Bobby and Anthem on zero trust and a variety of architectures and systems. So the way we're going to walk through this today is we'll start with the business case, or the business challenge, then we'll move into the technology, and then we'll be here to answer any questions. So first of all, what's the challenge? What's the point of all this? Health OS is something that we've created internally here within Anthem. Payers are seen as the middleman pain point across the ecosystem, causing abrasion across various user segments, whether it's providers, members, or even other payers. But we also sit in a position where we have the richest longitudinal view of data — whole-health data about the person. So Health OS helps us operationalize our health data to drive improved outcomes, reduce costs, and overall increase efficiency. We'll talk to you about how we do that, but at the foundation of it all, Health OS is a platform. It's a hub whose primary emphasis is interoperability, driving world-class experiences, and using machine learning and AI to drive insights and also actions. So, just to talk about the business architecture and how the pieces fit together: at the bottom we've got the data layer, and that data layer focuses on integrations with EHRs. It's got payer and clinical data, and our data about members, or our constituents, is based on FHIR — the FHIR standard. On top of that layer — and this is where we'll get into cloud native and zero trust — in the security layer and our platform layer, we've got a number of things running and happening. Insights and action apps live here and are created here. We've got IDEs and toolsets to rapidly build, validate and deploy health apps, and this is where, as we'll discuss, we're implementing zero trust to do workload identity management. And then on top of that we've got the interaction layer. The cool thing about Health OS — or one of the many things — is that whether it's a UI/UX that Health OS manages, or a UI/UX that someone else manages, whether it's another EHR or a homegrown app that we have, those all plug in and have the benefit of connecting back into all of these health apps, and back into the place where we've got the rich data stores. So this is the ecosystem that we've been putting together, with client application endpoints to connect, as well as our SDKs to build and rapidly deploy apps. So in our ecosystem, what are we trying to do this for? At Anthem we have a number of partners we work with and connect with in various lines of business, but the big problem is they're not connected. Anthem's connected to them, but they're not connected to each other, and what this allows us to do is connect all the apps to each other. So Health OS allows us to connect to Anthem's data ocean.
It allows health apps, insights and actions to run, and connects all these different apps. So we bring our digital ecosystem together, and we bring together the EMR systems that we connect with, as well as internal systems that exist within Anthem — all of these things working together, focused on a better outcome for the member. So let me zoom back out to what our ecosystem is and where zero trust fits in. We've put Health OS in the center once again — action apps and insight apps. An example of an insight app would be: what benefits are covered for Bobby, or does Bobby have this particular drug or treatment in his formulary? An action could be scheduling an appointment; it could be one-click prescriptions, or painless, one-click prior authorizations. So those things run together, and then, using zero trust connections, we connect to various clients. Going counterclockwise: it could be the desktop; it could be AI/ML tooling that we've got running so we can make insights available; third-party health solutions; and then even third-party clients like telehealth — which has seen a huge rise in popularity and usage due to the pandemic; and then EMR apps, or apps that do payment acceleration, as well as traditional EMR platforms in large hospital systems like Epic, Cerner or athenahealth. All of these are connected together, working together, once again focused on our members' health and improving the health of humanity. So what we'll do now is dive a bit more into how we're putting all of these things together on a cloud native, zero trust foundation to deploy this ecosystem. So Frederick, let me turn it over to you. Thank you, Bobby. So before we jump into zero trust, let's talk a little bit about some security basics. Very often when you speak with a security or information security person, you'll hear about the CIA triad. We actually look at four things now, but the first three — the CIA — are what people would traditionally look at. Those three are: confidentiality — is the information protected against unauthorized viewing or access; integrity — has the information been modified in a way that was unauthorized, and how do we protect it from being modified; and availability — is the information available when you need it to be. And there's a fourth thing that has been added in more recent times, called non-repudiation, which is: how can you ensure that an entity that has performed a transaction cannot back out of that transaction? There are multiple reasons for this. At the business layer, it could include preventing fraud — how do you ensure that you can observe the system and know what the state was likely to be? It could also be about making sure, when you're looking at security systems, that you know exactly who you're connecting with and that it hasn't been swapped out with someone else. So in general, there are now four main categories that people tend to look at — there are a couple of others that people bring in as well, but these are the main four you tend to see. Using this particular framework, we then take a look at the business requirements: what is it that we're trying to protect, and what has changed? So when we look at the zero trust space and why it's important, one of the things we want to look at is what are
the previous assumptions that we've made, and what is the reality that we're seeing today — what has changed? The differences between those assumptions and reality can be seen in the form of cyber attacks, where people perform data breaches, run ransomware, run denial-of-service attacks, forge identities, and so on. And the policies that we tend to apply, from a regulatory or policy perspective, may also end up ossifying some of those assumptions — entrenching them in such a way that they can be difficult to respond to. So zero trust is about realizing that we have these gaps, and then building up a new framework that is more flexible, in order to allow for a response to these types of conditions and to allow additional controls to be put in — in such a way that it enables other parts of your organization, your digital organization or your developers, to make the changes necessary to meet your mission, while at the same time still maintaining the control to hit your confidentiality, integrity, availability and non-repudiation goals. So what is zero trust? I try to distill it down into a small image, and this is the simplest I was able to find. In the top half of this you have perimeter defense, which is the common gold standard that you see in many environments. That is where you have a trusted network; in that network you have your services; and if you need to connect to another network, you may put a firewall in between them in order to protect entities in one network from entities in another. But the problem is that if you end up with an attacker inside one of these networks, then there's a lot that they can do — a lot of damage that can be done. In the zero trust model, instead, what we say is: well, what if that network was not trusted — not implicitly trusted? That doesn't mean the firewalls go away; it doesn't mean that you're not trying to protect the network. But it means you're no longer saying this network is the implicit thing on which we base our trust. So once you no longer trust your network, you have to look at where you push the controls, and the controls end up being at the services themselves. If you look at the bottom half of this, you can see every service, when it connects to something else, has some form of something resembling a firewall — a control that allows you to determine what you want to send over those links to those other devices. If an attacker enters your network, again, that doesn't mean you're no longer at risk, but it's yet another layer of security that doesn't allow implicit access to things simply because they're on the network. So to build our zero trust framework, we started with three main foundations: identity, policy and automation. Identity is: what is it that identifies your service, your user, or your data? How do you know that what you're looking at is the thing you're looking for? How do you attest that identity? Policy is: how do you develop the rules, and apply and enforce those rules, across those identities? And from the automation perspective: say you have a single system — you can put a person on that system to defend it. But when you start to scale this out to a large number of systems — hundreds, thousands, tens of thousands of systems —
you need to have automation in place that is able to help you assign the identity and enforce the policy, but also bring in things like observability, so you can audit what's going on and have controls over what the automation is capable of doing and what it's not able to do. So it ends up being three intertwined primary pillars that have to be put together in order to build a zero trust framework. We've been working on a reference implementation for this in the cloud native environment, and we focus around three primary things. If you notice, in the triangle I actually made them link up, so you can see: for identity we're using SPIFFE and SPIRE, for policy we're using Open Policy Agent, and for automation we're relying heavily on things like Network Service Mesh. Now, these aren't the only things in the infrastructure, but they're representative of the type of things we're trying to accomplish, so we'll go over each of these in more detail soon. We also build this on top of Kubernetes; we build it on top of systems like OpenShift; we build in automation on the infrastructure side with GitOps-style processes that we're bringing in; and underpinning all of this, you still need observability across the whole stack, and you still need control over the whole stack. So it ends up becoming this model that this slide represents, all working in coordination to deliver the infrastructure that is part of Health OS. So what SPIFFE and SPIRE do is provide identities to your workloads. Most people are familiar with user identity: you put in your password, you log into an online service, you have that user identity. In this scenario we're looking at workload identities. Every workload receives an X.509 certificate — the same type of certificate that, when you log into your bank, the bank uses to identify who it is. So we're relying on the same type of primitives and principles in order to secure the workloads. When one workload connects to another, they use mutual TLS — with TLS 1.3, and presumably the versions above as those are released. Mutual TLS is where your client is able to validate your server, like you typically can from a browser validating your bank, but simultaneously the server is capable of validating the identity of the client. So you have this two-way validation that occurs within a trust domain. We're able to create these identities that live within a trust domain and allow workloads to establish who they are, and these identities are constantly rotated: every hour they get rotated out, by default, if you're using SPIFFE and SPIRE. And every time that you assign a new certificate, you perform a verifiable attestation. What we mean by that is that when a system asks for an identity, we look at the properties of that system. You might have a TPM module that you're working with; you might have an identity document within AWS or GCP or other similar systems that have some cryptographic material inside of them that helps prove something about that system. So we are able to build our SPIFFE identities with attestation that is rooted in these cryptographic materials from those types of systems. This also has a very nice effect: since we're performing this mutual attestation and validation between systems, in many scenarios it reduces or even eliminates the need for long-lived bearer tokens. In other words, you don't need to pass in a secret — the fact that you're connecting with a specific identity is enough for the system to recognize what type of system it is and what type of policies need to be applied.
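To make the mutual TLS idea concrete, here is a minimal Python sketch of a server that both presents its own certificate and requires one from the client, then reads the peer's SPIFFE ID out of the certificate's URI SAN. In a real SPIFFE/SPIRE deployment the certificate and trust bundle would come from the SPIRE Workload API and rotate automatically rather than living in files; the file paths and the expected SPIFFE ID below are hypothetical.

```python
import socket
import ssl

# Hypothetical paths: in practice these would come from the SPIRE Workload API.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.minimum_version = ssl.TLSVersion.TLSv1_3
ctx.load_cert_chain("svid_cert.pem", "svid_key.pem")   # this workload's identity
ctx.load_verify_locations("trust_bundle.pem")          # trust domain CA bundle
ctx.verify_mode = ssl.CERT_REQUIRED                    # mutual TLS: client must present a cert

ALLOWED_CLIENT = "spiffe://example.org/health-app"     # hypothetical SPIFFE ID

with socket.create_server(("0.0.0.0", 8443)) as srv:
    conn, _ = srv.accept()
    with ctx.wrap_socket(conn, server_side=True) as tls:
        # The SPIFFE ID is carried as a URI subjectAltName in the client cert.
        sans = tls.getpeercert().get("subjectAltName", ())
        spiffe_ids = [value for (kind, value) in sans if kind == "URI"]
        if ALLOWED_CLIENT not in spiffe_ids:
            raise PermissionError(f"unexpected peer identity: {spiffe_ids}")
        tls.sendall(b"hello, trusted workload\n")
```

The client side is the mirror image: it loads its own certificate, verifies the server against the same trust bundle, and checks the server's SPIFFE ID rather than a DNS hostname.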
In terms of policy, we're looking at things like Open Policy Agent. Open Policy Agent allows you to consume the identities that are produced by a system like SPIFFE and SPIRE, and allows you to decide what this system is allowed to do — what are the capabilities that it is able to fulfill. And the properties we were looking for when we evaluated these systems — which led us to Open Policy Agent — are that it has to be human readable, and it has to match the look and shape of common policies. In other words: how do you classify data, how do you classify workloads, how can you say this system has PHI and create defaults that say don't allow it to connect to systems that don't have PHI, or vice versa? And then from there we can carve out the patterns that the system is allowed to perform. In this example — we took this from openpolicyagent.org; it's one of the examples they have on their front web page — you can see a request that says pet owners, with a specific ID that is verified by the JWT (which is something that identifies the user cryptographically), are allowed to receive information, or allowed to make a request against this API in a specific way — say this is in front of a database — if and only if the request comes from a client or a workload that we have identified. So it gives us a lot of flexibility to define the exact shape of the policies that we want in a human-readable way. It also allows us to get this policy into Git: it allows us to have code reviews on these policies, to share them with other stakeholders so we can get their opinions on whether a policy meets their requirements or not, and it gives us that change over time, so we can see how the policy has changed and when it changed, because it's all checked into Git.
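One way to picture where OPA sits in this flow: the workload gathers what it knows about a request — the subject from a verified JWT, the workload identity presented over mutual TLS, what is being asked for — and sends that as the input document to OPA's decision API, getting an allow or deny back. A hedged sketch of that call follows; the policy path, input fields and port are hypothetical and depend entirely on how the Rego policy is actually packaged.

```python
import json
import urllib.request

def is_allowed(user_sub: str, client_spiffe_id: str, method: str, path: str) -> bool:
    """Ask a local OPA agent for a decision (all names and fields hypothetical)."""
    payload = {
        "input": {
            "user": user_sub,                # subject taken from a verified JWT
            "client_id": client_spiffe_id,   # workload identity from mutual TLS
            "method": method,
            "path": path,
        }
    }
    req = urllib.request.Request(
        "http://127.0.0.1:8181/v1/data/healthos/authz/allow",  # hypothetical policy path
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # OPA returns {"result": true/false} for a boolean rule.
        return json.load(resp).get("result", False)

# Example: only proceed if policy says this caller may read this record.
if is_allowed("bobby", "spiffe://example.org/health-app", "GET", "/pets/bobby"):
    print("request permitted by policy")
```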
We also rely on a newer technology called Network Service Mesh. Network Service Mesh is another CNCF project that is looking to automate low-level networking systems. If you're familiar with the OSI model, we're looking at layer two and layer three — frames in Ethernet, IP, and other similar levels — and what it does is facilitate the underlay for services. Typically, when you're running in Kubernetes, you'll often have multiple clusters you want to connect together in some way, and when you connect them, the assumption is that there's already connectivity established between both systems. What Network Service Mesh allows you to do is acknowledge that there may not be a connection there, and that you may need certain things in place in order to make that connection work. So this allows the operator to say: in order for this connection to occur, I need to have a firewall, an intrusion detection system; it needs to go through a certain VPN gateway, a certain VPN concentrator. Network Service Mesh allows you to automate these processes through a cloud native API, with native support for SPIFFE and SPIRE and Open Policy Agent, and it provides cryptographic non-repudiation of that connection chain. In other words, in this example we have on the left a Health OS app going through a specific VPN gateway to a specific VPN concentrator to a specific health app, and we can get the cryptographic identity of everything in between and see what the system is connecting through. Is it connecting through systems that we trust? Is it connecting through the systems that are required? Do we have everything in here that we need in order to establish the connection by policy and enforce it on an ongoing basis? Finally, we look at GitOps. From a GitOps perspective, the workflow is more of a process that is then committed as a service. On the process side, you have a developer who makes some form of a commit into the source code system, such as Git. Then the CI/CD system — your continuous integration system — sees the changes that have been put into Git and renders them into your test environments, your staging environments, your production environment. Every change goes through source control; every change goes through Git, which gives us auditability — it gives us that chain of who made the change. We also have control from the QA side. In fact, when you're looking at regulatory concerns in this space, it's important that your developers are not allowed to push into production. You have to have a separate group of people, a separate team, that is able to look at what changes are there and decide whether or not those changes should hit production. So when you start looking at PCI-compliant or HIPAA-compliant systems, you tend to see this pattern quite commonly, so that you don't have a single place or a single person who is able to push these types of things in. The QA team is then able to determine at what rate and when something is promoted from testing to staging, or staging to production. A really great example of a system you can use to achieve this in your own infrastructure is Flux, so I highly recommend that you go look at Flux and give it a try. It hooks up to GitHub and gives you that initial path towards automating in this style. So with that, I want to thank you all for joining us and learning a little bit about Health OS and zero trust. Please consider that these are the types of technologies that we're using: OpenShift with Kubernetes, SPIFFE and SPIRE, Open Policy Agent, Network Service Mesh, Envoy. Please join these particular communities — there are a lot of things you can work on in those spaces — and if you're interested in the type of things that we're working on, please reach out to either Bobby or me, and we'll help you navigate the path, whether it's coming to work with us directly or trying to work in the same area in your own industry. So please come and join us. With that, we have time for questions, and thank you very much. Well, hello all, and let's see if we can get ourselves unmuted. There we go, Bobby, there you are. Stu, I hear track two has ended successfully — thank you very much for that. We have a bit of time left here before the clock runs out on us, so I'd be curious, Frederick and Bobby — I know I had asked you earlier, and there is not quite yet a landing page for Health OS itself — how can people get in touch with you if they're interested in this initiative and want to get going? So there are multiple ways to get in touch with us, but it depends on the level of interest: where are you interested in working, and what is it that you're looking to learn more about? So Frederick, why don't you start with the open source, and then I can talk about the other pieces?
Sure. So in the open source space, there are a few communities that have been focusing on building the core components. Starting from the top, we have SPIFFE and SPIRE — if you go to spiffe.io there's a range of places where you can interact, including GitHub, and there's a Slack channel; all of these have their own Slack channels as well. On the policy side there's Open Policy Agent, which I believe is openpolicyagent.org. Network Service Mesh is another community, focusing on the automation of the layer two and layer three components that you saw, which is networkservicemesh.io. And we also have the broader communities as well, such as Kubernetes, the CNCF, and the OpenShift Commons, that you can join and help at various levels. There's also a variety of other downstream communities that get consumed — for example, in Network Service Mesh there is now support for WireGuard, and WireGuard has its own community attached there as well. So there are a lot of different places where people can focus on helping out, at a variety of different levels. If you're interested in the open source area and you're feeling overwhelmed with where to get involved, or you want to know, based on your use case in the context of zero trust, where a good place to start would be, don't hesitate to reach out to me — I'm happy to help guide you through, with introductions or meeting people as well. So we want people to get involved. And if your interest isn't specifically in one of those areas, and you've got a use case, or you want to learn more about what we're doing — to Frederick's point — you can reach out to us, connect with us; we're happy to have conversations and talk about where you'll fit in. We're looking for people, and want to work with people, who are mission driven, because this work is not easy. Change, as we saw in the previous few presentations, is difficult — and we kind of glossed over that; we're at the end of a change process, but change is tough in the middle of the transformation. So mission-driven people who are committed to coming on the journey are what we're looking for, and we'd be happy to chat with you about it. Awesome. Well, I did notice a mention of GitOps in your presentation, and yesterday's co-located event was GitOpsCon — I'm not sure if any of you in the chat attended that — but I'm wondering, Bobby, if you can talk a little bit about where you are in the journey around GitOps at Anthem, because I think that's top of mind for a lot of us. So, I've been in other pasts and other histories, and we haven't always had the luxury of starting with a green field — in most of my career we've started with legacy applications, or applications that existed before I was part of the team. In this case, for Health OS, we were able to start green field in the spaces we were working in, so we are completely cloud native, and we had a great chance to start foundationally from the beginning. The first team we built was our DevOps, or GitOps, team, and we started with automation in mind. So our CI/CD pipeline is continuous — well, the D part, full continuous deployment, isn't something that we subscribe to — but we went from releases that were once a quarter to, now, big feature releases,
because it's a platform, and when you release you have to be careful what you're putting out there — but some of our teams are now releasing monthly, and we've actually got one team that is releasing seven to ten times a week through their GitOps, CI/CD pipeline. So it's been interesting; it's been a journey. With automation, things don't always work the way you would want them to, or imagine them to, so we engineer around them. It's taken some resilience, and we're thankful for those tools, but we also want them to get better — which is another way of saying, why don't we get better. One of the places where we have struggled is in the management console space, and in just creating awareness around the clock, so that when something isn't going to work, or something has failed, we're the first to know. That's been a lot of our focus: it's not just getting the pipeline, it's the monitoring of the pipeline, so that every event becomes less and less of an event — every push becomes less and less of an event — and it's scriptable. We also look for tools we can script, so that it's not manual intervention and typing in IP numbers or whatever else you might imagine. So it's been a journey, and we'd love to showcase it at some point and show you how we're doing it. Oh, well, we'll definitely give you the podium to do that again, because I think that sharing of those stories and improving the tools are really what we're here about today, trying to make sure it happens, and your feedback is sort of essential. I know, especially with the GitOps stuff and the things we're doing here at Red Hat and in the open source world, that's been top of mind for a lot of folks — the monitoring and the notification pieces are essential. So, Frederick, any final words of wisdom? I know we've had you on many times before, and it's always interesting, always kind of bleeding-edge stuff that you bring us to. Are there any things going on at KubeCon, since that's top of mind here too, that we ought to be watching for? So, yeah, KubeCon is quite an event. I know it's still pretty hard to do some of the engagement because of COVID-19. I highly recommend, if you're not on any of the Slack channels, get on them — people are on them all the time, and they have a set of channels related to KubeCon. If you want to meet people and have conversations, I strongly recommend that you hop on and just say, hey, I want to talk about zero trust, and jump into the appropriate channel, or I want to talk to the people from SPIFFE or OPA or similar — find the room that has the most people in it with adjacent relations to those topics, just ask away, see who answers, and establish some form of communication with them. We don't have those hallway tracks right now, at least not until the next one, and the hallway tracks are one of the most important parts of KubeCon, so we need to find ways to work around the issue. Other than that, there are a lot of really amazing talks — there always are; it just depends entirely on your interest, so definitely find the ones that interest you. I think that was some of the best advice too:
the Slack channels around KubeCon. If you're in the Kubernetes Slack, the OpenShift Users or OpenShift Dev channels there are really pretty key. And if you're not yet an OpenShift Commons member, you can always go to commons.openshift.org and fill out the little form there, and we'll onboard you into yet another Slack channel, OpenShift Commons, which is where Frederick and I and others hang out and chat all the time. It's a great way to meet people who are doing these things inside of OpenShift, but all of it's being done in the open, so please do reach out, connect with your peers, and definitely keep us involved, Bobby, on the Health OS stuff — we'll definitely put you back on stage and give you a venue. All of the talks from today will be published via YouTube shortly, and we'll tweet out a little notice when they're all live. And I wanted to take a moment and say what Stu has probably been saying all day long: may the fourth be with you all, and thank you for sharing May the 4th with us. It's been a very Star Wars kind of interesting day for us on the main stage — quite a few little glitches here and there — but thanks for sticking with us; we really appreciate it. We so appreciate having end users come and talk about these technologies, rather than having Red Hatters talk about them. It's wonderful to hear it from the people who are actually on the front lines using the stuff. And what I'm really loving about today is how many of the end users — people like Bobby and Frederick and Solius from SIX, and everybody else, the folks from Discover, and I think you saw it in your track too, Stu — are working in the open source now. I think that's the really big change that we're all seeing: how many more of our end user organizations are now participating out there in the open, working on and contributing directly to these projects, and driving a lot of them. So I think that's really the big change in open source for us — it's not as much of a vendor-driven space as it was in the past, though all of us vendors are out there and contributing too. You heard from Clayton and all the folks talking about the different projects that Red Hatters are involved in, so there's plenty of us out there, but there are even more end users now getting involved and kicking off new initiatives — Health OS, for example — just about every single time we host a gathering. Last time it was the Enterprise Neurosystem group, AI ops in the telco space; they had their foundation and they were all coalescing around that. It's an amazing world out there right now, and I'm really grateful to everybody for sharing their stories and their contributions. It's always enlightening, and much more interesting than hearing me blather on. And Stu — as always, you're new to this game in the Commons, but you've been doing theCUBE and all kinds of other things and market insights — thanks; any final words from the Star Wars fan that you are? Well, Diane, hey, it was great to be a part of this. I always loved going to KubeCon. As you said, that kind of user-created collaboration is behind so many of the projects out there: I had a challenge, nothing existed, I created something, and then found out other people had the same problems. You know, listening to the Argo CD folks — it's like, oh hey, Intuit, let's hear what they had; I had interviewed them back in the past. So that co-creation — I loved all the sessions over on track two, and especially, Diane, I know you've got a special place in your heart for the OKD
content there. So that was a really good discussion about just how much effort and work and giving back went in before they eventually got things into production. So I had a lot of fun over there, and yeah, had some good Star Wars humor as part of it too. The joke is, most people are like, hey, today is Star Wars day — in my household, you know, it's Star Wars day every day. So, yeah. Well, it was the Red Hat version of the Darth Vader mask behind you that really got me going when we first started talking about all this, so I'm totally grateful. I can't wait till we can all be back in person again. And a huge shout-out to everybody — Chris Short, who's out there still streaming on OpenShift TV for us and producing us in the back; I'm really grateful for that support. And now we have three more days of KubeCon, and KubeCon is rolling behind us in the background as well, so jump into those KubeCon Slack channels, look for us in the Red Hat expo booth or whatever it is that they set up for us, and we'll be taking your questions there. And if you're interested in having the podium — my favorite thing to do is make other people talk — so if you have a talk or a project or a tech initiative that you're looking to get some awareness of, please reach out to us and we'll definitely give away the podium. That is the whole point of OpenShift Commons: it's all about the end users and getting everybody connected. So once again, thank you all — Frederick and Bobby and Stu — for all your efforts today; totally appreciate it. It is now officially 7 a.m. where I am, so I am going to get another cup of coffee and try to stay awake for the rest of today. So thanks, guys — much, much appreciated.