Alright, hi everyone. Welcome to the OpenShift roadmap update for 2021. Thank you for joining us in OpenShift Commons. I am Rob Szumski. I'm joined here with Michael Elder. You want to say hey? Hey everybody. And today we're going to talk about what's going on in the OpenShift universe for this year. We're also joined by Jamie and Nina, who are going to pop in and out and give us some demos of some cool stuff while we talk about that universe. So I want to roughly set up our conversation to be around something that we just announced in May of this year, which is OpenShift Platform Plus. And that really is bringing the OpenShift platform up a level into the multi-cluster arena, realizing that you need multi-cluster security, you need a global registry, and you need multi-cluster management to be successful in today's hybrid cloud world. And so we're not going to focus on the actual product today, but we're going to talk about all the different pieces of that, what's going on in those upstream communities, and some cool features along the way. So I want to talk about this idea of standardized tools, because you're not just running one or two clusters, but you're going to run maybe 100 clusters. And getting tools and policy and management down to those clusters is extremely important, because today's applications are more spread out and more distributed than ever, clusters are more connected than ever, and that means that you've got to get that fabric orchestrated correctly. So we're going to talk about all this stuff, some of the tools that are available upstream in both Kubernetes and other ecosystems, so it should be a pretty interesting day. 
And you're going to see these two icons as we go through: this parachute, that's when we're going to zoom all the way down into a single cluster, because, you know, what's happening on a disk of an operating system, or a policy that makes its way all the way down into the nitty gritty, is just as important as zooming all the way back up into that multi-cluster world with this rocket ship. And you know, the work that we do for fast packets for maybe like a telco workload is beneficial to your application, even if you really are up at the multi-cluster layer. So you've got to build all these capabilities up, and they're all important up and down. So let's jump into it. Security: what's happening upstream? Two main things. First, with version 1.21 of Kubernetes, pod security policies are deprecated. Now, this doesn't mean that they're going to be removed yet. I think that's going to happen in 1.25. But a few things to note here: the replacement for this is probably going to be a little bit more simple than what you've been used to. And so complex users might like some of the features of the Open Policy Agent. We're going to talk about how that fits into some of our tools later today. And you can run that today on OpenShift just fine. And then, of course, OpenShift SCCs, that's security context constraints, are unaffected by this. This is kind of what pod security policies were modeled after in the beginning. So we kind of had your back before, we've got your back now, and we'll have your back in the future too. That's one of the cool parts about innovating inside of OpenShift directly. The last big change is user namespaces. This is a kernel-level construct that's made its way into Linux. And together with SELinux, that helps protect your namespaces from each other on the cluster. Now, I don't want to get into too many of the technical specifics, but in the CRI, which is the container runtime interface for Kube, this is now there. 
And so this is the default for talking to runtimes. And CRI-O, which is what we use in OpenShift, can do the user ID mapping in and out of the container, which is what actually does that user namespacing. So we're waiting on Kubernetes to roll that out, and then that's going to make its way down into OpenShift as well. So the proposal there, the KEP in upstream parlance, is still moving forward. Some of our engineers are pushing that, so that's one to take a look at when that comes down. All right, other big news in the upstream arena and the Red Hat world is Advanced Cluster Security. This is based on our acquisition of StackRox. We're super excited to have these folks join the Red Hat family and build this into OpenShift and OpenShift Platform Plus. So we're starting the open source process right now for all the StackRox tools. And we're really excited about this because it's the most Kubernetes-native security product out there. We're going to talk about some of the cool features and see a quick demo of it in action. But it really speaks to this idea that you've got to secure your entire supply chain, from the shift left into where you're actually building code and building containers, into when they're actually deployed, and then after they're running, because malicious things can still happen even if you pass a container scan, for example. And so what this does is it has a combination of watching host and cluster state. There's some cool technology here, if you've heard of eBPF. This is kind of the Swiss Army knife for doing host probing into what the kernel is doing. So we take advantage of that, plus at the Kube layer, admission controllers, looking at the audit log and some other cluster events, as well as your typical image scanning. So we're going to see a really cool demo of that in a second. So let's parachute on down. What does this look like in a single cluster? 
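One quick aside on the SCCs mentioned above: a security context constraint is just another cluster-scoped resource, so a locked-down profile looks roughly like the sketch below. The field names come from the OpenShift `security.openshift.io/v1` API, but the specific values and the name here are illustrative, not a recommended baseline.

```yaml
# Illustrative SecurityContextConstraints sketch -- values are examples only.
apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
metadata:
  name: example-restricted
allowPrivilegedContainer: false    # no privileged pods
allowHostNetwork: false
allowHostPID: false
requiredDropCapabilities:          # capabilities every pod must drop
  - KILL
  - MKNOD
  - SETUID
  - SETGID
runAsUser:
  type: MustRunAsRange             # UID must fall within the namespace's range
seLinuxContext:
  type: MustRunAs                  # SELinux labels assigned per namespace
fsGroup:
  type: MustRunAs
volumes:                           # allowed volume plugins
  - configMap
  - secret
  - emptyDir
  - persistentVolumeClaim
groups:
  - system:authenticated           # who this SCC applies to
```

Pods that don't satisfy a matching constraint are rejected at admission, which is the same enforcement point pod security policies used.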
So I talked about runtime security as one part of your kind of threat matrix here, and this is a new capability for OpenShift, completely new. And what this looks like is, you know, let's say we've got a pod and it's got Python in it, and we've got an application, and there's a vulnerability that gives you some local execution rights inside of that pod. You know, if you're going to move left and right around in this environment, what you need to do is have some tools. So you might pull down a netcat binary and then start using it to wreak havoc. So here you can see how that actually works, because, you know, you're going to go down into the kernel when you're running your Python, but you're also going to run that netcat. And an eBPF probe from a security sensor that's installed is actually going to block that pod from executing that netcat. So this, you know, means that, A, that binary can't be run if it matches your policy, but it's also going to protect network traffic moving left and right as somebody wants to exploit other applications that might be in this environment. And so we're going to look at a demo, so I'm going to hand it over to Jamie for more. Hi team, welcome to Red Hat Advanced Cluster Security, or as we call it, ACS. To set some context, this is a cyber kill chain. It ranges from reconnaissance, which is understanding your victim and the system architecture and what you're doing when you're trying to attack based on your objectives. In the world of security, there's always a reason for an attack. The reason can be anything from denial of service, stealing data, installing crypto miners for financial gain, or really just because you can. The US Council of Economic Advisers estimated in 2018 that malicious cyber activity cost the US economy between $57 billion and $109 billion in 2016. Since 2016, it's only increased. So let's think about this in terms of an onion, or defense in depth. Security should always have layers to it. 
There are many spots to catch attackers along each part of the kill chain. Today, we're going to focus on reconnaissance through exploitation. First in the cyber kill chain is reconnaissance. We're going to find out everything we need to know about the application to start to exploit it. Then we're going to search for vulnerabilities to weaponize, create a backdoor, and deliver our exploit to get it up and running. Attackers afterwards will frequently look to establish a foothold in their position by installing malware, attempting to escalate privileges, and establishing a command and control server. So this is ultimately going to get them to the core objective, which can really vary based on their needs. Containerization as a movement has really helped the security team. Containerization has isolated our attack surface. It's taken post-exploitation activity and made it harder to move through the environment, because our containers are self-isolated. This has only helped the security community, and now it's time for this community to take advantage of it. So today, I want to walk you through the anatomy of an attack from a defender's perspective and that of an attacker, as a defender looks to investigate an issue, protect against reconnaissance through delivery, and monitor for exploitation. I also want to show you an attack as it could happen in the wild. So let's get started. So I'm going to transition over to Advanced Cluster Security here. And as an incident responder, I want to check the violations. Now I'm going to search right away for visa-processor. And as I look at visa-processor, I start to get concerned. I can immediately see that a shell was spawned by a Java application in the environment, and this concerns me. So I want to investigate further. I click on it, and I can see the Java command and the bash command were executed with several different arguments. I go down, I start looking at those arguments. 
And I can see really clearly here the container ID, the user ID, and that this user is escalating their privileges. They are installing netcat, and then they're using netcat to initiate a reverse shell in the environment to shell.attacker.com. Well, at this point, I'm reasonably confident that we've been attacked. I'm starting to initiate my incident response procedures and I've lit up the SOC. And for those of you who don't know, the SOC is the security operations center that responds to these types of incidents. But right now, now that I'm initiating this procedure, I want to get more context. I want to see how it is that someone has a reverse shell in my environment. So I go over to the deployment and I check out the container configuration. And immediately as I look at this, something jumps out at me: the CVE data may be inaccurate. This image is literally so old that the source is no longer providing updates. So you can see here, the base we're on is no longer maintained. We can also look and see, oh my gosh, this is Apache Struts. So one of the top riskiest components is Struts, which was a headline vulnerability in 2017. This was the cause of the Equifax breach. And I know that someone is initiating a shell. So at this point, I'm reasonably confident that someone is exploiting Struts in order to get into my environment. So I need to address this. Let me switch to the developer persona for a second. So I scroll down. I have a ton of image findings. I can see the CVE, I can see how long it's been in my environment, if it's fixable, and its CVSS score. And this is really cool information. But as a developer, this isn't enough for me. So I'm going to go over to my Dockerfile, and I'm going to check out where these CVEs are. So as I look here, I can see the components associated with my image. I can see that there are several CVEs. There are 137 CVEs in what appears to be my base image. 
And in order to address these, I'm going to have to upgrade my base image. And whatever CVEs are addressed in that upgrade, I will resolve here. I'm also going to check out my run instructions, because this is where my application is. And I can see here, if I were to upgrade curl, then I could address several CVEs in curl. And all I need to do is modify this run instruction. So this is really cool. It helps me target where in my Dockerfile I need to address these issues. But it's still not enough. What do I upgrade my component to? So now I'm going to switch over to the component screen. And you can see Struts highlighted here. So Apache Struts has 10 fixable vulnerabilities, and 38 vulnerabilities overall. And in order to fix those 10 fixable vulnerabilities, I have to upgrade to version 2.3.29. You can see its top CVSS score of 10. That is terrifying, and that's a critical vulnerability. You can see the location of Apache Struts, and you can even see this component is used in four different deployments. This might not be the only place that I need to address this. So one of the cool things about this view is that you can see Struts is a Java vulnerability. So this is a language-level indication, and the remainder are packages installed on our host operating system. So that's really cool. I know exactly how to address this, but it's not in my traditional manner of addressing this. So Red Hat Advanced Cluster Security provides developers an easy tool to look at this information in their CI or on their local host and understand that this is something that they need to address. It gives them the context to address it, so we can shift left and improve our return on investment in our vulnerability management program, which is ultimately going to address the need to prevent reconnaissance and potential weaponization. So it's really cool, but it's time to go back to our story about Struts. It's time to switch to an attacker's perspective. 
What happens when someone is trying to exploit one of my applications? So here I have a website running. Now, it's not always going to be as easy as saying this is an image that's vulnerable to Shellshock, please exploit it, but sometimes it really can get that easy. So I'm going to conduct some reconnaissance. I'm going to go over to the developer tools, I'm going to check out this website a little bit more. I'm going to refresh the page. I can see, oh, my server header says that this is based on Apache 2.2.22. Easy fix, guys. Let us go to Exploit-DB. We're going to search for Apache, if I can spell. And we're going to look through all of the vulnerabilities known to be associated with Apache. And now I can start selectively looking at things that are known vulnerable in the environment and start to understand. So if I click on the top one, you can see this is available in Metasploit, which is a tool that attackers use. I can easily just download this exploit and start to test to see if this functions within my environment. One of the really cool things about this is you can see the code live. So we have pre-baked exploit code on the internet, publicly available in a database called Exploit-DB. And this has shifted the paradigm of how attackers attack, because no longer do you have to be technically elite in order to exploit major vulnerabilities. All you really have to do is know how to copy-paste and read code and be able to modify it somewhat to meet your own needs. Now that's really cool. But back to Shellshock. So here I want to show you an exploit of the Shellshock vulnerability. Now, I'm a good attacker. I need to let these users know that their site is vulnerable. So I'm going to go deep into some subreddits, I'm going to find a good meme, and I'm going to exploit them to let them know that I care. Now, I could have installed a crypto miner, I could start to escalate my privileges, but I'm going to be kind. 
I'm just going to deface their site today. So let me go to the terminal and I'm going to execute this command. And you can see here, all I'm doing is defining an environment variable through the User-Agent header. I am exploiting Shellshock by issuing some trailing commands, echoing for legibility, and I'm catting the index.html file. So here at this point, I've got a shell. I can begin to move throughout the environment, but I'm just going to have a little bit of fun. I've already gone on some subreddits, and I'm going to deface their site. I refresh the page, and you can see quite easily: I is in your computer, I am stealing your data. We have pwnage. Let's go. So that was really easy, and in reality, it can be that easy in the real world. So let's switch back to a defender's perspective. Now, if we go to ACS, we can see that a violation occurred, really easily, by clicking on the violations. And the top one here is what I just did. It's an unauthorized process execution running in the shellshock deployment. And if I look here, I can start to see: oh, this person is putting cat pictures from Reddit in my environment. That's really cool. Now I can initiate my response, but you know what? If I had this in blocking mode, it would have been even easier. Advanced Cluster Security takes a different approach to blocking: we can kill this pod. So if this deployment is running with replicas, then immediately once this activity occurs, I can kill this pod. My attacker's defacement of my website is then reverted back to what is normal. And I have alerted my security team, I've fixed my website, and if someone used this to install a potential backdoor in my container, well, I've removed that. That's really cool. I've instantaneously addressed several issues. So let's pretend for a second that I wasn't just defacing a site. A lot of you are looking in the audience right now and saying, Shellshock is old news. Someone out there is saying, Jamie, that came out in 2014. 
This is super old. Why are you even showing us this? Well, to me, this came out three months after I started in cybersecurity. This was my introduction to major vulnerabilities. So I figured, why not make it my own? This is a real-world example of something that happened to me. So this, and server-side request forgery attacks, could easily lead to a container compromise the exact same way I showed you today. If that container is privileged, then look out, because as an attacker, I'm going to go right for some authentication credentials. I am going to try to escalate privileges from there. And maybe I can even get my hands on a nice old kubeconfig. And if I get my hands on a kubeconfig, then I've got authenticated access to the Kube API. And I can exec into a pod to start to laterally move and maybe even install a backdoor or some malware. So let's try that here. I have access to the Kubernetes API. I'm going to do an oc exec -it into the shellshock deployment, and I'm going to run the bash command. Wait a few seconds. And what is this? You can see here, I'm immediately blocked by an Advanced Cluster Security policy. Now, that's really cool, because I've effectively told the attacker: no, you can't install your malware, you can't exec into a pod, you can't get access to my resources through the Kubernetes API. Now, because containers are ephemeral, the ability to exec into a pod generally isn't needed. It is sometimes, for troubleshooting purposes, but in general, it won't be. And by being able to block and monitor exec commands, you can set policies to monitor access to your container infrastructure. And you can also set an exception process at the deployment level to allow legitimate troubleshooting activity. So let's switch back to the UI for a second. You can see here at the top that this is an enforced action. My security team has been alerted that this was blocked. 
And we can look to understand more, to see: hey, is this a compromised credential, or has one of my employees made a mistake? So, a few last notes about the Advanced Cluster Security advantage. If I go to our policy sets, we come with a broad set of out-of-the-box policies with reasonable security controls to help you scale security in your OpenShift environment, and customers use these every day to improve how they operate within the Kubernetes and OpenShift ecosystem. We also help you understand and prioritize risk in your environment based on a holistic risk management strategy. So if I click on risk here, you can see our top priority is visa-processor. As you look at visa-processor, you can see this is vulnerable to Apache Struts. It's probably been deployed in an emergency. It's got secrets in its environment variables, a ton of vulnerabilities. And if I scroll down, you can see it's privileged. So I could use this container to get to the host operating system and see how I can begin to laterally move. Now, I really hope you learned something about the attacker's perspective and the defender's. We're really excited to be part of the Red Hat team. And if you're interested in finding out more about Advanced Cluster Security, or want to request a demo, feel free to reach out to me or our team. Thanks, Rob, and back over to you. All right. Thanks, Jamie. That was great. All right. So let's zoom back up into the multi-cluster arena for security. So here's our diagram again. You can see that we've got our multi-cluster tools. And of course, you know, the policies that we just took a look at, we don't want them on just one cluster. We want to get them down to all of our clusters. And it's not just maybe a blocking-execution policy, but it's network policy, it's CIS compliance. We're going to hear a lot more about this in the open cluster management arena. They've got a bunch of tools for this, too. And then we're talking about hundreds of clusters here. 
They might be out on telephone poles and other places like that. So these are remote sites. They can be physically attacked. So we've got our File Integrity Operator and some other things to look at that host state and make sure that someone hasn't jumped on there and done some malicious things. And then, of course, we're all the way down to the single-cluster layer. There is that node layer where we're installing these sensors and these agents and the probes that actually make all of this happen. And so it's really exciting to be able to have one place to enforce all of this across all of your clusters. Here's what it looks like in the user interface when you're looking at network traffic. This is a key part. Obviously, you're going to have applications that are talking to each other. And so you can get a good idea of who should be talking to each other, who shouldn't be talking to each other, and then enforce policies around that. Another cool thing that the team has built is a recommendation engine: hey, we're looking at your traffic, and this is a policy that we think you should have based on how these applications normally function, and then block things when they're out of bounds of that. Here you can see on the right-hand side a list of some of the policies. So a bunch of stuff. You know, maybe you decide that you never want folks to run curl in an image, because that's how you can pull down content, so you could block that. So there's a mix of best practices, some kind of widespread things like looking for Heartbleed and other exploits like that, as well as maybe you just care about any CVE over a certain threshold, and you want to block that or do something else. Sometimes you just want to audit it instead. So a bunch of really cool stuff. Look for that coming in the open source as well as in OpenShift. All right. Let's jump over to applications, and I'm going to hand it over to Michael. All right. Thank you, Rob. 
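To make the network policy recommendations concrete: what gets suggested and enforced are ordinary Kubernetes NetworkPolicy objects, so a recommended rule looks roughly like this sketch. The deployment, namespace, and port here are made up for illustration.

```yaml
# Illustrative NetworkPolicy sketch -- names and port are hypothetical.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: visa-processor-allow-frontend
  namespace: payments
spec:
  podSelector:
    matchLabels:
      app: visa-processor    # the workload being protected
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend  # only the observed caller is allowed in
      ports:
        - protocol: TCP
          port: 8443
```

Once a pod is selected by any such policy, traffic that doesn't match an ingress rule is dropped, which is what turns the observed traffic graph into enforced segmentation.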
So I want to talk about what's going on in the upstream around just general fleet management. About a year ago, we brought in Red Hat Advanced Cluster Management, and we've been on the effort to open source all the components of that. So there's a project that is relatively new, relatively young: open-cluster-management.io. This is where you'll find all of the open source capabilities from the Red Hat Advanced Cluster Management product. This is really focused on bringing together some technologies that are helpful in managing the fleet and also creating some novel and new technologies that help glue together all the parts. So in particular, simplifying the lifecycle for provisioning OpenShift clusters, running on hyperscaler clouds, in the data center on virtualization, or on bare metal. Simplifying the process of how we deliver and configure the fleet, and then also audit for compliance: does it meet all of our expectations? It does provide some integrated capability with GitOps, but we've also been working on bringing in Argo as a provider of GitOps as well. And then focus on an inventory of what clusters are in the fleet: how do I dynamically place policies or application content across them and validate that it's running correctly? If you look on OperatorHub.io, you'll find the cluster manager and klusterlet operators. These are sort of the core building blocks. They enable us to have a cluster become a hub. We'll see kind of a picture of what that looks like here on the next slide. And then an agent, all of which runs as Kubernetes-native pods. And as I said, this brings together a number of open projects as well. So in this architecture view, what we're really looking at is a view of the hub cluster. And you can see aspects like the API server that the hub cluster exposes. Virtually everything exposed at the API layer is expressed as a Kubernetes CRD, but you can read and write those like you would any Kube-native API. 
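Since everything at the API layer is a CRD, registering a cluster with the hub, for instance, is represented by a ManagedCluster resource that looks roughly like this sketch. The API group comes from the open-cluster-management.io project; the cluster name and labels are illustrative.

```yaml
# Illustrative ManagedCluster sketch from the open-cluster-management.io APIs.
apiVersion: cluster.open-cluster-management.io/v1
kind: ManagedCluster
metadata:
  name: prod-east-1            # hypothetical cluster name
  labels:
    environment: production    # labels like these drive placement decisions later
    region: us-east
spec:
  hubAcceptsClient: true       # the hub approves the klusterlet's registration
```

You read and write it with the same tooling as any other resource, so listing managed clusters gives you the fleet inventory directly.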
There's also a search index, which uses a data store to understand all of the parts of any cluster in the fleet and allows you to search that, and even look at relationships and find how things are connected across the fleet as well. On the managed cluster, you can see the agent, that object called the klusterlet. And then there are additional add-on parts of the agent that enable features like search, application delivery, policy management, observability, and health management. And here you can see the links to the operators on OperatorHub.io. Now, when we think about cluster lifecycle capabilities, a lot of times we're still thinking about a bare metal host that is in my data center, we're thinking about a virtual machine, we're thinking about something in the cloud. As we see more edge scenarios, where computing capacity is pushed further away from the data center, we have to think about how we lifecycle those clusters and those machines. So something that will be available in the near term as a tech preview capability will allow us to take a piece of bare metal hardware that connects back to a controller, and we're able to provide an ISO to actually boot that host. And so think of this as a technician installs a piece of hardware in a cell tower, in some offsite small data center, maybe even in a vehicle. And a barcode scanner allows them to know: okay, here's the identity of this machine, this host, and then begin a process to link it in, either turning it into a single-node OpenShift cluster, where the control plane and the workloads actually run on the same host, or turning it into a very small cluster. This really is going to be a powerful way to deliver computing capacity wherever it's needed. 
And then each of these clusters again connects back into that control plane provided by the Advanced Cluster Management Kubernetes capability, which is backed by this open cluster management project that I spoke of. Now, when we think about delivering configuration, on the next slide, we really will think about how we express this, again, with a Kubernetes-native CRD. And here's an example that's really important if you're doing any kind of networking-sensitive capability, particularly for things like a 5G-style workload. You want a particular operator to be deployed on a cluster. You want a particular configuration for that operator to be available. So SR-IOV enables some really powerful capabilities from a networking aspect within the cluster. On the right is a set of YAML, a policy definition, that says I want a certain configuration to be enforced: validate if it's present, and if it's not present, automatically create it. And this allows you not only to work with the SR-IOV example, but to deliver any of the operators, like the File Integrity Operator or the Compliance Operator for things like CIS, along with your own configuration, like certain roles or role bindings, OAuth providers, identity providers, networking, storage, etc. And so this is what you see as a policy that's targeting a cluster. If we take a step back and sort of elevate above the single-cluster view, then what we really are looking at is a control plane provided by open cluster management that is communicating with multiple clusters in the fleet and assigning configuration. In this case, there's a concept that we won't really talk about much here, but there's a concept of a placement rule that says: match this policy to these members of the fleet. And that control plane is actually delivering those policies and configuration down and validating, do they meet my desired state? 
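Stitched together, the pieces described here, a policy, a placement rule, and a binding between them, look roughly like the sketch below. The resource kinds come from the open-cluster-management.io policy APIs; the names, labels, and operator subscription details are hypothetical illustrations, not a tested configuration.

```yaml
# Illustrative ACM/OCM governance sketch -- names and labels are hypothetical.
apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  name: policy-sriov-operator
  namespace: policies
spec:
  remediationAction: enforce            # "inform" would audit only
  policy-templates:
    - objectDefinition:
        apiVersion: policy.open-cluster-management.io/v1
        kind: ConfigurationPolicy
        metadata:
          name: sriov-operator-present
        spec:
          remediationAction: enforce
          severity: high
          object-templates:
            - complianceType: musthave  # create the object if it's missing
              objectDefinition:
                apiVersion: operators.coreos.com/v1alpha1
                kind: Subscription
                metadata:
                  name: sriov-network-operator
                  namespace: openshift-sriov-network-operator
---
apiVersion: apps.open-cluster-management.io/v1
kind: PlacementRule
metadata:
  name: telco-clusters
  namespace: policies
spec:
  clusterSelector:
    matchLabels:
      workload: telco                   # match the fleet members to target
---
apiVersion: policy.open-cluster-management.io/v1
kind: PlacementBinding
metadata:
  name: bind-sriov-policy
  namespace: policies
placementRef:
  name: telco-clusters
  apiGroup: apps.open-cluster-management.io
  kind: PlacementRule
subjects:
  - name: policy-sriov-operator
    apiGroup: policy.open-cluster-management.io
    kind: Policy
```

The hub evaluates the policy against every cluster the placement rule matches, and with `enforce` it creates the missing objects rather than just reporting non-compliance.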
Now, if I think about this maybe in a slightly different view, when I start to think about cluster lifecycle, on the next slide, what we'll see is there's really this hub that is the control point. It allows us to manage clusters that are running on hyperscalers, whether I provisioned a Kubernetes or an OpenShift, along with allowing us to actually provision and create OpenShift on the hyperscaler clouds, and even on bare metal and virtualized platforms as well. And so what this really means now is that this hub allows us to have one central view of our cluster inventory, regardless of where that OpenShift cluster is running. This is a key aspect of the broader vision around open hybrid cloud. Now, what can I do once I have a cluster that's under management? So that means that I've got the cluster manager operator deployed and running on one of my clusters, and then I've got the klusterlet agent operator running on any cluster that I've either imported or provisioned. And so on this slide, what we see is I can deliver a set of governance and compliance capability across any member of the fleet. This is where we're integrating the Compliance Operator, which comes out of OpenShift, along with other community efforts like the Open Policy Agent. There are also capabilities to integrate Falco and have it delivered. We'll see new capabilities over time with Advanced Cluster Security. What this means now is that I have one central location through the hub that allows me to manage an entire inventory, an entire fleet. So this is where we think about sort of elevating the view. And if, on the next slide, I want to think about what that looks like to an end user, to an administrator, to a security person, even to a developer, if I'm kind of validating how these clusters are configured, then what I'm really seeing is this concept of different policies that are applied. And here we see an aspect of categorization and controls. 
So I can link a particular technical control and say that this is relevant for data standard XYZ, whether that's something like PCI DSS or NIST 800-53, or even an internal data standard that is specific to your organization. For each of these, you can see examples where I'm either simply auditing: is this policy in existence on a target cluster? In other words, is this operator deployed and running? Is this operator configured? Do I have the roles and role bindings that I want running in that environment? And so I can either audit, and in one case here you can actually see there's also an enforcement behavior. So I can use this to do anything from configuring the cluster; I can also use this governance engine to drive upgrade behavior across the fleet, for OpenShift clusters that are either connected to a source of images in a public way or that have a disconnected registry as well. So with that, Rob, why don't you take us through what's going on in the networking layers? So there's a bunch of cool stuff happening in the upstream networking arena, and we're going to look at three things that kind of come together to make some next-generation capabilities happen for us. The first is Submariner. This is a project for cross-cluster connectivity, and this is basically some IPsec tunnels and other stitching together of a bunch of clusters so they can talk to each other. What's cool about this is you'll be able to do service discovery and other Kube-isms directly across that boundary. So it's really going to be easy to stretch applications across two clusters, maybe do some failover, because the first step is being able to talk to the rest of those members to sync data or do anything else that you need to do. ACM is going to actually orchestrate this for us in the OpenShift world, and so that's going to be a really great capability when used with our next capability, which is Istio federation. 
So this is the ability to connect multiple service meshes together. Now, if you remember, in the OpenShift world not everybody has to use a service mesh. It's an opt-in on a per-namespace basis, and so if you've got namespaces running on multiple clusters, you can connect those together, again over that same bridge. This is a little bit more powerful because you have more control over exactly how things are federated, and then you'll be able to stretch that identity between pods across that boundary, which is what everybody knows and loves about the service mesh, among a bunch of other things. And then last, we've got a new API for ingress. This is the Gateway API. You might have known this upstream as Ingress V2; that name has switched to Gateway, and this is a more expressive API than what we have today. If you think about Ingress today, it's a very coarse-grained rule-matching thing, and then it's up to each Ingress implementation to decide how it handles sticky sessions and cookies and things like that. So this is going to have a much more expressive rule set, and that means that OpenShift in particular will have a swappable way to fulfill it. Say you want to use MetalLB in a bare metal environment to fulfill this need, you can do that, or use some of the other OpenShift router components in other environments, paired with maybe an Amazon ELB or a load balancer in Azure. So it's really exciting to be able to meet everybody's needs, and we'll be tracking that upstream as it gets underway. Remember some of the use cases for this: it's kind of better than a stretch cluster. We've got some customers today that like to have a cluster stretched between two data centers that are maybe a few milliseconds apart. This is just a better scenario, because you'll have more failure domains instead of stretching across, and then just easier HA if everybody can talk to each other and share identity.
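As a rough sketch of that expressiveness, a Gateway API setup splits the listener definition from the routing rules, which is where the swappable implementation comes in. This is illustrative only; the resource names are invented, and the exact apiVersion has changed as the upstream project has evolved.

```yaml
# Illustrative Gateway API sketch; names are made up.
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: public-gw
spec:
  gatewayClassName: example-class   # selects the implementation (router, MetalLB, cloud LB)
  listeners:
    - name: http
      protocol: HTTP
      port: 80
---
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: store-route
spec:
  parentRefs:
    - name: public-gw       # attach this route to the Gateway above
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /store
      backendRefs:
        - name: store-svc   # hypothetical backing Service
          port: 8080
```

The gatewayClassName field is the seam: the same Gateway and HTTPRoute objects can be fulfilled by different load balancer implementations underneath.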
This means that you can move dependencies from one cluster to another with those other teams maybe not even knowing about it, which is really exciting. And then lastly, to securely connect to shared central services, if you've got a secret vault that's maybe run globally for your entire organization, or a global registry like Quay. It makes sense to make a little bit of that management easier if everybody can talk to each other. So let's parachute down into what networking on a single cluster looks like. If you'll remember, we have our Multus CNI. This is what allows you to map multiple network interfaces into a single pod. And the cool thing about this is it's all driven by Kube. You know, networking is typically done, you would think, at a host level, so up-leveling it a little bit into the Kube layer means that you can use all these other tools that we talked about to do policy enforcement, to push down those policies, to mutate things maybe when they're not working the way that they should be, and ultimately make it really easy for the developers of those applications to use those capabilities. So we're going to build on that when we talk about our networking story. Let's take a look at a more specific SR-IOV example that we talked about earlier. I want to talk about this in terms of two personas. So, you know, cluster admins are going to be able to configure this hardware. This moves packet processing into userland, which is really cool because you basically get line-rate speed into a pod. And this is useful for things that might be decoding a radio stream, or if you've got a really high-traffic API or something like that, you can get line-rate performance.
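Here is a hedged sketch of what that two-persona split looks like with Multus: a cluster admin defines a secondary network once, and a developer's pod opts in with a single annotation. The macvlan config and all of the names here are illustrative, not a specific recommended setup.

```yaml
# Illustrative: admin-defined secondary network for Multus.
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: fast-net
spec:
  config: '{ "cniVersion": "0.3.1", "type": "macvlan",
             "master": "eth1", "ipam": { "type": "dhcp" } }'
---
# Developer side: opt in with one annotation; the extra interface
# typically shows up in the pod as net1 alongside eth0.
apiVersion: v1
kind: Pod
metadata:
  name: packet-processor
  annotations:
    k8s.v1.cni.cncf.io/networks: fast-net
spec:
  containers:
    - name: app
      image: quay.io/example/packet-app:latest   # hypothetical image
```

The same annotation mechanism is what the SR-IOV case uses: the admin handles the node and hardware configuration, and the developer only asks for the interface.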
And so cluster admins can configure this on a node, maybe a certain node pool, and then, coming in with that ingress implementation for the Gateway API, have it hooked up really, really fast and specify a bunch of config. But you as a developer, who maybe just wants to start processing packets, you're just opting into this with an annotation. You're saying, hey, I want to get a new interface inside of my pod; I don't really care about all the other stuff, some cluster admin dealt with that. So it really frees you up to just get your job done with your packet processing. Really cool, and again, we're doing this all in a single cluster, but you'll probably want to do this on multiple clusters. So let's take a look at that multi-cluster arena. We're going to shift gears a little bit to the Submariner project, and I want to give you a bit more detail about what this looks like when you have cross-cluster service discovery. Submariner is going to introduce a cluster set. Now, what this is is a group of clusters that have a high degree of mutual trust, which basically means the same namespace on cluster one is the same as on cluster two; it's owned by the same folks. And so what that allows you to do is start exporting services between namespaces on different clusters, and there are two CRDs. Again, we're programming the Kubernetes layer instead of anything higher or lower, which is great, and that makes it so that you are opting in to sharing something; you're not going to share everything. And then with the ServiceImport CRD, this is where it gets really powerful: you can consume that via the Kubernetes API, it's another Kubernetes object, but what you really want to do is probably just use DNS, just like regular Kubernetes service discovery. And that's what you can do. So if you look at this DNS address, you'll see that instead of cluster.local it's clusterset.local, and what this means is you can now refer to the global set
of all of these pods with one DNS name. You can imagine this becomes extremely useful for building your applications, and that's just one more powerful thing that you can do when you orchestrate all your clusters together. And then last, we talked about service mesh running over the same IPsec tunnels; this is kind of what that looks like in practice. You've got, all the way down on your single cluster, all your sidecars, which are forcing traffic through your service mesh, and then it's going over this bridge. And remember, all of this can be tuned and deployed through the multi-cluster policies and networking that's coming down from both ACM and ACS. All right, over to you, Michael. You want to talk us through applications? Absolutely. This slide is sort of the "where do you come from, where do you go," right? So in this case, our "go" is focused on the ability to deliver capability, to deliver applications and other objects into the cluster. One of the neat new things that's come out is the operator-centric deployment that makes it really simple to get started with this in OpenShift: that's the OpenShift GitOps operator, and this is also something that can be configured and driven across the fleet using Advanced Cluster Management for Kubernetes. With Quay, we're really looking at how we make it easy to source the images that are going to run in the environment, through a central place for image scanning, with integration for scanning from Advanced Cluster Security, to allow us to ensure that the images which make up the applications running on the fleet are properly secured. And an upgrade from Python 2 to 3 was just recently completed as well for that project. So if we think about applications within a single cluster, I can get started, particularly as a developer or a small development team, and if I want to use a GitOps-centric approach, I can install the OpenShift GitOps operator, and then from there I can see my repo getting delivered into that particular
cluster. Now, if I zoom out a little bit, I can also think about Argo helping to deliver capabilities across individual namespaces on the cluster itself, or across multiple clusters. So I don't know if you want to switch to the next slide. There we go, thank you, sir. In this view, what we're seeing is you've got an inventory of clusters provided by ACM, or open cluster management. We have been building integration to allow us to feed that understanding of the cluster inventory to Argo, and then we can use the Application object from Argo to deliver applications and content across clusters in the fleet. The images for those applications are getting sourced from Quay, and that's where we can ensure, with a DevSecOps approach using Advanced Cluster Security, that the images we're delivering have passed all of our security scans and aren't carrying any CVEs. Now, within Argo I can see a view of this, and we can also zoom out a little bit further and think about the view across the fleet within Advanced Cluster Management for Kubernetes. Here, what I'm looking at is actually a view that represents a topology of the objects in my world. In the middle I've got the set of clusters, and I can see a list: Foxtrot AP Northeast, Foxtrot GCP, Foxtrot US West. So that's an OpenShift cluster that was provisioned in the Tokyo data center for AWS, another one provisioned in the Northern Europe region for GCP, and another one provisioned in Northern California on AWS us-west-1. But my application is getting delivered to all three clusters, and now I can get a central view of what makes up the application: I can see things like the deployments, the routes, the services, et cetera. And what's also kind of neat, something that we're not going to go into a lot of detail on here, is that where we have steps that aren't programmable in Kubernetes, that require some additional automation outside of the cluster, we can even bring in Ansible to
drive that capability. A typical use case we find is, when I deliver a change through GitOps or otherwise, I want to automatically open a service ticket. We can actually have an Ansible job drive the creation of that service ticket, which allows us to track that a change occurred. Or maybe, as in this example, I want to drive the configuration of a load balancer that sits in front of the clusters, and maybe that load balancer isn't programmable through an operator just yet, so I can use that capability. So here what we've talked about is how we use GitOps as a methodology, how we ensure the security around the images that we're delivering for applications, how we can simplify getting started with a single cluster using that OpenShift GitOps capability, and how we can scale that out to many clusters, integrating what we do for ACM, what we do for Argo, and what we do for Quay. But let's talk about the next layer of abstraction. So Rob, can you tell us a little bit about what's going on with functions upstream? Yes, absolutely. As Michael mentioned, let's talk about getting even one layer higher. You've heard the word serverless before, and maybe functions-as-a-service; let's talk about both of those and what they mean. So Knative is the serverless platform for Kubernetes, and it's the one that we're backing upstream. It's got a few different components: Serving, Eventing, and I have in parentheses Builds, because that's not part of the upstream, but it's part of OpenShift and it's a big part of using a serverless platform. If you think about it, Serving is about connecting a container that's running with a request for that container, and this is what allows you to have this model in between where you can do scale-to-zero, a new capability that's only really possible in this model. And instead of just scaling to zero, you can also scale up on certain events, or trigger different types of applications to do things
based on events. That might be: when a new image is uploaded, go resize it and store it here, that kind of thing, or serve a web request; it doesn't matter what it is. So embracing Knative is a new way of gaining infrastructure scaling properties, but it's using all the same primitives that you're already building: you've got an application with traffic funneling into it somehow, maybe reacting to a few things. That differs from functions-as-a-service, which is what I'll call, in quotes, the "true serverless." This is the developer-focused model: I want to just write a function of code and have it run and do this thing. I don't really care about how it's scheduled or how it runs; I just want someone else to figure that out, and I just want to run my code. That gives you really powerful developer agility, because that's all you care about. But it means this is a new programming model. It's not just fitting into that slice of how you built your Django app or something like that; it really is a different way of programming. And here at Red Hat we think about the developer workflow as an inner loop and an outer loop. The inner loop is kind of you on a laptop, writing code, writing some tests, and then checking that into source control; then you enter that outer loop, which is all your integration testing, getting it into the wider app and platform, and ultimately deployed. So what that means is we need new tools that fit into this environment, meaning CLIs that are terminal-friendly but built for functions. Knative has the kn CLI, and that's a really great tool for this. We also need new CI/CD tools: you need to be able to test and create synthetic events around the things that your application is programmed against, and those can be things that are hard to simulate with, you know, a Jenkins job or something like that; it just doesn't work. And then,
because these functions are running all over the place, you need ways to aggregate the logs and look at errors. So it starts to resemble something more like a mobile application that you have deployed out on hundreds of phones; the error cases are roughly the same, and so we just need new tools for this. And Nina is going to talk us through the current state for functions-as-a-service, and where that's going, in a quick demo. Thank you, Rob. Hello everyone, this is Nina, and in this demo I will explain what it means when we say that we see serverless as a deployment model and serverless functions as the programming model. We will also see how we can leverage the Knative Eventing components. For writing serverless functions in the local environment, we need kn and Docker installed. We also need an account on a public registry, such as quay.io or docker.io. I am in the directory where I will be creating my function, and as you can see, there is nothing here. So to write my function, I do kn func create; the -l flag is for the runtime, and we'll go with Quarkus, one of the hottest runtimes. Or I could use the -c flag for an interactive experience: it asks which project path I want, and Node is the default runtime, because this comes with prepackaged runtimes like Node, Quarkus, and Go, but we want Quarkus. The default invocation is HTTP for these serverless functions, but we want to use CloudEvents, so I am going to choose events. And the function has come up: the func tooling has created the directory structure for me, and some files. We have a few classes; Function.java is where we would write our code, the business value that we want to deliver. Now let's make this function do something, such as translating an English word into Spanish. So I will find my prepared code here; I need a bunch of imports, and in the interest of time I have copy-pasted those too. Now, with the kn CLI I can build an OCI image; here I will run the build
command. It is going to use Buildpacks to create an OCI image, and it is asking me for the registry, so I provide that to build the function image. As you can see, our function image has been built. Now, to test it locally, all we need to do is kn func run. You'll see that it starts up the runtime, and it took us just a few seconds. To test it, we send a CloudEvent: the message we're sending is hello, and the translated result comes back. So now that we know our function is running successfully, it's time to deploy it on our cluster. I log into my cluster, and I am in the project namespace where I want my function to be deployed, and all I need to do is kn func deploy. It is going to build my image again, just to make sure we have the latest, then push it to the registry and deploy it to the OpenShift cluster. Meanwhile, on our cluster, we would create an event source that is going to send CloudEvents to our function, so that our function can translate them. I am going to use a PingSource, an event source where we set the message, hello, and a schedule. I am going to wait till I get my function deployed here. This is our OCP console, so if I look at our topology view, our function is coming up here. There we go. If we go into the logs, we can see that it started up. Our PingSource hadn't started sending events yet, there were no events received, so the pod was terminated, but then an event came in, so it's starting again. So as you can see, it does the auto-scaling for you. And here we go: we are connected to Knative Eventing, the PingSource is sending CloudEvents, and our result is the translation of the word hello. And that's all for today; I will stop sharing and
hand it back. All right, thank you, Nina, that was awesome. So, wrapping it up: we looked at functions and how they work, but let's level up again and zoom into the multi-cluster layer. As I mentioned, you'll see that we've got some layers here, but they're blank, because you really don't care about them in a functions-as-a-service user experience. You're going to be in an IDE, maybe, working with your CLI, and you don't really care about the concept of clusters; you just want it to work. You just want to run five or six copies of your thing, or if you receive five or six events, you want to spin up some copies of it; you don't really care which nodes it's running on, all that stuff. So it's really a different and more powerful way of thinking about your infrastructure, but as we talked about, it does require new tools. We've got a talk coming up after this where we're going to hear about this in more detail, where maybe I don't want to see any of these cluster details, but I want the same goodness of OpenShift under the hood, so stay tuned for that. All right, I hope you enjoyed our talk, where we parachuted down to the building blocks inside of a cluster, and then rocketed all the way up and saw just how important those blocks are to the fabric at the top: getting policy down to clusters, taking care of security at every single layer, taking care of policy at every single layer. And you know, these building blocks at the bottom, maybe you've never heard of FIPS mode for encryption, and if you haven't, good on you, but this is the type of stuff that matters deep inside the operating system, and it impacts us because we want all of our clusters to be compliant. That's just one example of many. We talked about fast packet processing; mapping GPUs is another use case where all those building blocks span all these different layers, and it's really, really critical for being successful in a hybrid cloud world. So I hope you
enjoyed it. Thank you, and we look forward to seeing you in the OpenShift community moving forward. Thanks!