Hello, my name is Burr Sutter from Red Hat, and we've got a lot of fun things to show you in a very short period of time, so we're going to dive right in. You can see there are a lot of different ways to get OpenShift 4.5, based on Kubernetes 1.18. I could start here, but I want to show you how I installed multiple clusters around the globe. I came into this thing called Advanced Cluster Management; we're going to hear more about it in a second, but it was super simple for me to set up a cluster here in Dublin, based on Amazon, or in Sydney, based on Google, and of course even the cluster I'm looking at right now is sitting in Texas. So I have my three clusters around the globe. I can hit this Add Cluster button and say Create Cluster, and this is exactly what I did: I can call this the Burr cluster if I want, and then pick any of my public cloud providers. For instance, I can pick Google Cloud here, pick what version of OpenShift I want to lay down, and pick my connection; it already knows what DNS name I have mapped to that cloud provider. The last thing you really have to say is what region you would like it to run in; the one I did earlier runs in Sydney, in Australia Southeast. It's as simple as that to set up your cluster, and once you have your clusters running you can dive right into them. Now, I want to make this point very clear: the ability to set up a new cluster is super easy, and it works across all three public cloud providers, including bare metal, on-premise, and even things like vSphere and other solutions. But look here: I have this thing running on Amazon, as I mentioned. There is my Amazon user interface; you might love it, you might hate it, but there it is. I have the Google one running here also, down in Sydney, Australia, and I have Azure running over in Texas. So all three public cloud providers, but the big win for all of us Kubernetes users is that it's Kubernetes.
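Under the covers, creating a cluster from Advanced Cluster Management like this corresponds roughly to a Hive ClusterDeployment resource. The sketch below is illustrative only; the names, domain, region, and secret references are assumptions, not what the demo actually submitted:

```yaml
apiVersion: hive.openshift.io/v1
kind: ClusterDeployment
metadata:
  name: burr-cluster          # illustrative cluster name
  namespace: burr-cluster
spec:
  baseDomain: example.com     # the DNS zone mapped to the cloud provider
  clusterName: burr-cluster
  platform:
    gcp:
      region: australia-southeast1      # the region picked in the wizard
      credentialsSecretRef:
        name: gcp-creds                 # the saved provider connection
  provisioning:
    imageSetRef:
      name: img4.5.0-x86-64            # selects the OpenShift version
    installConfigSecretRef:
      name: burr-cluster-install-config
```

The wizard in the console is essentially filling in these fields: provider, version, connection, and region.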
It's Kubernetes everywhere, and the experience is the same. But I want to turn it over to Michael Elder, who's going to take us the rest of the way with ACM.

All right, thank you very much, Burr. Let's take a look at what we can do with this really powerful capability that Red Hat has recently made available. In your environment, we saw a couple of clusters created. I've got an ACM cluster, a hub in particular, that has several different OpenShift clusters from all three of the major cloud vendors, as well as one running in a data center environment. Each of these clusters runs an agent that allows us to see what's going on in that cluster, drive configuration changes, and deliver applications. We'll see all of those examples here in just a moment. For example, if I wanted to trigger an upgrade, I can see what versions are available to every cluster I have under management. If upgrades are available, I can trigger them in a batch, and all of those clusters will immediately go off and start the upgrade process. That simplifies the job of an administrator who's trying to manage many different clusters, understand the inventory, and drive changes in behavior. Now, once I have clusters attached into management, what are the things I might like to do? In particular, we often find users want to understand what's going on in their clusters. Our search capability actually indexes everything, any API or CRD that's available in each cluster. You can search against RHACM's database and understand what's going on, and if I have an app that is spread across many different clusters, I could do something like find everything that is part of a particular namespace.
So let's look at a namespace called WordPress app, and I can see that it has lots of different parts: pods, replica sets, secrets, deployments, et cetera. I might want to drill into which clusters it's currently running on and see whether it's healthy or not. So search becomes a really powerful way to understand the state of those clusters. Now, understanding what's going on is really only part of the problem; we also want to be able to drive changes against that environment. So we start to think about how to drive a consistent configuration story everywhere. Within Advanced Cluster Management, we bring a policy management capability that allows me to drive things like configuring authentication or authorization for every cluster. We use a very simple concept of a placement rule that matches against the labels I've assigned to each cluster. It will record the decision and tell me whether or not I'm compliant, and this particular example will push an OAuth configuration directly into any cluster that needs this particular policy applied.
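The placement-rule mechanism described above can be sketched with a pair of RHACM manifests. This is a minimal example; the names (`policy-oauth`, the `environment: dev` label) are illustrative assumptions, not taken from the demo:

```yaml
# A PlacementRule selects managed clusters by label.
apiVersion: apps.open-cluster-management.io/v1
kind: PlacementRule
metadata:
  name: placement-policy-oauth
  namespace: default
spec:
  clusterSelector:
    matchLabels:
      environment: dev        # only clusters carrying this label match
---
# A PlacementBinding ties the rule to a policy, so RHACM knows which
# clusters the policy should be enforced on.
apiVersion: policy.open-cluster-management.io/v1
kind: PlacementBinding
metadata:
  name: binding-policy-oauth
  namespace: default
placementRef:
  name: placement-policy-oauth
  kind: PlacementRule
  apiGroup: apps.open-cluster-management.io
subjects:
  - name: policy-oauth        # the policy carrying the OAuth configuration
    kind: Policy
    apiGroup: policy.open-cluster-management.io
```

Adding a label to a cluster, as shown in the demo, simply changes which clusters the `clusterSelector` matches, and the hub records a new placement decision.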
Let's take a quick look at one example. Here I've got a policy for image manifest vulnerabilities. It's already deployed against two clusters, kilo-alpha and kilo-bravo, and I can see that it has discovered image vulnerabilities in those clusters. Let's target another cluster; we'll go and just edit that live. Here I can say, for secure images, I'm going to assign this condition to my charlie cluster. We'll add that new label. Done. And then over here on the right, we're very quickly going to see it pop up; I'm going to see if I can catch it in the act. So here, I now have a new decision recorded, and over the next few seconds we'll see the Container Security Operator automatically get pushed out to charlie and bring it into compliance.

Now, the other aspect of managing configuration is really about applications. When we think about applications, whatever kind of deployments we're delivering, typically these are managed in GitHub, in Helm repositories, in object storage, et cetera. RHACM's delivery model allows it to pull directly from those sources and syndicate the app across your environment. Here again, I'm still driving with placement rules: I have a set of cluster selectors that define what labels clusters need to have to run part of that application, and I'm defining subscriptions that link back to some source. In all the examples we see here, these are actually driven from a public GitHub repo. That same application is available here, and it points to the Git repo for the Kubernetes resources and then does what's needed to configure that application. So this is just a whirlwind tour. What we went through is how we can drive upgrades of clusters, how we can manage configuration policies across those clusters, how we can deliver applications, and how we can search. So with that, Burr, I'd like to pass it back to you.

That is absolutely amazing.
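The Git-backed application delivery Michael describes is expressed with a Channel pointing at the source and a Subscription placed by rule. A minimal sketch, where the repo URL and all names are illustrative placeholders rather than the demo's actual values:

```yaml
# A Channel describes where the application content lives
# (a public Git repo in this sketch; the URL is a placeholder).
apiVersion: apps.open-cluster-management.io/v1
kind: Channel
metadata:
  name: sample-app-channel
  namespace: sample-app
spec:
  type: Git
  pathname: https://github.com/example/sample-app
---
# A Subscription pulls from that channel and uses a PlacementRule
# to decide which managed clusters receive the application.
apiVersion: apps.open-cluster-management.io/v1
kind: Subscription
metadata:
  name: sample-app-subscription
  namespace: sample-app
spec:
  channel: sample-app/sample-app-channel
  placement:
    placementRef:
      name: sample-app-placement   # a PlacementRule with cluster selectors
      kind: PlacementRule
```

The hub watches the channel and syndicates the resources to every cluster the placement rule selects.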
I love what you did there with the search, I love what you did there with the policy management, and of course the application management. Really cool stuff to see all those components in that centralized location. So at this point, we've got to dive in and do something else. We're going to see some Knative, some Tekton, some Kafka, and the magic of how that's done now in OpenShift, with William Markito Oliveira.

Thank you, Burr. Here's what I've got for you. I have an OpenShift cluster here with some operators already installed. I have AMQ Streams, which is based on the Strimzi project that we donated to the CNCF; that's our Kafka operator. And we have Pipelines, based on Tekton, and Serverless, based on Knative. Now what I'm going to do is deploy a serverless container, a serverless application. I'm going to start here from a container image that I built before. I'm selecting the image here, the application name comes up, and with a single click I can make that container run as a serverless application; it's very straightforward. By selecting that, I can also configure a couple of other things for that serverless application; for example, I'm limiting the number of concurrent pods running to handle those events. While that application is coming along, I can also repeat the same process to import a project from Git. It's the same experience: I can start from a Git URL, which is going to trigger a build, and I have the Knative service option here to run it as well. I can do that with a Dockerfile too: if I have a Dockerfile here or in a Git repo, I can also deploy that application as a serverless application. So our application is already running. While I'm doing that, let me also demonstrate how you can create your own Kafka cluster using the AMQ Streams operator. I'm just going to go back here to the admin console, transition to a different project, and
select Kafka. I have one Kafka cluster running already; I'm going to start a new one. Let's give this one a name: my Kafka cluster, if I can type it correctly. There you go. I can configure a couple of things for the Kafka cluster as well: the number of brokers, details about security, and so on. I could also use YAML if I wanted, but I'm going to stick to the user interface for now. Hit Create. Let's take a quick look at all the resources coming up. You can see that this is a real Kafka cluster: all the brokers, ZooKeeper, and everything else are starting up, and you see all the details about configuration and security as well. That's pretty cool, but I'm not going to wait for that; we don't have enough time. Let's go back to our serverless project and our serverless application. The application, as you can see, scaled down to zero, because of course no events were being sent to that app. But that's going to change now, because I'm going to add an event source to that application. Here you can see the list of event sources available in the system. I'm going to use Kafka, and I'm going to select the broker URL and pick a topic and a consumer group. I could also set some security settings, but I'm not going to mess with that now for the demo. Create. That's now set: I have an event source associated with my application. But of course I still haven't started sending events to the app, so I'm going to start a new process, essentially a Kafka producer running inside OpenShift, and send a couple of messages. And you see that application coming up, scaling up as I send those messages. So: test message. There you go. Test message two. Let's take a quick look at the logs of our pod. You see here the text that I just sent. Let me send one more: test three. There you go. And this is a CloudEvent; even though I sent a string, it gets converted to a CloudEvent here. I can also start this other application, which is essentially going to post a
couple of different JSON objects to that application as well, just so you can see the scaling start to happen. You can see the number of pods going up to match the demand, to match the number of events that I'm sending to that application. Let's take a quick look at those events. You can see that each event here matches an order, just like you would get from pretty much any e-commerce system; again, just a JSON object. So this is all great: I have my application running. But what you can also do with OpenShift is create a pipeline, so that I can automate CI and CD for my application. I'm going to start here with the pipeline builder, which is based on Tekton. I can select a number of tasks: the first thing for my pipeline is going to be a git clone, the next is going to be a Jib Gradle build, because that's what I'm using for my app, and the next is going to be a kn create. Then I could come here and configure the specifics, but again, we don't have enough time, so I already have a YAML for the whole pipeline that I built before. I'm just going to copy and paste that here and hit Create. That's the same pipeline that we did before, but now I can start it, setting specifics for where that container image is going to land, and here I'm picking the PVC that is going to be used as a workspace to share resources. So now the git clone has started, and once this is done, you have your serverless application deployed. And that's pretty much it; that's all I have for you today. Thanks.

That is totally awesome stuff. I love it. There's one more thing, though, that I want to make sure we show, because we want to show people this concept of the virtual machine. I mentioned that we now have virtual machines as first-class citizens in OpenShift. If I come down here and click on Workloads, you'll see Virtualization right here. Now, why might you have a virtual machine in your OpenShift
environment, in your Kubernetes cluster, as a first-class citizen? It's because, as a developer, and I'm a developer, I want to make sure I have access to my legacy application infrastructure as well as the new cloud-native systems I'm working on. So I might have a simple virtual machine; in this case, let's go ahead and build one from this wizard. I'll call this my accounting application, let's say, and then I can load it up from a certain source; this is, of course, the virtual machine disk image. I can pick one from a container, or maybe a URL, or an actual disk image. In this case I'll pick the container image, because I already have that in my copy-and-paste buffer. This one is based on Fedora, but you could have CentOS, you could have Windows Server, you could have a RHEL system. I'll go ahead and pick the Fedora image and say that this is a t-shirt size: small, medium, large. We'll call this one small, and just pick Desktop to keep it easy, but this is an actual working application; we'll show you that in a second. All I have to do now is answer a few more questions: whether there's special networking configuration, special storage configuration, or other aspects. In this case I'll just say Review and Create, and if we look at our list, you'll see that it's loading in that image, and all I would have to do is hit Start Virtual Machine. It'll take a few seconds to start up that virtual machine in my overall cluster, so let me just show you the one I have running right here, called MyFedora, which I already launched earlier. And the cool thing, to kind of prove to you that this is a virtual machine: I'll click on Console. And how about this?
Let's try this: sudo... sudo... there we go, systemctl, if I can type it correctly. You can then, of course, interact with systemd, and I've already started launching processes in there for httpd, FTP, and other components that I have prepped inside that application, because it does have a bunch of transactional data. Let me show you that real quick: let me show you my transactions, over here, right there. So there's a bunch of XML files for my legacy application. Now, one last thing I'll show you, to prove that all of this is working: I'll come back over here to Advanced Cluster Management for Kubernetes, and let's see if we can actually find those virtual machines. I can type "virtual machine" here, and you can see there are, in fact, my accounting virtual machine and my Fedora machine running out there. So again, Advanced Cluster Management sees across all these clusters and gives you one complete experience to manage all of it across the open hybrid cloud. If you'd like to learn more, visit us at openshift.com.
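For reference, the virtual machine created in the wizard is backed by a KubeVirt VirtualMachine resource. This is a minimal sketch; the name, memory size, and container-disk image are illustrative assumptions, not the demo's actual values:

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: accounting-app        # illustrative name from the demo narrative
spec:
  running: true               # equivalent to hitting "Start Virtual Machine"
  template:
    spec:
      domain:
        resources:
          requests:
            memory: 2Gi       # roughly a "small" t-shirt size
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
      volumes:
        - name: rootdisk
          containerDisk:
            image: quay.io/example/fedora-containerdisk  # placeholder image
```

Because the VM is just another custom resource in the cluster, tools like ACM search can index and find it the same way they find pods and deployments, which is exactly what the final search in the demo shows.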