Yeah, so we've pretty much got a whole new set of features coming in OpenShift 4.2, and these slides were updated yesterday around 7 p.m. I also went to watch the game yesterday, so we can talk about that later if you guys want; it wasn't that great. But anyway, I'd like to introduce myself: my name is William, I'm a product manager on OpenShift, and I have here with me Chris. [Chris:] I'm Chris Blume, from the storage BU. [William:] Nice. We're going to talk a little bit about what's coming in 4.2, and about the different features we have throughout the platform. This builds very nicely on what Brian already talked about; some of these slides were already covered by him, and that's great, because I have a lot to talk about here. With 4.2 there were some specific themes we were tackling as part of this release. One of the things we got as feedback from customers,
for example, was around air-gapped installs, so we are providing that now as one of the features in the platform. We also did a lot to expand the workloads you can run on the platform, specifically by enabling GPUs to run more easily on Kubernetes. And then of course there are a lot of features around developer experience and developer tools that run on top of the platform. Brian already mentioned this: one of the things you get with every OpenShift 4 cluster is telemetry. It's optional; you can disable it if you want. But by enabling it you allow this backend system we have at Red Hat to read metadata about your cluster. It's all anonymized, but with this data we can not only see the overall health of your cluster, we can also let you know, for example, that a new CVE came out and that you have a cluster that needs to be patched to get that CVE fix applied. We also found a number of bugs this way; for around 20% of them, we were able to detect the bug and let our customers know without ever talking directly to those customers, purely based on the data we received through telemetry. With 4.2 we are adding some new platforms and providers that we support through the two different ways you can install OpenShift 4. If you're not familiar with that: we have essentially two ways to provision infrastructure for OpenShift 4, one called IPI and the other called UPI. IPI is what we call installer-provisioned infrastructure. That's, I'd say, the easiest way to get an OpenShift 4 cluster up and running. With that installation we manage the whole stack, from the operating system to all the different network settings and the creation of those resources on the different cloud providers; everything is provisioned and
configured for you using the installer. As an alternative, you can use user-provisioned infrastructure, and with that you are on the hook to... (Is the mic not working properly? Okay.) So with UPI you are on the hook to provide the infrastructure, the networking, the operating system, and so on, and then you just use the installer to install the OpenShift bits on top of that infrastructure. So, to summarize the different ways you can install OpenShift: on the OCP side, as we call it, we have the fully automated install and the pre-existing-infrastructure install, and we also have the hosted offerings, which come in two flavors. One flavor is what we call ARO, or Azure Red Hat OpenShift. With that you are running OpenShift, but it's managed jointly by Red Hat engineers and Microsoft engineers, and you can create an OpenShift cluster directly from the Azure console, which is very nice if you are an Azure customer. The other flavor is OpenShift Dedicated, where the management of the cluster is done by the Red Hat engineering team and our SREs. If we put the UPI and IPI installation modes side by side, this is what the installer provides for you.
So again, with IPI it's automated pretty much all the way, but with UPI there are things you as a user have to provide. Even then, the installer can still do some of it for you, for example generating the Ignition configuration, and I'll get into more details about that later in the talk. Looking at how that materializes for you as a user, you see here the installation procedure on Azure and the installation procedure on GCP. It's pretty much the same thing. If you're not familiar with OpenShift 4: you download this installer, called openshift-install. It's a binary, and when you run it you can pretty much just say "create cluster". It asks you a few questions, which cloud provider you want to use, your credentials, SSH keys, and so on, and from there you hit enter, let the installer run for 30 or 40 minutes, and you've got a cluster up and running that is ready to rock as a production-grade installation. It's very straightforward, and you can see how it's a very similar experience across all these different providers. Specifically with 4.2, we are adding GCP support, and Azure as well. Expanding a little bit on disconnected installs: even though a lot of the installation procedure we've been describing for OpenShift 4 relies on connectivity to the internet and to those backend systems at Red Hat, we also heard feedback from multiple customers, especially in industries like banking and healthcare, that they want to run completely air-gapped installation procedures.
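For concreteness, before going deeper on air-gapped installs: the answers to those installer prompts all end up in an install-config.yaml that the installer generates for you. This is a minimal sketch for GCP; every value here is a placeholder, so treat the fields as illustrative rather than a reference:

```yaml
# Illustrative install-config.yaml for an IPI install on GCP.
# The installer writes this file from your answers to its prompts;
# all values below are placeholders.
apiVersion: v1
baseDomain: example.com
metadata:
  name: my-cluster
platform:
  gcp:
    projectID: my-gcp-project
    region: us-central1
compute:
- name: worker
  replicas: 3
controlPlane:
  name: master
  replicas: 3
pullSecret: '<pull secret from cloud.redhat.com>'
sshKey: 'ssh-rsa AAAA...'
```

You'd then run `openshift-install create cluster` pointing at the directory holding this file, and the installer takes it from there.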
So we are now offering those air-gapped installs with 4.2. Because you still want some of the experience you get for running upgrades across multiple clusters in your infrastructure, you can still do that, but you have to keep a local copy of those containers, patches, and updates available behind your firewall, and that's what we allow you to do now with 4.2. Another piece of feedback we heard from enterprise customers was around the need for an egress proxy. Sometimes you have to go through a corporate proxy for any kind of connectivity to the internet, and this is a nice way to configure the entire OpenShift cluster, the entire Kubernetes cluster and all its services, to go through that proxy; you can now configure it from a centralized location in the cluster. Specifically on OpenShift 4.2 and which version of Kubernetes ships with it: that's 1.14. One thing to note is that as we transition from 4.1 to 4.2 to 4.3, we're skipping one version of Kubernetes, and that is something we can and will do when we think it makes sense. We looked at the features in Kubernetes 1.15 and at what's in 1.16, and given the timelines we have to ship the 4.3 release, we decided we can skip that version and have the upgrade handled by the platform for you. So if you're looking for a specific Kubernetes feature, it's good to know; but if you're less concerned about that and more concerned about the whole upgrade process, even though we're skipping a version, it's all handled by us, by OpenShift.
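Going back to that egress proxy for a second: the centralized configuration is a cluster-scoped Proxy custom resource, always named `cluster`. A minimal sketch, with placeholder proxy hostnames:

```yaml
# Illustrative cluster-wide egress proxy configuration (4.2+).
# The Proxy resource is cluster-scoped and always named "cluster";
# the hostnames below are placeholders for your corporate proxy.
apiVersion: config.openshift.io/v1
kind: Proxy
metadata:
  name: cluster
spec:
  httpProxy: http://proxy.corp.example.com:3128
  httpsProxy: http://proxy.corp.example.com:3128
  noProxy: .cluster.local,.example.com
```

Once this is set, the platform components that talk to the outside world pick the proxy settings up from this one place instead of each being configured separately.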
I mentioned this capability of enabling GPUs in Kubernetes. If you have tried to do this in Kubernetes before, you know it's not the most straightforward procedure, and there are a lot of specifics, depending on the GPU vendor, that you have to handle at the cluster level: drivers and so on. So we are automating and simplifying that using an operator we call NFD, Node Feature Discovery. Through this operator you get pretty much a one-click install experience to enable GPUs on every node that has a GPU available. Very powerful and easy to use; if you're doing any kind of AI/ML workloads, this is something you'll be very interested in leveraging. I'll transition now to Chris. [Chris:] So, I was told not everyone could hear me earlier. I'm Chris from the storage BU, and one of the cool things we're about to get in 4.2 is CSI, the Container Storage Interface. That will enable us to add storage plug-ins to Kubernetes that are not in the Kubernetes tree.
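To make that GPU story concrete before Chris continues: once NFD has labeled the nodes and the vendor's device plugin is running, a workload just requests the GPU as an extended resource and the scheduler does the rest. A minimal sketch; the resource name follows NVIDIA's device-plugin convention, and the image tag is illustrative:

```yaml
# Illustrative pod requesting one GPU. Scheduling only works once the
# node has been labeled (e.g. by NFD) and the GPU device plugin is
# advertising the nvidia.com/gpu extended resource.
apiVersion: v1
kind: Pod
metadata:
  name: gpu-test
spec:
  restartPolicy: Never
  containers:
  - name: cuda
    image: nvidia/cuda:10.1-base   # placeholder image
    command: ["nvidia-smi"]
    resources:
      limits:
        nvidia.com/gpu: 1          # ask for one GPU
```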
With CSI out of the tree, we don't have to commit code to the Kubernetes project and wait for every Kubernetes release to get updates there; we can develop much quicker and just use the CSI interface. The OpenShift Container Storage plug-in will leverage that, but we also have a couple of third-party developers that provide plug-ins. Looking at storage devices: the storage operator will automatically set up a default storage class depending on where you deploy your OpenShift cluster. So if you're on AWS, you already have a default AWS storage class and you can go ahead and use it directly, and if you're on VMware you have that available as well. Additionally, we got local volumes and raw block as well, so you can use whatever is locally available on your OpenShift nodes, and you can forward raw block devices into your containers for whatever you need. That's especially helpful if you want to deploy something IO-intensive, like databases. Looking at OpenShift Container Storage: we get a completely new backend. Previously, in OpenShift 3, it was backed by GlusterFS, and now we're switching over to Ceph and NooBaa. For the Ceph part we're going to use Rook, and the NooBaa part we call the Multi-Cloud Gateway. As everything in OpenShift 4 is based on operators, this will also have an operator to cover the install and the lifecycle, including updates, migrations, and everything. Out of the box it is designed to work at scale, so if you have a lot of PVs, this is designed for you already. We also correctly identify availability zones in your clusters, and we will replicate between those availability zones so you never lose your storage. And another thing: we're very closely integrated with OpenShift, so you get your storage monitoring out of the box as well. Just to give you a quick look at that, this is when everything is good.
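A quick sketch of the raw-block support mentioned above: a claim sets `volumeMode: Block` to ask for a device instead of a filesystem. The storage class name and size here are placeholders:

```yaml
# Illustrative PVC asking for a raw block volume. If storageClassName
# is omitted, the default storage class set up by the storage operator
# is used; the name and size below are placeholders.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-block-pvc
spec:
  accessModes:
  - ReadWriteOnce
  volumeMode: Block        # a raw device instead of a filesystem
  resources:
    requests:
      storage: 100Gi
```

The pod side then consumes it through `volumeDevices` with a `devicePath`, rather than a `volumeMounts` entry, which is what IO-heavy workloads like databases typically want.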
In the healthy state it's all green: you see your capacity and your consumers, everything happy. Then, when a node fails, this all goes red and you see that obviously something has failed. You probably can't read it from there, but a node failed in this case, and we show that you're supposed to do something. In the backend this also hooks into Prometheus, so you get the alerts wherever you've configured them to go. And if you already use OpenShift 3 and you're wondering, hey, how do I get over to OpenShift 4: we do have a migration tool for that. The migration tool will get you from OpenShift 3 over to OpenShift 4, and that also includes the persistent storage, so you will be able to migrate from the OpenShift Container Storage that's GlusterFS-based over to the OpenShift Container Storage that's Ceph-and-NooBaa-based. All right, thank you, Chris. [William:] So let's continue here. You heard a little bit about cloud.redhat.com; that's our backend system to manage the installation of OpenShift across all these different platforms and providers. Some updates for that particular system, which we call OCM, OpenShift Cluster Manager: one of the things we are adding in OpenShift 4 is the ability, from that single console, to launch straight into the console of a specific OpenShift cluster. As you manage tens or hundreds of clusters, that is quite handy. From that same view you also see, right there in the corner, that one of the OpenShift versions you're running has an update available, and since we ship those updates over the air, from there you can click update and have the cluster download and perform the update for you. That's a very streamlined experience for this kind of software, especially considering how complicated it can be to update a Kubernetes cluster; this is amazingly easy.
Another thing we are adding to this view is cluster monitoring. In this screenshot we don't have it, but there is a new tab at the top called Monitoring that gives you access to high-level monitoring data for that particular cluster. Let's see what else... oh, and of course from this interface you can also create a new cluster using OSD. So if you want a new managed cluster, you can view the status of your clusters and create a new one from here as well. Specifically on metering: metering in 4.2 is now considered a GA feature. With metering you can see the consumption of resources in your cluster and break it down per namespace, per pod, or of course cluster-wide. This comes in very handy, especially if you are running across multiple providers. Sometimes it's very hard to understand where your consumption is actually going, which pod or application is taking most of the resources in your cluster, and this gives you a consistent view and report for that data no matter where you're running. On cluster logging, compared with 3.11: starting with 4.1 we already made a lot of progress on optimization and performance. We are now able to store roughly three times the amount of logs in the same cluster, while also reducing overall resource consumption by 50 percent, so a lot of improvements there. With 4.2 we're also integrating the monitoring of the logging infrastructure into the general cluster monitoring, so you start to get alerts when that infrastructure is not working properly. So, this is all good now.
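One more note on metering before moving on: reports are requested declaratively through a Report custom resource. This is a sketch based on the metering operator's built-in report queries; the exact field names and the query name are my assumption here, so check the 4.2 metering docs before copying it:

```yaml
# Illustrative metering Report producing a daily per-namespace
# CPU-request breakdown. The query name and schedule shape follow
# the metering operator's built-in ReportQueries (an assumption).
apiVersion: metering.openshift.io/v1
kind: Report
metadata:
  name: namespace-cpu-request-daily
  namespace: openshift-metering
spec:
  query: namespace-cpu-request
  schedule:
    period: daily
```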
You have your base platform infrastructure running, but now you also want to bring workloads to the platform. Brian touched a little bit on how operators work and on the fact that we have OperatorHub. I would highly encourage you to browse OperatorHub and see if there is anything there that might be interesting for you, or to create your own operators and submit them there as well. But we also embed an OperatorHub experience inside the OpenShift cluster for you, and through that we bring what I'd call a special class of operators: our Red Hat products, and also certified operators that went through a very rigorous process at Red Hat to certify that they respect security concerns and all the other validations we do. That's another bit of peace of mind for you as an administrator: when you're using those operators, you know they've been vetted and behave consistently. With 4.2 we are also adding a new capability to operators in OLM.
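Everything you install from that embedded OperatorHub ultimately comes down to an OLM Subscription resource, which is worth seeing concretely before we get to the new capability. A minimal sketch; the package name is a placeholder:

```yaml
# Illustrative OLM Subscription: subscribing a cluster to an operator
# package from a catalog source. The package name "my-operator" is a
# placeholder; channel names vary per operator.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: my-operator
  namespace: openshift-operators
spec:
  channel: stable
  name: my-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
```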
That new capability is automated dependency resolution. If you're building an operator that depends on another operator, until 4.1 you could declare that dependency, but it was static: you could let the cluster know about it, but it would still take work to install the dependencies for that operator. With 4.2 we handle that automatically. For example, if your operator depends on Jaeger and on CockroachDB, you don't have to install those yourself; OLM, the Operator Lifecycle Manager, will pull those dependencies in and configure them for your operator. Now let's talk a little bit about the OpenShift console, because we also made a lot of improvements here in 4.2. One of the things we are adding is the ability to extend the console, for ISVs. As you provision your operator on the cluster, you now have the ability to extend the console and shape how that experience looks, say, exposing your own console or CLI inside the OpenShift console itself. That is very powerful if you want to customize how your customers see your operator and your software inside OpenShift. We are also adding a new dashboard: as you log into OpenShift 4 you get this nice summary view of what's happening with your cluster, the overall health status and so on, and in 4.2 we're adding a couple more things around top consumers; you can filter that down by resource: CPU, memory, network, and so on. And now for one of the really cool features we're adding in 4.2.
It's something we are calling the developer console, or the developer perspective. There's now a toggle at the top of the console that lets you go back and forth between the admin view and the developer view. The nice thing is that, as a developer, maybe you don't care as much about what's happening at the SDN or network level, or about all the specifics of the cluster; you can really just focus on building code, deploying your application, and seeing what's happening with your applications. It looks really nice, and we have more demos and slides on that too. For example, using the developer console, if you want to create a new application you have a couple of guided flows you can use. Maybe you want to start from Git: you input the GitHub repo, you say a couple of things about what kind of application it is (a Java application, a Python application), you hit create, and behind the scenes we clone that repo, trigger a build, wait for the build to complete, push the image to the internal container registry, and make the application deployed and available for you. That same flow now works for a couple of different kinds of workloads, such as serverless applications or traditional Kubernetes deployments. Once the application is deployed, this is pretty much what you get: a topology view that lets you see which applications are deployed in your namespace, what the relationships between them are, and, say, access the logs for a particular pod. It's all very intuitive and integrated now.
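One of those workload kinds, the serverless application, deploys as a Knative Service under the hood. A minimal sketch; the API version reflects the tech-preview era of this release and the image is a placeholder:

```yaml
# Illustrative Knative Service: the serverless workload type that
# auto-scales (including to zero) in the topology view. API version
# matches the Knative Serving alpha API of this era; image is a
# placeholder.
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: hello
spec:
  template:
    spec:
      containers:
      - image: quay.io/example/hello:latest
```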
Another thing you can see is the relationship between those applications. Maybe you want to group them in some way that makes sense to you: you can now create an artificial grouping saying that this application and this other application are part of a bigger component, and kind of orchestrate that. You can of course auto-scale an application here, or manually scale it using the console; or, if you're deploying it as a serverless workload, it auto-scales and you can see that here as well. Since we're talking about the developer console, let's talk a little bit about the developer tools. One of the things Brian also touched on was CodeReady Workspaces. With 4.2, or actually not with 4.2 but right after it, because CodeReady Workspaces doesn't follow the exact same schedule as OpenShift (it ships as an operator that you install, and it releases on a different cadence), we'll be releasing CodeReady Workspaces 2.0, which is based on Eclipse Che 7. There are a lot of new capabilities in this release, and it's the release that was really re-architected to run on top of Kubernetes, so I highly encourage taking a look at that as well. One of the things we announced recently, last week or two weeks ago, is CodeReady Containers. CodeReady Containers is a way for you to have pretty much everything I'm talking about here, an OpenShift 4 cluster, running on your laptop. Sure,
you do need a somewhat powerful machine, but you get all of it installed in pretty much three steps: you set up, you start, and you have an OpenShift 4 cluster running on your laptop, with all the operators you want; you can install more operators if you like. That's also very nice, and it's delivered on multiple platforms: Linux, Windows, and Mac. One of the things we are releasing as a developer preview with 4.2 is OpenShift Pipelines, or rather the new version of OpenShift Pipelines, based on Tekton. If you're not familiar with Tekton, it's a new project that started inside Knative and has now moved to be a standalone project. It lives under the CD Foundation, the Continuous Delivery Foundation, which is another foundation under the Linux Foundation. It's a new, modern CI/CD platform that was designed to run on top of Kubernetes and to deal with containers, so some of the assumptions you see in more traditional CI/CD systems are revisited, and you can re-architect things to be designed for modern applications and modern workloads. It runs as an operator, of course; it's one of the, say, three flagship operators we ship as add-ons to the platform. Pipelines is one. Another is OpenShift Serverless. I'll have a whole session on this in the afternoon, so I won't dive into details right now, but suffice it to say that I'm really happy to announce it is now a tech preview in 4.2, and we are marching towards making it GA in a future release; you can learn more in the afternoon talk. Service Mesh as well: I'll say more about that in the afternoon talk too.
With 4.2 we are making Service Mesh GA, which is something I'm really proud and happy to announce, given that it took us quite some time, with Istio and everything that was happening in that community; if you've been following along, you know a bit of the history. But we're happy to say it reached a maturity level where we're very comfortable calling it a GA technology in the platform. With that, I would pretty much end with the high-level roadmap, but I have covered all, if not most, of the items on it already throughout the slides. I may have gone through some of them quickly, but as you might imagine, this whole session is usually delivered over two hours, and I've tried to summarize it here in 30 minutes for you. Hopefully that went through well, and that's pretty much it. Thank you.