So next up, we're going to pull up two of our product managers from the OpenShift team. Hopefully. Siamak, is Siamak here? Yes, yes, there you are, hiding in the back. Two of our product managers from the OpenShift team, Siamak and Daniel, and we're going to hand you over to these guys. We are sort of running on time, ahead of time, behind time, and all other things. And there you go. All right, without further ado, here's OpenShift 4. Thanks, Dan.

Hey, everyone. Good morning. Hope you found your way through the traffic this morning. My name is Daniel Messer. I'm a product manager in the OpenShift BU, and I'm here today with my colleague Siamak.

Hi, everyone. I'm Siamak, product manager for Developer Experience for OpenShift. We want to talk you through some of the changes that are coming in the upcoming release of OpenShift, and also give you an overview, to use Diane's metaphor, of where the puck is going and where we were aiming throughout the year. We're going to start with Daniel walking you through some of the areas, and then I'll take over and focus a little more on the Developer Experience and what we have for developers in the upcoming release.

Right. So today we're going to talk about how OpenShift 4 enables the next generation of cloud-native application platforms, and specifically Kubernetes-native applications. There are three key themes we're going to walk through. First, how we continue to deliver on the promise of an enterprise-ready, production-grade Kubernetes distribution. Second, how we enable a cloud-like, self-service management experience on the cluster, on all supported footprints. And then Siamak is going to dive into the Developer Experience and how we give developers all the tools they need to iterate faster and essentially forget about the underlying platform.

When I joined Red Hat in 2014, OpenShift 3 was just in the works, although it would be another year before we released 3.0. It was quite astonishing at the time, because nobody was really using that thing called Kubernetes yet; it wasn't even 1.0. Yet internally, Red Hat was very busy rebasing OpenShift 2 entirely onto Kubernetes, coming from a very different container management system we had developed on our own, and shifting completely toward a project that wasn't even 1.0 yet. So here we are today, five years later, with OpenShift 4, and we are making similarly big bets. OpenShift 4 is going to be a self-sufficient platform, enabled and featured by Kubernetes operators. Operators play into several parts of the ecosystem, but for OpenShift there is one big win: the system becomes self-aware and self-managing. In OpenShift 4, we manage the entire stack, including what's underneath OpenShift: the operating system and, optionally, the infrastructure. So we move away from the two-maintenance-window model, where you have one maintenance window for your RHEL, where you patch CVEs, and another maintenance window for OpenShift, with its own CVEs and enhancements and bug fixes. This is going to be one cohesive thing that you update as a unit.
And we extend control, through Kubernetes operators, all the way down to the infrastructure. That means every service that makes up OpenShift, everything behind the Kubernetes API, and all the artifacts below it that make this happen on the hardware infrastructure, is exposed as Kubernetes objects. This is the premise of an operator: you put some software on your cluster, you put all the smarts and SRE principles into that software, and you surface a very lightweight API to the outside so users can consume the service. That's how we model everything that makes up OpenShift. Everything is an operator. We actually wrote 42 operators to control the entire platform, and 42, I think, is a legit number for this use case.

This allows us to pull off some nifty tricks when it comes to installing. We have completely revamped the installation experience. We separated the tool we use for installation from the technology stack we use for upgrades, which gives us much more control over the upgrade process. The installer lets you provision, in a fully automated way, the entire stack that makes up OpenShift 4, including the infrastructure from the cloud or hardware provider underneath. That's what we call the full-stack automated install. So starting from nothing, you can get a fully featured, production-ready OpenShift 4 cluster up in about 30 minutes, from a complete greenfield in terms of hardware or cloud instances all the way up to the console. If you have existing hardware, or you want more control over customizing the operating system, we support that too: there's an installer mode that works with existing infrastructure. And next to that, we have a completely managed offering from Red Hat, where we stand up the cluster for you, we manage it for you, and you just consume its services and worry about your apps. That's OpenShift Dedicated, which will roll out on OpenShift 4 pretty soon as well.

We've started with support for a variety of infrastructure providers, both on-premise and off-premise, and we will continue to broaden that support. We differentiate between installer-provisioned infrastructure and user-provisioned infrastructure. Installer-provisioned means you give us access to the infrastructure API, that is, your AWS account, your OpenStack credentials, your GCP credentials, and the installer creates all the necessary infrastructure artifacts to install OpenShift. The other mode, user-provisioned infrastructure, or UPI, means you provide the infrastructure: you set up the network, the storage, and the compute resources, and you also get a say in the operating system of the worker nodes. The only thing that remains under the installer's control is the control plane. That's very important, because it's how we give you the ability to apply updates in a very consistent and stable manner. Updates on this thing are going to be a breeze. Because we control the entire cluster state with operators, exposing all of it through YAML, we have very deep insight into what's going on in the system and how it's configured, and we can make very smart decisions about how we upgrade.
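To make that concrete: even the cluster's desired version is just another object you can read and edit. Here's a hedged sketch of the cluster-version object (not shown in the talk; the exact fields are our best recollection of the 4.1-era API):

```yaml
# Sketch of the cluster-version object; verify fields with `oc explain clusterversion`.
apiVersion: config.openshift.io/v1
kind: ClusterVersion
metadata:
  name: version
spec:
  channel: stable-4.1      # update channel; a faster channel gets updates sooner
  desiredUpdate:
    version: 4.1.2         # the release the cluster should converge to
```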
When we upgrade such a system, it comes down to updating the operators that control the various components, and each operator in turn knows exactly how to update the domain under its control. So essentially, you get the updates from a Red Hat online service, and they are simply applied from the UI. This is a process that every junior administrator can do. We want updates to be as simple and casual as updating the apps on your mobile device, something you can confidently do during the day, in production. That's why we will release updates to the platform on a much more frequent basis. And we feel so confident about that that we'll give you the option to opt into automatic upgrades in the background. We've also introduced the notion of update channels. Like in your browser, you can switch to a beta channel, or a nightly channel, or a canary channel, and get updates much faster than you would on the stable channel. And this is something the cluster now orchestrates itself, because the logic that does all of this sits inside the cluster. It's not an external thing anymore that imposes itself on the system. We have complete control of the system on a continuous basis, because the operators watch its state continuously.

One thing that enables this is the base operating system, which is RHEL CoreOS. It's a combination of the best of both sides, Fedora Atomic and Container Linux from CoreOS, making up a very small Linux distribution running the RHEL kernel. So this is RHEL: it runs the RHEL 8 kernel, and it inherits the entire hardware and software certification ecosystem as a result. But it's packaged a little differently. It's an immutable base image that you are not supposed to access or configure directly. Now, that's where some people get nervous and say, oh, you're taking away control of my beloved operating system; how am I going to install my monitoring agent, or my favorite metering agent, or my logging agent? That's all taken care of on top of the cluster. In this new world, we install agents and drivers with operators, and we've already done so in OpenShift 4: we support NVIDIA GPU drivers as an operator, and we support system monitoring agents as an operator, which you can install directly from the OperatorHub in OpenShift. So RHEL CoreOS is your base, and it's always going to be used for the masters; that's where we retain tighter control over the control plane. But in the user-provisioned infrastructure mode, you can also use Red Hat Enterprise Linux 8 as the base for the worker nodes.

So here's one of those platform operators I've been speaking about. The Kubernetes machine operator, for instance, is a very interesting prospect. It allows you to model the underlying infrastructure, from a provider perspective, inside the cluster. This operator knows how to talk to the AWS API, for instance, or to the OpenStack API; it knows how to talk to GCP and Azure. So it's able to surface some core concepts of these compute platforms into the cluster. And all of a sudden, we have a new object in Kubernetes called a machine. A machine represents a compute node in the system that hosts either a master or a worker.
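Here's roughly what that looks like: a hedged sketch of a machine set, not taken from the talk. The API group is the one we know from the 4.1-era machine API, and the name and instance type are made up:

```yaml
# Sketch: a machine set is a template for a fleet of identical machines.
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: worker-us-east-1a            # hypothetical name
  namespace: openshift-machine-api
spec:
  replicas: 3                        # bump this and a new node joins the cluster
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-machineset: worker-us-east-1a
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-machineset: worker-us-east-1a
    spec:
      providerSpec:                  # cloud-provider-specific details live here
        value:
          instanceType: m4.large     # e.g. an AWS instance type
```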
So it's an actual Kubernetes object, very similar to a pod or a service. You can look at its YAML, and you can edit its YAML. When the edit is allowed and valid, the operator detects it immediately and starts changing the machine's configuration to match what you put in the spec section of that object. It's normal Kubernetes tooling; no additional commands are required, and you can keep using kubectl. And very much like replication controllers control fleets of pods, there's a machine set controller controlling sets of machines. So if I want to scale my cluster, I go edit the machine set object and increase the count from three to four. The machine set operator in the background starts talking to the infrastructure API and provisions the machine, and about two minutes later I have a new node joining my cluster. This is as easy as it gets when it comes to running a Kubernetes cluster these days, and I haven't seen any other Kubernetes distribution that does this today at this very core. You can see where this is going: you can take the same concept as horizontal pod autoscalers and apply it to machines. And we have done so. We have a cluster autoscaler that watches CPU, memory, and network utilization in the system, and you can set it up so that it increases machine sets and scales your cluster on demand, both up and down, obviously (there's a sketch of this below). That's how powerful operators get. And it's all accessible from within the cluster; you're still just editing Kubernetes objects.

Now, let's talk a little about how these operators enable a cloud-like, self-service experience on the cluster. The first step to making this feel like a cloud offering is providing a central management pane where users can see all the OpenShift clusters they've deployed. Something I didn't mention when we talked about the installer: whenever you install an OpenShift cluster, anonymized telemetry is sent back to Red Hat, and the unified hybrid cloud console on cloud.redhat.com is where it ends up. So you have a list of all the clusters you've deployed across all footprints, on-premise or off-premise, with some very basic data: which version they're running, how utilized they are, how many nodes they have, and whether an upgrade is available. That's the central point where you get a single source of truth for all your OpenShift deployments, and it's also where you manage the subscriptions for your commercial OpenShift Container Platform distribution.

Next is the Operator Framework. This is something we launched a year ago, an open-source project aimed at the wider Kubernetes community. Like Diane said earlier, everything we do in OpenShift eventually flows back to the community. The machine set operator and machine API operator I told you about are actually part of a SIG; it's called Cluster API. It's very active, and people are very enthusiastic about it, because Kubernetes gains control of the underlying infrastructure. The same goes for the Operator Framework: an open-source project aimed at the wider Kubernetes community, trying to foster the ecosystem of operators by making it easier, on the one hand, to get started writing an operator. There's an SDK component that lets you write an operator in Go.
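And here is the autoscaling sketch promised above, again hedged: the API group is assumed from the 4.1-era autoscaler, and it targets the hypothetical machine set from the previous sketch:

```yaml
# Sketch: let the cluster grow and shrink a machine set on demand.
apiVersion: autoscaling.openshift.io/v1beta1
kind: MachineAutoscaler
metadata:
  name: worker-us-east-1a
  namespace: openshift-machine-api
spec:
  minReplicas: 2                 # never scale below two workers
  maxReplicas: 8                 # cap for on-demand growth
  scaleTargetRef:                # points at the machine set to scale
    apiVersion: machine.openshift.io/v1beta1
    kind: MachineSet
    name: worker-us-east-1a
```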
Besides Go, the SDK also lets you use Ansible playbooks to write an operator, or convert an existing Helm chart into one. And then there are on-cluster components, like the Operator Lifecycle Manager, which help the cluster admin provide a safe route for operators onto the cluster.

Now, how do operators play into running and getting managed services on the cluster? Think about how you would get a database running for a local development example. Everybody can launch a containerized Postgres database, but that doesn't really give you anything; you're still in charge of maintaining it. All you've done is launch the Postgres daemon with some fancy flags that tell the operating system to create the impression, for Postgres, that it's alone on the system. There's no further operational logic to it. The next step developers usually take is using one of the managed services you find at many cloud providers, where you get, in this case, a Postgres database as a service. You say, I want a Postgres database, and five minutes later you get an endpoint you can connect to. You get some very basic admin capabilities on that instance, but all the rest is managed by the cloud provider. These are usually virtual machines that you're billed for, plus some extra money on top for the service. The cloud provider carries all the operational responsibility for that service, and it usually fulfills that with an SRE-like organization. So that's very easy to stand up, but it's somewhat hard to bring onto the cluster, because it still sits outside: it uses a different user authentication mechanism, different tooling, and a different UI.

This is where operators come in. Operators take the SRE best practices that cloud providers use to run managed services in the back end and codify them in software. That software runs on your Kubernetes cluster, and it exposes native Kubernetes APIs for you to interact with and request services. So very much like before, I can say I want a Postgres database. But all I do is create a new object on my cluster, of a kind like Postgres database. It's YAML, it has a spec section, and in the spec section there's probably a version tag, the number of read-only replicas, and things like that. I control the complete configuration of the database from that one object, and it runs as a workload in my cluster. Therefore it's available wherever my cluster is available: my local workstation, my staging environment on cloud provider A, or my production environment on-premise or on cloud provider B. It doesn't matter. You don't need to learn anything new; you only interact with that operator's API, and that's your contract.

To foster this ecosystem, like Diane showed earlier, we have launched a public registry for operators called OperatorHub.io. This is where the community can gather to post operators and share them with the wider community. And we also have an embedded version of that OperatorHub in OpenShift, which also includes the certified operators of our partners.
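Purely as an illustration of that contract, requesting a database might look like the following. The kind and every field below are hypothetical, since each operator defines its own schema:

```yaml
# Hypothetical custom resource: the API group, kind, and fields are
# defined by whichever Postgres operator you install, not by OpenShift.
apiVersion: example.com/v1alpha1
kind: PostgresDatabase
metadata:
  name: inventory-db
spec:
  version: "10.6"      # desired Postgres version
  readReplicas: 2      # number of read-only replicas
  storage: 20Gi        # persistent volume size per instance
```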
We are working with a lot of partners these days, like Crunchy Data, Sysdig, Redis, Couchbase, and many more, to certify operators on OpenShift and support them together. That means if you have a problem with your operator or your workload, you can file a ticket with Red Hat, and we triage the support in the background with the vendor. And we make sure that the operator stays compatible as OpenShift 4 moves through its release cycle.

Operators are also first-class citizens. We have talked about platform operators, which enable OpenShift to take control of the underlying platform and its own infrastructure. But operators are also the concept for back-end services: your favorite database, your favorite message queue, your distributed tracing framework. We want operators to provide these back ends as a service, and therefore operators are treated as first-class citizens on OpenShift. When you think about it, an operator is nothing more than a pod running in a deployment on your Kubernetes cluster, typically with a good amount of wide permissions, often cluster-wide, and it's going to be a long-running workload as well. So we want to take extra care of that operator, and not just have people fling operators in via Helm charts or plain Kubernetes manifests. When you package an operator for the Operator Lifecycle Manager, the lifecycle manager instantiates everything that makes up the operator for you on the cluster: the deployment, all the RBAC, the custom resource definitions, and so on. And it keeps maintaining them as the operator gets updated, because this is another concept we want to establish. We think software should be as up-to-date as possible. It should be as easy to update as the software on your phone when that little badge icon lights up. We let you think of operators as software that comes from a catalog attached to the Operator Lifecycle Manager, and you express a subscription against one of those operators. The subscription is essentially an intent that says: I want this operator installed on my cluster, and I want you to keep it updated as updates come in via the catalog. That means, either automatically or with manual approval from the cluster admin, new operator versions get installed on your cluster, which can in turn manage newer versions of the application stack and automatically update the already-running instances of the managed application, protecting you from CVEs, bugs, and all these other things.

Last but not least, the developer perspective. Everything I've shown you up to this point is essentially cluster-admin territory; the user doesn't see it. What the user sees are the services that an operator offers. The operator introduces custom resource definitions, which are API extensions to Kubernetes. They introduce new objects into the cluster, which we surface as part of the developer catalog. So next to all my existing templates, runtimes, and programming languages, I can now see new services offered by, in this case, the MongoDB operator. I have a new object called MongoDB cluster, which I can instantiate. And poof, 16 seconds later, I have a MongoDB cluster up and running.
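That subscription, by the way, is itself just a small YAML object. Here's a hedged sketch using the Operator Lifecycle Manager field names as we know them; the package and catalog names are assumptions:

```yaml
# Sketch: subscribe the cluster to an operator from a catalog.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: mongodb-enterprise         # assumed package name
  namespace: openshift-operators
spec:
  name: mongodb-enterprise         # package to install from the catalog
  channel: stable                  # which update stream to follow
  source: certified-operators      # catalog the operator comes from
  sourceNamespace: openshift-marketplace
  installPlanApproval: Automatic   # or Manual, for cluster-admin sign-off
```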
That MongoDB cluster is still made up of pods and services and secrets, config maps, persistent volume claims, and what have you. But you don't need to manage any of it. You don't even need to be aware it exists, just as you're probably not aware of what sits behind a managed Postgres database on AWS RDS, for instance. As a developer, you just consume the service directly from the catalog. And speaking of developers, I think it's time to dive into what else we have in store for them.

All right, so now let's talk a little about what we offer developers in the upcoming version. All of this is really interesting, but the point of it all is to be able to build more applications, faster and easier. That's really the ultimate goal. For application developer teams, the result is a more cloud-like experience. They can create things on demand, faster. If they need a cluster, they can get one up really quickly without having to go through operations. Myself, in the last six months I have installed as many OpenShift clusters as I did in the previous four years. That's how simple it becomes for a developer like me.

So what are we doing for developers? The first thing is that we have invested in a CLI that focuses on developer scenarios. kubectl and oc are really useful tools, but they focus on Kubernetes objects, so there's a translation to be made, usually in the developer's own mind, from whatever definition I have of my application to those objects. Through this CLI, odo, we try to abstract some of that away and make it a lot simpler: essentially a Git-push experience on OpenShift. You create your app with odo create. You push your existing directory, that being the source code of your app, a Java Maven app, or Node.js, or something else, with odo push, and it gets packed up and sent to OpenShift, the objects get created, and it gets deployed. All of that happens behind the scenes. And the most important piece is the third part, the watch functionality: you can ask the tool to watch the changes you're making in your local workspace and sync them automatically to a pod running on OpenShift. So I can iterate on my code the exact same way I do locally. If I have a Spring Boot app running in dev mode, so that when I modify my classes it automatically recompiles and reloads them, I can test it in the browser immediately. We're aiming for the exact same experience, but on OpenShift, so that the app can take advantage of the other services that are also deployed on OpenShift; you don't have to run them locally. If you're using a Postgres database that's running on OpenShift, you deploy the pod with your Spring Boot app, iterate, make a change in the code, it deploys automatically through the watch, and you test it on the platform. So it's a consistent stack: the same version of the JDK, the same version of Spring Boot, the exact same stack the application will use when it's deployed in the staging or production environment. This was released last week as a beta, so I definitely encourage everyone to give it a go and provide feedback on how it works for your scenarios.

The next thing we're doing is taking a step back. While we love the Kubernetes objects and the interface for interacting with them through the OpenShift console, we don't want to leave the application concepts behind either.
So we have created a new console that focuses specifically on developers and applications, and the application concept is king there. Everything is organized around the application: grouping the objects and how they relate to each other, to make sense of them as the application that a developer team has defined. It gives you an overview, a topology of how the objects of a specific app relate to each other. And it brings more app-related capabilities, like CI/CD pipelines and other aspects that are coming along as we go forward. In every release, we add a little more of the developer scenarios into this dev console, so that both personas, both types of users, are really covered. Someone who wants to be deep into Kubernetes and directly interact with bare-bones Kubernetes objects has the OpenShift console. And dev teams that want a little distance from the Kubernetes objects, that want them abstracted away so they can focus on their applications, have the developer console, which gives them that capability.

The next area we are working on is cloud-native CI/CD, which is coming as a preview in OpenShift 4.1, with more scenarios afterwards. We have historically shipped Jenkins on OpenShift, and we will continue to do so going forward. But let me take a quick poll: how many of you, in some form in your organization, are using Jenkins? A great majority. How many of you love Jenkins? All right, so I don't need to explain why we did this. We are working upstream on a project called Tekton Pipelines, which aims to standardize the terminology, the CRDs, the concepts, the building blocks for creating CI/CD on Kubernetes. It's designed for Kubernetes environments, it runs in containers, and it's designed for cloud-native apps. What does that mean? You can use cloud-native in any sentence these days. What it specifically means here is that we are walking away from the Jenkins model, where there is one central place that owns the configuration, owns the plugins, owns pretty much everything there is to know about CI/CD, and everyone else is just a client of that central thing. We are moving to a model where the central piece is only responsible for running things, like spinning up pods and doing whatever the team has defined. So teams can own their pipelines: what exactly should happen in them, which plugins they want to use. Every aspect of the pipeline can be owned by the individual teams, and when you create that pipeline, OpenShift, or OpenShift Pipelines really, just executes it. So we remove the central ownership of the pipeline and distribute it to the teams themselves. The consequence is that there is no central thing to manage, no babysitting a central CI/CD solution, and no scaling issue, because each pipeline runs in its own pods; you don't have that central master in control of everything. And we distribute it, of course, through the OperatorHub. So in a few weeks, when this appears in the OperatorHub, you can install it and start building pipelines through the Tekton CRDs: a set of CRDs that let you define your pipeline just like the rest of the Kubernetes objects you use for deploying your applications.

Another area we have invested in and focused on a lot is serverless scenarios. Knative has been a very popular project upstream since its inception, and Red Hat has been heavily involved in it, together with Google and other contributors in the space.
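To give a feel for those Tekton CRDs, here's a hedged sketch, not from the talk. Tekton's API was at v1alpha1 around this time and has evolved since, and the task and image names are made up:

```yaml
# Sketch: a task is a sequence of steps, each step running as a container.
apiVersion: tekton.dev/v1alpha1
kind: Task
metadata:
  name: build-app
spec:
  steps:
    - name: build
      image: maven:3-jdk-11        # hypothetical builder image
      command: ["mvn", "package"]
---
# Sketch: a pipeline strings tasks together; no central CI server involved.
apiVersion: tekton.dev/v1alpha1
kind: Pipeline
metadata:
  name: build-and-deploy
spec:
  tasks:
    - name: build
      taskRef:
        name: build-app            # runs the task defined above
```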
So we're bringing Knative to OpenShift, distributed through operators, and it allows you to run serverless applications: essentially, to turn any application, based on a container, into a serverless scenario. What does that mean? You deploy your application, and it automatically scales to zero. A request comes in, an event comes in, the application wakes up and serves it. When there is nothing happening, it goes back down to zero. So it enables function-as-a-service scenarios, serverless scenarios (there's a sketch of this a little further down). We already have a partnership with Microsoft that enables you to run Azure Functions on OpenShift, anywhere OpenShift is installed, through OpenShift Serverless and the Knative constructs on OpenShift. And we are pursuing that kind of partnership with other FaaS providers as well, so that Knative gives you the serverless base, essentially, to run function-as-a-service or any serverless application you need on OpenShift, regardless of where it is deployed.

The next area is the service mesh. Service mesh is an up-and-coming area in application development; you could also place it in the networking space. It is a network that is dedicated to services. Networks have generally been quite dumb about the application: there is a lot of talk about packets and routing. A service mesh is a network designed at the service level, so it's very aware of how services are talking to each other: which service is talking to which service, and how traffic is flowing between them. Having that type of network allows us to define rules at a whole different level: how traffic should flow between services, and which services are allowed to talk to which services. Or look at it the other way around: from this network, we can also extract how services are talking to each other. This is one of the most valuable pieces of information; every developer team I've talked to has asked me for it. Most developer teams with a big application don't know the exact architecture of the application. They know pieces of it, but they don't know exactly how traffic flows between everything. A service mesh gives you that view: you just look at the application, and you extract the architecture from how the services are talking to each other. And most important of all, it provides this for any application deployed as a container. We have had some pieces of this functionality before, when people were building new cloud-native applications. With Spring Boot, you could add some libraries and integrate with Jaeger or Zipkin for distributed tracing, or add other libraries for smart load balancing or traffic control. So it was possible before, using extra libraries for each programming language or framework, but only for the new applications you were building. With OpenShift Service Mesh, you get this for any pod deployed on the platform. You don't have to touch your application code; you get it out of the box, because the mesh sits around the application. It doesn't intrude inside the application.

The next area is a series of tools we release under the CodeReady brand. CodeReady Workspaces is one of them. It gives you a web-based IDE that gets deployed on OpenShift, and every developer can log in and create a workspace for themselves with a stack of tools. Say you're a Java developer and you want a workspace that supports Java 11 with Maven: you get a stack for that, right within the browser.
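Here is the serverless sketch mentioned earlier, hedged as well: Knative serving was at v1alpha1 around this time, and the service name and image are placeholders:

```yaml
# Sketch: a Knative service that scales to zero when no requests arrive.
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: greeter                               # hypothetical app
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/minScale: "0" # allow scale-to-zero
    spec:
      containers:
        - image: quay.io/example/greeter:latest  # placeholder image
```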
You can code right in that workspace, and it's a very similar experience to Visual Studio Code. In the new version, it's actually compatible with the VS Code plugin system, so you can use a good portion of the same plugins there. And being browser-based gives you the capability that on any machine, say when I go home, I have the exact same workspace I had at work, and I can code in it. Or even better: I want to share my workspace with a colleague, because I'm stuck on a specific problem in my code and I want him to take a look at it. In the old world, when all of us were in the same office, I could just go grab him. But now my colleague is sitting in Atlanta, so I just send him a link that opens the exact same workspace I have on my machine. It opens for him, he can code directly, and I can watch as he is coding. So it gives me a new level of collaboration with my colleagues, working on the same code, right in the browser.

And with that, I'm going to wrap up our talk today. The message here is that we're working really hard across all these themes to give you a cloud-like experience across the services you consume on OpenShift, and also for day-2 operations themselves: a cloud-like experience for installing and managing OpenShift, optimized as much as possible all the way down to the operating system. And with every release, on our quarterly cadence, we're going to make this deeper, better integrated, and expand it to other areas. The same goes for the developer tools. We focus a lot on looking at the gaps, at where the frictions are, and we plan to make every release better. So come talk to us after the session and give us feedback on what you're missing. That's exactly what we need going forward. Thank you very much.

Thanks. That was really good. Thanks, Siamak. It's always wonderful to see you. Thank you.