Welcome to the platform services roadmap for OpenShift. I'm Rob Somsky. I'm a product manager at Red Hat, and I'm joined by my colleagues, William and Siamak. Today, we're going to be talking about the platform services roadmap for OpenShift. But before we begin, I just want to talk about OpenShift 4 in general for a few seconds to set some context. With OpenShift 4, we re-imagined what a cloud-native Kubernetes tech stack looks like, from how it's installed — bringing things like immutable infrastructure into the mix — to adding operators that manage the entire state of the cluster, like a team of operations superheroes. Starting with this theme of automation and day-two management top of mind means that the cluster is really smart. It can upgrade over the air. It can react to its own state if it's degraded, and it can gain proactive patches for problems that other Red Hat customers are seeing before you've even hit them on your own cluster. All that automation keeps pushing innovation up into the tools that your developers are using. When you've got a really stable, really dynamic cluster, you can make it easy for cluster admins to push out a stream of updates for things like service mesh, pipelines, and serverless, and new versions of Kubernetes, gaining all the innovation that happens in that upstream community. That innovation is baked into every layer of OpenShift, and it forms a holistic platform for running your hybrid cloud. Now, today we're going to focus on these blue boxes, the cluster services that enable your development teams to write code, ship software, and manage their application stacks. Other sessions cover the other boxes in great detail, so check those out on the agenda. When we talk about the application stack, we mean one that's born in a cloud-native world. Tools like Istio, Knative, and Tekton empower your applications to take advantage of the new automation that the infrastructure can provide.
And producing apps that are supercharged in this new paradigm does require new tools, like Eclipse Che, Quarkus, and the Operator Framework. We're super excited to see what you build, so let's dive in and see how this stack is the perfect match for OpenShift. Platform services are some of the key value-adds of the platform. They tie these features into your workloads so that your workloads get the unique capabilities the platform provides: things we talked about earlier, like service mesh, pipelines, and serverless, as well as other capabilities like usage tracking and chargeback across the platform, and giving all of your applications access to the full stack of logging infrastructure. We're going to dig into each one of these in a little more detail. Tracking usage across a cluster is important for every business, but especially in multi-tenant clusters. OpenShift's metering operator allows cluster administrators to schedule reports and track usage of CPU, RAM, and other metrics as your developers consume resources inside their namespaces. Once it's installed from Operator Hub, new chargeback screens appear in your console, giving you the ability to look at the reports you have scheduled, schedule new ones, create custom reports, and even query them via a live API. What's exciting about metering is that it's just a really great piece of technology as a general usage collector, and it's very unopinionated about how that data is used. This makes it perfect for plugging into business intelligence tools that your company might run, or other specific workflows shaped by how your business units and teams are set up. You can use metering as the data source to power those workflows. At Red Hat, we also use metering as an opinionated usage collector for two of our products, Red Hat Cost Management and the Red Hat Marketplace. Let's dig into both in more detail.
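To make the scheduled-reports idea concrete, here is a rough sketch of what a daily metering report might look like as a custom resource, assuming the metering operator has been installed from Operator Hub. The report name is hypothetical, and `namespace-cpu-request` stands in for one of the built-in usage queries:

```yaml
# Sketch of a scheduled metering Report (hypothetical name; field values
# assume the metering.openshift.io/v1 API installed by the operator).
apiVersion: metering.openshift.io/v1
kind: Report
metadata:
  name: namespace-cpu-request-daily
  namespace: openshift-metering
spec:
  query: namespace-cpu-request   # built-in query: CPU requests per namespace
  schedule:
    period: daily                # generate the report once a day
```

The resulting report data can then be pulled over the reporting API and fed into whatever BI tool or chargeback workflow your organization runs.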
The Red Hat Cost Management tool combines IaaS usage data with Red Hat subscription usage to give your company visibility into your spend and map that spend to the different teams and projects that are underway. This is a hosted service that works across multiple clusters and multiple clouds, bringing you true hybrid capability. A really cool feature is tracking persistent volume usage across OpenShift, which can really sneak up on you, especially when you have a ton of users across all of your clusters. This can really add up, especially if you're storing data that you don't actually need to keep around. The Red Hat Marketplace allows enterprises to curate the software that teams have access to, and when you negotiate enterprise-wide purchase agreements, you can track all of that in one place. Just like Cost Management, the Red Hat Marketplace can track this across clusters, giving your developers a single location to find certified software that can then be installed across all or some of your OpenShift clusters. Procurement teams can also simplify approvals and track application usage from cluster to cluster or team to team to make sure that usage remains within their desired levels. Operators remain very popular with OpenShift users, and this year will bring enhancements to the experience of building operators and managing them as an administrator. The first big improvement is unifying the object model into a single operator object, which will help users of the cluster discover operators, but also aid developers in testing them, especially combined with our new ability to bundle custom functional tests together. Also updated is a new, simplified semver-based upgrade logic, which will join the more advanced update-graph capabilities that exist today, to cover really complex needs as well as very simple ones.
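For context on that upgrade logic: today an administrator subscribes a cluster to an operator's update channel through a Subscription object, and Operator Lifecycle Manager walks the update graph from there. A minimal sketch (the package name here is hypothetical):

```yaml
# Sketch of subscribing to an operator's update channel via OLM.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: my-operator                  # hypothetical operator package
  namespace: openshift-operators
spec:
  channel: stable                    # update channel to follow
  name: my-operator
  source: redhat-operators           # catalog source the package comes from
  sourceNamespace: openshift-marketplace
  installPlanApproval: Automatic     # apply new versions as they land in the channel
```

The semver-based logic mentioned above is about simplifying how operator authors declare which versions replace which, without giving up the explicit update-graph control that exists today.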
Improving our tools for building customized catalogs and mirroring that content into offline environments is useful for both operator developers and cluster admins. This will make it easier for developers to test their upgrade paths and register in-progress operator catalogs — picture a nightly catalog, for example. Cluster admins will very easily be able to bundle specific versions of operators instead of grabbing the entire latest catalog. This change will be great for improving mirror times into disconnected environments, as well as for better curation overall: you want to give your developers just the set of tools that you want them to have. And lastly, we're very excited to bring the Operator Framework into the CNCF as a sandbox project. We've been hard at work on this, and we're very excited to see more innovation and community engagement as we go, with the CNCF's backing. With that, I'm going to hand it over.

Thank you, Rob. Now let's talk about OpenShift Service Mesh. As some of you might already know, a service mesh is a dedicated infrastructure layer for handling service-to-service communication. It is responsible for the reliable delivery of requests through the complex topology of services that usually comprises a modern cloud-native application. In practice, the service mesh is typically implemented as an array of lightweight network proxies that are deployed alongside application code without the application needing to be aware, often leveraging a pattern called sidecar containers. As we continue the development of OpenShift Service Mesh, we are increasing the flexibility of how our customers manage their systems and how we, as Red Hat, can aid with that. At the beginning of April, we released version 1.1 of OpenShift Service Mesh. This release is backwards compatible with the previous two versions of OpenShift. Through the operator-managed install, we can introspect the cluster version and perform the correct steps to ensure you have a working system.
In this newest release, we have a number of exciting things to talk about. We've fixed a number of bugs around tracing, which improves how traces are captured and managed with Jaeger, and we have also improved how Kiali handles traffic visualization. Here is an example of one of the exciting new features in Kiali: the ability to drill down into charts when you see something interesting to inspect. But beyond those high-level details, there are a number of aspects of Istio which are specifically exciting to talk about. For users who are allowing Citadel to fully provision and manage the internal certificate authority for mutual TLS, the setup is even more streamlined. Beyond that, we can also monitor and reconcile issues with expiration of the underlying root certificate. There are also a number of verbs that have been added to the istioctl tooling, which can improve the troubleshooting of misconfigurations. In the context of traffic management, we have new items. The new authorization policy mechanism has now graduated to beta status. While, of course, there are details to be worked out, this moves us towards more granular control over how users can engage with services running in the mesh. Users of traffic mirroring — a very popular feature, also called dark launches — will be excited to hear that they can now send a percentage of the traffic to a service rather than a full copy of all the packets. This gives developers and site administrators a way to test their code earlier and more often, leading to greater service reliability. Finally, the implementation of a much more efficient regular expression engine has been completed within the proxy, which brings performance improvements, especially for complex checks.

OpenShift Serverless. Serverless workloads are increasing in popularity for cloud and on-premise deployments.
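Before going further — coming back to the traffic-mirroring feature for a moment, the percentage-based mirroring can be sketched as an Istio VirtualService along these lines. The service names and subsets are hypothetical, and the exact field name for the percentage (`mirrorPercent` versus `mirrorPercentage`) varies across Istio versions, so treat this as a rough sketch rather than a copy-paste configuration:

```yaml
# Sketch of percentage-based traffic mirroring (dark launch).
# Hypothetical "reviews" service; field names vary by Istio version.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1          # live version serving real responses
    mirror:
      host: reviews
      subset: v2            # dark-launched version receiving mirrored copies
    mirrorPercentage:
      value: 10.0           # mirror only 10% of requests, not a full copy
```

Mirrored requests are fire-and-forget: responses from v2 are discarded, so the new version can be exercised with real traffic without affecting users.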
We have baked serverless capabilities into the platform with OpenShift Serverless, which enables almost any containerized application to run as serverless. That means you can choose any programming language of choice and enable auto-scaling behaviour, scaling up to meet demand and scaling down — even to zero — to save resources. Beyond auto-scaling for HTTP requests, you can trigger those serverless containers from a variety of event sources, receiving events such as Kafka messages, file uploads to storage, timers, recurring jobs, and more than a hundred other event sources like Salesforce, ServiceNow, and email, all powered by Camel K. OpenShift Serverless is based on the open source project Knative, one of the fastest growing serverless projects in the market. This ensures that you don't suffer from lock-in concerns and can still get the innovation from a growing open source community. We're very happy to say that Serverless is now GA, which means you can take it to production and run it anywhere OpenShift runs, delivering a hybrid cloud experience and a hybrid serverless experience with portability and flexibility. Let's look at the serverless operational benefits. Without serverless containers, you eventually have to deal with one of two problems: over-provisioning, when you have too many containers running and IT has to bear the cost of those idle resources; or under-provisioning, when you have more requests than the number of provisioned containers, which essentially leads to a poor quality of service, or even lost business revenue when you miss those critical transactions. With serverless containers, though, the number of containers tracks your demand, as you can see in the picture here on the right. It saves time and cost for your IT department, creating a more direct line between IT costs and business revenue.
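As a concrete sketch of how this looks in practice: a Knative Service carries its scaling bounds as annotations on the revision template, including the scale-to-zero behaviour described above. The application name and image here are hypothetical:

```yaml
# Sketch of a Knative Service with scale-to-zero and a scale-out cap.
# The name and image are placeholders for any containerized app.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/minScale: "0"    # allow scaling down to zero
        autoscaling.knative.dev/maxScale: "10"   # cap scale-out at 10 replicas
    spec:
      containers:
      - image: quay.io/example/hello:latest
```

When no requests arrive, the pods are removed entirely; the first incoming request triggers a cold start and the autoscaler grows replicas to match demand from there.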
The availability of spare capacity in the system also helps you increase the density of your clusters, allowing customers to run more applications with the infrastructure they already have. The user experience of OpenShift Serverless is at the heart and center of OpenShift. On the console, you can visualize your serverless applications and the event sources that can trigger those containers, and set the traffic distribution for multiple versions of an application. The installation is done through an operator, which enables a great day-one experience and an even better day two, delivering over-the-air updates with bug fixes and CVE patches that can be applied automatically. You can also leverage the CLI — kn, the official CLI of Knative — to create applications and even event sources for those applications. With a single command, you get a service deployed and a URL to access it within a couple of seconds. The timing shown here illustrates the creation of a service, a route, and a deployment, plus the download of the container image into the cluster, which is pretty fast.

Thank you, William. The next piece I want to talk about is OpenShift Builds, the next service in platform services. It's an API that allows you to build lean images from application source code or binaries on Kubernetes itself, so you can create really slim images. Let's say you have a Java Maven project and you want to build this Java application from source, but you don't want all the build tools to be included within the resulting image, your runtime image. Your runtime image should be as slim as possible, including only the minimal dependencies, like the Java runtime environment and the application binary. Through this API, we can trim down and cut out all those dependencies, keep them limited to build time, and produce images that are really lean: they only include dependencies that are required at runtime.
It also opens the space for using any of the Kubernetes build tools that you're familiar with and that are quite popular in the Kubernetes community. You might be using Buildah for doing Dockerfile builds, or source-to-image if you want to automatically turn source code into a binary image, or cloud native buildpacks, Kaniko, Jib, and other tools. So it provides quite an extensible API, supporting build strategies that are available in the Kubernetes community. And at the same time, it has a very pluggable architecture, so you can extend it with more custom strategies that you might be using within your organization because you have special needs. A lot of our customers create their own strategies because they are doing RPM builds, or they need to do security signing, or other aspects of building that are not part of the standard flow of these build tools; they can add their own way of building their artifacts within the same API, using the same tooling that OpenShift Build provides. And most importantly, OpenShift Build is portable and can run on any Kubernetes platform. It is based on CRDs, so you would install the operator on any Kubernetes cluster and consume the same API around the same builds, regardless of what kind of build tool you're using. We have just recently started this project; it's going to be developer preview in OpenShift 4.4, and we are rapidly iterating on it in the community to take it to GA, hopefully within this year. How does the OpenShift Build API work? This is a simplification of the internals of OpenShift Build. There are a number of CRs, custom resources, that represent builds. There's Build and BuildStrategy: within the Build, you specify which build strategy should be used — source-to-image, cloud native buildpacks, Kaniko, or something else. That defines the gist of how the actual build should be done, but you just choose a strategy.
You don't have to deal with all the details of how buildpacks, for example, are used, or how source-to-image is used. You provide your Git repo or application binary to the Build API, and you also specify what kind of base image you want to use and which image contains the build tools. The build strategy obviously comes with good defaults for this. So when you choose source-to-image for Java, for example, it knows which image to use that contains all the build tools for your Java application, and which base image is appropriate for your resulting image: you take a JDK image that contains Maven as the build-tool image, and you pick a very slim JRE image as your base image. Then it builds the application. In this Java example, you probably have a jar file; it layers your application over the slim JRE base image and produces the application image. You can quite simply switch to a different strategy within the same API — you don't really have to change much; just one attribute changes. So what Builds does is really abstract away how you use these build strategies and build tools, run them on Kubernetes, and provide a set of tooling around them, so that through the CLI and console, over time, you can interact with these builds. Like I said, it's quite early — we have just recently launched this project — and we're hoping that throughout this year we're going to deliver much more experience and tooling around it, to help you build images on Kubernetes regardless of what platform you're on. To show you an example of what the API looks like: in the slide, you can see two Build objects. On the left side is a build that uses cloud native buildpacks; on the right-hand side, you can see the same application being built by source-to-image.
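Since this is a developer-preview API, the exact shape will change, but the two Build objects from the slide might look roughly like this — note that only the strategy attribute differs between them. The API group, repository URL, and registry path below are all assumptions for illustration:

```yaml
# Rough sketch of two Build objects (developer-preview API; the
# group/version and field names are likely to change before GA).
apiVersion: build.dev/v1alpha1
kind: Build
metadata:
  name: java-app-buildpacks
spec:
  source:
    url: https://github.com/example/java-app   # hypothetical repo
  strategy:
    name: buildpacks-v3                        # cloud native buildpacks
    kind: ClusterBuildStrategy
  output:
    image: image-registry.openshift-image-registry.svc:5000/demo/java-app
---
apiVersion: build.dev/v1alpha1
kind: Build
metadata:
  name: java-app-s2i
spec:
  source:
    url: https://github.com/example/java-app
  strategy:
    name: source-to-image                      # only this attribute changes
    kind: ClusterBuildStrategy
  output:
    image: image-registry.openshift-image-registry.svc:5000/demo/java-app
```

Everything else — source, output image, tooling — stays the same, which is the point: the strategy is a pluggable detail rather than something baked into your workflow.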
And OpenShift Builds will consume these build definitions and produce the image using whichever build strategy is chosen. Behind the scenes, OpenShift Build is powered by Tekton; we're using Tekton as the engine that runs these builds, using the build strategies that are defined.

The next area I want to talk about is OpenShift Pipelines. OpenShift Pipelines is a cloud-native CI/CD framework based on Tekton; it brings Tekton pipelines to OpenShift. In other words, it provides a Kubernetes-native declarative API — a series of standard custom resources — to define your pipeline. And more importantly, it's native to Kubernetes: every pipeline runs as a series of isolated containers that are scheduled on demand when the pipeline executes. The advantage of that is, first of all, that you don't have any central CI/CD server to manage. It's a capability of the platform; you only have your pipelines, and when you want them to run, they run in containers. You don't have that central thing to manage and govern, with a central team to nurture and upgrade and take care of it. And the second thing is that since your pipelines are completely isolated from each other, developer teams have full control over their delivery pipelines — what plugins they want in them, what activities should happen in them. If you want to upgrade the JDK version used within your pipeline, it does not affect anyone else or anyone else's pipeline. If you want to use a certain plugin, same thing. These are some of the issues that a lot of our customers have run into when using more centralized, traditional CI/CD servers. And Tekton is standard: pipelines created using Tekton custom resources can run on any Kubernetes platform.
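A minimal Tekton Pipeline sketch might look like this — the pipeline and task names are hypothetical, and it assumes tasks named `s2i-java` and `openshift-client` are already installed in the cluster:

```yaml
# Sketch of a declarative Tekton Pipeline: each task runs in its own
# on-demand pod, so there is no central CI/CD server to maintain.
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: build-and-deploy          # hypothetical pipeline name
spec:
  params:
  - name: git-url
    type: string
  tasks:
  - name: build
    taskRef:
      name: s2i-java              # assumes an s2i build task exists
    params:
    - name: git-url
      value: $(params.git-url)
  - name: deploy
    taskRef:
      name: openshift-client      # assumes a task wrapping the oc CLI
    runAfter:
    - build                       # ordering between otherwise isolated pods
```

Because the pipeline is just a custom resource, upgrading the tools one team's tasks use — a JDK version, say — touches only their task images, never a shared server.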
And in OpenShift, as part of OpenShift Pipelines, we strive to provide a really good developer experience for using Tekton pipelines, including visualization within the developer console and interacting with pipelines right there; a CLI to let you interact with pipelines through the command line; and also bringing it into the IDEs and the VS Code editor. You shouldn't have to leave where you edit your code as a developer: you write your code within the same environment, and you should be able to interact with these technologies there. You can see a couple of screenshots on the slide of how Tekton is exposed through the various tooling that exists in OpenShift: the diagram of the pipeline, its connection to the projects or applications you have deployed in topology, the logs of the pipelines right within the console; or, within VS Code, visualization and code completion to help you author the pipeline. We are working very hard on adding more and more capabilities across all this tooling — VS Code, console, and CLI — to make it really simple not only to interact with pipelines, but also to create and author them, and to bring a task ecosystem to autocomplete, visually edit, or create pipelines.

The next group of services I want to talk about are application services. Application services on OpenShift are services that help developers create cloud-native applications on the platform by taking advantage of existing building blocks instead of creating everything from scratch. There's a large collection of programming languages, databases, and different types of middleware, plus a lot of services from Red Hat partners — more than 150 services that are available as operators. So you can consume these services when you're building an application, and since a lot of them are operator-based, they behave like managed services, so you don't even have to manage the operation of the services yourself.
Let's take a look at what services are available, starting with the languages and runtimes on the platform. A collection of programming languages, runtimes, and databases comes built into OpenShift. These are all supported images that officially ship as part of OpenShift. If you have applications based on popular programming languages like Java, Node.js, Python, and so on, you can immediately — using those build technologies I mentioned earlier — start building images for these applications and deploy them on the platform. There are also a number of databases delivered with the platform that you can use within your development and testing environments and in the applications you're building. At the same time, there's a series of runtimes: the Apache web server, which is quite popular for serving static content or PHP applications; Tomcat; Nginx; and Red Hat SSO, because a lot of cloud-native applications require single sign-on when you want to centralize security management across microservices. They all ship as part of the platform, so you can start using them in the applications you're building. And for applications that require more advanced middleware — if you have traditional monolithic Java Enterprise applications that you want to move to containers, or you are integrating your microservices with backend services that are more traditional or legacy applications across the organization, or you are automating business processes or business rules — Red Hat Middleware provides a very rich collection of middleware based on JBoss technologies, Open Liberty, and some of the Cloud Paks that come from IBM, which you can use on the platform to enrich the applications you are building. And beyond that, we have partners building applications and services for OpenShift; the majority of them are exposed within Operator Hub.
When you go through Operator Hub in OpenShift, you'll see a wide categorization of the different types of services you can deploy — DevOps tooling, data services, databases, security, CI/CD, and so on. And a lot of them, like I said, run as operators, so they behave like managed services: you don't have to operationally take care of these application services to use them within the applications you deploy on OpenShift. One of the other additions to the platform is the Service Binding Operator, which allows you to connect partner applications — any application that is backed by an operator — to your applications, so that the credentials for whatever that operator is provisioning for you, be it a database or a message queue or something else, can be automatically injected into your application and consumed as environment variables, secrets, or in other ways. It is a similar model to how the Open Service Broker functioned with the Service Catalog before, for people who are familiar with that, but made available for operator-backed services as well. The powerful thing about it is that it works based on labels. So if you redeploy your application frequently — which is what you would do, especially in a development environment, or if you have a high velocity of deployment in production — the labels will match the new deployment again and re-inject those credentials into it. You wouldn't need to do anything to keep access to those operator-backed services; they would be available in the new image or new container you deploy as well. Under the hood, this is powered by Kubernetes CRDs, so it's easily integrated into your tools like the rest of Kubernetes, and it works across any Kubernetes object: you can bind to a pod or even a deployment, and we're even working on making those credentials available simply as a secret.
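The label-based matching described above might look roughly like this as a custom resource. This is a sketch against an early-preview API, so the group, kind, and field names are assumptions and likely to change, as is the hypothetical database operator being bound to:

```yaml
# Rough sketch of a binding request (early-preview API; names are
# illustrative and likely to change). Matches the app by label, so new
# deployments with the same label get credentials re-injected.
apiVersion: apps.openshift.io/v1alpha1
kind: ServiceBindingRequest
metadata:
  name: bind-my-app-to-db
spec:
  applicationSelector:
    group: apps
    version: v1
    resource: deployments
    matchLabels:
      app: my-app                     # label-based: survives redeployments
  backingServiceSelector:
    group: postgresql.example.org     # hypothetical database operator CRD
    version: v1alpha1
    kind: Database
    resourceRef: my-db                # the operator-provisioned instance
```

The operator watches both sides of the binding: when either the backing service's credentials or the application's deployment changes, it reconciles and injects the current credentials again.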
So the platform doesn't have to dictate how you consume the credentials from these operator-backed services; you can consume them in any way you want — for example, possibly in a CI/CD flow. The Service Binding Operator is available today in Operator Hub in OpenShift, so you can go through the admin console to Operator Hub, install it, and start using it.

The last category of services I'll talk about are developer services. Developer services are all focused on the developer's day-to-day life: a set of tools that help developers be productive on the platform, and that also help them package, deploy, and onboard their applications. It starts with the developer console — the developer perspective within the OpenShift console that focuses on an application view rather than Kubernetes constructs. There is help for packaging and installing applications. There is a developer-focused CLI as well, for fast development iterations on your workstation: when you modify code and want to deploy it quickly within a container, you want to skip building the image every time, which usually becomes a headache when you have to do it every minute or two, hundreds of times a day. The developer CLI reduces that pain by making iteration really fast: when you make a change in the code, it just syncs it inside the container and updates the application, so you can test the result of your change immediately. There are also the IDE plugins and Visual Studio Code extensions that we created around OpenShift and the different technologies in OpenShift, so that developers can interact with the platform right where they are coding; they don't have to leave the coding environment. CodeReady Containers gives developers a local instance of OpenShift so they can run it on their own laptop.
They don't have to be dependent on anything else if they're in an offline environment — or maybe they're online but want full control of a single instance on their laptop. And there is CodeReady Workspaces, which gives you a more collaborative, Kubernetes-native, web-based IDE. So there's a rich set of developer tools and services that come with OpenShift to make life simpler for developers on OpenShift and Kubernetes, and to let them focus on the code rather than on Kubernetes aspects. Why do we work on this particular set of tools? We look at the development process from an end-to-end perspective: the whole flow of writing code and debugging in your local environment, building and packaging again and running the application, and — when the developer is comfortable with it — committing to the Git repository, running it through CI/CD, and deploying it to the operating environment. Across the series of tools we provide and work on, we try to make sure that the complexities of interacting with OpenShift and Kubernetes are addressed through each of these phases that developers and code go through, making life easier for them in their interactions with the platform. I mentioned the developer console; to give you a little more in-depth view: within the console, there are two perspectives. There is the admin perspective, which focuses on Kubernetes administrative concepts, and there is the developer perspective, which focuses on end-to-end flows around applications. Within the topology view, for example, it visualizes how the different components of your applications relate to each other and how they map to Kubernetes objects, and if you deploy a Knative application or other types of applications, it can even visualize the traffic flowing between them.
We work really hard, and iteratively, on this piece of the product as well, to address the needs of developers on Kubernetes, to make life really easy for them, and not to force them into Kubernetes constructs or the Kubernetes way of thinking if they don't want that. Of course, in parallel, there is the admin console, which allows developers who want to be in a Kubernetes environment to interact with the platform from a Kubernetes perspective, with Kubernetes objects.

Helm 3 is another addition to the platform, fully supported from OpenShift 4.4. It allows you to package, install, and update your applications on OpenShift. It is widely used: a lot of customers and teams already build Helm charts for deploying their applications. It is full Helm 3, and one of the advantages of Helm 3 is that the Tiller component is gone, so you're on a fully client-side model. There's a Helm CLI that comes with the platform; you can take any Helm chart, provide the values and configuration customizations that you want to layer over the chart, and deploy it on the platform, which creates releases and the deployment of your application. A good thing about Helm 3 is that it follows Kubernetes RBAC, so the same controls and security that you apply to the rest of your applications also apply to Helm and the charts that get deployed within your namespaces. We also bring Helm to the surface within the developer console, so you can see Helm charts within the developer catalog — the place in the developer console where you find content to deploy the different types of services and application services we have been talking about. Once you deploy a chart, you will see releases appear under the Helm navigation item, and you will also see them exposed in topology, so you can interact with Helm charts directly within the console, in addition to the Helm CLI and other tooling available within the Helm ecosystem. CodeReady
Workspaces, as briefly mentioned, is a web-based developer workspace. It runs on Kubernetes, on OpenShift, and gives you a complete stack of what you need to start developing your application. You can create canned workspaces that provide a very familiar experience, like VS Code, and once you click and create that workspace, it gives you all the tools that you had predefined in a Git repo for what your application needs: maybe you need a certain version of Maven to build your application, or you need Java support, and perhaps Tomcat or EAP or some other application server on the side. You define everything that you need within that stack, or workspace, and put that in Git, and any new developer who gets onboarded — or maybe you work from different workstations, or from home, or from different parts of the organization, or your team is distributed — they all have access to the exact identical stack, which they can create within the browser and start developing in. It is based on Eclipse Che, and it gives you an additional workspace that complements the local environment on your own laptop or desktop.

What I would like to finish with is a quick overview of the capabilities that we have planned to add to the platform — across platform services, application services, and developer services — throughout this year and into the second half. There are many capabilities, and we don't have time to go through all of them, but we are working hard across all the teams to deliver the capabilities that our customers are asking for, to help them manage their workloads on the platform, build cloud-native applications more easily, and be more productive while doing that using these services. And with that, I'm going to wrap up today's session. Thanks a lot for listening.