Hi, welcome. This is the What's Next OpenShift roadmap update, developer edition. I've got a whole group of my PM friends here to cover a lot of the developer topics, so we'll jump right into it. Just a reminder: there's a series of presentations that happen around the OpenShift roadmap. There's the What's New, which talks about what was new in 4.9, and then the What's Next, which covers what's coming in the next 6-12 months. This session focuses on that look ahead of up to 12 months, and things may change — market conditions shift, priorities come up, and what you see here may or may not happen. That's the standard PM disclaimer. We're running this developer edition alongside the main OpenShift one. Let's dig into it a little bit. I want to talk at a high level about the priorities within the developer tools area for 2022. One of the big pushes, and this is a cross-Red Hat strategy overall, is around managed services: figuring out how we can take everything we've done with our open-source components and bring them together in a single managed environment with a great experience for delivering those pieces. You'll see things that don't necessarily map one-to-one from an open-source project or offering to a managed version of it; instead we're wrapping those pieces with an experience. That supports Red Hat's application cloud strategy — sitting above and promoting OpenShift, if you will. On onboarding, we continue to drive the Developer Sandbox; it's been a great way for a lot of developers to get their hands on OpenShift and various installed tools. We keep working on ways to get developers hands-on with our tools and platforms and productive with them as early as possible, removing a lot of friction. And, as it's always been a goal, we continue to push on platform adoption.
It's not about us winning a single-tool competitive market with the best standalone tools; our tools are there to help drive developer productivity with our platforms and runtimes, so we'll continue to push on those items. Just a reminder, we have a very broad portfolio of tools that we work on, and this slide highlights some of them: code and debug with CodeReady Workspaces, VS Code plugins, IntelliJ and Eclipse extensions, command-line tools, the Developer Console in OpenShift, and the odo CLI; build and package through the standard OpenShift build capabilities, with more coming next in Shipwright, plus Helm and Operator enablement; all the way to enabling technologies like serverless and service mesh across the platform, and integration with Snyk, leveraging some of the other tools that exist for secure software delivery across the platform. And this slide doesn't cover everything — there are projects like JKube, service binding, and the Developer Sandbox, and we'll talk about the App Studio code name — so a lot of what we'll cover isn't part of this slide itself. With that, I'll turn it over to Stevan now.

Hello. Hi, I'm Stevan Le Meur. For those who don't know Devfiles: they describe best practices for end-to-end application development, and they are basically a codified definition of a portable developer environment. They help our customers and developers move into this everything-as-code era by giving them developer environments which are completely repeatable and reproducible. We are using Devfiles as a core foundation and enabler of the developer experience in most of the tools we provide for the inner loop, and we are continuing to expand the adoption of Devfiles by all the different tools.
That means the OpenShift Developer Console, odo, the different IDEs that we support, and the Developer Sandbox. We will be working on adding the ability to define and configure the Devfile registry at the cluster level: when a tool connects to a cluster, the Devfile registry will be configured directly in that tool. We will provide offline support for the Devfile registry, and, to facilitate onboarding and tooling setup, we will be working on a solution to analyze a project's source code and detect the proper Devfile for it. Along with that, we will add support for Dockerfile builds in the inner-loop support of the Devfile. Last but not least, Devfile is on its way to becoming a CNCF sandbox project, and we have contributors and contributions coming from teams at AWS and JetBrains as well — great validation of our approach. Thanks. Mohit?

Thank you, Stevan. Hello, everyone. I'm Mohit Suman, the product manager for desktop IDE tooling, and I'll go through what we have next for 2022. We have IDE extensions for a set of different products, across VS Code, IntelliJ, and Eclipse tooling. Starting with the OpenShift extension, the idea is to provide a seamless developer experience where users can provision Dev Sandbox clusters, get Podman support, or install their Helm charts directly from the IDE. This is going to be an ongoing effort throughout the year, and we are trying to make sure users can provision clusters quickly from the IDE and start working right away. The second item is support around serverless functions: we have a Knative extension for both VS Code and the IntelliJ IDE, and there will be a release coming in a few weeks for serverless functions — one of the important milestones for Knative development. The other one is a Kubernetes developer experience in IntelliJ.
We already have a Visual Studio Code Kubernetes extension supported by both Red Hat and Microsoft, and we are working extensively to reach the same feature parity on the IntelliJ side, so that users can do advanced cluster resource management, view their logs directly, and push their local application directly to the cluster using the Kubernetes extension. The next one is support for Tekton, the same for both IDEs: extended pipeline history, showing log retention, and even supporting the latest Tekton Hub on a cluster for any custom task catalogs. This will be an ongoing effort aligned with what we're doing in the Tekton CLI and replicated in the IDE, so that the developer experience is similar. Another thing we're going to work on this year is remote container support, where users or customers can run their VS Code extensions within a remote container inside VS Code, and that remote container can be in a Kubernetes instance, OpenShift, or a Podman instance — letting them quickly provision those scenarios within their containers. One important update: CodeReady Studio has been a very successful product in the past, but based on telemetry and user feedback we have decided to end-of-life CodeReady Studio, on April 14th. There won't be any future releases of CodeReady Studio, but the ongoing effort will continue in JBoss Tools, so the middleware plugins that are in CodeReady Studio will be supported through JBoss Tools. From the customer point of view, the experience will still be there; the only update is the end of life for CodeReady Studio. And as Stevan mentioned, the ongoing work on devfile support will continue in the IDEs so that we have a consistent approach, be it the IDE experience or the developer console.
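Since devfiles have come up as the common thread across the IDEs and the console, here's a rough sketch of what a minimal devfile might look like — the project name, container image, and commands below are illustrative placeholders, not from any real registry:

```yaml
# devfile.yaml — a hypothetical minimal example of a codified developer environment
schemaVersion: 2.1.0
metadata:
  name: my-node-app                # illustrative project name
components:
  - name: runtime
    container:
      image: registry.access.redhat.com/ubi8/nodejs-16   # example runtime image
      memoryLimit: 512Mi
      endpoints:
        - name: http
          targetPort: 3000
commands:
  - id: install
    exec:
      component: runtime
      commandLine: npm install
      workingDir: ${PROJECT_SOURCE}
  - id: run
    exec:
      component: runtime
      commandLine: npm start
      workingDir: ${PROJECT_SOURCE}
      group:
        kind: run
        isDefault: true            # default command tools run for the inner loop
```

Because the environment is described declaratively like this, any devfile-aware tool — odo, the IDE extensions, or the Developer Console — can reproduce the same containerized dev environment from the same file.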
That's it for the IDE tooling. I'll pass it on to Kasturi. Thank you. You're on mute, Kasturi.

Hi, my name is Kasturi. I'm the product manager for CodeReady Workspaces. This year is going to be really action-packed for us as we embrace a lot of new things. Around the end of the first half, we're coming up with a new name — net-net, the product's core value proposition remains the same, but it aligns with the rebranding the developer tools are going to undergo. Our new name will be Red Hat OpenShift Dev Spaces. We are also in the process of revamping our architectural design with a switch to the DevWorkspace engine. This switch has already happened in our upstream, Eclipse Che, and will happen in CodeReady Workspaces probably at the end of the first half. These changes are essentially driven to make things lighter and more flexible, and to further enhance scalability and high availability — so that's something to look forward to and embrace as we do the switch along with the new name. Apart from that, there has been feedback from customers to make things simpler and easier, and to that end we're constantly working — not just this release, but over the last several — to simplify the install as much as possible, and to improve the startup times of workspaces; that's something we work on all the time, to make it as quick and easy as possible. We've heard from you, and we are also seriously working on getting production-ready support for VS Code and JetBrains-based editors. The Tekton plugin, again, is available upstream, and we're looking to get that working in CodeReady Workspaces by popular demand pretty soon. That's a very high-level view of what we're going to do, but there is much more for us to do.
And if you want to know every single line item that sums up to what's on the slide, there is a reference on the backup slide covering the CodeReady Workspaces roadmap. What I'd also like to bring to your notice is that after the 2.15 release, which is scheduled for sometime in February or March, installation of CodeReady Workspaces as an add-on service will be deprecated; we advise customers to install CodeReady Workspaces from OperatorHub instead. We're also dropping support for OCP versions 3.11, 4.6, and 4.7. That's something I want to bring to your notice. Thank you — that's pretty much it from me. Over to you, Stevan and Steve.

Yeah, so on our container desktop tooling initiative: we are going to start a new initiative on a developer container desktop tool. This tool will provide a UI for managing containers, leveraging Podman. The goal is really to enable developers to pull, test, run, debug, and inspect their containers in their local dev environments, but also to provide a bridge to Kubernetes and OpenShift. The target is to help developers who are working with containers but aiming to run them on a Kubernetes or OpenShift cluster. With this, we also want to enable self-service consumption of our managed services for developers. So it's something new — we are kicking off the effort on this tooling, and you'll hear a little more in the next few months. Did you have anything else, Steve? No, I'm good on this side. Okay, thanks.

So, CodeReady Containers, which fits into this space, has done a lot of great work to bring OpenShift as a single instance onto the developer desktop, and has done some recent work to enable Podman as well. But we are discontinuing the CodeReady portfolio brand, and one of those changes is CodeReady Containers moving towards OpenShift Local.
So a lot of the features you've seen coming to CodeReady Containers will move into OpenShift Local, or blend into this developer container desktop tool as well, and one of the things we'll do is go through that rename. One area where we'll see some growth is how we align with other distributions — for example the MicroShift project, a smaller-footprint distribution that's being used for some edge deployments. We'll look at how we can leverage that, probably in a community-supported way first, as a slimmer local API for those that need it. And with that, I'll turn it over to Serena.

Hi, I'm Serena Nichols, and I'm going to talk about odo. odo is our fast and straightforward CLI for developers to write, build, and deploy apps on both OpenShift and Kubernetes. In 2022 we're going to be releasing V3, which will start at Dev Preview. The V3 release will focus on three major themes. The first is onboarding with guided experiences, where we're looking to provide more in-tool getting-started content so that developers get the contextual help they need to easily start with a technology area — here you can see an example of what that might look like in odo. Second, we're going to provide both inner-loop and outer-loop support, where the inner loop allows devs to work with the same IDE and local workflow they're used to while deploying apps to OpenShift with a single command or even automatically; and as I mentioned, we'll also include support for the outer loop in V3. Our third major theme is increased consistency across our developer tooling — Dev Console, odo, and OpenShift Connector. Stevan already mentioned that devfiles are going to provide that consistent layer between our tooling.
We'll also be working on consistent terminology between the products, as well as guiding users to other tools in our portfolio as opportunities for awareness and learning — for example, "now that you've deployed this, go into the OpenShift Dev Console's Developer perspective and monitor it from the topology view." So these are some of the things you'll be seeing in the next year, and if you want to learn more about what we're doing, you can go to odo.dev. I'll pass back over to Stevan for the next area.

Thank you, Serena. So, on our developer services: we have the Service Binding Operator, which went GA back at the end of October and is getting some level of adoption and usage by our customers. We will continue stabilizing the APIs and working upstream on the Service Binding specification with the folks from VMware, and we will also expand the number of compatible and compliant services — it is a specification, so services must conform to the spec in order to become enabled in our tooling. We will also look at supporting services which are provisioned through Helm charts; that is a pattern we observe developers using quite a lot to deploy their services, so we want to enable the Service Binding Operator to work with those services. And we will look at the bridge experience we can build between a secret management solution like HashiCorp Vault and the Service Binding Operator — how could you inject secrets that are stored in Vault directly into your application? So that's it for Service Binding. Next slide.

On Helm: as Mohit already mentioned, there's an effort underway on integrating Helm into the OpenShift Connector plugins. We are also working on multicluster support, and we will be looking at improving our approach on the CLI by providing the Helm CLI as an oc plugin.
This will allow developers to interact with the same Helm engine as the one on the cluster, and avoid discrepancies between the Helm version installed in a developer's local environment and the one running on the cluster. Another interesting thing we will look at is the ability to export from the cluster: we want to export an existing application as a Helm chart, because what we observe with our customers is that most of those creating and authoring Helm charts start from an existing application — they export their Kubernetes resources and then clean up all the YAMLs and templatize them. So we want to see how we can help developers on that journey, leveraging Helm to better package their applications for Kubernetes. Next slide. And I think this one is back to Serena, on the Web Terminal Operator.

Thanks, Stevan. Yeah, so the Web Terminal Operator is currently still at Dev Preview; what it does is provide a command-line terminal feature inside of the OpenShift console. It was released a couple of releases ago, but we are going to be going GA very soon. Over the next year we're also going to investigate a number of enhancements for the web terminal, including supporting additional CLIs in the terminal; retaining history, so that if your terminal times out and you close and reopen it within a single session, your history is kept; supporting multiple tabs within the terminal; and, last but not least, improved discoverability of the available CLIs. This image shows an example of what that may look like. Now I'm going to pass over to Siamak to cover the outer loop. Thanks, Serena.
So, OpenShift Pipelines and OpenShift GitOps are the center of our outer loop, and our mission is to enable GitOps workflows across a wide area of use cases within Red Hat products. Initially, the use cases closest to mind when we talk about GitOps workflows are application delivery, infrastructure as code, and configuration management, which is what a lot of our customers start with. But our eyes are really on that wider set of use cases that, together with ACM, ACS, and also Ansible, we can enable across customers: cases like configuration management at the edge, where the number of devices is large; fleet management through GitOps workflows; policy as code and compliance as code, which is extremely top of mind for customers; MLOps; and so on. This is something we keep in mind as we go through each quarter of the year. Next slide, please. Specifically for OpenShift Pipelines and OpenShift GitOps, we'll be looking at three main themes — let me go through what we are focusing on right now. One is to make sure that GitOps is standardized as a workflow across both Pipelines and GitOps, not only for the type of workflows customers run on top of them, but for managing and configuring these pieces of software themselves. Secure software supply chain is an important theme for us, and for many other products at Red Hat right now. And improving the operational experience of running these products at customers, when they have platform owners who want to provide them as a service — that's another common theme we'll be focusing on.
More specifically, when we look at OpenShift Pipelines: Pipelines as Code — enabling the GitOps workflow for managing CI itself — is a huge area of focus for us, hoping to reach GA this year, along with some other aspects of dealing with CI, like support for approval workflows and manual approvals in the middle of a pipeline, concurrency control, and similar features that go into the table-stakes capabilities of pipelines. Around security, provenance, signing, and attestation is an area of focus, applied both to the images produced through CI — through a Tekton pipeline — and to the pipeline itself and the task runs the pipeline consists of when it executes. The last bit is the operational side of the Tekton task ecosystem. One aspect is that we want to make sure customers have a way to create a particular set of golden tasks that their developer or SRE teams within the organization can use, have that exposed within the platform, and have it integrated with the rest of the tooling we have, like Dev Console and CI. There are also other aspects of managing these pipelines, especially across a wide number of teams — pipeline history and logs, for example. Right now we lean a lot on the OpenShift Logging stack, and we want to bridge some of the gaps that exist between Pipelines and the logging stack, to make it easier to retain the logs and details of pipelines that executed in the past. On OpenShift GitOps: we just heard from Stevan around Helm, and we want to bring the experience of using Helm in a GitOps workflow to a much better level. There are aspects of working with Helm charts in a Git repo that we hear from customers need improvement, and we'll be looking at those.
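To make the Helm-plus-GitOps workflow just mentioned concrete, here's a rough sketch of an Argo CD Application that deploys a Helm chart out of a Git repo — the repo URL, chart path, values file, and namespaces are made-up placeholders, and the sync policy shown is just one common choice:

```yaml
# A hypothetical Argo CD Application deploying a Helm chart from Git
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app                     # illustrative name
  namespace: openshift-gitops      # where OpenShift GitOps runs Argo CD
spec:
  project: default
  source:
    repoURL: https://github.com/example/my-app-config.git   # placeholder repo
    targetRevision: main
    path: charts/my-app            # the Helm chart lives at this path
    helm:
      valueFiles:
        - values-prod.yaml         # environment-specific overrides
  destination:
    server: https://kubernetes.default.svc   # deploy to the local cluster
    namespace: my-app
  syncPolicy:
    automated:
      prune: true                  # delete resources removed from Git
      selfHeal: true               # revert manual drift back to Git state
```

With a resource like this, Argo CD renders the chart and keeps the cluster continuously reconciled against what's in the Git repo — which is exactly where the chart-in-Git friction customers report tends to show up.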
Bootstrapping Argo CD and GitOps workflows is something we have had available since last year in OpenShift GitOps, and we want to put a lot more focus on it going forward, to get customers easily started with a Git repo that contains a particular layout they can push their applications to. Around security, secret management and integration with secret managers is a huge ask, and we will be focusing a lot more on that. We won't be shipping a secret manager ourselves, but we want to make sure customers can easily use Argo CD with other secret managers — HashiCorp Vault, which is the most widely used, alongside the cloud providers' key managers. On the operational experience, Argo CD multi-tenancy is something a lot of our customers deal with, because most OpenShift customers out there run large multi-tenant clusters, and we want to bring that experience on par with OpenShift and Kubernetes multi-tenancy. Argo CD has its own multi-tenancy model on top of Kubernetes, which gives you two options; we want to make those two a lot more aligned and similar to the rest of the software customers manage on OpenShift. We'll also provide a lot more guidance on managing cluster configurations through GitOps workflows — not everything in OpenShift, or about any software really, is GitOps-compatible, so customers look to us for guidance on how to do particular things that don't sit well within a GitOps workflow, and we will be providing a lot more of that kind of guidance throughout this year. Next slide, please. I think it's over to Rob.

Yeah, I'm Rob Gormley, the PM for OpenShift Builds. A lot of our focus here is getting the Builds project to Tech Preview and then through to GA. We're trying to figure out the best way to begin deprecating Builds V1 towards the end of the year, and making sure that as we get through Tech Preview we have all the things we need for a solid release there.
Dev Console support, build triggers, performance — the things that are required to turn this from an early project into a full-featured setup that facilitates Kube-native builds, with strategies like source-to-image, buildpacks, Buildah, and so on. We'll provide a path for migration from Builds V1 to Builds V2, be that automated or guided, with support for our users there. We're also continuing our development and iteration on the various CI plugins. One of the things we're looking at there is deprecating support for built-in Jenkins: we've found it's very high-maintenance and takes up a lot of developer cycles, and moving to a more native CI integration is going to be the way forward. We're also focusing particularly on GitHub Actions, the biggest client for our CI plugins, while making sure that as we build that out we maintain the ability to support other CI providers — Azure DevOps, Jenkins, and so forth. And, continuing through, enhancing pipelines: Siamak alluded to a bunch of this in terms of making sure we have the tools necessary to make everything product-ready — image signing, log retention, bundles, and DevSecOps integrations.

All right, back to talk about the OpenShift console and the developer experience inside of it. In 2022, our main themes around the developer experience in the console focus on onboarding and platform adoption, as well as addressing requests for enhancements. We're focusing on a number of new features as well as improved experiences in many areas — let me tell you about some of the things we're working on. In our Import from Git flow, the bring-your-own-code flow, you'll see that we're now defaulting to secure routes.
If that doesn't suit your needs, don't worry: just hop over to user preferences, where you can change the default settings used in both Import from Git and Deploy Image. Note that that's going to be released in early 2022. We're also looking at letting devs enter optional arguments for the npm commands when deploying Node.js apps. Regarding Helm charts, we're going to provide a quick start in the Helm catalog showing users how to create a Helm chart repository — namespace-scoped, via YAML — which will pull additional Helm charts into that project; following that, we'll provide a form-based experience to accomplish that same user flow. We're also looking into how we can enhance our service binding flows, which are unlocked by the Service Binding Operator: things like form-based service binding creation for increased discovery, the ability for users to name their service bindings, and enhancing the operator-backed service catalog to surface which services are compliant with service binding. Now let's dive into application portability. Once you've built your application in the dev console and tweaked things to work exactly as you'd like, we're working on a number of features which will let you move your application from your current project to another project, or even to another cluster, or check it into your Git repo. As some of you know, we already have the ability to export your application to YAML from the topology view; this feature is currently available with GitOps Primer installed, and you can even try it on the Dev Sandbox today. Other mechanisms we're investigating are exporting your application as a Helm chart, as Stevan mentioned, and exporting your app and checking it into your Git repo, which enables that next step of having it managed by GitOps.
We also have our migration team looking into a way to import an application from another cluster into the cluster you're currently logged into. Moving on to the theme of usability and desirability, let me share some of the items we're investigating, including feature requests we've heard from our users. Since many developers don't want to touch YAML, we're continuing to provide — or looking into providing — additional form-based flows for both creation and editing. Thinking about the navigation in the Developer perspective: we already provide a way for a user to add custom nav items there, but the feedback we've heard is that users want to reorder those custom nav items, so we're looking into that. Regarding our topology view, we're always looking at ways to make it more efficient and usable, but we're also spending some time addressing scalability and performance concerns. These won't only be addressed by code changes but also by changes in the experience — for example, as you zoom in or out, you may automatically see more or less detail. Last but not least on desirability, we're working hard on providing a dark theme across all of the OpenShift console. Finally, let's do a deep dive into portfolio enablement. The developer console provides a number of features by itself, but as we install additional operators we unlock additional features. With the OpenShift Serverless operator, we're looking into pulling in support for event sinks: we'll start with visualizing them in topology, then provide an event sink catalog for easy creation. We're also going to look into the ability to use Kubernetes Services as sinks when working with channels and brokers, and we're providing KafkaSink support as well.
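For a flavor of what that KafkaSink support means at the resource level, here's a hedged sketch of a Knative Eventing KafkaSink — the API is still alpha upstream, the console may surface it differently, and the names, topic, and broker address below are placeholders:

```yaml
# A hypothetical KafkaSink: events addressed to it are written to a Kafka topic
apiVersion: eventing.knative.dev/v1alpha1
kind: KafkaSink
metadata:
  name: my-kafka-sink              # illustrative name
  namespace: demo
spec:
  topic: demo-events                               # placeholder Kafka topic
  bootstrapServers:
    - my-cluster-kafka-bootstrap.kafka:9092        # placeholder broker address
```

A trigger or subscription can then point at this resource as its sink, so events flowing through a broker or channel land in the Kafka topic.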
Looking at OpenShift Pipelines: Siamak discussed many of these items previously, and some of the workflows he discussed we'll be looking at supporting in the console in the future. We already have some level of integration with Tekton Hub through the pipeline builder, but we're looking into supporting local Tekton Hub instances and being able to use them from inside the console; also enhancing the Pipelines as Code experience, allowing for bootstrapping from the console as well as adding a Git repository. Around Builds V2 — enabled by Shipwright, which Rob discussed — we're talking about pulling those resources into the UI just to show that they're available, and then following that up by making Builds V2 the default build in our Import from Git flow. And finally, we're working with the Cryostat team as they look to provide a dynamic plugin which will give developers a seamless experience from within the dev console to produce, analyze, and retrieve JDK Flight Recorder data from their Java cloud workloads, to help with profiling. And with that, I think I'm going to pass it back over. Thank you, Serena.

Many of you are familiar with the Developer Sandbox for Red Hat OpenShift. Let's recap last year: it was launched in 2021, and overall we launched some new things in Q1. We did a data science sandbox — you've probably seen the RHODS Sandbox — which enables you to create data science models without having to provision OpenShift on your own; it comes bundled with the entire data science stack. We enabled customer communications with some public Slack channels so that people can talk to us directly, onboarded a bunch of new operators — serverless, web terminal — and ran a hackathon for our managed Kafka service and the Developer Sandbox OpenShift experience, which was very well received in APAC, with about 230 participants across nine countries.
So we're continuously delivering new things so that more folks come onto the Sandbox to try out our portfolio and the experience, and also use it to run demos or set up projects for customers if they're a reseller. Next slide, please. When we look at trials from an OpenShift perspective, the Developer Sandbox is the top system used to try out OpenShift, and the good news is it also leads to the follow-up journey: after the Sandbox, users will try out either managed OpenShift or OpenShift itself on public or private cloud, and that leads to sales opportunities and the closure of maybe even existing opportunities. So we can see the Developer Sandbox is influencing new trials of our platform and also influencing bookings for Red Hat. Next slide, please. What's coming in the next few months: we're going to be adding more and more into the Developer Sandbox. We're onboarding the OpenShift Pipelines operator, and we're onboarding the Database-as-a-Service operator, especially with the launch of the RHODA service coming up pretty soon. We'll be merging the RHODS Sandbox into one über Sandbox — the current Sandbox we have, but with a lot more in it — so you can imagine customers can not only create data science models, they can also create applications that consume those models, and even run pipelines with builds and sample tasks to automate the end-to-end path of delivering a data science model into an application. Very exciting — it's going to be a much broader reach across our Red Hat customer base.
Continuing on the free-to-paid journey, as Serena mentioned around the ability to export an application and then import it into a cluster, we're going to be onboarding those capabilities into the Sandbox and then tying it back into console.redhat.com, so that you could come into the Sandbox, create an application, export it, create a new cluster, let's say a ROSA cluster on AWS, and then import the application directly into there, providing a free-to-paid journey. To increase customer acquisition we are going to be working on a few things on the product side and more on the marketing side. One of the things we are targeting on the product side is activation-code-based sign-ups. So if you have a special event that you run, you can have a code that you share with everybody; you could say the code can support up to 250 new sign-ups and keep it valid for the next 24 hours. That way we can see who came in for a particular event, make sure they all land in the same Developer Sandbox environment in terms of the cluster, and then you can run promotions, and the subscribers can avoid having to do phone verification and things like that. They can just come in directly and use the activation code. So we are working on that capability. Next slide, please. So, App Studio. I don't know how many of you have heard of it. It's a code name, so please don't go by the title, but it is targeting a managed developer experience that is going to deliver a lot more of our app cloud strategy. This is the first step into it, so I would like to share some details about it.
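The activation-code behavior described above (a per-event code with a sign-up cap and a validity window) can be sketched in a few lines. This is a hypothetical illustration of the idea, not the actual Sandbox implementation; all names and fields are invented.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class ActivationCode:
    """Invented model of an event sign-up code with a cap and an expiry."""
    code: str
    max_signups: int       # e.g. up to 250 sign-ups for one event
    expires_at: datetime   # e.g. creation time + 24 hours
    used: int = 0

    def redeem(self, now: datetime) -> bool:
        """Accept a sign-up only if the code is unexpired and has capacity."""
        if now >= self.expires_at or self.used >= self.max_signups:
            return False
        self.used += 1
        return True

# Example: an event code valid for 24 hours, capped at 250 sign-ups.
start = datetime(2022, 3, 1, 9, 0)
event_code = ActivationCode("SUMMIT22", 250, start + timedelta(hours=24))
print(event_code.redeem(start + timedelta(hours=1)))   # within window: True
print(event_code.redeem(start + timedelta(hours=25)))  # expired: False
```

Redeemed sign-ups could then be attributed to the event's cluster, which is what lets the team track who came in through a given promotion.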
We are working through the flows that, as a developer creating a new application, take you from an application to a scaled-out deployment of that application across a multi-cloud platform: starting from wanting to create an application, all the way to packaging it, connecting it to services, and confidently and continuously delivering it onto the cloud platform that is suitable for that application. Next slide, please. And for that, as we all know, there are quite a few steps that developers have to go through. Some of these steps happen very often, and some are highly complex but may not have to be done as often. It starts with how can I create a secure container from the source code that I have, all the way down to how do I deploy it to all the cloud platforms, and how can I access the data I need to fix issues when they come up while the application is running across the different cloud platforms. So there are a lot of steps that need to happen, and for developers there's a lot of technology complexity and a lot of know-how that has to be learned and built up before they can go from 0 to 60 miles an hour. Currently, the 0 to 60 honestly just takes a long time; it is not the 5.9 seconds that you're used to in the public cloud world. Next slide, please. What we are announcing with App Studio at Red Hat Summit is a hosted, fully managed experience to solve some of these problems: basically, enable developers to build full-stack applications, including Java, easily connect to all the leading cloud services and tools that they are used to, especially in the cloud world, adopt DevSecOps practices without really becoming an expert in DevSecOps, and deploy to any cloud platform of choice.
That's the mission that App Studio is working towards. To make it easy, we do things like zero-friction packaging of the software and secure runtimes, so developers don't have to fumble around figuring out how to create a container that is going to be secure. We provide seamless integrations and abstract out the technology, making it easy for them to do the work they need to do without having to become an expert in the underlying pipeline products, GitOps products, or the OpenShift platform itself; we are basically creating a developer experience which is managed by us. And in the end, we give them a single pane of glass for all their applications across the software development life cycle and multi-cloud environments, and that is something which is very unique in the industry: everybody has specific pieces, but nobody has that kind of layer where you could do all of this from one place and just run applications across a cloud platform that suits you. Next slide, please. In the beginning we are going to target certain kinds of applications which are suitable for App Studio: full-blown front-end and back-end distributed microservices, applications that span cloud platforms and geos, applications that are going to be scaled up with the demand that comes in. Next slide, please. At a high level, when you look at the architecture, what this means is customers can bring in the services and the tools that they are used to and that they want. Those could be tools and services given to them by Red Hat, by the cloud vendors themselves, or by other ISVs like GitLab, HashiCorp, and GitHub. Then you bring in the experience that is on console.redhat.com, and you have a layer in the middle that makes it very easy to tie the applications you are creating to the services and tools you want to consume. You have an iterative environment, so you don't have to worry about creating a Kubernetes or
an OpenShift environment just to try out your application; it comes bundled with it. You just come in and start running your application, and you can then commit to buying OpenShift environments when you are ready. So it's not like you have to go buy one on day one just so you can iterate on the application; it will give you that, and it may take you months or it may take you days, it's up to the customer. Then, once you are ready to deploy your applications, you can pick and choose the cloud platforms, and you can pick and choose the OpenShift environment that you want on top of them. And then we are working towards a hybrid app cloud, the compute part of it, where we abstract out all these things so that the customer can literally pick certain placement rules and requirements, and we just go deploy, irrespective of the underlying cluster. Next slide, please. To get there, it's an ambitious project, and we are going to be doing it in a very measured, iterative way. It's managed, it's hosted, so the good news is we don't have to follow product release cycles; we want to make it really fast and iterative: take customer feedback, bring it in, and deliver the next capability at the end of every sprint. We are going to start by launching a private preview around the Summit timeframe, and we want to provide certain capabilities in it, the key one being that you should be able to create an app, iterate on it in the bundled sandbox, create a ROSA cluster, and just deploy it to ROSA. So you have a multi-cluster, multi-cloud environment experience right on day one of the private preview. Then we go past that: we start bringing in some of the key managed services we want to connect with, and we enable some team-level capabilities, because you know you are not going to be working on this alone. So we will start expanding it, introducing some paid tiers for cost recovery and also for SLAs, and just give a very high-quality experience to
the developers, and let Red Hat interact with these developers who are coming in from organizations that really want to go into the cloud and are looking at how they can go cloud-first, but in an agnostic and fast way, with as minimal effort as possible. So as you can tell, there is a pretty good roadmap ahead of it, and a lot of it is going to be reacting to what customers want us to do. It's a pretty exciting time. With that, I'll turn it over to Steve. Thanks, Parag. Yeah, we've come to the end here. Some additional resources: we've done the 4.9 What's New developer edition, and we're looking at doing a 4.10 one coming up, so look out for that. We'll update some links here as we have them, but if you do want to reach out, there's the devtools PM mailing list, which covers everyone on this call. And yeah, thanks to Kasturi, Moet, Parag, Rob, Serena, Stevan, and Siamak for pulling this together. If you do want the gory details of all the different things that are happening, you can look in the back of the slides, where we have the detailed roadmap for all the different areas we covered, covering basically the quarters, or more or less the rest of this year. As I mentioned, this continues to be a growing focus; things may change and things expand. There's a lot of great stuff here, so please reach out if you have any questions, and if you have any feedback, we'd love to hear it. Thanks for listening and watching.