Welcome again to another OpenShift Commons briefing. Today we're really thrilled to get an update on the latest release: OpenShift Container Platform 3.7 is out the door. Steve Speicher, who is part of the product management team, is going to give us a quick and dirty, very fast overview of everything in 3.7. You can ask questions in the chat, we'll have live Q&A at the end, and we're going to try to do this in right around 30 minutes, which is ambitious, because there are tons of new features in 3.7. So I'm going to let Steve get started. Take it over, Steve. Thanks, Diane. So I'm Steve Speicher, part of the OpenShift product management team, and really excited to talk to you today about all the new stuff coming in OpenShift Container Platform 3.7. As Diane mentioned, this is an abbreviated set of presentation material; the full deck that covers everything in detail runs to something like 120 slides. I'm just going to hit some of the highlights, and we'll go through that in the next 30 minutes. Feel free to put questions in the chat; I know Diane and Tushar are in the background keeping an eye on that. So, a quick intro slide on OpenShift and what it is at its base. When we look at what Red Hat provides with OpenShift, it's a complete layer for your application: standardization down to a secure operating system with Red Hat Enterprise Linux, abstracting away the physical, virtual, public, or private infrastructure you depend on. And then beyond that, how do you manage a large number of those machines? That's the need for cluster management, the orchestration of your applications and their deployment across all those compute nodes, with a lot of different requirements there. That's the main Kubernetes part.
And then on top of that are the things that help you leverage the platform: a number of capabilities around the management of those applications, deployment automation, build automation, a number of services, and the ability to self-service. We'll dig into some of those. First, let's talk a little bit about the timeline and the roadmap. Typically we put out a release about every three to four months. Our 3.6 release was delivered in August; it seems like yesterday, and here we are getting ready to put out 3.7. As you may know, you can take the Kubernetes version number and add 2.0 to it to get the OpenShift number, so Kubernetes 1.7 maps to OpenShift 3.7. We're continuing at this pace, picking up the latest and greatest Kubernetes release, adding in capabilities during our validation, and then distributing it to our users. OpenShift Container Platform is our distribution of the OpenShift software, namely the OpenShift Origin open source project. As we build on top of and around that, the main capabilities we've delivered in this release are around enabling multi-cloud services. I'll dig in more about what we've done there: bringing different platform services onto the platform, better ways to integrate with different infrastructure, and automation for delivering all of that. So, digging in, what is one of the main pain points you've struggled with in IT? A lot of times it's: how long is it going to take me to spin up a VM to do some testing or development on? What's it going to take to request access to some data, whether on a test database or even a production one? You go through the process: typically you've got to open some type of ticket, whether it's ServiceNow or whatnot.
You have to clearly identify your business need and the request itself, then wait weeks or months, hopefully not. You get the approval, it eventually comes back, you have the credentials you need to connect to that thing, and then hopefully it works and hopefully you understand how to inject it into your application. Well, the whole point of this is to automate that away. The consumer says, hey, I want a piece of this service, and fills out the needed form. The service provider receives that request and then, either immediately or asynchronously, returns the pieces needed to connect to it, which get injected along the way. So, good stuff. Part of that model is built on an open API. There's a multi-vendor standardization effort that has taken the proven service broker API that had been part of the Cloud Foundry service catalog feature, put a lot of effort into hardening the definitions around that API, and then leveraged it for work within Kubernetes and OpenShift. So what is a service broker? A service broker is really just a standardized, automated entry point into a service. As the name says, it brokers the conversation between the consumer, via the service catalog, and the service. The catalog is just a manifestation of all the services the service brokers provide, and the service provider facilitates the request. One of the big things we've done in the 3.7 release is completely redo the user experience. On initial login, you'll see a page that doesn't just tell you to create a project or point you at a project to work with; it shows you the services and the things you can do on the platform, along with additional material to help you get started. There's even the ability to take a tour.
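To make the broker conversation concrete, here is a rough sketch of the calls a service catalog makes against a broker under the Open Service Broker API. The broker URL, credentials, and IDs below are all placeholders; in practice the catalog makes these calls for you, and some brokers require additional fields.

```shell
# List the services and plans this broker offers
curl -u user:pass -H 'X-Broker-API-Version: 2.13' \
  http://broker.example.com/v2/catalog

# Provision an instance of a service (IDs are placeholders)
curl -u user:pass -H 'X-Broker-API-Version: 2.13' -X PUT \
  -H 'Content-Type: application/json' \
  -d '{"service_id": "svc-uuid", "plan_id": "plan-uuid"}' \
  http://broker.example.com/v2/service_instances/my-instance

# Bind: ask the broker for credentials to inject into the consuming app
curl -u user:pass -H 'X-Broker-API-Version: 2.13' -X PUT \
  -H 'Content-Type: application/json' \
  -d '{"service_id": "svc-uuid", "plan_id": "plan-uuid"}' \
  http://broker.example.com/v2/service_instances/my-instance/service_bindings/my-binding
```

The bind response carries the credentials, which the catalog then surfaces as a secret for the application to consume.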
So I'll take a minute away from the presentation here and drop over to an actual running instance. Here we have a starter cluster running 3.7. As you can see, it's the new experience. If I wanted to, I could take the guided tour to help me understand what I can do and how to use the interface. I can jump right into some of the languages. Say I like JavaScript: I'll just grab this and quickly provision a Node.js application. I'll give it a name and point it at a Git repo, and just like that, I've deployed my Node.js application. If I wanted to, I could quickly go over here and see what's going on. Also, if I wanted to play around locally, I could go to the OpenShift Origin repository and look at the releases. This is the upstream, and I can see there's a v3.7.0-rc.0. What I've done is download the oc command so I can run oc cluster up, point it at the version I want, and tell it to deploy the service catalog. It's that simple; it's up and running. I've done that already, so I've preloaded it, and here's an instance of that running. It looks just like what I showed hosted by Red Hat as part of our online starter tier, but here it is running locally in my environment. I can go to my project and see the different applications already running. If things don't look healthy for whatever reason, I can kick off a build. I can also see the great integrated experience we have here, including pieces that came in 3.6: everything comes together, with build logs inline and everything I need to see at a glance. So that was a quick look at that picture, and a bit more of a real-time, live version of it. So what makes all that happen? I mentioned the service broker API and the service catalog implementation that exists, and we're rolling out the template broker.
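For anyone who wants to reproduce that local demo, the flow looks roughly like this. Flag names are per the 3.7-era origin client and the sample repository is just an example, so check the release notes for your version:

```shell
# Download the oc binary from the openshift/origin releases page, then:
oc cluster up --version=v3.7.0-rc.0 --service-catalog=true

# Log in and deploy a sample Node.js app straight from a Git repo
oc login -u developer -p developer
oc new-project demo
oc new-app nodejs~https://github.com/openshift/nodejs-ex
oc logs -f bc/nodejs-ex   # follow the source-to-image build
```

When it finishes, the same catalog-driven web console shown in the hosted demo is available on the local cluster.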
So now you can take the OpenShift templates you've been deploying today and continue to deploy those applications. We're rolling out a new Ansible service broker, where you define an artifact called an Ansible Playbook Bundle and it deploys that. We demonstrated some Amazon integration way back at Red Hat Summit, and we're doing work to bring that forward as well. And since the Open Service Broker API is an open API, people can write their own brokers, bring third-party brokers, et cetera. So, the Ansible service broker. This is a relatively busy slide with a number of pieces, but it's actually a fairly simple concept. Basically, the broker pulls content from a catalog, from someplace that holds an image. That image is just a bundle that can provision a set of services. The bundle itself is a runtime to execute plus a standard set of playbook files that match the verbs of the Open Service Broker API: provision, deprovision, bind, et cetera. It also adds capabilities for additional validation, such as testing. So, a lot of good stuff there. I mentioned the template service broker, which again gives you the flexibility of leveraging the same interface to bring templates onto the platform. It also adds the capability to inject configuration through binding. Binding is the action that, once you've provisioned, injects your application with the configuration of the service you're consuming, and the bind operation is part of that process. We talked about the initial experience. One thing I didn't highlight is that there's a neat way to search: you can use the search bar itself or go through the filtered views down below. There are many easy ways to get at what you want.
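As a sketch, an Ansible Playbook Bundle source tree pairs catalog metadata with playbooks named after the broker verbs. The layout below is illustrative rather than a complete bundle:

```
my-apb/
├── Dockerfile          # builds on the APB base image
├── apb.yml             # metadata: name, plans, parameters shown in the catalog
└── playbooks/
    ├── provision.yml   # maps to the OSB "provision" verb
    ├── deprovision.yml # tears the service back down
    ├── bind.yml        # returns credentials to inject into consumers
    └── unbind.yml      # revokes those credentials
```

The broker runs the bundle image and invokes the playbook matching whichever verb the catalog requested.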
And hopefully there's a lot of great content in there, so you'll need search to find what you want. This is showing the binding operation: if you have an existing set of instances, you can bind them together. Another great piece is notifications. We bring all of the notifications under a single bell at the top. We're all used to seeing this now, whether from GitHub or other places; it's a common notification paradigm. A number of key notifications live here, and you can do all the things you'd expect: mark them read, clear them, et cetera. So you can easily get the information you need and understand what's going on in your environment. And if hosted isn't your game and you need to hack locally, we have something for you. I showed oc cluster up, which is the foundation for Minishift and the CDK. That provides a way to run a single-node OpenShift cluster on your machine and lets you validate your application in a number of ways. One of the features people have been asking for, which has really been helpful, is multiple profiles or instances. Now I can say I have one profile that matches a certain set of applications, and if I want to quickly switch to a different type of profile, I can do that within the CDK and Minishift. So let's move on. We started at the top of the stack and we're moving down, so now we're looking at the orchestration layer. One thing I'll point to is that there's already a Commons webinar that covers the work going on in Kubernetes 1.7, so I encourage you to go watch that. There are also a number of exciting pieces I just want to highlight, such as custom resource definitions, one of the ways extensibility has been integrated into the platform.
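As a hedged example of that extensibility, a minimal custom resource definition against the 1.7-era apiextensions API might look like this. The Backup type and example.com group are made up for illustration:

```shell
cat <<'EOF' | oc create -f -
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: backups.example.com        # must be <plural>.<group>
spec:
  group: example.com
  version: v1
  scope: Namespaced
  names:
    plural: backups
    singular: backup
    kind: Backup
EOF

# The API server now serves the new type like any built-in resource:
cat <<'EOF' | oc create -f -
apiVersion: example.com/v1
kind: Backup
metadata:
  name: nightly
EOF
```

From there, a custom controller can watch those objects and act on them, which is the pattern behind a lot of the add-on work described later.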
So now it's easier to add capability into a running Kubernetes instance itself. Other 1.7 highlights include encryption of secrets within etcd, DaemonSet updates, and StatefulSet improvements such as burst mode for faster scale-up. Let me take that a little further. A lot of work has gone into installation and upgrade improvements in the Ansible playbooks we ship with the product. We have the capability to migrate etcd as part of the 3.7 upgrade and also to scale out the etcd cluster, which is quite important. The other piece is a more modular installer: you may want to break things up into roles and different playbooks so you can target certain ad hoc administration tasks. A fair amount of work has gone into providing those capabilities and a new install experience that breaks the installer itself into phases, so you'll see that as well. Now jumping over to networking. If you've been following along with the OpenShift releases, we've had the core feature of NetworkPolicy in tech preview for a couple of releases, and I'm happy to announce it's coming out of tech preview. We've done the work to validate it, and it's now fully supported. It's a great way to have really fine-grained access control and rules around policies on communication between the different services you have on the platform. In the past, you either allowed all services within a given namespace to talk to each other, or you opened things up across the cluster. You could also do certain things around joining namespaces if you had the multi-tenant SDN plugin, which would pair those namespace networks together so they could access everything between the two. Now you have the ability to say that this specific service, at this port, can only receive incoming traffic from this specific source. So, really a key feature.
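That last rule, a specific service on a specific port reachable only from a specific source, might look something like this as a NetworkPolicy. The labels and port are illustrative:

```shell
cat <<'EOF' | oc create -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-db
spec:
  podSelector:
    matchLabels:
      app: db                 # the policy applies to the database pods
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend       # only frontend pods may connect
    ports:
    - protocol: TCP
      port: 5432              # and only on this port
EOF
```

Once any policy selects a pod, all other inbound traffic to it is denied, which is what makes the model a whitelist rather than a blacklist.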
Some other networking work: we now allow flexibility around the cluster IP ranges, which enables things like custom subnetting for hosts. We get this request a lot, so I'm glad to be able to provide it. Also a popular topic: reference architectures. One of the great things about OpenShift is the ability to run, as I mentioned, on bare metal, on virtualized instances, on different infrastructure layers, and through different configurations. With that level of flexibility, it's very valuable to have reference architectures and implementation guides. As we put out releases, there's usually a little bit of time after a release when we roll out updates to these reference architectures, so those will be rolling out as the release rolls out, or shortly after. Now, that was not true coverage of everything in the orchestration layer; the whole point was to hit a couple of highlights and refer you to the full presentation, which covers the core capabilities. Next, let's talk a little about what's going on in the container space. One of the key things is a project called CRI-O, which you've probably heard about in a number of the Commons meetings and broadcasts. It's a container runtime focused solely on Kubernetes, and we're doing validation within OpenShift. We're coming out with a tech preview of it, and we'll continue to test, validate, and strengthen it, and look forward to moving it out of tech preview status. A couple of things I want to reiterate: since it's focused solely on the Kubernetes use case, CRI-O is able to keep a very minimal and secure architecture.
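For the cluster IP range flexibility, the knobs live in the openshift-ansible inventory. The fragment below is a sketch with variable names as used by the installer in this era; verify them against the documentation for your release:

```shell
# Inventory fragment for openshift-ansible (values are examples)
[OSEv3:vars]
osm_cluster_network_cidr=10.128.0.0/14   # pod network CIDR
openshift_portal_net=172.30.0.0/16       # service IP range
osm_host_subnet_length=9                 # per-host subnet size carved from the pod CIDR
```

Picking these up front matters, since the pod and service ranges must not collide with any network the hosts already route to.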
So it doesn't have to concern itself with other use cases that would expand its overall capability, expand the number of interfaces, and open up possibly different attack vectors. And since we're focused on that one use case, we're able to really focus on scale and performance without having to worry about a wide variety of other cases. The great thing about it is you don't have to change anything. Someone asked recently, how do I notice the difference? You won't: from an end-user perspective, it's just how containers run in the background. You still build your images the way you do today; CRI-O will just run them, and the end user won't know any difference, except hopefully better performance. Along those lines, when you look at how you build those containers, you also want a solution with minimal dependencies. Buildah is a daemonless tool for building and modifying OCI-based images, and we look forward to rolling it out; it's in a tech preview fashion as well. So Buildah is great stuff. Next, system container support is coming out for RHEL and Atomic Host. This is a key piece on the operating system side that OpenShift Container Platform is using in a tech preview fashion. It allows you to bootstrap some of these capabilities: the container runtime itself actually runs within a system container. It's a more flexible way to manage and run the various pieces of the platform, and it enables a set of capabilities as mentioned here. It's really simple to upgrade and roll back pieces managed by system containers, and that gives the level of isolation and flexibility we've seen with containerized applications, now coming to a lower level of the operating system stack itself.
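A daemonless Buildah session looks roughly like this; the base image and package are just examples:

```shell
# Build an OCI image with no Docker daemon involved
ctr=$(buildah from registry.access.redhat.com/rhel7)        # start from a base image
buildah run "$ctr" -- yum install -y httpd                  # modify the working container
buildah config --port 80 --cmd "/usr/sbin/httpd -DFOREGROUND" "$ctr"
buildah commit "$ctr" my-httpd-image                        # commit it as a new image
buildah images                                              # list local images
```

Because each step is a plain CLI call, builds can run inside a container or a CI job without mounting a Docker socket.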
So I overachieved in cutting down my slides and getting through the content. One thing I want to reiterate is that there's so much content, I didn't want to take too much time covering everything in detail, but I did want to make sure I hit the key highlights. The thing to note is that the release is coming out; you'll see announcements about it over the coming weeks, both the announcement itself and the availability. There are ways to get your hands on it today. As I mentioned, you can hop into the github.com/openshift/origin repository, grab the 3.7 RC release, and play with it there. The other option, as I mentioned, is to go to openshift.com and register for the OpenShift Online Starter tier. We are starting to roll out the 3.7 upgrades there, and it's available in the Canadian regional cluster today. So there are all these great ways to get your hands on the product early, see what it does, and provide feedback through GitHub issues, the community forums, OpenShift Commons: many good ways to provide feedback. So with that, I'll see if there are any questions. I think the main one, since the rest I think we have addressed, is: can people get access to the 3.7 RC code online? They can get access to 3.7; the only code available today is what's built through the OpenShift Origin repository. The actual release bits, meaning when Red Hat will release OpenShift Container Platform 3.7, are due out in a couple of weeks, and that's when those bits will actually be available.
I think the other question has actually been answered in the chat, but it's worth saying out loud for people watching this as a video. People are asking: can you work now entirely without the Docker daemon, using CRI-O and Buildah? We're doing some tech preview there, so not entirely. Yeah, as Ben answered in the chat, that's right: Docker is still needed for some of the building aspects. But if you're looking to isolate some of your workflows, there are always interesting things you can do within OpenShift and containers, using labels to isolate workloads so that builds only land on certain nodes. So you can further isolate different workflows, even if you're experimenting with different things. But again, Buildah and CRI-O are in tech preview in 3.7. Let's see, there are a couple of other questions; let me pop over and take a look. Jonathan's asking: can you talk a little bit about how system containers work to isolate Kubernetes from the host OS? Are these Docker-in-Docker, or VMs, or something else? Want me to take that, Steve? Go for it, Ben. Yeah, so a system container is still a regular Docker container; we just add a little bit of metadata to the image and run it slightly differently. If you've done existing containerized installs of OpenShift, those drop a unit file on the host, and we carry that forward with system containers. What's different is how we store them on disk: we leverage OSTree for deduplication. The classic example is troubleshooting a node. If you fill up a Docker storage pool and blow it away, with a system container running the kube role or the container runtime role, you won't blow away those really important roles when you wipe the storage pool. So it gives us that kind of extra resiliency for bits
you would classically associate with being part of the operating system, but you can still iterate and get all the advantages of running containerized. Does that help? I think so; I think that was a good answer to the question. Let me go back here and see. I think we answered this one in chat too: are you imagining service brokers replacing templates? And you did walk through templates being usable as backends to service brokers. Yeah, just to be clear: the broker is just an API, a means to provide a service in the end. There still has to be something that defines what the application is or does the provisioning, and templates are still a supported way to do that. Chris had asked a question I don't think we actually answered: can we access and set up accounts on the cluster where 3.7 is running? I think it was around the networking slides. And: how does NetworkPolicy compare to Istio? Istio is just coming out. NetworkPolicy is really focused, as I think Tushar mentioned, on the layer 3/4 aspects, where Istio is layer 7. If you look at NetworkPolicy, it's a declarative model to define the communication paths between the different services and what's allowed and what's not. Istio itself is a service mesh with a lot of different capabilities: policy, along the lines of what NetworkPolicy does, plus a programmatic routing aspect so you can do intelligent routing based on whatever criteria you define or program in, among other things like telemetry and different reporting aspects, so you can understand what the usage is and what traffic was actually allowed or denied, et cetera. So they're somewhat complementary, and Istio provides considerably more capability as well.
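The system-container workflow Ben describes can be sketched with the atomic CLI. The image and service names here are examples:

```shell
# Install a component as a system container (image name is an example)
atomic install --system --name=etcd registry.access.redhat.com/rhel7/etcd

systemctl start etcd       # the install drops a systemd unit on the host
atomic containers list     # content is stored and deduplicated via OSTree
```

Because the content lives in OSTree rather than the Docker storage pool, wiping the pool during node troubleshooting leaves these host-level services intact.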
There was one other question. Yeah: do you have example service broker definitions, or links to documentation for how to create these? That might be good to follow up with a link in the blog post itself, but I'll start digging into it. We're working on increasing our enablement material. We have what someone called a pre-alpha version of a Go SDK to help end users write brokers, and we're working with the appropriate partner teams to help enable folks and partners writing service brokers themselves. I don't have a good documentation link handy, so I'll continue to search. Planned release date for 3.7? I think you mentioned it. Yeah, the bits will be available by the end of the month, not the end of the week. The official Red Hat push of OpenShift Container Platform will be available, I think, with a target date of November 29th, so keep an eye out for that. Just in time for KubeCon, because we do everything based on when the next event is. Judd is asking: with RHEL now released on ARM, is Red Hat going to bring OpenShift to ARM? It's off topic, but I don't know any status on that; not aware of anything. If there are things about 3.7, and we did this really fast today, that people on this call want a deeper dive into, just email me on the mailing list or at dmueller@redhat.com and I will try to stage it. Actually, I'll chime in on ARM real quick. This has been on the RHEL side, and eventually for the whole container platform: I will say we're definitely looking at multi-arch being an option. We will have base container enablement coming up soon in RHEL for ARM, so probably around the RHEL 7.5 timeframe the extras repository and base images will be there.
But from there it's a lot more work to bring OpenShift to it. That first phase, though, is definitely pretty solidified at this point. You're getting me excited here; it's pretty cool. So again, everybody, thank you for joining us. Oh, there's a mention of federation, listed in the 3.9 category, and that's a good topic. Could you talk at a high level about the federation roadmap? Yeah, the federation roadmap I can talk about. If folks have been paying attention to the Kubernetes community, SIG Federation has been doing work to understand the right path forward for how federation fits into Kubernetes. There are a couple of different pieces of work around what the right use case is and which approach is right, and they're doing analysis around those pieces: whether it's a single control plane that aggregates a number of backends and replicates state across them, or another tool that sits to the side, controlling and working across them. It's going through that kind of validation. When you break down the use cases around federation, a lot of different pieces play into it: global load balancing, DNS, geo-replication, replication of images and of configuration state. We're doing work focused more on the registry in that scenario, targeting the 3.9 release. And so that's where it stands. Cool. So I'm going to do my final pitch here for questions: if you have one, pop it in the chat and we'll try to ask it. The other thing I want to remind everybody is that the OpenShift Commons Gathering, the face-to-face for all the upstream project leads and roadmaps, will be happening on December 5th in Austin, Texas, the day before KubeCon.
So if you're coming to KubeCon, please consider registering and coming to that, because you'll get folks from the engineering team, Clayton, Dan Walsh, and Raul, and a number of the PMs will be there, as well as project leads for Kubernetes and folks from Amazon talking about running OpenShift there. There will be a lot of good folks in the room and lots of customer case studies, so I'd love to see y'all come. And here's one last question for you: for the OpenShift-specific add-ons to Kubernetes, are there considerations of working towards a modularized plug-in, e.g. a Helm model? Very good. If I understand the question correctly, is it that you want to run Helm with OpenShift, or that you want OpenShift to follow a model like Helm as far as modularized plug-ins go? I guess I can answer both. One: we've published blog articles on what it takes to run Helm with OpenShift, so there are some ways you can run it, such as a Tiller server within a given namespace that allows you to deploy Helm charts. That's not something we ship and support; we don't support Helm or Tiller today, but we've described how you can do it. As far as running a cluster-wide Tiller server, there are some considerations you'd have to be aware of if you're doing that; it sort of depends on your own deployment and what you're okay with.
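For the curious, the namespace-scoped Tiller experiment described here looks roughly like this with Helm 2. This is unsupported, and the project name and chart path are placeholders:

```shell
# Scope Tiller to a single project rather than cluster-wide
oc new-project tiller
export TILLER_NAMESPACE=tiller
helm init --client-only      # configure only the local helm CLI

# Deploy a Tiller server into the project (e.g. from a template or image),
# grant it a role limited to the namespaces it should manage, then:
helm install ./mychart --tiller-namespace tiller
```

Keeping Tiller's permissions confined to one project is the main consideration the blog articles call out, versus a cluster-wide server with broad rights.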
As for what we're continuing to track and watch upstream: within SIG Apps in Kubernetes, the primary focus is on enabling a range of application tooling around Kubernetes, not endorsing any single supported set of tools, but clearly Helm is one that is used widely by the community, and they're going through a process of defining what Helm 3 is. So we're continuing to work with that to find a way forward to provide a supported solution as needed to meet customer use cases. Any feedback there is welcome; we're continuing to look at it and evaluate it. We're also continuing to evolve how we build OpenShift and how we add the different capabilities into Kubernetes. If you look at the service catalog, it's actually built as an add-on itself, so you can, in a sense, build the various components of OpenShift as add-ons, if you will, to base Kubernetes. Yeah, there you go; there's a follow-on question to that. It's a two-part question, I guess: if you took the Helm question out of it and just said, use something to install OpenShift components, more like, I want to have Kubernetes and I want to inject OpenShift components into it. Yeah, that's a model we're heading down, as you well know. We started this journey of building OpenShift on Kubernetes, and contributing to Kubernetes, well over three and a half years ago, and when we built some of the core concepts, it was done without a true extension or add-in model built in. We've worked with the community to help build those things in and to start building new capabilities that way, but at the same time, there's a cost involved, and time, to move things to be more of this add-on or plug-in model.
So we're continuing to evaluate those things and do what makes sense based on our customer needs there. How those add-ons exactly get installed, whether the technology is Helm or something else, I think is still to be determined; since Red Hat doesn't currently have a Helm-supported solution, we'll have to see. I think we're at a close here, Steve. Thank you very much; I know trying to get you to do this in 30 minutes or less is impossible. So thanks for all the questions, and to Tushar for joining us, and Ben as well. We'll do this again soon. And again, hopefully you'll all join us in Austin, where we'll have lots of engineers in the room to answer even more questions around 3.7, and look for the blog post shortly. All right guys, thanks. Thanks everyone.