Good morning, good afternoon, good evening, wherever you're hailing from, welcome to a special KubeCon EU office hour. This office hour will be discussing OpenShift 4.7 and addressing any questions you may have about the platform itself, and, you know, anything else that comes up along the way. So let's do a round of intros. I'm Chris Short. I'm the executive producer of this thing called OpenShift.tv. Josh Berkus. Yeah, I'm Josh Berkus. I'm Red Hat's community person for all things cloud native, particularly Kubernetes. And I invited the OpenShift developer advocate team here to talk about OpenShift 4.7. And so, Ryan, you want to start? Introduce yourself. I'm muted. Totally muted. As always. Thank you. Thank you for the extended introduction. I'm Ryan Jarvan, and you can find me in most places, usually muted in the first minute or two of the chat, but Ryan Jay online. And yeah, I'll hand it off to Jason Dobies on our team. We're all developer advocates on the OpenShift team, mostly, on this call. So welcome. Hit us up with your questions. We'll keep an eye on chat. Yeah, the rest of us are all on the same team. So in terms of introduction, I'll just say I help you develop your stuff. My favorite language is Python, although I've been doing a lot more Quarkus-related stuff. I've obviously been spending a lot of time on OpenShift in the past four years working with this team. And of course, I'm getting a phone call. So let's head over to Brian. Hey, yeah, I'm Brian Tannas. I am also a developer advocate focused on OpenShift. I mostly focus on some of the platform tools, things like serverless and Tekton, things that can kind of make your life as a developer a little bit easier. Well, I guess maybe, or more complex, depending on how you look at it. You know, there are some things that do make it a little bit easier, but I do focus on those, and I'm sure we'll get into some of those details on here with some Q&A. My favorite language, same as Jay's, would be Python.
I've recently been learning a little bit about Elixir, and I'm kind of curious about Erlang and how that's going, and, you know, some of the different places where that's used. I think it's kind of interesting. It's got distribution built in, so it's, I don't know, it's different. It's neat. But I've been focused on that a little bit. And yeah, so I guess we'll pass it off to Natali. Hey there. How are you? Am I still there? Yeah, welcome back, Natali. Welcome back. It's been so long, back to back. Yeah. Hey, hey, happy to still be here. My name is Natali Vintom, part of this team of developer advocates for OpenShift. My focus, I think, is I'm a Java developer, and I'm also into DevOps. So this is me, and my expertise is within OpenShift and all of DevOps, so CI/CD and GitOps, of course, we talked about it before. So happy to jump here on the call with my team talking about OpenShift for developers. Thank you for joining us. Appreciate your diligent duties. This is your third stream today. Yeah, this is the third. Cool. It's KubeCon. It's my time zone. Come on. Right, like you've got to show up, right? Yeah. Awesome. Thank you so much for being here. Okay. So I don't have questions from the stream yet, folks. So by the way, we are here to answer your questions. We do have a little bit to talk about in terms of OpenShift 4.7, but you can interrupt at any time with a question. And to give you a little incentive to step forward, we have some nice OpenShift 4 shirts to give away if you participate in the session. And in the meantime, let's go ahead and get started. Ryan, I think you wanted to talk about the developer sandbox environment. Hey, I would be happy to. Am I muted? No, I'm not. No, you're not. You're good. You're good. Faked myself out there. Yeah, the Developer Sandbox is one of our newer offerings. It looks like there's a link in chat that Chris got for you.
And yeah, it's definitely an easy way to get your hands on a free OpenShift environment. There is a limited-time trial. I think it's 30 days currently now. Yeah. Yeah, we just updated it to give you a little extra time. And once the 30 days is up, you can click through and get another sandbox. It may be that 30 days later there's a capacity limitation, so you might not get one right away, but usually you click and you get one immediately. So pretty easy setup. A couple caveats I should point out, since we're the developer advocate team and we want to give you the straight, no-BS trade-offs. The caveats on the sandbox are: you do not get admin access. So if you had operators that you were hoping to install, that's not an option on the sandbox. You can still go and download CodeReady Containers and set that up on a system with plenty of RAM and CPU and go to town with that; you'll have full admin access there. So plenty of options to get started with OpenShift if you're new to it. And I'm assuming folks that are joining this call are already well aware of what OpenShift is, but since it's KubeCon week and maybe there's a few new folks: we're trying to offer really developer-focused tools built into the cluster. So you have things like a really nice web console, which we should be showing some of right now, with the ability to manage CRDs and other Kubernetes resources and the ability to do builds and deployments on the platform, including things like hosting your logs, all kinds of great stuff that's really tightly integrated. Any other follow-ups on that topic? How to get OpenShift? Or anything about sandboxes? Or, if somebody's just getting started, is there a live tutorial or something similar for 4.7? Oh, we have one other place you can get free access. It's not quite a live tutorial, but the other free access area I can link you to is the OpenShift Playgrounds.
We have just recently updated those to OpenShift 4.7. The Playground area doesn't have a whole lot of step-by-step introductions on how to interact with it, but you can definitely go through 4.7 there. You can go straight to the docs, and I'll put a link to the 4.7 docs; you can pretty much do just about whatever you like from the official docs. If you're not familiar, there is the whole concept of learn.openshift.com. We have recently added OpenShift 4.7, which is the latest release of OpenShift, and all of that. On learn.openshift.com, we have a set of scenarios and tutorials and things that you can use to help bolster your Kubernetes and OpenShift knowledge. So we have some getting started with what a container is. We have some things about OpenShift, some of the beginner-type things like this is what a pod is, this is how you would expose a service. Those types of things are there. There are also tutorials for more intermediate concepts, for the tools that Ryan was saying you wouldn't necessarily be able to access on that Developer Sandbox, like OpenShift Pipelines, which is Tekton. We have some news around that, which we'll talk about here in a second. We also have some tutorials around GitOps, so Argo CD, and we have some things around Knative. Those are all on learn.openshift.com if you're interested in learning a specific thing with OpenShift, and some of that knowledge also applies to Kubernetes as well. Actually, before we go into some of those other topics, it occurred to me, I wanted to ask another question around the introductions and I forgot about it. Just a quick round: I want everybody to pick a favorite new feature or new advancement from 4.7. So who wants to go first? Oh, man, I'm going to jump on this because I have a clear favorite that I really want to highlight: the new web terminal.
I don't think it's installed in the sandbox yet, but we have a web terminal operator that you can install. It's not on operatorhub.io, but if you go to the OperatorHub that's built into your cluster, you can find the web terminal operator. If you've used Google's Cloud Shell, it's similar: it gives you an embedded terminal that you can pop open. And it's context sensitive. So if you're within a certain project scope, or, for upstream folks in Kubernetes, if you're working in a certain namespace and you boot up a terminal in that namespace, it should already have your kubectl and oc credentials initialized and set up to have you work within that namespace and everything. So it's really nice. I'm setting up a new workshop lab that's going to use that terminal quite a bit. Nice. I'll jump in before someone steals mine. So two days ago, I guess, we announced GA for OpenShift Pipelines. This is something... I did a demo, I guess two months ago, where I was asked to put together this kind of end-to-end developer experience on OpenShift. So I had this four-container application spanning a bunch of different languages. And one of the things, I'm like, well, it'd be kind of cool if part of these containers was built by Pipelines, because I did one using just a general Source-to-Image build, and there were a couple of different lessons going on. But I was like, all right, cool, this is my chance to play with Pipelines. And once I got things working and saw it, and I should say I probably stressed that incorrectly, but once I got multiple pieces working and you can see it in the UI with that pipeline, and it's showing the progression and the parallel tasks, it was a really cool feeling, and how easy it was, once I got the initial setup in place, to just add new steps, add new steps. And then I, admittedly and intentionally, over-engineered it because it was for demo purposes. So I did across three tasks what could have been one.
So what should have just been stuff goes in and then stuff comes out, like, no, we're going to blow this out to like 12 different tasks, but it looked really cool. And I was impressed. I've been impressed for months now seeing the early UX mockups for what the pipelines were going to look like in the GUI. I understood the abstraction, but finally sitting down and putting my hands on it, being like, okay, this is going to be this particular task, I'm going to reuse this one, and this one's going to be custom, it all just fell into place. And it just started to kind of click. And then, like I said, once I got the initial two or three pieces in, then it was easy. It was super easy for me to over-engineer it and just keep adding more pieces on top of it. So it was a really cool feeling. I'm not going to lie, I've never been particularly interested in setting that stuff up. I've used Jenkins to the extent that I was a developer and I would watch Jenkins turn red and yell at someone, and it was typically me turning it red anyway. So actually having set one up... Yes, exactly. We've all been talked to about breaking the build or doing something wrong. So actually setting one up and putting the thought into these are the steps that make sense, and this is how to actually dig into the logs and stuff, was really cool. And I was really impressed with how it shaped up. Nice. That stuff's based on Tekton, right? Yep. Awesome. So if you're from the upstream world, this is hopefully the same stuff that you're using on other Kubernetes, just tightly integrated into the console experience. So hopefully easier to use and navigate. Yeah, and what's in the developer perspective, or in the web UI, I should say, leave it that way, is a really, really neat UI. Using the Tekton command line worked too. I can't remember what I did with each of it; it was just kind of throwing stuff out and poking around with it. But the Tekton CLI works.
TKN, I have no idea how we say that out loud. I'm still struggling with kubectl; I don't want to have to pronounce that one. But then dealing with all of that, like I said, the abstraction of the general resources and stuff like that was all fairly logical, and that knowledge stemmed pretty well from just general Kube knowledge. Yeah, we have a Task, like, that Task is going to do something and there are steps, you know, and to run that Task, you have a TaskRun. Yeah, like, these are specifics for Pipelines, but those are things that make sense. And the developer console on the OpenShift web console makes it easy to go and build, like you're saying, build that whole flow, right, which is cool. Let me interrupt you real quick before you go to your thing, too. Ryan's last point about installing the operator adding components to the web console: part of that demo was, and this was really cool, that I started with a vanilla cluster and then navigated to OperatorHub. And obviously we're not showing any of this, but if you've ever poked around OpenShift, or seen any of our demos, you've seen a fair amount of this. And then it was this kind of customize-your-cluster after that. So I was like, oh, I want serverless, I'm going to hit install on that. Oh, I want Pipelines, I'm going to hit install on that. The Pipelines one is particularly interesting because you hit install, and as I was talking over it, the Pipelines menu appears on the left. It makes for a great demo, because you're like, ta-da, we magically have pipeline stuff. And the thing is, that's actually how it works; it's not a demo smoke-and-mirrors thing. But that notion of, I had this base cluster, and it's just click, click, click to kind of customize it with all these added-on features, is really cool. Sorry, go ahead. It's the marketplace of different pieces of software that you can use, and you can compose your stack in OpenShift.
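The Task-and-steps abstraction described above can be sketched in YAML. This is a minimal, hypothetical example, not the demo discussed on the stream; the names (say-hello, demo-pipeline) and the image are placeholders:

```yaml
# A Task is a reusable unit of work made of ordered steps,
# each step running in its own container.
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: say-hello                  # placeholder name
spec:
  steps:
    - name: greet
      image: registry.access.redhat.com/ubi8/ubi-minimal
      script: |
        echo "Hello from a Tekton step"
---
# A Pipeline sequences Tasks; tasks without runAfter can run in parallel,
# which is what the parallel lanes in the console visualization show.
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: demo-pipeline              # placeholder name
spec:
  tasks:
    - name: hello
      taskRef:
        name: say-hello
```

Executing the pipeline creates a PipelineRun, which is what you can also kick off from the CLI with `tkn pipeline start demo-pipeline`.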
So you want Pipelines, you want GitOps, you want serverless, service mesh, what we formally call platform services, and you can install them as a service, right? You go into this marketplace, which is OperatorHub, and you install those platform services on top. Then, from the developer catalog in the web console, you can build your application. So you install the component that will be used, and then your application will use it. And then on top, you install your application with automation for creating containers, or automation to invoke the pipeline you installed previously. So the user experience, the developer experience, really improved with the OpenShift web console. And I agree with Jay, I like this operator-as-a-service in a marketplace inside OpenShift, to build your platform as you need it. Very quick. The one hold-up there from a developer perspective: platform services are things that you can't install as a developer; usually you'd need admin credentials. But I think the really powerful thing is I can install this cluster on any cloud and not have to rely on platform services that are not CNCF-sanctioned. I don't need to go to Google's Logstash or Stackdriver or, you know, I can use the actual upstream technologies running on the cluster, powered by, or backed by, operators. So it's really cool to have that all show up so easily and be available on any cloud. I agree. No lock-in. Well, at least there's no specific-place lock-in. This is all open source software. Of course, in the OpenShift version we support a certain version, we back-support that version, but it's all open source, all upstream, and it's all on OperatorHub. So in this case, there's really no lock-in and you can install it in any place that you want.
Let me jump in quickly, because I don't know how commenting works with the Restream bot, but some of you have been asking how long you can use the Developer Sandbox. So it's 30 days, and someone correct me if the policy has changed, but after that the cluster goes away, and then you can get another one. Yeah, I'm seeing nods, so they haven't changed that policy. So you sign in with your Red Hat developer account, that's free, you get your cluster for 30 days, and then you have to make sure you back up anything. After that, there's no kind of extension, which is, let's call it what it is, intentional, because back in the old days of, God, OpenShift 2, I guess, we had this hosting and people would use it to host their WordPress blogs kind of indefinitely. And I know because I was one of those people while I was on the team. It was a spectacular abuse and I kind of miss it. You'd think as an employee I would get access to that. But yeah, so 30 days, and then you can renew it as many times as you want, just keeping in mind that it's not an extension. So you're going to have to make sure anything you have out there that you may want to save, some resources or anything like that, if you're not using proper GitOps, where you've already stored that stuff in version control and can recreate it, otherwise it's going to end up getting knocked out on you. Please tell me someone's going to use that as a jumping-off point to talk about GitOps, because that was a real softball of a transition. Well, I was going to ask, you know, Natali didn't mention a favorite feature. I probably would speculate, but I'm not sure. So we'll see. Yeah, you know, this GitOps integration is really, really cool with Argo CD. I think this is going to be my favorite from now on. Yeah. Before it was Tekton; now it's this one. It's still cool. And there's lots of work in progress also to bring the same experience as with Tekton, where we have this tight connection with the web console.
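The "store it in version control and recreate it" point is the heart of the Argo CD model: you declare which Git repo should be reconciled into which namespace, and Argo CD keeps the cluster matching it. A minimal, hypothetical Application manifest might look like this; the repo URL, paths, and names are placeholders, not anything from the stream:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app                     # placeholder name
  namespace: openshift-gitops      # assumes the OpenShift GitOps operator's default namespace
spec:
  project: default
  source:
    repoURL: https://github.com/example/my-app-config   # placeholder repo
    targetRevision: main
    path: manifests                # directory of Kubernetes YAML in the repo
  destination:
    server: https://kubernetes.default.svc
    namespace: my-app
  syncPolicy:
    automated:
      prune: true                  # delete cluster resources removed from Git
      selfHeal: true               # revert manual drift back to the Git state
```

With a manifest like this, losing a 30-day sandbox is a non-event: point a new cluster's Argo CD at the same repo and it rebuilds the same state.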
Really, if you look at Tekton, Tekton also has a dashboard. It's kind of standalone; you can install this dashboard, but it's not that advanced. The integration of Tekton inside OpenShift is really powerful from a user experience perspective in the dev console. And I'm looking forward to the work to integrate this GitOps experience in the web console as well. Because, as you know, the web console is also programmable, extendable via API, and I think there will be a talk at Summit by Ryan and Brian and Jay about how to customize the dev console. So the customization also comes with the operator. If you install, for instance, the Quay operator, which is an operator for having an enterprise registry in OpenShift, you will get additions in the web console. If you install Pipelines, you will get more additions. So this is very powerful and cool, and I'm looking forward to the GitOps integrations in the web console, which are coming. Natali, I would like to say congratulations on your efforts on launching the new Learn topic area. We have a new learn.openshift.com GitOps topic that Natali did a lot of work to help get ready. So if you're interested in learning more about Argo CD and GitOps, definitely check that out. I'll post a link in chat here. Nice work on that. Thanks, Ryan. Thanks. And thanks for your great help; to be honest, you have been part of this effort. I want to thank also Christian Hernandez; most of the content comes from his GitOps workshop, and also our colleague Dewan, who collaborated on making this a scenario. So we're looking forward to any hints or feedback or suggestions for extending this GitOps learning path. At the moment we do getting started, and then Kustomize and sync waves and hooks. But we would also like to touch on the new GitOps CLI, called kam, which is able to bootstrap both the pipelines part and the GitOps part. And then we would also like to mention and cover in some way...
I don't know if it's possible in Katacoda, but we would also like to cover multicluster with Argo CD, which is the hottest topic. It sounded good right up until you muted, yeah. You went very robotic on us for a second. Yeah. All right. I think it's okay now, though. Yeah. It sounded fine. Might have fixed it, Ryan. All right. All right. So, questions. Do we want to get to those? Well, there's one here that was related to that, which is: does OpenShift Pipelines include Tekton Triggers? That's what I was... Yeah. Yes. Definitely. When you install the operator, you have the Tekton APIs, so you have Tekton and also Tekton Triggers. When you install the operator, it comes with a certain version of Tekton core and Tekton Triggers. So yes, the answer is yes. And we track those versions between the version of the operator and the version of the Tekton upstream project that's installed by the operator. So definitely you can have webhooks to your pipelines. And we also have some good tutorials on how to get started with Pipelines on OpenShift, including with webhooks. Maybe we can share those in the chat too. Nice. Okay. And we're doing that. We have a couple of questions, kind of related questions, about a completely different topic. We've got a couple of people watching who are OpenShift-on-VMware users. And so one of the questions: somebody wanted to confirm that 4.7 works on vSphere 7. And I think it does. So we'll have to confirm that. vSphere 7 Update 2. Yeah. Yep. Okay. Second question, this one came from Slack. Armin says, I was just talking with VMware about the in-tree vSphere provisioner for OpenShift, which is broken for them. And he wanted to know when it might be fixed, and when you plan to deliver support for VMware CNS with OpenShift. As far as I know, it should already work. To my knowledge, it does. Anybody got anything on that? Yeah, I'm not familiar. Yeah. I haven't heard of it not working. Right.
There could be an issue; I don't know. Yeah. Okay. Armin, so it sounds like it might actually be an issue specific to your infrastructure. Feel free to follow up with a Red Hat individual: short@redhat.com, I'll get you to the right place. So just send me an email. Cool. Or you can ask in the Slack that you're already in, particularly tomorrow when we have more people online for the conference. Cool. Okay. More questions. Oh, by the way, JP Day said that his favorite feature is Windows workers. Right. But there is a caveat there when it comes to networking, I believe. I have no idea how Windows networking even works. Well, no, it's the cluster networking. It has to be of a certain type. I forget, it has to be the OVN-Kubernetes one? Yeah. Yeah. Okay. So we have a really long GitOps, etc., question. Is the new GitOps operator multi-tenant today? They say: we've been using the Argo CD community operator for an infrastructure GitOps approach with cluster-wide permissions, but now application teams are also asking for dedicated instances in their namespaces. Is this separation between one cluster-privileged and several namespace-privileged Argo CD instances possible? And no, they haven't really tried the GitOps operator for this yet. Okay. Well, this question is a kind of architectural question. So usually the use case is having one single central Argo CD that controls multiple clusters. But if they want to do this multi-tenant approach, that is definitely possible inside the same OpenShift cluster, with fine-grained role bindings or cluster role permissions in multiple namespaces. I don't know; I'm not aware of any work on that. The GitOps operator is bringing Argo CD with OpenShift RBAC and all the permissions and stuff that's needed, but I don't think that we will add more in terms of this Argo CD multi-tenancy.
So I think it's a question that needs to be investigated more, maybe with CMAC offline, or in chat and Slack. But I don't think there is a huge difference in this use case between the OpenShift GitOps operator and the other one. So you're saying, yeah, we could limit access, like who accesses it, with RBAC rules. But as far as having multiple instances of Argo CD running for, like, each BU, or however you designate your cluster, we don't think that exists. Yeah, yeah. I think the use case is one Argo controlling multiple namespaces or multiple clusters. Yeah. So this is the multi-tenancy. I don't know if a cascade of multi-tenancy is possible; it's something we need to explore and investigate. So JP Day asks: what is the difference between OpenShift SDN and OVN-Kubernetes, other than the ability to move around Windows containers? That's a good question, you know, because OpenShift SDN is based on an open source project called Open vSwitch. So this is the SDN, the implementation of the software-defined network. OVN is a kind of superset. So basically, with OVN in OpenShift you still have Open vSwitch, but OVN is a superset that can also talk with non-Linux implementations; on Windows, you can use OVN, and under the hood there is an implementation, which in our case is still Open vSwitch. So there isn't really a versus: OpenShift SDN is built directly on Open vSwitch, and OVN-Kubernetes layers OVN on top of Open vSwitch. And thanks to this OVN, we can also have a heterogeneous computing system, with Linux and Windows hosts, for instance. So OVN-Kubernetes uses OVN as a superset over the SDN. That is the context about OVN and OpenShift SDN. The biggest point, I think, is that you have to choose one at setup time, right? You can't flip them easily, or at all in most cases.
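The "choose one at setup time" point refers to the `networkType` field in the installer's install-config.yaml. A trimmed sketch of just the networking stanza, with the default CIDRs, to show where that choice lives:

```yaml
# Fragment of install-config.yaml; only the networking stanza is shown.
networking:
  networkType: OVNKubernetes   # or OpenShiftSDN; fixed at install time
  clusterNetwork:
    - cidr: 10.128.0.0/14      # pod network
      hostPrefix: 23
  serviceNetwork:
    - 172.30.0.0/16            # service (ClusterIP) network
```

Once the cluster is installed, this value is baked into the cluster network operator's configuration, which is why switching later is disruptive, as discussed below.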
So that is kind of the big decision-making factor, right? Yeah. And I think JP Day mentioned a very advanced topic, which is the kind of tunneling. I think we're going beyond the developer scope here. Yeah. But this is really niche information: it's VXLAN for OpenShift SDN, so Open vSwitch, and Geneve tunnels for OVN, the kinds of IP tunneling used in the software-defined network. So thanks for this very niche information. I hope all the developers appreciate that. Right. OK, so some clarification: you can migrate from OpenShift SDN to OVN, but it will be disruptive. I guarantee you that there's no going back unless you have an etcd backup to restore with the old network on there. So yeah, it's a good point. Is Argo CD the single source of truth? If yes, are no backups needed? Still, what if something changed on the cluster, some deprecation of something, and the Argo CD manifests, Helm, et cetera, can fail? So in disaster recovery, you're safer to have a backup via some service instead of Argo CD recovery, right? So there are a few layers to that question. Yeah, if you had a backup or a DR site, your service would, I guess, still be up or working, or you'd have easy access. But with Argo CD, right, you still should be able to get back to the state, at least for that application or those things that were running or managed with Argo; you still would be able to get to the same state, because the state is in Git, right? Right. So is one better than the other? I mean... OK, a follow-up on VMware from Slack: how do I need to install that? Yes, this is with UPI, and you add the CSI driver. I'd like to have this out of the box with IPI. So this is definitely an answer. This is still the vSphere CSI question. Yeah. Andrew, if you can chime in on that, that'd be great. Yeah, you showed up to the wrong office hours session. This is the developer chat.
And so we're not prepared to answer questions about VMware installations on this segment. Unfortunately, my area typically comes in once I have a cluster running. So, says Andrew: you need to have a cloud-integrated deployment, so either UPI or IPI, then deploy the vSphere CNS CSI provider as a day-two operation. Thank you, Andrew, for chiming in there. And thanks to the audience for the questions. Keep them coming if you have more you'd like to ask. In regards to multi-tenancy, one thing we upgraded in the last year or so has been our support for Helm charts. It used to be just one Helm repo for the whole cluster, and now you can do multiple Helm repos and be kind of multi-tenant: have one group of developers using a particular batch of Helm solutions, and then a different group that's all sharing the same cluster. So you don't have situations where every developer is admin on their own cluster, and then you're not quite sure how permissions are going to work when you roll to production. I always like to be as close to production as I can in my development phase, so it's nice having everything locked down and multi-tenant right from the start. Yeah, about the GitOps approach, right, I was thinking before, in the previous show: isn't that similar to Gitflow? If you recall Gitflow, right, it's a methodology of developing software where you have a feature branch, and then you merge the branch, you do the hotfix, or you roll back, and you go to the master branch, or main branch if you want, to deploy to prod. So I was wondering if that also fits this new GitOps, because GitOps is for deploying apps but also keeping a manifest for the infrastructure. So can we kind of use Gitflow in GitOps? What do you think about it? And if you've ever used Gitflow, do you think that would be feasible here? I haven't used Gitflow. I'm not sure about that.
I mean, it's been a long time since I used Gitflow, but Gitflow, I believe, if I remember correctly, is the process through which things get committed, which would then have, you know, an Argo CD instance go reconcile all those changes against a cluster. I believe they can work in tandem. They're not exclusive, right? Yeah, it's kind of, they come together: one for the app, and then the manifests controlled by GitOps, or by Argo in this case. So the feature branch can be a branch where you create a feature namespace where you can try things and then roll back, roll forward. I don't know, it was kind of an idea I was having, whether those two worlds can talk to each other, from past experience with Gitflow for developing apps. Yeah, I'm really interested to see how that kind of Gitflow work, or whatever the result of it is... it's going to need to do, perhaps, phased rollouts. And I don't know whether it should talk to Helm charts and do a Helm chart rollout, or whether I should be using Knative services to do a rollout, or whether I should be using service mesh to shape traffic around and achieve a rollout, right? There are multiple different ways to approach the solution there. So it's hard to know what people are going to expect. Right, and the answer is always: use the method that works best for you, right? Like, within your org, right, you have to come to a consensus there around methodologies. And it could be that you decide to go with Gitflow and Tekton, or Pipelines, and off you go. And that's a viable solution. There's nothing wrong with that. I mean, yeah, like, what kind of granularity of levers and tweaks do you need? Do you need the ability to automatically scale this application based on requests that are coming in? It's going to be up to what level of tweaking you need for that application. So, exactly, it's hard to say; it depends on the application, right?
So Will Morrison points out: honestly, a Gitflow-driven deployment would be pretty resource-consuming, as you can have many branches living in parallel, depending on the size of your team or your project, of course. And when it comes to GitOps, we think about: here's the application, here's the infrastructure, they're in repositories of their own, and they get deployed together with an Argo kind of deal, and everything gets reconciled accordingly. So, I mean, that's a good point. If you do have a lot of Gitflow-driven type things, it could be a hard switch, right? There is that possibility, always. Yeah, good point, Will. Yeah, I think we need to consider this too. Coming back to the 4.7 experience, I think maybe Brian can say more about it. I think the user experience across serverless and eventing improved a lot. It's very easy to create a serverless app connecting to a Kafka stream and reacting to events. This is something we saw in previous versions, but now it's easier to set up all this topology, which is kind of complex, but from the OpenShift web console it looks very easy. And I know Brian tried out something around this too. Yeah, yeah. So, well, I guess to finish the first question to begin with: two of my favorite features, which I guess I didn't mention, would be the serverless updates that we have, building out the whole eventing stream. We have been adding this into the OpenShift developer portion of the web console incrementally, right? So OpenShift Serverless came out with the serving aspect: scaling up an application and handling, giving you an automatic route and things like that. That aspect went GA, generally available, in OpenShift before the eventing aspect of serverless, or Knative, came out. So OpenShift Serverless uses Knative. And the serving aspect came first because you need to run your application and be able to scale it. And we made it easy to build out a serving service in the OpenShift web console.
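Under the hood, what the console creates for a serving service is a Knative Service resource rather than a plain Deployment. A minimal, hypothetical sketch; the name and image are placeholders, and the `minScale` annotation key is the one documented for Knative of that era:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello                      # placeholder name
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/minScale: "0"   # scale to zero when idle
    spec:
      containers:
        - image: quay.io/example/hello:latest   # placeholder image
```

Knative then manages the Deployment, Route, and autoscaling for you, which is what the "it wakes up on a message, does some stuff, and goes back to sleep" behavior described below relies on.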
The only difference when you deploy an application is you click a little radio button that says, I want this deployed as serverless instead of as a traditional Kubernetes deployment. Very easy, right? Like, that's what I would expect; if I'm looking at the web console, it's very simple. Like, I don't even have to think about it. This is now a serverless service. That then allows me to use this service as something that auto-scales and whatnot, but it also allows me to hook up to events. So serverless eventing, which enables event-driven development, which I think is insanely interesting. And that's probably my favorite aspect of everything that comes with OpenShift, and in OpenShift 4.7 it was the way that we could build that eventing flow in the developer console. So we could basically go through, and instead of having to work with YAML or having to work with the kn CLI, which is good, but instead of having to work with the command line, I could use the OpenShift web console to build out an eventing flow that will connect up to Kafka using the Kafka connector that is now generally available and included in the latest OpenShift update and whatnot. So we will be able to connect to Kafka, and be able to hook up and see messages come to our application, and maybe wake that application up if we get a Kafka message, do some stuff, and then it'll go back to sleep and use no resources, right? So, like, eventing, event-driven development, enables that type of flow. And I think that is probably the most interesting aspect. The other thing that I think is also interesting, which I know we talked about earlier, well, we didn't talk about it, but it was in the stream earlier, was OpenShift Service Mesh 2.0; that came out around 4.7. There have been a couple of updates and stuff since then, but they changed some of the architecture of Istio and made it a little bit different, instead of going for microservices all split up.
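The flow Brian describes, a scale-to-zero service woken up by Kafka messages, boils down to roughly two resources underneath the console UI. A minimal sketch, assuming hypothetical names, image, broker address, and topic:

```yaml
# What the "serverless" radio button roughly produces: a Knative
# Service that scales to zero when idle (name and image hypothetical).
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: event-consumer
spec:
  template:
    spec:
      containers:
        - image: quay.io/example/event-consumer:latest  # hypothetical image
---
# Wiring a Kafka topic to that service: a KafkaSource delivers
# messages to the sink, waking the service up from zero on events.
apiVersion: sources.knative.dev/v1beta1
kind: KafkaSource
metadata:
  name: orders-source
spec:
  bootstrapServers:
    - my-cluster-kafka-bootstrap.kafka:9092   # hypothetical broker
  topics:
    - orders                                  # hypothetical topic
  sink:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-consumer
```

The eventing builder in the developer console assembles essentially this topology for you, which is why the visual flow is so much easier than hand-writing the YAML.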
It's all together in one monolith that's easier to manage for this particular set of services, right? So they went a little too far and they kind of backed it down. There are also some other things that are interesting. Kiali makes it easier to monitor things that are going on in the service mesh. And there are a lot of updates there that make it easier. Some of those things come through to the OpenShift web console, and you can see, hey, this namespace has service mesh enabled, or we can enable it more easily, or whatnot, through the web console, and that's included, and that's pretty awesome and interesting. So anyways, yeah, those two things, I guess, would be my main ones: event-driven development, it's a lot easier in OpenShift now, since instead of working with YAML I can work with the web console, and then, yeah, service mesh, sorry, the updates with service mesh. To supplement your favorite feature, Sebastian really likes being able to set traffic splitting rates for Knative revisions directly in the console. That gets me, and Amber and I joke around back and forth about being developer advocates and demos, but that makes for a really cool demo. I've actually presented the one that the DevNation folks use, where you've got the blue, green, yellow, different colored backgrounds, and then you just start flipping rules, and all of a sudden Firefox people are going to one place, or, like he says, having the revisions start to slowly creep in, and you see, like, you jam on refresh a bunch of times and you see green once in a while, and then you just kind of shift the ratios and all of a sudden that's coming through a lot more. It makes for a really cool, very visual demo, and that's just on top of the feature itself being useful. Actually, I have a technical question for that: what's doing the splitting there? Is Knative doing that itself, or is that actually being handled by Istio?
So initially it was handled by Istio, but now it is... it starts with a C sound but I can't think of what it is. Kourier? Kourier, that's what it's called, it's Kourier. Okay. So it still uses Envoy. Istio's using Envoy. It's just a more simplified control plane, and yeah, it's using the Kourier project for that. Nice. So we do have a question here. It involves one of my favorite features, Quick Starts. I think this is available in 4.8, but don't hold me to that. Can I make my own custom Quick Starts? Yes. So, Brian might be able to answer a little bit better; I don't know which of us has had the experience yet to play with them, but the general premise is, and I hate mentioning operators as much as we do, but at the end of the day, there's a feature in Kubernetes called custom resources, and if you haven't seen it, instead of just using things like pods and deployments and the built-in resources, you can define your own. For Quick Starts, there is a new resource type. I'm gonna assume it's called QuickStart or something very similar. You create a new one of those. So it's a YAML-based resource like any other, and that contains all of the information for the individual Quick Start. So you're able to just write that YAML file, get it deployed, and then your Quick Start appears in there, and that's actually just how they're written in the first place. There's no fancy thing that we're doing to inject the Quick Starts above and beyond just writing those and then packaging them in some way. The other thing is that we're working in particular with that team to enhance it going forward. So what you're seeing today is definitely not the end state. We're looking to add things like these pre and post steps. So as you come into your Quick Start, is it going to set an initial state on top of your cluster, or something like that?
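The revision traffic splitting Sebastian likes lives in the Knative Service itself; the console is editing the service's traffic block for you. A minimal sketch, assuming hypothetical service, revision names, and image:

```yaml
# Hedged sketch of a Knative Service splitting traffic between two
# revisions, the kind of split the console demo flips live.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: demo
spec:
  template:
    metadata:
      name: demo-v2                  # hypothetical new revision name
    spec:
      containers:
        - image: quay.io/example/demo:v2   # hypothetical image
  traffic:
    - revisionName: demo-v1          # the "green" background version
      percent: 80
    - revisionName: demo-v2          # the new version creeping in
      percent: 20
      tag: candidate                 # tag also gives it its own URL
```

Nudging the `percent` values back and forth is what produces the "refresh a bunch of times and green shows up more often" effect in the demo.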
Grouping mechanisms, where you will say these five belong to a particular path or something like that. Which is a very long answer to: can you write your own? Yeah. We actually expect companies and users are gonna say, hey, this is training material for our new employees, or this is the kind of thing you'd want to use to bootstrap a project, or something like that. Like, there are a number of reasons why companies would want to use them outside of what we would do from a teaching and advocacy type perspective. Ryan, sanity check me; call me out if I misspeak on any of that. No, that sounds totally correct. These Quick Starts will show up as a card or a tile in the OpenShift dashboard when you go to create new content. And if you were, as an organization, very opinionated on how people should do their rollouts, whether they should use Istio or not, you could potentially type all that information into a Quick Start and then tell people prescriptively, here's what you need to click on next. You could even highlight different UI components and say, head on over to the developer perspective, and there you'll find all of your developer folk; it kind of directs them down a guided path. So yeah, Quick Starts are a new feature, a great opportunity for helping encourage folks to take the correct next steps. Kubernetes can have a lot of terminology and a lot of things to learn, so helping people have kind of guided next steps is potentially a huge help for them. So the Quick Starts can only be... it's just steps of how to do something, and then the person has to go through and do them to follow the thing. Like, it won't do any automation or whatnot on the cluster. So if you want them to create a namespace or whatnot, they have to go physically create it, right? Yeah, but keep in mind, though, there are ways to reference, or at least they're coming; there's the intention of having it be able to flip around the UI for you.
So, click here to start the create-namespace process and it'll bring you into that UI. That's the intention; I'm not quite sure what is there today. And let me ask the follow-up question: could we hope that someday we see Quick Start features like Katacoda has? Funny you mention that, because our team is obviously very interested in that. And it's one of the reasons why I mentioned we've been so heavily involved with that team, because we've got all this experience, right, in Quick Starts. And we were very quick to be like, okay, these are the kinds of things we can do today. I don't think it necessarily needs to map one to one, where I think one of the most dangerous things you can say is, we've always done it this way. And so it's not necessarily saying, well, this existing Katacoda content needs to exist in Quick Starts, but us being able to come in and say, hey, these are the types of things we've done and these are the types of problems we've had to solve, is influencing that future direction. So I'm noncommittal; I'm not going to say we're going to see the exact same features or parity or anything like that, but the intention is to be able to do the same sorts of things we do with Quick Starts in terms of depth, in terms of just teachability. Like, that's what we do in this advocacy role. So we have our strong opinions on what we think we should be able to have a user self-drive, because the four of us plus the one team member who's not here don't scale beyond just us humans. So we're really invested in this idea of how we can make this self-service content as powerful and as flexible as possible. Right, and Ryan points out in chat, Quick Starts plus the Web Terminal Operator, you're getting almost to that point, right? Like, pretty quickly. Yeah. So there's a question here. Yeah, and you could have that installed... no, go ahead, Ryan. You could have that installed on any cluster, really.
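For the "can I write my own" question above: the console resource type is `ConsoleQuickStart` in the `console.openshift.io/v1` API. A minimal sketch, with all of the content (name, steps, wording) being hypothetical placeholder material:

```yaml
# Hypothetical custom Quick Start; once created, it shows up as a tile
# in the console's Quick Starts catalog like the built-in ones.
apiVersion: console.openshift.io/v1
kind: ConsoleQuickStart
metadata:
  name: onboard-new-devs            # hypothetical name
spec:
  displayName: Onboarding for new developers
  durationMinutes: 10
  description: Walk new team members through our standard project setup.
  introduction: This guide covers the first steps on our clusters.
  tasks:
    - title: Create your project
      description: |-
        From the Developer perspective, click **+Add**, then create a
        Project named after your team.
    - title: Deploy the sample app
      description: |-
        Use **+Add**, then **Import from Git**, and paste the sample
        repository URL your team lead gave you.
```

This matches the "training material for new employees" use case mentioned above: the steps are markdown, so an organization can be as prescriptive as it likes.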
That's going to be available as part of the getting started experience on CodeReady Containers. So if you get CodeReady Containers, or even Sandbox, you should be able to see some of those Quick Start tiles that Red Hat has pre-populated in there. But yeah, that's definitely something you can customize to give specific getting-started information to folks, to the users of your clusters, for sure. And there's a question here; I'm asking Christian in our Slack about SSO OAuth, or yeah, SSO/OAuth, I don't know why that's so hard for me to say, but it is. So I will get back to you on that. About the Web Terminal Operator: is there a per-developer file storage solution, like a user home directory, allowing each dev to have their own settings, command history, whatever? That's a good question. Ryan, you're muted. Man, those were some amazing things that happened there. One is, you looked so animated and confident starting into that answer. And then, when I pointed that out, your expletive as you realized it. And then the "one second, here I go again," which is phenomenal. I'm glad we have video instead of just a podcast, because that kind of visual thing makes this sort of... All right, yeah. So I pasted in a link to how you can customize this Web Terminal Operator. You can add a volume mount, and you can customize the default terminal image. Currently, you can only set one console image cluster-wide. In the future, it may be possible for you to set a different command line console image, the image that we use for that console environment; you might be able to swap that out, maybe per namespace. But currently, you define only one command line image for your whole cluster. But you can also add volume mounts, so folks can get a home directory where they can add their own binaries, potentially. So, I'm sorry, who asked the SSO/OAuth question?
The SSO/OAuth solution will be in a future version, looking at a late 4.8, or really a 4.9, timeframe, around August-ish, maybe. So again, that's a future date. It could slip, it could move forward, who knows? Yeah. Then someone was asking, is there a cookbook of some sort, some sort of Tekton / OpenShift Pipelines book going to be made? I feel like there are a lot of learning things for Tekton on O'Reilly, but nothing that's really a book. Yeah, I'm not sure. Maybe there are some others that are working on one. Yeah, Natali, you're working on a Pipelines book? Pipelines specifically? No, I'm working on the Modernizing Enterprise Java book, but it's not related to Pipelines. So for Pipelines, for Tekton specifically, I don't think there is anything in progress from our team, as far as I know. There is a work in progress for sure for developers on OpenShift, but for Pipelines, I'm not aware of any work from us. Yeah, there's definitely some... go ahead. Learn.OpenShift has some content there. I've updated the content to 4.6, OpenShift 4.6, so you should see some relatively recent Pipelines content. If you find books on Tekton from anyone else in the cloud native space, it ought to apply. You'll just have a nicer UI if you show up and visit the OpenShift console. Yeah, we could start with a Tekton CLI cheat sheet. That would be great. Isn't there one already? I don't know. Let's check opensource.com. I feel like that'd be the place, or maybe Enable Sysadmin. The cheat sheet should be on developers.redhat.com. Ah, got it. I don't know how to say that without making it a sales pitch. So the cheat sheet: Sammy's referring to developers.redhat.com, which has an entire section of cheat sheets. Scroll through right now; there's Kubernetes, I know I've written one on Buildah, there's one on Podman. I don't know if one already exists today, but if not, it's definitely something we can write. Those I actually really enjoy writing. It's kind of nice.
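For a sense of what such a Tekton CLI cheat sheet would cover, here are a few `tkn` commands of the sort it would collect (these need a cluster with OpenShift Pipelines installed, and the pipeline and run names are hypothetical):

```shell
# A few tkn commands that tend to end up on a cheat sheet:
tkn pipeline list                        # pipelines in the current namespace
tkn pipeline start build-and-deploy \
    --showlog                            # kick off a run and stream its logs
tkn pipelinerun list                     # recent runs and their status
tkn pipelinerun logs build-and-deploy-run-1 -f   # follow one run's logs
tkn task list                            # tasks available in the namespace
```

These are the CLI equivalents of what the pipelines UI in the OpenShift console shows, which is the "nicer UI" trade-off mentioned above.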
Yeah, that's kind of low lift. Straight to the point. Yeah, low effort, high impact kind of activity, for sure. I wouldn't call it low effort. I would say I don't have to get my fluffy writing voice on to get right to the point of it. That's what I mean. The cheat sheet is very nice. You don't have to worry about English. Let's see. The Tekton on OpenShift getting started link. Okay, thank you for dropping that in there, Natali. Okay, yeah, I think we've actually exhausted the questions. Yeah, and we're at time. It is the end of the KubeCon day, particularly for those of us who got up for the beginning of the KubeCon day. Yes. He says, with absolutely no bitterness about doing the cheat sheet. Not whatsoever, right? So, thanks, everybody, for joining us. I posted some links in the chat, because we're going to have a whole bunch more office hours this KubeCon week. Also, for people who participated in the session, you can claim your OpenShift 4 shirt, and I pasted that link in there. And thank you so much for joining us and answering everybody's OpenShift questions. Yeah, it's been really great. I'm glad to have this one in the can, ready to be refreshed and shared with everybody at any given moment. So yeah, this is already available on YouTube, if you're not aware of the OpenShift YouTube page. It's here on the top, from the archive shortcut. And you can literally learn everything we've done for the past year and one day of live streaming from that link right there. Wait, I know we're out of time, but a year and one day? It was yesterday, literally, the anniversary of... Yesterday was literally the first birthday of OpenShift TV, yeah. Five hundred and thirty hours of content in 247 days. One pandemic. Yep, just one. Just one. All right, folks. Thank you so much, audience. Thank you so much for attending. Thank you, everybody here. I greatly appreciate all of you.
And we are going to sign off for the day here on OpenShift TV and Josh is going to go take a nap or whatever Josh does. Take it easy out there, stay safe and we'll see you tomorrow. Thank you.