many, many clusters popping up in the customer's landscape. So they started to have tens or hundreds of clusters, as I mentioned. And one of the key elements of a containerized journey is the containers and, basically, the images from which you run those containers. As you probably know, a container is basically the instantiation of what we call a container image. And that image needs to be stored somewhere, which is the registry. As you start to have those tens or hundreds of clusters, you want something that is your centrally trusted registry for your content. And that's basically what Quay provides. It's an advanced registry with features like geo-replication and mirroring, security scanning, and image builds. The way you can envision it is that you can have a Quay instance that ships, or centrally manages, the images that are going to be used by your tens or hundreds of clusters. And because of those geo-replication capabilities, you can have mirrors that are deployed close to your, I would say, geos, and then you can optimize the traffic. So you can have central images that are mirrored to local Quay registries, of course in an automated way, and then you can optimize the traffic between your regional OpenShift clusters and that regional Quay registry that you have there, while still centralizing the management of your registries. And because the registry is such a critical part of your architecture, Quay has an HA architecture, so you can have disaster recovery scenarios or architectures. And that's also something that you probably want to envision when you have critical workloads running on your OpenShift clusters. Of course, Quay can work with OpenShift and any other, I would say, containerized solutions. It's a fully standardized registry that is compliant with the OCI format and such things. Yeah, we have a question. So thank you for the explanation. 
And my apologies for throwing that out there. I know that this is one of those things: sort of understanding some of these building blocks is what we're about today. Let's go ahead and dig into it. So where do you want to go from here? Yeah, so from here, now that we have a better understanding of the overall architecture of the platform, let's see how we can get access to OpenShift, how it gets installed, and how it gets administrated in terms of content. So that's going to be our first take on the platform. And then afterwards, we're going to switch hats and see, from a developer perspective, what services we provide and how they can be used. OK, so first thing, how do we get access? We now know what OpenShift is. We want to try it. We want to experiment with it. How do we do that? So if you go to try.openshift.com, you're going to end up on this page. And it shows you that you have at least three starting points where you can start digging into the platform. There's what we call the developer sandbox. It's basically a 30-day free trial that we host for you. You just have to go there, request access, and an OpenShift environment gets created for you. Of course, it's going to have some limitations in terms of resources. We're not going to provide you 100 nodes free of charge for 30 days. Well, you probably don't need that if what you're doing is looking for a developer sandbox. Exactly. So this is a full-fledged OpenShift with all the latest innovations that we provide. It's not restricted, I would say, in terms of content or experience. It's just limited in time. So it's 30 days to try and see if that's something you want to go further with. You can also start with managed services using your cloud accounts. 
So for instance, if you have access to AWS or Google, you can order OpenShift as a managed service using your credentials. Or if you want to do things, I would say, more traditionally, you can self-host OpenShift in your environment, whether on the cloud or in your data center. And that's going to be the third option that you have here. Right. So there are some important options. And I know the managed services option is becoming very, very popular at this point. Yeah, correct. So I've seen the numbers, and it's definitely something that is getting much more used by customers because, obviously, they want to concentrate on providing the end services and not spend time managing platforms and such things. Right. But of course, as always, we have those customers who maybe have unique security requirements, certain kinds of compliance requirements, or they just have that sort of build-it-yourself mentality sometimes, whatever the case may be. There is still that option to do it. But it sure does seem like managed services are becoming more and more popular for the exact reason you say: it accelerates the process and gets you where you need to be without taking on a lot of additional overhead, which many organizations might not be willing or able to take on. Yeah, exactly. And plus you get the benefit of having the engineering and the support, the SREs that come from Red Hat and the jointly managed teams. So that's also something very valuable for the customers, because you get hands-on expertise from day one. And you don't have to go out and recruit Kubernetes experts; that expertise is basically provided by the service itself. Right. So if I might, we did have some technical difficulties earlier where we were not necessarily streaming everywhere. We seem to have addressed some of those. And so maybe some folks are joining us. 
I just wanted to say very briefly today, we are rewinding, coming out of the deep dive, and really just talking about what OpenShift is and getting some background from Jafar on the fundamentals: what OpenShift is, what comprises it, and what it's about from both an administrator perspective and also from a developer perspective. So Jafar, back to you. I just wanted to put in a little bit of an explanation there for the folks who might have been joining us late. Yeah, sure. Thanks. So I'm going to go and hit that starter trial for the self-managed part just to give you an overview of how you can do things. And you see that there are a lot of supported architectures. You can deploy on bare metal, IBM Z, Power, OpenStack, RHV, vSphere, or just any agnostic infrastructure, which is what we refer to as UPI, user-provisioned infrastructure. And depending on the path that you choose, you're going to have guided instructions to do the installation. So let's see, for instance, this one. So you have the two paths, which are installer-provisioned infrastructure (IPI) and user-provisioned infrastructure (UPI). Basically, if we choose the case of vSphere with IPI, we are going to provision everything that you need to have a fully running OpenShift cluster with minimal involvement from the person doing the installation. If you choose the other path, you're going to have a little bit more work to do, but still, it's not going to be very complex. So you have an installer, which is basically a CLI that you can use. And the command that you need is basically `openshift-install create cluster`. And that's going to pop up some questions regarding the credentials that you need, the domain that you are going to install under, the cluster name, the type of platform that you want to install it on, et cetera. And in a matter of answering five to ten questions, you're going to have the process kicked off. And after 45 minutes, you have a running OpenShift cluster on your target architecture. 
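The interactive prompts from `openshift-install` end up in an install-config.yaml file, which you can also write by hand. A minimal sketch for an AWS IPI install might look like the following; the domain, cluster name, region, and replica counts are all placeholders, and the pull secret comes from your Red Hat account:

```yaml
# Sketch of a minimal install-config.yaml for an IPI install on AWS.
# All concrete values below are placeholders for illustration.
apiVersion: v1
baseDomain: example.com          # your DNS base domain
metadata:
  name: demo-cluster             # becomes part of the cluster domain
platform:
  aws:
    region: eu-west-1
controlPlane:
  name: master
  replicas: 3                    # three masters, as discussed later
compute:
- name: worker
  replicas: 2
pullSecret: '<paste your pull secret here>'
sshKey: '<paste your public SSH key here>'
```

With this file already in place, `openshift-install create cluster` skips the questions and kicks off the provisioning directly.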
Now, we understand that some people may want a more user-friendly experience in terms of how they do the installation. And that's why we provided what we call the Assisted Installer, which is a UI-driven way to install OpenShift. We did a previous show last year on how to install a single-node OpenShift cluster with this AI, being the Assisted Installer. And so here, I'm just going to go and hit Create Cluster. It's going to ask me for some parameters: where I want to deploy the platform, or the version. And afterwards, all I have to do is say I want to add hosts. The way it works is you are going to generate either a full ISO or a minimal ISO image that you load on your target environment. So this is mostly directed at bare-metal installations. It's going to generate an ISO that you push onto your bare-metal instances. And the node on which you run the ISO is going to report back to the UI and get configured as a master or a worker, for instance. So it's a very easy way to do the installation. And I really love the way we are making it even easier for our customers to deploy OpenShift on their target environments. So that's basically what I wanted to cover regarding installation. You have the CLI, which was the traditional way of doing the OpenShift installation, and now you also have that Assisted Installer. It's here in Tech Preview, but it's going to follow our traditional workflow: we have Dev Preview, Tech Preview, and then it goes into GA. OK, so that was it in terms of installation. So what do we have when we have a running OpenShift platform? Now I'm logged in to a cluster. Let's have a look at the version. So this is something that I have just provisioned, and it's a 4.8.25 cluster. And one of the very nice features that we added with OpenShift 4 is that you have the ability to automatically update the cluster. 
It's one of the core reasons why we decided to adopt what we call operators. And let me actually pause on that. Yes. Yeah, because it's the third time I have mentioned operators and we still maybe don't know what that is. So let's pause and see. Well, for those of us who are of a certain vintage, we hear operator and we think of a person. Exactly, yeah. And once you enter the OpenShift world, you've got to understand that you're going to hear the word operator over and over and over again. And we are not talking about an operator as a person. It is instead, Jafar? Yeah. So actually, in the early days of trying to enable people on what operators are in the Kubernetes landscape, I had a funny picture, actually a GIF, where there was a guy getting into a box. Because if you look at the traditional way of doing things, you have the operators, which are the knowledgeable folks who understand how to install and manage your Java servers, your databases, whatever specific proprietary solution you have. They understand how to configure it for HA. They understand how to manage upgrades. They understand how to recover from failures and such things. And you really need that knowledge in order to be able to provide 99.999 percent service availability, for instance, or something like that. So when you switch to the containerized space, many things become much easier to do. It's easier to package your content, your application. It's easier to run it. Kubernetes can do basic things like: OK, the application has been shut down, meaning the container has stopped; I can restart it and redeploy it somewhere else. But what if your application needs some very specific order in terms of how the components need to be started? Or if you want to do an upgrade and there are some things that need to happen in a very controlled way? 
So Kubernetes does not understand how to manage, say, for instance, a Couchbase database, which is one of the operator-based databases that we provide in the OpenShift catalog, thanks to the work that we have done with our partner. Kubernetes does not have that knowledge. It only knows that there's a pod. The pod can be shut down or started. And that's it. So basically, the operator, in terms of container architecture, is a side companion to your solution that really has that embedded knowledge, encoded in there by the people who best understand that solution. So let's take the Couchbase operator, for instance. Couchbase has implemented the way installation should be done, the way scaling should be done, the way data propagation should be done in terms of automatically distributing the data across the different nodes when you have, for example, a new node that joins, or something like that. They have implemented things like automatic recovery. Say, for instance, you have a database cluster comprised of three nodes, and one of the nodes has an issue. The operator has the built-in knowledge to say: OK, first I need to start a new instance, and then I need to do those specific actions, which are probably things a traditional human operator would have known, except that you needed the person to do it. Now the knowledge is embedded within a side container that we call the operator, which is shipped with the solution and does all of that magic in the background. So basically, it's going to monitor the solution, and it's going to adjust whatever needs to be adjusted to make sure it runs. And because we wanted to have such a high level of automation and resilience, or reliability, with the platform services that ship within OpenShift, we decided to fully embrace that operator model with OpenShift 4. 
So just for the background, the operator model was created by some people who joined us through the CoreOS acquisition, the company behind Tectonic, basically. And now, of course, it's become a much broader standard, and everybody in the Kubernetes landscape speaks about operators, but it was not something that existed before, I think, 2016 or something like that. And because we have those engineers and we had, early on, that understanding of why operators are so valuable for Kubernetes, we created operators for all the infrastructure components of the platform. So for instance, we said that we have a registry. The registry is managed by an operator. We have the etcd services; they have their own operator. We have something like the kube-apiserver operator, and something that manages the underlying infrastructure: the nodes, how they get configured, how they get updated, and such things. I think we have over 37 operators just for the platform content itself. And again, the reason we did that is that it provides you with so much built-in automation that you can delegate to the operator and then have everything happen magically when you hit that update button here. With the previous version, we didn't have that. It was a set of Ansible scripts that you had to run manually and cross your fingers and hope that everything went fine. Now, I'd say it's a smoother experience. You can choose between different versions. So something that we heard from our customers is that they wanted to be able to upgrade, maybe even try the leading-edge versions, or what we call release candidate versions, before we even released the GA for a specific version, because they want to accelerate their testing or their validation process. So basically they said: we don't want to wait for you to release a specific version before starting to do our sandbox tests and see if we can upgrade to that specific version. 
We want to have a little earlier access and be able to do that on the fly. So, first, we created those candidate channels for the updates, which basically provide you with, I think, weekly updates for OpenShift in terms of patches and such things. And we also provide nightly builds of the installers and the components, if customers want to go and download them and install them. So it's, I would say, a much more iterative process, and you have access to those features much more easily. Well, one of the things that's interesting, Jafar, is if you think about IT historically, there's generally the attitude of: if it works, don't mess with it; if it's not broke, don't fix it. And there's not always that eagerness to move on to the latest and greatest on the bleeding edge. And I think that's how things were for a long time. It seems to me that we really see a very different kind of behavior in the world of OpenShift, where there's that real eagerness for the next thing that's coming out and the new features and so on. It really is almost turning things on its head: instead of that reluctance to go with the latest and greatest, there's the sort of behavior that you're describing, where they're saying, no, we want to see it before it's even out so that we can plan for it, so that we can prepare for it, so that when it does come out, we're ready to leverage it and hit the ground running. Does that seem like a fair assessment? Yeah, so there are, I would say, two sides of that coin. If you look at the Kubernetes release frequency, it's a new release every three months. And nobody at the traditional enterprise IT level would even think of upgrading their core systems every three months. So that's a very challenging pace, because they have validation processes and such things. 
And only if they are 100% sure that the upgrade is not going to affect their running workloads will they even think about the possibility of doing it, and not necessarily do it. So that's basically what we wanted to enable: that shift in mentality that yes, we can do things in a much more agile way. We don't have to wait 18 months to upgrade to the latest version and wait for four versions of patches before we are confident in something that has been released. Of course, not everyone is going to be able to upgrade their platform every three or every six months. We still have a lot of customers, of course, that are regulated. They have, I would say, obligations that force them to have a very rigid validation process, and they cannot upgrade whenever they want. So of course there are customers who are still doing that once a year or something like that, but there are also customers who are much more into what you described, and they upgrade every six months, or even faster for some environments where they feel they can have that flexibility for their end users. So again, it really depends on the types of workloads and the types of regulations that govern your platform, but what we did is actually give them the ability to do it, which is a start, and depending on how flexible they are, they can benefit from that feature. I know that that's something I do: whenever there's an upgrade, I just go there and hit upgrade on my clusters. Usually it works fine. It takes 45 minutes to an hour to finish. When it doesn't work, I go back to engineering, and there have been some issues, because that's also... Surely not, oh, surely. ...why we do that, and basically that helps us make fixes before we go into the GA. So if we have customers who are experimenting with the candidate releases and they face issues and report them, that's a way for us to gain some time and be able to fix the issues before they land in the GA versions. Yep. 
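For reference, the channel a cluster pulls updates from lives in the ClusterVersion resource, so opting in to a candidate channel is, under the hood, a one-field change along these lines (channel names follow the minor version, so candidate-4.8 here is illustrative):

```yaml
# Sketch: pointing the cluster at the release-candidate update channel.
apiVersion: config.openshift.io/v1
kind: ClusterVersion
metadata:
  name: version            # the cluster-wide singleton resource
spec:
  channel: candidate-4.8   # stable-4.8 and fast-4.8 are the more conservative options
```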
Okay, so that was again the explanation of the core change that happened with the operators in the platform, and that's also what makes it easier for us to basically update the platform. Some of the, I would say, benefits of these changes is that we make it even easier to actually manage or administrate the platform through the UI. So here, as you can see, I'm logged in as an administrator. And if we look at some things: oh, we have the ability to add new identity providers from the UI. I can say, for instance, I want to configure GitHub and add that configuration, or just upload a new htpasswd file, and the operator responsible for this section here, which is basically the authentication operator in OpenShift, is going to see that there's a change request (I want to add a new identity provider), and it's going to automatically reconfigure that specific component and provide me with an option to authenticate with that identity provider. So if you think about the way things were done before: you'd have to go to a configuration file, you'd have to edit the configuration file manually, and then you'd have to restart the services on all of the OpenShift components to make sure that the change you submitted has taken effect. Well, that's the way we did things before. Now, with the operators, you just make the changes here. The operator does what we call reconciliation: it's monitoring the desired state versus the current configuration, and it's going to say, oh, okay, so you want me to add a new identity provider? Okay, let's do that. And it's going to reconfigure the authentication component automatically in the background. So that's basically how we changed the way we administrate the platform. It's really about delegating things as much as possible to the underlying operators that are responsible for the specific area they are managing. 
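As a sketch of what that UI action translates to, adding an htpasswd identity provider means the authentication operator sees a change to the cluster-wide OAuth resource along these lines; the provider name and the Secret name holding the htpasswd file are placeholders:

```yaml
# Sketch: the OAuth resource after adding an htpasswd identity provider.
# "htpass-secret" must be a Secret in openshift-config containing the file.
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: my-htpasswd        # placeholder name shown on the login page
    type: HTPasswd
    mappingMethod: claim
    htpasswd:
      fileData:
        name: htpass-secret
```

Applying this change is all it takes; the operator reconciles it and reconfigures the authentication components in the background.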
So let's continue with that operator story and administration, and let's have a look at what we have in this current cluster. I see that I have three masters; that's a requirement because we want to have three etcd instances, and that's a traditional architecture in Kubernetes. And we have two workers. This is currently deployed on AWS, and I have basically two nodes. As an administrator, I'm often responsible for managing the infrastructure. People ask for more capacity, and I need to provide that infrastructure. In a traditional way, that means I'm going to deploy new operating systems, I'm going to configure them, et cetera. And this is something that we also wanted to make much easier for our customers. So we have this notion of machine sets, which is basically a grouping of machines, or nodes. And we have three zones in here: eu-west-1a, eu-west-1b, and eu-west-1c. But we see that one of them has no machines yet because, if you remember, we had two workers, which are the two we can see here. Say now that, as an administrator, I want to provide more capacity for my users. I'm just going to go and hit the edit machine count, and what's going to happen is that the operator responsible for managing the infrastructure takes over. Because, if you remember from our opening architecture discussion, we said that with OpenShift 4, we are now managing the underlying OS as well. So what it's going to do here is kick off the provisioning of a new machine on AWS and configure it as a new node, meaning it's going to deploy it, generate the certificates, communicate with the kube-apiserver to make sure that it's accepted as a new node, et cetera. And once the provisioning is done, I'm going to see it appear as a new node in here. So something that used to be a long manual process for an administrator is now fully automated, and that's something we provide for all of the target platforms that support the IPI installation. 
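Editing the machine count in the console is, behind the scenes, just a change to the MachineSet's replica count. A hedged fragment (the name encodes the cluster and zone, so yours will differ):

```yaml
# Sketch: a MachineSet fragment; bumping spec.replicas is what the
# "edit machine count" action does. The name below is a placeholder.
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: demo-cluster-worker-eu-west-1c
  namespace: openshift-machine-api
spec:
  replicas: 1   # raised from 0; the machine API operator provisions the node
```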
So very easy, and that's basically what we want to provide for OpenShift administrators: make it as easy as possible to automate everything. Of course, things that can be done here through the UI are all YAML-driven. Anything that we do, the UI is just a front end for what happens when we change YAML files. And basically, you can take a GitOps approach if you want to configure your clusters and push changes. That's actually what we do when we are using ACM (Advanced Cluster Management), for instance. You can have a set of centrally defined standard configurations: this is the LDAP configuration that I want to push, these are the authorized registries, et cetera, everything you want to configure. And whenever you create a new cluster, you can have ACM push those configurations as YAML files to the target cluster and apply them. They're going to be automatically picked up by the operators, which are going to do what they have to do, and your cluster is going to end up in the target state that you asked for. So it's much easier to administrate than having to create scripts where you basically say, okay, you have to do this and that and this and that, et cetera. So yeah, that's what I would say was really groundbreaking with the OpenShift 4 architecture, and why it's so much easier to administrate than a traditional Kubernetes platform, first, and also the previous version, OpenShift 3. So as we're coming up here to the top of the hour, we have covered a lot of ground, and yet I know there's more ground that we had hoped to cover. Can you maybe share a little bit about the developer perspective before we wrap up, or do we need to cut that to another show? Yeah, sure. So I think we're going to have to maybe wait for an upcoming show for that part. And again, it's not a problem because, as we said, we want to kick off these new shows with new ways of doing things, coming from the ground up and going step by step. 
And so as you can see here, we have the worker which is starting to appear, which means that the automation has happened correctly. One of the things that I wanted to speak about is the OperatorHub. We spoke about the core operators that manage the platform itself, but as an administrator, you are also responsible for providing content to your end users. And as you can see here, we have over 500 items that appear in the catalog, and this comprises some solutions that are provided by Red Hat. So again, we have our solutions. For instance, ACM and ACS, which are the multi-cluster and security solutions, are both deployable as operators, meaning that you can install them very easily on the platform. But we also have some components that are part of OpenShift, like, I believe, logging. One of the decisions we made with OpenShift 4 is that we provide a basic installation with, I would say, the minimal footprint that you need. But if you want to configure additional solutions, things like logging because you don't have your own logging stack, or for whatever reason you want to install an additional component, it's going to be done through operators. And actually, let me do that, because I wanted to show you how to do it for a developer-oriented solution. So now, as an administrator, I want to provide OpenShift Pipelines as a service to my users. And OpenShift Pipelines is basically our downstream version of the Tekton project, Tekton being the Kubernetes-native CI/CD solution. Or we have GitOps here, which is our distribution of Argo CD. And just a quick reminder: if people are interested in the subject of Tekton, check one of our earlier episodes. I don't remember which one, but we actually had a dedicated episode talking about pipelines and Tekton. So check it out. We do dive deeper into that particular subject. Correct. And so, yeah, as an admin, we said the question here is: how do I configure the platform? 
How do I manage the content, and how do I add or remove services from the platform? If you look at the left pane here, we currently don't see a pipelines section. We don't have that capability enabled yet on the platform. And we are going to enable it by installing the operator, actually. So when you install an operator, you are faced with some options you can choose from. You can use the preview channel or the stable one. And in this case, it's going to be installed for all namespaces. Basically, just by enabling the operator, it's going to install all the dependencies, and in a few seconds, maybe minutes, I'm going to have the pipelines service running and available on the platform. So I wanted to kick off the installation, and now we can switch back to the developer perspective. Again, from the OpenShift UI, you have two perspectives. The first one was the admin one, and the second one is the developer one here. So what I'm going to do is create a new project. And actually, no, let's switch to this perspective. And what I can do from this perspective is deploy from different sources. I can choose to deploy from Git, which is what we're going to do. I can deploy from a Dockerfile, for instance, but I can even upload a JAR file directly into my UI, and that's a nice feature that we can show maybe in one of the upcoming shows. You basically just drag and drop your JAR file onto the OpenShift console, and it's going to create everything that you need. So let's go ahead and choose a Git repo. Or no, let's go back and choose the samples. And just for the record, you see now that we have the pipelines section that has been added, which means that just by enabling the operator on the platform as an administrator, I now, as a developer, see that the service is available to me. So what I'm going to do here is hit that basic Node.js option, and it's going to deploy a new application for me. 
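For the record, clicking install in OperatorHub creates a Subscription behind the scenes. A sketch for the Pipelines operator might look like this, though the exact channel and package names can vary between OpenShift versions:

```yaml
# Sketch: the Subscription that OperatorHub creates when installing the
# OpenShift Pipelines operator for all namespaces.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: openshift-pipelines-operator
  namespace: openshift-operators       # the "all namespaces" install mode
spec:
  channel: stable                      # or a preview channel
  name: openshift-pipelines-operator-rh
  source: redhat-operators
  sourceNamespace: openshift-marketplace
```

Applying a Subscription like this from the CLI, or pushing it through a GitOps repo, has the same effect as clicking through the UI.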
So what happens in the background is OpenShift is going to kick off the build process. It's going to fetch the source code from the repository, it's going to trigger the build for me, and as soon as my image is ready, I can have the application running in the developer perspective, which is this one here. So because we have added the pipelines feature here, I'm going to go ahead and right-click on my developer perspective, which is what you see here, and I'm going to say create from Git. So this is a sample repo that I have for a basic Node.js app. And one of the interesting things here is that I can say: please generate a basic pipeline for me to kick off my application. And I want to create an application group for it. So this is going to kick off the build for my application, but at the same time, I now have a pipeline that is being executed instead of just the simple build that we do with OpenShift. So the pipeline is now running. We have a few steps here: I'm going to fetch the repo, I'm going to build the image, and then I'm going to deploy it on the platform. So it's very easy to get started with the platform, and for the earlier application, we already have the Node.js starter application that is deployed. So yeah, very easy for me as a developer to get started with the platform. The nice thing is that I also have monitoring capabilities for my workloads. I can see metrics, I can see alerts, et cetera, for my applications. I can configure the UI if I want. So for instance, I want to add something called deployments, because I don't see it in the UI here. I'm just going to go and look for deployments here, I'm going to say add to navigation, and now I have customized my developer console to my liking, and I have the deployments accessible from this shortcut. So it's really easy to consume and deploy containerized applications on the platform. And basically, I have hundreds of services that I can use from the catalog that we provide. 
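The generated pipeline described above has roughly this shape in Tekton terms: fetch the repo, build the image, deploy it. The task, parameter, and workspace names below are illustrative, not necessarily what the console generates:

```yaml
# Sketch of a generated Tekton pipeline: clone, build, deploy.
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: nodejs-sample
spec:
  params:
  - name: GIT_REPO
    type: string
  workspaces:
  - name: shared-workspace       # carries sources between tasks
  tasks:
  - name: fetch-repository
    taskRef:
      name: git-clone            # assumed ClusterTask shipped with Pipelines
      kind: ClusterTask
    params:
    - name: url
      value: $(params.GIT_REPO)
    workspaces:
    - name: output
      workspace: shared-workspace
  - name: build
    runAfter: [fetch-repository]
    taskRef:
      name: buildah              # builds and pushes the container image
      kind: ClusterTask
    workspaces:
    - name: source
      workspace: shared-workspace
  - name: deploy
    runAfter: [build]
    taskRef:
      name: openshift-client     # runs oc to roll out the new image
      kind: ClusterTask
```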
As you saw, for instance, if I wanted to deploy... yeah, I need to enable them first as an admin to make sure that they can be deployed. But we mentioned things like databases, for instance. Databases, for instance. You see here that we have 60 items that can be deployed. Some of them are certified operators. Some of them are community operators. And oftentimes the two versions both exist: you can have the certified version from the third-party provider, or the community edition of it if you want to experiment with it before going into the buying process. Right, this is a commercial option. Yeah, exactly. So yeah, that's, I would say, a very convenient way of configuring the services from an admin standpoint. The goal here is basically to provide a cloud-like experience. When you go to a cloud, you just go to the catalog, you order your components, and they are there, they are accessible. Then as a developer, you just click to instantiate the service and can consume it in your environment. That's exactly- Well, it works for both administrators and developers in this sense, because on the one hand, it's not complete anarchy. The administrator has to say, well, okay, here are the components in the catalog that we will make available. Exactly. And they have the ability to do that. And it's, like you say, that sort of cloud-like experience of: okay, I have enabled that particular capability, and now all of the developers automatically have that capability, and they can choose among the capabilities that have been approved. So there is some degree of governance, but there's also the benefit of having this huge catalog that can be enabled very, very easily. Exactly, yeah. You saw how much time it took to enable the pipelines feature. I just hit install on the operator, and less than a minute afterwards, I had the service enabled on the platform. 
And I really think that that's what we provide in terms of user experience for both the administrator and the developer, which is- It certainly has the potential to really accelerate projects, because of exactly what you just showed: it doesn't become a huge operation to say, well, okay, I want to enable pipelines, for example. It becomes something where there's agreement that it's needed, the simple step is taken by the administration, and all of the developers immediately have access to that new capability. It really is very transformational, to use an overused term, but it is. Yeah, I fully agree. And that's the intent: basically to make that as efficient as possible for our customers so they can focus on their core businesses instead of focusing on building platforms, which is our job and our responsibility, I would say. Indeed. All right, well, we are a bit over time, but I think it was time well spent. So thank you, Jafar, for the walkthrough here on OpenShift. And I hope everybody found this as informative as I did. There's a lot here to uncover. Please do like, subscribe, and share. We're here regularly on Wednesdays. We are going to be changing our schedule a bit. We have previously been mostly weekly, occasionally missing weeks. We're now going to be going more officially to a bi-weekly schedule. And we hope that you'll remain interested, all of you regular watchers. Our next show will be on February 2nd. And we're looking forward to bringing a lot more information about OpenShift, about containers, about Kubernetes, about all of these things and the things that touch them this year in 2022. So please do like, subscribe, share, and look for our next episode. And again, thank you very much, Jafar, for a really great walkthrough on OpenShift. Thank you very much. Thanks, Randy. And we hope to see you soon. I haven't seen the chat, so I don't know if we have had questions. I think we covered... we had a few. 
We did cover them while you were doing your run-through. So I think we're good. Great. Well, thank you everybody. Thank you everyone. Have a great day wherever you are. Bye.