Let's get into the big topic of open source, something that we actually have in front of us. This is so awesome! We are in open culture that is actually happening. So, what is that process that a developer or let's say... How's the Kubernetes ecosystem really doing that? Greetings! Good morning, good afternoon, and good evening! And whatever other time I might have missed here, you are joining the Level Up Hour, where we talk about containers, Kubernetes, and OpenShift. Like, subscribe, share, let people know we're here. And we have a really interesting episode today. I am joined by my co-host, I cannot talk today, Jafar, Jafar Charevi. How are you doing, Jafar? I'm good, I'm good, thanks Renyi, how are you? Doing really well, doing really well. So, today we are going to join a number of concepts together and see how OpenShift actually enables you to do a number of these things that might otherwise be a little bit more intractable or difficult. And so, we're going to talk about GitOps, we're going to talk about CI/CD, and we're going to talk about some supporting technologies. So, Jafar, where would you like to start? Yeah, sure, so I think that most people know what CI/CD is. Companies have been doing CI/CD for decades now to build, test, and deploy their applications. But things have also evolved in that landscape with newer technologies, especially in the Kubernetes ecosystem. So, yeah, we can start by maybe highlighting those new things that happened, how they work, and what they are used for, with things like Tekton, and then afterwards, I'd like to speak about the GitOps approach: what the concept is, what it allows us to do, what value it brings, I would say, and then also explain how the two fit together. Because people might be thinking that GitOps can work alone, that doing GitOps does not necessarily involve doing CI/CD, and that you can just do GitOps to build and deploy your applications.
So, I wanted to highlight basically how the two fit together, because they are complementing each other, and also show what the combined value is. Of course, we're going to have demonstrations to showcase all of those things based on OpenShift, Tekton, and Argo CD. So, these are the main components that we're going to be speaking about. All right, well, yeah, just as you said, CI/CD is something that's been pretty commonplace in the industry for a while: the idea of automating that application flow from idea to production, making it as efficient as possible, and conditionally automated in the sense that there might be points where somebody steps in and makes a decision, or it might be that the decisions are all automated, right? So, should we start with Tekton and sort of the role that that upstream project plays in OpenShift? Because one of the things that we always need to reinforce is that OpenShift is a superset of Kubernetes. It's not simply Kubernetes with a badge stuck on it that says OpenShift. It actually is a whole collection of upstream projects, of which Tekton is one, right? Yeah, yeah, correct. So, yeah, absolutely. So OpenShift, in a sense, is a platform that uses Kubernetes at its core as the engine to drive the scalability and the orchestration of containers, et cetera. But it provides many more additional components, especially things for developers, things to do CI/CD, to do monitoring, and such services, or storage, et cetera. And so, OpenShift has been providing CI/CD features for a long time based on Jenkins. But if you've been following the OpenShift evolution, we decided, I think, two years ago or something like that, to include Tekton as well. So, Tekton is an upstream project whose goal was to make Kubernetes a CI/CD platform itself.
So, basically, instead of relying on very specific CI/CD tools that you would have to install, maintain, upgrade, et cetera, on your platforms or outside of your Kubernetes platforms, and use those tools to drive your CI/CD pipelines, the goal was to basically try to standardize around Kubernetes and extend Kubernetes with those CI/CD concepts, such as pipelines, tasks, et cetera. So, basically, turning Kubernetes and OpenShift into a CI/CD platform itself. And the reason behind that is that it provides a lot of capabilities that are very useful for any CI/CD engine or platform, things like scalability. So, we know that Kubernetes is highly scalable, but it's also very extensible, because as long as you can run your tasks in a container, you can encapsulate those tasks in a container and create a pipeline definition that will run on Kubernetes, and that allows you to run those pipelines as containers on OpenShift and Kubernetes. So, that was one of the goals: that notion of scalability, extensibility, and, of course, not requiring additional CI/CD tools. So, Tekton is the upstream project driving that. Red Hat is a big contributor to the project upstream. But we also have the downstream version of it, which is the product we ship within OpenShift, and it's called OpenShift Pipelines. So, OpenShift Pipelines is installed on the platform itself through an operator, and I can show you that a bit later on. If you're not familiar with the concept of operators, please raise your hand in the questions section and I can develop a bit more on that concept. Oh, Jafar, I'm not even going to wait for a hand to go up. Let's just explain that very briefly. An operator is basically a mechanism for providing additional functionality into OpenShift. Is that a simple answer? Yes, that's a very simple and nice way of phrasing that. The thing that is important is that the operator integrates some knowledge that is specific to the solution that it comes with.
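To make that concrete: a pipeline in Tekton is just another Kubernetes resource, defined by a CRD. A minimal sketch follows; the names, parameters, and referenced tasks here are illustrative, not taken from the demo.

```yaml
# Illustrative only: a minimal Tekton Pipeline expressed as a
# Kubernetes resource. Assumes catalog tasks such as git-clone
# and a Maven task are installed on the cluster.
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: build-and-test          # example name
spec:
  params:
    - name: git-url
      type: string
  tasks:
    - name: clone
      taskRef:
        name: git-clone         # clones the source repo
      params:
        - name: url
          value: $(params.git-url)
    - name: unit-tests
      runAfter:
        - clone
      taskRef:
        name: maven             # runs the build and tests
```

Because this is an ordinary Kubernetes object, you create it with `oc apply` and each task executes as a pod scheduled by the cluster itself, which is exactly the "Kubernetes as the CI/CD engine" idea being described.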
So, for instance, the Tekton operator would allow us to install, upgrade, and configure the Tekton components on OpenShift in a very simple way. And if we are using a database operator, the operator will allow us to deploy and scale the database instances, reconcile in case of a failure, et cetera. So, the operator is basically a mechanism that allows us to easily deploy, manage, upgrade, and troubleshoot components that run as containers on OpenShift. Yeah, it just really does so much for the extensibility of the platform, because literally there's, I don't know how many operators at this point out there, but it offers a way for these capabilities to be built into OpenShift in a fairly straightforward way. Yeah, sure, and so we do have hundreds of operators that ship with OpenShift. Some of them are provided by Red Hat, some of them are provided by third-party software vendors who certify their solutions for OpenShift. And the great thing is that to install those solutions on OpenShift, you don't have to be an expert in how the tool works, like the intricacies of how it should be installed, et cetera. You just have a descriptive YAML file that you need to use, and the operator has all the knowledge to basically do the install, do the upgrade, et cetera. So, it makes for... Well, now we have a Tekton operator, which is the tool du jour, so let's go ahead and talk about that. Yeah, and so the Tekton operator basically adds several components to OpenShift. Yeah, and you know what, just let me share my screen to basically highlight those things while we're speaking, and I can show that in the tool as well. So, here we are on the OpenShift platform, and as you can see here, we have what we call the OperatorHub, which is a sort of internal marketplace where you can search for operators, and you can see that we have coverage for a lot of categories of solutions.
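That descriptive YAML file is typically an OLM Subscription. As a sketch, it might look like this; channel and package names vary across OpenShift releases, so treat the values as examples rather than copy-paste instructions.

```yaml
# Sketch: installing the OpenShift Pipelines operator declaratively.
# The channel and package name here are examples; check the catalog
# on your cluster for the exact values.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: openshift-pipelines-operator
  namespace: openshift-operators
spec:
  channel: latest
  name: openshift-pipelines-operator-rh   # package name in the catalog
  source: redhat-operators
  sourceNamespace: openshift-marketplace
```

Applying this one resource is what stands in for the whole manual install: OLM resolves the package, and the operator it installs carries the knowledge to deploy and upgrade the components.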
We have databases, we have AI and machine learning, big data, monitoring, et cetera, and so if we search for pipelines, for instance, we're gonna see things like Red Hat OpenShift Pipelines, the operator that I have already installed, but there are also other solutions like CloudBees CI that you can deploy on the platform through a certified operator. And so, the goal of using the operator, as I mentioned, is to make it easier to encapsulate a lot of functionality and deploy it easily without having to go through the manual installation that you would have to do with upstream projects or such things. And so, what it does, for instance, is when we deploy the Red Hat OpenShift Pipelines operator, it's going to extend the UI with new components. It's going to add this section here, for example, with pipelines. It's going to allow us to visualize pipelines within OpenShift, and it's even going to allow us to edit those pipelines graphically if we wanted to make changes to them. Cool. Yeah, and just the thing, again, to sort of illustrate with those operators: we have a Pipelines operator, we have a serverless operator, and these are all these kinds of capabilities that are out there. They're sort of a principle, a concept, but the operator, in a sense, is something that actually makes that concept real, and we've already done a lot of the work for you in advance. And so, that's what we're getting with the Pipelines operator: you're saving yourself a whole lot of time and effort by simply using the operator rather than building your own pipeline tooling. Yes, correct. And so, if we speak about the latest version, which is 1.7.x for the OpenShift Pipelines operator, one of the cool features that we added is the ability to use Tekton Chains, which is an upstream project that aims at providing signing capabilities and attestation and provenance for pipelines that run using Tekton.
So, basically, it's a component to be able to provide supply chain security, allowing you to make sure that whatever you have run gets stored somewhere. So, you have the history of your runs, they are signed with a specific key, so you know that they haven't been tampered with. And basically, it's going to allow you to audit your pipelines and make sure that whatever you have, or you think you have, run is what has actually been run. Yeah, well, I think we've had an episode or two where we've talked about things like attestation and security and so on. And these can be very important things for a lot of organizations. Often there's regulatory compliance that demands it. And even if you don't have regulatory compliance that demands this ability, a lot of organizations have realized you don't want to be the last one to know that something in your supply chain is not what it's supposed to be and somebody else put it there, right? Yeah, correct. And so if we look at this pipeline that we're going to be using as an example, there are several steps involved in here. So, for instance, we're going to clone the source code from the Git repo, run some tests, do some code analysis, et cetera. But as you said, one thing that we want to make sure we have is this history of the task runs, to make sure that we can audit those afterwards. And the way Tekton Chains provides that, well, there are several ways it can provide that, but one of those is basically through the task run. Or yeah, let's look at task runs, for instance, here. The task run will have some specific annotations that are added to it when it's run, and this is done by the Tekton Chains controller. And that's the beauty of using the operator: it's very simple to configure. I can send a link if you guys want to look at how this is configured.
And basically the operator allows you to very easily add the Tekton Chains capability to your existing OpenShift installation, and all you have to do is basically create a specific custom resource, or CR, called TektonChain, and the OpenShift Pipelines operator will reconfigure itself and make that capability available. So once you have that running in your cluster, whenever you run a pipeline, all the task runs are going to be signed, and the payload, the execution payload, will be stored. So in this example, it's stored as an annotation on the task run itself, and basically if you decode the payload, you're going to be able to see the execution results and verify that this is what you were supposed to have run, using a specific key. So basically when you deploy the operator, the Chains component, you're going to use a key pair to sign your content using Cosign, and if you want to verify, then you can use that same key to verify those runs. So as you can see here, it says chains.tekton.dev/signed: true, which means that this task run has been signed using Tekton Chains, and the payload, if you wanted to verify it, would be this one, and then you can decode it and make sure that we have the correct content in there. So that's one of the nice features that we've added with the OpenShift Pipelines operator. Now if we go back to Tekton, one of the benefits is that everything we see here is run inside a container. Basically those pipelines are broken down into tasks, and each task runs in a specific pod, and a specific task can be composed of several steps, and in that case, every step will run in a specific container within the same pod. So basically what it allows you to do is say: this step is going to be run using this specific container image, because you need to build your application and you need, for example, since this is a Java app, Maven and several other tools.
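As a rough sketch of what's on screen: a TaskRun that Chains has processed carries annotations like the following. The name here is made up, the values are abbreviated stand-ins, and the real signature and payload annotation keys are suffixed with the TaskRun's UID.

```yaml
# Sketch of a TaskRun after Tekton Chains has signed it. Example name,
# abbreviated values; <uid> stands in for the TaskRun's actual UID.
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  name: build-app-run-x7k2q                                  # example name
  annotations:
    chains.tekton.dev/signed: "true"
    chains.tekton.dev/signature-taskrun-<uid>: "MEUCIQDx..."  # signature
    chains.tekton.dev/payload-taskrun-<uid>: "eyJjb25k..."    # base64 payload
```

Decoding the payload annotation (it is base64-encoded) gives you the recorded execution results, which you can then verify with the key pair that Chains used for signing.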
So in the build step, we're going to be referencing that container image, and maybe in a later step, we want to push the container image to a registry, so we're going to be using a different image that has things like skopeo, et cetera. So basically that's, I would say, the great thing with Tekton. And if we wanted, we could definitely have a task that basically deploys your Kubernetes resources to a specific cluster and runs, for example, kubectl apply or oc apply, et cetera, and then you can just create those assets on your target clusters. But the inconvenience with that is, once your pipeline has finished and your application has been deployed, you have no way, no loopback, to understand what's happening on the target cluster, or the state of your clusters, because you might be targeting multiple clusters, for example. So yes, you can deploy your artifacts, but what if somebody goes to the cluster and shuts down the application, or makes changes to the application definition by changing the replicas or changing the image that is used to deploy that application, or things like that? So the application can be tampered with in the target clusters, and you wouldn't know that using that type of CI/CD approach, because once the pipeline is finished, it's finished. It's done. It's done. And there isn't a trace of the path that it took, right? Exactly. You might have an idea of the YAML files that you deployed at some specific point in time, but there is no, I would say, correlation with what's running on the clusters themselves. So that's where, I would say, the GitOps approach comes into play. And those are some of the main benefits of using GitOps. And so before speaking about the benefits, let's have a quick recap on what GitOps is. And then I'll speak about Argo CD, which is the tool that we use within OpenShift to provide the GitOps capability.
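The steps-in-one-pod idea described above might be sketched like this; the task name, image tags, and registry path are illustrative assumptions, not the demo's actual definitions.

```yaml
# Sketch of a Task with two steps in one pod, each step in its own
# container image. Tags and the registry path are illustrative.
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: build-and-push            # example name
spec:
  steps:
    - name: build
      image: maven:3.8-openjdk-11           # Maven tooling for the Java build
      script: mvn -B package
    - name: push
      image: quay.io/skopeo/stable:latest   # skopeo to push the built image
      script: |
        skopeo copy oci:build-output \
          docker://image-registry.openshift-image-registry.svc:5000/demo/app:latest
```

Each step gets exactly the toolchain it needs, which is why the build step and the push step can reference entirely different images while still sharing the same pod and workspace.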
So as we said, the first step of the pipeline here was to clone the application code from a repository. So everybody's been doing that for years, and nobody can imagine having automated workflows or deploying applications without having the application code stored somewhere in a repository, a Git repository. But one of the concepts that was coined, once everybody started talking about infrastructure as code and such things, was: before doing infrastructure as code, let's make sure that we also have all of the assets that we need in a specific repository, so that from there we can trigger the infrastructure-as-code automation. So we have traceability of what assets have been used to create the components, to create infrastructure, et cetera. And as Kubernetes has been evolving, the GitOps approach applied to Kubernetes was coined in this way: what if we could have all the Kubernetes resources that we want for our application stored in a Git repository, and then have a GitOps tool constantly check the Git content and make sure that this content is applied to whatever target clusters we want? And so one of the goals was this capability. The other one was to be able to target different clusters, so providing multi-cluster deployment capabilities. And that's exactly what GitOps is about. So it's about first having your assets stored in a Git repo, then having whatever tool you choose make sure that those desired states, we call that a desired state, are applied to your clusters. And so for our adoption of GitOps, we've chosen to embrace Argo CD as an upstream project. So Red Hat is a big contributor to Argo CD as well. And we have our downstream version of it, which is called OpenShift GitOps, which is based on Argo CD and which is driven as well through an operator. So the operator here... As is everything. Yeah, exactly.
If we look for the operator, we have Argo CD, which is the community version, but we also have OpenShift GitOps, which is installed here, and which installs and configures Argo CD with everything you need to have role-based access and to have your OpenShift cluster registered as a target for the Argo CD instance, et cetera. So again, it does a lot of automation for you and makes it easier for you to... It's a head start. Yes. That's what you get so often with all of these operators: you get a head start on things that you could assemble yourself, but we kind of know what you're going to want to assemble, so we go ahead and build that in as well. Sure. And so once you have those components installed, it will automatically add stuff to OpenShift so you can have direct links to the Argo CD instance that is running on the platform, but it will also provide things like authentication with the OpenShift authentication mechanism, so you can basically use your users and automatically sign in to Argo CD without having to manage users on your own, et cetera. So it's basically using the OpenShift auth as a backend for authenticating the Argo CD users. Okay, so now let's have a look at how this would work in our pipeline. So again, as we said, we have several steps that happen here. We do some analysis, security scans, et cetera, and at some point we're going to build the application artifact, we're going to store it in a repo, and then out of that, we're going to build an image that takes the jar file and puts it in a container image that we're going to push to the internal OpenShift registry. But how does the deployment actually occur on those target namespaces? So I have two namespaces for different environments. I have a dev environment here, which has a specific workload, and I have a stage environment which has another workload running there as well.
And we can see that the deployment time is not the same, and the reason for that is because I have already triggered the pipeline and it has deployed my application to the dev environment. But now what I want to make sure of is that the Argo CD instance does not sync the application on the staging environment unless a specific step occurs, which is here the creation of a pull request, allowing me to say: okay, I have reviewed the application in the dev environment, I can now accept the pull request and merge whatever changes I want so that the application can be deployed. So let's break down these components and see how things happen. So this specific step here will basically update the Git repository. So let's rewind very quickly. Let's go back to the Git repo. So basically we have this specific repository which has the application code. It's the Java application that we deployed. But for the GitOps approach, we have a different repository, and that's, I would say, a standard way of doing that: you always separate the application code from the GitOps assets code. And this is where we're going to have our content that we want to deploy on OpenShift. So we can see here that we have the application, we have some components. We have, for example, here a deployment file. We have a route and a service. And these are the things that are going to be pushed to the OpenShift target clusters. So the way this is going to work is we're going to have different environments. We have a dev environment and a staging environment. And if we look at this one here, we can see that we had a change that occurred the last time I ran the pipeline, which was an hour ago. Or it could have been somebody else, right? Yeah, I mean, I did it because I wanted to prepare for the demo here. But I'm saying in principle. Oh, yeah, in principle, anybody could do that. And one of the nice things here is that I have a record of what things have changed.
So I know now that I have updated my digest to point to the new image that has been built by the pipeline and pushed to the registry. So this basically says: okay, you had that digest for your image that was running on the dev environment. Now update it. And Argo CD is going to say: oh, there's a change in the files for the dev environment. Let me get those changes and apply those to the Kubernetes cluster. And this is basically what Argo CD does. So in Argo CD, if we go back to the applications, we see that we have two applications, one for the dev environment, one for the staging environment. What the application says is that we have a Git repository that contains the link to the assets. So as we can see here, it says the repository is this one. It's targeting the environments/dev path, to say these are the assets that are going to be deployed for that environment. Whereas this one is targeting a different path, which is the staging folder. So what happened when this step was run is that we updated the digest in the dev folder, as we saw, which triggered the deployment of that application in the dev namespace. Now if we go back to the staging environment, this hasn't happened yet because we didn't merge the content yet. So let's go back to our Git repo and see if we have any open pull requests. So we can see, in fact, that we have this one. And we can see the changes that are going to be merged. We're going to do exactly the same thing that we did for the dev environment, except that now we're going to push the changes to the staging environment. So I'm going to merge the pull request here, and I'm going to have, as a comment, a link to the PR. And what this will allow me to have is basically a traceability link that I can check from my Argo CD environment to be able to see the changes that have been pushed. So now let's go back; we have merged the changes. Let's check those.
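An Argo CD Application along the lines of the dev one being described could look roughly like this; the repo URL, names, and namespaces are placeholders mirroring the layout discussed.

```yaml
# Rough sketch of the dev Application. The repo URL, names, and
# namespaces are placeholders, not the demo's actual values.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: app-dev
  namespace: openshift-gitops
spec:
  project: default
  source:
    repoURL: https://github.com/example/app-gitops   # GitOps repo, not the app code repo
    targetRevision: main
    path: environments/dev      # the staging app would point at environments/staging
  destination:
    server: https://kubernetes.default.svc           # the local cluster
    namespace: app-dev
```

The only meaningful difference between the dev and staging Applications in this setup is the `path`, which is why a commit that only touches the staging folder leaves the dev environment untouched.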
Yeah, okay, so the changes have been made. Now if I check the application here, I see that it's still pointing to the last release, but now I'm going to ask it to refresh. And it says that it's out of sync, meaning that it noticed that there's a change in the application repo. And so basically what it did is it triggered the deployment of the new application, which we can see here. And maybe if we switch to the developer perspective, we can see the rolling update there. And basically it's going to trigger the deployment of the new application and decommission the old version of the application. So yeah, we can see that we have two pods in here. One is shutting down. The other one is starting up. And once it's finished, we're going to have only the latest version of the application running in there. So let's give it... yeah. So now we have the application up and running. If we look at the logs, we're going to see that the server is up. And now if I go back to the topology view, I can access the application in the staging environment as well. So what happened is that Argo CD refreshed, checked the Git repository, saw that there was a change, and applied the change automatically to the environment. So this is one of the benefits of using Argo CD. And here I can see, basically, traces back to the pull request that was used to trigger the change, and I can see all of the changes that have happened. So that's what I mean when I say traceability. It allows you to say: okay, we have deployed new items on the different clusters, but what exactly did we push there? What changes have we made? Did we change the routes? Did we change the deployment? Did we change images, et cetera? And the fact that you are using Argo CD and keeping all of your YAML resources in the Git repository allows you to have that auditability and be able to say: okay, these are the things that we deployed at this specific point in time, et cetera.
So that's one of the benefits of doing those things. Now, if I go to the... so yeah, I have a deployment here, and say, for instance, I'm gonna scale down the application to zero. Okay, so now I don't have an instance running anymore. In the Argo CD definition, I should have at least one instance of it. And so now if I try to refresh or sync, it should see that the replica count is not compliant with the number of instances that I want, and it's going to reconcile it on the cluster. And that's what we mean by reconciliation when we are using a GitOps approach. So with Argo CD, you have different options. You have the ability to automatically sync the components, meaning that Argo CD is going to continuously check the status, and whenever it notices that there's a change, it's going to automatically reconcile it with the state that we have specified in our repository. So that's one of the benefits of using Argo CD as well. So yeah, just looking at the time. Yeah, we still have some time here. So did you have any questions, or do we have any questions in the chat regarding those concepts that we have spoken about? That's a good question. Let's see if we have. I don't think we have any questions in the chat, and I think you laid that out well enough for even me to see what was going on there. So just to kind of recap, though, the beauty of this is the complete insight and traceability of all the actions that have been taken and what those actions involved, and that there's nothing that you can't, in a sense, reconstruct going backwards when something has been pushed. And I think that's just an amazing capability, and it's one, like I said at the outset, where there are some industries with even a regulatory requirement that if something goes wrong in your systems, you'd better know exactly why, right?
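That automatic-sync behavior is configured per Application through its sync policy; a sketch of the relevant fragment:

```yaml
# Fragment of an Argo CD Application spec. With selfHeal enabled,
# a manual scale-to-zero like the one demoed gets reverted to the
# state stored in Git without a manual sync.
spec:
  syncPolicy:
    automated:
      prune: true      # remove resources that were deleted from Git
      selfHeal: true   # undo out-of-band changes made on the cluster
```

Without `automated`, Argo CD only reports the drift as "out of sync" and waits for a manual sync, which is the mode being used in the staging part of this demo so that the pull request gate stays meaningful.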
Certainly in the financial industry there's a lot of expectation that you can't just say: well, yeah, our customers' money showed up somewhere that it wasn't supposed to, and we can't say why. You don't get to do that. You have to be able to reconstruct it. But I think even more broadly, beyond the regulatory environment, just in general with contemporary application development, you have to be able to do this because there are too many changes going forward. There are changes that are independent of other changes. With the whole way that applications are approached right now, if you don't have a mechanism like this, how do you even maintain any sort of control at all, right? Yeah, sure. And so that's exactly why those new initiatives, I would say, are completing the value proposition of things like Tekton or Argo CD, et cetera. So if we speak about supply chain security and the Tekton Chains component that we have spoken about, there is a broader initiative, which is Sigstore, basically, to which Red Hat is a contributor as well. And Sigstore is, I would say, a consortium of many IT vendors and actors to define standards for signing, verifying, and protecting software using several components. So Cosign is one of them. Chains is one of them. Rekor is one of them, et cetera. So there are several components, and Red Hat is integrating those components into OpenShift as they mature and as we see fit. And so, for example, we saw that we have the integration of Tekton Chains by signing the task runs or signing the container images, but we also have an integration with Rekor, where we can basically send those results to a public, I would say, instance of Rekor, to basically provide attestation for whatever you have been building and running in your pipelines. So is that so that a third party can then have their own ability to get the attestation and know that, oh, we now have complete insight into what's happened and how this is working and what's changed, right?
Yes, exactly. So that's the end goal of it. And we do provide an integration, so there are just a couple of things that you would have to configure. Let me see if there's a mention of Rekor here, for example. Yeah, so we could basically configure Rekor to push those entries to the public instance. So all of those things are configured through the operator, and I don't know if you guys have shared the link to that new feature in the OpenShift documentation, which takes you through all the steps that you need to run to have Tekton Chains running, and also has an example if you want to play with that on OpenShift. Allow me to do that right now, sir. Share. And so just tell us a little bit about this doc that I just shared. Yeah, so this is the documentation for OpenShift Pipelines, specifically Tekton Chains for supply chain security, and here it tells you, for instance, what type of format you want to use for the task runs. The one we saw is the Tekton one. There's another one, which is in-toto, which is also a standard for providing those attestations. There's a transparency setting, which I believe is going to enable the Rekor integration on the cluster. So basically, you see, it's just a matter of switching a flag in there, and it's going to configure a whole set of components that you would otherwise have to configure on your own if you were not using the operator. Now, one quick thing to mention there, as noted at the top of the docs, is that Tekton Chains is actually a Technology Preview. Yeah. And so do we have a sense of when that might move from preview to fully supported in the product? So I believe Chains should be supported in the next release. I'll have to check again on the roadmap, but yeah, that's what I had in mind, which is like the next release of OpenShift Pipelines.
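The flags being described are set on the operator's configuration resource. As a hedged sketch, the key names below follow the Chains configuration documented for OpenShift Pipelines, but check the docs for your version before relying on them.

```yaml
# Sketch of the Chains settings on the operator's config resource.
# Key names follow the documented Chains configuration; verify them
# against the docs for your OpenShift Pipelines release.
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  chain:
    artifacts.taskrun.format: in-toto   # or "tekton", the format we saw
    transparency.enabled: "true"        # record entries in a Rekor transparency log
```

Flipping `transparency.enabled` is the "switching a flag" mentioned above: the operator then wires up the Rekor integration that you would otherwise assemble by hand.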
So I would say, viewers, if you are interested in Tekton Chains, the best way to ensure that it's fully there in the next release, as opposed to a Technology Preview, is to preview the technology, because that helps us ensure that it's actually ready for going beyond that Technology Preview stage. Yeah. So maybe just as a quick recap: what we saw here is what Tekton is and how Tekton is evolving to provide these things. For example, if we speak about the Log4Shell vulnerability that caused a lot of harm and had a huge impact, because it's a library that has been used very broadly, people started thinking again about supply chain security: okay, we really need to have ways to troubleshoot, to understand what we have built, what components we have used, what has been run in our pipelines, to provide, you know, attestations, to provide transparency, to be able to publicly verify those things. And that's why things like Sigstore have been put together. And as things emerge in the different upstream projects of the Kubernetes ecosystem, Red Hat looks into those projects, and when it's something that makes sense, that is meaningful, that is going to be sustainable, we make contributions to those projects, and then we integrate those into OpenShift as well, as you can see with things like Tekton Chains, Cosign, and Rekor, et cetera. Yeah. Well, again, the operator model is to do a lot of that sort of thing and to explore some different components and things that might actually be useful to some or many or all OpenShift users out there, and actually sometimes even to try some competing ideas. Very often in our world there is more than one way to do things, and more than one perspective and philosophy, and the beauty of operators is that there is actually a mechanism for trying some different ways of doing some of the kinds of things that we're talking about, and you put them to the test, and test them in the marketplace of ideas in the field, right?
Correct. Yeah, and so, just to continue with the recap: what we saw here is that Tekton provides the ability to run those tasks as containers on OpenShift itself, and provides that auditability using Chains, etc. But what we see here, and this is how we transition from Tekton to Argo CD, is that we have Git repositories: one repository holding the application code and another one holding the application's Kubernetes assets, like deployments, etc. The pipeline makes updates to those resources to say, here's the new image that you need to deploy, or here's a new component that we wanted to add to this release of the application, by adding a couple of deployments or whatever. And Argo CD is going to watch that repository and make sure that it synchronizes the content of the Git repository with what's actually running in the cluster. Something that we didn't mention, but that is fairly important, is the notion of pushing your application to different environments. Oftentimes you want specific configurations for each environment, using things like environment variables where you say, these are the users or the ports I'm going to use in the dev environment, and these are the ones I'm going to use in the staging or production environments, these are the secrets, etc. The way you do those things with Argo CD is by breaking down your application into a specific layout, where you say, I have my main structure of the items that I want to push, for example an app folder holding the deployments, the routes, the services, and then, using Kustomize, which is yet another upstream project that is heavily used, we say these are the resources that we want to override, basically replace, specifically for each environment. So you have this folder saying, these are the files, these are the main placeholders, but we
want to, I would say, replace the content of those files with the content that we have in those specific folders. So in dev we update that content specifically by changing the digest in the specific file, in the staging environment we do the same by changing the content to a new value, and Argo CD picks up those values and applies them to the different environments. That's a very important topic when you are doing GitOps, because you want to make sure that you can have different environments with different values for each, and this is, I would say, the easiest way to do that: do not create branches for each and every environment; use one branch, use Kustomize, and keep those components in there. Yeah. Well, so we've had a very quiet audience today on the subject of Tekton Chains, Argo CD, and how that relates to GitOps, but we actually have a question from Kieran which I think rewinds us quite a bit, back to a very fundamental question that I think we can close on, and I'll let you take this one: what's the difference between Docker and OpenShift, in brief? Yeah, so that's a big question, and I don't even know how to answer that today. What I would say is that Docker is very much a container management tool. OpenShift includes container management with Podman, which acts very much like Docker, but it's so much more than that: it's Kubernetes handling container orchestration and scaling and so on, and then this whole ecosystem of operators that brings additional capabilities in. That would be my quick elevator here's-the-difference. Anything you want to add before we conclude today? No, so the reason I was saying I don't know how to answer that is because underneath the name Docker there are a lot of things, especially since it has been acquired by a different company, and Docker as a trademark decided to focus on Docker Desktop, which is basically
providing a better user experience for developers who are using containers than just the Docker CLI and such things. But yeah, OpenShift is definitely, I would say, the container platform that provides orchestration using Kubernetes and a container engine, while providing a lot of services on top of that, among which are tools to help developers, such as an integrated IDE, service mesh, serverless, pipelines, GitOps, monitoring, and even running virtual machines on top of your Kubernetes environment on OpenShift. So yeah, OpenShift is very broad; there's a lot there. And so, Kieran, I would say, if this is something you want to explore a little bit more, Red Hat Training and Certification actually offers a full portfolio covering everything from step one to every step beyond on these subjects. But here on the Level Up Hour, of course, we try to give you some insight into things that are coming down the road, things like, for example, Tekton Chains, which is a Technology Preview. Every other week, Jafar will explore some interesting topic like Tekton Chains or a great many other things, so take a look at our earlier videos; they'll give you a lot of information about what OpenShift is about. And of course we'll be back in a couple of weeks; we often have visitors who share some additional information and thoughts about the world of OpenShift. So, any parting thoughts before we close out? Yeah, so oftentimes the question we get is what's the difference between OpenShift and Kubernetes, not the difference between OpenShift and Docker, because Docker, if we had to put it that way, is like one of the small components of what a container platform should be: the engine that allows you to run your containers. But to answer the latter question, what's the difference between OpenShift and Kubernetes, I actually wrote a blog a couple of years ago, and if you want to look at it, maybe
that will give you a better understanding of what OpenShift is as a platform and what types of services it provides for different personas, whether you're a developer or an administrator, because we do provide features for both. I hope that gives a very comprehensive answer to a very straightforward question. On that note, thank you, Jafar, for walking us through Tekton Chains, Argo CD, and how they relate to GitOps. I feel like I actually gained a whole lot of knowledge there in just an hour, and that's our aim here. So until next time, please remember to like, subscribe, and share; let people know that we're here. We're always doing some sort of topic that is relevant to the world of containers, Kubernetes, and OpenShift. And so with that, I'll bid you a good day, a good night, and a good week. Thank you, everyone. Thank you very much, bye-bye.