In this talk we have very few slides; we'll try to do more of a demo and see if we can make a developer's life a bit easier in this process. The agenda is simple: we'll talk a little bit about Podman, and with Podman Desktop, how you can work with containers and the Kubernetes ecosystem. Then we'll talk a little bit about OpenShift Local, what it is and how we actually use it, and then the demo. Then our future plans for the project, and then questions and answers. We're trying to finish within 43 minutes, I hope.

So, Podman. How many of you already know about Podman? Can you raise your hands? I want this to be a bit interactive so folks don't fall asleep after this heavy lunch, and I also don't want to fall asleep just talking and talking. How many people actually know about Podman, have read about it? Very few. How many have at least heard of or used Podman? More or less. How many people know about Docker, or have just heard of Docker? Nice. And how many of you actually use Docker? Nice.

So when you go back home today, or here at the venue whenever you get time with your laptop, at least go to the site and try it out. I'll show in one of the slides that you don't have to do much: whatever you are doing with Docker, without any change, you can just alias docker to podman, and all the scripts and workflows you have should keep working as expected, without any failure. If there is a failure, then that's an issue on the Podman side.

So when we already had Docker, what is the use of Podman? Why did we actually create that project?
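As a quick, runnable sketch of that alias claim (bash syntax; podman does not even need to be installed for the alias itself to be defined):

```shell
# Minimal sketch of the drop-in claim: alias docker to podman, and scripts
# that call "docker run", "docker build", etc. keep working unchanged.
shopt -s expand_aliases   # aliases are off by default in non-interactive bash
alias docker=podman       # from here on, docker commands invoke podman
type docker               # confirms docker now resolves to the alias
```

After this, an existing script's `docker run --rm myimage` invokes podman with identical arguments.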
So I think in the beginning Ken Oteramki already explained some of the philosophy behind Podman. Docker has a daemon process running as root, and two or three slides later we'll see Docker's architecture and how Podman's architecture differs from it. Another thing we've seen is that one tool doesn't fit everything. You shouldn't have one Swiss Army knife to handle everything; you can have small tools, each with a specific task to perform and complete.

We also want to work with open standards. Open standards means that from the community side we get issues and pull requests, and Podman already has weekly community meetings, open for anyone to join and discuss features. And interoperability and compatibility matter too. How many of you know about OCI image layering? The Open Container Initiative actually details the spec of the image and the spec of the container: how it's supposed to work, how it's supposed to run, how it's supposed to be built, and all those things. So Podman should be compatible with that as well.

So why the name Podman? When we started this project, the idea was that it's not only containers we want to manage; we also want to manage Kubernetes resources. And what is the basic unit of Kubernetes resources? The pod. So we started with at least the pod definition as part of Podman, so that a user is actually able to create a pod with Podman and manage it the way it's managed on the Kubernetes side, without actually running any Kubernetes at all.
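As a sketch of what that means in practice: Podman can take a plain Kubernetes pod manifest and run it locally with `podman kube play` (the names and image below are placeholders, not from the talk):

```yaml
# pod.yaml - a standard Kubernetes Pod definition; no cluster required.
# Run locally with:  podman kube play pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
    - name: web
      image: registry.example.com/demo/web:latest
      ports:
        - containerPort: 8080
```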
So that's why it is a pod manager, and the name became Podman. As I said, if you already use Docker and want to try out Podman, you just need to set the alias, and once you do, whatever scripting and workflows you have are supposed to work as expected without any change. As I said, it has Kubernetes YAML support, at least at the pod level, because right now we only have the pod-specific parts in Podman. We are going to add more Kubernetes resources, maybe the Deployment and Service parts, but right now it's the pod that is supported.

The biggest thing is: no daemon. There is no daemon process running; it works on a fork/exec model. And it has had rootless support from the beginning, from its inception. You can use it as a developer or as a sysadmin, and it runs on all the platforms. Today I'm going to run it on a Mac.

So this is what I was saying before: this is how Docker actually runs. Most of the time you use the Docker CLI, and whenever you execute any command from the Docker CLI, this is how the command is processed until your container runs. Every single command goes from the Docker CLI to the daemon, which is running in the root context, and then the container and its process are executed. But in the rootless context, which is how Podman runs, it's a fork/exec model. For every Podman command, conmon, a very small process, is the main PID that is running, and it execs that particular command; then, via the fork, you get whatever result the command produces.

Now, there are a lot of people here on the sysadmin side.
People who work with Linux are very familiar with unit files and the way they run services. Podman actually makes it very easy to create a unit file: there is a systemd generator, and you have auto-updates around it. So this is a very basic unit file on the slide, which says: I want this image, this is the volume, and when this image runs I want it to sleep for 100 seconds. That's it. Now you can just use systemd and enable that service, so every time your system comes up, this thing is going to be executed. It's as simple as possible.

This is very handy on the sysadmin side, where people have to manage a lot of services. They can't run every Podman command by hand every single time a system boots; instead they write the service using the systemd generator, which automatically produces the unit files they can enable. There are a lot of resources on the Red Hat Developer blog about deploying applications with multi-container setups with Quadlet, with auto-updates and rollbacks and everything. After the session, maybe check out those blog posts and see how it all works.

A bit of a summary on the Podman side: the main focus is security, we are Docker-compatible, there is Kubernetes YAML support, and there are the systemd integrations. It really bridges the gap between development and production, because developers can use it, and then on the production side the sysadmins can make unit files out of it and just run them.

Now, moving on to Podman Desktop. Here we want to replicate what Docker Desktop is actually doing. Most people don't really like the CLI part; they want some kind of UI to interact with Podman.
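For reference, a hedged reconstruction of the kind of unit file described above (the image name and volume path are placeholders; the `--rm` run, the volume, and the 100-second sleep follow the slide's description):

```ini
# /etc/systemd/system/my-container.service - a minimal sketch, not the
# exact file from the slide.
[Unit]
Description=Example Podman container service
Wants=network-online.target
After=network-online.target

[Service]
ExecStart=/usr/bin/podman run --rm --name my-container \
    -v /srv/data:/data:Z \
    registry.example.com/demo/tools:latest sleep 100
ExecStop=/usr/bin/podman stop my-container
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Enable it with `systemctl enable --now my-container.service` and, as described above, the container comes up on every boot.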
Until now, everything worked through the CLI, and then we wrote this application, which is supposed to give you a proper UI front end where you can interact with pods, containers, registries, and all those kinds of things.

So what does this look like for a developer today? To me the questions are: from the time I write the code to pushing it to production, how much time am I consuming, and how much inconsistency is there between my environment and the production side? There is always the saying, "it works on my system," and then someone says, "no, it's not working in production." This is how things break. For example, suppose I'm using a plain Kubernetes cluster and my target production environment is OpenShift. There can be issues there too, because OpenShift has security enabled by default, which means you can't directly run an image that runs as the root user, while on plain Kubernetes you can. So you might test it on your local system, see that everything works even with Kubernetes, and assume it should work there too, but that's not the case. That is the gap we want to bridge.

So this is the kind of adoption barrier we have. The inner loop is what our local developer environment looks like right now, and on the far side you see production, upstream, how they manage the services. The question is how our application moves from local to production, and how we can reduce this wall of discrepancy.

In Podman Desktop we have a plug-in mechanism. At the core there is the Podman engine, which you can use through a kind of wrapper: you can manage your containers, your pods, your local Compose files if you have them. Then further out we have the plug-ins, such as OpenShift remote and managed services. We have different plug-ins.
We have plug-ins for kind, minikube, and OpenShift Local, so that once you are satisfied with your application, you can deploy it either on plain Kubernetes or on the OpenShift side. And we even have a plug-in for the Developer Sandbox, so you can enable it, deploy the same application to the Developer Sandbox, and see how it is going to look on the production side. So that end-to-end lifecycle is taken care of by one tool.

Now to OpenShift Local. How many of you have actually deployed OpenShift 4? No one? So again, when you go back, try to look into it and feel the pain we sometimes feel there today: how much time it takes, and how much you have to learn, just to deploy it. What OpenShift Local does is make this easier for the developer. It is a single-node OpenShift cluster. The way we build it: we use the OpenShift installer to create a single-node OpenShift cluster, then we compress that particular image, and that is what we distribute, so the end user doesn't have to spend all that time. We handle all the configuration around DNS and everything as well. It's the quickest way to run or build an OpenShift cluster on a local computer. You can actually emulate locally whatever development environment you have in your cloud, deploy your application, and see if it works well on this particular cluster.

This is the architecture of OpenShift Local. The gray part, the laptop, is your local host or local system, and this is your host OS. Then we use the native hypervisor: on Windows we use Hyper-V, on Mac we use the Apple virtualization framework, and on Linux we use libvirt with QEMU/KVM. Using the native hypervisor, we deploy a VM which already contains all those parts.
Then we have different small tools which interact with this particular VM. But for the end user, as you see at the top, you only have to run crc setup and crc start, and then use oc or kubectl to interact with the OpenShift cluster. So it makes things very easy for developers.

Now the demo part. I've kept this demo very basic, because I see a lot of people haven't even used Podman, and rather few have actually used Docker. I have a very basic application, a very simple HTTP server written in Go. It only has GET methods, I think, not even POST. What we'll try to do is work the way a developer would work: create the application, run it locally and see if it works, then create a container image out of it, then use that container image and see if we can run the application again, then deploy that particular image to a Kubernetes environment, and finally deploy it to an OpenShift cluster, namely the Developer Sandbox. And we're doing everything with Podman Desktop, so you also become familiar with the UI side of it and how it actually works.

(In response to a question:) Yes. Yeah, so there are plugins, which means you are actually able to connect to the remote cluster through the plugin, and then you can do the deployment without any issue. And we are only doing pod deployments; we are not using any other Kubernetes resources like Deployments at all. We only go up to the pod, since the pod is the basic unit. So about this plugin: there are two things. The plugin I'm using is the OpenShift Sandbox one, which is mostly for the OpenShift Sandbox side. But there is another mechanism: in the Kubernetes world, there is something called a context.
You use the kubectl command with a kubeconfig, and the kubeconfig has different contexts: this cluster maps to this server, that cluster maps to that server, and so on. This is not specific to Podman Desktop, but it shows you the context; I'll show you that. And it doesn't matter where your Kubernetes cluster is. Thank you.

So — is it visible in the back? Should I increase the font? OK. As I said, it's a very basic Go application, just a few lines of code. There are only three request handlers. I have a simple basic server that is going to run, and when I hit this particular URL it goes to the hello handler, which says hello; then there's the headers handler, and the version handler. That's it, and it listens on port 8080.

So assume I'm a developer creating this application. The first thing I'll do is write it and check whether it runs on my local machine. So I'll say go run, hoping everything is right. And it says the bind address is already in use. So let's see if I already have some container running. Yeah, one second, I'll delete that. OK. Let's try it again. Now my server is running, on port 8080 as I said. So I can go here and open localhost:8080, and it says hello. I have the different listings, and it says the version is v1, which is fine, very basic. That means my application is working as expected and I'm able to browse it.

The next step: now that I have this application, I have to containerize it, which means creating the Containerfile. I already have the file; this is the Containerfile I have.
The Containerfile itself is very basic. I'm using a multi-stage build, but it can be even simpler; you don't have to use multi-stage. What I'm doing is grabbing this particular image as a builder, building my program, and naming the binary my-server. In the second stage I'm again using the minimal UBI image, and I copy whatever I built in that builder stage. That's it. I expose this particular port, and there is a USER instruction; I'll tell you why we're using this user when I get to the deployment part. The entry point is my-server, the binary. It's very simple.

Now to Podman Desktop and how we run it. I already have Podman Desktop running; the interface looks like this. Let's go to the dashboard. There are plug-in settings; if you go here, it tells you which plug-ins are running. I have Podman running, version 4.8.0; I have OpenShift Local running at this particular version; then I have the Developer Sandbox, which is connected and running. There are other plug-ins installed that I'm not using; some are installed by default, like Docker, Lima, and even kind. The Developer Sandbox and OpenShift Local plug-ins are something you need to install if you really want to use them. Installation is very simple: you go to Settings, then Desktop Extensions; there are a lot of desktop extensions, the images are there, and you can use them. You can even use some extensions that were made for Docker Desktop directly.

So now, these are the images; I already have some of them. But let's assume I want to create the image. First I say: OK, I want to build an image. It will ask: do you have a Containerfile?
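For reference, a hedged sketch of the multi-stage Containerfile just described (the builder image tag and paths are placeholders; the overall shape, the minimal UBI runtime stage, EXPOSE, USER, and ENTRYPOINT follow the talk):

```dockerfile
# Stage 1: build the Go binary.
FROM registry.access.redhat.com/ubi9/go-toolset:latest AS builder
WORKDIR /opt/app
COPY . .
RUN go build -o my-server .

# Stage 2: minimal runtime image.
FROM registry.access.redhat.com/ubi9/ubi-minimal:latest
COPY --from=builder /opt/app/my-server /usr/local/bin/my-server
EXPOSE 8080
# Run as a non-root user, which matters later under OpenShift's
# default security constraints.
USER 1001
ENTRYPOINT ["/usr/local/bin/my-server"]
```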
So I say yes, I have a Containerfile; this is it, and I select it. Then it shows the build context directory it's going to use. What that is for: if I'm copying a specific file, this is the directory where it will look for that file; if it's not there, it will show an error. So that's the build context directory I have. Then the image name: I give the image a name so that I can even push it later. I'll say this is version three. Yeah, just a tag. And then it starts building.

Everything I just did in the UI you can do with the CLI, something like `podman build -t my-image:v3 -f Containerfile .` — -t for the tag, -f for the Containerfile location, and finally the build context. But for a newbie who has never used Podman or Docker, the UI makes it easy to understand which things you have to provide and how you generate an image out of them.

Once I have that image — it takes some time to build — we'll try to run it on my local system itself, using Podman. In the container ecosystem you have an image; then you run the image, which creates a container; and then you try to access that container through its endpoints. Right now we're exposing port 8080, so I'm trying to connect to that particular port. go build is taking time; it's not a big deal, it just has to fetch the dependencies and then actually build, so it all depends on network connectivity. Hopefully it will work, but I already have an image, so while this one builds, which is fine, I'll go back to Images. I already have a v2 image, which I built just before this session. Yeah, this is the v2 image. Now, once I have the image, notice there is something called notifications.
It says that the build we started is still going on, and once it's complete it will say so. I'll leave it there; if I really want to go back to it, I go to Tasks. All right.

So now I have the v2 image. What I'll do is just run it and create a container out of it. It asks a few things, like the name of the container; if I don't give one, it takes a random name, but let's not do that — let's name the container my-server. Do you want to run any command once the container starts? I don't want anything. Then it also asks about port mapping: it knows this port is exposed, and asks whether you want to expose it as the same port on the local host or as a different one, the same as the -p option on the command line. I'll keep the same port; the only thing is — yeah, I already exited that earlier process, so that's all I need. And I say: start that container.

Now the container has started; it says "starting server," and I can see the container is running. If I go here, there's a small button that says open in the browser. I click it, it asks if I want to open it, I say yes, and it's the same thing — except now it's being served from the container. The endpoints are still the same, still v1. So it works: we created the image, we ran it locally, everything is good.

Now let's try to deploy it locally on a cluster. I have OpenShift Local running, and I want to deploy to that OpenShift cluster without pushing the image anywhere. Keep in mind — assume I have an image that contains some data that is very secret to the company.
And I don't want to upload that image to any registry at all. So how can we do it? Here, once you build the image locally, there is an option that says: push this image to the OpenShift Local cluster. It pushes the image directly into the local cluster; you don't have to upload it anywhere. I already did that, but I'll click it again, and in the task manager it says it has been pushed. So the first part is complete: I don't have to push to Quay and fetch it again; my image is already there locally.

Now I go back to the container side, and there is another button that says deploy to Kubernetes. So I say: I want to deploy this to Kubernetes. The pod name, my-server, is fine. On OpenShift there is a security context to deal with: on plain Kubernetes you don't need it, but on OpenShift you have to provide a security context. So I say yes, I want that, because I'm putting it on an OpenShift cluster. It will also create a route so that I can access my application through the route. And that's it. If you want to deploy into a different namespace, you can select whichever you want, but I'll keep the default. And I say: deploy.

You can see how fast it runs, because the image is already there; it doesn't have to pull this 108 MB image from the internet. And it already says the route is available, so I can access it. Do you want to open it? Yes. And again I have the application, now running on the OpenShift cluster. Everything is still local; I haven't pushed the image anywhere yet.
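That "update the manifest with pod security" step amounts to adding security-context fields along these lines to the generated pod. This is a sketch of typical fields that satisfy OpenShift's restricted defaults, not the exact manifest Podman Desktop generates, and the image reference is a placeholder for the cluster's internal registry:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-server
spec:
  containers:
    - name: my-server
      # placeholder internal-registry reference for the locally pushed image
      image: image-registry.openshift-image-registry.svc:5000/default/my-server:v2
      ports:
        - containerPort: 8080
      securityContext:
        allowPrivilegeEscalation: false
        runAsNonRoot: true        # why the Containerfile sets USER 1001
        capabilities:
          drop: ["ALL"]
        seccompProfile:
          type: RuntimeDefault
```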
And if I check the version, it is again the same. The next thing: again without pushing the image to a public registry, how can we deploy it to a remote OpenShift cluster? Assume the organization has an in-house OpenShift or Kubernetes cluster, so you don't want to push the image to public registries like Quay or Docker Hub.

There is another thing you can do. If you go here, you see Registries. You can add whatever registry you want. Since I'm using the Developer Sandbox, the registry is already added, because a registry is already configured there. But if you're using a different Kubernetes or OpenShift cluster, you just provide the registry URL, the username, and the password, and configure it. Once it's configured, you go to the images. Now, how do you usually push an image to a registry? You tag it with the full name: the domain, like quay.io, slash whatever namespace you have, slash the image name. On the Developer Sandbox side, we already do the tagging. I already have the image I pushed to the Developer Sandbox, because if you look at this particular image, there is an icon that says push this image to the Developer Sandbox. When you do that, it creates a tag automatically. It's just a tag: if you look at the checksum, it's the same SHA digest as this image. You can see even here that both have the same a9a prefix. It only creates the tag, nothing else.

Now what I'll do is run this image locally again. I can do that; I'll just remove the earlier containers, because they already served their purpose and I don't want them holding port 8080. So now I have this image, and I say: run it, same thing, run it as my-server, with the same settings as before.
Now, if I go here — yeah, this is the pod that is running. I want to say: deploy it to Kubernetes. And here comes the thing I mentioned, the context. OK, sorry: I have to select the context first, before deploying to the cluster. That's very easy. If you look at the Kubernetes tab, there are different contexts available, all coming from the kubeconfig file. On the command line, these are the contexts you get with `kubectl config get-contexts`, and you can set one with `kubectl config use-context`. But here it's very easy: I set this particular context, and down here you can see my context is now set to the Dev Sandbox context. So now if I deploy this image to the cluster, it will automatically pick up the context. I'll again say yes, update the manifest with the pod security settings. I already pushed the image, so everything should be good, because it's going to use the same registry I pushed to. So when I hit deploy, it's supposed to work.

And... it's still pending, and it says ImagePullBackOff. So it looks like the image is not there. What I'll do is push the image again from here: push image to the Sandbox. It takes some time, because now it's actually using the internet connection to push from my local machine. And it says it was pushed successfully. So now my pod is supposed to be running; if it isn't, it will be soon, or I can delete it and recreate it.

Now it says this. What happens is that Podman only understands the pod resource. It doesn't understand the Service resource and so on. Once such a resource is already there, you can't delete it using Podman Desktop; that's something you have to do from the command line.
If you look at that particular namespace, the resources I have include the Service, and also the Route, which I have to delete manually, and then it's supposed to be available. And you can see it is running, and I even have the route, so if I go to that particular route, it should work. So now it is deployed on the Dev Sandbox, the remote OpenShift cluster. And in everything we did, we didn't push anything to a public registry. It's that simple. So that was the demo: the typical workflow a developer might have. If you have any questions, let me know, and then let's go back to the slides.

Question: why would I choose Podman? Yeah, so Docker — Docker Desktop is, I think, already subscription-based now, so you have to pay Docker to use Docker Desktop. Question: so Podman is free? Yeah, it's completely free; Podman Desktop is completely free. That's the thing. I think in the keynote — I'm not sure if you were there — we also talked about this: it's completely free on any platform, it behaves the same, and you can use it regardless of your organization's size, whether you're an individual or working in an organization.

Question: setting aside Podman Desktop versus Docker Desktop — I have experience with Docker, I can run services in a Docker container, so why would I choose Podman? Sorry, why choose Podman as a container? It's not a container; it's a container engine. You have different tools to manage your containers, different flavors. You can use Docker if you're happy with it. You're saying you're not using Docker Desktop with its subscription, but something community-available, like the Moby project, using Docker from the terminal.
Even then, with Podman you can use the same thing; at the beginning of the talk I showed you the alias. The biggest difference is the security side. As I said at the beginning, we actually created another tool, Podman, because of rootless operation and the whole philosophy around it, which Docker doesn't provide for you by default. Even now, rootless on Docker is somewhat blurry in the way it actually execs the process, whereas on the Podman side it has been rootless from the beginning. Thank you very much. Can I just finish? Give me two minutes.

So the Podman Desktop community edition is available for download. If you search for Podman Desktop, there's podman-desktop.io, where you can download it for whatever platform you use. Try to explore it, run it, and see. And the Sandbox — if you attended the keynote, you know that's also free. You can get a Sandbox account using developers.redhat.com and plug it into Podman Desktop. We also have VPN and proxy configuration support, backed by the vpnkit we're using, and all those kinds of things.

So: free, open, and extensible by default. And this is how Podman Desktop is actually built. It's a UI framework using these particular toolings: the client uses Electron, which is cross-platform, and then there is the virtualization stack I mentioned — Hyper-V, the Apple hypervisor, and on Linux the native QEMU/KVM. It also supports Docker extensions, as I said, because you can extend it; we already saw kind, Lima, Docker, and OpenShift Local. And it has default registry plug-ins, so you can configure a default registry; you can configure an insecure registry as well.
So that's there. Now, future capabilities. Some of this is already partly done; the slide is a bit old. We are already working on the Apple hypervisor support, which is in tech preview, in the sense that you can try it out. The next point is that we want to enhance the YAML support, meaning the resource support: right now we only support the pod, but we want to support other Kubernetes resources. Then the networking part: in rootless mode there are parts that don't work as expected, and we want to improve that. With Podman 5.x we are changing the networking stack; it's going to be something different, which will help with that particular point.

And then, let us know what you want to see. When you go back and try it out: whatever workload you currently run on Docker, can you run it with Podman? Does Podman make your life a little bit easier, or at least give you the same level Docker is providing? Then it's good. And create issues for whatever you want to have. These are the areas where we'd like contributions right now. And I think this is the screen where you can get those two particular links; the slides will be available after the talk anyway.

Do I go back? OK, I'll give you one minute. I'm done; the next slide is questions and answers, so now you can ask. I have two minutes left. Sorry, I didn't hear. Question: can you customize it, like you can customize Docker Desktop? Yes. As I told you, it is extension-based. Suppose there is something missing: you can add an extension for that and customize it according to your use. Sorry? In your Chrome? In your Chrome —
No, it's a desktop application, not a web application. It shouldn't need anything like that; it should be just aliasing, and it should go as smoothly as possible.

So an image created for Docker Desktop, can we push it directly? You can just export it and import it into Podman Desktop. Because both images are OCI-compatible, it should be fine without any issue. You can try it out, I think.

On resource usage: there was a blog post some time back that made some kind of comprehensive resource-usage measurement, but it was only around the Docker side, not Podman; it wasn't a comparison, mostly just how much resource Docker uses. So far we haven't seen any issues regarding resource limitations and all those things. It is still a very new project: the Desktop part is very new, and Podman itself is only four or five years old, so we are still evolving. Podman was initially only for Linux; the Podman machine, which supports the other platforms, is very new, I think a year and a half old or so, so we are still exploring it. And if you see a performance issue, say a Containerfile that builds very fast in Docker but not in Podman, let us know, create an issue, and we'll find a good solution for it.

Any more questions? Yes, you can run it. I think one of my colleagues actually uses it just to compare and make sure everything works as expected. I don't see any issue so far; I'm not using it myself, but somebody does, so I can say it works.

Sorry? Are Helm charts supported in this? Helm charts are, I think, on the Kubernetes side; they have nothing to do with Podman Desktop or Docker.
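The export-and-import answer above can be sketched as follows (the image name is just an example):

```shell
# Save an image from the Docker side as a tar archive...
docker save -o myapp.tar myapp:latest

# ...and load it into Podman; both tools speak the OCI image format,
# so the archive round-trips without conversion.
podman load -i myapp.tar

# Verify the image is now visible to Podman.
podman images myapp
```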
Helm charts are, I think, on the Kubernetes side. Once you have the Kubernetes layer, whatever resources are supported by Kubernetes, you can install them.

Sorry, it's checking what? When compiling a Maven project, it checks for a suitable Docker environment to run in, and if there isn't one, it runs locally. So when switching to Podman Desktop instead of Docker Desktop, are any configurations needed? Yeah, I forgot to show you: there is a button called Docker Compatibility. When you click it, whatever Docker socket your Maven project needs, it will enable that; it maps that particular Docker socket to Podman behind the scenes, and it should work, because a lot of people in our org are actually using it for that. I think there is a blog post as well; maybe check on developers.redhat.com.

Yes? You use Kind here? No, no, no: services with Kind, whatever you can do with Kind, is supposed to work here too without any issue. Yes, yes. But again, there is a tooling limitation: this tool understands the Pod. You can deploy with Kind, and then you can use the kubectl CLI and do whatever you want. Yes? Yeah, right, that's what I actually did, using the OC config.

How much is the CPU utilization, the memory utilization? Sorry, can you repeat? If I want to monitor that container, how much CPU and memory is it using? I think they have that. I never tried it out, but if I go to the pod that is running, it has this memory usage and CPU usage display; right now you can see it is using about 10 MB.
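What the Docker Compatibility button does can also be set up by hand on Linux; a sketch, assuming a systemd user session (the socket path can differ per system):

```shell
# Start the Podman API service, which exposes a Docker-compatible REST API
# on a user-level socket.
systemctl --user enable --now podman.socket

# Point Docker-API clients (Maven plugins, Testcontainers, and similar
# tooling) at that socket instead of the Docker daemon.
export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock

# A plain Docker CLI client will now talk to Podman.
docker version
```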
I'm not sure if you can see it, but it's there, and it's using very little CPU; that's why it says 0.0%. You can also inspect the container, which is mostly the container side of things, and see the memory usage and CPU usage there. You can't have every capability of the command line in the UI. If you really want something advanced, you always switch back to the CLI, where you have more options, like telling a container that it is only allowed to use that much CPU and that much memory. You can do that on the CLI side, but not here; not everything is exposed, because otherwise it would be very cluttered, and your use case might apply to only 8 or 9% of users. I think you can set it there; I never tried it out, but I see there are command-line options for Podman. Thank you.
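The CLI-side resource limits and monitoring mentioned above look roughly like this (the limits and image are example values):

```shell
# Run a container with explicit CPU and memory limits set from the CLI,
# which is the "advanced" control not exposed in the Desktop UI.
podman run -d --name limited --cpus 0.5 --memory 256m nginx:alpine

# One-shot snapshot of live CPU/memory usage, similar to what the
# Desktop UI shows for a running pod.
podman stats --no-stream limited

# Inspect the memory limit configured on the container (in bytes).
podman inspect limited --format '{{.HostConfig.Memory}}'
```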