So thank you so much, everyone. We are going to start with the first of the sessions, which is Getting to Know Podman, Podman Desktop and Friends, by Karan and Ranki. Karan, Ranki, please come to the stage.

Hi guys, I'm Karan, and this is my friend Ranki. Here I want you to pick up your phone, take out your wallet, and prove to me that you can drive. Scan this QR code and upload your driving licence. And don't share your Aadhaar. No, I'm just kidding. You can scan this; it's a mini game. We are obsessed with cars. Burr showed a fantastic demo where you were competing with grey and yellow cars; I have a different one. It's a hybrid car that does not have any wheels, so it should be interesting for you. Meanwhile, let me briefly go over what we're going to talk about. Ranki is going to tell you a nice story about containers: how this all evolved, how it grew into a specification and a community around it, and what the runtime landscape looks like. Then we will dive deep into developer tools for containers, and how you can build, run, and manage containers effectively on a local machine, on OpenShift, and alongside Docker. And then we'll show you some more cool stuff, live, using the very tools we're talking about. So, the mini game: it should be working for you, if the Internet is serving us well. How many of you were able to open the game? Is it working? One, two... oh, all of you. Fantastic. Show us your driving skills, and in the next ten seconds we're going to flip over to understand the technology powering this mini game we built for you. All right, time's up. I'll give you more time to play the game with me in a bit, but for now I'll hand over to Ranki to talk about containers. Thank you.

Hi. Namaskara. How are you guys?
So, one of the things: how many here believe code is an asset? Show of hands. And how many of you believe code is a liability? Right, you see the difference. With open source, a lot of long-lived technology has been built, and a lot of innovation happens there too, because commercial technologies in many of these areas can't solve the scale problems. If you take a look at most of the larger Indian startups, they have embraced a lot of open source technologies to build their own stacks. Think of the things you've been doing, like booking your railway tickets or authenticating yourself against a government service: most of you have used container technologies and Kubernetes, even to get here, without knowing it.

But where do these technologies come from, and what happened historically to make container technology what it is today? It all started in 1979. Any developers here from 1979? If you are, I would like to pick your brain. It all started with chroot, which came in Unix Version 7, and what we're seeing today has its origins there. It was one of those important interfaces in Unix through which a process could be given its own private view of the root filesystem, and this was huge at that point in time. This technology helped drive the adoption of Unix across most of the major computing you see: your Android phone is a derivative of Unix, your Mac is a derivative of Unix. There's hardly a technology you're using right now that wasn't envisioned back in the day.
All the great engineers at AT&T Bell Labs, Berkeley, and MIT contributed to what technology is today, especially with the internet. How many of you are programming in a proprietary programming language now? None. Most programming languages today are open source and built on open standards. So it started off in 1979, and then FreeBSD implemented jails, adding more isolation features, particularly around networking; a lot of the networking stack you're seeing right now has its origins in BSD. Then, around 2003 and 2004, there were a lot of innovations. SELinux was one of them, created by the US National Security Agency, which then collaborated with Red Hat to co-maintain the project; whatever you're seeing as mandatory access control today, SELinux is by far one of the most widely adopted access control projects. And in 2004, for the first time, Sun coined the term containers: Solaris Containers. It turned out to be a transitory term, because they rebranded it to Solaris Zones. Quite a few projects started adopting these ideas, but they were built from a feature point of view, not a security point of view; if you take a look at the older manuals, most of the security literature of that time discouraged you from relying on chroot for isolation. So this was the state of things. In Linux, the container ecosystem was built on these primitives in 2008, when the Linux Containers project, LXC, came out. LXC rested on three main things: the first is namespaces; the second is mandatory access control with SELinux;
and the third is cgroups, on which most of the resource sharing, process accounting, and isolation takes place. For a long time these three primitives were the basis for most of the container projects being distributed. Then, during a lightning talk at PyCon 2013, dotCloud, the company that later became Docker, announced the Docker project. The container engine underneath at that point was actually LXC. They used the same namespaces, but their architecture had a daemon: the client talked to a daemon running as root in order to set up the isolation. What it did, though, was solve the packaging problem for developers, and ops people started getting applications deployed exactly the way the developer intended. But what usually happens with technology decisions is something called the Jevons paradox: when you make a resource more efficient to consume, people end up consuming far more of it. So everybody started packaging their applications as containers, and it blew up in the market; everybody wanted to ship their applications as containers. Now the problem was: with lots of containers out there, how do you orchestrate them? At that point academia, companies, everybody had their own opinion. Even Red Hat had its own; it was called geard, and I'm not sure anyone here has used it. So everybody had their own opinionated way of how containers should be orchestrated, and there were all these wars going on.
And certain patches from the community and from other companies were not accepted by Docker, because what the community was dealing with was Docker as a company and Docker as a project at the same time. You can see where I'm going with this: monopolies in technology can form very easily. The way Unix survived, and the way most open source technology has survived and been adopted, is through standardization. So a standardization body was needed to answer: how should a filesystem look? What should a container look like? What even is a container? Is it the container from Docker the company, or from Docker the project? So all of this started, and under the Linux Foundation umbrella, most of you might have heard of the CNCF, the Cloud Native Computing Foundation. This very young foundation started collaborating with engineers from most of these companies, Docker, Red Hat, AWS, IBM, and they started contributing and donating code under the CNCF umbrella, so that the technologies you're seeing right now could grow. On the orchestration side, in 2014 Google started developing Kubernetes in the open, because that's one of the lessons Google had learned from back in the day, going from doing open source to doing open source right. There were many contributors; Red Hat was one of the earliest companies to contribute to Kubernetes as a project. And Kubernetes by far got the most traction as a viable container orchestration platform that enterprises could adopt, and most projects started growing around the Kubernetes ecosystem.
So quite a few projects started adopting these technologies, built up all the way from the 1970s to now, each with its own opinion of how a container should be built. Then the standardization process kicked in: container runtime specs came out, and interfaces within the Kubernetes ecosystem, for networking companies, storage companies, and the other enterprise vendors adopting the technology, started to appear. If you take a look at the timeline, quite a few projects came out of many organizations. Like I said, Docker originally used LXC as the engine, but then they started their own runtime library, libcontainer, and they introduced containerd as the daemon for running containers. Quite a few of these companies started collaborating, and the first version of the OCI specs came out after that. If you look at how people were writing Docker containers, once the specs were introduced quite a few new players came into the whole ecosystem. The OCI, the Open Container Initiative, was one of the first standardization efforts here, and it published specs for how an image should look and how a runtime should behave; a distribution spec was added a bit later. While OCI was working on this, Kubernetes was also working on its own standardization effort for what a runtime in Kubernetes should look like, and that's where the CRI-O project came out, around the same time, in 2017.
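To make that concrete, an OCI image manifest is just JSON that any compliant tool, whether Docker, Podman, Buildah, or a registry, can read. Here is an illustrative sketch of one; the digests and sizes are placeholders, not a real image:

```json
{
  "schemaVersion": 2,
  "mediaType": "application/vnd.oci.image.manifest.v1+json",
  "config": {
    "mediaType": "application/vnd.oci.image.config.v1+json",
    "digest": "sha256:<config-digest>",
    "size": 1470
  },
  "layers": [
    {
      "mediaType": "application/vnd.oci.image.layer.v1.tar+gzip",
      "digest": "sha256:<layer-digest>",
      "size": 75181999
    }
  ]
}
```

The runtime and distribution specs are defined in the same spirit, which is what lets independently developed tools interoperate on the same images.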
What happened with this is that anybody could come and contribute to container runtimes, and anybody could build their own container runtime, as long as it stayed compatible and didn't break the specs or the implementations. OCI and CRI-O also had a way to interface with each other: Kubernetes now interacted through CRI-O, so container images could be pulled and the runtime could interact with the host in a standard way. One of the major things, from a security point of view, is that container runtime engines started moving towards daemonless and rootless containers, with stronger security isolation; user namespaces and the rest kept being hardened to make containers more secure. Somebody had asked a question earlier about container security, and about how open source projects are run: there's a lot of consumption of open source software, so who takes responsibility for it? One of the ways you can bound that responsibility is with containers. If you know exactly what your application needs to consume from the underlying host, you get fine-grained control over which interfaces and which resources the application can touch. So this is what the container runtime ecosystem looks like: projects like CRI-O and containerd (Docker contributed the containerd project), and various other communities. Between 2015 and 2016, CoreOS, one of the companies that eventually got acquired by Red Hat, had its own container runtime called rkt, and in the technologies we'll be talking about today, like Podman, you can see some of rkt's legacy, running as a daemonless container and how it
has been hardened. So these are a few of the container tools we'll be working with and demonstrating, the ones powering the app that's been running on your mobiles. Podman was one of those projects. It's a client-only tool that is compatible with the Docker CLI, so many developers who adopted Podman have simply added an alias in their shell from docker to podman, and they've been running Podman in the back using their usual Docker commands; nothing breaks, and it's been wonderful the way the project has been adopted. It had the advantage of having seen how people were running Docker, while also taking into account security and the way people actually consume containers. As the name itself says, Podman is a pod manager. Most of you know a pod from Kubernetes: it's the smallest compute unit in Kubernetes, in which you can deploy one or more containers. Podman's advantage is that there is no daemon, so any application is, by itself, isolated within its own environment. If you've been following the Kubernetes ecosystem since 2016 or so, quite a few of the security vulnerabilities came from running a container runtime daemon that had access to all your underlying hardware. Here, because each container is isolated by design, only the application has access to its own container and its own namespaces. You can draw parallels to the chroot we were talking about from 1979; even then they were trying to achieve the same thing, but here it's much easier. And Docker contributed in a great way, because they abstracted how people can build containers without much underlying system knowledge. But then any technology
would have its pros and cons, and the con was that the daemon had access to all your underlying shared hardware resources. Among the various runtimes, runc was also developed in a similar timeframe, as the low-level runtime for containers. Most of these technologies run without root and in isolated namespaces, which also makes auditing your applications easier, and it complements the Kubernetes ecosystem very well. If you look at all the containers deployed in production, the questions are all of this kind: how do you want to isolate your applications, how do you want to secure them, which applications require which privileges? You get fine-grained control over all of that with Podman. Podman also does basic builds, and there are other tools in this ecosystem along with it: Buildah and Skopeo are the popular ones, and you can use the three of them like a trinity of applications, working together to be more productive. So the main thing is Podman, a rootless, daemonless container engine with advanced namespace isolation that can also do rudimentary builds; if you have simple projects in your build pipeline, you can use Podman to build them. But to get more fine-grained control over building your applications, if you have complex rules to write and want a way to secure your build, you can look at Buildah. All of these projects are written with standards in mind, the Open Container Initiative standards. The thing is, to mitigate security risks and supply chain attacks, you need to look at your code, at how you're packaging your code, and at how you're
distributing that code to the operations teams. Projects like Podman and Buildah make you do the right builds and take the right decisions, because they comply with the standard interfaces, and by design they don't have a socket or a daemon, which pushes you towards secure builds. The last tool is Skopeo. Because the container ecosystem blew up, with everyone writing containers, people don't really think about what base image, what base operating system, they're running on, even though these are key decisions from a supportability point of view. Yes, you're moving fast, but you also need to inspect what is in those packages, and you need to move your images from one container registry to another. You can use Skopeo for all of that. The reason I'm saying you need these tools is, like I said, if you're not the author of a dependency, you're not sure what is written and bundled in those packages. We have seen quite a few instances of this: npm is notorious for malicious code showing up in its repositories, and even last year a few pieces of malicious software made it into the Python package repositories. So you need to inspect what is in your packages, and Skopeo is one of the tools that can inspect container images and show you what is there and how you can consume them in production.

This brings us to Podman Desktop. Podman Desktop provides you a developer-centric UI for managing your containers locally. You can run it on Mac, Windows, and Linux. If you're familiar with Docker Desktop, it gives you a similar developer-centric UI through which you can manage containers. One interesting fact about
Podman, the name, is that it's a pod manager: you can run multiple containers bundled together as a pod, in the nomenclature of Kubernetes, and through Podman you can run pods without even having Kubernetes. That's the beauty of Podman, and Podman Desktop provides you a user-centric way to manage your containers locally on your machines. It comes with interesting features like VPN proxies and image registry configuration; if you're working in a closed environment where you don't have access to everything, you can configure the VPN proxies in Podman. Through Podman Desktop you can also connect to remote OpenShift clusters at the moment, and you'll be able to connect to Kubernetes clusters in the future. About a month back we announced the GA of Podman Desktop, so go check it out at podman-desktop.io and download it; it provides a nice user interface for all your containers running on Podman. By the way, you can also use it for running and managing Docker containers, just FYI. The next tool that we have in our kitty is the OpenShift extension for Docker Desktop. If you're using Docker Desktop at the moment, you can go to the extensions option in Docker Desktop and get an extension through which you can seamlessly push your local containers, or local images on your machine, to a remote OpenShift or Kubernetes cluster; I'm going to show this live in a moment. It's a way for you to do one-click deployment of your containers, not just on your local machine but to a remote host: just provide the credentials and the endpoint where you want to deploy. It's very easy, as a developer, to showcase your app to your colleagues: hey, I've deployed this instant app on a remote Kubernetes cluster. You don't need to play with
manifests, you don't need to play with YAML files; it just magically works. I'm going to show this right now; let's see how it looks.

OK, so what I'm going to show you at the moment is this code repository: it's an HTML and JavaScript based game, nothing fancy. Here's my code, and as usual I have a Dockerfile living somewhere in my code repository. It's a very basic Dockerfile: it uses a UBI image, installs Python, copies all the JavaScript files from my local directory, exposes a port, and launches a very basic web server to host my application. What we're going to do next is follow these instructions together, like you would typically go to any git repository, read the documentation, and try to adapt and learn as you go along. So I'll be switching back and forth between two windows here. Before I go into this: I have two environments for this demo. We have an OpenShift developer console, and then we have another environment, which is Azure Red Hat OpenShift, and we're going to use both as we move along in the demo. All right, let me connect to my instance; it's a RHEL machine, so I'll do an SSH. I'm going to show you this on RHEL, and I'm a fan of zsh, so let's quickly go into that. OK, so it's a developer machine running RHEL. I can do basic commands like podman images, and it will show me there's one image. I'm not going to build it live from scratch, but it will pull up the changes. As a developer you have your code base living on your local machine, so if I cd into racing-game-app, which is my local repository, I have everything in here that's needed. I need to build a container image out of it, and we're going to use Buildah for this. Oh,
wait, hold on, let's not even go to containers yet; we'll do containers as stage two. Let's validate that this works without them. I'm going to spin up a minimalistic web server, the Python HTTP server, on port 80, and technically my app should be running and accessible. Let's quickly validate: this is the public IP of my instance, and boom, my application works. It's not using containers at the moment; no container, just a basic web server running somewhere. Let's kill it off; we'll see it later. As a developer, your next job is to containerize your app, which means packing it, building a container out of it. Let's see how that looks. We're going to use Buildah: buildah bud, where bud stands for build-using-dockerfile. I have a Dockerfile, and I'm going to use Buildah as the CLI tool for it, which means Buildah works quite nicely with Docker-compatible files. Then I provide a name and tag for my image; I'm calling it hovercraft, because it looks cool. So I build it using bud: it reads all the instructions in my Dockerfile, pulls the base image, installs things, packs my app files in, installs the dependencies, and then I've got my container image locally. If I list the images it will show me it's here, under localhost, and the image is up on the machine. Next, we want to tag it so that we can push it to an external registry. It could be a private registry or a public registry, that's up to you; I'm going to use quay.io as our destination registry to push our image into. For this I need to tag it, just like you'd do a docker tag, it's pretty much the same: buildah tag, from the local name to my destination remote registry. I have it, and as a developer the next thing would be: OK, cool, I now have a container image locally
on the machine, and I want to test it locally: is it working? I'll run this command, podman run; like I said, you can put in an alias from docker to podman, so docker run would use Podman underneath. The Podman CLI is super compatible with the Docker CLI, so all the options and tools you work with just work. OK, my port 8080 is already in use; let's debug that, kill the old process, and run it again. OK, now my container is running on my local machine. Fantastic. What can I do with this? I can go back to my app, which is this one, and refresh it. You're still seeing the UI because my browser has cached all the JavaScript assets; if I do a hard refresh now, it will work, because my container is up and running. So at the moment it's using port 8080, up in the address bar, which means it's being served from my container; my container is listening and serving my app. Cool. We started this app without containers, then we containerized it, built an image, and ran it using Podman. So good so far.

Yes, and one of the things Karan has done: earlier, when he was running the application just with Python, it had access to all the system's memory and CPU, everything; it was not isolated, and if there was any malicious code in it, it could take over your memory. Here, just by running it under Podman in the back end, he's running it with rootless privileges and in a daemonless state, so whatever happens within that application stays within its own namespace, its own boundaries.

Yeah, that's a very important point. Before, my code could literally go and traffic around all the resources on the machine, but now, since I've
confined it, it cannot. And just to validate: look at the ID. I'm not running as UID 0, which means I'm not root on this machine; I'm just a regular user on the remote RHEL server I'm using. OK, good. Next: we have an app that works, but let's do some inspection of it. We're going to use the skopeo inspect command to inspect the container image and validate how things look. It's a single command, and it gives you output with details about my image, with descriptions coming from UBI, by the way, the Universal Base Image, which you can use as a base for building your container images. OK, so inspection is done; it has given me some more information, including the system architecture. That's fine. Next, let's push this image out, and quick, because the image is still only on my local machine: it's tagged with the new tag, but it still exists only locally, so I need to push it. Let's push it using Skopeo. I'm going to log in; I guess my credentials are not here, so bear with me. One of the advantages of Skopeo is that if you want to inspect an image, you don't have to pull it onto your system; you just use the skopeo command to inspect an image hosted in a registry. All right, so I'm authenticated against Quay, and next I need to push my image from my local machine onto the remote registry, which means I'm going to do a copy. This command is very interesting: I'm copying from my local machine's storage, containers-storage, which accesses my local filesystem, and pushing to the remote registry. Double-checking: containers-storage is my local side, quay.io is my remote side, and using the docker
transport semantics, I'm pushing it to quay.io as my registry. Now, don't get confused here, because in the next command I'm going to do something else: right now I'm moving the image from local storage onto the remote Quay registry. It might take a few seconds... done, because some layers were already there; now it's moving the remaining layers that are part of this image to the destination. So my image is now indeed in a remote registry. Nice. All right, one interesting thing about Skopeo, which is how I like to use it: Skopeo can copy images from one remote registry to another remote registry, like the scp of the cloud, scp for container images, without involving other dependencies. So this time I'm going to log in to a different registry, Docker Hub in this case, with my account. Login succeeded, because it's cached. OK, so the next command: I'm going to copy the image from Quay, one remote registry, to another remote registry, which is Docker Hub. This command could be useful when you want to move images from, say, your in-house private repository to a different private repository, or from private to public, or local to public; you can do a lot of great things with this. OK, now it's saying... it previously told me it succeeded, but let me authenticate once again. OK, login succeeded; let's try the copy one more time. Yeah, now this works. So what we're doing right now is showing you the copy from one remote registry to another. While this is going on, let's talk about Podman Desktop and how it looks. Suppose one of your colleagues wants to try out your app on their local machine using Podman. We have Podman Desktop running at the moment
here. It detects whether Podman is up or not, and from the UI you can do all the basic things you'd want to do. It also detects Docker, so if Docker containers or images are there, it will detect them too. So it's a nice UI. We have no containers running on this machine at the moment; we have two images, including a CentOS image I've already pulled, just to save some bandwidth and time here. As I told you, you can run multiple containers and join them as a pod, and you can do other interesting things using Podman. So what I'm going to do is take one of my images and launch it. I'll just click on this play button, give it a name, DevNation, and start the container; you don't even need to run those docker run or podman run commands, because with Podman Desktop you can manage your containers very intuitively. Now, if I hover on this row, which is my running container, I can open up my app, I can go to a CLI inside it, I can check the logs, I can generate the YAML file for it, I can deploy it to, say, Kubernetes, I can stop it, delete it, restart it, all those things. So I clicked on that button and it has opened up my app on localhost port 8080 on this machine. OK, this is good, but I want to deploy it to, let's say, OpenShift. So I'm going to click on Deploy to Kubernetes here, and it will pick up your logged-in Kubernetes context. Let me change it real quick: this is the OpenShift Sandbox, it's free of charge for all the developers here, and for the rest of the world, go to the Red Hat Developers site and grab your account there. I'm going to show you how to log in: once you're logged in to the system, you have to get the token. I'll
authenticate using Sandbox. I'll get the token here, and I will use this token to authenticate on my local machine. Not this window... it should be this window, my local machine. Come on, Sandbox. Okay, you're logged in. You can see with `oc whoami` who I am; yeah, I'm into my OpenShift cluster at the moment. What we're going to do is quickly restart so that it can get the context back, and I'll double-click on my app. One of the things is, if you're a power command-line user, there are quite a few tools, quite a few effective tools, in the Red Hat developer ecosystem. `oc` is one of those tools, very popular with the ops folks. Another tool is called OpenShift Do, `odo`; we have a talk on it later today. So if you're a developer who is used to doing everything on the command line, pushing stuff to repositories and so on, and you want the same kind of workflow, you can use both of these tools, `oc` and `odo`, in tandem. And most of the commands you're seeing here, like `podman` and so on: if you use Docker, they are very much the same commands, because there's compatibility with the Docker CLI in most of these demos we're showing. All right, so we are in OpenShift at the moment and we have no pods. I have a container running locally on the machine. Now, with a single click, I'm going to push this container image to the remote OpenShift. So I clicked on this button called 'Deploy to Kubernetes'. It has auto-generated the YAML file, so you don't need to write a single line; it has detected the Kubernetes context through which I've logged into the machine; and if I hit 'Deploy', it should go and deploy my app onto my Kubernetes cluster. Waiting for the container... container creating... and, what a big difference, my pod has now magically appeared on a remote OpenShift cluster without my writing a single line. And if I click 'Open in OpenShift console', I should be able to see my app running by going to
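What the 'Deploy to Kubernetes' button automates can be approximated by hand like this; the server URL and token are placeholders for the values you would copy from the Sandbox web console:

```shell
# Log in with the token copied from the OpenShift web console.
oc login --token=sha256~YOUR_TOKEN \
         --server=https://api.sandbox.example.com:6443

oc whoami      # confirm the login
oc get pods    # empty before the deploy

# Generate a pod manifest from the running local container and
# apply it to the currently logged-in Kubernetes context.
podman generate kube devnation > devnation-pod.yaml
oc apply -f devnation-pod.yaml
oc get pods    # the pod now appears on the remote cluster
```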
networking and routes, and I have this app in here. And yeah, now my app, which I've been playing around with, is available to the public; it has been released. Right, so this was Podman Desktop, and we'll quickly touch upon the extension for Docker Desktop, the OpenShift Docker extension. So if you go to Docker Desktop and add the extension, it provides you a facility to select the right Kubernetes cluster, the right OpenShift cluster that you want to deploy on, select the target cluster, and select which image you want to deploy. Again, it's as simple as that: you don't even need to run a container, you just need to have an image locally in your Docker; select the image from here. And then I'll also show you how to change it. So this is still using my previous OpenShift Sandbox; I'm going to edit this and log into a different OpenShift cluster here, and I'll pick up my second OpenShift cluster, here from Azure. There are no pods at this moment. I'll go and copy my command. Okay, so I think our time is up, and the demo is here. What you just need to do is put in the credentials for this OpenShift cluster here, and as soon as you launch it, it will go and deploy your app onto an OpenShift cluster. So with that, we want to finish off and let the other speakers take the stage. Thank you so much. And you still have this slide; you can pick it up, my app is still running, and you can play a cool game. All right guys, thank you so much. Thank you for having me. Thanks. So we have been using Podman, and most of the steps, like what you have seen, you can do yourself; the building as well, we can do in two steps. Another thing is the container image registry: one positive thing is that it provides a wonderful check when we are going to push the images and
pull the images. All right guys, these are the steps, and these are the advantages. Using YAML files and configuration, we can deploy any of the containers to different products. So we are here to point you to the right tools; there are a lot of tools here, and each of these has a different purpose. I think we are about out of time for this, but let's catch up outside.