So today we have Kather talking about Buildah, Podman, Skopeo, Docker, and more. So I am going to share the recording. This is a prerecorded session for the conference. Today I'm here to talk to you about container technologies. I suspect a lot of you in my audience are trying to learn these technologies. Everybody has heard of containers. Everybody has heard of Docker containers. But which container technologies do you want to learn, and where do you start? Those are the questions you may have. You don't know where to start. You don't know what technologies are out there. Maybe you have never heard of some of the other technologies here on the slide. You have heard of Docker, but have you heard of Podman, Buildah, Skopeo? What do they do? Why do I need them? Those are the kinds of questions you might have, and I'm going to help you understand what these technologies actually do. As I said, I'm Kather Kulkarni. I have been at Red Hat for more than four years. I work for the management business unit, where a lot of automation offerings are developed and offered to customers, such as Ansible, CloudForms, Satellite, Insights, and so on. A lot of my personal interest lies in infrastructure and automation, and I'm also very interested in containers as a technology, as you can tell from these slides. So what are containers? Basically, we are going to compare containers with virtualization. Virtualization is what you may have used on your laptop, or in your data center where you deploy a hypervisor. If your laptop's host operating system is Windows, you might install virtualization software such as VMware or Oracle VirtualBox. This software allows you to create a virtual machine inside your laptop, inside your Windows operating system, where you create a Linux-based virtual machine.
So basically, you install Linux in a virtual machine on your laptop, which is running Windows as the host operating system. A similar concept applies to data center virtualization as well, just not on your laptop. But in virtualization, each virtual machine has its own guest operating system layer installed, which is highlighted here on the slides. And this is a huge, heavy layer that has to be installed and carried with the virtual machine. On top of that, you run your application, or do whatever you really want to use that virtual machine for. But as you can guess, this virtual machine is really a complete machine, because it has a complete guest operating system installed. And that makes it really heavy. It is often slow to start, slow to respond overall, can cause a lot of resource utilization on the underlying host, on your laptop, and it is not always as fast as you would want it to be. Now enter containers. Containers became really popular in recent years when they were first introduced by Docker, but containers as a technology have been around a lot longer. What containers do for you is get rid of that guest operating system layer. That layer is replaced by supporting files and the runtime, and these runtimes are really lightweight compared to a virtual machine. You're not installing an entire operating system; you're sharing the underlying operating system between containers. And that gives you something called a container: an execution unit which is very tiny and very quick to respond. Within a matter of seconds, it's up and running. You can basically see them as ephemeral virtual machines that come up really fast and work really nicely for you. I know some of you might already know these things and might cringe if I say they are like virtual machines, because they are not.
But for the sake of argument today, we are going to say they're kind of like virtual machines. Anyway, let's look at the next slide. We are going to look at what Docker is. Docker is container software that was launched publicly in 2013. It was founded in 2010, really, and it was made open source. Underneath, it was using something called LXC as the container runtime. The runtime is basically where container execution is managed; the thing that manages all the containers is, in loose terms, the runtime. Then they moved to their own execution runtime, which was written in Go. Currently, Docker is open source under the project called Moby, as listed here on the slide. Docker has an enterprise edition, a community edition, and the open source Moby project. What Docker did for a lot of us is introduce us to container technologies. A lot of what Docker, or a container really, uses underneath is Linux kernel features. In Linux, we have features such as cgroups, namespaces, and so on. These features have been around forever, and they were used to create these containers. Docker really made it easy for everyone to use them. That's why containers became popular in recent years. Before containers, virtualization was the only way to get more effective utilization of your bare metal resources. Docker made it more efficient by taking it a step further with containers, because if you can fit five virtual machines on a system, maybe you can fit 500 containers on the same system. That could be the scale difference, and that's how efficient it can be. Now that you know what Docker is, let's look a little more into its architecture. There are three main components.
There is a Docker host, there is a Docker client, and there is a Docker registry. The Docker host is the computer; your computer in this case would be acting as the Docker host. The Docker host has a daemon running. It's a service running on your laptop, called the Docker daemon, and this service is constantly up and running and is responsible for managing all your containers, all your images, and so on. Then there are the Docker images. In virtual machine terms, you might get an ISO file, a .iso extension file, from somewhere, download it, and use it to create the operating system in the virtual machine. Similarly, in the registry we have Docker images, where multiple different images are available publicly for you to download. As you can see here on the screen, there is an Ubuntu image in the registry that we have shown in the diagram. There is CentOS, there is Nginx, and so on. These images help you create Docker containers. To create any container, you always need to get the image you need for that container. Then here on the left-hand side, we have the Docker client. When you as a person execute some Docker command, you are acting as a client to the Docker host, or the Docker daemon. If you tell Docker to run a particular container, if you execute a docker run command, it tries to connect to the Docker daemon. If you have enough permissions, the Docker daemon will respond and try to create that container for you. It will first check whether the container image exists locally. So if you ask, hey, I want to run a CentOS container, the Docker daemon might find it doesn't have that container image. So it's going to connect to the Docker registry.
It's going to pull that image down from there. It's going to download all the content of that image from the Docker registry, store it in the local container image storage, and then use that image to create a running container. The next time you execute the same command, it's going to find that container image in the local Docker storage cache and use it to create the container, and it doesn't need to talk to the registry the second time, and so on. That's how Docker works. Docker has a client-server model, where you, or any system executing Docker commands, act as a client to the Docker daemon, which is the server in this case. Every command is routed to the Docker daemon, which takes care of everything you ask it to do and then returns the results. Next we are going to talk about Podman. So what is Podman? Podman is another technology which is very similar to Docker, and yet very different. Simply put, if you look at the Podman website and its documentation, it tells you it is basically an alias for Docker. What that means is that if you know Docker, you already know Podman, and making the switch is really easy. You just go into your bash profile after installing Podman, write the alias docker=podman, save it, and the next time you run Docker commands, they will use Podman internally. That's how easy it is to switch. Podman is also an open source project, and some of the important reasons it was created are listed here. Podman itself is daemonless software. Unlike Docker, you don't need the client-server model. It uses a different model, the fork-exec model.
Now, that is Linux operating system level terminology, an operating-system-level detail that you can read about on the slides when you click on the more info links when I share the slides with you. For now, it is important to understand that Docker and Podman use different mechanisms on the back end to execute containers. Both of them support Docker containers, but Podman also supports the OCI container image format. So what is OCI? OCI is the Open Container Initiative, created by multiple industry-leading companies interested in having a common container image format. Podman supports both formats. If you are familiar with Dockerfiles, you can keep writing Dockerfiles as you always did, and they will still work under Podman. Podman helps you as a user not only run containers on your system as you always did, but also move into Kubernetes. It has a special command that generates YAML that can act as a boilerplate for moving from the Podman or Docker world to the Kubernetes or OpenShift world. We are going to see that a little bit in the demo, but let's learn a little more about Podman. Podman is daemonless, meaning you don't have a single point of failure, and you don't have the security vulnerabilities that come with a daemon. If the Docker daemon stops working, your entire Docker environment fails. Nothing like that happens in Podman. If the Docker daemon is compromised, all of your containers are compromised. You don't have that problem in Podman. And there is another important distinction between Docker and Podman: the Docker daemon runs as a root-level privileged user. In Podman, you don't have that kind of issue. If you are using Docker and you are not root, you have to add your user to the docker group, which has a lot of higher-level privileges in the operating system.
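For reference, adding a user to that group usually looks something like this (a sketch; the exact group management commands can vary by distribution):

```shell
# Add the current user to the docker group (requires root privileges).
sudo usermod -aG docker "$USER"

# Log out and back in (or run `newgrp docker`) for the membership to
# take effect, then verify that "docker" now appears in the group list.
id -nG
```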
And at that point, you are at the mercy of the security mechanisms implemented in the Docker daemon, which is not a very favorable proposition. Podman doesn't have that issue, because it uses the fork-exec model. It doesn't have a server-client model. It doesn't have a daemon that has to be running. It doesn't require you to have special permissions. Everything is different in Podman, and yet it's all the same: it is very similar to Docker for you as a user, but on the back end it is much more secure and very different from Docker. So that's Podman for you. Another important distinction that I'm going to highlight in the slides here, and in the demo later, is that Docker uses a global image cache on your Docker host, on your laptop. In Podman, there is a separate image cache for each user, so each user can have their own set of images, and they don't have to be made globally available to all users. Your images will be stored within your home directory on your system, and no other users will see them. That's also an added feature of Podman. Here in this diagram, you can see that it looks a lot different from the Docker diagram we looked at earlier. What you basically need to understand here is that the Podman CLI is not acting as a client; it is just a command you're executing. Podman itself creates the containers as child processes using the fork-exec model, and when you stop a container, all the Podman processes exit from your process tree. There is nothing running if there is no container running. Whereas in Docker, even if you're not running anything, there is a Docker process, the Docker service.
The daemon is always running, and it has to be running for you to be able to do any Docker operations. That's the important distinction we are highlighting on this particular slide. Now we are going to look at the demo. So let me switch to the demo screen. In this demo, we first check who am I: I'm logged in as my own regular user. Then I run the Podman command podman run --rm hello-world:latest. Basically, we are running a new Docker container using a Podman command, and we are telling it to remove this container as soon as its processes exit. So if you do podman ps, this container will no longer be there once it has exited. You don't have to do that, but you can. It's just for sanity; you don't need to do it all the time. I do it because it's my personal preference. What you are seeing here on the screen is that I'm running Fedora 32 on my host, on my laptop. Then on the next line, I'm running the podman run command for the hello-world:latest image. It's a Docker image, that's why it says hello from Docker, but it's running with Podman. Interesting, right? Next, we try to run the docker command for the same image. What happens is I don't have Docker installed, so I have to install it. It says docker: command not found. So I'm going to install the Moby engine, which is the open source version of Docker, and installing that lets me run the docker command. But there is also another wrinkle on Fedora 32: it uses a different cgroups version than what Docker supported at the time. I have already applied the workaround for that, and the workaround is listed in the slides. I'll share it in the notes with you after the talk.
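For context, the install step and the cgroups workaround described here looked roughly like this on Fedora 32 (a sketch; the kernel argument shown is the commonly documented way to fall back to cgroups v1, and is my assumption about the workaround on the slides):

```shell
# Install the open source Docker engine (Moby) on Fedora.
sudo dnf install -y moby-engine

# Docker at the time required cgroups v1, while Fedora 31+ defaults
# to cgroups v2, so temporarily switch back to v1 and reboot.
sudo grubby --update-kernel=ALL \
    --args="systemd.unified_cgroup_hierarchy=0"
sudo reboot
```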
But using that workaround, I was able to temporarily downgrade the cgroups version for this demo, and I got my Moby engine working. Now that the engine is installed and I have pre-applied that workaround, I try to run the docker command again, but it fails: it says permission denied. This could be because the daemon is not running, or because I don't have enough permissions. So I make sure the daemon is running: I do systemctl start docker, and I need enough privileges to do that. Once the daemon is started, I can try the docker command again. But this may still not work, because even though the daemon is running, Docker needs me to have special permissions: my user needs to be part of the docker group before it allows me to interact with the Docker daemon and do anything with containers. So now I add myself to the docker group and switch my group to docker. As I highlight here, the id command shows that I have the docker group in my additional groups, along with my previous groups. Now I was able to execute the docker command, because I was made part of the group. This is the part where I was talking about the special privileges required to interact with Docker. Now we are going to look at another demo. In this demo, we are going to see how Docker images and Podman images differ. Let me start this video. Here, you are seeing the Docker images first. If I run the docker images command, I see the hello-world image present in my Docker image cache. Next, I run podman images.
Now you see there are many more Podman images, because I always use Podman over Docker nowadays, and I have a lot more images in my Podman image cache. The two are different, and this is an important distinction: they are not using the same image cache. If I go a step further and pull a new image using Podman, I pull the Alpine image, which is a really tiny image you can quickly pull down. If I run docker images again, it's not there. This goes to show that Docker and Podman do in fact have different image caches. And if I run podman images, I can see that Alpine image. So that concludes our second demo. In this demo, we saw how the Docker and Podman caches differ. They each allow you to have your own set of images, and they don't share a common cache. All right. In this third demo, we are going to see an advanced feature of Podman. Let's start the video. As you can see, I have listed the demo3.sh script on the right-hand side of the terminal. There you can see all the commands we are going to execute, but I want to execute them one after the other so I can show you the effect of each command individually. Now, I have aliased docker=podman. So if I do which docker, it tells me docker is aliased to podman. It's running Podman for everything now. Even if I say docker, if I do docker images, I see no images. If I do podman images, I see no images. I had cleared all the images in my Podman image cache, so as we saw in the previous video, none of those images exist anymore; the image cache is empty. There are no images, and we are pulling our first image here. So if I do docker pull, it's really pulling a new image, but using the Podman image cache and Podman commands.
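Set up as in the demo, the alias makes every docker command go through Podman (a sketch of an interactive session; the image tag here is an assumption):

```shell
# Route all docker commands to Podman for this shell session.
alias docker=podman

# This "docker pull" actually runs "podman pull" and stores the
# image in Podman's per-user cache, not Docker's global one.
docker pull fedora:31

# Both commands now list the same per-user image cache.
docker images
podman images
```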
So now if I run docker images and podman images individually, both actually give the same output, because docker is working as an alias. The next thing we do is run a container, using this docker run command, which is internally read as podman run, and it has a few parameters, so let me explain them. We have --rm, which is one of my favorites; it gets rid of the container as soon as the container stops executing. Next we have --name. It is completely optional, but I decided to give the container a name. It's a random name I came up with, and you can name your container however you want; it doesn't have to be anything specific. Hence dreamy_cheetah. Next is -d, which says: run this container detached, in the background; don't run it in the foreground of my terminal. As soon as the command is fired, it creates the container, puts it in the background, and returns control to your terminal. And that's what happens here. I'm telling Podman: I want a Fedora 31 container, name it dreamy_cheetah, and when the container comes up, run the sleep command for 600 seconds, which is 10 minutes. So this container is going to be around for 10 minutes. Next we do docker exec, and we use a bash shell to drop into the running container. Now you can see my prompt has changed, and I'm inside that running container. If I do cat /etc/redhat-release, it tells me I'm on Fedora 31, which is what it should be, right? Because we are running a Fedora 31 container, and I just dropped into it. But if you check my host operating system, I'm actually on Fedora 32, as you see here.
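The demo steps just shown can be sketched as follows (the container name and image tag are taken from the demo; treat this as an illustration, not a literal recording):

```shell
# Start a Fedora 31 container in the background (-d), clean it up on
# exit (--rm), and keep it alive for 10 minutes with sleep.
docker run --rm --name dreamy_cheetah -d fedora:31 sleep 600

# Drop into the running container with a bash shell.
docker exec -it dreamy_cheetah bash

# Inside the container this reports Fedora 31,
# while the host itself is on Fedora 32.
cat /etc/redhat-release
```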
So basically Docker, or containers in general, give us the ability to switch between operating systems and versions as we need, and to have that isolation. Anything running inside the container is running on a Fedora 31-based operating system, but my system itself is on Fedora 32, and you could have Ubuntu, you could have CentOS. That's the beauty of containers. Now the next thing is an added nugget, and it doesn't really have to be part of this talk, but I wanted to show it to you as an advanced feature of Podman. Podman has the ability to take you from a pod to a Kubernetes or OpenShift environment. As I'm highlighting on the right side, I'm going to use the docker generate kube command, which is really the podman generate kube command. This is a Podman-specific command; if you were not aliasing docker=podman, this command would not work. Now, for me to deploy this on OpenShift, I have already installed the origin clients using dnf install -y origin-clients, as I'm highlighting here on the next slide with my mouse pointer. This is already installed for me, so I have the oc client available out of the box. I have already logged in to our OpenShift cluster, which you would need to do if you wanted to deploy a pod on OpenShift. We created this kube-demo.yaml, and we are using oc create -f on it to create that pod in the OpenShift environment; we are creating that Fedora 31 pod in OpenShift instead of on my laptop. But it throws some errors, and that's one of the things you need to remember: when you use the generate kube command, you may not always get a perfectly formed YAML that your OpenShift environment will consume; you might need to add or remove a thing or two, and that's what we do next. I'm going to open this kube-demo.yaml file.
I'm going to remove the securityContext seLinuxOptions altogether, as well as the second thing it is treating as an object, the creation timestamp and so on, because it doesn't need to be there. It is not an object, and hence it doesn't have the object field kind, which is what the oc command is complaining about. Now that we have done that, we can use oc create -f to create the new pod on OpenShift. With oc get pods, I can see that the pod is coming up; it's running, and it's been there for 8 seconds so far. Next we try to get into that running pod on OpenShift. With the oc rsh command, I was able to get into the running pod; I used dreamy_cheetah, the name of the pod, to get inside it. So I'm inside a shell environment within that pod, and I can check whether I am there by running cat /etc/redhat-release, and yes, of course I'm there. So there you see that I was able to go from Podman to OpenShift fairly easily with a little knowledge of OpenShift, and that's the beauty of the podman generate kube command. The last thing I do here is delete that pod and clean up after myself, so my OpenShift environment stays clean, as it always should be. As you can see here, the pod is terminating, and that also concludes our demo. So far, we have seen how Docker and Podman work, and three demos of different commands and functions within Docker and Podman. There are some other technologies as well, Buildah and Skopeo, and that's what we are going to look at now. On this slide, we look at Buildah. The name Buildah basically comes from the Boston accent of its creator; it really means builder. And as a builder, it is able to build Docker container images, or container images in general, for you. So you would ask me, wouldn't Podman be able to build the images?
Yes, Podman can build images for you, but Buildah was created just for building images; Podman does much more than that. Buildah was created before Podman; it existed first. With Podman, you can only build images using a Dockerfile, or an OCI-compliant image file. But there is an added feature in Buildah that lets you build a container image from the command line. You don't have to build images only with the buildah bud command, which is similar to the docker build command; you can use a command-line interface where you execute one command at a time, and each command contributes to your container image. That is easy, and a really fun way of building images if you're new and trying to understand how to write your first container image, or if you're trying to create a complex image. What happens with a Dockerfile is, if you make a mistake, you have to rerun the build after amending your mistakes, and you're stuck in that loop: create the Dockerfile, try to build, fix the failure, try again, it doesn't work, fix the failure, try again, and the loop continues. With Buildah, you can get away from that loop by using the Buildah CLI to build container images from your command line. You can use Dockerfiles as well, I'm not saying you cannot, but the command-line interface is an added feature in Buildah. The next thing we are going to see is why Buildah exists. When the Kubernetes community decided to move to CRI-O and the OCI specifications (CRI-O is a runtime, and OCI is the Open Container Initiative's container image specification), that made Docker less useful to the Kubernetes community.
They decided to move from the Docker daemon to something else. They had already moved from the Docker daemon to runc as the container runtime for running containers, so they did not need the Docker daemon for that. The only thing they still needed the Docker daemon for was building images, and with the introduction of Buildah, they no longer needed it. Buildah was able to do what the Docker daemon does in terms of building images without actually requiring the Docker daemon, and that removed all the Docker daemon-related security issues from Kubernetes. That's why Buildah exists: they wanted to get rid of the Docker daemon and use runc and CRI-O and OCI, everything that was open source, with more features and fewer possible security vulnerabilities. Let's look at another demo. In this demo, we are going to build two different container images with very similar content, one using a Dockerfile and another using the Buildah command line, and both of them will yield images of the exact same size. So let's look at that. In this demo, we see how Buildah works. Here I have a Dockerfile, and in this Dockerfile I'm showing you a really simple Flask application that just runs a hello-world .py app on port 3333. In demo5.sh, I first have the command to install Buildah. Once Buildah is installed, I can start using it to build images. I'm using buildah bud here, which is the build command in Buildah, to build a new container image using the Dockerfile we saw above, and it is really similar in syntax to the podman build or docker build commands. It builds the image the same way, in this same format, and I am tagging this image as the Dockerfile-built one.
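A minimal sketch of this kind of Dockerfile-based build (the file contents and tag name here are illustrative assumptions, not the demo's exact files):

```shell
# Install Buildah on Fedora.
sudo dnf install -y buildah

# A simple Dockerfile for a Flask hello-world app on port 3333.
cat > Dockerfile <<'EOF'
FROM fedora:32
RUN dnf install -y python3-flask
COPY hello-world.py /app/hello-world.py
EXPOSE 3333
CMD ["python3", "/app/hello-world.py"]
EOF

# buildah bud ("build-using-dockerfile") works like docker build.
buildah bud -t hello-world:dockerfile .
```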
So I know this one was built using the Dockerfile, but Buildah also has a different method of building images, and that is using the CLI, as I have been telling you. So we build another image, a hello-world built with the Buildah CLI, and it's going to contain exactly the same things as the first image; only the build process is different. Now, as you can see, we have created both images, and they are stored with different tags and different hashes at the end here. You saw the same thing happen twice on the screen, in two contexts: once using the Dockerfile and once using the CLI. The CLI makes it easier to debug and build container images in a more interactive fashion, and that's an important advantage you get out of Buildah. If you look at the Docker images that are there, really Podman images, because I use docker=podman as an alias, we have two different hello-world images, and both of them are the exact same size, 918 MB. That happens because no matter what the build process is, we are doing exactly the same thing, and we get the exact same image out of two different approaches to building it. Now we try to run those images and see whether a Flask server comes up. There we go. I have the Buildah-CLI-tagged image that I'm about to run, forwarding port 3333 from my local system into the container, and there we have it: the container is running. We have the Flask server running on the port we specified, and if I click that URL, it goes to localhost:3333, and we see the logs here saying that we received a GET request. There it is.
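Running the image with the port forwarded looks roughly like this (a sketch; the tag name is an assumption):

```shell
# Run the CLI-built image in the background, forwarding port 3333
# from the host into the container.
podman run --rm -d -p 3333:3333 hello-world:buildah-cli

# Hitting the server produces a GET request in the container logs.
curl http://localhost:3333/
```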
We see there are two different get requests that were arrived one for the forward slash or basically the homepage itself which was responded with 200 and the other one was for icon which we did not have so it was with 404 but we can see that it was running using the container that was using the Builda CLI based Docker container that we built now I am going to run the other container which was built using the Docker file itself in the standard approach and we have the same results again so this goes on to show you that no matter how you are building the images you can get the exact same results out of Builda at the end it's just about your preference on how you want to use this so you can use Docker file or you can use the CLI that concludes our demo about Builda okay so the last item that we have here in the presentation is Copio Copio is basically a tool that allows you to copy the images from one storage mechanism to other storage mechanism basically it allows you to copy a docker image from maybe a docker hub to a private registry to quay.io to your local system between registries and so on and it also allows you to inspect the container images so those are kind of the main features of Copio and that's why it was created Podman or Docker cannot do that by itself it also allows you to delete images from a container registry and so on so we are going to quickly go into a demo for Copio now in this demo what we have is we are running another demo script that I had written and here we are first installing Copio by installing Copio I am able to do what we saw earlier as features of Copio first I am able to inspect a container image which is Fedora Latest and if I do that I get all the metadata as you can see here in this you are seeing that what all tags are available for you in this image so we have 30 31 32 Latest 33 etc we have the container image name it's digest which is the Shasam and then we have some other information like the container image format it has 
the OCI format here, so that's one of the things you can do with Skopeo: you can inspect an image without actually having to pull it to your system locally. You are not downloading the image; you are just downloading the metadata, reading it, and showing it here on the screen. The other thing you can do is use Skopeo to log in to your Docker and Quay registries, and once you log in, you are able to use the skopeo copy command to copy a Docker image from Docker Hub to Quay, or Quay to Docker Hub, or any of the different permutations and combinations of container registries or storage backends. That's what we are doing here in this one: I am basically copying an image that I had in Docker Hub, as I will show you in a bit in the web browser, and taking it from there to the Quay registry, or Quay repositories, that I have. So here you are seeing that in Quay I don't have the Hadoop image which I had in Docker. With the skopeo copy command, if I want to copy the image from one place to the other, I can do that. As soon as I hit refresh here, you can see now that the Hadoop image is present; it is being copied. It is not completely done yet, but it is copying that image from Docker to Quay. It's that easy. That's one of the important features, and why you would use Skopeo. As you can see, the image copy is still happening, so I am going to fast forward; it's going to finish the copying, and the image is going to become available, fully usable from Quay as well as Docker. So that concludes our last demo. Now that we have seen all the container related technologies and tools, the next question you might ask is what to do next, how do I start. I would suggest that you pick a technology of your choice, and if you don't have a preference, pick Podman and learn it. It's really for everyone; you don't have to be a particular kind of person, such as a sysadmin or a DevOps engineer, to learn these. If you are a developer, if you are a quality engineer, if you are just
a new student, this is really for everyone. These are really beneficial for you, to make your life easier in terms of development, doing your day to day job, exploring new technologies, sharing software, and so on. So learn Podman, and understand that containers is not equal to Docker; Docker was just one of the first companies, or one of the first pieces of software, that made containers famous. So when you are talking about containers, talk about containers, not about Docker, not about Podman; you should be talking about containers as the technology itself. Then you can probably learn how to build your own container images. Once you have already learned how to copy or use container images from the Docker registries, you can push your own new images to the registries, make them publicly available, and make them usable for other people. There are tons of images out there that you can just pick up and start using, or build on top of. You can also use something called Source-to-Image, which is an OpenShift related tool that basically lets you go from your source code to a container image; that's another existing tool for you to explore. The last thing that you might want to learn is Kubernetes or OpenShift. Those are the container orchestration technologies. They are slightly different from what you do on your laptop when you are running a single container: to run containers in production, as production software, you need multiple containers working together, and to have those deployed and working together you need container orchestration. That's the next level in learning the container technologies. So I hope that this entire presentation was very helpful, and I hope you learned something new today. Thank you so much for attending. Here's my contact information, and I look forward to answering any questions that you have, either here after this talk or offline, and I hope that you have a great conference experience. Good day. Kiran, want to take it over, if you're here? I am not seeing Kiran here; he's the
moderator for the session, but I guess I'll take over. Thank you so much for the talk, it was amazing. Unfortunately we don't have time for a live Q&A right now, so if you have any questions, please head to the breakout room under the expo, and you can continue your discussions over there. Thank you all so much for attending.
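For readers following along at home, the Skopeo steps demonstrated in the talk can be sketched roughly as follows. The registry paths, the username "myuser", and the image name "hadoop" are placeholders for illustration, not the exact names used in the demo:

```shell
# Inspect remote image metadata (tags, digest, format) without
# pulling the image to the local system
skopeo inspect docker://docker.io/library/fedora:latest

# Log in to both registries (credentials are prompted interactively)
skopeo login docker.io
skopeo login quay.io

# Copy an image from Docker Hub to Quay; no local pull or
# container runtime is needed for the transfer
skopeo copy docker://docker.io/myuser/hadoop:latest \
            docker://quay.io/myuser/hadoop:latest
```

The `docker://` prefix is Skopeo's transport for registry-hosted images; other transports exist for local storage, so the same command shape covers the registry-to-registry and registry-to-local cases mentioned in the talk.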