It's five to ten, so I guess let's just get going. So this is an Ask the Experts session, so hopefully you're bringing in your questions, because this is your time to ask them. To introduce myself first, I'm your moderator, Kurt Stam. I have been with Red Hat since 2006. I mostly work on integration projects. I started with JBoss ESB, and currently I work on Syndesis, which is a low-code platform for building integrations based on Camel. So the panel is the two Daniels, to make it easy. Daniel Oh and Daniel Walsh, do you want to introduce yourselves and say a little bit about what you're doing? Yeah, go ahead first. Okay, yeah. So I'm Dan Walsh. My title at Red Hat is Senior Distinguished Engineer, which I always tell people means I'm an old engineer. Basically I run the container engine team, chief architect of all things containers at the operating system level, so everything that's underneath OpenShift and Kubernetes. And my main task is to make Diane Mueller happy, so when she yells at me, I have to do stuff. Cool. All right. Thank you. So my name is Daniel Oh, the other Daniel. I'm also working for Red Hat, as a technical marketing manager, specifically for application runtimes and cloud-native runtimes such as Spring Boot, Quarkus, Node.js, Vert.x, and some Data Grid, and of course EAP as well. I try to give some inspiration on how to build cloud-native applications and how to think about new cloud-native architectures and reference architectures, based on Red Hat technology of course, but also on open source technology such as the CNCF projects. I'm also a CNCF ambassador. So if people ask anything around those topics, we are ready to answer. So I have a question, because Daniel, you just said a whole bunch of terms that I know very little about.
I used to tell people that I've done Java for 20 years, and it's been one day per year for 20 years. But I've heard about this new Java engine for containers, and I guess a lot of the development happened at Red Hat, and it's supposed to be really, really cool. Why don't you tell us about that? Yeah, absolutely, that is a really good question. So first of all, most Java stacks were created many, many years ago. Java itself was born 25 years ago; this year is actually the 25th anniversary of the Java language. So most Java stacks were born before container technology like OCI, and I'm not just talking about Docker, and before Kubernetes. Linux container technology and Kubernetes only appeared maybe three to five years ago. And after that, there was a big challenge for Java stacks: how to optimize Java applications running on top of that, rather than on a single VM or on bare metal. So in order to catch up with that kind of technology, a new Java stack has to focus on how to increase developer productivity, and how to enable Java developers to develop reactive applications as well as traditional imperative applications. Also, a developer should be able to evolve a microservices application into a serverless or event-driven application running on top of Kubernetes with containerization. So these are a bunch of the new characteristics, features, and essentially mandatory requirements for a new cloud-native Java stack. Quarkus was invented at Red Hat, and a lot of contributors and committers already participate in the Quarkus project. It's a new open source project really focused on being Kubernetes native, which means it provides new Java features to develop more optimized Java applications running on top of Kubernetes.
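As a concrete sketch of the workflow Daniel describes, here is what scaffolding and native-compiling a Quarkus app might look like. The project coordinates and artifact names are purely illustrative, and the exact plugin invocation and flags vary by Quarkus version:

```shell
# Hypothetical sketch: scaffold a Quarkus app and compile it to a native binary.
# Assumes Maven plus a GraalVM/Mandrel toolchain; names and versions are illustrative.
mvn io.quarkus:quarkus-maven-plugin:create \
    -DprojectGroupId=org.acme -DprojectArtifactId=getting-started
cd getting-started
./mvnw package -Pnative            # ahead-of-time compile to a native executable
./target/getting-started-1.0.0-SNAPSHOT-runner   # native binary, fast startup
```

The native binary trades longer build times for the millisecond startup and small memory footprint discussed below.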
For example, you can run a Java application with the same or even faster startup than any other Java stack — millisecond startup times and a small memory footprint, maybe just five megabytes of memory — running on top of Kubernetes with containerization. So this is all new stuff based on Kubernetes and Linux container technology. Yeah, very cool. So Dan, you're jumping the gun here, because I think you're avoiding me asking the question of how to manage security, right? Among multiple external parties. So let's say I have three or four different external parties that I want to interface with my containers. How would I go about that? Do you have patterns that you usually use, or products that you usually use? So you want three different entities to all be able to communicate? Yeah. Actually, that's somewhat above my level, so I might have to throw that back to Daniel. But my usual answer to a question like that would be that that's something to be done at the Kubernetes level. You're basically wiring together different workloads and different container entities, and so you really want something like Kubernetes, where you can basically put a statement forward that these three entities can communicate, and then allow the Kubernetes engine and the whole software suite, or the whole OpenShift software suite, to actually do all the wiring and hook those up together. Because doing that by hand, humans getting involved in that — humans make lots and lots of mistakes. So using something like Kubernetes to basically orchestrate and wire together your communication paths is what I would recommend. But would you like to use some kind of authentication framework? Well, that's a lot to take on. Basically, again, I do all the low-level stuff, so Daniel, bail me out here.
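Dan's suggestion — declaring which entities may talk and letting Kubernetes do the wiring — might look something like the following NetworkPolicy sketch. The labels, names, and port are hypothetical:

```yaml
# Hypothetical policy: only pods labeled app=frontend may reach app=backend on TCP 8080.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
```

A policy like this is declarative: the cluster's network plugin enforces it, rather than a human wiring paths together by hand.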
Do you have one of the authentication stacks that OpenShift currently supports? Yeah, actually, I'm not the OpenShift expert, but there is OpenShift authentication based on Keycloak, I mean single sign-on, and you can also integrate external authentication like GitHub or any LDAP server or something like that, to authenticate your groups and users with permissions and some kind of authority. There are a lot of ways to integrate security capabilities with the OpenShift Container Platform. And speaking of security stuff, I actually get a bunch of questions from developers, because Kubernetes and OpenShift are like a big giant toy for developers, which means another big challenge: learning a lot of stuff. Okay, so I just need to deploy my application into Kubernetes. But for that, I need to containerize the application, so there is some base image or container image with a lot of layers. And one of the most interesting parts for a developer is: what kind of tool do I need to use on my local machine? Sometimes I need to install some container engine or container CLI, and sometimes I need root privileges to run it on my local machine, maybe as part of a CI/CD pipeline. But you know what? A lot of enterprise developers just use the company laptop, which means there are many constraints on using root privileges. Previously a developer didn't need container technology or Kubernetes to develop their application, but now things have changed. So how do we manage that — avoiding root privileges or admin permissions when containerizing or building the application, maybe as part of a CI/CD pipeline? Dan, maybe you can give some inspiration on that.
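The rootless workflow Daniel is asking about can be sketched with Podman, which builds and runs containers without root. The image name and port below are illustrative, and the commands assume Podman is installed on the developer's machine:

```shell
# Illustrative rootless workflow: no sudo or root privileges required.
podman build -t myapp:dev .             # build from a Containerfile/Dockerfile
podman run --rm -p 8080:8080 myapp:dev  # run it; ports above 1024 need no privileges
podman unshare cat /proc/self/uid_map   # inspect the rootless user-namespace UID mapping
```

Because the container runs inside the developer's own user namespace, an escape only yields the developer's unprivileged UID on the host.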
Yeah, I mean, it's a difficult problem, because there are such varied environments. In the world I live in, where I'm sitting on a Linux desktop, it's pretty easy for me. But when you're sitting on a Mac box or a Windows box, and now you're telling this person they have to develop containerized technology... In my world, I would tell them to get something like Podman running on top of the Mac or the Windows box, have an SSH connection to a VM or a cloud instance basically running Podman in the cloud, hook those up, and then you could start out by developing simple containers. So I think taking, say, an application that's running on bare metal or as a regular VM and moving it to a container is one step, and that step doesn't necessarily have to be right into Kubernetes. Sometimes we try to take this big leap: okay, I have this simple application that runs on top of a web server, and now I have to get it into Kubernetes. The first step is to get it containerized, and then the next step is to get it into Kubernetes.
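That first containerization step, before any Kubernetes is involved, might look like this minimal Containerfile. The base image is a real UBI image, but the application and its details are hypothetical:

```dockerfile
# Hypothetical Containerfile: containerize an existing app before touching Kubernetes.
FROM registry.access.redhat.com/ubi8/ubi-minimal
RUN microdnf install -y python3 && microdnf clean all
COPY app.py /opt/app/app.py
EXPOSE 8080
# Run as a non-root user inside the container
USER 1001
CMD ["python3", "/opt/app/app.py"]
```

Something like this can be built and run locally with `podman build` and `podman run`, and only later turned into Kubernetes YAML.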
So what we're trying to do with our container engines is to allow developers to sort of play with the container using the Podman command line, which is similar to the Docker command line. And then we have podman generate kube, which will take your environment once you get it up and running and actually generate the Kubernetes YAML — the fairly complex YAML definitions for running a container inside of Kubernetes. For me, all the programming I do is cutting and pasting: I need to have something three-quarters filled out so I can go in and muck around with it, as opposed to writing the entire thing from scratch, especially something as complicated as a YAML file. So that's sort of the way we go. Now, OpenShift itself has all these really cool features that allow developers to build images and to build pipelines, where they can just do a git check-in to a service, have actions fire off of the git check-in, have it automatically fan out into a builder, and have that whole process going. And OpenShift has capabilities to do development basically in your web browser, so you can actually go into a web browser and do it, and that's really critical for certain people I talk to in the government who basically never want a piece of software to ever exist on your laptop — the mere fact that the software would be on your laptop is a national security risk — so what they want people to do is basically use remote access to where the software exists and do their development there. So there are lots and lots of different models, depending on your level. If you're like me and you've been working in Linux for the last 40 years, I have to have Emacs and a little box; somebody on a Mac might have fancy IDEs, and how do we plug those into the entire suite? And then there are other people who have to work in web browsers and stuff. So yeah. So Daniel, related to that, how do you develop? Because I like having all the code on my machine, right? I like having the
whole cloud stack on my machine, so I know nobody else can mess with me, right? Nobody else can change anything, and therefore I can debug the whole stack; I can have four or five containers running and debug it that way. But it seems harder and harder to maintain that. I mean, it seems like my machine is getting hotter and hotter by the day, and you need more memory — 32 gigs of memory — you'd better grab some of the good laptops. Okay, so yeah, actually that happens to everybody. My case is really something different, because I have multiple environments: my local machine, public cloud, even private cloud, and even multi-cloud, depending on what kind of demo I want to showcase for a conference or a customer, et cetera. But to go back to your question, for a normal developer who just wants to develop everything on their local machine, sometimes linking to the git repository, just pushing the application source code — in order to test your application on a Kubernetes cluster without using a remote cluster like Kubernetes or OpenShift Container Platform, I think there are two pretty good options. First, Minikube is a well-known packaged Kubernetes cluster, a small one, so you can run it with maybe 6 GB or even 8 GB of memory, deploy your application, and you also have cluster-admin permission to manage your own cluster. And OpenShift also provides CRC, CodeReady Containers. That is a small version, like Minikube, but it also allows developers to have some nice OpenShift features like deployment configs, the developer console, and operators. So it's really easy. For example, I want to make sure my application is connected to a database, not an in-memory one like H2 — maybe I need to verify communication with PostgreSQL or MongoDB or something else. In order to test that, you don't need to install every
single piece of software on your local machine; just access your small OpenShift Container Platform, like CRC, and deploy that software. It will be deployed in just a couple of seconds, and the database part, like MongoDB or PostgreSQL, will be spun up in just 5 or 10 seconds — and it's still all on your local machine. Still, maybe you need a 12 GB or 16 GB laptop. I'm actually using a MacBook with 12 GB, so sometimes when I run CRC or Minikube and run VS Code at the same time, my computer just stops. So just be aware of how much memory you are consuming during development — but it's still a good option for developers, and 100% free. Also, maybe with a higher-spec laptop running a Linux operating system, you can install OKD, the community version of OpenShift Container Platform. You can install that on your local machine; it's pretty easy to finish the installation. You can also use Buildah or Podman to containerize your application builds. Speaking of Buildah and Podman, some people are still confused about the difference between Podman and Buildah — I mean, the use cases, or who needs to use Buildah rather than Podman. Dan, maybe give some inspiration on that. Basically, you know, podman build uses Buildah. So if you're building a container image with the container tools, or if you're using the OpenShift builder, that's also Buildah. Buildah is sort of a fairly low-level tool that fully supports Dockerfile, and it supports this thing called Containerfile, which looks very much like a Dockerfile but doesn't have to always carry a company name in the context. So Buildah is, again, a low-level tool, but in some ways it's sort of like comparing Emacs to sed or ed, you know, or Visual Studio to vi, right? It's just two different levels of the stack. Most people that are using Buildah right now are really using it for tightly controlled builds inside of containers. So we're spending a lot of time working with people, helping
them — they want to take their builds and put them into a CI/CD system, put them into Kubernetes, and just have thousands of builds going on simultaneously, sort of like OpenShift. And I believe that's the most secure way to build your containers: inside of a container, locked down. Because inside of that container you need to have, let's just call it, multiple UIDs. A lot of times we say you need to have root, but what you really need when you're building a container image is more than one UID — very few people build an image that only supports one UID. So in order to build the image you need multiple UIDs, and usually you need one of them that identifies as root to be able to do this. So when we look at building inside of something like OpenShift, you end up requiring that users basically be able to launch something that has, say, 65,000 UIDs while it's doing the build, because it's going to root-own files, say Apache-own files, and user-own files on the operating system. So what we've been working on a lot is getting user namespaces plugged into OpenShift, and this has been a long slog — I've been talking about user namespaces all the way back to 2013 — and I guess Podman is probably the main user of user namespaces at this point. But we're really just about to get functionality into CRI-O, which is the container engine that OpenShift and Kubernetes use, so that you can go to CRI-O and say, okay, I need 10,000 UIDs to build a container, and it will hand you a user namespace with a random 10,000 UIDs, and it will control those 10,000 UIDs. So if it hands out ten of those, they'll all be unique groups of 10,000 UIDs, and then you can do your builds inside of these environments. And as far as security, now you're really good, because you're running UID 100,000 through 110,000, the next guy's running 200,000 to 210,000. If you escape from the container onto the host, you would be treated just like UID 100,000 or UID 200,000, but inside of
your container you have root — you basically are root — so you can do things like change UIDs. And the funny thing is, OpenShift is really great about forcing people, by default, to run their containers as non-root, which covers, I think, almost every container in the world: unless you're managing the operating system, modifying the kernel, or modifying the operating system, you don't need to run as root in these environments. But there are certain use cases. We have a large customer who has a database product — that probably narrows it down to about three companies in the world that do databases — and they're trying to run it inside of OpenShift. When they run the database, they get their SQL queries coming into the database, and they have the database running as a single UID, so the database is owned by that UID. But when a SQL query comes in, they want to run that SQL query under a different UID than the one the database is running as. So when a connection comes in, what they want to do is fork off a process as a different UID and then allow that SQL query to be executed under that UID, so that if there's some SQL attack, it'll be running as a different UID and not able to directly access the database. Getting things like that to work in Kubernetes without turning off all the security is like jumping through hoops, and so that's sort of a use case where you don't need a huge number of UIDs, but you need more than one. That's the type of stuff we're actually working on at the higher levels of OpenShift. Cool. Okay, there's still nobody with a question in the chat, so I'm a little disappointed. Yeah, we're very, very intimidating. Maybe I can bring up a topic, then, if nobody else will. So a hot topic is serverless and containerization. You know, a lot of Java developers are really looking forward to a new Java
stack: how to optimize their existing microservices applications and turn them into serverless applications. One of the big benefits of Quarkus, as I already mentioned, is that Quarkus allows Java developers to build a native executable file, like Go, so you don't need a JVM any longer to run the Java application, because you can package your Java application as an executable file and just run that file. Behind the scenes, that executable is built on GraalVM. But many people ask me: okay, GraalVM is developed by Oracle, so how do we manage the licensing and the features, et cetera? There are two versions: community GraalVM and an enterprise version. That's why Red Hat will release, maybe around next month, Mandrel. It's a new project that is a downstream of the GraalVM project, but it brings GraalVM features together with OpenJDK and some debugging features, and it's a 100% open source project, just like any other Red Hat project. And one of the good things is that we are building the base images for the native executable on the Universal Base Image: UBI and UBI Minimal. Maybe Dan can talk about that in a little more detail, because some people say, okay, the Mandrel-based Quarkus image is pretty small and super fast to run — like 25 milliseconds and just 5 or 10 megabytes to run that application, whereas previously we needed 1 or 2 seconds to start up and 100 megabytes of memory footprint; now it's maybe 30 times less memory, or 55 times faster startup. How does that happen? Well, first of all, we are using the UBI image. What the heck is a UBI image? I can't explain that; I'm not an expert in that field. Luckily we have Dan, so why don't you explain that? And I'm going to ask one more question on top of that: can we have UBI images for ARM as well? Oh, very good question. You just need different people than us to ask that question — I think that's covered by product management and stuff like that
inside of Red Hat. So, UBI. Quickly, I like to step back in history. When the whole revolution of containers started back in 2013, the sort of promise of containers was that third parties could basically develop an application, put it into a container image, and then run it everywhere. It's kind of funny, because we've been talking about Java — that was the whole concept: if you write it in Java, write once, run anywhere. In the container world, the talk was that you could write the software and run it anywhere, and that's somewhat true. The problem is when you need to get support. Traditionally, you basically have to marry a container image to the Linux kernel, and it's the interaction between the two that you need to get support for, right? It's not just the application in the image; it's really how it interacts with the Linux kernel and how they work together. And traditionally Red Hat has said that we will only support code that we wrote, that came from Red Hat, against the Linux kernel. So even though you can run, say, Alpine images and Ubuntu and Debian and all this other stuff — that all works, and it works well, and Red Hat will support it to the extent that if it's obviously a problem with the kernel, we'll fix it — we're not going to dig deep into why your glibc library is not working correctly, because it's not code that we actually ship, and it's not source code that we can even easily look at. When you have a problem in an environment where you're using a mixed kernel, say from Red Hat, and software from Ubuntu, and something goes wrong, who are you going to call, and what's going to happen? You're going to have Canonical saying that's a Red Hat problem, and Red Hat saying that's a Canonical problem, and you have no solution for that. So when the container stuff started happening many years ago, Red Hat said the only thing we'll support is images on top of RHEL containers, and so this
led to that policy. But we simultaneously said at that time that you can't take RHEL content — our licensing for RHEL content was that you can't take RHEL software and put it out on any container registry; you would be breaking your agreement with Red Hat. You can't take it and run it on other people's container registries; you can't run it in Docker on top of Ubuntu. It'll work, but it's not something we support, and it's something we actively discouraged. So that went on for many years during the RHEL 7 time frame, and by RHEL 8 a lot of our third parties were coming to us and saying: we end up with a RHEL container, and then we end up with a non-RHEL container, and the non-RHEL container runs on the non-RHEL platforms, so we end up having twice as much stuff — and that's not what containers promised. The container promise was that we'd have one platform. So what we wanted to do with UBI, the Universal Base Image, was to take RHEL content and basically change the licensing around it, so that for these select images and these select RPMs, you can actually take them, store them on a container registry, and run them on any other platform. The real goal here is to allow third parties to build their software on top of UBI, and if they run it on top of Ubuntu or Debian or Amazon Linux or CentOS, it runs fine, and you're not breaking any kind of license agreement with Red Hat when you do it. But when you run it on RHEL, when you link it up with a RHEL kernel, say in RHEL CoreOS or something like that, then it's fully supported by Red Hat. So we're basically allowing our third parties to only have to ship one image, and that image is built on top of UBI. Now, the problem with UBI at this point is that not all of the software shipped inside of RHEL is available as UBI, and that's — don't tell anybody — a constant battle inside of Red Hat: if we go too far, we're giving away too much of our software. But that's an ongoing battle,
but our goal is that all the tools you would want to use in a container environment should be available as UBI, and you can just go grab a UBI image, do a DNF install, and it will go out to the registries and pull down all of the available UBI content — there are thousands of RPM packages. You don't get the Linux kernel, you don't get the RHEL kernel as UBI, but that's really what it is: instead of a RHEL 8 container image out there, it's UBI 8. Yeah, that's really awesome. I'm going to add one more comment from the Quarkus and application perspective. A developer can specify the base image — UBI or Mandrel — in their Quarkus application properties file, and then just kick things off using the Maven command line, the normal stuff for a developer like Maven build and Maven package, and it automatically creates the container image and pushes it to the OpenShift Container Platform, all with that single command line. Behind the scenes, you can also define or use multiple container build strategies, like the Source-to-Image (S2I) strategy, a Docker build, or Jib, the Google container build tool. So there are multiple ways, multiple tools, for developers to build a container image based on UBI. It also generates the Kubernetes manifests — resource definitions like Service, Deployment, et cetera, all the YAML files — all generated automatically, and it uses those generated YAML files to deploy to the OpenShift Container Platform. Again, from the developer's standpoint, they just want a single Maven command line, and everything else is done automatically for them — like what the old technology Docker Compose did, but made to happen for developers on Kubernetes with UBI and Quarkus. Right, this is pretty cool, and this is going really deep into the container strategy. I want to lift one more question a little bit out of the details. Let's say I am
building an application, I have five or six different containers, and maybe in the future I have another ten scheduled. How do you visualize these dependencies, and how do you maintain that knowledge? Because it seems like as soon as a developer is done with one container, they move on to the next container, and very quickly you start to lose track of what depends on what. So how do you visualize these architectural dependencies between containers, and how do you maintain that knowledge? Is that something you guys have thought about? Maybe I can take the question, but not deeply. In my case, a couple of things come to mind. First of all, OpenShift Container Platform provides the developer console, which is pretty cool stuff. You deploy your multiple container images as pods on top of the container platform, and the console provides a topology view where you can see the pods, like eye candy, and you can draw relationships between them — okay, this pod is Spring Boot and this pod is PostgreSQL, and you can draw a line between those pods, so now the two pods have a relationship. Then you keep deploying multiple versions, like revisions, and you can see that this pod has three revisions, whether serverless or traditional deployments for microservices applications. So that is one way. And for a little bit more of the container-layer stuff, maybe Quay provides some UI — maybe Dan can give more detail. Yeah, so the goal with containerization is all about building blocks, sort of Legos. You want each one of your container images to be a microservice, and all microservice means is that that container does one thing: it's a web service, or it's a database, or it's a load balancer. Once you have those building blocks of an application, you want to go into Kubernetes and orchestrate them together, and that's what Daniel was talking about: connecting
them together. The other thing you want to be able to say is how many instances, what kind of capabilities you want to have in your application. So if you're at certain levels of load: kick off another web server for me, because suddenly my performance is starting to fall, or give me another database to help load-balance the environment. At that point it almost leaves the developer's area and moves into the administrator's world, because they're maintaining this greater application. The interesting thing — it's the whole idea of DevOps — is that the developers, in my opinion, are mainly developing low-level building blocks; then maybe an architect figures out how to orchestrate and hook them together; and then you're moving into the operations stage, where someone has all the building blocks wired together but now has to figure out how to manage costs, so that you don't have a thousand containers sitting out there doing nothing on multiple VMs, which are just costing you money, but can scale up and scale down as your environment changes over time. But again, we're the low-level guys. I live much lower in the stack; my focus in our group is usually at the lowest level, so I'm always looking for new ideas on how I can get images pulled quicker, how I can get the memory footprint down. A lot of these tools are way too big. We were talking about CRC earlier — CRC, to run Kubernetes, to run OpenShift inside of a VM, right now takes 8 GB. What's happening in an OpenShift cluster is that it has all these tools running constantly — they're called operators — and these operators' main goal is basically to make sure the environment stays up and runs properly; they do things like managing upgrades to the operating system and managing the whole flow. So an interesting thing, when we
talked earlier, was: if I want to run Kubernetes on my laptop, what is Kubernetes, right? The intro you were talking about is kind, which is Kubernetes inside of a container — the D stood for something I don't talk about anymore — and that's sort of a low-level way of doing it. Then you get to what we consider real OpenShift Kubernetes, with all these operators and stuff running constantly inside of images, and that's one of the reasons it becomes difficult to run a full, real Kubernetes environment on a single laptop: Kubernetes was designed to work in the cloud, to work on virtual machines, to work on physical hardware. It's really a sort of enterprise-level suite of applications, and trying to take that and somehow squeeze it onto a laptop is always going to be fairly difficult. One of the things I would like to talk about — and I'll ask Daniel too, because to me these things are interesting — is what excites us, what are the things we're looking forward to over the next six months that are going to potentially change people's view: what's the next Quarkus, the next Podman, things like that. For me, what's exciting is looking at some of the developments that are going on in the Linux kernel. Red Hat right now is investing quite a bit in using virtual machine technology for running containers. We're working with Kata Containers, which was originally developed by Intel, and instead of having traditional containers, we're looking at how we can manage really lightweight VMs running a container workload — so it's really running containers in a very light virtual machine. But there's another project, where some really smart engineers have been working on this thing called libkrun, which is really exciting me. Unlike Kata, which uses QEMU for running a container basically the same way we run VMs, what these virtualization engineers have done is they've actually got a kernel as PID 1 inside of a container, and it uses KVM inside of it, but
all the processes inside of the container are all seen by the host system, so it just looks like a normal container running on the system, except that instead of talking to the host kernel it's actually talking to a kernel that's inside of its namespaces and stuff. I have no idea how it works, but it's really, really cool. And my interest there is: is this the future? Am I looking at the next wave of how we're going to do container technologies? Because it looks like they can do it much smaller. I think the end goal with all of Kubernetes is to get to the point where we can run thousands of these containers on a limited number of hosts and really maximize the amount of CPU on individual boxes. As I said, I'll let Daniel tell me: in the next six months, what's the most exciting thing you see coming?

Yeah, sure, and that sounds really exciting to me too. Actually, as I already mentioned earlier, we're going to support 100% native compilation based on the GraalVM project, and also we're going to put together more serverless and function-as-a-service features, tightly integrated with the OpenShift Serverless feature. So for example, from a developer standpoint, they will have a single command line based on the Kubernetes Knative command line, kn: just kn func create, and then you can create a new function, and at that time it will be deployed to the OpenShift Container Platform as a Knative service. And also we're going to integrate with OpenShift Pipelines, and inside the pipeline your Quarkus application, actually a Java application, will be compiled and packaged as a native executable, based on UBI, and deployed to multiple platforms. Also on the community side, Quarkus already provides a serverless portable Java API, known as Funqy; actually, I love the name Funqy. It's a standard portable Java API, so you can deploy the same Quarkus application into multiple FaaS and serverless platforms, such as AWS Lambda, the OpenShift Container Platform, and
Kubernetes Knative, and also Google Cloud Functions and Azure Functions. But the point is, you still have the same application; you don't need to change anything on the application side, you just need to add some deployment information: okay, this one is deployed to Azure Functions, this one is deployed to Amazon, and this one is deployed to Kubernetes and OpenShift. Only minor configuration changes; everything else on the application side is exactly the same. So maybe I could say, oh, this is a hybrid serverless deployment with the same application, from the developer's perspective. But behind the scenes, as Dan already mentioned, there are a lot of multiple, complex technologies in there, so we are working on that, on both the application side and the infrastructure layer. And in Quarkus, as well as the new Jakarta EE, formerly known as Java EE, there are a lot of tools and features, really about Kubernetes and Knative and microservices and event-driven stuff, that will be coming soon.

Very cool. I think we reached the end of the time; I think we could continue for another hour, but we have to stop there. There was one question about a coloring book, so I just want to answer that. Obviously we didn't have a Red Hat Summit this year, and that would have been the time we were going to do a coloring book. If you saw my session earlier yesterday, where I talked about container security, I basically write the stories around Goldilocks and the three bears, talking about how we can get more secure environments, to move from Goldilocks to Papa Bear. So that would have been the coloring book, and I'm hoping, if the world gets back to normal at some point, we will have a coloring book that covers the story of Goldilocks and the three bears. And I haven't shown it to anybody yet, but this is one of the key things in pods, this concept of sidecar containers. A sidecar container is basically: when you have a pod, you have sort of the primary container, and then people put other containers in to sort of monitor it. I think Istio is
taking heavy advantage of this type of environment. But usually when I give talks, I tell people to watch out for sidecar containers, because I think they're just the start of the next wave of everybody adding third-party products that keep putting sidecar containers alongside primary containers, and you'll get to the point where, instead of launching one application, you'll launch one container application with five sidecars. So every time you launch one of those, you're running all these other containers that are just watching what the primary container does. So a new part of the coloring book actually shows what happens when you have a motorcycle and you start to add sidecars to it, with the three bears and Goldilocks. It's very funny, but it makes sense, to try to convince people not to overuse sidecar containers. So anyways, you're asking the trusty guys. That's definitely for my son and daughter; you want your kids to understand computer security, and I will do my best to explain that.

All right, thank you very much. And if I want one of those coloring books? The coloring books are always available for printing in your own house, so there's three of them out there right now, and this will be the fourth. But yeah, they'll always be open source, so they'll be Creative Commons licensed. I just come up with the ideas, and Máirín Duffy makes them look beautiful, so 90% of it goes to her; I just have wacky ideas, and she makes them into something. It's a fun thing to do.

All right, thanks guys. I think we all have to move on. All right, bye everybody. Thanks again. Bye-bye.
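As an aside on the Funqy discussion above: the portability idea (write the business function once, then deploy it to Knative, AWS Lambda, or Azure Functions with only configuration changes) can be sketched in plain JDK Java. This is only a sketch of the concept; the adapter methods here are hypothetical stand-ins, not the real Funqy API, which in Quarkus uses annotations such as `@Funq` on the function method.

```java
import java.util.function.Function;

// Sketch of the portable-function idea behind Funqy, using only the JDK:
// the business logic is written once as a Function, and thin per-platform
// adapters (hypothetical here) wire it to each FaaS runtime. In a real
// Funqy application those bindings come from the framework and are
// selected by configuration, not by code changes.
public class PortableFunction {
    // The single piece of business logic, written once.
    static final Function<String, String> greet =
        name -> "Hello, " + name + "!";

    // Hypothetical platform adapters: same function, different triggers.
    static String invokeAsKnative(String httpBody) {
        return greet.apply(httpBody);
    }

    static String invokeAsLambda(String eventPayload) {
        return greet.apply(eventPayload);
    }

    public static void main(String[] args) {
        // Same function, two "platforms", identical result.
        System.out.println(invokeAsKnative("Quarkus")); // Hello, Quarkus!
        System.out.println(invokeAsLambda("Quarkus")); // Hello, Quarkus!
    }
}
```

The design point Daniel makes is exactly this separation: the deployment target is a packaging and configuration concern, so the Java source stays identical across serverless platforms.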