Hello, everyone. My name is Jie; thanks for the introduction. I'm currently a tech lead at Mesosphere, and I've been a Mesos PMC member and committer since 2013. I'm mainly responsible for containerization, networking, and storage inside Mesos, and I've been maintaining the containerization part for quite a long time. Before that I was a software engineer at Twitter, and I got my PhD from the University of Michigan in 2013.

This is the outline of my talk. I'm going to give you a very brief overview. How many of you are using Mesos in production right now? OK, cool. I'll probably skip some of the introductory parts, since I made these slides for Open Source Summit and they include an introduction to Mesos. Then I'm going to give you an overview of the history of containerization in Mesos, talk about how we're adopting the new container standards, highlight some of the new features we recently added to Mesos, and finish with the future roadmap, where we're heading.

Mesos is a kernel for datacenter applications. Think about what a traditional OS kernel does for resource management: an operating system like Linux abstracts away the hardware, the host CPUs and memory, and provides programming abstractions to user applications, things we're all familiar with like processes, threads, and files. And the way the Linux kernel (or any other OS kernel) provides security and isolation for those programs is through concepts like virtual memory and users. If you think of Mesos as a distributed, datacenter-scale kernel, it does the same things. On the resource management side, it manages not just a single machine's CPU and memory but all the CPU and memory in the cluster, and it provides programming abstractions to developers: the Mesos API you're familiar with, tasks, offers, resources. It also provides security and isolation for the applications on top, and it does that through containerization, which is the focus of my talk today. So I'll skip the simple parts (this slide is the typical Mesos workflow) and jump into the history of containerization in Mesos.

So, what is a container? I feel the word "container" has been overloaded quite a bit, and different people have different interpretations. For developers, when you talk about a container, they're probably thinking about creating container images. If you talk to an operator, they probably think of a container as an isolated execution environment. Containerization in Mesos focuses on the latter, the operator side: we focus on creating an isolated execution environment for your applications. And it started very early, from the very beginning of the project.
I think that was around version 0.10, in 2011: the first, very early version of Mesos had process-based containerization. Basically, the agent launched a bunch of processes on the host, each container was actually a process session, and there was no resource isolation at all. It was just a bunch of Linux processes on the box.

Then in 2012: the problem with the previous solution is that you have no resource isolation, so one container can just use up all the resources on the host. We noticed that Linux has this very nice cgroups support, which gives you the ability to restrict the CPU and memory for a given group of processes. So we introduced direct Linux cgroups support. At the time we only enabled CPU and memory isolation, because those are the two main resources people were using. We also used the freezer cgroup for process management, because cgroups let you track all the processes in a container very easily, compared to using process trees, which are not very reliable because of re-parenting. The freezer cgroup also simplifies one thing: when you try to kill a container, you want to make sure you stop all the processes in that cgroup first, then send SIGKILL to all of them, and then unfreeze the cgroup so that the signals get delivered. The nice thing about that is you avoid a race condition. Otherwise you'd have to scan all the PIDs first and then send SIGKILL, but in the meantime a process might exit, and you might send SIGKILL to the wrong process. The freezer cgroup solves that by letting us freeze the entire cgroup, so we can deliver the kill signal atomically. (I'll show a small shell sketch of this sequence in a second.)

All right, so that was 2011 and 2012. By 2014 the cgroups support had been in production at Twitter for a long time, and we started adding support for more and more cgroup subsystems as they were introduced into the Linux kernel. We realized the old architecture didn't scale anymore, so we did a refactor in 0.18 and introduced a concept called the containerizer. It uses a pluggable architecture: you can specify different isolators and launchers, which are the two main abstractions we provide inside the Mesos containerization code. You can think of an isolator as a set of lifecycle hooks: before a container starts, after a container terminates, or when a task is sent to the container, we provide hooks that let you inject arbitrary code to do isolation. We made the CPU and memory cgroup isolation part of an isolator, and then introduced more and more isolators to make things modular.
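Here's the promised sketch of what that freezer-based kill sequence amounts to at the shell level. Mesos implements this internally in C++; the cgroup path and container name below are made up for illustration:

```bash
# Hypothetical freezer cgroup for one container (path is illustrative).
CG=/sys/fs/cgroup/freezer/mesos/my-container

# 1. Freeze the group: no process in it can run, fork, or exit.
echo FROZEN > "$CG/freezer.state"
# Wait until the kernel reports the whole group is actually frozen.
while [ "$(cat "$CG/freezer.state")" != "FROZEN" ]; do sleep 0.1; done

# 2. Send SIGKILL to every PID. Because the group is frozen, the PID
#    list cannot change under us: no race with exiting processes.
for pid in $(cat "$CG/cgroup.procs"); do
  kill -KILL "$pid"
done

# 3. Thaw the group so the pending SIGKILLs get delivered.
echo THAWED > "$CG/freezer.state"
```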
There's a second concept called the launcher. The launcher is mainly responsible for process management: how many processes are there, how do you kill a group of processes, how do you launch a process or a container. We have three major launchers right now: Linux, POSIX, and Windows. The Linux launcher uses standard Linux features like cgroups and namespaces; the POSIX launcher basically does nothing special, just fork/exec; and the Windows launcher uses job objects to create containers and manage processes on Windows. So that's 2014, Mesos 0.18.

There's a bunch of isolators we added later. I recently cleaned up the documentation, so if you go to the latest documentation for the Mesos containerizer, you can see the full list, with documentation for each individual isolator. We added support for most of the cgroup subsystems through isolators, and we have disk isolators, filesystem isolators, namespace isolators, networking isolators, and volume isolators. I don't want to jump into the details; I want to continue with the history.

In the same year, 2014, in 0.20, Docker was really popular, so we added a new containerizer to Mesos called the Docker containerizer. It launches containers not through Mesos's own code path but through the Docker daemon: we just shell out to the daemon and run docker run, docker pull, docker stop, docker rm, things like this. These two containerizers can co-exist on the same agent, so you can have some containers running through the Docker daemon and some through the Mesos containerizer. That was 2014.

Then in 2016, last year, in 0.28, we started to support Docker images in the Mesos containerizer. The reason is that we realized what Docker does is essentially use a bunch of Linux primitives we already supported in the Mesos containerizer; the only missing piece was provisioning a filesystem for a container. We also realized that maintaining two containerizers is painful: any time you want to add a new feature, you have to do both implementations, which is hard to maintain in the end. So what we decided to do was add the missing piece to the Mesos containerizer, what we call provisioners. The provisioner is yet another abstraction inside the Mesos containerizer that lets you customize the filesystem provisioning part. Right now there are two implementations, a Docker image provisioner and an AppC image provisioner, and we're adding OCI image support right now; I think the patch is in review and should be merged very soon. We also added more isolators to match functionality from Docker: for example volume support, capabilities support, rlimits, and a special isolator for interpreting things like the environment variables and entrypoint inside a Docker image. (I'll show what this setup looks like on an agent command line in a moment.) So that's 2016.

After that, the direction we've taken in the project, on the containerization side, is to adopt the new standards for containers. A lot of standards were created over the last couple of years, and we're trying to adopt them.
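As promised, here is roughly what an agent set up this way looks like on the command line. This is a sketch: the master address and paths are illustrative, and the agent documentation has the authoritative flag list.

```bash
# Run both containerizers side by side; frameworks choose per task.
# The mesos containerizer provisions Docker images itself, so no
# Docker daemon is needed for containers it launches.
mesos-agent \
  --master=10.0.0.1:5050 \
  --work_dir=/var/lib/mesos \
  --containerizers=docker,mesos \
  --image_providers=docker \
  --isolation=filesystem/linux,docker/runtime \
  --docker_store_dir=/var/lib/mesos/docker/store
```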
I'd group them into three major categories: container image, container network, and container storage. There may be more, but right now these are the three major areas where people are making standards, and I'm going to talk about each of them in the next few slides. Mesos is going to support all of these through plugin interfaces inside the Mesos containerizer.

If you talk about container standards today, effectively there's only one "standard" right now, which is Docker. Docker has its registry API, and there are a bunch of implementations of that registry API, like JFrog's and Nexus; the cloud providers host their own registries; and there's Docker Hub. For storage, Docker has a volume plugin interface, the Docker volume driver interface, and there are a lot of implementations of that interface in the ecosystem: Portworx, REX-Ray, GlusterFS, these kinds of plugins. For networking, Docker uses an interface called libnetwork, with a model they call CNM, and you can use that model to build your own plugins; the major network vendors have implementations there, companies like Calico, Cisco, and Juniper.

But I don't think this is a good ecosystem, because it's centralized; it's a Docker-centric thing. A true standard has to have a stable interface, it has to have a backwards-compatibility guarantee, and there have to be multiple implementations of it. We need to decouple the standard from the actual implementation; these are two different things and we should not couple them together. In the Docker case, the implementation and the standard are coupled, which is not good for the ecosystem. A standard also has to be vendor-neutral (ideally, the more vendors in the ecosystem, the better), and you need interoperability: if you build a plugin for one platform according to a standard, it should be very easy to move that plugin to a different platform, because both implement the same standard.

So the ideal world looks like this: you replace Docker with a bunch of container orchestration systems, and you replace those Docker-specific interfaces with true container standards. For the registry API, that's a container image spec; for the volume plugin, a container storage spec; and for networking, a container network spec. That's the ideal world.
So we need a lot of standards for containers: image, networking, storage, as I mentioned. You probably also need a runtime standard, and metrics or other monitoring standards for containers. In this talk I'm going to focus on three: image, networking, and storage. There isn't really a standard for metrics yet; there is a standard for runtime, but it's not the focus of this talk.

So, the container image spec. What's the scope of that standard? Think about it: application writers write an application and compile it, and the next thing is they need to package the application into some sort of image; not just the application binary, but also the application configuration. Then, once you have the image, you have to store it and transfer it over the wire to the machine where you actually want to run the container. On the target machine you have to unpack the image, so you can recover the application binary and config, and then run the application using that config. That's what an image spec needs to cover, to me. And there's already a standard for that, which is OCI, the Open Container Initiative. OCI has two specs right now: one is the image spec, the other is the runtime spec. The one I'm focusing on is the image spec, because that's what developers care about: as long as you have an image spec, and you guarantee that the way you run your image on this machine is exactly the same as the way you run it on another machine, it should be pretty straightforward; you probably don't need much else.

Mesos will support OCI soon. The reason we haven't merged that patch yet is we want to do our due diligence to make sure the way we store those layers and artifacts is extensible in the future. It's not as simple as just storing them on the filesystem: how do you index them? How do you do garbage collection? How do you do cache replacement? I don't want to introduce too much complexity for yet another image format; I want to unify all of this into one single unified artifact store, which I'll mention later. That's the only reason we haven't merged those patches yet, but we'll probably merge them very soon. As I mentioned, the Mesos containerizer already supports Docker images and AppC images, so in the future it's very natural to just extend the provisioner interface to add a new store for OCI. That will be very straightforward.
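For a feel of what the OCI image spec covers, this is roughly what an image looks like laid out on disk per the spec's image layout; digests are abbreviated here:

```bash
# Sketch of an on-disk OCI image layout (digests shortened).
$ find my-image
my-image/oci-layout            # {"imageLayoutVersion": "1.0.0"}
my-image/index.json            # top-level list of manifests
my-image/blobs/sha256/4f3c...  # image manifest (JSON)
my-image/blobs/sha256/a1b2...  # image config (env, entrypoint, ...)
my-image/blobs/sha256/9e8d...  # filesystem layer (tar, usually gzipped)
```

Note that every blob is addressed by its digest, which is exactly why a content-addressed artifact store is a natural way to hold these.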
For networking, the scope a networking specification needs to handle in the container world is: how do you connect containers, how do you allocate IP addresses, how do you enforce security policies, how do you isolate performance and provide QoS, how do you load-balance network traffic. There's a whole bunch of stuff you need to handle in the networking area. There is a standard right now, but I don't think it handles all of these; it handles some, but not all. So we'll probably need other standards, or we improve that standard to handle all of this networking stuff.

That standard is CNI. It's adopted by the major orchestration systems and network vendors; it's a simple CLI-based interface, and the container orchestration system just invokes those CLI commands before the container starts or after the container terminates. It recently joined CNCF; it's a CNCF project now, donated to CNCF recently.

Here's briefly how CNI works. Before the container starts, the container runtime, on the left, creates a network namespace for the container, and then calls the CNI plugin with that network namespace, saying: hey, CNI plugin, please add this container to the network provided by the underlying network vendor. So the container orchestration system just calls ADD, a simple CLI command, to attach the network to the network namespace, and once the container terminates it calls DEL, which just detaches the network from the namespace. The actual configuration for the network is passed to the plugin through environment variables and a config on standard input, and IP address management is part of the plugin's logic: the plugin is responsible for allocating an IP if the container doesn't have one yet. The IPAM interface is also pluggable; there are some generic IPAM plugins, for example host-local or a centralized etcd-based IPAM, and you can reuse those because the interface is standard.

Mesos has supported CNI since 0.28. It's only supported inside the Mesos containerizer; if you want to use it, just pass --isolation=network/cni in the agent flags and you'll be able to use CNI networks provided by the network vendors. You also need to install the network plugins on your agent hosts to leverage that. The majority of network vendors already support CNI.
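Concretely, "calling the plugin" just means executing a binary with CNI_* environment variables set and the network config on stdin. Here's a hand-run sketch of an ADD; the plugin choice, paths, and config values are illustrative:

```bash
# What a runtime effectively does for CNI ADD (sketch).
CNI_COMMAND=ADD \
CNI_CONTAINERID=abc123 \
CNI_NETNS=/var/run/netns/abc123 \
CNI_IFNAME=eth0 \
CNI_PATH=/opt/cni/bin \
/opt/cni/bin/bridge <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "mynet",
  "type": "bridge",
  "ipam": { "type": "host-local", "subnet": "10.1.0.0/16" }
}
EOF
# On success the plugin prints a JSON result (interface, IP, routes)
# to stdout; teardown is the same invocation with CNI_COMMAND=DEL.
```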
OK, so for storage I don't want to say too much here, because there's another talk after this specifically about the container storage interface. I'll just give you a brief overview of what a storage spec needs to handle in the container world. The scope is things like provisioning and deprovisioning volumes, attaching and detaching volumes, mounting and unmounting volumes, creating snapshots, restoring snapshots, taking backups, things like this. That's what a container storage interface needs to handle, and there's a new interface called the Container Storage Interface, CSI. It's joint work between the Mesos, Kubernetes, Docker, and Cloud Foundry communities. The goal of CSI is to make sure a vendor only needs to write one plugin, and that plugin will work across all the container orchestration systems, supporting all the features I mentioned previously. One thing I want to highlight is that it also needs to support both mount volumes and block volumes, not just mount volumes, because we do see use cases where people want raw block devices for some data workloads. Anyway, I don't want to jump into too much detail here; there's a talk right after this, at 4:30 I believe, in the same room, and we'll cover it there.

The next thing I want to talk about is some of the new features we've built into Mesos since last year that we're really proud of. If you listened to Ben's keynote this morning, he already mentioned the nesting support and the debugging support. I want to dive into some of the details of why we built these and how we built them, and then give a demo.

So, why nested containers? We discovered a bunch of patterns that make nested containers necessary. One pattern is the sidecar pattern: you have a server running, and at the same time you run a proxy alongside it, in the same network namespace as the main process, providing authentication and authorization support, things like this. The lifecycles of these two containers are tied together, so if one container in the pod dies, all the processes in that pod get killed. That's the sidecar pattern.

The second pattern we discovered is what I call the transient container. Say you have Cassandra running, and there's some job you want to perform on that Cassandra node, like a backup: you want to periodically take a backup of your Cassandra data. You don't want to run the backup all the time; it's not a service, it's more like a transient job, more like a cron job, but it needs to access the state inside the Cassandra container. That's another pattern, the transient container.

There's a third pattern I call hierarchical containers, where nested containers can be helpful too. Say you want to run Kubernetes on top of Mesos, and Kubernetes has its pod concept. In that case, the kubelet is the top-level container, each pod inside Kubernetes is a level-one nested container, and each container inside a pod is a level-two nested container, nested under level one. So we discovered a lot of use cases for nesting containers, and that's why we built it last year.
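As an aside, to picture what the sidecar pattern means at the Linux level: it's essentially two processes sharing one network namespace, so the proxy can front the server over localhost. A toy sketch, where ./server and ./proxy stand in for hypothetical binaries:

```bash
# Two processes in one network namespace: the essence of a sidecar.
ip netns add pod1
ip netns exec pod1 ./server --port 8080 &
ip netns exec pod1 ./proxy --listen 0.0.0.0:443 --backend 127.0.0.1:8080 &
```

Nested containers give you this same co-location, plus tied lifecycles and per-container images.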
So the Mesos containerizer does support nesting. It can be more than one level (the depth can be greater than two), you can do volume sharing between siblings, and it's fully compatible with the other features in the Mesos containerizer. The way it works is that we provide an API on the agent that allows an executor, or any process inside a container, to create a nested container. Say the executor container wants to create a nested container called nginx: it just talks to the agent API, saying "launch", with the configuration of the container and the command, and the agent and the containerizer are responsible for launching that nginx container, provisioning the filesystem from the Docker image, and starting it.

As I mentioned, we support more than two levels of nesting, and one thing we leverage this for is debugging. The way we implemented debugging support is by launching a nested container underneath the container you want to debug. For example, say you want to debug the nginx container because you have some problem with it: when you start a debug session, we launch a level-three container nested underneath the nginx container, and that container has access to the namespaces of the nginx container, so you can do all the debugging work you want. That's how we support debugging.

So basically, the debugging support we wanted to build is the equivalent of docker exec and docker attach, but you can do it remotely: you don't have to be on the same machine, you can do it from any machine as long as you're authorized. It's fully integrated with Mesos authorization and authentication, and it leverages the Mesos nested container support, as I mentioned.
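Concretely, the agent call looks something like this. This is a sketch against the v1 agent API, with made-up container IDs and authentication omitted; check the agent API documentation for the exact shape:

```bash
# Launch an nginx container nested under an existing executor
# container, by POSTing to the agent's v1 API (sketch).
curl -s -X POST http://10.0.0.2:5051/api/v1 \
  -H 'Content-Type: application/json' \
  -d @- <<'EOF'
{
  "type": "LAUNCH_NESTED_CONTAINER",
  "launch_nested_container": {
    "container_id": {
      "parent": { "value": "<executor-container-id>" },
      "value": "nginx"
    },
    "command": { "value": "nginx -g \"daemon off;\"" },
    "container": {
      "type": "MESOS",
      "mesos": {
        "image": { "type": "DOCKER", "docker": { "name": "nginx" } }
      }
    }
  }
}
EOF
```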
OK, so rather than saying anything more, let me give you a quick demo. This is my Mac, and I have a virtual machine running; I'm doing a live demo. I'm going to start a Mesos master first. Is that big enough? OK. So I'm starting a Mesos master; I just specify the IP, a very standard way of starting a Mesos master. Let me go back to my browser to check that the cluster is running... All right, the master is running.

Now I'm starting the agent. I have a tab open to show you the command; I think I saved it somewhere... OK, that's the command. I already started the master; for the agent, these are the flags I specify. In particular, for the isolation flag I specify a bunch of isolators: docker/runtime, filesystem/linux, some volume isolators, capabilities, and the PID namespace isolator. And you specify where you want to store the Docker layers, because I'm going to run Docker images without the Docker daemon. That's how I start the agent. So I'm going to start the agent; I have the command saved somewhere... OK, the agent is started. I go back to the UI and do a refresh, and there should be an agent registered from my virtual machine.

Now what I'm going to do is use mesos-execute to launch a task group. We call it a task group; you can think of it as a pod: multiple containers running under the same executor. We have a config for that task group; let me pull it up. Yeah, OK, so that's the config of the task group. It has two containers. One is called producer: it specifies a bunch of resources, and the command it runs has a volume, and every second it touches a file in that volume, using the current date as the name of the file. So it's producing files in a shared volume, and it's using a Docker image. The other container, running under the same executor, is called consumer. The consumer does one simple thing: it just runs ls on that volume to list its contents, and it has the same shared volume. So if everything works properly, you'll see the files produced by the producer, and the consumer can see those files; it's using a Docker image as well. So that's the configuration of the task group.
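In case it's hard to read on screen, the config has roughly this shape: a trimmed sketch of the TaskGroupInfo JSON that mesos-execute accepts. The image, resource amounts, and the sandbox-path volume layout here are illustrative:

```bash
# task_group.json (sketch): two tasks sharing a parent sandbox volume.
cat > task_group.json <<'EOF'
{
  "tasks": [
    {
      "name": "producer",
      "task_id": { "value": "producer" },
      "agent_id": { "value": "" },
      "resources": [
        { "name": "cpus", "type": "SCALAR", "scalar": { "value": 0.1 } },
        { "name": "mem", "type": "SCALAR", "scalar": { "value": 32 } }
      ],
      "command": { "value": "while true; do touch volume/$(date +%s); sleep 1; done" },
      "container": {
        "type": "MESOS",
        "mesos": { "image": { "type": "DOCKER", "docker": { "name": "alpine" } } },
        "volumes": [ {
          "mode": "RW",
          "container_path": "volume",
          "source": {
            "type": "SANDBOX_PATH",
            "sandbox_path": { "type": "PARENT", "path": "shared-volume" }
          }
        } ]
      }
    },
    {
      "name": "consumer",
      "task_id": { "value": "consumer" },
      "agent_id": { "value": "" },
      "resources": [
        { "name": "cpus", "type": "SCALAR", "scalar": { "value": 0.1 } },
        { "name": "mem", "type": "SCALAR", "scalar": { "value": 32 } }
      ],
      "command": { "value": "while true; do ls volume; sleep 1; done" },
      "container": {
        "type": "MESOS",
        "mesos": { "image": { "type": "DOCKER", "docker": { "name": "alpine" } } },
        "volumes": [ {
          "mode": "RW",
          "container_path": "volume",
          "source": {
            "type": "SANDBOX_PATH",
            "sandbox_path": { "type": "PARENT", "path": "shared-volume" }
          }
        } ]
      }
    }
  ]
}
EOF
mesos-execute --master=10.0.0.1:5050 --task_group=file://task_group.json
```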
So I'm going to launch that. All right, it's running. I'll go to the UI; you can see there are two tasks running, producer and consumer, and I can go to the sandbox of the producer... sorry, the consumer, to check the stdout. It's constantly printing the files inside that shared volume, so that's working.

If you browse the contents of the sandbox: if you go to the executor's sandbox, you have a shared volume here, and the shared volume contains all the files being touched. Then if you go to each task, these are the sandboxes for the consumer and the producer, and in there you have the stdout and stderr. It's fully nested, under the top-level executor: the top-level executor is running as the level-one container, and the containers the actual tasks run in are level-two containers nested under it. And you can get stdout and stderr in the same way as before.

Now, what I'm going to do is say there's a problem with the consumer, and I want to debug that container. I'm switching back to my Mac, just to show you that it's possible to do this from a remote machine. I'm using a CLI; it's actually the DC/OS CLI, but we're going to add this to the Mesos CLI too, so it works for vanilla Mesos. The DC/OS CLI is basically just hitting Mesos endpoints and streaming the responses. Let me see if I can run this... I do a dcos task, which shows all the tasks currently running in the cluster, and you can see the consumer and producer here.

Now say I'm going to debug that container. I do dcos task exec consumer, and we launch a shell... This doesn't work; I think I need to add the -t -i flags, meaning interactive, using a terminal. OK, now I'm inside that container. If I do an ls you can see the volume; if I cd into the volume you can see all the files in the shared volume, and I can also cat the stdout files. The terminal works, and vi works too. So that's the debugging.

If you go to the UI, something interesting you'll find is that in the consumer's sandbox you can see a subdirectory being created, which is the nested container underneath that container. And if you go into it, you can see all the commands I just typed: all the commands you run while debugging are captured and saved in the sandbox of that nested container, underneath the level-one nested container. That's how the whole thing works right now, and I'm glad we did it this way; it's more extensible. All right, that's it for the demo. Let me go back to my talk.

Right, so, the future roadmap. I think this is pretty important; we have a lot of things to do, and I just want to highlight a few. One thing we want to do is a standalone mode. We got feedback from folks that it would be nice to use the Mesos containerizer without even running a Mesos master. We're adding that support right now, and it will be available pretty soon, I'd say in a month: you'll be able to just hit an agent API to launch a container on the agent, without involving any offer cycle.
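For a feel of the shape being discussed, here's a sketch of what such a standalone launch could look like against the agent's v1 API. Since this is still being built, the call name and fields may well differ in the release:

```bash
# Launch a standalone container on an agent: no master, no offers.
curl -s -X POST http://10.0.0.2:5051/api/v1 \
  -H 'Content-Type: application/json' \
  -d '{
    "type": "LAUNCH_CONTAINER",
    "launch_container": {
      "container_id": { "value": "standalone-test" },
      "command": { "value": "sleep 1000" },
      "resources": [
        { "name": "cpus", "type": "SCALAR", "scalar": { "value": 0.1 } },
        { "name": "mem", "type": "SCALAR", "scalar": { "value": 32 } }
      ]
    }
  }'
```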
There's another isolator, contributed by the folks at Apple, called the host port isolator. For folks who want to run containers on the host network and isolate containers only by ports, this will be super helpful, because it enforces which ports a container can use: basically, any port not allocated to that container cannot be used by it. What the isolator does is scan the proc filesystem to make sure all the ports the processes inside the container are listening on or using are part of the container's allocated resources; if not, it just fails the container. It's a really good thing the Apple folks contributed, and that part is merged already, so if you use the latest head you should be able to use it; if not, wait for the next release. The Apple folks are also adding PAM module support, to allow the Mesos containerizer to use arbitrary PAM modules.

I mentioned earlier that we want to build a unified artifact store. We have a lot of artifact stores and caches inside Mesos, for example the fetcher cache and the Docker layers, and right now they're all separate. That's not good, because you have to implement cache replacement algorithms and garbage collection for each of them, which is not an extensible model. So I want to move to a unified artifact store using content-addressable storage, to make sure we don't have duplicates and we have one unified way to do garbage collection and cache replacement.

On security, we're adding support for seccomp and SELinux. And another thing we just started discussing is VM support and user namespaces. We're doing some research on VM support, trying to figure out the best way to do it, because I do get feedback from folks who want to mix workloads: run some as containers, run some as virtual machines. The security model provided by containers is not that strong, because the kernel is still shared; compared to a virtual machine, which is strictly more secure than a container, for applications with sensitive data, running inside a VM is sometimes more desirable than running in a container. On user namespaces, the Apple folks are working on user namespace support; we just had the discussion and they're starting on a prototype. So that's the future roadmap. If there's anything you want us to work on, feel free to talk to me.

To give you a quick summary: containerization in Mesos is very stable; it's been in production for years, and we give you the option of not relying on the Docker daemon if you don't want to. It's very pluggable and extensible: you can add arbitrary extensions by writing an isolator, a provisioner, or a launcher. And we're embracing all the container standards: the Container Networking Interface, the Container Storage Interface, and the Open Container Initiative.

We also have a containerization working group. There's a regular meeting every two weeks, Thursday morning at 9 a.m. US Pacific Time, and we have a Slack channel called "containerizer". If you're interested in anything happening in the containerization part of the Mesos project, feel free to join us. All the meetings are recorded, we have a pretty good audience every time, and a pretty good agenda every time too. All the notes and agendas are at this link, which is also accessible from the Mesos GitHub documentation website. OK, I think that's it. I'm going to open the floor for questions. Thank you very much.

Thanks. I'll abuse my role as the track lead and ask the first question. OK, so you've been working in containerization since before Docker became a thing. (Mm-hmm.)
If I asked you to name the two or three main reasons why containers became so popular (don't divide it up by developers, DevOps, programmers, cluster operators; just the main reasons why containers as a concept became so popular), what would you say?

Yeah, I'll probably just name one. I think the major reason people moved to the container world is that developers really like containers, because now they have a standard way to package their applications. Just imagine, before containers, how did you package your applications? It was platform-dependent: on CentOS or Red Hat you use RPMs, on Ubuntu you use deb packages, and I don't even know what the packaging mechanism is on Windows or on macOS. Before containers came along, there was no standard way to package your application and deliver it from developers out to the SREs. So I think that's the biggest thing people like about containers: they simplify how developers package and ship their applications. That's the biggest win of containers.

OK, do we have other questions? One hand on the left.

It's great to see you building tools for debugging. One of the big selling points of Docker, when you're on a host, is being able to run things like docker ps, docker kill, and all the other CLI that's exposed over the API. Are there plans to build some equivalent for the Mesos containerizer, so that if I were on an agent I could run the equivalent of docker ps to see all the containers running right now?

Right. In fact, we have an operator API on the agent: with curl you can just hit those endpoints to launch a container, kill a container, and get all the containers running on the agent. We don't have that CLI yet, but it's on the roadmap; we definitely want to build it. That's something I've wanted to do for a long time; we just haven't had the resources. But even without a CLI, you can still hit those agent endpoints, the operator APIs, to do the things you want to do. In fact, all the debugging functionality, attach and exec, is part of that API, so right now you can write a script using curl, or a simple Python script, to do it. But we will build a CLI for that.

That's great. So we'll get a CLI; and it's not just on the roadmap, it's in the works. If you visit the Mesosphere booth, there's a guy with a long beard, Kevin Klues; he's the lead for this effort.
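For the curious, "hitting those endpoints" is as simple as this; a sketch, with authentication omitted:

```bash
# Rough equivalent of `docker ps`: list containers via the agent's
# v1 operator API.
curl -s -X POST http://10.0.0.2:5051/api/v1 \
  -H 'Content-Type: application/json' \
  -d '{ "type": "GET_CONTAINERS" }'
# The exec/attach functionality mentioned above lives on the same
# endpoint, via calls like LAUNCH_NESTED_CONTAINER_SESSION and
# ATTACH_CONTAINER_OUTPUT.
```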
Other questions? Can you tell us a bit more about the VM support? Because the lifecycle is quite different for a VM versus a container.

Yeah. So we just got started: during the containerization working group, every two weeks, we've begun the background research on how we want to do VM support inside Mesos, and we started a doc collecting how other people do VM support in their systems. We don't have a design doc yet, but I think one way to do it is to support KVM, because a KVM virtual machine is essentially just a Linux process, and you can use cgroups to manage resource isolation for it. But there's some other thinking on that too. I just want to do all the research first before we move forward. We do see a lot of use cases for it, and I think it's important; it's the next thing we want to do as a group, the containerization working group, and a lot of people are interested. If you're interested, you can participate in the working group too. There's a doc where we're filling in all this background research on how other systems do VM support: how OpenStack does it, how KVM works, someone mentioned Hyper, and there are the Windows folks, since Windows has virtual machine support for their Windows containers, things like that. We want to do all this research first before moving to a design, so we're still in the research phase. We're going to have a sync on it in two weeks: last time we checked in was about a month ago, when we asked folks to do the research themselves; now we'll come back together in the working group meeting, present each system, and decide what to do next. Makes sense? More questions?

OK, what about cleaning up images that were created by the UCR? Are there plans to create some tooling around that?

Wait, can you repeat the question? So, when you create images with Docker, after some time there are a lot of stale images you don't need, and they invented tooling, like prune, to clean them up.

Ah, OK, tooling to clean them up. Yes. This is actually being worked on right now. There are patches contributed by Uber on this, because they want to run the Mesos containerizer in production, and that's one thing they want fixed: garbage-collecting the image layers that aren't being used. We're adding an endpoint on the agent that lets you prune images, and there are some more features around that; there's a design doc somewhere on the mailing list. Basically, we're adding an endpoint so an operator can monitor the disk usage, and if it reaches a certain limit, you can hit that endpoint and say: prune all the unused images. It's not a simple problem; think about garbage collection: people have been doing garbage collection for years, and it's still not a fully solved problem. Think about Java garbage collection: you have to do either mark-and-sweep or reference counting, things like that. But we're adding the endpoint right now, so it will be ready soon, and the initial algorithm is mark-and-sweep: mark all the used layers, then delete the unused ones. And we're making some exceptions; I think Uber has a use case where you don't want to delete certain base images when you prune, so we're adding those filters too. It will be ready in 1.5, the next release.

Cool, if you promise it's within 1.5! More questions? OK, looks like there are no more questions. If you have questions later, I'll try to spend the next day at the booth, and you can always shoot us an email. We're going for a break now, and the next talk will be at 16:30 here; it's about the Container Storage Interface. Have a good break, and see you afterwards.