My name is Vincent Batts. I've been in a couple of different open source projects over the years. For the last couple of years it's mostly been getting gray hairs because of the drama in the container community — being a Docker contributor, and then trying to extract myself from the Docker community for a few years. But containers have been interesting and fun. They're mostly a headache to a lot of people in how they use them, but they do try for a concise approach, and they've gotten people thinking about how to deploy apps differently. LXC did a great job in the beginning, but it was not for mere mortals, and we've come a long way since how people were using LXC. It's neat, and I'm ready to ask the Facebook folks more questions about how they're using portable services, to make sure we can keep more of a cohesive story going forward.

I didn't put the timestamp up there, but the original containers were a Google patch for the mount namespace, in like 2002 or 2003 I think, and chroot plus some of those rudimentary early namespaces were the beginning of that. Then LXC came along — the first commit, well, really the first full-on release of LXC was in 2008. So containers had been around for a while; people were already using them in a number of different ways.

I thought it was interesting that even systemd — hello, hello — okay, so probably literally five percent of the room raised their hand just then. How many folks currently have, or have had, experience with LXC? So, substantially more folks. Is that current, or is that just history? History, yeah. It's interesting: with systemd-nspawn, Lennart protested for years that it would be anything besides a debugging tool, and it wasn't really until some of the other tools started to get exciting that he made it more of a user interface — added features, made it more of a system utility, plugged in machinectl and other plumbing pieces to make it more fully rounded (there's a quick sketch of that workflow below).

When Docker came onto the scene in 2013, it was originally Python, and then Go was cool, so it switched over to Golang — not always a great option for containers at all.
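Since nspawn and machinectl come up a few times: a minimal sketch of that workflow, assuming a container filesystem already exists on disk, with "demo" as a placeholder machine name.

```
# Boot a container straight out of a directory tree
sudo systemd-nspawn -D /var/lib/machines/demo -b

# Then manage it like any other machine through machinectl
machinectl list            # show running machines
machinectl status demo     # cgroup tree, addresses, recent journal
machinectl poweroff demo   # clean shutdown
```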
I shared the link earlier, and you can look at this later — most of these have links to find these pieces. This slide deck is almost like reference material, to click out and find, because some of these pieces or projects you might want to go put your hands on, or explore how you could integrate or reuse some of the stuff. But there are no links to Docker here, because there is no longer a single good project to link to for Docker: it has been so torn apart and put into different projects — part of it in Moby, part of it split out elsewhere — that there's no single link to follow.

Anyhow, other pieces in history. How many people have put their hands on Let Me Contain That For You, lmctfy? Cool — like two people. It came out literally around the same time as Docker, from a lot of the same folks who were its early maintainers, Victor Marmol and Vishnu from Google, but it was kind of thrown over the wall from how they were using containers. One of the cool precedents it set, and still a good use case, is that it was not focused nearly as much on the user interface. It was meant to be almost what runc has become now: that last-mile step with all the knobs and whistles. You could actually set lots of secret parameters and lots of fine-tuning pieces of the last mile, rather than presenting just a unified, pretty user interface. It's funny that we've almost gone full circle on what they did there — pushing it down to where you have a very fine-tuned last mile with lots of knobs to turn. It's still out there; I don't think it's had a touch in four or five years.

So rkt came along in December 2014. A lot of the tension happening from mid-2013 to the end of 2014 was over how fast, and to some extent footloose, development was happening within the Docker community. There was a lot of contention around how and what design went into features that were getting merged, and how we arrived at any kind of consensus for how those features were going to be used. So with rkt we went with spec-first development, and that spec was the appc spec. How many folks got familiarized with appc at all? Derek, of course — so maybe about 5% of the room. It was great because it was released as a full spec, as an approach that could be taken, and it provided a reference implementation at the same time. Any changes that would happen in rkt would be debated in appc first, and almost within months of the spec coming out there was a FreeBSD implementation, and folks started working on other derivatives — somebody swapped out the stage two and made a VM runtime. That was a lot of what we hoped to see happen, and it happened because there was a spec first. rkt is still used in places but is largely put out to pasture; there are some community maintainers, and you can go find it on GitHub — rkt, R-K-T.

libct: this project got a lot of attention in short order. It's not actually a user interface at all; it's a C library, with a Go wrapper around that C library. It came from the OpenVZ folks — Andrey Vagin and Pavel are crazy people; they really have done awesome stuff for the container community, both in the kernel and in user space. They're the maintainers of CRIU, checkpoint and restore in user space. There are a lot of neat features that are thanks to them.
But libct was a neat proof of concept, to try to push the abstractions back from the command line into a library that could just be reused in other ways. I think it's effectively a dead project now; I don't think it has had commits in years, or at least not pushed publicly. But around that same time it generated conversations about what Docker was going through, where everything was shelling out to different last-mile runtimes. For Docker during this time the primary backend was LXC — you'd run `docker run` something and it would actually shell out to LXC commands to do that. There was chroot, and I don't know if there was another one at that point, but LXC was the primary one. So they were trying to figure out how they could minimize some of those steps of shelling out, and it was funny, because they actually asked the LXC community for a lot of features to improve that communication between the two. Right around the time the LXC community was looking to add those features, to make it a better workflow for reusability, Docker started pushing the code into a direct invocation, so the Docker daemon wouldn't shell out but would effectively call all the way down — it was all one big binary that would actually re-exec itself. They started libcontainer, kind of in the likeness of libct, to have a source-code API to call down to the runtime. And there was — what was it — nsinit, a command line you'd use with libcontainer to test out that last mile. That's what became the runc you hear folks talk about today.

That was not well received within the LXC community: there were asks for something that they had started to deliver, and then Docker immediately switched away over to libcontainer, wholly rewriting it and not using LXC. LXC was actually removed in 2014. So the LXC community split off and made a manager over LXC called LXD. How many folks here have put hands on LXD? Cool, probably about five or ten percent. It is an alive-and-well community, and it is really neat. LXC is written in C; LXD is written in Golang — surprise, or maybe not unsurprisingly — and it's now used for various backend systems within Canonical's world.

A lot of this, again like appc, bubbled up into a conversation about the need to be able to discuss the design first, and not just run off on a tangent to develop a feature or user experience without at least getting all the people involved on board. That led into the Open Container Initiative specifications. Initially it was just one spec; it got split into a runtime piece and an image-format piece — how you would pack an image and get a content-addressable nature about it. It took about two years from the time we formed the OCI to the time we had a version 1.0 of the runtime and image specs, and only this year have we started the registry API — how you push those images around. We're not reinventing the wheel there; it's leveraging what's already been done, but there's a lot of cruft in the Docker registry API that needs to be cleaned out, and potentially new features added, like ways to do peer-to-peer synchronization, or chunking downloads and not using tar archives anymore. That conversation will happen in the distribution spec. But notably, in 2017, when we finally stamped the v1.0 release of the image and runtime specs, there were already expectations of how that kind of standardized layer would be accepted within the community.
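To make the runtime side of that concrete: this is roughly the bundle that runc — or any of the drop-in replacements coming up later — consumes. A minimal sketch, assuming runc is installed and with `rootfs.tar` as a placeholder for whatever root filesystem you have handy:

```
# Assemble a minimal OCI runtime bundle by hand
mkdir -p bundle/rootfs
tar -C bundle/rootfs -xf rootfs.tar   # any root filesystem tarball will do
cd bundle
runc spec       # writes a default config.json next to rootfs/
runc run demo   # create and start a container named "demo" from the bundle
runc list       # show the containers runc knows about
```

That config.json is the JSON structure all of these runtimes have standardized on reading.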
Kubernetes had already been going — it launched in 2014 — and with rkt it introduced conversations within Kubernetes about how adding additional container runtimes was a really, really painful process. Kubernetes was originally very, very Docker-centric; even to this day we're still finding expectations that are very Docker-centric, and those are being worked through — there are fewer and fewer — but rkt made a lot of that very evident. So in 2016, rather than wholly going with just OCI, Kubernetes made a Container Runtime Interface, and almost in likeness they have an image service and a runtime service. It's a way to describe what they expect of any container runtime that will provide any kind of an image service. You could actually have two different gRPC endpoints, one that just does image management and one that just does the execution lifecycle of those containers. So far I don't think anybody has really split that up; it's just one entity — one gRPC socket will export both the image and runtime services. (The communication piece here is pulled straight from their website.)

Initially, when they introduced this gRPC Container Runtime Interface, they started migrating over to a Docker shim, and only around Kubernetes 1.8 was the migration complete, where they weren't doing any side-channel communications over to the Docker runtime. For a long time even pieces like events from the container — whether or not pods were going up and down — and some pieces of the cgroup management or resource management of the containers were all side approaches to the container runtime, through cAdvisor or some other side management piece; it was only around 1.8 that they migrated everything over to the CRI.

So, CRI-O — and I didn't go back far enough in time on CRI-O here, but it was originally the OCID daemon, and there was contention on that name, so it was forcibly renamed to CRI-O, for OCI. CRI-O started in 2016, when the CRI came out — we were part of the conversation that got the CRI framed up so that it could happen for other runtimes — and it has been only Kubernetes-focused since that time. It defaulted to runc for that last mile, but that's configurable, and it has been very interesting: since we were so early to that conversation, and we weren't trying to tailor to every single use case, it allowed other projects to come on board, and I'll talk about a few of those runc replacements in just a second.

Around — I don't know if I'm allowed to say this on camera — but around late 2016 I was called into a meeting in California and effectively handed a cease-and-desist: stop working on CRI-O, formally shut the project down, with concessions to go try to make Docker's containerd do those pieces instead. For all the political nonsense that it was, our request back was: cool, we will look at doing that if the Docker community can get rid of the BDFL model and go to a fully open governance. There was a disconnect there, and so that was when the Docker community went full steam ahead on making their containerd shim layer actually have user-interface features, talking not only to the Docker daemon but also having a CRI shim that could run on top of it. So those two have been kind of the informal competitors since then.
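Both of those speak the same gRPC interface, so you can poke either one directly with crictl, the debugging CLI I'll mention again in a minute. A minimal sketch — the socket paths are common defaults but are assumptions about your setup:

```
# Point crictl at whichever runtime is serving the CRI:
#   CRI-O:      unix:///var/run/crio/crio.sock
#   containerd: unix:///run/containerd/containerd.sock
crictl --runtime-endpoint unix:///var/run/crio/crio.sock info   # runtime status
crictl pull busybox   # exercises the image service
crictl images         # list images the runtime knows about
crictl pods           # list pod sandboxes
crictl ps -a          # list containers, running or not
```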
But still, to this day, containerd accommodates several use cases, not just the Kubernetes CRI, and is now another contender in that area. And then Alibaba's PouchContainer was announced about a year ago — I think they've only really been pushing it in the last six months — but PouchContainer is fascinating. I have only put hands on it insofar as to be kind of shocked, horrified, and amazed. It imports from all the projects: it imports from Docker's libnetwork and from CNI, and it imports the OCI runtime, runc. It can run as its own tool; it can also export a CRI. It can do regular pulling and pushing of images, and it also has a peer-to-peer BitTorrent backend for images. They maintain that their system can run on RHEL 6 kernels; it uses KVM — all of it. And it's in the CRI layer as well.

So, like Antonio mentioned a few minutes ago, because so much of the conversation is now at that CRI layer, and people are making these runtimes, there's a unified interface for it rather than all those tools having their own user-interface tool: crictl, "CRI cuddle," the one I sketched above. You can interact with it, but it's largely a debugging tool. I think this is now where the conversation of how you work underneath the Kubernetes stack, or talk to the runtime, is going to be — with crictl.

So, real quick: since the 1.0 of the OCI image and runtime specs went out, there's been an abundance of these runtimes — things that can drop in and replace, largely compatibility layers, in that they know how to do certain lifecycle steps and can read this JSON structure; it's not that complicated. The stupid part that is still being argued is the compatibility of the CLI: at this point, all of these drop-in runtimes conform to what the runc CLI behavior does. It's dumb, but that's where it is.

Oracle introduced railcar, which is written in Rust. Nice, neat — I don't know the particular benefits of using it, except that it's not running Golang. If you've heard about Clear Containers, or runV from the Hyper.sh folks: both of those were using some kind of a VM backend. runV could talk to VirtualBox and Xen and KVM and other stuff, whereas Clear Containers had focused only on QEMU. Those two projects came together and are now Kata Containers. It's good stuff — we've been working with them heavily, because almost since the beginning, Clear Containers, now Kata Containers, focused mostly on CRI-O to be able to run underneath Kubernetes. And we've been working with them to get as many of their patches for the kernel inside the guest upstreamed, because they had some optimizations for the guest kernel, and they had made their own machine type in QEMU to pass over, like, POST and all kinds of BIOS pieces to make it as fast as possible. We're trying to get all that upstream.

Nabla Containers is kind of a unicorn approach — neat, it's got some security features, but you have to tailor your container image: it actually has to have, like, an executor shoved into the image that can help your container run. They have some Node.js examples, but it's a runc drop-in replacement. NVIDIA now has a fork of runc — a straight-up fork of runc, but it does all kinds of GPU- and CUDA-optimized pieces so that you can do more containery things with NVIDIA and not have to hack the shit out of that last mile.
Google has gVisor. You can use it in a non-runc-replacement kind of way, but they have a runc drop-in replacement too. It's more general-purpose than Nabla; it has some KVM offload, but it's largely a syscall-emulation layer. It will probably capture like 80% of your use cases.

Windows now has a drop-in replacement as well. The way some of the communities have had Windows native support was by calling into HCS, the Windows Host Compute Service, through a shim layer; but now there's a runc drop-in replacement, so it can be shelled out to. And I have a feeling they're also going to be working on, and announce, their own CRI replacement as well. We talked to them about using CRI-O, and they had worked with containerd for a while, and I think they were fed up with re-architecting, so they're probably going to do their own.

All right. Then, lastly, there's also a list of community pieces that are either wrappers or minimal runtimes: Flatpak has bubblewrap for limited, unprivileged containers; there's a bubblewrap wrapper; I had made an nspawn wrapper — don't use it; there's a minimal C runtime here. And there's a neat community piece here: since June, Lennart has actually started native support for the OCI runtime in nspawn. There are a few pieces missing, and it needs testing and review, but that's a neat approach — rather than some of the other pieces, you could just swap that in, and it would be more integrated into the full system stack. We've done most of the footwork of that in runc already, but it's nice to have options.

So that's that. And all of this has already been covered, but it's in the slide deck for reference — other options, now that things have gotten popular: Buildah, Podman, Skopeo, which you've already heard mentioned. Another one — this is a terrible color scheme, I'm sorry — is umoci, a project from SUSE that helps you manage the OCI image format. It's pretty useful: you can use Skopeo to pull down an image, umoci — I don't know how you pronounce it — to unpack the image, and then you can use runc, railcar, whatever, to run the image. If you were doing debugging of that yourself, you can use Skopeo and umoci together to do that (there's a quick sketch of that flow right after this).

So that's it; that's this talk. You can go to that address and see these slides. I think I have like 30 seconds — I don't know if we have enough time for questions. Anybody have a question they want to shoot out on that history, real quick? Nothing — crickets. See me later.
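For reference, that Skopeo-plus-umoci debugging flow looks roughly like this — a minimal sketch, assuming current CLI syntax, with busybox standing in for whatever image you're actually inspecting:

```
# Pull an image from a registry into an OCI layout on disk
skopeo copy docker://docker.io/library/busybox:latest oci:busybox:latest

# Unpack the OCI image into a runtime bundle (rootfs/ plus config.json)
sudo umoci unpack --image busybox:latest busybox-bundle

# Run it with runc -- or railcar, or any of the other drop-in replacements
sudo runc run -b busybox-bundle inspect-me
```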