Hello, hello. Testing. All right, should we start this? Okay. Well, good afternoon, everyone. Or should I say good morning, or good evening if you're still jet-lagged? My name is Mike Metral, and my talk is "What should I know about [insert container project here]?" Over the next 40 minutes I plan on covering several of the key projects in the container space, how each relates to OpenStack, and what each project entails. Because there's such a wide breadth of projects out there, and I have a very small amount of time, my idea is to cover as many of them as possible in rapid fashion so you get an insight into the depth of the projects. I figured I'd do something a little different in terms of format, so let's play a little game. How many of you are familiar with speed networking or speed dating? Show of hands. Okay, most of you. To make sure things don't get lost in translation: speed networking is when you have a group of people who are interested in meeting one another but have a very small amount of time. You get paired one-on-one, you have about one minute to tell the other person who you are, what you do, the highlights, and then you move on to the next person in round-robin fashion until you've met everybody. The idea is that you get exposed to everything out there, and then you branch off into a tangential conversation based on whatever piqued your interest. So I figured this is a great format to expose you to the various projects out there. Let's start with an example. When you meet somebody, you learn their name, a couple of facts about them, some things you may not know, and some endorsements. So, for example: my name is Mike, I'm a product architect at Rackspace, and for the last year and a half I've been very heads-down in the container space doing R&D work as well as market surveillance. Some things you may not know about me: I enjoy coding in Go, I love to play golf, my favorite editor is Vim, and according to my LinkedIn profile I've gotten endorsements for Python, cloud computing, Bash, and yes, even Reddit. The things you can get endorsed for on there never cease to amaze me. You get the gist, so let's get into containers. You're all pretty smart individuals, and we all know what containers are, but to make sure we're on the same page: containers bring about four to six times more utilization and efficiency per application compared to traditional VMs and hypervisors. The way they've been adopted has really aided in faster development and iteration, and they operate at pretty much close to bare-metal speeds. Some things you may not know, and people ask me this a lot: containers are not just for enabling a PaaS layer. They can certainly do that, but there's much more to them. They can modernize your stack, they can be integrated into your CI/CD pipeline, and their footprint is much, much smaller. A trend I've noticed over the last couple of months that I find interesting is that many legacy and enterprise-focused organizations are looking to jump their apps from their current implementation straight to containers, skipping over VMs. That may be because they never really made the jump to the cloudy way of doing things, but nevertheless they're seriously considering containers to do so, which is a trend I find very, very interesting.
Some things you may not know about containers: they all share the host's underlying kernel. So containers running Ubuntu and Debian can definitely work on the same host, but Windows containers running next to Debian containers, for example, isn't possible. So it's very different from virtual machines. They're very lightweight, they make app isolation easier, and they can play well across all sorts of platforms. All right, we've pretty much got that. Containers are great and all, but let's talk about the runtimes that really bring out the full features of containers. Docker is the most popular one. It abstracted LXC, and it did so in a way that made it a pluggable architecture, if you will; it also established a model for packaging and distributing applications far better than what we had before. I constantly get asked about this: the Docker engine is really the only abstraction layer you need to enable cross-platform portability. If you've ever tried to take a virtual machine from, say, AWS to OpenStack to Rackspace or what have you, you've noticed the pain points are very obvious. The boot and init process varies from platform to platform, and these VMs can be several gigabytes in size; transferring VMs from one platform to another can take a serious amount of time. Containers are very lightweight in that respect, so they allow for more usefulness and elasticity, not just in the resources but in how you use them. The Docker Hub, for the sake of an example, is a collection of over 100,000 applications. It's like the app store, but for servers, and it's really neat what you can find on there. I can take an OpenVPN container, or MySQL, or Postgres, and have it running within seconds, without knowing a single thing about how that tool or technology works, which is very different from, say, trying to build whatever binary it is you're trying to install, as well as maintain it. So it's really interesting to see how we're evolving the way we look at stacking applications on traditional servers. Docker runs on all modern Linux distros that are 64-bit, and there's recently been added support for Windows 7.1 and up. It's very fast, and it's being embraced by the entire industry as we speak. With new technology naturally comes competition, and this is exactly what CoreOS's Rocket is aimed at. It competes with Docker, and it's doing so in a different way. Rocket is really an implementation of the new appc spec that's being developed as an open-source spec in the community. They're defining different ways of how you actually describe, run, and manage containers, and how you enable discovery protocols for them, that are very different from the Docker way. So much so that the Docker manifest, the spec that really defines what a container is in the Docker world, was written after the fact, after Docker was created, whereas CoreOS said: for Rocket, let's actually build up the spec first, let's throw it out in the community, let's make sure everyone agrees on it, and then we'll implement it. So it's definitely a more thoughtful approach. And Rocket is really aimed at tackling the enterprise primitives around containers, particularly around security and image auditing. It also brings its own new image format, known as ACI, which is, again, different from the way Docker defines images.
It's still under heavy development and not production-ready, but it's definitely getting there. One key thing the CoreOS folks were harping on when it comes to Docker is that Docker established what we know as a container and made it popular, but then started dabbling in other things that didn't necessarily relate back to containers, such as launching cloud servers, creating systems around clustering, enabling wide functionality around building and running images, networking, etc., etc., and they were packing all of this into one large binary. So Rocket said: we really need to just focus on the true building blocks of what a container is and what we as an industry want it to be, and that's what it's trying to do. A couple of endorsements around Rocket: Intel's Clear Containers effort partnered with CoreOS in a joint effort to ingrain security as a core concept of Rocket. Kubernetes, which I'll get to later, recently added support for Rocket. And there have been many implementations of the appc spec out in the open; so much so, I think there are four or five implementations of the appc spec, which is four or five more than Docker has. So that says a lot about what the community thinks about Docker and where they think it should be going. So containers are great and all, but how do they relate to OpenStack? Well, for one, if you're not familiar already, the OpenStack Docker driver is really about being able to instantiate containers through Nova instead of virtual machines. It uses Glance as a backend image registry for Docker, as opposed to running your image registry in a container. It's a StackForge project. And essentially, for all practical purposes, the Nova Docker driver is an HTTP client that talks to and controls Docker via its API; I'll show a quick sketch of what that looks like in a moment. And it works well with DevStack. So, pretty simple. That's great and all, but we need something more, and that's what Magnum has really aimed at accomplishing. I'm sorry, Adrian, if you're watching; I couldn't help myself embedding a Tom Selleck PNG. You kind of have to do that. Anyways, containers are great and all, but you really need to think about the fact that they're very different in terms of the lifecycle of how you develop them and how you use them, and those are very different from virtual machines. So Magnum is aimed at being the container service that makes containers a first-class citizen in OpenStack. It uses Heat to deploy Swarm, Kubernetes, and Mesos, just a couple of the container orchestration engines out there. And it's really about providing abstraction in a two-fold manner. The first abstraction layer is what's known as the Bay type, which allows multiple tenants to instantiate the container orchestration engine, or COE for short, of their choosing, and lets it run side by side with other tenants, very similar to how virtual machines are done. It also provides an abstraction layer at the API level and in how you deploy. So Magnum's job is really about saying: hey, I want to deploy this COE; once it's done, drop me down to the native API and the respective tooling, and let me do my thing. It doesn't try to get in the middle of, or muddle, the actual use case or utilization of the COE you're deploying. Google's recent involvement in OpenStack obviously makes Magnum a sweet spot for collaboration, because Magnum supports Kubernetes, so definitely keep an eye out for where that's going in the future.
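To make that Nova Docker point concrete, here's a rough sketch in Go of what such an HTTP client boils down to, assuming Docker's default Unix socket at /var/run/docker.sock. This is just an illustration of the idea, not the driver's actual code:

```go
package main

import (
	"context"
	"fmt"
	"io"
	"net"
	"net/http"
	"os"
)

func main() {
	// Docker's Remote API listens on a Unix socket by default, so the
	// "host" in the URL below is ignored; we dial the socket directly.
	client := &http.Client{
		Transport: &http.Transport{
			DialContext: func(ctx context.Context, network, addr string) (net.Conn, error) {
				return net.Dial("unix", "/var/run/docker.sock")
			},
		},
	}

	// List the running containers, the same kind of call a driver
	// like nova-docker issues when it controls Docker over HTTP.
	resp, err := client.Get("http://docker/containers/json")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer resp.Body.Close()

	io.Copy(os.Stdout, resp.Body) // raw JSON array of containers
}
```

Every operation the driver performs, from creating containers to fetching their state, is just more endpoints on that same API.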
The next project is Corekube. This is a project I personally started and maintain. If you think of Magnum as the full-on suite slash toolbox to deploy this multi-tenant, multi-COE setup, Corekube is the complete opposite. Corekube is really just a simple, easy way to deploy a proof-of-concept Kubernetes cluster running on OpenStack in CoreOS VMs. And it does this across not only all Rackspace environments, but also pure, native OpenStack. I started it really to understand CoreOS's projects, from the OS itself to etcd, fleet, flannel, etc., and how this all ties into Kubernetes. I also added support for SkyDNS in Kubernetes, if you're familiar with that, which allows native registration and discovery for pods as they come and go. And the constant endorsement I give for Corekube, before I finish up here, is that it really doesn't require any additional services or installation to use. If you have an OpenStack available and it has Heat enabled, that's really all you need. Again, this is not production-ready; by no means use it as that. This is just a way to let you play with Kubernetes on OpenStack without dealing with the mess of setting it up. So let's get to the meat of what everyone's really here for: the COEs. First up we have, oh, Apple at its finest. Let's try this again. Sure we can sort this out. All right, and we're back. Docker Swarm. Docker Swarm is really about making a bunch of different Docker hosts look like a single virtual Docker host and giving you one single API. On top of that, it's aimed at providing a common slash standard interface for whatever COE you're choosing, whether it be Kubernetes or Mesos or what have you. If you've used Docker, it's great for one host, but when you start to branch out and think about multiple hosts, that's Docker Swarm's sweet spot, right? It really wants to let you control Docker across many different hosts. So if you've used Docker, using Swarm is pretty much identical, and there's not really a learning curve, if at all. But it's not really battle-hardened yet. There's no container failover, there's no HA for the Swarm processes themselves, and the plug-in support for other COEs is still incomplete. A perfect example of this, again with Kubernetes: if you're familiar with a pod, which is one or more containers, the key part being "or more," Swarm thinks a pod is just one container. So if you expand beyond one container, Swarm just doesn't know what to do. That kind of sets the tone for what's lacking in Swarm, and if they plan to be the standard interface going forward, being able to adapt to the various container orchestration engines they want to be the interface for is definitely going to be an uphill battle. At the same time, there's no real foundation for enabling microservices and the requirements those have, and that's really what we're all after here when it comes to containers, right: microservices. Because when you see the power containers can harness, it's really about being able to deconstruct your monolithic architectures, piecemeal things into their own separate logical divisions, and then give them the resources to do that. And Swarm just doesn't really have that. Again, I have this kind of bias towards Kubernetes, because I feel like they just started to get things right from the get-go.
They have the concept of the pod and everything around it, which I'll get to in two slides, but it really starts to define at least a talking point for how you should be architecting and restructuring your applications. Swarm is just not aimed at doing that, especially if you want it to be this kind of multiplexed way to control many COEs. However, it's still perfect for smaller dev environments of less than about 50 hosts. That is by no means a hard limit of Swarm; it's just my personal recommendation. And it's the next logical step for controlling and playing with Docker beyond just one physical host. Next up we have Mesos slash Mesosphere's DCOS and Marathon. So let's start with Mesos. For all practical purposes, Mesos is a distributed systems kernel and cluster manager, and Mesosphere's DCOS is an OS that encompasses it. A perfect example: you have the Linux kernel, and you have an OS like Ubuntu or Debian that consumes it. The same analogy applies to Mesos and Mesosphere's DCOS: Mesos is a distributed kernel, and Mesosphere's DCOS is the encompassing of that kernel, along with more enterprise-y functionality, from a UI to monitoring, etc. On top of that you can run Marathon, which is really about controlling cgroup services as well as Docker containers. And Kubernetes is actually similar to Marathon in the sense that it can run alongside it or in lieu of it. The reason is that the Mesosphere folks took a hard stance that the Kubernetes scheduler is not as sophisticated as it should be when it comes to controlling containers, so they created a shim that lets you map Kubernetes resources to Mesos resources. So you can definitely have some intermingling of the two. Another key point you may not know about this set of tools: Mesos and OpenStack get constantly compared, and people even say there's some overlap, but to compare them head-on is doing both a disservice. It's definitely comparing apples to oranges. Traditionally, Mesos is about giving services the resources they need, and OpenStack is about giving VMs the resources they need. At the end of the day, they're both about handing out resources in some way, but Mesos is definitely more catered to services, and OpenStack, traditionally, to VMs. Obviously OpenStack has moved along, in terms of projects like Ironic to instantiate on bare metal, and the Nova Docker driver to instantiate containers, but traditionally it's VMs. The cool thing is that you can actually intermingle these as well: you can run Mesos on top of OpenStack, or you can run OpenStack on top of Mesos. A nice little anecdote: about a year ago, eBay, who's been a prominent player in the OpenStack space, had an internal OpenStack cluster, and essentially every time a new developer got hired on, they would get their own VM instance with Jenkins for their builds, and this Jenkins instance would live in a VM on the OpenStack cluster. Well, as they got more developers, and each one got their own instance, it basically got to the point where they were creating technical rot, because these resources were either idling or not being used to their full capacity. So they wanted a way to restructure that and regain some resources. So they ran Mesos on top of OpenStack, and if I'm not mistaken they put Marathon on top, and long story short, they were able to minimize their footprint.
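To give you a feel for what driving Marathon looks like, here's a minimal sketch that posts an app definition to Marathon's v2 REST API. The endpoint hostname is a made-up placeholder, and this is a bare-bones illustration rather than anything production-grade:

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
	"os"
)

func main() {
	// A minimal Marathon app definition: three instances of an nginx
	// Docker image, each getting a quarter CPU and 128 MB of memory.
	app := []byte(`{
	  "id": "/demo/web",
	  "instances": 3,
	  "cpus": 0.25,
	  "mem": 128,
	  "container": {
	    "type": "DOCKER",
	    "docker": {"image": "nginx"}
	  }
	}`)

	// POST it to Marathon's v2 API; Marathon matches the request
	// against the resource offers Mesos presents and launches the tasks.
	resp, err := http.Post("http://marathon.example.com:8080/v2/apps",
		"application/json", bytes.NewReader(app))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer resp.Body.Close()
	fmt.Println("Marathon replied:", resp.Status)
}
```

Marathon then keeps the requested number of instances running, relaunching tasks elsewhere when Mesos reports a failure.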
So there's definitely interplay, interchangeability, and interoperability between Mesos and OpenStack, but it's definitely done at some expense, right? It's a little wonky, but it can be done. Some resounding endorsements around this whole space: Twitter, Airbnb, and Apple use Mesos; famously, Apple is using Mesos to power Siri. Verizon deploys all their data center services internally with Mesosphere, and Airbnb, eBay, PayPal, and Yelp all use Marathon. Mesos has been around for a couple of years now, it's definitely reached a mature level, and it's really ideal for large environments, from hundreds if not thousands of physical nodes. Next up is Kubernetes. If you haven't heard about Kubernetes by now, you're kind of living under a rock. Kubernetes is really aimed at being this fully featured, large-scale container management system modeled after Borg. And Borg is an internal cluster manager at Google that powers hundreds if not thousands of simultaneous jobs behind the famous Google apps we've all come to know and love, such as Gmail and Google Maps. It's based on decades of running production workloads and the experience around that. So who's better suited, in terms of an authoritative stance, to do this than Google? Besides, we all have confidence in Google, so why not? At least some of us do. Kubernetes is supported across various platforms, including OpenStack, Rackspace, AWS, GCE, Azure, Red Hat, etc. And some things you may not know, and I think this is really key: Kubernetes takes a stance on defining, again, what the microservices architecture should be, and it's really based around the concept of the pod. And the pod is, I believe, the truly perfect way of encompassing the atomic unit of what an app should be. It's, again, one or more containers that share the same volumes, the same cgroup resources, and the same networking namespace, and the containers living in the same pod communicate with each other via localhost. When you couple that with other concepts from Kubernetes, such as replication controllers, services, labeling, etc., it starts to fill out the story of how you actually think about designing your applications to fit this new microservices model. Replication controllers are a way to get self-healing at a cluster scope; they give you replication abilities and really let you think about the policies you want to enforce. Couple that with services, which are essentially a load-balancing service, aka a single choke point in front of multiple pods, and you start to see that Kubernetes has really given you a language to describe your applications. Kubernetes has no idea what your application is, from a hello world to your standard three-tier web app. It just knows, and gives you, the concepts that allow you to define your business policies as well as your quality-of-service requirements, and it goes about making sure those are actually enforced.
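To make the pod concept concrete, here's a minimal sketch of a pod manifest being posted straight to the Kubernetes API server. The image names are hypothetical placeholders, and I'm assuming the API server's insecure local port for brevity; a real cluster wants auth and TLS:

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
	"os"
)

func main() {
	// A pod: one or more containers sharing volumes, cgroup resources,
	// and a network namespace. The sidecar here can reach the app over
	// localhost, exactly as described above.
	pod := []byte(`{
	  "apiVersion": "v1",
	  "kind": "Pod",
	  "metadata": {"name": "web", "labels": {"app": "web"}},
	  "spec": {
	    "containers": [
	      {"name": "django", "image": "my-django-app", "ports": [{"containerPort": 8000}]},
	      {"name": "logger", "image": "my-log-shipper"}
	    ]
	  }
	}`)

	// POST the manifest to the API server's v1 endpoint.
	resp, err := http.Post(
		"http://127.0.0.1:8080/api/v1/namespaces/default/pods",
		"application/json", bytes.NewReader(pod))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer resp.Body.Close()
	fmt.Println("API server replied:", resp.Status)
}
```

Pair that pod with a replication controller and a service, and Kubernetes keeps the right number of copies alive and gives them a single stable address.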
There are also add-ons around Kubernetes that come built in, if you want to enable them, around monitoring for containers, Elasticsearch, a UI, and DNS. And DNS is very important here, because when you think about restructuring your applications for the container world, a perfect example is: say I have a Django app and a MySQL backend. You can put each of those in their own independent container. If you're thinking about one physical Docker host, the way they can communicate is through Docker links; for all practical purposes, a Docker link is a private network tunnel between the two, and the Django is basically given the socket for how to access MySQL. So this is great and all on one host, but when you start to think about multiple hosts, that doesn't really apply; links aren't natively meant to work like that. So a pattern has emerged known as the ambassador pattern, which essentially puts a proxy between the containers, residing on both hosts, and these ambassador proxies serve no purpose other than to allow the communication between different hosts in your cluster. But it too is flawed, because it also depends on Docker links, and the biggest flaw of Docker links is that if the Django depends on MySQL, the second the MySQL container goes up or down, or the socket changes, or something happens to it, there's no way of notifying the Django of those changes. So Docker links are pretty much useless if you have no way of dynamically finding out that information. So the DNS is very important, because through SkyDNS, for example, it ties in well with the concept of Kubernetes services, again, which is a load balancer; DNS natively lends itself to letting you communicate using the current information stored for the container. I'll show a tiny sketch of that in a second. And this is not only integral to the overall concept of containers, but it comes natively built into Kubernetes, something that does not exist naturally in Swarm nor in Mesos, and that's very important because containers are very ephemeral: they come up and down much more often than VMs. Other COEs and ecosystem tools in the container space are looking to integrate with Kubernetes if they haven't already, and that speaks volumes about its front-runner status. It's really ideal for about 100 nodes right now; there was recently a publication of how they basically tested it supporting 100 physical nodes and 3,000 pods, and Google, having found the issues they were hitting, plans on being able to support up to 1,000 nodes by the end of this calendar year. The force behind Kubernetes is just astounding from a community standpoint: since their first commit in June 2014, Kubernetes has seen almost 20,000 commits from almost 600 contributors, and they're averaging about 250 to 300 commits per week. To put that into perspective, Ars Technica recently stated that kernel version 3.17, if I'm not mistaken, saw around 1,300 commits per week, so Kubernetes is at about a fifth of the commits compared to the Linux kernel, which is pretty impressive. It's being used in production today by Box, eBay, Red Hat, and many others.
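Going back to the links-versus-DNS point for a second, here's the entirety of what service discovery looks like from the app's side, assuming a Kubernetes service named mysql in the default namespace and the conventional cluster DNS suffix:

```go
package main

import (
	"fmt"
	"net"
	"os"
)

func main() {
	// Rather than a Docker link baking MySQL's address into the Django
	// container at start time, the app resolves a stable service name on
	// every connect. The cluster DNS (SkyDNS here) answers with the
	// service's current address no matter where the backing pods landed.
	addrs, err := net.LookupHost("mysql.default.svc.cluster.local")
	if err != nil {
		fmt.Fprintln(os.Stderr, "lookup failed:", err)
		os.Exit(1)
	}
	fmt.Println("connect to MySQL at:", addrs[0])
}
```

Because the lookup happens at connect time, the MySQL pod can die and get rescheduled on another host, and the Django app never has to be reconfigured.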
Next up we have a set of specialized systems, kind of corollaries or one-offs, that definitely still fit into the container space but aren't necessarily a COE themselves. First up is Engine Yard's Deis. Deis is really aimed at being a PaaS layer to facilitate app deployment and management. It's built on Docker as well as CoreOS's products, from etcd and fleet to the OS itself, and it's really about structurally abiding by the Heroku twelve-factor methodology. So for all practical purposes, it's a private Heroku clone. However, it lacks persistent storage and state-aware support for applications, and that's very important when you talk about the container world, because in Docker the philosophy is that everything should be in a container, including your more stateful information such as databases, message queues, etc. I'll get to this more in the next topic, but that is a big key point about containers: if you don't have a way to support stateful stuff, you're kind of doing your application a disservice. Deis also leverages Heroku buildpacks, speaking to the Heroku story; it can deploy anywhere, on-prem or in the cloud; and v2, which they're about to release soon, is set to run on top of Kubernetes, again going back to the story that Kubernetes is kind of the showrunner here. Deis is also being used by some small and medium businesses; there aren't a whole lot of Fortune 1000 companies, I noticed, but nevertheless it is getting used. Next up we have Prime Directive's Flynn, which for all practical purposes is a direct competitor to Deis, also a PaaS layer, aimed at solving stateful problems as well as the more stateless stuff around the Heroku model. It's less prescriptive in the technology it uses than Deis, but again, it is a private Heroku, though it's not limited to the twelve-factor methodology. It does provide an appliance for auto-provisioning, HA, and failover abilities for Postgres as a backend, and it's looking to expand that to other databases as well. Some endorsements around Flynn: Coinbase, Shopify, and CenturyLink all utilize it. Next up, and this is very different from Deis and Flynn, is Flocker. Flocker is a data volume and multi-host container manager. Before I go into the details of what Flocker does: again, the whole point of this is that everything should be in a container; at least, if you've bought into the container story, that's what you're trying to do. So when you start thinking about how to put your databases and your message queues into a container, given the fact that containers are so ephemeral and they come up and down, you don't want your stateful stuff to just drop; you want to be able to persist that data in some capacity, and that's what Flocker is really aimed at doing. It's about letting you use containers for your more stateful stuff, and it does this in two different ways: it uses a backend to enable a shared or local storage fabric, and it uses a front-end proxy to do the proper routing to the containers you're utilizing, depending on where they're at. So you don't have to actually maintain or worry about your containers residing on the same physical host; if they are, that's great, but if they're not, you still want to be able to communicate with them. The backend storage actually has support today for AWS's Elastic Block Store, and there's even support for OpenStack Cinder. EMC is enhancing Flocker to work with their XtremIO and ScaleIO drivers, VMware has also partnered up with them to enable their drivers, and it's now available as an official Docker plugin as of, I believe, Docker 1.7. Flocker just hit version 1.0, and they're definitely picking up a lot of steam, so keep an eye out for them if you're worried about, or interested in, how to maintain your stateful data. Next up are micro-OSes. We all know CoreOS; it's essentially a minimal Linux OS aimed at being the distro for massive server deployments.
It really describes an OS in a different way, one that embraces containers. It provides a subset of tools, a toolbox of your basic binaries, and then it says everything else in userland needs to live in a container. That really moves the conversation around maintaining, upgrading, and updating your servers, because long gone are the days where, if I update, say, OpenSSL, it breaks some package that depends on it. If everything in userland is self-contained in a container, then everything underlying can be updated, obviously outside of, say, the kernel, but everything else can be updated and the application itself will not be bothered. CoreOS is a fork of Chrome OS, and its flagship projects, etcd and fleet, were born out of necessity. Specifically, CoreOS is aimed at being a way to manage and update your OSes by today's standards, and they really wanted a way to not only update the machines accordingly, but to make sure the reboots happened in a phased approach. So they needed a way to track a semaphore, to make sure a subset of the machines rebooted at a time instead of the whole cluster. etcd really was a way to store that semaphore, and fleet was a way to coordinate the rollout of the reboots. Obviously they've expanded past that, and they are being used by multiple different companies today for various different uses, but that's certainly how they originally started. CoreOS recently acquired Quay.io for both public and enterprise container registries, and it's available across all platforms, including OpenStack.
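That semaphore idea is easy to picture. Here's a toy sketch of a single-slot reboot lock using etcd's v2 HTTP API and its compare-and-swap support; the key path is made up, it assumes the key was seeded with the value "free", and CoreOS's real tooling is more sophisticated than this:

```go
package main

import (
	"fmt"
	"net/http"
	"net/url"
	"os"
	"strings"
)

func main() {
	// Atomically flip a shared etcd key from "free" to "held" with a
	// compare-and-swap. Only one host in the cluster wins the PUT;
	// etcd answers a failed CAS with 412 Precondition Failed.
	endpoint := "http://127.0.0.1:2379/v2/keys/cluster/reboot-lock?prevValue=free"
	body := strings.NewReader(url.Values{"value": {"held"}}.Encode())

	req, err := http.NewRequest("PUT", endpoint, body)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	req.Header.Set("Content-Type", "application/x-www-form-urlencoded")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer resp.Body.Close()

	if resp.StatusCode == http.StatusOK {
		fmt.Println("lock acquired: safe to reboot")
	} else {
		fmt.Println("lock held elsewhere: wait and retry")
	}
}
```

The compare-and-swap is what makes it safe: two hosts can race, but etcd guarantees only one PUT sees prevValue=free succeed.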
Next up we have Project Atomic, out of Red Hat. It's kind of similar to CoreOS, again a minimal OS aimed at managing containers, but they took a different stance in how they're approaching it. It's really a security-first focus, more enterprise-driven, and it's baked in with SELinux by default, something CoreOS didn't do until their most recent release, 808, so Atomic definitely has a leg up on CoreOS from that stance. My Fedora and Red Hat friends tell me that using Atomic is very similar to Fedora 20. It started about six months after CoreOS, so they had some catching up to do for sure, but Atomic is definitely best suited for your Red Hat stack, especially if you start considering things like OpenShift. There's native support for Kubernetes, Atomic itself is available as open source today, an EA of an enterprise platform is in the works, and obviously Atomic becomes an integral part of the OpenShift story, especially if you've bought into the Red Hat ecosystem. Lastly, in the micro-OS topic, is Rancher Labs' RancherOS. Kind of the same concept, right, an OS based around containers, but it's doing something differently: it's saying that not only do you encompass all of the OS binaries you're providing in a Docker container, but userland as well. And it does that in a two-fold way: it provides a Docker daemon for the OS itself, on the system side, and a Docker daemon for the userland apps. So updating the OS components is as simple as rolling out a new container, which is very, very easy and efficient. It's still very early in the Rancher Labs story, but they are offering a beta platform, and even though they're smaller compared to CoreOS and Atomic, it's definitely worth keeping an eye on them. For the sake of giving you guys a visual of what all these different tools do: on the left is a Venn diagram I drew that encompasses the major players and the buckets they fit in. You have traditional PaaS layers, your COEs, and your specialized offerings, and you can see where some of them blur the lines, such as Flynn and Flocker. On the right is a mind map of the open container ecosystem, like a 30,000-foot view. This came out of Red Hat, and I definitely encourage you to visit that URL, because it really does encompass all the different aspects that are using containers and how they're being expanded on, in terms of the OS itself, databases, configuration management, as well as the OSes, so definitely check that out to understand how things fall into perspective. To reel things back into OpenStack, and not just harp on containers so much: what should we as a community do for containers? Well, for one, we have to be very aware of the shift in application development, and we have to be able to accommodate that for containers. So much so that some people are worried there's not really a necessity for OpenStack anymore if containers are going to be so predominant. That's not really the way to look at it: there's still a need for infrastructure management, and OpenStack is best suited for that, if we know where and when to draw the lines and decouple responsibilities. I strongly, firmly believe that OpenStack should not be the end-all, be-all tool for everybody. The Magnum approach, for example, where it says "deploy the COE," then takes a step back and drops you down to the native APIs, is definitely a true way of saying where OpenStack really fits well into this space. Knowing that, being able to draw the lines and leave the responsibilities of the management aspects to the Kubernetes, Swarms, and Mesoses of the world, and not having OpenStack bleed into that, will definitely make things easier in terms of adopting containers into your stack. When it comes to OpenStack, if there's one thing I'll leave you with today, it's this: there's a lot of noise out there; please pick the right tool for the job, and make sure those tools fit and integrate well with your stack. And just because you can mix and match these tools does not mean you should. Perfect example: eBay ran Mesos and OpenStack, and they got it to work, and they were able to run Marathon with it. Cool, it worked. But do you really want a convoluted, complex stack like that? It just makes for further headaches down the road. With that said, I point you to this GitHub link, a couple of white papers I wrote several months ago that are still very relevant today; this presentation was a condensed version of them. They're easy reads, perfect for the plane ride back home if you have nothing to do for 12 hours. And if you're on Twitter, follow me on Twitter. And I'll take any questions if you have any. Sure, so, where do I start? So yeah, I'll repeat the question. The question was: can I compare the appc spec and how that falls in line with the Open Container Initiative? So you have to understand, right, again: Docker the company created Docker, and they defined the manifest and the specs after the fact. So some people obviously think that's a wonky way of doing it, because we should define what we want out of these tools before we actually implement them. If you guys have been following, there was some debate, some bad blood, between the CoreOS folks and the Docker folks, because CoreOS just came out of nowhere with the appc spec and with Rocket, and because they so firmly believe
that Docker didn't get it right the first time. The appc spec is a communal way of saying: what do we want containers to be, and what should those containers actually encompass? And so the foundation was kind of a way of saying, hey, you know, we're friends again, but also, you're right, we should probably take a community-first approach. So Docker said: we're going to donate the runC runtime, which essentially is a wrapper for libcontainer, and yeah, we'll embrace the CoreOS folks and their necessities around the appc spec. But that was a couple of months ago, right, that was in July, and not really much has changed, so the community is kind of divided right now between the Docker camps and the appc spec camps. As I said earlier, there are four or five implementations of the appc spec, so there's definitely some meat behind that. Time will tell how Docker goes forward with this. A lot of people believe Docker is just trying to play nice for the sake of playing nice, but don't believe they have real intentions of adopting the appc spec, or at least they want to be the most influential voice at the table. So, does that answer your question? Cool. Any other questions? Going once, twice, sold. All right, cool. Thanks, guys.