Hi everybody. My name is Ben Kero, and I'm here to talk to you about quick prototyping and continuous integration using LXC and Puppet.

Just a little bit of background: I'm a release engineer at Mozilla. The things I work on day to day are all of our version control systems, and primarily the hosting of them. If you're familiar with Git or Mercurial, a lot of our build systems and a lot of the people who contribute to Mozilla have to check out code from these every day. These systems take a while to scale and they're actually pretty intricate in the way they're set up. We run all of the version control systems you've probably ever heard of, including some you haven't, simply because we don't tell the engineers which ones they can and can't use, which means we end up supporting all of them.

To give you some context: the way infrastructure at Mozilla has been done since the dawn of time, which in this case means 2001 to 2014, is that everything has been deployed with some kind of configuration management. Most recently that's been Puppet; it's what we're using now, we're very happy with it, and it's what we're sticking with. The problem was that we didn't really have a documented environment describing what everything should look like, and we didn't really have a target. We just had a Puppet base class, which is a set of rules and packages that should apply to every host. Past that, the infrastructure engineers each developed their own modules, and the way they did that was ad hoc. If somebody wanted to do something a particular way, even if they weren't very familiar with Puppet, they could code up a module and submit it, and if it ran, it worked, and that was that. There was no real peer review and no real documentation. That got us to a certain point, but once the modules got bigger, it created a very big problem for us.

So I want to frame the problem first. The problem is that developers and upstream projects needed a way to replicate our production environment. Say we have a problem with Git on our Git server: we want to be able to go to the Git people and say, we have this problem, we understand it's a scaling problem, this is the version of the code we're running, and here is the entire system that goes with it. That way they can easily replicate the problem, add their own tools, start debugging it in ways we can't even imagine, and then help us solve our problem for us. Additionally, we wanted the development environment for our developers to be really close to production. What normally happens, and you hear this a lot as a kind of fight between sysadmins and developers, is that developers code on their laptops, which usually means OS X or something like that, while production is something like Red Hat Enterprise Linux. These two are very different when it comes to the libraries their applications are using, and that ultimately results in the developer saying "it worked on my machine, why doesn't it work in production?" We wanted to minimize that, because we were sick of hearing that excuse.

So we had a couple of requirements for it.
We needed developers to be able to bring this up by themselves, without having to read through pages and pages of documentation and understand a lot of things that, frankly, they didn't care about. We also wanted it to be public, shareable, and reproducible, so we could give it to anybody, even people not affiliated with Mozilla, and they could recreate it to see whether it's something they'd want to deploy for themselves or their own company, or whether it's not right for them. And it needed to be as close to production as possible.

So we looked at a few different options. When you think "I want a development environment and it's not my laptop," you automatically go to a virtual machine, and that's exactly what we were thinking of. We looked at virtual machines on workstations, and they had some problems. Things like KVM or VMware work, and theoretically they let you match production as closely as possible. But generating these images and distributing them to people is expensive and takes a lot of time. It's not something people want to check in, and nobody wants to grab a new nightly, download 800 megabytes to their laptop, and set it up again every time there's a change. It's just not workable, and it's slow. You can also only run a few of these at a time: if you try to start more than two or three virtual machines on your laptop, you'll find out pretty fast that it doesn't scale very well.

Then we looked at doing the same thing in the cloud. That does solve the problem of setting your lap on fire and wearing out your hardware, but you need to be connected to the internet, and we have contributors all over the world. Some of our contributors in Kenya, for example, told us there are no cloud providers there, so they would have to deal with the latency to Europe or the US, and that's just not workable for them. In addition, it defeats the purpose, or at least makes it very difficult, to share these images with contributors, because they're stuck in our Amazon account. We can't really share them except by sharing AMIs, and that means telling other developers they have to use Amazon too.

So we started looking at container solutions. These have some problems, but they have some really great advantages too. The first one we tried was the bare container option. The mechanism in the Linux kernel that's popular these days is called cgroups, so we tried using cgroups together with copy-on-write images, which are a lightweight way to take these AMI-like images and copy them without using more space, in a very efficient manner. The problem is that this is really hard to use, so teaching developers how to do it would take days, or at least hours, of their time, and it would teach them things they frankly don't care about. The other problem is that the copy-on-write filesystems that make this convenient aren't in the standard Linux kernels: you have to install extra packages or recompile your kernel, and that's no fun and nobody wants to do that anymore. And one of the drawbacks is that it's Linux only.
So if your developers are on Windows or Mac, they need a virtual machine anyway to run and install all of these things.

We also looked at using Docker. We built a solution with it probably about a year ago, and Docker has matured quite a bit since then, so some of the reasons we didn't use it before have now gone away. But one really bad problem it still exhibits is that it doesn't clean up after itself. If you install Docker, create images, create instances, throw them away, and do this a lot, you'll figure out pretty fast that you run out of disk space. Then you need a big long bash command that takes a list and deletes all of the old instances you're not using anymore, and you can't really do that if you have any instances running, because there's no good way to tell what's running and what's not, so you end up deleting everything and it's just a big mess (a rough sketch of that kind of cleanup pass appears a bit further down). So we didn't end up doing that. If you like Dockerfiles, then this is totally a solution for you. With our infrastructure we wanted to run public code, and Dockerfiles can be used a bit like configuration management: they're a way to describe how you want a Docker container to be set up, with directives like "run this shell command," "put this file in this place," or "set this environment variable." That's cool, but it's not in the same realm as a full configuration management system like Puppet. Additionally it's Linux only, but you can also give an API to developers; that's one of the newer features that makes it pretty cool nowadays.

In addition we looked at VMware vSphere, which is VMware's cloud offering; you can think of it as VMware's private cloud, like OpenStack or a private EC2. It works, but we couldn't use it and still give accounts and access to that infrastructure to people outside the organization, because there's a very real security risk in giving them access to infrastructure machines where they could essentially run whatever they want; there's no real mechanism for limiting their permissions. Additionally it's not open source, it's expensive, it's still in beta status, and as far as I know from our infrastructure team it hasn't worked out very well for us, though we're still trying it out.

The last idea was EC2, which I touched on earlier. You can spend a bunch of money and get really fast VMs, or spend a little money and get slower VMs, and that's totally a solution. It's very easy, and developers are very comfortable with it, because you can go to aws.amazon.com, click around, and make yourself a VM, and if you have an image that matches production then it's really easy for developers to bring up. We keep this around, it's still a viable alternative, and we support it. The problem is that you can't do offline development, and you have to create yet another system for your volunteers to get these images and install them into their own AWS accounts, which might not be something they have the money for, or something they want to do, or it might not even be legal in their country.
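For reference, the Docker cleanup chore I mentioned looks roughly like this. This is a sketch using standard docker commands of that era, not the exact script we ran:

    #!/bin/sh
    # Rough sketch of the "delete everything" cleanup described above.
    # WARNING: this removes *all* containers (running or stopped) and any
    # untagged images left behind by repeated builds.

    # Remove every container, forcing removal even if it is still running,
    # since there is no easy way to tell which ones are still needed.
    docker ps -a -q | xargs -r docker rm -f

    # Remove untagged ("<none>") images so they stop eating disk space.
    docker images | awk '/<none>/ {print $3}' | xargs -r docker rmi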
So we decided on the last option: LXC and Puppet. I'll get to what LXC is in a minute, but the big requirement it imposes is that developers need access to a Linux host. It's Linux only, but it lets us match the environment we have in production very closely, and it allows anybody with access to a Linux machine to set this up for themselves, which was really valuable for us and exactly what we were seeking to do.

So what are containers? You can think of containers as an operating-system-level hypervisor. A machine-level hypervisor, something higher level, would be a virtual machine, where it's emulating a keyboard, a network device, a hard disk, and all of that. With containers you don't need to worry about that; it's done inside the operating system at a lower level. You can think of it as one kernel with multiple userlands. You can have your one kernel running on your laptop, and then have SUSE running in one container, Red Hat in another, and three more Red Hats if you want to test something crazy; it doesn't matter.

There are two real implementations of this for Linux. One is called cgroups, which is what I mentioned earlier, and the other one is called OpenVZ, and I'm going to talk about both in a little detail. cgroups stands for control groups, and it's been a feature of the Linux kernel for, I think, about five years now, so it's been in there for quite a while. It provides resource isolation: it allows you multiple userlands, and you can set memory limits, set how many CPUs a group gets, set a whole bunch of other limits, and freeze things, and you can do this for a single process or for a whole system (there's a small hand-driven sketch of this a little further down). What I'm talking about today is doing it for a whole system, so you're creating virtual-machine-like things inside your workstation. If any of you are familiar with chroot, it's kind of like that, but it provides a few more advantages: you can have real network interfaces, you can limit other resources, and it creates separate process and user tables. With a chroot, if you do something inside it that starts a daemon, say, and then you exit the chroot, the daemon is still running and still consuming resources on the host, which can create a bunch of havoc, and the only way to get around that is to reboot or spend two days debugging the stupid thing.

The other container option is OpenVZ, and I'm not going to say very much about it, but it used to be the leading container option for the Linux kernel. A lot of the cheaper web hosts or shell hosts out there that you can buy shell access from still use it; I still use it. It was never really part of the Linux kernel, it was always an out-of-tree patch set, but it was really popular because it existed long before cgroups and it had better isolation, so your containers theoretically couldn't talk to each other any more than they could under cgroups. These companies still use it and they're very happy with it and don't want to change, but there were never many contributors and nobody's really working on it anymore.
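Before moving on to LXC, here's that hand-driven cgroups sketch, just to show the mkdir-and-echo style of interface that the tooling wraps for you. It assumes a cgroup v1 layout with the memory controller mounted at /sys/fs/cgroup/memory, which was the common layout on distributions of that era; paths and the limit are illustrative:

    #!/bin/sh
    # Minimal cgroup v1 sketch (run as root): limit the current shell and its
    # children to 256 MB of RAM by hand.

    CG=/sys/fs/cgroup/memory/demo

    mkdir -p "$CG"                                             # create the control group
    echo $((256 * 1024 * 1024)) > "$CG/memory.limit_in_bytes"  # cap memory
    echo $$ > "$CG/tasks"                                      # move this shell into it

    # Anything started from here on inherits the limit, e.g.:
    #   ./memory-hungry-test

    # Cleanup: move the shell back to the root group, then remove the empty group.
    echo $$ > /sys/fs/cgroup/memory/tasks
    rmdir "$CG"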
So LXC, which is what I'm primarily talking about today, is a set of convenient scripts on top of cgroups. It lets you do things like create, stop, start, and destroy containers; it's not really adding any features on top of cgroups, but it makes them a heck of a lot easier to use, and it makes them something your developers could actually use if they wanted (there's a short example session below). It also has some other operations, like cloning: if you have a container you really like and you want to make a copy and try something on it, you can just clone it. You can freeze and unfreeze containers, similar to virtual machines. And there's execute, which lets you run just a single process, like a single Apache process, which is really useful if you don't want to bring up an entire system but just want to run one thing.

The way LXC handles the creation of these containers is through what are called templates. I'll get to those in a second, but they're basically shell scripts that describe how to install a system and what it should look like, and they can handle complex resource setup, so you can have things like three network interfaces in a container, or pass block devices, or video cards, or something else like that into the containers.

Compared to other dev-environment strategies, like the VMs we talked about before, this is a lot cheaper, and you don't have to be shelling out to EC2 every time you want to create a test environment or just try something out. It's also good because it's part of the vanilla Linux kernel and supported by our vendor, which is Red Hat, so if we have a problem with it we can file a bug with Red Hat, or post to the mailing list — there are mailing lists and IRC channels for LXC — or just ask in #lxc, where pretty much all of the core contributors hang out; no matter what time of day it is, someone will be in there and happy to help you out.

Some of the downsides: there's less flexibility. Like I said, containers are really Linux only, so if you're on Mac or Windows or something else you're going to need a virtual machine to get at these things, which is a problem, but it's something our devs can work through. And because it's entirely in the vanilla Linux kernel, you don't need to go poking around or recompile your kernel; you install a single package on a Linux machine and you can start playing with this.
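Here's roughly what that command set looks like in an LXC 1.0-era session; the container and template names are just examples:

    # Create a container from a distribution template (slow the first time,
    # fast afterwards because the template caches a master copy).
    sudo lxc-create -n web0 -t centos

    # Clone it while it is stopped, to get a scratch copy to experiment on.
    sudo lxc-clone -o web0 -n web0-scratch

    # Start it in the background, then get a shell inside it.
    sudo lxc-start -n web0 -d
    sudo lxc-attach -n web0

    # Freeze and unfreeze it, similar to pausing a virtual machine.
    sudo lxc-freeze -n web0
    sudo lxc-unfreeze -n web0

    # Run a single command in an application container instead of booting
    # a whole system (the "single Apache process" use case).
    sudo lxc-execute -n web0-scratch -- /bin/sh -c 'hostname; ps aux'

    # Tear everything down when you're done.
    sudo lxc-stop -n web0
    sudo lxc-destroy -n web0
    sudo lxc-destroy -n web0-scratch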
Some of the reasons you might want to choose something else, like Docker, are that you really want the portability of Dockerfiles. They're very easy to understand — they're documented on one page on Docker's website, you can read it in about an hour, and even if the only thing you know is shell scripting you can understand them and get them working for you, which is really cool. But like I said, it still has the problem of cleaning up the old instances it leaves around; there's no real documentation on that, but if you join their IRC channel and ask around they'll hand you a big long shell script that fixes it for you. Docker is also useful if your developers aren't running Linux as their primary operating system, because it has an API you can talk to: instead of everyone running all of this on their local machines, you can set up one host for everybody on your project or at your company and they can just start using that.

Now I'm going to talk a little bit about the configuration management side, so we're done with LXC for now. This is about things like Puppet, Chef, Ansible, or Salt. Can I see a show of hands — who here has dealt with configuration management before, with one of these? Okay, good, that's a lot of people. In this talk I'm not really focusing on hardcore Puppet features, so even though this uses Puppet, it should be very easy to apply to whatever tool you've used before.

This is just a basic pattern in Puppet. We have a class called apache, and we want to do Apache things, so we say: there's a package called httpd and we should make sure it's installed; there's a config file for Apache that tells it what port to listen on and things like that; and there's the service, which we make sure is running and will start when the system starts. We call this very simple pattern "package, file, service," and you see it a lot whenever you do configuration management. The last line down here, line nine, is just an ordering of things: install the package, then put the file in place, then manage the service. That's a Puppet thing, because Puppet can sometimes do things out of order. (A sketch of this pattern follows below.)

Building on that a little, we have higher-level classes; some people call these bricks, some people call them meta classes. The way I try to look at it is that you're describing what you want a machine to be. So we could have one called webserver or dbserver, and it might pull in multiple classes. Here we have a class webserver: we make sure we include the apache class, plus other nice things to have on a web server, like the Nagios checks it should be running to make sure the web server is actually up and operating correctly, and Logstash so the logs get pushed out somewhere we can read them later. Lines 6 and 7 down here are what Puppet calls the node definition: it takes a hostname, or in this case a regex — anchored at the start of the line, it looks for "web", then some digits, then ".dc1.example.com" — and it says to include the webserver class. That pulls in the apache class, the NRPE stuff, and the Logstash stuff, without you having to specify each individual component, and it's really useful for just describing machines.
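Reconstructed from that description, a minimal sketch of the package/file/service pattern, the role-style class, and the node definition, applied masterless with puppet apply. The file content, the nrpe and logstash class names, and the hostname regex are illustrative, and the role classes assume the matching modules are already installed:

    #!/bin/sh
    # Sketch of "package, file, service" plus a role-style class and a node
    # definition, applied with `puppet apply` rather than a Puppet master.

    cat > /tmp/site.pp <<'EOF'
    class apache {
      package { 'httpd': ensure => installed }

      # Config file telling Apache which port to listen on.
      file { '/etc/httpd/conf/httpd.conf':
        ensure  => file,
        content => "Listen 80\n",
      }

      service { 'httpd': ensure => running, enable => true }

      # Ordering: package, then file, then service.
      Package['httpd'] -> File['/etc/httpd/conf/httpd.conf'] -> Service['httpd']
    }

    class webserver {
      include apache
      include nrpe       # Nagios checks  (assumes a module provides this)
      include logstash   # ship logs out  (assumes a module provides this)
    }

    # Node definition: any host matching web<digits>.dc1.example.com gets
    # the whole webserver role.
    node /^web\d+\.dc1\.example\.com$/ {
      include webserver
    }

    # Hosts that match nothing get nothing (keeps `puppet apply` from erroring).
    node default { }
    EOF

    puppet apply /tmp/site.pp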
The way we actually use these Puppet classes, getting back to LXC, is that we put them into the templates used when we create the containers themselves. The way the lxc-create script works is that it builds a golden image that lives in /var/cache/lxc. So if you want to install, say, Ubuntu trusty, and you run lxc-create with that template, it creates a master copy under /var/cache/lxc/trusty, and whenever you want another one after that it just reuses that pre-built copy. That's how it speeds things up, and it gets really fast if you do copy-on-write on top of it. All of these template scripts are written in shell, and they're executed whenever you run lxc-create. I wrote some custom ones to match our production environments more closely, and I'll show you what one looks like on the next slide, but these are some of the things you can do beforehand: you can preinstall packages if you want more packages in your base image; you can apply a Puppet base class, if you have a base class you're going to use across a lot of the modules you're developing; you can pre-install Puppet certs if you want these containers to talk to a Puppet master; and — something I really like that I'll come back to later — you can create multiple containers with each lxc-create command, which turns out to be really powerful, and I'll tell you why in a bit.

This is a little big, but this is roughly what one of those templates looks like. If you're familiar with bash, there are just two functions here. One is called download_centos, and it sets a few things like the cache directory and the packages it's going to install — this is a base-level CentOS system, so you get the package manager, some base files, and Puppet. It creates the cache folder, and then, if the cache already exists, it just copies it into place; otherwise it runs yum and installs into that location. A rough sketch of that function follows.
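This isn't our actual template, just a stripped-down sketch of that download-and-cache function; the package list, paths, and the assumption that yum can reach a CentOS repository from the host are all illustrative:

    # Sketch of a template's download_centos()-style function: bootstrap a
    # minimal CentOS rootfs once, cache it, and reuse the cache afterwards.
    download_centos() {
        cache=/var/cache/lxc/centos          # golden-image location
        rootfs=$1                            # container rootfs to populate
        packages="yum initscripts openssh-server rsyslog passwd puppet"

        if [ ! -d "$cache/rootfs" ]; then
            mkdir -p "$cache/rootfs"
            # First run: install a base system into the cache with yum.
            yum -y --installroot="$cache/rootfs" --nogpgcheck install $packages
        fi

        # Every run: copy the cached golden image into this container's rootfs.
        mkdir -p "$rootfs"
        rsync -a "$cache/rootfs/" "$rootfs/"
    }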
The other function, apply_puppet, down here, basically copies the Puppet module we wrote into the correct place for Puppet to consume, and then runs librarian-puppet, which is a tool that resolves dependencies between modules: if the module I wrote depends on other modules, it goes and installs them into the right place automatically. That's really valuable if you have infrastructure that requires a bunch of modules you didn't write and you need to make sure they're in the right place; it also keeps them up to date, which is handy. Then the little puppet apply command down at the bottom is the key part: it just says "apply the hgweb class I've defined," so let's try it out and see whether it installs and runs correctly. A sketch of that step follows below.

That sort of thing improves CI a lot. If you're using Jenkins or Travis CI for infrastructure things like this, one problem is that the last Puppet run can affect the Puppet run you're doing now. Jenkins likes to make everything live in its own directory, but there's nothing that really stops a job from affecting the system outside of it, and that can be really bad when you're applying these sorts of changes. One of the things LXC gives you is a vanilla system: every time you commit a change and want to test that everything works correctly, it's applied to a vanilla system, so it doesn't matter what you pushed last; you're guaranteed it won't affect this run.

It also gives you multiple isolated environments, which basically means that if you want to test your package on, say, Ubuntu 12.04, Red Hat 5 and 6, and CentOS 5 and 6, you can do all of those at the same time on the same system without worrying about them interacting with each other. That's also really valuable for us, because we have a lot of hosts running Red Hat 5 and 6. The turnaround time is much faster than with virtual machines, too, because you don't need to wait for VM images to copy: copying an LXC container takes maybe five or six seconds, versus minutes for a virtual machine, and destroying VMs is also expensive, and because they have to do all the virtualization work they actually boot and run slower.

It also lets devs easily reproduce the CI environment. If they submit a job and it fails, and they read the build logs and don't know why and need more information, that has historically been really difficult: they couldn't get the same environment to reproduce the same build errors. With this, they have the exact same environment, because the environment they develop in is the environment continuous integration runs in as well, and that makes CI a lot more useful for us. It also improves prototyping, because it allows you to test new code on the production OS. You don't have to test it on your laptop, give it to the IT people, hope they deploy it, and hope it runs the same on their machines as it does on yours, because yours and theirs already match. And like I said with CI, it lets you do this across multiple environments, so if you think this thing is going to run in many different places, you can test them all the same way.
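Here's a sketch of that apply_puppet-style step. The module name, hgweb, comes from the talk; the paths, the Puppetfile, and the decision to chroot into the rootfs rather than apply after boot are assumptions of this sketch:

    # Sketch of a template's apply_puppet()-style step: put our module and its
    # dependencies in place, then apply the class we care about to the new rootfs.
    apply_puppet() {
        rootfs=$1
        moduledir="$rootfs/etc/puppet/modules"

        # Copy the module we're developing into the container's module path.
        mkdir -p "$moduledir"
        cp -a /srv/puppet/hgweb "$moduledir/"

        # Resolve and install the modules ours depends on (from a Puppetfile).
        (cd "$rootfs/etc/puppet" && librarian-puppet install --path modules)

        # Apply the class inside the container's filesystem and see if it works.
        chroot "$rootfs" puppet apply --modulepath=/etc/puppet/modules \
            -e 'include hgweb'
    }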
And running more than one or two of these won't overload the host, unlike virtual machines: if you tried to build and test all of these things at the same time on your laptop with VMs, you'd have a bad time.

And what do I mean by programmatic creation of a faux production environment, which is difficult with VMs? Basically, when you have a bigger system — say a web server, a database server, and a load balancer — the traditional way of testing it in a development environment is to have one virtual machine with all of those things running on it. The problem is that the configuration then has to be different from production, just because everything is running on the same host: some things might be listening on localhost, some things might be talking over a Unix socket, and so on. With containers, they're actually running on a Linux bridge, which is a virtual network, so they communicate over a network the same way they would if they were in production. (There's a small sketch of this further down.)

So what is our development offering — what are we giving developers so they can set it up on their laptop and start coding? We give them the LXC templates we wrote, ready to deploy. We also give them Vagrantfiles, so if they have an Apple laptop, the Vagrantfile will create a virtual machine and run all of the containers inside it; they can use this even if they're not running Linux. We chose to go masterless with Puppet, simply because maintaining a master is kind of a pain and it doesn't really buy you any extra features unless you're doing some really advanced stuff. And we put the templates we created on our wiki, so if another project inside Mozilla, or even outside Mozilla, wants to set this up for themselves, they can just take our templates, change some names and module names, and upload them to their own site. They're under a Creative Commons license, so everybody is free to do that.

Some of the initial impressions we got when we deployed this: the LXC CentOS template is really new — it's only been there since 1.0, and 1.0 has only been released for about a month and a half now — so this is a very, very new thing. It's obviously not packaged in CentOS or RHEL, but it is in Ubuntu 14.04, the newest Ubuntu, the trusty release, so if you're using that you've already got it, although you're probably not running the CentOS template if you're running Ubuntu. There's still no template for Red Hat 6; if there's anybody from Red Hat in the audience, I would love to work through the issues around setting up Red Hat containers. There are also some kernel audit grumbles: sometimes, depending on what you're setting up, you might have to tweak one little flag to get it to work. That was kind of difficult for us to figure out, but we eventually got it and now everything is good. And these operations are really fast, so your developer can run them without the classic "type make, go get a cup of coffee, come back and see if it worked" cycle; they sit down and within a few seconds they have a completed run or a failure, which is really valuable and wastes a lot less dev time than it would otherwise.
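That multi-container setup looks roughly like this, assuming an Ubuntu-style host where containers attach to the default lxcbr0 bridge; the container names and template are illustrative:

    # Bring up a small faux production environment: load balancer, web server,
    # and database as three containers on the same virtual network (lxcbr0),
    # talking to each other over real IPs just like the production hosts would.
    for host in lb0 web0 db0; do
        sudo lxc-create -n "$host" -t centos
        sudo lxc-start  -n "$host" -d
    done

    # Show the containers and the addresses they picked up on the bridge.
    sudo lxc-ls --fancy

    # From here, point lb0's config at web0's IP and web0 at db0's IP,
    # exactly as you would with separate machines in production.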
So the productivity enhancements: all of these operations are fast, like I said before, and we can give developers something that's really close to the production environment we actually run. We run Red Hat for all of our servers in production, and because Red Hat requires licenses for all of the machines, we can't really hand those out — we'd just be burning licenses — but we give people CentOS, and that's close enough: they can run the same binaries, and we haven't really found many problems with that. It also allows faster turnaround, because we no longer have developers developing in one environment, then us testing it in production, and then maybe something breaks, and then a feedback loop going back and forth about what it's going to take to get this thing into production. And the holistic testing is what I was talking about earlier: we spin up multiple containers and have them talk over a virtual network, so it simulates an entire system instead of everything running on one host. That's really valuable, because instead of testing each individual part, you can just go to the website and see whether it loads — and for the website to load, the request has to go through the load balancer, which forwards it to the web server, which has to grab a bunch of stuff over its database connections. (How am I doing on time? Okay, cool.) So if the website returns successfully, you know the database server is up, the web server is running, and the load balancer is configured properly.

This is one of the case studies we've done — one of the projects I worked on: Mercurial, the primary source control system we use for almost all of the Mozilla software. If you ever want to check out the code and build Firefox, you install Mercurial, clone mozilla-central, and then you can build Firefox. These servers are really, really busy, because we have a big build environment constantly checking this out, so we need a whole set of machines, they have to be configured just right, and there are different types of machines depending on the activity. A few pieces of this infrastructure: we have SSH hosts, so whenever a developer wants to push something new, they connect to an SSH host and push their code. Likewise there are the hgweb hosts: when you go to hg.mozilla.org you hit these, and they basically host the web interface for everything. And we have mirrors: we have a lot of stuff in AWS and a lot of build infrastructure in different data centers, so when those build machines want to check out a copy of the Firefox source code, they talk to a local mirror if there is one, because it's a lot faster and we use a lot less bandwidth that way. Deployment of all of these is handled in one big Puppet module, so you can say "include the mercurial webhead class" and it will install everything from that subclass.

We have continuous integration running on all of this code. Whenever I check in a new commit — say I want to change a setting in Mercurial — it goes into Jenkins, Jenkins starts a job, and the job sets up containers, actually three of them for these host types, installs everything, runs a couple of tests to make sure it works correctly, and then tells me in IRC whether something broke or it completed successfully. A sketch of what that job runs follows.
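The job boils down to a shell build step along these lines. This is a sketch, not our actual Jenkins configuration: the template name, the --role flag, and the test script are stand-ins, and the IRC reporting is handled by Jenkins itself on job completion rather than by this script:

    #!/bin/sh -e
    # Sketch of a per-commit CI build step: fresh containers every run, so the
    # previous Puppet run can never contaminate this one.

    HOSTS="hgssh hgweb hgmirror"        # the three host types for this project

    for name in $HOSTS; do
        # A custom template (name and --role flag are hypothetical) installs the
        # module, runs librarian-puppet, and does `puppet apply` for that host type.
        sudo lxc-create -n "ci-$name-$BUILD_NUMBER" -t mozilla-hg -- --role "$name"
        sudo lxc-start  -n "ci-$name-$BUILD_NUMBER" -d
    done

    # Run the smoke tests against the freshly built containers (push over SSH,
    # pull over HTTP, etc.); a non-zero exit fails the Jenkins build.
    ./run-tests.sh

    # Clean up so the next run starts from scratch.
    for name in $HOSTS; do
        sudo lxc-stop    -n "ci-$name-$BUILD_NUMBER" || true
        sudo lxc-destroy -n "ci-$name-$BUILD_NUMBER"
    done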
Using this, we can get a good sense of whether a change is going to work in production or whether we should keep hammering on it to make it work better. Likewise, if it fails, we can get devs to come by and look at what went wrong, and maybe hand the code to an upstream. The communication happens over IRC: every time someone submits a job, Jenkins posts a line in a channel, and whenever a job completes, Jenkins says whether the build failed or worked correctly.

So, in review: before, we had little to no testing — it was kind of a wild west in terms of our infrastructure and how it got deployed. To get access to a staging server, so you could test some code before deploying it, you had to file a bug, someone had to manually create a virtual machine for you and set up a bunch of things in DNS and DHCP, then give you explicit network access and a user account, and only then could you SSH in and try your code; if it didn't work and you needed a new host, you had to file another bug and wait for another human to do all of those things again. Now you don't need to do that — you can create it for yourself, which is really cool. We now have teams taking these templates and the documentation we wrote and deploying it for themselves, so they don't need to bug IT anymore; they're much more self-service, things happen a lot faster, and they have a much more standardized environment. Some people still choose to use EC2 instances because they're comfortable working in that space, and that's fine — we support that too.

Some further reading: there's linuxcontainers.org, which is the official site for LXC, and there are some great blog posts, including a series by Stéphane Graber (there's a tiny URL for it on the slide) that starts with an introductory tutorial and goes through some of the more advanced features you might want. And there's Planet Mozilla, which has a lot of IT-related things, especially related to LXC, so if you want to see what we're up to you can check that out, or ping one of us on IRC if you're curious. I'm Ben — bkero on Twitter and IRC — if you ever want to get hold of me. Can I open the floor up to questions?

Hi, I'm curious to know how you modeled this development environment for Firefox ARM builds — not Firefox for mobile, but the ARM builds.

That's a great question, and I'm glad you asked it. The cgroups and LXC stuff we're doing isn't necessarily limited to x86, so we can also do this on ARM. That way we can set up builds for Android, build on Linux for ARM, and build other things using the same development machines, so we don't need to buy more embedded machines and fill a data center with them. Does that answer your question?
Yeah, but the host has to be an ARM machine for that, right?

Yeah, the host has to be native; otherwise you can go through QEMU, but then you're doing emulation and it's much slower.

Okay, thanks.

Are you using the native lxc-create commands for creating containers, or something else like libvirt? How are you doing that?

So, if you want to create containers without LXC, there are special filesystems in the Linux kernel — like sysfs for tweaking system things. What you would do is cd into /sys/fs/cgroup, make a directory, and echo values into the special files that exist there, which aren't really files. Having done that, you can execute a process, and you can change things like the CPU limit, and then when it's done, to destroy it you remove the directory.

The question was: are you using the lxc-create commands and the rest of the lxc tools, or are you using other libraries, like libvirt, to create your containers when setting up your environment?

So, if I understand you correctly, you're asking whether we lose functionality by doing it this way instead of with raw cgroups? I think any features we're losing with this are ones we never really used to begin with, but if you know of some that might be helpful to us, I'd love to talk to you about it. Thank you.

You mentioned that with Docker a lot of cleaning needs to be done. How does it go with the LXC that you use — do you not need any kind of cleaning of the files that remain?

Yeah — so your question is, with Docker I mentioned all the cleaning that has to be done, but is it the same with LXC? LXC doesn't really clean up after itself either; you have to run lxc-destroy and give it the container name. The difference, I think, is that with LXC, even though it's manual, it's very easy to do, whereas with Docker you have to run a couple of commands, and then there are these hashes, and you have to do a bash for-loop over that list of hashes.

What about vagrant-lxc — does that still work? LXC isn't officially supported by Vagrant, but there's a project called vagrant-lxc, though it has some glitches. Do you know the state of LXC support in Vagrant?

I don't know exactly. I do know there was a blog post on the Vagrant blog very recently about getting it to work with Docker, so if you use Vagrant with Docker and you're not on Linux, it will create a virtual machine for you and set all of this up.

I'm speaking about having LXC as a replacement for VirtualBox in Vagrant, not about creating the Linux VM. It's not officially supported by Vagrant, but there's a vagrant-lxc project, which isn't greatly documented. The last time we spoke about this was about a year back, so is there any update on having LXC as the base machine in Vagrant?

Yeah — so, LXC talking directly with Vagrant: originally Vagrant only supported VirtualBox, and that's something they've tried very hard to rip out and make universal, so you can replace it with many back ends, and LXC was one of them, but as far as I know they still have some VirtualBox-isms in there. I haven't really looked into it for a couple of months now, so I'm not sure what the latest is.

Okay, we'll discuss it later, then.
Hi — just a casual question. What kind of proof of concept do you put together when deciding whether to go with Docker, or Vagrant, or LXC? How do you benchmark it when you build a POC — presumably you hold it against some benchmarks, like "okay, this gives me this output, so I go with it" — so what rules or tips can you share from what Mozilla actually does?

Yeah, so what you're asking is how we determine which of these is best for us?

Yeah. Generally with longer deployments you do a POC; with Docker, day by day you're getting more and more containers across different programming languages, while LXC is the one that sits closest to the Linux kernel. So how do you benchmark those and come up with a solution for your particular requirement? I just want to know how you do it.

Yeah, so mostly it's been about developer time: how much of a pain is each of these going to be for developers to set up and start using? Vagrant really wins on that, but the problem is that Vagrant doesn't win on the nightly images — a lot of our developers can't download 800-megabyte images every day to get the latest version. So it was more that we had these requirements and we were striking things out, and the thing we used was the thing that had the most features we wanted and the fewest strikes against it.

Doesn't the developer feel it's an overload — Vagrant on top, and an LXC container inside, and then getting to the solution? It's a load on their laptops, right?

Yeah, it definitely is, and that's why we kept the EC2 option for them, just in case they'd rather load a virtual machine up in their own Amazon account and run that instead. Some people just thought this was too much effort for them.

Okay, yeah, thanks. Any more questions? Any question down here?

I come from a database background. You mentioned something very interesting about stage testing for a bug using LXC and containers. Stage testing for a production bug is going to require production-like data — do you have a use case that explains where I can get that? You still need that amount of data; we have data in the terabyte range. How does this apply to an environment like, say, Amazon — I don't come from Amazon, but something of that kind? It's a very cool idea: I have a lot of stakeholders asking me, "I need an environment that's easy for me to test in, and I don't want to come to you every time saying I have a bug, can I do this?" But there are ACLs involved, because it's production-level information; we have a stage environment, but you know how it is.

Yeah, so I think I understand you correctly: you want a staging environment, but the hardware kind of has to match production because you're moving the same amount of data?

Yes.

There's no real good way around that other than getting the same hardware for staging that you have in production. With this, though, you could make it do double duty: if you don't have to test things in staging that often, you can reuse the same staging hosts — say you have six different one-terabyte databases and you need to test them all separately.

Okay. Yeah, cool, thanks.

Okay, if there are no more questions, thank you all for being a great audience.