Howdy, everyone. Welcome to the second SCaLE Container Day, the second time we've done this. And we have a really cool lineup of speakers covering different techniques and different technologies for you today if you stay all day; you're not required to. We'll have three talks this morning, a break for lunch, and four talks in the afternoon with a break in the middle of the afternoon. Starting us off right this morning is Elizabeth Joseph, who some of you who've been around open source for a while may know from her work on Ubuntu, where she co-wrote the Ubuntu book. And you may know her from her work on OpenStack, where she wrote an OpenStack book. And now she's working on Mesos and DCOS, and presumably writing a Mesos or DCOS book. Not yet. Not yet. OK. Well, anyway, she's going to talk to you about application deployment on DCOS. Welcome. Thank you. All right, thank you for that introduction, Josh. So yes, I'm here to talk to you about continuous delivery using DCOS, which is built on top of, hey, laptop went to sleep. It's been waiting too long. Come on, come back. So I'm going to quickly talk to you about what continuous delivery is, just so we're all on the same page, before diving into this. So as Josh already explained, I've written a couple of books over the past many years that I've been working in open source communities. I sort of started out in Debian, moved over to Ubuntu, and started working on OpenStack. And it's funny, because in my career these technologies all sort of build upon each other. So I started off with Linux, went into a cloud platform, and now I'm working on containers, which live on top of the cloud platform. So I've pretty much been going all the way up the stack. So I spent about a decade working as a Linux systems administrator. And sorry about the flicker; we've got a bad VGA cable or something going on here. So I was working at HP when I was working on OpenStack.
And after my team was dissolved there, I realized I really liked giving talks at conferences and getting to know people and sort of translating what we're doing on the infrastructure side into what people can do in their own environments. So I'm now a developer advocate over at Mesosphere. And we're the company that employs most of the developers of Apache Mesos. There are a bunch of other companies as well, but if someone is working on Mesos and they need to be paid for some reason, maybe they don't have a job somewhere else, we'll pretty much scoop them up and make sure that Mesos continues to be supported. And when I was working at HP on OpenStack, what we were doing there is we were running the continuous integration and deployment platform. So I spent four years doing that. So this all sort of coalesced in my talk today, where I'm talking about CI/CD based on the work that I did, and then how doing that in containers today makes a lot of sense. So first, to define continuous delivery, just grabbing from the Wikipedia page: it's a software engineering approach where engineers deploy software in short cycles, making sure that releases happen often. So this could mean releases happening with every Git commit. That's what we were doing in the OpenStack project. Every time someone updated the documentation or made changes to our infrastructure in any way, it went live immediately. That sort of continuous deployment is not actually that common, because it's a little bit crazy. So usually it's more like once a day or a couple of times a week, and then sort of going up from there. And in the process of deploying the software, you're also testing it. So instead of releasing every six months or once a year, you end up having a development and feedback cycle that looks more like this. It might be hard to see here, but you're sort of starting out doing your project planning, and then you're doing your development, testing, staging, and release pretty fast.
And then immediately once you release your software, you can get customer feedback. So that's like the week of the release: you immediately get customer feedback. And then the next week, you're still working on things and still making releases to customers, so you can continue to get that feedback. So the customers are happier because they're getting their features more quickly, or at least getting to try out some of the things in your software at a faster pace. And the developers are happier because they're actually seeing their work go into production and touch customers. And this is something that can't be overstated. It's really important for developers in your organization to see their work in production, especially with people only staying at companies for a year or two. If they work on a feature and don't see it released for six, eight months, maybe a year, it's incredibly discouraging, because by the time their software is finally in production, they've moved on to another company. And they're like, that would have been satisfying if it had happened quicker. So they're getting closer interaction with the customers, and customer feedback can come in really quickly. So doing this continuous deployment to get the feedback really fast is becoming really popular. And if you're doing this with containers, I sort of see it as a two-pronged approach. So first, everything that we do at Mesosphere, and for our CI/CD pipelines, everything is run in containers. So this is the applications themselves that are doing the CI/CD stuff. I'll show you a demo at the end if we've got time; hopefully we'll have time. That runs Jenkins inside of a container, runs GitLab inside of a container, and then also all of the jobs that it's running to build the container that we're actually deploying, and then the deployment itself, all running in containers inside of a Mesos cluster. The other thing it allows you to do is organize everything really efficiently.
So I have a picture coming up later that sort of shows you how, in a traditional system, things each have their own server. But when you split things up into microservices on containers, you're able to get much better resource utilization. So the way things sort of used to go is that operations would handle everything starting at testing. So when I was working on OpenStack, my team in operations ran the entire testing infrastructure. So every time someone made a commit to OpenStack, it went into our testing infrastructure, and then we pushed it. Well, we didn't have staging, because who needs that? We just pushed it into production. We're an open source project; it was fine. So we controlled the entire testing infrastructure. And as operators, that's fine. It's great for us to control everything. We don't want developers running crazy things, but the developers felt really stifled. They were like, we can't use our own tools for testing; we really want to have more control over this. And this became particularly apparent in teams like the design teams and the documentation teams, which didn't necessarily want to have the same workflow as engineering. So what we had, which I thought was kind of cool, was that all the docs people had to learn Git, and they didn't think that was cool. Some of them did. Some of them really embraced having an engineering workflow, but it made it hard to get documentation contributors, because they were just not really keen on learning Git and learning Gerrit and learning the entire development workflow just to write documentation. So in this new paradigm, we're sort of trying to move testing into what developers can control. And in order to do this, that means giving them control of their platform, but they don't necessarily want to run Linux and make sure a machine is upgraded and all of that. They just want to run their application, and that's where containers come into this.
So you end up having to have an environment where you can support multiple CI/CD pipelines. So the base part of this, all these little servers on the bottom, are things that are run pretty much by infrastructure. So in the case of DCOS, that would be Mesos. And then on top of that, the developers themselves can run these individual applications. So they don't need to worry about hardware or their virtual machines. They just say, I want to run Jenkins. They click a button, as I'll show you, and then they're running Jenkins. So they just really need to worry about what platforms they want to use. So this can be an infrastructure like what I'll show you. I'll show you GitLab and Jenkins, but this could also be something like Artifactory and Travis CI. Maybe they're actually using GitHub and they want to add that into their infrastructure, so they're pushing everything in and out of GitHub from the stack. But whatever they want to use, it makes it a lot easier for them to decide what they want to use as far as infrastructure goes. So if you're doing this on virtual machines or on bare metal, you end up with this industry-average server utilization of 12 to 15%. And obviously that's very bad. So you have one server for Jenkins, one for Artifactory, one for Cassandra. And then if you want to do high availability across these, you add in another server, and then you're load balancing across those. So a better approach is using one of these container orchestration and clustering systems like Apache Mesos. So Mesos is what you would install. So you install Linux on your fleet of servers, whether these are bare metal or virtual machines; you just need to have Linux. In the case of Mesos, it runs on Red Hat and Ubuntu and CoreOS. DCOS doesn't officially support Ubuntu, but it totally works; people are running it. But Mesos itself, you install that on your base Linux install.
And Mesos is what clusters all of the machines and turns them into this one big cluster that you can access and get resources from. It also does sort of intelligent scheduling. So when an application needs certain resources, there are frameworks inside of Mesos that know, if an application like Spark or Hadoop needs certain resources, how to intelligently divide those up across your infrastructure. It knows it needs a master and agents. It knows one node always has to be the master and then you have backup masters. So these frameworks inside of Mesos are sort of intelligent about making sure both masters aren't on one machine, among other things. It's scalable, and it's been used widely; Mesos has been around for seven or eight years. It's being used by OpenTable. OpenTable is a really cool one: you can go to OpenTable.com, scroll all the way down, and highlight an empty space in the footer, and you can see which Mesos agent it's running on. That was really cool. I learned that during one of their talks at MesosCon. And then Yelp and Uber; and Twitter, the way they solved the fail whale problem was using Mesos. So it's a pretty big thing in the industry, running a lot of the internet. One of our founders is a guy from Berkeley, Ben. He's super cool. He's one of the reasons I started working for Mesosphere, because he's super passionate. He still codes on the weekend even though he's this fancy startup founder now. And then on top of Mesos, we created DCOS. One of the things we saw is that our customers were solving the same problems over and over again. In fact, some of them actually created open source projects around solving the same problems, like Yelp did. They have an open source project called PaaSTA. And that pretty much does a lot of the same things that DCOS does, which is doing all the resource management, task scheduling, and container orchestration. So there's clearly a need for this.
It uses Mesos on the bottom, but Mesos can be kind of a bear. It doesn't handle container networking and logging and metrics and stuff. So DCOS adds that layer on top that handles everything you need for your orchestration, and makes sure your containers can talk to each other, because container networking is kind of terrifying. And one of the really cool things it has is this Universe catalog of preconfigured applications. So when I install Jenkins and GitLab later, these are applications that you just install from our little app store, but they have some sane defaults. So I'm not really gonna change much in the Jenkins defaults or the GitLab ones, but it pretty much gives you a base amount of RAM and storage space and CPUs to run a super basic install. And that's all preconfigured, typically by people who know what they're doing. So we interface with the Jenkins community and the GitLab community and the Artifactory folks to make sure we have some sane defaults in these packages. And in a lot of cases, they actually contribute the packages themselves to DCOS. So to show you a picture of how this all comes together: on the bottom there, you've got any sort of infrastructure. Maybe you've got physical servers, maybe you have an OpenStack cluster, or maybe you're going to public cloud, so Amazon, Google, Azure. And pretty much all you need these to do is run CentOS or CoreOS; that's all that lowest infrastructure layer does, and that's what you install Mesos on top of. And DCOS actually includes the installation of Mesos, and then it includes all the container orchestration and everything that I talked about. So this means you can also move it: if you start off in a cloud and you decide the cloud is too expensive, you can move it to another cloud. If you decide cloud is not for you, you can move it on-prem.
And it avoids a lot of the vendor lock-in, which is something I care a lot about, having been going through this open source stuff for so long. And then you interact with DCOS through a web UI. I actually need to update the screenshot. It's hard to see anyway, but I'll show you that when I do the demo. And it also has a command line tool, which you'll also see. And then it also has an API. So these are all ways you interact with it. And so if we look at our slide from before, where we had the traditional data center with this not-great utilization: when you're using Mesos and DCOS, you can see all these little blocks are now spread across maybe three servers instead of five. And they're all containerized, so they can all run alongside each other. They're all playing nicely together. So you've got a few Jenkins nodes, and Cassandra is split up across a few servers. So you're able to have an infrastructure that's really packed in tighter, and you have much better resource utilization with all these things running in their own little containers. I mean, you could run Jenkins and Cassandra on the same server without containers, right? But then you need to install all the dependencies for both of those on the machine itself. With containers, you can just ship the dependencies with the application, and then you can move them around to all the machines you want. And you don't have to worry about libraries getting stale on these old virtual machines or servers that you're running. You just move the containers around. When it comes to actually deploying apps, and I'll talk about this in a continuous deployment sort of way, there's the scheduling for launching a new service. Since you've already got your cluster running and you're just maintaining Linux and Mesos down there, you don't have to have a sysadmin go in and install an application with all the dependencies for you, or have to set up Puppet to do all that stuff.
You instead call out to the frameworks in Mesos that sort of know where things should be, but you also have the containers themselves. So you do have to provision the containers and handle dependencies there, but it's not on the system level anymore; it's just in a container. And then for deployment, as I mentioned, there's a UI for DCOS and a command line tool and so on. So you don't need to give developers access to your actual servers. You just need to give them a restricted user account so they can launch specific things through one of the user interfaces. Being a sysadmin, I used Nagios for a zillion years. I was actually still using it on my home network until a few weeks ago, when I upgraded Debian and the Debian packages went away. So I'm a big fan of monitoring, but one of the really cool things about using these new containerization systems is that in DCOS we have something called health checks. And if a service dies, you can have it automatically brought back. Or if a service dies five times, you can say, okay, after five, don't bring it back again, because something's wrong and I need to look at this. But there are different ways you can do health checks. So you're no longer getting a page if MySQL dies. It just restarts it and lets you know, like, hey, this thing died, but I restarted it. So your blog still works. I mean, you should probably figure out what the problem is, but it can automatically restart that stuff. I used to use all these tools for this; I can't remember the name of the one I was using a while back, but there are all these tools you can use for restarting things automatically. And systemd, of course, does some of this restarting now as well. But health checks are really nice because you're able to see, across time, how your systems are doing. Service discovery is something that I really like in DCOS. So in the olden days, you've got your DNS servers maybe running locally on your system.
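To make that concrete, a health check in a Marathon app definition looks roughly like this; the app id and the /health path are hypothetical placeholders, and exact field names can shift a bit between Marathon versions:

```json
{
  "id": "/my-blog",
  "healthChecks": [
    {
      "protocol": "HTTP",
      "path": "/health",
      "gracePeriodSeconds": 30,
      "intervalSeconds": 10,
      "timeoutSeconds": 5,
      "maxConsecutiveFailures": 5
    }
  ]
}
```

Here Marathon probes /health every 10 seconds, and after five consecutive failures it kills the task and reschedules it, which is the "restart it and let me know" behavior described above.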
And you're trying to manage all of these host names and where your services are. Maybe you're manually load balancing. DCOS takes care of a lot of this stuff. It has automatic service discovery. So when I launch Jenkins later, Jenkins will have an internal service name of jenkins.marathon.mesos. So that's its internal DNS name. You can always get to it by that. And it's really nice to know that you have a DNS name set up automatically for you for every service that you launch. There are also load balancers built into it. We're gonna use a load balancer when we launch our application later to see how it can automatically serve traffic out to the internet for you. And then for persistent storage, we're working on a bunch of stuff: for external volumes, things like REX-Ray will mount the volumes for you inside of containers. And then you can also run things like Ceph and Gluster on this cluster, and those are self-healing network file systems. So the point of all this is that we have this huge cluster, and now developers can, with a single click, install whichever application they want. Once it's installed, it's served across the entire data center. You can do load balancing, and the applications are pretty easy to run. So the developers can run their own infrastructure and it's fine. They don't need to worry about updating Linux. They don't need to worry about owning servers. If their Jenkins dies, it can pop up on another server and attach to the network storage and be up again, and it's okay. Question. Yeah, so the question is, if one developer launches Jenkins, how does Mesos handle another developer launching Jenkins? That's all okay. They can run alongside each other.
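As a sketch of how the built-in load balancing gets wired up: Marathon-LB discovers apps by labels on their Marathon definitions, so exposing a service to the outside world is roughly a matter of adding something like this to the app definition (the vhost is a placeholder, not from the demo):

```json
{
  "labels": {
    "HAPROXY_GROUP": "external",
    "HAPROXY_0_VHOST": "myapp.example.com"
  }
}
```

Marathon-LB then reconfigures HAProxy on the public agents to route traffic for that hostname to wherever the tasks happen to be running.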
And in fact, that's one of the really cool things about using something like this: if you're doing an upgrade, because upgrades are terrible, you can launch two Jenkins instances simultaneously, just clone the storage backend and say, hey, I wanna bring all my data over here and test it with the new version. And then when you're ready to do the cutover, you can do that. So it's actually much easier to do an upgrade. You're not doing an in-place upgrade anymore. You're really doing a migration, but you're able to test it super easily, because you're able to run a bunch of different Jenkins instances all at the same time. Right, so the question is how are they referred to internally? I think we just append a number to them. I'd have to look at that, though. So yeah, you can run multiple instances of these all alongside each other. So if you look at this sort of continuous integration pipeline here, you start off with your little developer over there doing a git push. It goes into a version control system. From there, it goes into your continuous integration system, where it's gonna do the testing. Since we're using containers in the testing, it's going to build a container, and then it needs to ship it off to a container registry. In the original version of this talk that all our sales guys give, it sends it to Docker Hub and GitHub. But I like open source, so that's why I'm using GitLab for everything here. It then does container orchestration to actually deploy into the production environment. So it goes and takes the container that it just built and loads it into production, and then you've got a load balancer in front of that. So, putting in the actual details of this: your git push sends it to GitLab, Bitbucket, or GitHub. That's where your code lives. Then it gets sent to Jenkins, and you can upload it into GitLab, Artifactory, or Nexus, which is another registry for container images.
From there, it can use Marathon or VAMP, which I'll talk about in a few minutes, to actually do the deployment, because there are a bunch of different ways you can do deployments. And then DCOS and Mesos do the actual container orchestration work. And then you've got Marathon-LB, which is our load balancer, which is what actually serves things to the world. So we had a customer, and I wasn't involved in this project, but what they were doing is they were building a DEB package for all of their deployments. So their process was building a DEB package and pushing it to their Debian repository. There was some sort of syncing problem; I think they had a load balancer across their DEB package repositories. They'd wait for the repository to be ready. They'd boot a new AWS instance. They'd turn the instance into an AMI, and the AMI is what they actually used to launch the server. So this process took about 30 minutes, which may not seem like much, but once they put everything into containers, they stopped building Debian packages. They just immediately put everything into a container and ran that container through the whole CI process. So that's the container they tested, that's the container they put into staging, and the same container went into production. And they said it takes less than a minute now to run this whole thing, which was a huge win for them, because they were a big company with a lot of changes to their software. So having to do all of this launching and creating an AMI and updating their Debian packages and all that stuff was just a lot of overhead that they don't have anymore. So that's all sort of the basic stuff with the CI/CD system in containers. In DCOS and Mesos, we have some ways to do some of the more advanced strategies. And the ones I talked about in the abstract for this talk were Canary and Blue Green. In case you're not familiar with these terms, these are deployment mechanisms.
So Canary is where you release the new version to a small number of nodes for your customers and whoever is using your software. This allows you to test the new version on a subset of customers and see if anything goes wrong. A lot of big companies actually do this. Google will do small rollouts of new software to just specific users, and then if something goes wrong, they can pull it back; they've only impacted a small number of users, and the rollback is really easy. So that's a pretty common way to do these deployments now, instead of just pushing everything into production all at once. Another one is Blue Green, which, if you have the resources for it, actually runs two production-ready infrastructures side by side. So you'll run one that's serving production, and the other one is the one that you upgrade; when you're ready, you flip over to the other one, and now you're running the new version in production. So if you need to revert, you just pop back over to the first one, which is still running the old version. But if everything goes well, you keep running on the new one, and then you can upgrade the other one so they're both at the same version until you do the next upgrade. I have these linked, and I'll provide the slides later, but there are really good articles on Martin Fowler's site about how these work and how they came about. So in DCOS, we have a couple of ways of accomplishing these. The first way is just doing it totally manually. This is what a lot of customers end up doing eventually, once they understand Marathon. So Marathon is what actually handles the deployment of things. You have Marathon definitions to install services, and we can actually look at one for Jenkins in a few minutes, but this allows you to set a custom configuration. You can tell Marathon to launch this many instances when you do a deployment, and it allows you really fine-tuned control.
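A manual blue/green setup along these lines can be sketched as two nearly identical Marathon app definitions that differ only in the image version, with traffic pointed at whichever one is live. The ids, image names, and label scheme here are hypothetical, and this assumes Marathon-LB routing traffic based on which app carries the externally load-balanced group label:

```json
[
  {
    "id": "/myapp-blue",
    "instances": 3,
    "container": {
      "type": "DOCKER",
      "docker": { "image": "registry.example.com/myapp:1.0" }
    },
    "labels": { "HAPROXY_GROUP": "external" }
  },
  {
    "id": "/myapp-green",
    "instances": 3,
    "container": {
      "type": "DOCKER",
      "docker": { "image": "registry.example.com/myapp:1.1" }
    },
    "labels": { "HAPROXY_GROUP": "internal" }
  }
]
```

The cutover is just swapping which app carries the external label, and reverting is swapping it back; a canary rollout is the same idea with the new app initially running far fewer instances than the old one.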
The trouble is Marathon's kind of complicated, and it's a little complicated to write this stuff yourself, but we do have a documentation site that you can check out to sort of get started on blue green deployment ideas for Marathon. There's also a lab that one of my colleagues wrote about two years ago, and two years is like 20 years in Mesos time. So I don't think the lab actually works anymore, but you can get some ideas from the Marathon definitions in the lab to start crafting your own, to do one of these blue green deployments or play around with the canary stuff in Marathon. If you don't actually wanna do all of this yourself, there is a piece of open source software called VAMP that I like quite a lot. It still hooks into Marathon, but it does a lot of this stuff automatically for you. So it has a concept of blue green, and it has a concept of canary builds and deployments. So you're able to do it more automatically, and there are a lot of changes you can make in the interface. So there's documentation on all these various components and how they work together, and then Mesos- and DCOS-specific documentation. VAMP also has Kubernetes support. So if you're using that, you can also use VAMP; in that case it doesn't hook into Marathon, but into Kubernetes' own scheduler. And then there was a really good talk by Julian over at Microsoft about using DCOS and VAMP together, at MesosCon Europe in Prague last year. And it's from October, so it's still pretty fresh. He's got an open source project that has all of the code for his talk, and I'll share the slides again, but that's the YouTube video. He does a really nice job of explaining how you would do this deployment with VAMP. I was gonna do a demo in my talk, but then I thought, why would I waste time on that? He already did it, and it's a great presentation. So I've got like 20 minutes to get this demo done. It's totally gonna work now that I'm standing in front of a bunch of people.
So I'll explain what I'm doing as I get started, because I wanna actually make sure this goes well, so I have my notes here. All right, good. So I'm gonna just make sure I'm still logged in. Yes, all right. So I've run this command; can you see that? Do I need to make it bigger? Pretty good, all right. Ah, thanks, from the back. So I just typed dcos node, and this will show me all the nodes in my cluster. So this is the CLI. You can also go into the web UI over here, and you can see the nodes, and this pretty much has the same view. You can do dcos service, and right now I don't really have anything running. I've got Marathon running, which is just the scheduler, and then Metronome, which handles one-off jobs and tasks. It's kind of like cron for a cluster. But first I'm gonna install my load balancer. So I'm gonna install Marathon-LB, blah, blah, yeah, whatever. So I'm just gonna install this. So then if I go back over to here, if I look at my nodes, you can now see, hey, there's something happening here. If I go back to my dashboard, you can now see that of my CPU allocation, two of 16 are now allocated to this container. So the next thing I'm going to do is install Jenkins, and I'll show you this by going to the catalog through the UI, because I actually wanna make some changes, and in order to do that on the command line I'd have to edit a JSON file, but it's a lot easier just to do it here if we're doing a demo. So here, I'll show you the catalog first. So we've got a bunch of certified packages, and then we have ones that are handled by the community. Jenkins is one of our certified packages. That means we have a partnership with the Jenkins folks to make sure it's running a decent version. Oh, I was gonna do something else first. I wanna find out what node to run it on, because I don't wanna run it on our public node. So where's Marathon-LB? Hey, it's still deploying; why is that? All right, good. So let's see this one.
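For reference, the CLI side of this part of the demo looks roughly like the following. This is a session sketch, not a standalone script: it assumes the dcos CLI is installed and attached to a running cluster, and the options filename is a hypothetical placeholder.

```shell
# List all the nodes in the cluster
dcos node

# Show what's running (just Marathon and Metronome at this point)
dcos service

# Install the load balancer from the catalog with its default options
dcos package install marathon-lb

# Jenkins could be installed the same way from the CLI,
# passing a JSON file to override the package defaults
dcos package install jenkins --options=jenkins-options.json
```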
So Marathon-LB is running on our public agent, and I don't wanna put Jenkins on the public agent. So by default, Jenkins will just land on whatever agent it wants, but I'm actually going to pin it to a specific one, because if it restarts and goes to another agent, I don't have shared storage on the back end set up for this demo. So if Jenkins pops over to another server, it's not gonna have its storage. So I'm just gonna make sure it's not being launched on 236; I'm gonna pin Jenkins to a specific agent when I launch it. So I'm gonna do that by going to the storage section. So here, storage, this is where you would go in to configure where the host volume is. By default it just goes to /tmp inside of your container, which is not good, but this is a demo, so whatever, and I'm just gonna add it to that host name. If you open up this JSON editor, you can actually see what the JSON is, and this is your Marathon definition. So if you wanted to do this manually, or wanted to upload this from the command line, or you wanted to edit it in any way, this is what you would use. I mean, normally in production people don't use the UI very much. It's used for playing around and proofs of concept, but in reality people just have a whole stack of these JSON files, and that's what's in their infrastructure. So I'm gonna go ahead and run this service. And so that's Jenkins running, and you're gonna see the resources go up on here. I'm also gonna install GitLab. This is one of our community-maintained packages. So I'm gonna review and run, oh, I was gonna find a node first; yeah, review and run this one. So I'm gonna make a couple of changes on here.
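In that JSON editor, the pinning and host volume set through the UI correspond to a couple of fields in the Marathon definition; the hostname and paths below are placeholders standing in for my cluster's actual values:

```json
{
  "constraints": [["hostname", "CLUSTER", "10.0.2.15"]],
  "container": {
    "volumes": [
      {
        "containerPath": "/var/jenkins_home",
        "hostPath": "/var/lib/jenkins-data",
        "mode": "RW"
      }
    ]
  }
}
```

The CLUSTER operator tells Marathon to place the task only on that one host, which is exactly the "pin it so it keeps its storage" trick; with proper shared storage you'd drop the constraint and let it float.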
I'm going to add a virtual host, so this is gonna be cd.princessleia.com, and I'm gonna change the port, because my cluster is running on an infrastructure that doesn't actually have port 50000 open, so I'm just gonna grab one of these other ports, and I'm gonna pin this one to a specific agent as well, because again, I don't have storage set up on the backend. So I'm gonna go ahead and run this. So these services are now deploying. I also didn't set up any SSL certificates, and something that I learned about Docker is it really doesn't like using insecure registries. So some of the demo magic I did before getting here was telling Docker on all of my agents, before launching anything, that it's allowed to pull from an insecure registry that I set up. But we're also gonna have to do that inside of Jenkins, because Jenkins also needs to pass along that information to Docker in order to actually do the deployment from an insecure registry. Obviously in production you would use a secure registry, because you'd actually have SSL certificates set up for all of this stuff inside of your infrastructure. Of course this is a demo, so this is gonna take longer to deploy than normal. But it's coming along. Let me see what I can show you inside of this here. Yeah, so we've got our dashboard. It's showing how things are going. Jobs, this is what Metronome powers. This will run one-off jobs. So if you have a cron job or something, you can set this up. We actually have it running internally to send out an email every week reminding us that all hands is coming up on Monday and to submit questions. But this can really do anything. It can run all kinds of one-off things. And then the catalog. There's all kinds of software in here. It's pretty good. Artifactory is another continuous delivery sort of thing. We've got Cassandra, and all these beta ones are just sort of in waiting. But yeah, you can deploy Ceph on DCOS, and all kinds of other good things.
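The demo magic on the agents is just Docker daemon configuration: on each agent you'd put something like this in /etc/docker/daemon.json (the registry host and port are placeholders) and restart the Docker daemon:

```json
{
  "insecure-registries": ["registry.example.com:50001"]
}
```

That tells Docker it's allowed to pull from and push to that registry over plain HTTP or with an untrusted certificate, which, again, you'd only do in a demo.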
All right, so Jenkins is running. I did not give a public address to Jenkins, but what we can do to open up Jenkins is reach it from inside the DCOS UI. Come on, internet. And the address is just /service/jenkins appended to the regular address that you have inside. So if you look at this address, it is the same; yeah, we just have a service address for Jenkins. And GitLab is running. I'm actually going to do this one first, because if someone went to cd.princessleia.com before I did, they would have been able to set my password for me. All right, but the first thing I'm going to do, actually, is a tiny bit of configuration on Jenkins in order to handle the insecure-registry situation I've set myself up with. Guess what? We don't have the latest version of Jenkins. Going to fix that. This is kind of boring, but all I'm going to do is set up an environment variable to pass along my insecure registry inside of Jenkins once this loads. I have to hit advanced a few times. All right, so that's what I need to do in Jenkins. Now over in GitLab, I'm going to log in as the root user, which you would never actually do, ah, phew, okay. So I'm going to create a Git repository in here. You would typically set up your own user and not use the root user. And I think I call, yeah, I call this scale website. So I'm going to create a Git repository called scale-website, make it public, and create the project. So actually now, if anyone wanted to, they could create a user on this instance and submit a pull request to my project, but you don't need to do that. So I'm going to just clone this repository, and it's empty, of course. I've actually got all the files that I want to go in there staged over in another folder, so I'm just going to copy all of those over into my scale-website repository, and then I'll show you what I put in here.
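The clone-copy-commit-push loop she walks through is plain Git; simulated here end to end with a local bare repository standing in for the GitLab project, and a placeholder file standing in for her staged site files:

```shell
# Simulate the GitLab round trip locally: a bare repo stands in for the
# hosted "scale-website" project (normally you'd clone the GitLab URL).
git init --bare --quiet scale-website.git
git clone --quiet scale-website.git work
cd work
git config user.email "demo@example.com"
git config user.name "Demo"

# Copy in the site files (placeholder content here), commit, and push.
echo '<html><body>Hello</body></html>' > index.html
git add index.html
git commit --quiet -m "Initial commit"
git push --quiet origin HEAD
cd ..

# A fresh clone confirms the push landed, like refreshing the GitLab UI.
git clone --quiet scale-website.git check
test -f check/index.html && echo "push landed"
```

In the demo the remote is the GitLab instance she just created the project on; everything else is identical.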
The idea here is I want to launch a website in this continuous deployment system. I'm just going to use nginx; it's going to be super simple. So if we look at my Dockerfile here, all it does is grab the nginx container and then copy my index.html to the place where we want to serve it up. That's all the container does: it runs nginx and serves my website. The index.html is super simple; it tells you hello. I've got a Jenkinsfile in here that's actually going to define what my pipeline should be doing. So it checks everything out from the Git repository, it builds the Docker image, it logs in and pushes it up to the container registry inside of GitLab, and then it actually does a deployment of the Docker image once it's all been built. And then I want to show you my marathon.json. Again, this is a definition for the application. This will be my website that's running; this is what gets passed along to DCOS once it actually deploys the application. Caps lock. All right, so I'm just going to git add all these files and do an initial commit here. So we look over, the repository's empty, but I'm going to do a git push. Hey, come on. And now my repository is not empty; I just pushed everything up to the Git repository. The registry here is our container registry, and you can see it's empty right now, but it won't be in a few minutes. So we have everything all set up. We're now going to go over to Jenkins and actually set up the pipeline. We're going to go to new item, call this nginx, do a pipeline, and this will be our continuous delivery pipeline. I didn't add a testing component into this because I wanted to make sure that the demo actually got done before we all had to leave. But adding testing into this would just be extending the Jenkinsfile to actually do a test.
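The repo she describes holds three tiny files. A sketch of what they might contain, following her description; the image name in marathon.json is a placeholder for her GitLab registry path, and the page content is illustrative:

```shell
# Sketch of the repo contents described in the talk; file contents are
# illustrative, not her exact files.

# Dockerfile: nginx base image plus one copied page.
cat > Dockerfile <<'EOF'
FROM nginx
COPY index.html /usr/share/nginx/html/index.html
EOF

# index.html: the "tells you hello" page.
cat > index.html <<'EOF'
<html><body><h1>Hello, SCALE!</h1></body></html>
EOF

# marathon.json: the app definition Jenkins hands to DCOS on deploy.
# The registry path is a hypothetical placeholder.
cat > marathon.json <<'EOF'
{
  "id": "/nginx",
  "cpus": 0.5,
  "mem": 128,
  "instances": 1,
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "registry.example.com:5000/root/scale-website:latest",
      "network": "BRIDGE",
      "portMappings": [{ "containerPort": 80, "hostPort": 0 }]
    }
  }
}
EOF

python3 -m json.tool marathon.json > /dev/null && echo "files written"
```

That's the whole application: the Dockerfile builds the image, the Jenkinsfile moves it through the pipeline, and marathon.json tells DCOS how to run it.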
For example, you might check to make sure links are all working or something on a website. And then what we're going to do here is set up a build trigger. It's going to just poll Git every minute; it's cron-style. And then it's going to warn me that I'm running this every minute. You probably wouldn't want to do that normally, but I need to get this demo going, guys. So here we are. And then, pipeline script: we're going to use Git, because, remember, I added the Jenkinsfile and everything to my Git repository. I don't actually need this root here. So that's my Git repository. I need to add a credential to Jenkins, because when it pushes to the image registry it's going to need to log in. So I am totally insecure, using root over non-SSL again. And I give this the ID of gitlab; these are my credentials for Jenkins. No, don't make it worse. And that's my pipeline defined. So within a minute it's going to go ahead. I'm in the nginx pipeline project now inside of Jenkins, and in the build history we're going to see it pop up in a moment, because it's going to look at the Git repository, it's going to find that Jenkinsfile in there, and it's going to be like, I'm going to use this Jenkinsfile and run my thing, any minute now. Yeah, so at this point in this talk we're sort of getting into Jenkins-y stuff and GitLab stuff. And of course you can run other things, like Artifactory, or you can use GitHub as your code repository. You can use Docker Hub if you want as your image registry. But GitLab's pretty cool, so there we go. So what it's trying to do now, as it starts this job, is it's looking for an available agent on the Mesos cluster, because there's this plugin that we ship with Jenkins that interfaces with Mesos directly. So Jenkins is able to run something on the Mesos cluster, and it's able to spin up its own containers.
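The checkout-build-publish-deploy pipeline she outlines might look roughly like this as a scripted Jenkinsfile. The registry address, credential ID, and Marathon URL are assumptions, not her exact values, and the testing stage she says she skipped is sketched in where it would go:

```shell
# Sketch of the pipeline from the talk as a scripted Jenkinsfile.
# Registry address, credential ID, and Marathon URL are hypothetical.
cat > Jenkinsfile <<'EOF'
node {
    stage('Checkout') {
        checkout scm
    }
    stage('Test') {
        // The skipped testing stage could be as small as a smoke test
        // (or a link checker) run before anything gets built.
        sh 'grep -qi "<html" index.html'
    }
    stage('Build') {
        sh 'docker build -t registry.example.com:5000/root/scale-website:latest .'
    }
    stage('Publish') {
        withCredentials([usernamePassword(credentialsId: 'gitlab',
                usernameVariable: 'USER', passwordVariable: 'PASS')]) {
            sh 'docker login -u $USER -p $PASS registry.example.com:5000'
            sh 'docker push registry.example.com:5000/root/scale-website:latest'
        }
    }
    stage('Deploy') {
        // Hand marathon.json to Marathon's REST API (a plugin could do
        // this too); the in-cluster Marathon URL is an assumption.
        sh 'curl -X PUT -H "Content-Type: application/json" -d @marathon.json http://marathon.mesos:8080/v2/apps/nginx'
    }
}
EOF

echo "stages: $(grep -c 'stage(' Jenkinsfile)"
```

The `gitlab` credential ID matches the one she creates in the Jenkins credentials screen; swapping GitLab for GitHub plus Docker Hub only changes the URLs, not the shape of the pipeline.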
When I was working on OpenStack we had a tool called Nodepool, which connected to a bunch of virtual machines across the world in different data centers. And that was fine too, but we did have to maintain this whole Nodepool tool in order to manage our connections. With this, for a really simple continuous integration and deployment pipeline, you can just use the existing Mesos cluster that you have. So I'm going to open this and see what it's doing over here. It's probably waiting on something. All right, so it found my code repository and it's doing a checkout. So we can start seeing here that Jenkins is doing its thing. The checkout is looking good so far. Yay, so now it's going to build the container image. If we pop back over to the log screen here, you can see it's pulling down the nginx image that it's going to use; that's how it builds that. And then it's going to publish it. In the publish stage it logs in to GitLab and pushes the image up there. And then it goes to the deploy stage. In the deploy stage, it grabs the image from GitLab and pushes it up into DCOS. So now we have this nginx service running, and that is the one we just deployed directly from Jenkins, which is pretty cool, because now it's in your list of things that you're running. And then if we open that up, oh man. Okay, just give it a second. It's totally going to work in a second. There we go. So this is just the little website we put into GitLab in the code repository a few minutes ago, and now it's deployed. Of course, you'd probably want to add a testing step in there, so after build you'd probably have testing and then publish and whatnot, but I wanted to make sure I had time here. So, where am I on time? I've got maybe a couple of minutes for questions. Yeah, all right, any questions? Hi, not so technical a question here, but just: how do you enjoy using the DCOS GUI?
We've sort of been fiddling around with Jenkins more and more where I work, and I was looking at the GUI, and it looks really nice to use. How do you like it? Yeah, I like it a lot, actually. It's funny, I can actually try to find DCOS 1.8. The GUI used to be really, really dark. So this is our old GUI here; it's hard to see, but it used to be dark. That's actually very similar to how Marathon looks. I can show you, because Mesos and Marathon also have their own GUIs, which is super good for debugging, because DCOS sort of tries to make things simple. So this is the Marathon UI, and then you've also got a Mesos UI; they're totally separate, lower-level ones you can dig into. But the DCOS UI is super nice. We have a really amazing design team, and we just released 1.11 yesterday. We release about twice a year, so yesterday was a big day. We released yesterday, and the UI is actually even better than it was; the login screen is really slick. I love the command line, so GUIs are not really my jam, but I actually quite like it. They did a really nice job. Also, I think in the 1.9 release, if you try to launch something and it fails, there's a grid, which I actually tweeted about the other day, that will show you what it actually failed on. I was trying to prepare for this talk and my whole demo was breaking all over the place; it was a bad day. But it actually shows you why it failed: did it fail because it didn't have enough CPU, or because it didn't have the right port? Yeah, here it is. Hey, look, there's my cat. Okay, where did it go? Well, now I lost it. Oh yeah, there we go. So this is the resource offer screen. It's kind of hard to see, but you can see those little check marks at the bottom.
It went through every agent on the list, and on one of them the role wasn't right; that meant it was a public agent and we wanted to run it on a private agent. On some of the others the constraints were right, all the CPU and stuff was good, but the one I wanted to run it on didn't have the right port. So it has this resource offer screen, which is really nice for first-glance debugging of why am I not able to launch this thing anywhere. So that's pretty cool. Question. Yes, did you say earlier that installing DCOS also automatically installs Mesos? Yes, yeah, when you install DCOS it brings in Mesos automatically; it's part of the installation. Anyone else? All right, well, I'm around all weekend, so come say hello and ask me questions whenever. Oh, and I can show you the registry here.