All right. Awesome. Sorry for that. Thank you, everyone, for coming. I'm Jamie from the Habitat core team. I'm also the lead engineer of Habitat. Habitat is an open source project from Chef. We've been building it for about two years. It's a collection of pieces that make up an overall application automation system, with a focus on applying distributed systems techniques to application automation. If any of you are familiar with traditional configuration management, it typically takes the approach of building a system that your application will live on, so it maps pretty well to a systems administrator persona. You provision out a machine, whether it's a VM, a container, or physical hardware. Then you choose a distro as part of the provisioning. From there, you layer software on top of it, you configure the machine the way that you want it to be, and then you put your applications on it. Habitat takes an approach where it applies distributed systems to make applications the focus instead, treating applications as autonomous actors in a system that interact with each other to set state using choreography versus orchestration. Orchestration is micromanagement, top down: I'm the CTO of this company, and I'm going to ask Phil to do a thing, and I'm going to ask you to do a thing, and I'm going to go down the line. Then I'm going to ask each person: did you do your thing? Did you do your thing? The problem with that is there's a network partition between every one of those requests, and as the system gets more complicated to set up, you need to build more logic into your orchestrator to handle failure cases across those network partitions and those different systems. With Habitat, you set a goal for the system, and then the actors in the system make that goal happen for you. And we do that using distributed systems. So Habitat itself is a process supervisor, a developer studio, and also a hosted build service.
Those are the three large pieces that make up Habitat. Habitat currently works on Linux (any 64-bit version of Linux, 2.6 kernel and higher, I believe), on Windows Server 2008 R2 and higher, and currently we have client-only support on macOS. The process supervisor is a large part of what makes this an interesting talk for All Systems Go, but the thing that I like to talk about first is the hosted build service. So we have our process supervisor, and it runs packages. Habitat has its own package management system that we'll go through, and the build service is the thing that actually builds those. The first thing I want to do is show you all the build service and get a job going, so that we can show the basics of where to get started. So here I am at habitat.sh. If you log in, you'll see this screen. I'm logged in as me. These are the origins that I have. There's a bunch of them that are test origins; the one that we care about right now is my personal one here. What I would do normally is create a brand new origin when I land here. Origins are like an organization on GitHub. It's a way to segment packages into different containers. So instead of having just one Redis package, you could have one for Facebook, you could have one for Chef, you could have one for yourself. Or you can consume from the core origin, which is where most of our software lives. You'll see on this previous page that the core origin has about 484 packages in it right now. So if you landed here, you'd create your own origin. I'll create a new one for All Systems Go. We support public and private packages. A public package is one that you want to share with the entire community. A private one is just like a private GitHub repository: we'll store your packages for you, but without authorization no one will be able to download them. All the private data around the build output and things like that is quarantined off as well, unless the person is a member of your origin.
So let's just make a public one. You'll see that we also generated some origin keys. These signing keys get used when we build packages to identify where that package came from. So if you receive a package, it's got a signature associated with it, and as long as the public key matches, you know that it came from a verified source. So if I was to connect a new package to this, or a new piece of software to build the package, I'd come connect a plan file. This is reaching out to GitHub. Sorry, the internet is giving me some trouble here, so it's going to take a minute. But this is reaching out to GitHub to find a GitHub repository that contains not just my software, but a file describing how to build that package. So if we go to my GitHub... actually, I just want mine. I've already forked a project to prepare for this. This is just a sample Node application. In this directory is just a Node app, but there's a habitat directory as well that has a plan.sh file. This is the entry point for how to build Habitat software. This might be a little small, sorry, guys. It's just nine lines of bash. We use the system scripting language to represent how to build packages in the Habitat system. This is nine lines long because we know really well how to build Node applications; this is called scaffolding. You can build any software with Habitat. It's not just for Node or Ruby or Python or a high-level language. We built an entire... I mean, it's not a Linux distro, but we rebuilt every piece of Linux from glibc up. And we'll get to why we did that crazy-ass shit in a minute, but we had to. So nine lines, and this describes exactly how to build the package that we're going to put through the build service. I'm going to make it a public package, and I'm also going to export it to Docker. Oh, I haven't set up an integration yet. So part of the process here, and this is optional, is that we have post-processors for the build process.
So after we build a package, we'll upload it back into Builder, where Builder hosts your software. Your process supervisors come, they download the software from Builder, and then they run it there. You may have a preference in your organization to use containers, so one of the things that we're going to do here is set up an integration with Docker Hub. I just got to get my password, sorry about that. Whoops, live demos are great. So I've got the integration set up. If I come back here and refresh, connect the plan again. Let's put it out to a container in that origin. Okay, so I've set this up. What I'm going to do here is edit the file to make sure that the origin matches the origin that I just created in Builder. So it's all-systems-go, and yeah, this is good. Just commit these changes. And because I just committed those changes, it actually kicked the build off for me. So if we look at the output here, this is the Habitat build system, which is called plan-build. It's running and just taking care of installing any of the package dependencies that we have, building the software, uploading it back into Builder, and then exporting this all to Docker. I happen to have a Docker container already built of this, so I can skip ahead and do the cooking show thing for you. Oh, yep, the cooking show thing's going to fail. We're going to wait a minute while this actually does build. So while this is building, this is a good moment to chat a little bit about how the packages work and why we went the route that we did. Every Habitat package is immutable, atomic, and isolated. The thing that we're building right now is isolated from the rest of the system entirely. This sample app that we're outputting depends on glibc, but it depends on a particular version of glibc that's a snapshot in time from the moment that we issued the build.
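To make that concrete: every Habitat package carries a fully qualified identifier of the form origin/name/version/release (the release being a build timestamp), and its metadata pins the exact release of every dependency it was built against. Here's a minimal Python model of that idea; the identifiers and releases below are invented, and this is a sketch of the concept, not Habitat's actual data structures:

```python
# Minimal model of fully qualified, pinned package identifiers.
# All idents and release timestamps here are made up for illustration.

from dataclasses import dataclass


@dataclass(frozen=True)
class PackageIdent:
    origin: str
    name: str
    version: str
    release: str  # build timestamp, e.g. 20170101000000

    def __str__(self) -> str:
        return f"{self.origin}/{self.name}/{self.version}/{self.release}"


@dataclass(frozen=True)
class Package:
    ident: PackageIdent
    deps: tuple = ()  # fully qualified deps, pinned at build time


# A sample app built against one specific glibc snapshot:
glibc_a = PackageIdent("core", "glibc", "2.22", "20170101000000")
app = Package(
    PackageIdent("myorigin", "sample-node-app", "1.0.2", "20170922120000"),
    deps=(glibc_a,),
)

# Rebuilding glibc produces a *new* release; the already-built app
# still points at the old one until the app itself is rebuilt.
glibc_b = PackageIdent("core", "glibc", "2.22", "20170922130000")
assert glibc_a in app.deps and glibc_b not in app.deps
print(app.deps[0])  # core/glibc/2.22/20170101000000
```

Because the app records the full core/glibc/2.22/release ident rather than just "glibc", a rebuild of glibc can never silently change what an existing package runs against.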
So because they're isolated in this way, and because they never change, it gives you those superpowers you get from a container, but it doesn't just work with containers. You can use this for physical hardware, virtual machines, or, as in this example, a container. Contrast that with traditional configuration management, where you have an artifact that configures a machine that has global state. You don't really know if that thing will succeed in running a year from now. You also don't really know if it will tomorrow, or even today, because the machine that you run that configuration management on is basically dealing with global state at all times. Because we instead set up the machine first and cared about that first, and the application is a citizen of the machine, when you try to configure the application you just don't know what else happened on that machine before you came there. So we say that Habitat packages are immutable, isolated, and atomic, meaning they never change, they're all-or-nothing, and they're isolated from everything else in the system.

As for why we went this approach, and why things are isolated in the way that they are and only work against the thing that they were built against: let's step back and talk about a story from around 2002, a day when the Linux world had to rebuild everything because of a vulnerability in zlib. zlib is a compression library that pretty much everything touching a socket uses, and the bug allowed for arbitrary code execution if your machine was vulnerable. Because of that, basically every piece of software on the planet had to be rebuilt to remedy the issue. The problem was that, at the time, it was far less common to dynamically link against a shared system zlib; a lot of software vendored it in. So you couldn't just patch zlib and then restart all the services.
Instead, you had to go to your build server, if you even had one, because this is 2002; CruiseControl landed in like 2001 or something like that, right? So it's a way-back time machine that we're in. You had to rebuild basically every piece of software that had zlib vendored into it. And the problem with that is you often just had no idea which applications were built with which versions of that software. And today, in 2017, we still have that problem with containers. People are trying to solve for: what is in that container? Do you know? Did you just go to Docker Hub, download Redis, and then run it and hope that everything in that container is copacetic? Do you know what version of OpenSSL is in there? Do you know what version of zlib is in there? The answer is no, unless you built that into your build system. And Habitat gives that to you for free. All of this is open source and free. That build service that's running right now: $0. You don't give me a credit card. It builds your software for free. So actually, what I wanna do right now is rebuild glibc in production as a live demo. I'm not gonna promote it to stable, because I don't actually want a problem. But I will rebuild the world right now and show you guys what that looks like. One of the reasons that we built Builder to support this process supervisor and package manager is for this problem. We realized that if we isolate things in the way that we do, you will constantly be rebuilding software manually, trying to figure out which software at which level needs to be rebuilt to remedy the issues that you have at the leaves, the applications, of the dependency tree. Okay, so this failed because my password wasn't any good, but we'll get to that in a second. For now, what we're gonna do is... I'm looking at Fletcher's and Chris's faces, the people on my team that didn't know I was gonna do this. I didn't even know. Yeah, I didn't know. It's live demos, man, I don't know.
I don't know if it's gonna work. It should work; it worked like a couple of days ago. All right, so while this is building, I just wanna show you quickly a dashboard that we have. This is private. This data we hope to get out to people so you can see what's going on in the system, but right now we just launched, so this is private and internal. But this is basically what's happening in the system right now, and glibc will start working through it in a minute. Right now, I wanna see why this failed. I don't. Thank you very much. All right, so what happened was my username on Docker is different than I thought it was. That's right. Oh, I got this part right. This will just take a second. I'm sorry, guys. Can I get a time check really quick? How many minutes? 20, excellent. Thank you. We have to validate that input; we've got an issue on our board for it. Also, this entire project is open source, the build service included. So if you go to Habitat on GitHub, here, you can follow our project tracker and see exactly what we're working on right now. We also have a roadmap; I can show you guys that in a minute. But it's all open source, so you can see 100% of what we're doing. So this is kicked off now. This is that sample app. And we can see that we've got a number of jobs kicked off. glibc is building right now, and as soon as glibc is done building, Builder looks at the rest of the dependency graph, so everything that depends upon glibc kicks those builds off. And they happen in stages. If 20 things depend on glibc, those will begin building. If 40 things build off of those things, they'll start building, and so on and so on until we get to your applications. And why this matters: if you have your own origin, say you create Facebook, and you depend upon core/glibc for anything, then as soon as glibc rebuilds, we issue a command to your projects as well to rebuild.
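Those stages are essentially levels of the reverse dependency graph: a package rebuilds only once everything it depends on within the affected set has itself been rebuilt. A rough sketch of that scheduling in Python, with an invented toy graph standing in for the real core origin:

```python
from collections import defaultdict


def rebuild_stages(deps, root):
    """Given pkg -> direct deps, return the build stages triggered by
    rebuilding `root`: each stage depends only on `root` and earlier
    stages, so everything within a stage can build in parallel."""
    rdeps = defaultdict(set)  # package -> things that depend on it
    for pkg, ds in deps.items():
        for d in ds:
            rdeps[d].add(pkg)

    # Everything transitively depending on root must rebuild.
    affected, stack = {root}, [root]
    while stack:
        for dependent in rdeps[stack.pop()]:
            if dependent not in affected:
                affected.add(dependent)
                stack.append(dependent)

    # A package's stage is its longest affected-dependency chain.
    memo = {root: 0}

    def stage(p):
        if p not in memo:
            memo[p] = 1 + max(stage(d) for d in deps[p] if d in affected)
        return memo[p]

    stages = defaultdict(list)
    for p in affected - {root}:
        stages[stage(p)].append(p)
    return [sorted(stages[i]) for i in sorted(stages)]


# Toy dependency graph (made up; not the real core origin):
deps = {
    "zlib": ["glibc"],
    "openssl": ["glibc", "zlib"],
    "node": ["glibc", "openssl"],
    "sample-app": ["node"],
}
print(rebuild_stages(deps, "glibc"))
# -> [['zlib'], ['openssl'], ['node'], ['sample-app']]
```

Note that node waits for openssl even though it also depends on glibc directly; a plain breadth-first walk would get that wrong, which is why the stage is the longest chain back to the rebuilt root.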
So you'll have a message, it'll automatically rebuild your software, and it won't affect production. That's why I'm pretty okay rebuilding glibc right now. I hope my live demo doesn't fail, but I know that nothing's gonna happen, because inside of Builder we also have this concept called release channels. There's two by default: one is called unstable and one is called stable. Unstable is where all this is happening right now. If you depend upon unstable packages, I mean, more power to you, but I would not. Once we're done with this, if I wanted all these packages to be consumed by other people's projects, or if I wanted to promote, say, our production environment of Builder here, I would promote everything to the stable channel. As for Builder, this service that you're looking at right now: my history is that I built video games for 10 years. I worked on Guild Wars 2, Lord of the Rings Online, Dungeons and Dragons Online, League of Legends, large online distributed games. For this whole build system, our goal is to have about five nines of uptime or more in a year. This is brand new, and it's in preview, so I can't guarantee that at this moment, but the backend architecture under this is basically a distributed online game system, and it models really closely on the experience that I learned from working with the server programmers at Guild Wars 2, where their uptime was unbelievably ridiculous. Anyway, so this build is kicking off now, and it's pushing it into Docker, where I'll be able to pull it in a moment. If we look, glibc's still building. All right, so it's done. So I'll look at latest here. You'll see it's in the unstable channel. There's no latest stable build of this. If I wanted to put this into stable, I take this package identifier, and I simply run this command to promote it. This will be in the UI at some point, but it's not right this moment.
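Release channels behave roughly like named pointers over the package store: fresh builds land in unstable, and anything subscribed to stable sees nothing until a release is explicitly promoted. A toy model of that behavior; the class and its API are invented for illustration, not Builder's real interface:

```python
# Toy model of Builder-style release channels (illustrative only).

class ChannelStore:
    def __init__(self):
        self.channels = {"unstable": [], "stable": []}

    def upload(self, ident):
        # Every fresh build lands in the unstable channel automatically.
        self.channels["unstable"].append(ident)

    def promote(self, ident, channel):
        # Roughly what promoting a package identifier to a channel does.
        self.channels[channel].append(ident)

    def latest(self, channel):
        chan = self.channels[channel]
        return chan[-1] if chan else None


store = ChannelStore()
store.upload("asg/sample-node-app/1.0.2/20170922120000")
store.upload("asg/sample-node-app/1.0.3/20170922130000")

# Supervisors following "stable" see nothing new yet...
assert store.latest("stable") is None

# ...until an operator promotes a release.
store.promote("asg/sample-node-app/1.0.3/20170922130000", "stable")
assert store.latest("stable") == "asg/sample-node-app/1.0.3/20170922130000"
```

This is why rebuilding glibc on stage is safe: every artifact stays quarantined in unstable until someone deliberately moves the pointer.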
So if I push this into stable now, any supervisors that are running the software and connected to the stable channel will automatically be updated, and it'll happen in a rolling update fashion, if you want. We'll get to that in a moment, just as soon as we get to this Docker container bit. So if I look in Docker now, it created this for me. Like I said, this is optional. You don't have to output a Docker container if you don't want to. Just trying to... yeah, there it is, for 8080. So after I pull it, I'm gonna start the Docker container and forward some ports, 8000, where the app is listening. What just output was the supervisor itself. Our process supervisor runs as PID 1. Why this is important: go back to the moment where I described how our packaging system works, and how we know every version of every piece of software running. Because we're process one, there's literally nothing else in here. This isn't even Alpine Linux or BusyBox. This is nothing other than process one. We know for sure every single piece of software that's running in your container, and what ports it's listening on. All that metadata about the application travels with the package, and because the package is immutable, it never changes, so it always works. So if I look here, we'll see I'm running version 1.0.2 of this really, really basic sample app. And if I wanted to change something about this app, I just commit something here. Builder picks up the changes and auto-builds it. And if I had that container set to auto-updating, the container inside would auto-update. If not, I could just pull down a new container and deploy it if I'd like to. So let's take a peek: our glibc is still building. So we'll move on to the next bit of the slide deck, which talks about the Habitat supervisor itself. I talked a lot about packaging and the build system; those are the sexy things. To this crowd, this might be the sexy thing: the process supervisor that runs all of this.
So Habitat is a peer-to-peer process supervisor, with a network thread that communicates in a peer-to-peer fashion to spread rumors about the processes that it's running, using an epidemic protocol. I like to play a game with everyone here. If you don't want to play, it's okay. This usually works for a less experienced crowd; you guys might know how this all works already. But I want to simulate what it looks like to spread a rumor through a bunch of autonomous actors. So every one of you is a process supervisor running services right now, okay? And the protocol that we're going to speak is a fist bump. We're going to do that for sanity reasons. If you don't want to fist bump, you don't want to play, you can stay sitting. But you can high-five, you can wave, anything that you're comfortable with. What I want you to do is stand up and fist bump three people around you. So if you want to play, let's do it. All right, wait, wait, whoa, not yet. I'm choreographing this, okay? I'm the operator here. I get to say when it happens. Thank you, thank you for being ambitious, though. Okay, so I'm going to issue a configuration change right now. So I tell Phil, and you start, and as soon as somebody gets fist bumped, you turn around and fist bump three other people. And if you've already received one, just take it anyway and drop it on the floor, right? Okay: config. So three peers, everyone, three peers. And as soon as you're done, sit down. All right, so the rumor's pretty much been propagated to this room of 10,000 people. That was very fast, right? A lot of people came to this talk. Now, if I personally got off the podium and went and issued a command to every single person, it would be error prone. Maybe during that time someone has to go to the bathroom, or I couldn't even give the talk. I can't go do the rest of my work if I'm going to each machine, or person, which is a machine, and issuing this command.
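The fist-bump game is a decent model of how epidemic protocols behave: each informed node gossips to a handful of peers per round, duplicates get dropped on the floor, and coverage grows exponentially, so even a huge room converges in a handful of rounds. A toy simulation (the node count, fanout, and random peer selection are all invented for illustration):

```python
import random


def spread(n_nodes: int, fanout: int = 3, seed: int = 42) -> int:
    """Rounds until a rumor started at node 0 reaches every node,
    with each informed node gossiping to `fanout` random peers per
    round. Duplicate deliveries are simply ignored."""
    rng = random.Random(seed)
    informed = {0}
    rounds = 0
    while len(informed) < n_nodes:
        rounds += 1
        for _ in list(informed):
            # Each informed node picks `fanout` distinct random peers.
            informed.update(rng.sample(range(n_nodes), fanout))
    return rounds


# Coverage roughly quadruples per round, so even 10,000 nodes
# converge in on the order of log(n) rounds.
print(spread(10_000))
```

That logarithmic convergence is the whole point: the operator injects one rumor and the swarm does the delivery, instead of the operator walking the room node by node.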
So what I just simulated here was the audience as a peer-to-peer network. I suggested that a configuration change happen. You didn't have to play if you didn't want to, but if you did, what you did was establish a membership list with each other. So you know who's there and who's not there, and you also established information about yourselves to each other. I mean, I'm abstracting a whole lot of what that fist bump meant, but imagine that you were supervisors telling each other about the processes that you run. So you established a membership list, and then in the future, if somebody went away, say somebody went to the bathroom, if you went to fist bump people and you realized that person was gone, well, then you would send a message to your peers and say, you know, Tom's gone. And they'd verify that he's gone, detect that there's a failure in the system, mark that person as confirmed dead, and then the world would keep on going. The person that's confirmed dead could come back and be like, what happened? I just went to the bathroom; you're still saving my seat, man. And they can come back and rejoin the ring. If they come back and they start causing problems, one of the permanent members of the ring can permanently get rid of them and say that person has departed. Or, as an operator, if I know that somebody's coming back and they're misbehaving, I can permanently depart a member as well, and they're not allowed to join the ring again. It's a kick-ban. What I just described and simulated is something called the SWIM protocol. Habitat's process supervisor has this built into it. It uses SWIM to establish a membership list with the peers around it. SWIM stands for Scalable Weakly-consistent Infection-style Process Group Membership protocol. All that means is that, mathematically, it scales indefinitely, or at least linearly over time, and its job is to figure out who's alive, who's there, and who's not.
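That lifecycle, alive, then suspect when pings go unanswered, then confirmed dead once peers agree, with rejoining allowed unless the member has been permanently departed, is a small state machine. A minimal sketch of it (heavily simplified; the real supervisor gossips these states as rumors, with incarnation counters to resolve conflicts):

```python
from enum import Enum


class Health(Enum):
    ALIVE = "alive"
    SUSPECT = "suspect"      # a peer failed to reach this member
    CONFIRMED = "confirmed"  # peers agreed it's dead; it may rejoin
    DEPARTED = "departed"    # kick-banned; may never rejoin


class Member:
    def __init__(self, name):
        self.name, self.health = name, Health.ALIVE

    def miss_ack(self):
        # A probe went unanswered: don't declare death yet, just suspect.
        if self.health is Health.ALIVE:
            self.health = Health.SUSPECT

    def confirm_dead(self):
        # Peers corroborated the suspicion.
        if self.health is Health.SUSPECT:
            self.health = Health.CONFIRMED

    def rejoin(self):
        # Coming back from the bathroom is fine, unless departed.
        if self.health is not Health.DEPARTED:
            self.health = Health.ALIVE

    def depart(self):
        self.health = Health.DEPARTED  # the permanent kick-ban


tom = Member("tom")
tom.miss_ack()
tom.confirm_dead()
assert tom.health is Health.CONFIRMED

tom.rejoin()                 # welcome back, seat's still saved
assert tom.health is Health.ALIVE

tom.depart()
tom.rejoin()                 # no effect: departed is forever
assert tom.health is Health.DEPARTED
```

The suspect step is what makes the protocol tolerant of slow networks: one missed fist bump doesn't kill you, only a corroborated failure does.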
Now, that doesn't have anything to do with the services in general, but it does have a lot to do with who's present and what supervisors are available. How we figure out what you're running and what services you have is with something called Newscast, which we use to spread rumors on top. This is not a sub-protocol, sorry; this is the main protocol for how we issue rumors, and there are sub-protocols built on top of it, which I'll chat about in a minute. So, spreading rumors: we spread information about what services we're running, what their health is, what their state is, whether they're up or down, what configuration they have, and whether they're the leader or a follower. We build sub-protocols on top of this as well. One of those sub-protocols is leader election. Say you had three database servers running in a ring, with application servers connected to them. You automatically figure out where those databases are through service discovery, linking the services together; the app servers find the database servers. Now let's imagine that you shoot one of the database servers in the head, and it was the leader. Well, the others will perform a leader election, using SWIM to identify the failure and then spreading rumors to figure out who the new leader is. And then the application servers will notice, oh, one of those database servers is gone, I should stop talking to it, and they'll be reconfigured and sent a message. All of this was done without operator interference. This happens while you're sleeping. As long as you build it into the packaging, which, again, is immutable, never changes, and always works, the system will just heal itself, as long as you set the packages up correctly. This works with any software. You don't need to change your software.
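That failure-and-reelection flow can be sketched end to end: a service group loses its leader, elects a new one, and a bound application re-renders its configuration from the new census, no operator involved. A toy model (names are invented, and the election here just picks the lowest surviving member id; real elections go through gossip rounds and weigh member suitability):

```python
# Toy sketch of self-healing via election plus rebinding. Illustrative
# only: member names and the election rule are made up.

class ServiceGroup:
    def __init__(self, members):
        self.members = list(members)  # e.g. ["db-0", "db-1", "db-2"]
        self.leader = None
        self.elect()

    def elect(self):
        # Stand-in election rule: lowest surviving member id wins.
        self.leader = min(self.members)

    def confirm_dead(self, member):
        # Driven by the membership protocol confirming a failure.
        self.members.remove(member)
        if member == self.leader:
            self.elect()


class AppServer:
    def __init__(self, bind):
        self.bind = bind  # bound to the database service group

    def render_config(self):
        # Re-rendered whenever the census changes, no operator needed.
        return {
            "db_leader": self.bind.leader,
            "db_members": sorted(self.bind.members),
        }


db = ServiceGroup(["db-0", "db-1", "db-2"])
app = AppServer(db)
assert app.render_config()["db_leader"] == "db-0"

db.confirm_dead("db-0")  # shoot the leader in the head
assert app.render_config() == {"db_leader": "db-1",
                               "db_members": ["db-1", "db-2"]}
```

The key property is that the app server never needed a human to retarget it: its config is a pure function of the gossiped census.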
So this works with Postgres, it works with Redis, it works with an unnamed, really terrible piece of enterprise software that we used as a proof two years ago, to see if Habitat would work with the most garbage shit I could find. And it does. Let's check on glibc really quick. Okay, so it's done building, and now it's kicked off a whole lot of stuff. There's about 30 builds running that it's dispatched now, and there's 31 other builds that are pending, waiting until those are done. They go off in groups, and eventually Builder will be rebuilt. This thing, this app that you're looking at right now, will be rebuilt because I rebuilt glibc, and if I promoted the entire world to stable, Builder would be automatically updated, because Builder is running itself. I'm sorry: Habitat supervisors are running Builder. Builder is building Builder, and Builder is building the Habitat supervisors. Habitat supervisors can also automatically update themselves without a service outage. When they do so, they'll update, reattach all the processes, and then eventually Builder will be rebuilt and the Builder components will auto-update themselves. They'll do so in a rolling update fashion. We shard our data into 128 shards, and we have the services separated by concern: there's a session server, there's a job server, there's all the workers, and they'll auto-update in a rolling fashion, so you won't notice a service outage either. Unfortunately... maybe next year I'll do that on stage, but right now I'm not gonna promote glibc to the world and show you that process. All right, back to the slides here. I've got a little bit more time. So the last bit I wanna show you is how to interact with the supervisor, and what I'm gonna do is run a multi-tier application with Habitat. It all starts with the Habitat CLI.
You can get it a couple different ways: with brew, with Chocolatey if you're on Windows, and there's a curl-bash that somebody in the audience is gonna complain about; if you don't want it, you don't have to use it. You can just fucking download it from Bintray and put it on your machine. I don't care how you get it. You can even install Habitat with Habitat, because the Habitat CLI is packaged with Habitat, but you've got to start somewhere. And then what we do is we enter something called the studio. This is one of the large pillars of Habitat. So we have the distributed build service, we have the supervisor, and then we have this development environment. Basically, it's an isolated chroot. It depends on your platform: if you're on Linux, it's a chroot; if you're on Windows or Mac, it's currently a Docker container. And it has just enough of what you need. It's a completely isolated, clean-room environment. So when you build software in here, it doesn't pull in the dependencies of the operating system around you. You can only link to the things that you've depended upon. And what's really nice here is that this studio is exactly what Builder is using to build your software. I'm sure everyone here has used a build system before, but what often sucks about them is troubleshooting them. You make a change on your local machine, you're like, that's probably gonna work. You kick it to the build server, and fucking four hours later, you're still trying to figure out why the build server won't build your stuff, because the path is wrong, or it's missing a dependency, or somebody else is on the build server changing it while you're there. Anyway, I digress. We did this so we would avoid that. Basically, the studio builds the software the exact same way that Builder does, so you don't run into those problems. So let's kill this container, and bump up the font a little bit here.
So I already have Habitat, and I'm gonna enter something called the studio. Once you're in the studio, it gives you some information here. For instance, I'm already running a supervisor, and if I type sl, I'll see its output. It just went and grabbed some information from Builder itself. This is connected to the internet. I'm connected to the internet; I just went out and connected to Builder and downloaded some packages, and now the supervisor's running, but there are no processes in it. So if I run hab svc status, there are no services loaded. The first thing I'm gonna do is pull down the router for Builder. It's downloading it from the stable channel, which is the exact same version that's running in production. And it's also downloading all of its dependencies, everything that the router has ever linked to. In our infrastructure, we have gateways at the front, then a router, then all the services in a service tier, and then the database in our data tier. So this is like the message router between all of that, and part of how we can stay online. If you look in the /hab directory here, you'll see that that's where the packages went. So if I look in the router package, at the bin directory, let me move this over a little bit, I'm gonna ldd the builder-router binary so you can see what I mean about the isolation. You are linked to only Habitat packages here. There is no system glibc that we're linked to. We're linking to a specific version of ZeroMQ and libsodium and libarchive. All of those were built with Builder and with plan-build. So that's why I know everything that it's running. I'm gonna start the supervisor again, because I hit Ctrl-C and killed it. Stupid bug that we have, sorry about that. So if I wanna start the service, I just type start. If I look, it's now taken that package and begun to run it as a service. I check the status: it's running.
And I also have, on every single supervisor, an HTTP gateway running, which will give me information about what's on the system. Now, I don't have curl, because, as I mentioned, only what is needed, the very bare minimum, is in this studio. So I'm gonna ask what provides it. It's gonna tell me some bullshit; that's a bug. But normally it works, trust me. And I'm gonna install curl and pass it this -b flag, which binlinks it into my path. So if I say which curl, I've got it. Okay, I don't have which; you get the point. I'm gonna ask what services are running. And it's a bunch of spew here, so let me get something to format that. If I look, this is information about every single service that's running in the system: what its path is, and every piece of software that it's running, or rather every piece of software that the service depends upon. You can hook this up into any system that you want, to put data over it or graph it, et cetera. The next piece that I'm gonna show you is running the application server that connects to this. And then I'm gonna have to stop, but I can talk to anybody about this afterwards. So I'm gonna install the API server. And if these were on two different machines, it would still work the same way; this is just for the sake of time. And there's a bind that I have to connect this to. So I connected it to the builder-router service, which is running in the default service group. I don't have enough time to describe service groups, but there are tutorials online. There's a 10-minute tutorial; there's everything you need to learn. We try to make Habitat simple enough to learn in two lunch breaks, so you can learn all this shit without me in front of you. So we start it. The API server started. And I'm gonna look at its config really quick, and you'll see that the routers were automatically filled out in this configuration. This config is my app's config. We configure our servers in TOML, but this would work with yours too.
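The bind mechanism just shown, where the routers section of the API server's TOML gets filled out from the live members of the router service group, boils down to iterating the census and rendering each live member's IP and configured port. Here's a rough Python equivalent of that templating loop; the member data, field names, and TOML shape below are invented for illustration:

```python
# Rough Python stand-in for a templated config loop over the live
# members of a bound service group. All data here is made up.

members = [
    {"alive": True,  "sys": {"ip": "10.0.0.5"}, "cfg": {"port": 5562}},
    {"alive": False, "sys": {"ip": "10.0.0.6"}, "cfg": {"port": 5562}},
    {"alive": True,  "sys": {"ip": "10.0.0.7"}, "cfg": {"port": 5562}},
]


def render_routers(members):
    """Render a TOML-ish routers list from live bind members only."""
    lines = ["routers = ["]
    for m in members:
        if m["alive"]:  # dead members drop out on the next re-render
            lines.append(
                f'  {{ host = "{m["sys"]["ip"]}", port = {m["cfg"]["port"]} }},'
            )
    lines.append("]")
    return "\n".join(lines)


print(render_routers(members))
```

Note that the member at 10.0.0.6 is filtered out because it isn't alive; when the census changes, the supervisor re-renders this file and the app gets the new router list automatically.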
And the routers... again, it's all open source, so you can see how any of this works too, like how our whole live service works. This is the part of the plan for configuration. It's very simple, just Handlebars. If you look at the routers, we went through each alive member of our router bind, we iterated over them, and we pulled out each one's sys.ip and its configured port. And it's a simple polymorphic relationship between the router and the API server: the router saying, I expose these things, and the API server saying, I wanna depend on something, and you must expose at least this port for me. The last thing that we should do here is check on our glibc build. It's still going. That's gonna be going for about another 25 minutes or so. But at the end of it, if I wanted, I could promote the entire world, and software running anywhere that depends upon our stable channel and is automatically updating would be updated. I have no more time; that's all I could show you. All of this was completely unscripted, so I'm very sorry if it was terrible to listen to me talk and stutter for 45 minutes. But I'm Jamie Winsor. You can find me here on GitHub, on Twitter. Two of the core team members are here as well, Chris and Fletcher, if you guys wanna stand up. I think we're all wearing Habitat shirts. If anyone's interested in Habitat, in Builder, in the dev studio, we'd love to chat with you. Thank you very much for listening.