So, my name's Kendrick Coleman, I'm a developer advocate on the EMC {code} team. You might have seen us, we have a booth here, but again, I'm a developer advocate. We write integrations into Docker and Mesos and all these other kinds of cool things for persistence. And we're going to talk about that a little bit as we go along here. A little bit about me, maybe on the work front first: I was actually in the VMware ecosystem for a very long time, about eight or nine years, did a lot of stuff, had a pretty prominent name in the VMware world. And then it was probably about four or five years ago, when this whole DevOps thing started coming around, that I started getting back into it and doing programming. My degree from university was actually in Visual Basic. I did that for about a month out of college and said never again. So again, about four or five years ago, when DevOps started coming back around, I started getting into new languages like Ruby and Node, and then I joined this team and we got into the Docker ecosystem, and from there started learning Golang, and so on and so forth, and that's how it happened. A little bit about me personally: I come to you all from Kentucky. I'm from Louisville, where we're known for two things, horses and bourbon. So I'm a pretty big bourbon aficionado. If you like spirits and brown water, you can always come talk to me. I actually run a bourbon podcast; you can go to iTunes and type in Bourbon Pursuit. It's a weekly Friday release where we interview master distillers across Kentucky and at different distilleries across the nation as well. So enough of that, though. We'll talk a little bit about what's going to happen here. And if you were here in the last session with Victor and Mike, they kind of stole a little bit of my thunder, but not all of it. We've got a few more surprises in line for you as well.
Now, I also have to apologize a little bit up front, because I kind of lied a little in my abstract to make sure this would get accepted, right? Not just to get accepted, but inside of there I talked about using a three-tiered web app to do scaling and stuff. And when I was building this and building the demo, I was thinking, this is going to get harder and harder. And then I went to a talk this week from Corey Quinn called Heresy in the Church of Docker. He talked about three-tiered web apps and stuff like that and said, that's ancient, that was done like three years ago; if you're still building your applications like this, you're just doing it wrong. So thankfully, a lot of the things that I'm going to talk about today play really well into this microservices world. And I think another premise of today's talk is just the evolution of technology in general. Who's here just learning about containers in general? Okay. So if you haven't been in this container ecosystem for at least a year, you're probably thinking to yourself, this is pretty damn complicated. I'm hearing all these buzzwords going around, and I don't know how they all fit with one another. I've talked to all kinds of people on the show floor and they're like, my boss sent me here, they think I have to learn about this, I don't know if we're going to use it or not. And thankfully, while technology is always getting more complex, at the same time it's getting easier to consume. The container ecosystem is one example. We can take the same example with cell phone technology and its evolution. In the early '90s, we pretty much had this brick we held in our hand.
And it had ten digits, a pound sign, and an asterisk. It was all built in with firmware inside of it, and it was pretty simple for what it did. Now, as time progressed, it got amazingly more complex, right? We started throwing on operating systems. Now we've got applications that have their own pieces of code and their own isolation. Yet it's become insanely easy to operate. It's become so easy that my own daughter, who's now two and a half, at one and a half started taking away our iPad. She knows how to scroll through, find her folder, and watch crazy YouTube videos. That's just what she does every single night. Now, she doesn't know how to slide to unlock and press the digits for our PIN, but she knows the user interface exactly and how to use it. So it just goes to show you that it's a very complex technology, but very, very easy to start consuming. We can take this even out of the technology realm and just look at cars in general. A single car has around 30,000 different pieces, down to its smallest bolt. You've got braking systems, you've got electronic systems, you've got the engine itself. All of these are very complex pieces, but somehow they're very easy to operate with essentially four different controls. You've got a steering wheel, a brake pedal, a gas pedal, and a shifter, right? Put it in some sort of gear, and depending on your kind of car, you might have a clutch as well. Now, the future of this is actually going to get even more complex, yet even more simple, as automated cars start taking over, introducing more and more complexity into the system. Yet those four things that I use to actually drive the car are going to go down to possibly zero. It comes down to me just sitting in the car and telling it, take me to ContainerCon in Toronto, or take me to Ottawa, Canada, or click my heels and take me back home to Louisville, Kentucky, right?
So it's going to get a little more simple even though the technology is getting more complex. So let's look at the beginning of what this all looked like in the container ecosystem, to set the stage for anybody that's just now getting into it. In the very beginning, we had what were called container links. You could make this a little easier by not using links and just exposing ports on containers and then talking through those different ports. But container links were able to create secure communication channels between two containers that all had to run on the same exact host. And it's actually funny: if you go and look at Docker's website for this, it's now called legacy links. This technology is about a year old, and it's already called legacy. That's one thing to understand: this is how fast this industry is moving. It was probably early 2015 when Docker made an acquisition of a company called SocketPlane. SocketPlane was looking at utilizing VXLAN as a way to create an easier way of networking inside of Docker. Docker took this, kept it open source, and it's now a package called libnetwork. It allows two Docker containers to talk to one another across multiple hosts, and even across multiple clouds, utilizing VXLAN. Now, as I said, I came from a background of doing VMware, and I spent a lot of time doing vCloud Director. vCloud Director is one of those initial technologies that utilized VXLAN quite heavily, yet it was super, super complex to set up. I had to go to every single host and set up what was called a VTEP with its own IP address. I had to have a Cisco switch with a certain technology enabled on it, increase MTUs, and everything like that, just to make sure this would work. However, as we said, as complex technology becomes easier to consume, it's all kind of hidden beneath the covers.
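To make that contrast concrete, here's a hedged sketch of the two approaches; the container names and the network name are made up for illustration:

```shell
# Legacy links: both containers must live on the same host, and the
# "db" alias only resolves inside that one host.
docker run -d --name db redis
docker run -d --name web --link db:db nginx

# Overlay networking (libnetwork): containers on different hosts can
# share one network, and "db" resolves across all of them.
docker network create --driver overlay mynet
docker run -d --name db --net mynet redis
docker run -d --name web --net mynet nginx
```

The point of the comparison is that the VXLAN plumbing the speaker used to configure by hand in vCloud Director is entirely hidden behind that one `docker network create` command.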
If you were using Docker at 1.0, or even two years ago, this sort of stuff didn't exist. Networking was very, very hard. Yet today, if you're getting into Docker and using these container runtimes, you're going to be thinking, wait, this isn't how it always was. So it's getting a lot easier to consume from the standpoint of actually utilizing the networking aspects. Now, orchestration, in my mind, still isn't an easy thing to do. I'll talk a little more in a minute about where it's gone in the newest version, but Swarm 1.0, while not exactly hard, wasn't the easiest thing to get set up and working. It required an external key-value store such as ZooKeeper, etcd, or Consul to be able to use libnetwork and to get to the scale of thousands of nodes that was really proclaimed. At the same time, it also required an additional flag if you wanted those containers to restart as well. Now, you've probably seen Google around here, and you've probably heard of Kubernetes. This is one thing that's gained a lot of traction over the past six to eight months. And at least for me personally, I have yet to be able to set up a Kubernetes cluster from scratch without some use of bash scripts or a CloudFormation template. This link right here is actually to Kelsey Hightower's GitHub repo, Kubernetes the Hard Way. It's about 10 to 15 different READMEs that tell you how to set it up from start to finish, adding in TLS encryption and all these different things, and not using bash scripts, right? So it's a very, very complex piece of software to actually get up and running. Mesos is another one. If you're not familiar with Mesos, it's an Apache Foundation project. It was started by Ben Hindman.
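For anyone who never touched standalone Swarm 1.0, the setup looked roughly like this. This is a sketch from memory of the pre-1.12 workflow; the addresses are placeholders, and the exact flags varied by release:

```shell
# Each engine had to be pointed at an external key-value store
# (Consul here) for libnetwork and cluster membership.
docker daemon -H tcp://0.0.0.0:2375 \
  --cluster-store=consul://192.168.99.100:8500 \
  --cluster-advertise=eth0:2375

# The Swarm manager and agents ran as containers themselves.
docker run -d swarm manage consul://192.168.99.100:8500
docker run -d swarm join --advertise=192.168.99.101:2375 \
  consul://192.168.99.100:8500

# Restarts weren't automatic either; containers needed an explicit
# restart policy flag, which is the "additional flag" mentioned above.
docker run -d --restart=unless-stopped redis
```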
It's a way to cluster multiple hosts together and then utilize what's called a framework to figure out how to run applications on that cluster of hosts, with the whole cluster seen as one compute resource. And the thing about frameworks is you can use things like Marathon from Mesosphere, or you could use Aurora, another Apache Software Foundation project that was started by Twitter, and with those you can actually run containers. But each one requires its own specific syntax to get up and running and actually utilizing containers. Now, it was not more than a month or two ago when Docker introduced 1.12 with the SwarmKit integration. And SwarmKit is where, again, life is becoming more and more simplified. It's integrated directly into the Docker Engine. It's an optional feature, so you don't actually have to start Swarm if you really don't want to, and you can utilize Docker underneath Kubernetes or Mesos or whatever it is. But if you want to use Swarm, it's very easy to get going, and we'll see that. It also has desired state, being able to say, I want this many containers running for this particular service, as well as reconciliation, being able to say, if I lose a host, it's going to restart those containers on a different host. And it's decentralized, with these concepts of managers and workers, so you can delineate work out to certain nodes if you really want to, which we'll also show in a demo. The libnetwork pieces are built directly into it. Not only does it allow me to do networking, it also allows me to do load balancing as well as service discovery, which are two really cool pieces. I can do rolling updates with services, allowing me to specify how many containers to update in parallel if I have ten different containers running a particular service.
How many do I want to take down in parallel and restart as new ones? So I can say, if I've got ten containers and I'm updating to a new image, I want to update two in parallel. It's going to take down two, put up new containers, take down the next two, and so on, right? So it's an easy way to do rolling updates. And it's secure by default, with TLS built in, using encrypted network traffic back and forth. So let's do a quick little demo of this before we jump into the next pieces, just so you can see what's going on here. All right, so in here I actually have a three-node setup. This three-node setup is using an EMC backend storage called EMC ScaleIO. It doesn't really make much sense in this demo right here; it will for the next one. But just so you have an idea of what's going on. So in here, I can do a docker help just to get my commands, and I can see docker swarm right here. I can do docker swarm init. And since I want to do this for a particular IP address on this system, since it's all running inside of Vagrant, I'm going to use the advertise-address flag. So I'm going to copy and paste this out, because I actually need to get my IP address before I do this, because I just realized I can't remember exactly what it is. So I know that this is going to be 50.12. We'll make it a little bit easier on this here. So this creates my first manager inside the swarm. Pretty easy to see. Now I can use the command that it spits out right here to actually join my workers to it. I've got two other workers over here that I want to add. Join as a worker, join as a worker, and we're done. That's a very simple way to get started with the cluster. I can go over here and do docker node ls, and I can see all of the nodes that have been added in here.
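The cluster bootstrap just demonstrated boils down to three commands. The IP address and join token below are placeholders standing in for the values the demo environment produced:

```shell
# On the first node: create the swarm; this node becomes a manager.
docker swarm init --advertise-addr 192.168.50.12

# init prints a join command with a one-time token. Run it on each
# of the other nodes to add them as workers:
docker swarm join --token SWMTKN-1-<token> 192.168.50.12:2377

# Back on the manager, verify cluster membership:
docker node ls
```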
I've got my MDM-1 server, which is going to be my leader. I can do a docker info and get information about this particular swarm cluster; I can see I've got one manager and three nodes. Pretty simple, just for the clustering, right? That's how easy it's become. Now I want to introduce the network capabilities. So I'm going to say docker network, and I'm going to do a quick list just so you can see what's available to me. I've got bridge networking, host networking, and I also have overlay networking. Overlay networking is the VXLAN piece, and it's actually what we want to use. So I'm going to say docker network create, use the overlay driver, and create the ContainerCon network, cc. Pretty easy to do. Now let's go ahead and use this. So I want to create a new service, and, just so you can see, here are all the different commands available to you. I should also mention that this is a little bit different. If you've been in the container ecosystem for six months to a year, you're probably used to saying docker run and docker ps to get your containers running and monitor them. That kind of goes out the window now. docker service is the new syntax for how all this happens. So we say docker service create, I'm going to name it redis, and I'm going to put it on the network cc. And I also want to add a constraint here, because I want to make sure that this is always running on my worker nodes. I want my manager node to be utilized only for managerial purposes, and within this demo, to make sure failover is going to work, that's what I want. So I'm going to specify node.role equals worker, and I'm going to specify the image as redis. Let's see, where did I miss that? Oh, because I have a single quote, not a double quote. Okay, cool. So docker service ls.
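Pulling those steps together, the network and the constrained service look roughly like this. The names match the demo; the exact quoting of the constraint is my reconstruction:

```shell
# Create a VXLAN-backed overlay network named "cc".
docker network create --driver overlay cc

# Run Redis as a service on that network, pinned to worker nodes only,
# so the manager stays free for managerial duties.
docker service create \
  --name redis \
  --network cc \
  --constraint 'node.role == worker' \
  redis

docker service ls   # shows the service and its replica count
```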
I can see that I've got one replica running. Let me see where it actually got placed. So docker service ps list, or sorry, docker service ps redis. And I can see that it's now running on my tiebreaker. So I come over here and I can do a docker ps just to see the running containers, and there it is. Now, I can also scale this pretty easily. So I can say docker service scale redis=3, and it's going to scale to three. I'll do it again here, docker service ls. And I can see that I have three replicas running, and by looking here I can see that they're now distributed among both of those worker nodes. Now let's go ahead and examine load balancing between them. So I'm going to create a new service. I'm going to take this and, instead of constraining it to a worker, I'm going to go to a manager, just so we can make sure that when I'm accessing everything, it's actually going through the entire network and I'm not just hitting an individual node it's already running on. So I'm going to create redis02. Now, I know this is going to be running on this particular host right here, and I can see that it's up and running. So I'm going to do a docker exec, actually go inside the container here, and connect to the redis service, specifying the host as redis. Now, if I come here, I'm going to set a new key. I'm going to set cc to hello containercon, and I can do a get cc, and that's exactly what we would expect, right? That we can now access this. Now, I'm going to exit out of here and go ahead and just copy and paste this again. If I go back in and ask to connect to the redis service, what do you think is going to happen when I try to get that value back? Do you expect hello containercon to show up? Actually, it's not going to, right? Because what's happening now is round-robin load balancing across that service.
So you can kind of understand what's going to happen here. My name's Kenny. I'm going to exit out, do one more, and set cc to 12345, and now I've got all three nodes with some sort of data on them. I've gone three times for three different replicas. I come here, and now I should see hello containercon, which I do, right? So as we can see, DNS-based service discovery is working, we've got load balancing working. It's an easy way to be able to picture all of this happening, and all of it is utilizing libnetwork to get it done for us. Cool. So let's go ahead and jump into the second half here. The second half, oh, there we go. Okay. Sorry. Is that working? Yeah. My screen was showing something different. Anyway, next is thinking about state, right? We looked at Redis there; Redis is actually a stateful service. People start messing with Docker and think to themselves, this is great for something stateless, such as a web server or Kibana, something that only requires a configuration file. I don't need to worry about losing data or anything like that. At MesosCon this past year, and if anybody doesn't know, MesosCon is yet another conference put on by the Linux Foundation as well. Ben Hindman, as I said, the co-creator and one of the founders of Apache Mesos, had a slide up that he actually took from Lightbend. Oh, of course it's not going to work now. You're going to make me click it. Okay. Well, anyway, he said that there's no such thing as a stateless architecture, and it's true. You don't have a data-less infrastructure either, right? Data is kind of the key to all this. And we at EMC {code} look at containers as the next evolution of the virtual machine. Virtual machines made it pretty simple to be somewhat portable, right? It was a wrapper around your entire application. You could move it across hosts.
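The round-robin behavior just demonstrated can be reproduced with something like the following sketch; the container name is a placeholder, and the redis-cli lines are shown as comments since they run interactively inside the container:

```shell
# From the redis02 container on the same overlay network, connect to
# the virtual IP behind the "redis" service name.
docker exec -it <redis02-container-id> redis-cli -h redis

# Each new connection may be balanced to a different replica, so a
# key SET over one connection can be missing when the next connection
# lands on another replica:
#
#   redis:6379> SET cc "hello containercon"
#   (exit, reconnect through the service VIP again)
#   redis:6379> GET cc     # may return (nil) on a different replica
```

This is why writing to each of the three replicas in turn, as in the demo, eventually makes the value visible again: the load balancer is cycling connections across the replica set.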
You had the ability to do backup and disaster recovery and all these different kinds of things, and you could do it for any application. When you start messing with containers, a lot of the examples out there are, of course, things like using nginx as just a web server, but that's just one component of an entire microservices architecture, or whatever it is you're building. There's no reason why you can't use something like Postgres inside of a container. In fact, I'm not alone on this, because if you go to Docker Hub, seven of the top 12 applications on Docker Hub require persistence. Whether it's MySQL, MariaDB, Elasticsearch, Redis, Postgres, and a few more I'm forgetting, all of these require some level of persistence. So you can't really run these applications, and a lot of people who may be running them are just doing it for fun to get started, but as soon as you lose that container or that server, whatever happens, you lose the data as well. So I guess I got ahead of myself, but let's look at the problem here, right? The problem is that when I run a container, all of the application state gets stored inside that container. If I lose the container, I lose my data. That's simple to understand. I have the ability to utilize local volumes, and local volumes give me the ability to store that data on the local host itself, but that leads to another problem: if I lose the host, I lose my data. You could go down the path of an external NFS mount: mount it to the host, have that path, expose that path into the container. Yet that's a very manual and tedious process. And that's why Docker introduced the Docker volume driver framework, which was in 1.9. Another thing to mention here is where scale is concerned. Moore's Law always takes effect.
You could have a server that has 16 terabytes of storage, or even more than that, but is it really smart to say, I'm going to put all my data on this one server's storage, and if something happens, I need to migrate it? If it's a RAID card failure or whatever it is, how do I get that data over to another host to get up and running again? So within Docker 1.9, as I said, when it was announced, we were one of the first three vendors that actually had a Docker volume driver that was a part of it. REX-Ray is our Docker volume driver, written at EMC {code}. It's still under heavy development, and it's a completely open-source project at github.com/emccode/rexray. Let's dive a little deeper into what this all supports, because we and really only one other competitor out there support more than one storage endpoint. We support multiple different types of storage. Of course, we eat our own dog food, so we support EMC technologies, but at the same time, we also support VirtualBox. VirtualBox is actually one of those really cool things that lets you keep consistency in your deployment mechanisms from development all the way to production. So if you're developing on VirtualBox and you're utilizing REX-Ray for your volume management for a particular container, you can keep those same exact command lines when you move to QA and to prod. If your QA is running in Amazon, and we actually support EBS and EFS now, you can utilize that inside of AWS, or GCE, or Rackspace's cloud. If your prod is running back on-prem, you can now use maybe one of our EMC products inside of there. The only thing that changes here is really the configuration file. REX-Ray is a super simple installation, a super simple architecture.
It's a stateless application that only needs a configuration file, and it's a simple curl-pipe-bash command that installs a Go binary. So it's super simple to use. Again, it is open source, and we are one of the only vendors, and one of the only projects, out there that supports high availability with it. We do this through a feature called preemption, which I'll show in the next demo: the ability to do a forceful unmount from the storage endpoint, so that when a new container requests access to that volume, we go ahead and attach it over there. So how are we solving the problem? How does this look architecturally? Well, REX-Ray is installed on every single one of your hosts that has Docker installed. The Docker daemon itself makes an API request to say, I want to create a volume, or I want to mount a volume. It examines which Docker volume drivers have been registered with it, and it offloads those operations to the volume driver. Now, as I said before, you could be manual about it. You could take an NFS mount and do all the manual intervention, but this is where storage automation and orchestration take over. So now we have the ability to specify the Redis data folder, which we've either pre-created or have Docker create for us automatically on the fly, and map it into the data folder inside the container. Now, if we lose the host, we lose the container, there's a RAID card failure, whatever it is: the data remains intact on that remote volume. It's the same exact thing as going back in time to our 2002 desktop and ripping the power cord out of the back. It's essentially a hard power-off. Now, one thing to note is that we're not in the data plane here. We're in the control plane.
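As a sketch of what that looks like in practice, here's roughly what a REX-Ray install, config, and driver-backed volume looked like in the 0.x releases of that era. The endpoint values are placeholders and the config schema changed between releases, so treat this as illustrative and check the project README for the current format:

```shell
# The curl | bash installer mentioned above:
curl -sSL https://dl.bintray.com/emccode/rexray/install | sh

# /etc/rexray/config.yml picks the storage backend; swapping
# virtualbox for ec2 or scaleio is the only change needed between
# dev, QA, and prod environments.
#
#   rexray:
#     storageDrivers:
#     - virtualbox
#   virtualbox:
#     endpoint: http://192.168.99.1:18083
#     volumePath: /Users/me/VirtualBox/Volumes

# With the REX-Ray service running, Docker hands volume operations
# off to it through the volume driver framework:
docker volume create --driver rexray --name redis-data --opt size=5
docker run -d --volume-driver rexray -v redis-data:/data redis
```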
So in regards to data consistency and data integrity, we're just keeping the last write that's actually been flushed to disk, right? Keep that in mind when you're thinking of all the questions that are coming up. So now, utilizing an orchestrator, or doing it manually if I want, to restart that container on a different host, it's going to go ahead and do that, and now I can actually achieve the high-availability features. And not only that, I can also scale to whatever my platform allows. So utilizing something like AWS or Isilon, you can scale to petabytes of data. Now, oddly enough, you're probably not going to have a petabyte of data attached to a single container. Maybe that's not microservices, but who knows? I don't know what your application looks like. And one last thing about how this looks in an architecture: we're going to push it to Swarm. Pay no attention to this; that's old now. But we're going to push it to Swarm, and Swarm is going to be doing everything for us. Now let's run into a demo. This is again where I lied a little bit, and we're going to do something a little more fun. I'm not going to do a three-tiered app. Instead, what I want to do is Minecraft. Anybody who doesn't know Minecraft? If you don't know Minecraft, ask your kids. If not, just GTS, okay? Google that shit, if you can get that. Okay. But it's a game. It's been around for quite a while now. It's all in 16-bit or 8-bit graphics; I don't know, for some reason retro gamers and everybody are really totally into it. So anyway, we're going to look at this Minecraft server image. I've used this one a few times to spin up for demos and stuff, and it's never failed me, so let's hope it doesn't fail me now. So within here, you can see that it has all the commands for docker run, and you have some other things in here.
We'll look into that, but really I want to show you how to examine a Dockerfile and figure out what to pay attention to when you're dealing with volumes. The Dockerfile itself tells you how this container is built. We can see that it's starting from the Java version 8 image. It's adding a few different things in there, but really, this VOLUME instruction is what we want to pay attention to. So I'm going to go ahead and add this over here. And that's my cheat sheet in case everything, you know, breaks and goes to hell. We want to create our volumes first. So I'm going to do a docker volume create, specify the volume driver as rexray, and say the name is going to be mcdata. And I'm going to specify some options in here, so I can say the size option is five. Now, I've got four volumes here, and I don't want any anonymous volumes being created, so I'm going to create one for each of these: data, mods, config, and plugins. And to make this go a lot faster, I'm going to break this all down into a single command. All right. And I'm going to go back over to my host real quick, back into this instance right here. I can do a docker volume create, and this creates all my volumes that are going to be available to me. I do a docker volume ls, and I can see that they're there. Now, one thing to also note here is the local volume driver. As I said, Redis is a key-value store; it requires a little bit of persistence. So a local volume has automatically been pre-created with this unique ID right here. That's what happened when I spun up that Redis instance. Now, one of the things you're going to notice when I do a docker volume ls on the tiebreaker virtual machine is that I have access to all of those particular volumes as well, because ScaleIO is a software-defined storage solution.
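A hedged sketch of that volume-creation step: the mcdata name matches the demo, but the other three names and the meaning of the size option (gigabytes on most REX-Ray backends) are my assumptions:

```shell
# One named volume per VOLUME instruction in the Dockerfile, so the
# Minecraft server never falls back to anonymous local volumes.
for vol in mcdata mcmods mcconfig mcplugins; do
  docker volume create --driver rexray --name "$vol" --opt size=5
done

# The rexray-backed volumes show up alongside any "local" ones,
# and on every node, since the backend is shared storage.
docker volume ls
```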
So essentially it's just replicating data between all the nodes and writing it at the same exact time. Again, REX-Ray is just looking at the configuration file, which tells it what kind of API endpoint to talk to. So even if I was using this on AWS or VirtualBox, it wouldn't be any different, because all it's doing is querying the storage endpoint and asking, what do I have access to? And since this one actually has two Redis containers running, that's why you see two different local volumes. So let's go in here and actually start creating the Docker service itself. So let's say docker service create, and we're going to specify the name as mc. And I'm going to go back over here, make my life a little easier, and take a few things out of the docker run command. So I know I need to specify that EULA is true. I need to expose port 25565. I also want to put it on the network cc, because if I really wanted to, I could access it from my Redis pieces. I also want to put the constraint on this one to run on my worker nodes as well. And this is where I need to start talking about my mounts. The mounts are really where we start to play around. And the cool thing, which I'll give away real quick, is that the mounts now allow you a lot more granularity than what you were used to in the docker run command. So I can say type equals volume, and I need to specify the source. My source is going to be mcdata, and my target is going to be the data folder itself. And the volume driver is going to be rexray. Now I need to do this again for each one of my remaining volumes. So just bear with me real quick.
Now, the fun thing about this is that originally, if I had a single container that needed access to multiple different types of storage endpoints, I couldn't do that with a docker run command, because the volume driver flag was a global setting for all the volumes inside of it. So now, if I really wanted to, and REX-Ray actually gives us this ability through a feature called modules, I could say I have an Isilon and I've got AWS, and I could have those all running inside one REX-Ray. It just creates a different socket for each of them and registers them with Docker. So then I would specify the volume driver as aws, or the volume driver as isilon; rexray is just the default. So I've got data, but I need to change this into mods, and then I've got config and plugins. And lastly, I need the image name itself, which is this guy right here. And since I've done this a few times, I'm a pretty decent presenter, so everything should already be downloaded for us and we shouldn't have to worry about that taking any time. So let's go ahead and create. Oh, screw it. Oh, did I? There you go. You guys waited all that time to say something. I wrote that like three minutes ago. All right. There we go. So it's creating. Now, this is going to take a few seconds to spin up. So I can do a docker service ls, and I can see that it's still going. I can do a docker service ps mc and see what state it's in, and I can see that it's going to MDM-2. It's actually already in a running state, so it's not even preparing over there, which is great. So I can do a docker ps here. Wait, it's not. What is it? Oh, it's actually still preparing. When I blow it up, it's really hard to see all this stuff, so let me do it one more time.
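Put together, the service being typed out looks approximately like this. The talk doesn't name the exact image, so itzg/minecraft-server is a stand-in for a commonly used one, the volume names are my reconstruction, and the key point is that in the Docker 1.12 `--mount` syntax each mount can carry its own volume-driver setting:

```shell
docker service create \
  --name mc \
  --env EULA=TRUE \
  --publish 25565:25565 \
  --network cc \
  --constraint 'node.role == worker' \
  --mount type=volume,source=mcdata,target=/data,volume-driver=rexray \
  --mount type=volume,source=mcmods,target=/mods,volume-driver=rexray \
  --mount type=volume,source=mcconfig,target=/config,volume-driver=rexray \
  --mount type=volume,source=mcplugins,target=/plugins,volume-driver=rexray \
  itzg/minecraft-server
```

With multiple REX-Ray modules registered, one of those `volume-driver=rexray` values could just as well be `volume-driver=isilon` or `volume-driver=aws`, which is exactly the per-mount granularity docker run couldn't express.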
And now it's in a starting state as of three seconds ago, which is perfect. So I can come in here, and I'm going to tap dance for just a second while the container ID is getting fetched, and then we're actually going to tail the logs for it. So I'm going to do docker logs, I'm going to follow it, and here's my container ID. And we're going to see that it's actually going to start setting up the world now. Now, while that's going, I'm going to open up Minecraft on here. You see it's preparing the world right now; it's spawning the area. Oh, what was this? Let me figure out what this was. All right, tap dance a little bit more. This is what happens when I don't know my own login. Oh, because I can't type my own pieces right. Okay, cool. So now we're launching Minecraft. We're getting into it. We can see that the area has been created, and I'm going to go into multiplayer mode. And I can see that the Minecraft server is actually available on 50.11. I'm going to join the server here. So I'm going to go ahead and log in, and as we can see in the background, EMC code has entered the game. We're going to go through and collect a few different pieces here. Now, I have a kid. She's two and a half. She doesn't play Minecraft yet, and I don't exactly know what I'm doing. I've heard you have to go through and build beds and all this other kind of stuff. But just give it a few seconds to catch up here. The idea is to go around and collect different pieces. Oh, left the game. What is going on? Bum, bum, bum. I hope nobody else is trying to log in at the same exact time. Okay, cool. All right, so let me go collect something before I get kicked out of here again. So I'm just beating down a tree right now; that's all this is really doing. All right, so I've got one piece of oak wood in my inventory. There we go, we've got two. So I can look and see that's my inventory right there. And I've earned an achievement: Taking Inventory.
Now, I'm going to simulate that this host has actually fallen off and that Docker has died. So I'm going to go into mdm2 again over here, and I'm going to kill the Docker service. I'm just going to shut it down real quick with a sudo systemctl stop. So when I do this, I'm actually stopping Docker on mdm2. It's going to see that it's now killing off the container; it's shutting it down. And at this point I have lost access to the game. I can refresh it and see that I can't connect to the server. I can go over here, though, and kind of see, well, where did my server go? Well, it's already being prepared and moved over to the next server, which is my tiebreaker server. So we can see it's already in a preparing state; it's moving all those different pieces over. And since REX-Ray utilizes preemption for HA, it's actually doing a forceful disconnect on those volumes, and it's now going to start attaching them to the next host itself. So we can see it's in a starting state. I can come over to my tiebreaker machine. And not only that, I'm even failing over those Redis containers that were over there on the other side. So I can do a docker logs --follow over here, and this should be a lot faster. You can see the area has already spawned up. I can come over here, and thanks to the magic of libnetwork, I don't even have to rejoin or edit the server that I was talking to. It actually understands where it was connecting to before and just forwards that traffic over to the next server. So now I'm back in here, I can see that my inventory is still at 2, and I can just continue playing the game as I was before. So pretty cool: we're now actually persisting data between all of this and doing failover, utilizing Swarm to do it all at the same exact time. A little bit cooler than watching Postgres with a few tables failing over, right? So we'll kind of wrap this up a little bit.
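The failover I just walked through boils down to a few commands. The hostname matches this particular demo setup, and the container ID placeholder is whatever docker ps shows you after the reschedule:

```shell
# On the node running the Minecraft task (mdm2 in this demo),
# simulate a host failure by stopping the Docker engine:
sudo systemctl stop docker

# From a surviving manager, watch Swarm reschedule the task
# onto another eligible node:
docker service ps mc

# REX-Ray's preemption force-detaches the volumes from the dead
# host and re-attaches them wherever the task lands; tail the new
# container to watch the world load from the same data volume:
docker logs --follow <new-container-id>
```

Because the published port is handled by Swarm's routing mesh, the Minecraft client can keep pointing at the same address while the traffic gets forwarded to whichever node is now running the task.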
Today we learned that clustering, networking, and failover with persistent applications has gotten exponentially easier than it was even four months ago. All of this is available on GitHub, every one of these projects. If you wanted to do this same exact thing, we actually have a Vagrant repo with ScaleIO so you can get up and running. Again, ScaleIO was just a good way to demonstrate everything running locally on my own desktop here. If I really wanted to, I could run this all inside AWS, give out the public IP, and you guys could start joining my Minecraft server up there. REX-Ray doesn't really care what storage backend you're using; it's all supported underneath it. With that, there is also an Open Storage Summit tomorrow. I'll be talking a little bit about this same exact thing, but it's a full-day event just about openness and storage and all that kind of cool stuff happening around us. So with that, that's the end of the presentation. If you have any questions, feel free to ask. You can follow me on Twitter at KendrickColeman. You can also follow EMC code at emccode. My GitHub profile is kacole2, and I'll be around here today and tomorrow to answer any questions you may have as well. So any questions? Thank you.