Hi, good morning everyone. My name is Tim, and I'm a product advocate for VAMP, an open-source release and management platform that runs, among others, on DCOS and Kubernetes. The title of my talk is Advanced Deployment Strategies and Workflows for Containerized Apps on DCOS. That's a huge mouthful, but it will make sense as we dive in. A tiny bit about me: I'm Dutch. VAMP is a Dutch company with an HQ in Amsterdam, but I live in Berlin. I come from an operations background and have been responsible for deploying applications in many shapes, sizes, and tech stacks over the last almost 20 years. I actually studied art history, so it's pretty cool for me being here in Prague. I've never been here before. It's great.

But enough about me. Let's dive in. Today I want to talk to you about deployments, and I want to start with the following hypothesis: in a perfect world, no one would deploy anything, ever. So that's kind of where my talk ends. Thank you. Have a great MesosCon. All silliness aside, let's investigate this statement a bit closer, in a little segment that I would like to call "a brief and unscientific history of application deployment, by me."

So, let's wind the clock back to 1998-2000. This was about the time that I was getting started in the business. I was working for a consulting company that had consultants at banks, big Dutch banks, Rabobank, you might have heard of it. Back then, application deployments were a bit like giving birth. They were the culmination of months and months of eager expectation, of work and dedication, and all scenarios for the birth-slash-deployment had been written down in big Word documents. Emails were written, CC lists grew, and experts and engineering teams were on call for when the big moment happened. The eventual delivery process was a big contradiction of joy, pain, and a lot of sweaty men staring at monitoring screens deep into the night, and as always, things never went as planned.
Each minute, new unforeseen complications arose, and it turned out that the 35-page implementation plan we had written three months in advance was not an actual reflection of reality. Wow, who would have thought? So people panicked. Some turned to alcohol. Others had to call home to their families, because of course we do this stuff on Friday evenings, and tell their wife and kids that, yeah, I'm not coming home this weekend, I'm sorry, because deployment. So you slave on into the night, and finally, early in the morning when the sun rises, the deployment is done. You fought off all the demons that were lurking in your IBM WebSphere, CORBA, Spring, TIBCO monstrosity, and you won. Your new About Us page is live. But at what cost?

What did we learn? Well, any logical, rational person would say: we will never do this again. No more deployments. It's just not worth it. So we learned that deployments are evil. They're soul-consuming demons that should be destroyed and banished from the face of the earth. And then, as we fast forward to 2011, so that's about 10 years, disaster strikes. Multiple times. It's like a meteor shower. Boom, continuous delivery. Boom, lean startup. Boom, DevOps. Boom, agile. Boom, cloud stuff. They're all telling us that everyone should be deploying everything all of the time. Back then, companies like Facebook and Etsy started claiming, yeah, we deploy 50 times per day, it's completely normal. This was the time that Heroku got very popular and showed us that you could deploy something with just one git push command. But the horror doesn't end there. If we fast-forward another two years, we come to the great flood of 2013, called microservices. If deploying one monolithic application was already bad enough, they're now telling us to actually split it up into multiple deployments, and that's just crazy.
I just heard yesterday at the keynote that Netflix has 380 services in production. So either these people must be insane, or evil, or both, clearly. But there's light at the end of the tunnel, because around the same time, so let's say 2013-ish, the earth was subjected to a Cambrian explosion of containerized stuff. We had Kubernetes, Docker, and Mesos all coming out around the same time. You could start getting this on-prem, hosted, as SaaS solutions, everything. So, glory to the world. Hallelujah. Everything's saved.

Here's our timeline again, my brief unscientific history. We're in 2017 now, and my conviction is that the deployment problem is still not really solved. Sure, you can have Marathon running on your Mesos cluster. That's pretty easy. You create a piece of JSON or you type it into the UI. Boom, you have a deployment. It's there. But what happens when it's in production? What happens when there are multiple versions? What happens when there are dependencies? What happens when there's actual real traffic, and it's not some kind of to-do app or showcase for a conference? Well, this is the problem that struck me when we started VAMP, back at the end of 2013, beginning of 2014. The whole containerization principle is great; I got it very quickly, and we quickly saw the implications of it. But my colleagues and I come from that background where we were at banks, at large traditional e-commerce companies, and these companies have different concerns. They have different types of people. They're mostly here in Europe. They're not Bay Area startups. So how can we make this work for them? That's one of the underlying principles. So we started building something, which we called VAMP. We actually don't use the long name anymore, but originally VAMP stood for the Very Awesome Microservices Platform.
We kind of left that behind, because we very quickly found out that it doesn't really matter whether you're doing microservices when you use VAMP, but the name is nice, so we stuck with it. I was hesitant about whether to explain here how VAMP works. I'm going to do it a bit, but there was a talk yesterday here by Julien from Microsoft, who did a fantastic job actually showing off VAMP. I was in the audience like, hmm, there goes my talk. But I think I picked out the right bits that delve a bit deeper into what VAMP can do for you. So if you want to get in on, let's say, ground level, I would highly recommend either looking up Julien's talk from yesterday (it wasn't in this same room) or just going to the website, because all the demos and all the getting-started material are there.

Having said that, I will try to explain a bit what VAMP does. You have hardware, which never goes away; stuff needs to run somewhere. Then you have your applications. And then you have VAMP, which runs on top of something like DCOS, or Kubernetes, or straight Docker. VAMP natively runs inside one of your container orchestrators, just as any other container would run there. What we've done is abstract how you send commands to your orchestrator, a bit like a JDBC driver from the Java world: you have a couple of databases, you have one driver, you just talk to the driver and it sorts out all the differences. However, we do allow you to take advantage of some of the things that are specific to either DCOS or Kubernetes. We do that by using something we call dialects; I'll show this later. So you're not losing any power that's specific to, for instance, Kubernetes.

So VAMP runs on your orchestrator. What does it do? Well, it takes into account three basic artifacts that you can deploy to VAMP, extremely similar to what you would do in, let's say, Marathon. We have what we call Breeds. Breeds are the tiniest, smallest building blocks.
They are the definition of: hey, this is my application name, it has a container version, it has, I don't know, some variables. Then we have a Blueprint. Multiple Breeds can live in a Blueprint. It tells VAMP: okay, this one should talk to that one, and if you spin up this one, then that one should also be there. So it can declare dependencies between Breeds. There's a reason that the Blueprint is in the middle: day in, day out, you will probably only use Blueprints. If you make a Blueprint, we extract out the Breeds, so it's very common that you never actually click on that tab in the UI, which I'll show you later. A Blueprint is a static artifact. It's a piece of YAML or a piece of JSON, whatever you want. Then you click Deploy, and it's turned into a running container or containers on the orchestrator of your choice. Not a lot of magic there; basically everyone does it this way.

But then we have two things that are kind of special about VAMP. We have the concept of gateways, and gateways are, strictly speaking, just load balancers. They allow traffic to stream into your deployment, your running applications. So far so good; everyone does that. But what we can do is extract metrics from the gateway. If you deploy a VAMP gateway, it is connected to an Elasticsearch data store, and for every request that runs through it, we measure stuff: request rate, response time, error rate, et cetera. And we feed that into our metrics system. What we then allow you to do is write another thing called workflows. Workflows are pieces of JavaScript (Node, to be specific) that can use whatever is fed into the system and do things to VAMP. I will make this a lot clearer later on. These things can influence the gateways or influence the deployments. And the consequences of this are pretty big.
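To make the Breed/Blueprint distinction concrete, a minimal Blueprint might look roughly like the sketch below. The names, port, and image are invented for illustration, and the exact schema should be checked against the vamp.io documentation:

```yaml
name: simple:1.0.0              # the blueprint, a static YAML (or JSON) artifact
gateways:
  9050: simple/webport          # route a gateway port to the cluster's named port
clusters:
  simple:
    services:
      - breed:                  # an inlined Breed; VAMP extracts it out on save
          name: simple:1.0.0
          deployable: myregistry/simple:1.0.0   # container image, illustrative
          ports:
            webport: 8080/http
```

Clicking Deploy on such a Blueprint is what turns it into running containers on the orchestrator of your choice.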
You can have a very interesting dynamic deployment by using things that are happening during the deployment in the decision-making of the deployment. This is exactly what 20 years ago, let's say in 1998, we used to do with an implementation plan. We roll out four instances of our app. Then we check this and that metric and see if everything is going fine. Then we pull a couple of switches to, I don't know, have new customers try out this new feature. And if that's all thumbs up and everything is green, then we move on to the next stage. This used to be a Word document. Now you can put it in code. We tried to come up with an acronym for it, something like DAC, deployment as code, but we didn't; there are already way too many acronyms in this whole space. So in a nutshell, this is the power that VAMP gives you out of the box.

The rest of my talk is demo time. All the funny stuff is over. I want to show you basically three scenarios, and I want to get us from an application that's deployed at version one to a version two, in two or three different ways, depending on how you look at it. The fun thing is you can play along. If you go to this URL right now, http://mesoscon.vamp.io, you'll get a nice screen. You will be connected to an Azure cluster. We're using Microsoft's Azure Container Service, which works really nicely, and we got free credits from them, which also might have been an influence on this. You will get a screen that looks a bit like this. Let me just blow it up a bit. It's a blue screen, version 1.0.0, and it prints out the hostname of the container that it's running on. So before I dive in, I just want to show you this party trick, because it completely drives home the point. Here's VAMP. Let me blow this up a bit. Let me go to the gateways and adjust the slider. Let's put it on 50-50. Let's save it. Actually, there's a constant load on it right now. Yes, it's going great.
And roughly 50% of the time, if I hit a hard refresh, you will see a different version. Great. This is the basic thing you can do with VAMP. Again, Julien yesterday showed how you get from zero to this situation. Using the UI that I just showed you, you can already do a lot of powerful things very, very easily. Deploy two versions of your app, pull the slider, boom, and you have your 50-50% split. You can do this for three or four or five apps; it doesn't really matter, as long as the weights, as we call them, add up to 100%. But as we said, we want to automate this. We don't want to be pulling that slider ourselves all the time. So just to reset the situation, I will go back to our blue version. 100% of traffic. There we go.

The first scenario that I want to show you is the following: a rolling release from a CI/CD pipeline. This document and all the files can be found on our GitHub; I made it public yesterday evening. So if anything goes too fast or you didn't quite catch it, after the talk you can go to GitHub, read through it, and play around with all the examples yourself. In a typical deployment pipeline, you want to have something like this. This is what Julien did yesterday. This is a clip from Jenkins: a Jenkins pipeline, completely scripted. Even the pipeline itself, so there's a Jenkinsfile that tells Jenkins how to proceed through all these stages. The last three stages are all VAMP-specific. The first couple of stages are what you would find everywhere: you pull something, you build something, you test it, then you wrap it in a container and push it to some container registry. And then VAMP comes in. So I'm going to show you a couple of the commands that would help here, and then I'm going to show you how that works in a Jenkins context. What you essentially want to do in a very simple rolling deploy is the following. You have your version 1.0.0 running somewhere.
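The slider demo boils down to weighted routing. In VAMP the weights are compiled into HAProxy configuration, but the core idea can be sketched in a few lines of plain JavaScript; this is an illustrative approximation, not VAMP's actual code:

```javascript
// Pick a version for one request, given a list of [name, weight] routes whose
// weights sum to 100. `r` is a number in [0, 100); in production you would pass
// Math.random() * 100, but taking it as a parameter keeps the function testable.
function route(routes, r) {
  let cumulative = 0;
  for (const [name, weight] of routes) {
    cumulative += weight;
    if (r < cumulative) return name; // r falls inside this route's slice
  }
  // Only reachable if the weights do not add up to 100.
  throw new Error("weights must sum to 100");
}
```

With a 50-50 split, roughly half of all values of `r` land on each version, which is exactly the hard-refresh behaviour shown in the demo.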
It is set up in a specific way: at first deployment you decided, okay, this is the right size, these are the right tags I want attached to it, this is the right port it's running on. And then version 1.1 comes along. You don't want to redo all that. You don't want to go back to version 1.0.0, see how it's running, copy-paste all the specifics out of it into your 1.1 version, and then, you know, switch it. What you want to do is just bump the version number, because a 1.1 release should have no breaking changes, and a 1.0.1, just a bug fix, should be even easier. Many systems make this actually pretty hard to do. You have to have all this knowledge of how your current environment is doing when you want to add something to it.

So what we do is give you two options, and as you can see here, there are two commands from the VAMP CLI. There's a CLI that you can use in your CI system or just on the command line, and it has the vamp generate command. What vamp generate does is ingest a currently running deployment and allow you to replace a couple of placeholders, like the version number, the container name, or the address of the registry. This way you keep all the things that are static the same; you just bump the version. Once you have done this, whoops, here we go, you end up with two blueprints automatically generated from the CLI. I'm showing you these in the UI now because it's nicer to look at. We started off with a simple blueprint, 1.0.0. We push it into the vamp generate command, we replace a couple of tags, and out comes the 1.1.0. What you then do is either, from the command line as shown here, do a vamp merge. It's the middle bash command, at line 75. Switching between English, German, and Dutch sometimes mixes up my numbers.
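The idea behind `vamp generate`, treating the running deployment as a template and swapping out only what changes, can be sketched like this. The `${...}` placeholder syntax is invented for the example; VAMP's CLI has its own flags for this:

```javascript
// Fill placeholders of the form ${key} in a blueprint template.
// Anything static stays untouched; unknown placeholders are left as-is
// rather than silently erased.
function fillTemplate(template, values) {
  return template.replace(/\$\{(\w+)\}/g, (match, key) =>
    key in values ? values[key] : match
  );
}
```

For example, `fillTemplate("deployable: repo/app:${version}", { version: "1.1.0" })` bumps only the version and leaves ports, tags, and sizing alone.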
So, number 75, vamp merge: it says put the 1.1 version into my already existing deployment. From the UI, it's basically exactly the same. And I think I already had it deployed here. Yes, I did. So I'm just going to undeploy it, which from the command line is vamp undeploy. What? vamp list deployments. There we are: vamp undeploy simple deploy. Sorry, that's not an underscore. Deploy. So while that's rolling out, let me just clean this thing up. Boom. Yeah, get rid of everything. Undeploying. We should have a clean slate pretty quickly. Create a new deployment. So I'm resetting it. Simple deployment. This is all kind of pre-cached, so it should be pretty quick, and the app is not that big; we're not talking Java sizes here. Deploying, great.

The merge command from the UI is even simpler. You just click "merge to" and it will prompt you: hey, do you want to merge it maybe to this one? Yes, we do. Click merge, and it's deployed. Or it is deploying. What happens now is actually nothing. The users of your app will not notice anything, because we haven't changed anything in the gateway yet. The app is just sitting there, receiving no traffic. So in your CI/CD pipeline, the next step would be to update the gateway. The top command is a long one, but it actually makes sense, and I will not try to explain it right here, but it's a very logical thing where we say: hey, there's one gateway at 100% weight, and there's another one at 0% weight; update them to, I don't know, 10/90 or 50/50 or something like that. It is exactly what I showed you with the slider, just through a CLI command. So what you would do now is put that into a script. We have a Jenkins pipeline script in a gist, and I'm just going to show you. It's a piece of Groovy; that's what Jenkins eats as pipeline scripts. Let me blow it up a bit. Here at the top, you can see a super simple loop.
It goes through 10 steps. At each step, it updates the weights, then sleeps for 10 seconds, and does it again, and again, until it's at 100%, or at least until the 10 steps are done. This is pretty powerful already. This whole pipeline script is 85 lines, with indentation and comments, et cetera, and it describes the whole deployment of this admittedly pretty simple Node.js app. If you can already do this (deploying, testing, pushing, et cetera) in this amount of code, that's pretty nice. But as many of you will have noticed: what if something goes wrong? This loop is pretty simple. You're completely correct. There are many other ways to do this, and there's not a lot of control here; it will just keep going through these 10% increments.

So what other options do we have? Well, one extra option is to split the testing that you want to do, once the application is deployed, across user segments. These segments can be anything you like, as long as we can somehow read them from either the content of your requests or the headers of your requests. With HTTP stuff, that's pretty easy. With TCP stuff, it becomes a little bit harder, but it's possible. So we're going to do a simple experiment here. This command that you see here, starting at line 79, vamp update gateway: what it does is go into the gateway and set a specific thing we call a condition. A condition is pretty much the same as content-based routing, for all you network people out there. We basically do packet inspection: we look at your requests, we grab out the headers, and based on that information, which you can tune with VAMP, we make decisions on where the requests should go. You can do this in the UI or from the CLI, which means you can integrate it into your CI/CD pipeline. So I'm just going to run this, and then we're going to see what happens. It didn't throw an error, so it should be fine.
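The Jenkins loop just described produces a fixed schedule of weight pairs. A sketch of that schedule in JavaScript (the original is Groovy in a Jenkinsfile; this is an illustrative restatement, with the `vamp update gateway` call and the 10-second sleep left out so the logic stands alone):

```javascript
// Build the sequence of (old%, new%) gateway weights for a fixed-step rollout.
// With steps = 10 this yields [90,10], [80,20], ... down to [0,100].
// In the pipeline, each pair would be applied via `vamp update gateway`,
// followed by a 10-second sleep before the next step.
function rolloutSchedule(steps) {
  const schedule = [];
  for (let i = 1; i <= steps; i++) {
    const newWeight = Math.round((100 * i) / steps);
    schedule.push([100 - newWeight, newWeight]);
  }
  return schedule;
}
```

The weakness the talk points out is visible here: the schedule marches blindly to 100% whether or not the new version is healthy.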
If I go to my gateway, you should be able to see this. I'm going to make it a bit smaller. Boom. What we just did, and you can see it here in the lower right corner, is add a shortcode, what we call the user-agent shortcode. On the VAMP website, you can see a whole bunch of shortcodes that we created. In the end, they map to HAProxy ACL rules. ACL rules are kind of daunting and can be very hard to read and understand, so that's why we made them a bit easier. But if you want to use them directly, if you want to use regexes, that's totally fine; you can do that too. What's even cooler, and you might have noticed, is this little tab called Conditions here. It's empty now, but we allow you to save the conditions as artifacts. So you could come up with something really difficult, like "this application with this user that has that cookie from that browser", blah, blah, blah, which could be a very long string or a very long regex. You save it, and then you can use a reference to it, allowing other people that are not as handy with this stuff to still use your segmenting.

So if everyone whips out their phones or their browsers and goes to mesoscon.vamp.io: I am on Chrome here, and I have my blue version. But if you're on Safari, let me just reload this, it actually does it: it is the green version. Admittedly a very simple example, but the sky is pretty much the limit here. There's a lot of interesting stuff you can do. We've seen this used for, for instance, a very common problem: I have a new version of my app, but I just want to show it to internal personnel. Put the IP address of your company in a condition, and we will route traffic from internal users to this new version so they can try it out. Super easy. So those are two ways of getting from our 1.0 to our 1.1. And now we're going to dive into the last chapter, which is the workflows.
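Content-based routing like the Safari demo comes down to inspecting request headers. VAMP's shortcodes compile down to HAProxy ACLs; the hand-rolled check below is only an illustration of the idea. One real-world wrinkle it captures: Chrome's User-Agent string also contains the word "Safari", so a naive match would misroute Chrome users:

```javascript
// Decide which version serves a request, based on its headers.
// Conditions are checked first; only if no condition matches does the
// weight-based split apply (represented here by the "blue" fallback).
function pickVersion(headers) {
  const ua = headers["user-agent"] || "";
  if (ua.includes("Safari") && !ua.includes("Chrome")) {
    return "green"; // condition matched: Safari users see the new version
  }
  return "blue"; // fall through to the weighted routing
}
```

The same shape works for the internal-personnel example: swap the User-Agent check for a source-IP check against the company's address range.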
As I tried to explain in my slide, workflows are pieces of JavaScript, and I'm just going to show you one: the canary workflow. Some of you might not be familiar with Node or JavaScript, but I think you can definitely follow the things you see here. First of all, we import our libraries; nothing special there. Then we read a couple of environment variables. Why are we doing this? Because this workflow is generic: it can deploy anything you give it within VAMP, because the services and the gateways it's going to start editing are put into environment variables. I will make this very clear later on. So we get environment variables, in this case the gateway, service 1, and service 2. Then we have a bunch of functions: a run function, an increase function, and an update function. The run function is pretty easy; it runs the whole thing. It's the entry point for the whole workflow. The increase function does what we just saw in the Jenkins script in Groovy: it loops and adjusts the weights. You can see in the code, starting at line 19, that we look at the current weight, check whether it's still within the bands we want it to be in, and if so, and everything's okay, we update it. Then at line 25 we run update gateway. This is a lot cleaner than doing curl commands or something else from a CI/CD pipeline. And then we have the update function, which uses our own Node SDK for VAMP to make the actual update to the gateway.

So what we do is save this piece of JavaScript into VAMP. I can show you where it is, and this might also make it a bit clearer what a breed is. A breed is a static artifact. What we did is wrap this piece of script in a bunch of YAML tags. Let me just blow it up a bit. Actually, I have it right here: breed canary. It's the exact same script; we just tag it with what it is and put it into VAMP.
It just sits there; it does nothing. But then we have our workflow, and this is the actual workflow definition. You can see what it does: it has a name, and it references that breed, called canary. So it will pull in that piece of JavaScript and instruct it with these variables: the period and the window. The period is how often it goes through its loop; the window we will jump into later. Then the gateway that it needs to instrument, and the services within that gateway that are the actors now: the two services that we want to migrate from version one to version two. There are a couple of options you can give there; none of them are really important right now. Let me just scroll here. You can see how we instrument service one and service two. And now we can run this workflow. You can instantly see you could do this with anything: you just write that piece of JavaScript once, then you stick in the variables and it will run.

So, just to show you that I'm not lying, let's have a look. Where are we? Oh, you can't see it; I'm on the wrong tab. Here's our gateway definition: 100%, 0%. Okay, let's kick off this workflow. I could do this from the UI, why not? Or actually, I think I have the command right here. Yes, I do: at the bottom, create workflow. All right, it's pushed in, and you can instantly start seeing stuff happen. Here at the bottom, you can see it updating the 90% and the 10%. Actually, we still have that user-agent-is-Safari condition on it. We should have removed that, because they both work at the same time. Conditions come first; they're number one in the hierarchy. Any request is first matched against a condition, and only if it doesn't match do we look at the weights. So what's going to happen right now is that a Safari user agent will still always go here, regardless of what happens to these weights.
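Putting the pieces together, a workflow definition that wires the canary breed to a concrete gateway and two services might look roughly like this. The field names and values here are approximated from the talk, not copied from VAMP's schema, so treat this as a sketch:

```yaml
name: canary-rollout
breed: canary               # references the breed wrapping the JavaScript
schedule: daemon            # keeps looping; an event schedule would be one-off
environment_variables:
  PERIOD: 10                # seconds between iterations of the loop
  WINDOW: 30                # seconds of metrics to look back over
  GATEWAY: simple/webport   # the gateway to instrument (illustrative name)
  SERVICE_1: simple:1.0.0   # current version, weight shrinks over time
  SERVICE_2: simple:1.1.0   # new version, weight grows over time
```

Because everything specific lives in the environment variables, the same breed can drive a canary for any deployment.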
And our workflow, you can see right here in the Workflows tab, this guy is running and doing its job. At the same time, good to show you, I have a little script here that's just doing requests; every half second, I think, I fire a GET at the homepage. It should always be green, because you don't want to lose requests. We're almost done; we can see the 20%, 80%. It's almost there. But some of you might say: yeah, that's nice, but it's actually still doing exactly the same thing. It's not checking whether things are okay while I'm rolling out, while I'm doing my deployment. You're absolutely correct, but I needed this setup to show you the next iteration of our script. And the next iteration is a canary release with rollback. This is where the metrics engine comes in. The libraries we import at the top allow us to dive into the metrics and use them in our script. This script is exactly the same as the one I showed you earlier, except for this little extra bit, starting at line 23. What we're doing is counting the number of 500 errors that happen on a specific gateway. It might be a bit cryptic, and we're still working on getting the naming right, but "ft", as you can see in line 23, is front-end. This is an HAProxy-specific thing; we're going to make that a bit nicer, because we don't really want you to be worrying about HAProxy specifics. Then we have the gateway name, and then "st" is the status code. I actually had to look that up: what is st? It's the status code, just an HTTP status code, that is greater than or equal to 500. 500 means things are going bad. So we check that, and because we're using Elasticsearch, there's some pretty powerful stuff we can do there. You can see the $window variable; we instrument it with 30, which is 30 seconds. So we look at the number of 500 errors in the last 30 seconds. You can pretty much do whatever you want there.
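The rollback check at the heart of this canary workflow can be boiled down to one small decision function. In VAMP the error count comes from an Elasticsearch query over the last `$window` seconds of gateway metrics; here it's just a parameter, and the threshold and step values mirror the ones in the talk:

```javascript
// Decide the new version's next traffic weight.
// If too many 5xx errors were seen in the lookback window, roll back to 0%;
// otherwise keep shifting traffic in fixed increments, capped at 100%.
function nextWeight(currentWeight, recentServerErrors, { threshold = 3, step = 10 } = {}) {
  if (recentServerErrors > threshold) {
    return 0; // roll back: all traffic returns to the old version
  }
  return Math.min(100, currentWeight + step);
}
```

Run once per period, this gives exactly the behaviour shown in the demo: steady 10% increases while things are healthy, and an almost-real-time snap back to 100%/0% when the error budget is blown.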
You can write a very complicated script that takes into account all kinds of other things, but this drives home the point. So we count them, and then at line 25, you can see that if there are more than three, we actually roll back. Otherwise, we just proceed as normal and increase the weight in the gateway. So let's test it out. I think we should be done here. We're at simple deployment: number one is at zero, and the other one is at 100. I'm just going to quickly reset this, because we're going to need an application that actually produces errors. Luckily, I created a version of our service that throws 500 errors all the time. So let's make a clean slate here, and I'm going to delete the gateway, just to be sure our starting position is exactly the same again. It's 1.0.0; we're going to call it simple deployment again. Actually, I'm going to suspend this one. Bye-bye. Then I'm going to merge in our simple deployment 1.1.0-40. Merge it. Of course, in the real world, we don't know that it's 40. So it is deploying. It's probably already done. It's still busy. Starting position: 100% and 0%. While that is deploying, we're going to issue the commands to actually start the... yes, the breed is already there, from my practice run yesterday. So this looks all blue and nice. Great. Are you done? The deployment is done. So here we go: vamp create workflow. Boom. What are you going to notice, here at the bottom and here? Let me get the whole screen set up so you can see it. In the beginning, everything is exactly the same. Nothing's happening. 90%, 10%. Looks good. Boom: 500 internal server error. That's not good. Oops, there's another one. There's another one. So we should see something happening here. It's almost real time, as you see, and it switches back to 100%. It will keep doing this forever, because we didn't build in a stop condition. It's a workflow that's now in daemon mode.
Daemon mode means it will just keep running and running. We also have workflows that you can trigger with events; they're basically one-off: do this now, and if you get to the end, stop. For demo purposes this is a bit easier, but I would definitely recommend doing this with an event-based workflow. Also, and I thought about doing this for the demo, but too many variables, things can go wrong: what's super easy to do, now that you're in JavaScript land, is just import, I don't know, a Slack client or something else and send a message. Put something in your if statement like: hey, something went wrong here. So you have instant notifications directly from your own deployment about what's happening. What I did do, actually, and you can see this in the code here, is have it throw an event. Did I put that in? Let me see. Update gateway. Oh no, I took it out. Sorry, guys. So this will keep on running, and in the end, we will never actually deploy our new version; it will just keep bumping into this error stage. I hope that's clear.

So that is actually the end of my talk. I wanted to keep a lot of time open for questions, because there always are some. Thank you. And again, if you want to play along at home or check out the code or the slides, it's all on GitHub. You can always send me an email or ask questions right now. Thank you.

Thanks for the presentation. How do you integrate with the Kubernetes Ingress, exactly?

Ooh, that's a very interesting question. If you spin up, for example, a Kubernetes cluster right now on Azure, they integrate with their own load balancer for this, and at this moment VAMP will sit, let's say, in front of it. We are looking deeply into making native integration work there. That is probably possible using dialects, and I'm just going to show you this right now, because it will give you a little bit of information about how this might work. This part, starting at line 14: dialects, marathon, container, docker, forcePullImage is true.
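The dialect snippet being read off screen would look roughly like this in blueprint YAML. The nesting is approximated from Marathon's app JSON, where `forcePullImage` lives under `container.docker`, so treat the exact placement as a sketch:

```yaml
dialects:
  marathon:               # passed through verbatim to Marathon's app JSON
    container:
      docker:
        forcePullImage: true   # always pull the image, ignore the node's Docker cache
```

Anything Marathon (or, in future, Kubernetes) accepts in its native API can be dropped into the dialect block, which is how VAMP avoids hiding orchestrator-specific power behind its abstraction.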
What we're doing here is natively speaking to Marathon. We just go to the Marathon docs, look at what their JSON structure is, and put it in here. I put forcePullImage true in because, during the making of this demo, I tweaked the container a bit, and it's a bit of a pain having to flush your Docker cache on the node. So now I just instruct Marathon, or DCOS in this case, to always pull the container image. And we're actually speccing which parts of Kubernetes we're putting into the dialects, like this week or next week, so I expect a lot of things to happen there.

So, thanks again for the presentation. My question was regarding gateways. Yes. I heard that they're based on HAProxy. Yes, correct. So in the case of DCOS, does it use Marathon-LB, or does it spin up another instance?

It does not use Marathon-LB. Marathon-LB is completely separate from this, but we get this question so many times that three weeks ago I typed up a blog post. I'm just going to go to our own blog: how to extend Marathon-LB with canary releasing features using VAMP. Because there are lots of people on Marathon starting out with Marathon-LB, and Marathon-LB actually gives you, through a bunch of Python scripts, some kind of zero-downtime deployment. It's kind of not production-ready; they're being a bit difficult about it. But we made a write-up on how you can start using VAMP. It's actually already here. Maybe this picture might not show you a lot, but maybe a bit. What we allow you to do is deploy VAMP, and you don't have to use the blueprints, the breeds, the workflows, all that other stuff. You just use the gateway part, and you can then still use the sliders, the conditions, et cetera, next to Marathon-LB. Does that answer your question? So it's easy for our own services that are already running to use the gateway part and the metrics, et cetera.
Yes, this is an extremely common thing we find when we talk to customers and users: hey, we already put all this effort into tweaking our Marathon deployments, but we still like that cool slider stuff and all that. It's totally usable, and it's free; it's open source. You can install VAMP from the Universe in DCOS. You can be up and running with this in, well, it says a seven-minute read, so in seven minutes, and start playing around with it. The nice thing is that it runs in parallel, so you can still have, let's say, your edge load balancer or your DNS pointing at the Marathon-LB setup that you've already got, and then open up a different port or a different hostname to play around with how VAMP would handle that traffic. Great. Any more questions?

We've noticed that you apparently integrated Consul as a backend for VAMP. Are there any subtleties regarding this integration? Sorry, are there any what? Subtleties, like, do I get the same features, et cetera.

Yes, you get exactly the same features. Technically, VAMP started out using only ZooKeeper, but this was, well, I could bring back my timeline, this was when Kubernetes didn't exist, and there was Mesos and kind of Marathon version 0.2 or something like that, and DCOS didn't exist either, so it was very logical for us to use ZooKeeper. ZooKeeper, in this case, is used to update all these HAProxies: if you have five of them, the HAProxy configs are pushed into ZooKeeper, and our VAMP Gateway Agent, which is a component of VAMP, reads them and updates the HAProxies. But then Consul started to appear together with Kubernetes, et cetera, and requiring users of Kubernetes to also run ZooKeeper just for VAMP was kind of a non-starter.
So we abstracted this stuff out. As for the distributed key-value store that you use: etcd is the standard one if you deploy to Kubernetes, ZooKeeper is the standard one if you deploy to DCOS, but if you have Consul, then you can use your Consul installation instead. Just adjust the config file and you should be fine, and the features are exactly the same. So, no subtleties.

All right, thanks, Tim. Great. Have a nice MesosCon, guys.