I think I'm online. Can everybody hear me in the back? Okay, a little bit too loud. Okay. Good afternoon everyone. How's everyone doing? Great. It's Thursday afternoon on the last day, so I'm sure there are a lot of heavy heads and a lot of sore heads, and I'll try to keep this interesting for you. So today we're going to be playing croc hunter for the next 40 minutes. Is that what you all wanted to do? Yeah. Good, thought so. Now, today we're going to be decomposing the Lithium monolith with Kubernetes on OpenStack. Coming to the summit, I had a different deck put together, and hearing questions out in the audience over the last two days, I've actually changed that deck to what I think everybody wants to hear right now. So hopefully I'm on the mark, but feel free to interject. I want to keep this really interactive. If you have questions, right in the middle of things, put your hand up, give me the question, and I'll repeat it back for everyone. I want you to be engaged, and I want to answer your questions and concerns while we're going through the content.

So my name is Lachlan Evenson. I'm the cloud platform team lead at Lithium Technologies. You may have seen the keynote I did the other day; I'm going to drill into a little bit of our journey to getting to that keynote. On the agenda: I not only want to share our journey to container-based microservices on OpenStack, I really want to short-circuit your journey and help you get up and running fast. How many people are running containers on OpenStack at the moment? Two. How many are interested? Everybody here is absolutely interested. Fantastic, you're in the right place.

So, containers. Sorry, is this a little better? Can you still hear me okay? Yeah, I think that might be a bit better. In the market right now there's a lot of hype about containers. What I'm trying to do is cut through some of that hype and show you what we've done with containers on OpenStack. I think it's important for you to get on board with containers and at least have a look at what they're doing and the problems they're solving, so that as the container ecosystem establishes itself and matures, you already have a leg up and you know what's coming down the line. If I can share that with you, that's a great result. I want you to be able to walk out of here and apply what I'm saying today to your particular situation, and I'm happy to keep the dialogue going after this session, so if you want to reach out to me, you know where to find me on Twitter.

There's plenty of talk about: do you need OpenStack? What does OpenStack give you when you have containers? One of the things OpenStack gave us was a platform to innovate and iterate really quickly. Having all the resources, compute, network and storage, at our fingertips, we could overlay Kubernetes or other container orchestration systems really quickly and start getting our feet wet, which I think is a great result and a great testament to these cloud platforms: you can take them and iterate with new technologies really quickly. Okay, before I get into the details, just a little bit about what we do at Lithium Technologies.
So at Lithium Technologies, we help brands connect, engage and understand their customers, and we do that via online communities and social monitoring tools. You saw at the keynote one of the tools I showed, a pulse of the whole world, the global community activity as we see it. We do this for brands like AT&T, Skype and Virgin.

Containers, VMs and OpenStack. One of the things I want to go into is that there's a lot of talk about containers, OpenStack, VMs, when to use what, whether it makes sense to throw away VMs, all that kind of stuff. I'm going to share with you our experience and where we're at after running containers in production on OpenStack for about three months now. I want to debunk the idea that the issue is containers versus VMs. They have different properties and they do different things, so you really want to make sure you're meeting the business need with the technology you're providing. Honestly, as I say here, our engineers couldn't care less whether they're deploying to containers or to VMs. What they want is their app deployed to a cloud environment, scalable, reliable, with simple patterns to get it out there. That's really what we're trying to achieve. Nobody came to me and said, hey, I really want to use containers. They came and said, is there any way we can do better with how we deploy our apps? That was really the question put to us, so approach it like that: what's the problem you're trying to solve?

What I was trying to articulate is the right tool for the job. Some applications are built on the assumption that they're running on a VM, so it's very hard to pick them up and put them in a container. If you're starting from the ground up with a brand new app, it's very easy to shape the way the application works given that it's running in containers. So using the right tool for the job, I think, is very important. This is not a race to get everything containerized, and it's not a race to throw out VMs. We still see VMs at Lithium as an important part of our cloud, and we still have plenty of apps that need VMs and were built for VMs. So when to use which one is really the question, and there's no single answer; it comes back to what problem you're trying to solve. For example, we were going down a microservices architecture. We have a big monolith and we're trying to break new features out of that monolith, and we saw containers as something that could enable and speed that up. That's the use case we picked containers to solve.

We'd been running OpenStack in production for about two years at Lithium Technologies, and about six months ago, around Vancouver, we really took time to look at how the cloud was being consumed. Our customers are internal developers; they're writing features, we have a SaaS platform, and they're trying to get those features onto the cloud platform. The promise we'd sold them was that with the cloud platform they would have elastic infrastructure they could deploy to, it was going to make their lives easier, and they wouldn't have to worry about infrastructure. But when we actually took a pulse of how that was working for them, they said it was actually still easier to deploy to bare metal than to VMs.
So, and I have some stats at the end. We took a look at that, and that statement bothered us. We didn't feel we'd actually delivered, for the microservices teams, a platform they could really embrace, where they could stop worrying about the infrastructure it runs on and start worrying about writing the feature or the application. What we saw with containers was very simple packaging and deployment and a great development experience. From the laptop up into production, you could package and deploy the same artifact. That was a great improvement to the developer experience, which had been somewhat broken with VMs: we couldn't really give them a nice OpenStack on their laptop to deploy to using Heat and then promote that up into production using the same Heat template. With containers, we could give them that experience, so we thought that would be good.

The other real challenge for us was that in the past we were traditionally a bunch of operators and sysadmins, so we were using operational tools to solve developer problems. Developers were finding it very complex to deploy their apps using the tools we had given them, which were traditionally operational tools. So with containers, we wanted the effort to be developer-led. We wanted those folks to come and say containers are something that works for us and is making our lives easier, and then it was on us to build and provide an infrastructure that would support running those containers.

Another question that's on everybody's lips is: should you split the monolith? Many people have gotten up and said, absolutely, just break it apart. But what did the monolith actually provide? When we looked at Lithium at why people wanted to check code into the monolith, it was really because there was tooling, monitoring, a well-established pipeline that had been there for 10 or 15 years in our company. They felt safe deploying into the monolith. When you asked them how coding in the monolith was, it was horribly complicated, but they would still prefer to go that route because they were given the deployment tools the company had been working on for so long. That was something we were trying to solve by providing an end-to-end container pipeline. So what we actually did was give the developers containers and say: for all your new services, let's try putting them in containers and deploying them on OpenStack. We basically drew a line and said all new services from this point forward go in containers unless you have a case otherwise. What that really did was let the developers try a new pipeline, which was the pipeline I demonstrated, from Git commit all the way out into production.

Yes, the question was whether microservices were solving a people problem. I do agree with that statement to some extent. If you look at how our developers felt checking into the monolith: if you were bringing new developers on, there was a lot of history in that monolith. The company has been around for 15 years, so coming on board and understanding that code base took a long time. What microservices gave us was a way to deliver features without carrying the backlog of that monolith.
So bringing new developers in and saying, here's a thousand lines of code, you want to knock it out, it does one thing really well, was kind of liberating for those folks. And even the developers who had been around for 10 or 15 years were able to move a lot faster, getting code out a lot quicker without dragging all the dependencies along. It wasn't even just the code base, it was the build process: when you've got a massive code base, the build, the testing, integration and unit tests, even that flow took a long time to get code into the environment. We actually gave containers to the developers who had been around the longest first, since they understood the monolith, because we wanted to see whether this really could revolutionize the way they were working.

Getting off the ground. I think this is something everybody trips up on; in OpenStack and in general, there's a lot out there. How do you get started building container orchestration, and what does it look like on OpenStack? Just some guidelines from us. Be incremental. Don't try to deliver everything; try to do one thing well and deliver on that. Our first deliverable was to containerize one app and run it on a laptop. That was deliverable number one. Then it was containerize the app and run it on OpenStack on a single node. And so on. We had these really short, incremental milestones. It wasn't let's deliver everything at the same time, and this is what a lot of our team struggled with, because they looked at containers and saw secrets management, config management, monitoring, tooling, all these things that had to be rethought, and they got stuck in that headspace: how do I even do all of that again? So we were very pragmatic, broke it down, and set very small deliverables, and I think that in turn made us successful in getting where we are now. As I said, do not boil the ocean. When you look at the container ecosystem and the tools that are out there now, your head's going to explode. But what is the problem you're trying to solve? For us, it was getting the time from check-in to production down. That's what we were trying to revolutionize and make a simpler process, so that we as the cloud platform engineering team were out of the way.

You can't containerize everything. This is an interesting statement. I think it's a myth, but what are people really saying with "you can't containerize everything"? You can containerize everything, but does it make sense? Containers are not one shoe that fits every foot, so again, really think about what you're trying to do here. Second point: it's not just for stateless web front ends. I hear a lot that you can only do web front ends and stateless apps in containers. One of the challenges we had internally was that this was the developers' perception. So we said: give us your most complicated app, your most complicated microservice, let us containerize it, and let's see what you think. And that app was actually ZooKeeper, Kafka, producers and consumers, scaled out. It was a massive app, and I thought, gee, I don't know if we're going to be able to do this. But at the end of a week, we had a reusable container pattern in Kubernetes for ZooKeeper that every new microservice could use. We had Kafka.
Every new microservice could use Kafka; they didn't have to build their own. Whereas traditionally with VMs, people were building the same thing over and over again. So we actually created a lot of patterns and short-circuited a lot of learning for new teams trying to bring services on board. What we have now is complicated apps that live on Cassandra and ZooKeeper, with quorum, and we can scale them up with a command, like the scale-up command I showed the other day. We can increase the size of a Cassandra or Kafka cluster at the click of a button, where that had traditionally taken a long time, and you can even do it on an event: based on this type of load, scale it up. I'd love to talk to you about what we did with ZooKeeper if you're interested; these applications that are more complex and clustered actually do lend themselves to running in containers, because you can scale them up and keep quorum.

So why Kubernetes? In production today, we're running Kubernetes on OpenStack. What Kubernetes gave us was little engineering effort and no additional capex spend. I took a look at Kubernetes and had a cluster up and running on OpenStack literally within 30 minutes, and I could start deploying containers to it. That was a win for Kubernetes, and really it was a means to an end; I just needed container orchestration. What we really liked about Kubernetes was that it works with Docker primitives, so you're working with a common framework and a common tool set underneath. Kubernetes overlays on top of Docker, so all the artifacts you're producing could be reused were you to use another container orchestration system.

One thing to highlight about what OpenStack provided in this model: if you look at containers as they are today, they still have a lot of gaps. Persistent storage, networking, all these things are incredibly complex. Has anybody worked with networking and persistent storage on containers with distributed apps? It's complex, right? What OpenStack gave us was a way to fill in the gaps that Docker didn't meet yet. We were able to use Cinder block storage and attach that into containers. We were able to use Neutron networks and overlay these things on top, so that we could build a container orchestration system very quickly. I think this is an ode to what Jonathan Bryce said in the keynote, one platform: we were able to iterate quickly, provide container orchestration, and innovate.

Why Kubernetes, continued. We operate in AWS, we operate on OpenStack, we operate on bare metal. We have all these environments and we will continue to operate like this. We have use cases that work best in the public cloud and use cases that work best elsewhere; it's the right shoe for the job again. What Kubernetes gave us was, finally, a platform where we could deliver to bare metal, to public cloud and to private cloud using the same orchestration system. The demo I showed was the realization of that dream: I pushed one container to two places, attached a load balancer, and was serving out of AWS and OpenStack at the same time. What this allowed us to do was use AWS effectively as an availability zone. We had apps deployed to both places, and if AWS had an issue, we'd be using OpenStack; if OpenStack had an issue, we'd use AWS. And we were able to realize other possibilities like upgrades: we could take an OpenStack cluster offline, upgrade it, and as long as the data stores don't live in it, forget about all the compute, bring it back up, and everything gets rescheduled onto it. So that was something we could use as a methodology for red-green upgrades.
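To make that Cinder point a bit more concrete, here is a rough sketch, not our exact manifests, of attaching an existing Cinder volume into a pod via the Kubernetes Cinder volume plugin. The image name and volume UUID are placeholders, and it assumes the nodes are configured with the OpenStack cloud provider so the attach and detach calls can actually be made.

```yaml
# Hypothetical example only: a pod that mounts a pre-provisioned Cinder
# volume via the Kubernetes Cinder volume plugin. The image name and the
# volume UUID are placeholders, and the nodes are assumed to be configured
# with the OpenStack cloud provider so attach/detach calls can be made.
apiVersion: v1
kind: Pod
metadata:
  name: data-service
spec:
  containers:
  - name: app
    image: registry.example.com/data-service:1.0   # placeholder image
    volumeMounts:
    - name: data
      mountPath: /var/lib/data                     # app writes its state here
  volumes:
  - name: data
    cinder:
      volumeID: "3f1c2a7e-1111-2222-3333-444455556666"   # existing Cinder volume UUID
      fsType: ext4
```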
How did we deploy? This is something else that's really interesting: we're cutting from source at the moment. We're not using any tooling; we're building from the open source Kubernetes repo on GitHub. What that allowed us to do was really get in the trenches early and understand what this thing is and whether we could contribute to it. And as we see the market evolve, we see the bits that are missing and the people filling those gaps, so it's given us an understanding of how Kubernetes runs, which I think was fairly valuable in the early days. One other thing I want to note is that for Kubernetes on AWS we actually used a tool called CloudCoreo, which lets you define your AWS resources as code. We did that for Kubernetes: I defined Kubernetes in a GitHub repo, you point the tool at the repo and it creates Kubernetes. If you take a look at that website, there's a one-click-deploy Kubernetes cluster for AWS, so if you want to play with it, that tool can get you up and running really quickly.

Question over here. Yes, absolutely. So the question was, can I elaborate on how we solved the networking problem? On OpenStack, we run OpenContrail. In the first iteration, we actually did Kubernetes using static routes. With layer three and routing you don't need an overlay; it's when you want granular control that you need an overlay. So for the first iteration we stuck to primitives, and I did the implementations I've spoken about in previous sessions with static routing. That got us up and running and gave us a feel for how it worked. V2, keeping it incremental, is reusing our SDN to provide network connectivity inside Kubernetes. It just so happens that OpenContrail, which we've been working with, has an integration with Kubernetes. At the moment we're testing with a double overlay, but our dream is a single overlay servicing VMs using Neutron and servicing Kubernetes using the Kubernetes plug-in for OpenContrail: one control plane to manage both VM and container workloads. What that has also given us in OpenContrail is the same security; all the things we chose Contrail for were suddenly available in the container world. You actually have secure multi-tenancy, and you're using network namespaces down to the pod interface. That was a great win. One other thing we're working on right now, which I hope to demo soon, is that we've put OpenContrail on AWS. So we have OpenContrail federated across AWS and OpenStack, and I can move layer three workloads from OpenStack to AWS. OpenContrail has given us a very powerful platform to reuse, servicing not only VM workloads but container workloads. I can talk to you a little more about that afterwards. Does that answer your question? Yeah. Thank you.

The implications of running containers. As I said before, containers are completely different.
What we saw initially, when we asked the developers to use containers, was that they wanted to stuff VMs in containers. They wanted to SSH to a container; they wanted to put SSHD in it. And that was no fault of their own; they were trying to understand what containers were about and how to utilize them. So one of the things I said from the get-go is that containers demand you rethink how you write your application and how you look at every aspect: how you monitor, how you log, secrets, config management. Don't create container anti-patterns. It was mind-boggling to some people when I said you don't need to SSH to a container, because what I actually saw out in the environment was people putting in SSHD and writing entry points that were basically init systems to bootstrap multiple processes inside a container. And I said, why don't you just split them out, have one process, and I'll show you how to do logging. With containers, we were actually able to simplify logging and monitoring and provide it as a service: if you mount this mount point, I will make sure your logging makes it into the logging infrastructure; if you expose this mount point, I will make sure the monitoring hits your container. Whereas traditionally, with VM infrastructure, they'd have had to deploy that themselves.

So I think the main thing is to have a look at the patterns with containers. If you're building your app greenfield, you have the utmost flexibility to really make containers shine and utilize them in the best fashion. We're still striving for that; I see some anti-patterns in there now, but we need to keep trying to build from the ground up. Containers are not VMs, they're different, so they demand that you rethink your application. If you're in that space of building a new application, which is why I said be incremental, you can build from the ground up and really utilize containers to their full potential. As for anti-patterns, I have my personal feelings about them; it might make for a great blog post. If you look at the Docker blog and the Kubernetes blog, they go into sidecars, ambassadors, adapter containers: methodologies where you take two applications, pair them, and put them inside a pod so they talk to one another, rather than trying to stuff everything into one container. The other great thing is that for logging, monitoring, all that kind of infrastructure, we containerized a lot of the tools we were already using, and that helped us get to a v1 release a lot quicker. We took our monitoring tools and our logging tools and implemented containerized versions of them. Is that ideal? The jury's still out, but for v1 we didn't have to redo a lot of our tooling. We just put logging and monitoring in a container, called it done, it did what we needed, and we could move on and iterate later.
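As a rough sketch of that "mount this mount point and we handle the rest" idea, here is what a pod with a log-shipping sidecar can look like. Both image names are placeholders, not Lithium's actual tooling; the pattern is simply a shared emptyDir volume between the app and the shipper.

```yaml
# Hypothetical example of the "mount this mount point and we handle logging"
# contract: the app writes into a shared emptyDir volume and a log-shipper
# sidecar forwards it to the central logging infrastructure.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-logging
spec:
  containers:
  - name: app
    image: registry.example.com/my-service:1.0     # placeholder app image
    volumeMounts:
    - name: logs
      mountPath: /var/log/app                      # the agreed-upon mount point
  - name: log-shipper
    image: registry.example.com/log-shipper:1.0    # placeholder shipping agent
    volumeMounts:
    - name: logs
      mountPath: /logs
      readOnly: true                               # sidecar only reads the logs
  volumes:
  - name: logs
    emptyDir: {}                                   # shared between the two containers
```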
Questions? Great. Yes, the data store. Great question. So the question was: if you're using Cinder to provision block storage and mounting that into your pods, and a container goes down and comes back up on a different machine, how do you make sure it has access to that data? That's a very complex question with a very complex solution. What we've actually done right now, to iterate, because that was one of the first big problems, is pin application workloads to that mount point and do some clever namespacing. If you go into the world of NFS or file systems that can be shared cross-platform, you could actually achieve it. Now, we didn't want to get ahead of where the marketplace was for containers and Kubernetes, but I would say that within a year you'll be making the Cinder call from the container and attaching the volume directly into the container. Container goes down, comes back up, searches through Cinder: do I see what I'm looking for? If so, attach; if not, create. That's coming. So we said let's not get ahead of ourselves and try to reinvent the wheel, because everybody has that problem. And I'm not saying we're using NFS, but some kind of clustered file system, depending on your properties. I think if you go down to reattaching the volume to the container rather than the underlying host, well, it has to go through the underlying host anyway, but if it's presented directly as a mount point to the container, then if the container moves, all it needs access to is Cinder. Node selectors and pinning, absolutely, yeah.

Did you have another question? Yes. So the question was, do we use rolling updates for blue-green upgrades in Kubernetes? The answer is yes. One of the other reasons we chose Kubernetes is that the out-of-the-box scheduler gave us every use case we had, and Lithium is not in the business of writing schedulers. If it gives us what we need out of the box, we'd prefer to stay with it. But yeah, I know of use cases where you might want to do more interesting things with rolling updates.

One more question? Okay, so there are two questions. One is: when we said let's containerize one application, what was the developer feedback? Let me address that one. I put up an internal Docker registry. I put up an internal Git repo that said this is how you use the registry, with one example of a Docker container I had built. I went on paternity leave for two weeks. I came back, the repo was full, and there was a Post-it note on my desk saying, when can we get this to prod? I didn't do anything; the developers loved it. With very little overhead they could pick it up and run with it, because if you've ever read a Dockerfile, it looks like a shell script. All the metadata you used to have in config management tools is boiled down to: from a Java 8 base image, run the JAR, expose port 80. That's their Dockerfile.

The second question: when you get down to kernel modules, networking, NFV and things like that, we do that only because we run OpenContrail in containers and OpenContrail has a kernel module, and I have some interesting stories about that. But running in kernel space, kernel modules, containerization of apps that need root and direct kernel access, I think for now that's a use case for VMs. Will it come to pass in containers? I don't know. But right now, don't try to make one shoe fit all.
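To make that Dockerfile point concrete, here is a hypothetical sketch of the kind of Dockerfile described above; the base image tag, artifact path and port are placeholders, not Lithium's actual file.

```dockerfile
# Hypothetical sketch of the kind of Dockerfile described above: a Java base
# image, the build artifact copied in, a port exposed, and the JAR as the
# single process in the container.
FROM java:8
COPY target/my-service.jar /opt/my-service.jar
EXPOSE 80
CMD ["java", "-jar", "/opt/my-service.jar"]
```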
Any other questions? Yes. So you need the IP of every Redis node. Query the endpoints API, right? Querying the endpoints API is the answer. Elasticsearch discovery in Kubernetes, if anybody is interested, is built the same way: Elasticsearch needs to know about all the other nodes in the cluster before it comes up, so you query the endpoints API, and the endpoints API gives you all the IPs of the pods attached to the service. That's the endpoints API in Kubernetes specifically. Let me just get through the rest of the content, because there are lots of good questions here, which is a good thing.

I just want to finish up, and I've actually only got a slide or two left. The results. We've been running in production for three months. We asked the developers how long it took to get a microservice out. They said that on VMs it took one month to write the app and three months to deploy it to the cloud in a stable manner, templating across AWS and OpenStack. It was incredibly complex. The main point here is that now they're actually spending time on features, not infrastructure. The last app we rolled out before I left for Tokyo was out in production in 15 minutes, right off the build loop. So we've gone from three weeks to 15 minutes, and that was a great result for the developers writing this code. What we've found is there's a single automatable pattern, including CI/CD. We've been able to abstract this pattern, which was really our goal, so that anybody writing any service has a very loose-fitting front end to put their service into, and the pipeline takes care of pushing it out to the environment. So we have a stable container deployment pipeline, which is a great result. As the infrastructure team, we have a lot of tooling that lives in the cloud as well, and traditionally we were the people who knew how to write all the config management scripts; we had the secret sauce for getting apps out into OpenStack and AWS that the developers didn't necessarily have. Now, with containers, we follow the developer-led pipeline they created to deploy our own containerized applications that support the cloud, things like secrets, config management, key-value stores. We're using the same tools that they use. That's something that in the past had been split.

The last point is interesting. If you've ever written a CloudFormation template or a Heat template, they're incredibly complex; they run to thousands of lines. AWS actually released a tool yesterday, a template editor for CloudFormation, because editing these templates is incredibly complex, and when you have a repo full of templates deploying VMs, you have incredibly complex problems. When you look at how people are consuming the cloud, it's an acknowledgment that orchestration tools like Heat and CloudFormation make for incredibly complex deployment templates.

Okay, the results. When we look at the code that's going out there, people are checking in more often and we've got higher code coverage. They're not worried about the infrastructure; they're now worried about writing the code. They're writing unit tests, and they have a common CI/CD deployment pipeline. So we've got higher code coverage on those new services and smaller PRs. When we were deploying to VMs, they were scared to deploy for fear of breaking their application. With containers, they deploy. In one week of the latest app we deployed, there were 100 commits and 100 deployments of that app to our development cluster. People could test and get it out; unit tests and integration tests were all running, so these folks could iterate really quickly. Complex deployment options have been simplified.
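For those interested in the discovery answer from a moment ago, here is a minimal sketch of querying the endpoints API. The service name "redis" is a placeholder, and the in-cluster call assumes the standard service account token mount and cluster DNS.

```bash
# Sketch of the discovery pattern: ask the Kubernetes endpoints API for all
# pod IPs behind a service ("redis" is a placeholder service name).
kubectl get endpoints redis -o yaml

# Or, from inside a pod, hit the API server directly using the standard
# service account credentials (assumes the default token mount and DNS).
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl -s --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
     -H "Authorization: Bearer ${TOKEN}" \
     https://kubernetes.default.svc/api/v1/namespaces/default/endpoints/redis
```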
I had a quick slide on this that I pulled out, comparing Elasticsearch in Heat and Elasticsearch in Kubernetes. It went from a three-node cluster defined in something like 700 lines to 40 lines in Kubernetes. So the metadata around deploying Elasticsearch in containers was vastly smaller than doing the same thing in CloudFormation or Heat. Canary releases, rolling upgrades and rollbacks: I don't know if you've ever tried to do an in-place upgrade the old way, rolling instances out through ASGs and reattaching them; it takes forever. These folks are doing rolling upgrades off the back of a Git commit now, and it's just way simpler. One minute, exactly what I showed in the keynote.

All right, that's all I had for today. Feel free to continue the discussion if you want to ping me or talk to me; I'll be around all day, happy to field more questions. And I want to be in this together with you and hear about your journeys too. Did we have any other questions? Okay, I'll take one from this gentleman here. The applications versus the data. Exactly, great question. I spoke a lot about apps and didn't speak about data. That was one thing we didn't want to address early, because again, what is your business running on? Make decisions that are smart and don't put data layers at risk. That was us: we run a SaaS platform, it has to be high uptime, SLA-based. We moved front-end apps and microservices to get a feel for it, but we didn't touch things like data stores, because we had them on VMs and on bare metal already, so it didn't make sense for us to bite into that right now. And again, I think that will come to pass as the Docker environment matures into a solid foundation for data stores and things like that.

Yes, this gentleman. Let me repeat the first question, then answer it: is there a middle ground between monolith and microservice? Potentially. It's use-case driven; I can't give you a solid answer, so whatever your use case is, you decide what fits. The second one is about CloudFormation. What CloudFormation does is give you access to multiple APIs in one place, aggregate your requests for resources into some kind of order, and give you a yes or no at the end. With Kubernetes you have a single pane of glass: you're asking a single pane of glass for access to everything. I'm saying give me a load balancer, give me block storage, give me blah, blah, blah. Kubernetes handles all of that and it's very tightly packaged. If you look at the way a Kubernetes spec file works, it's very succinct, rather than I need to go and ask Nova for this, I need to ask Cinder for this, I need to ask Neutron for this, and then tie it all back together. You can do that very simply in a Kubernetes spec file. I've got a few more questions; I'll come to you. Yes, the lady behind you, sorry. So what we had was an OpenStack environment running production workloads, and what we wanted to do was provide container workloads on OpenStack, and we did that in a month. We'd been looking at it for six months, but the actual work ran from the start of August to the end of August, and we've been in prod since September 1st. So the message is that we were able to use OpenStack to deliver containers.
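As an illustration of that single-pane-of-glass point, here is a rough sketch of asking Kubernetes for a load balancer and a replicated app in one spec file. The names and image are placeholders, and the LoadBalancer service assumes a cloud provider that can actually provision one.

```yaml
# Illustrative only: one spec file asking Kubernetes for a load balancer and
# three replicas of an app, instead of wiring Nova, Cinder and Neutron
# requests together by hand.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer           # "give me a load balancer"
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 8080
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: web
spec:
  replicas: 3                  # "give me three copies and keep them running"
  selector:
    app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: registry.example.com/web:1.0   # placeholder image
        ports:
        - containerPort: 8080
```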
The gentleman in front? Yes. So the question was about failure domains with this complex stack, all the way up: what does etcd run on, and how do we deal with clustering etcd? Is that correct? So what we actually did is, if you take all the VMs, we put them in anti-affinity groups and put the Kubernetes pods on top of them, and we did much the same thing for etcd. We built some VMs that just run etcd and put a load balancer in front of them, and if you take a look at that CloudCoreo repo, you'll see exactly what we did internally. It basically puts an ELB, a load balancer, in front of an etcd cluster in an ASG, so you can spin it up and spin it down, but it's highly available. So we did that outside the framework of Kubernetes for now.

Yes, the CI process. So the CI process I showed the other day is: Git commit, and it runs whatever tests you define. One of the things we wanted was to have everything in the repository, so we put all the metadata for our tests in a format like Travis CI's, so you check out the repo and the CI tool knows how to build and deploy the app. If you look at the individual commands involved, it would be: build your app and produce your app artifact, whether it's Java or Go; then you have a Dockerfile, so run a Docker build with a tag, producing the image; upload that image to the registry; and then issue a command to Kubernetes to do a rolling update of the replication controller with the new image ID you've just produced. That's the end-to-end deployment. Then if you want to roll back, we're using Docker tags, tagging with the build number, and we can just click roll back and it'll roll back to the last tag; it's just another rolling update.

Absolutely, so we do this: you define your Git commit policy, on commit to this branch do this action, and then on your deployment we have several Kubernetes clusters, dev, QA, stage, prod, and you point it at the right environment. We actually have different clusters per environment, and then an app consumes a namespace, but there's a dev cluster with its own endpoints, a QA cluster with its own endpoints, and stage and prod clusters. The deployment tool says this is a dev check-in, push it to dev. So that answers your question.
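Here is a rough sketch of that end-to-end flow as shell commands, with hypothetical names for the registry, app and replication controller, and the build number assumed to come from the CI tool. The rolling-update command was the replication controller mechanism of that era; with Deployments you would achieve the same thing with a manifest change or kubectl set image.

```bash
#!/usr/bin/env bash
# Rough sketch of the commit-to-deploy flow described above. The registry
# host, app name and replication controller name are hypothetical, and
# BUILD_NUMBER is assumed to be injected by the CI tool.
set -euo pipefail

APP=my-service
REGISTRY=registry.example.com
IMAGE="${REGISTRY}/${APP}:${BUILD_NUMBER}"

# 1. Build the application artifact (Java here; could just as well be Go).
mvn -q package

# 2. Build the container image, tagged with the CI build number.
docker build -t "${IMAGE}" .

# 3. Push the image to the internal registry.
docker push "${IMAGE}"

# 4. Roll the new image out via the replication controller.
kubectl rolling-update "${APP}" --image="${IMAGE}"

# Rollback is just another rolling update back to the previous tag, e.g.:
#   kubectl rolling-update my-service --image=registry.example.com/my-service:41
```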
So the next question is: do we do anything with the YAML files? The YAML files, I believe, are still pretty raw for developers to use, so we want to write a tool around a common library where they can just insert their image ID. I think it'd be neat to have a tool that just rendered the YAML: you say I need this much compute, this much storage, this and this, and it builds it. I think that'd be pretty sexy. Or, if you need to, just do variable substitution with something like Jinja: have a template and substitute into it.

A great question: how do we manage config management? Again, we're incremental, so we stuffed all the config for all the environments into the same container, because the idea is that you use the same container all the way from dev and QA through to prod. We stuffed it all in and we just feed a switch in as an environment variable, which tells the app which config file to use. That's v1, so don't get hung up on it, because you want to abstract that out into probably a clustered key-value store, version it, and drive it off Git, so the app says, hello, I'm Lachy and I'm trying to run in dev, and it says, here's your config.

Okay, so the question is: do we use the Kubernetes secrets mechanism? The answer is yes, and we also use HashiCorp Vault to store secrets; it actually solves a lot of problems for us, if you want to know a little more about what we're doing. Vault runs on Consul, so we have that as a back end, and it's all in containers on Kubernetes. That's what I mean about eating our own dog food: we're deploying Vault on Consul, clustered, on Kubernetes.

Let me just check how much more time I have. I think we're up, so we can continue the discussion afterwards. Thank you, I hope you found it beneficial, and thanks for joining me.
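To make those last two answers a little more concrete, here is a hypothetical sketch of a Jinja-style template for a replication controller spec, with the environment switch passed in as an environment variable. The registry, names and template variables are all placeholders, not Lithium's actual tooling.

```yaml
# Hypothetical sketch: a Jinja-style template for the replication controller
# (env, replicas and build_number are substituted by the deploy tooling),
# plus an environment-variable switch that tells the container which
# baked-in config file to load.
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-service-{{ env }}
spec:
  replicas: {{ replicas }}
  selector:
    app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
      - name: my-service
        image: registry.example.com/my-service:{{ build_number }}   # placeholder registry
        env:
        - name: APP_ENV
          value: "{{ env }}"   # dev, qa, stage or prod; selects the config file inside the image
```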