All right, it's two o'clock, so off we go. This morning in the keynote, when I was chatting with Mark, I did a little intro, but if you weren't there: my name is Tim Pletcher, and I'm the engineering director for application services at Time Warner in the cloud group, and I'm here to talk to you about Mesos on OpenStack. We're going to run through a couple of demos, and hopefully we'll give you an idea of the process by which we got this thing ready to go out the door and got users on board, and we'll show you some things that are pretty neat.

First, a little bit of information about what we do. I'll talk about choosing Mesos and why we went that route; about framing the general PaaS service we want to provide and what that means; about the ecosystem you need to put in place when you roll this type of tooling out, so that the engineering teams have all the piece parts they need to be successful and productive on the platform as quickly as they can; about some strategic vendors we work with who have picked up some of the load from work we would otherwise have to do; about Mesos-on-OpenStack design considerations; and about automation and how we approach it — you're actually going to see some of that running in the demos.

While I'm talking, I'm going to go ahead and build a cluster, and, fingers crossed, it'll go as we want it to. So let's do the cluster build. I'm going to fire this Ansible job, and it's going to build a small cluster: three masters, four agents. That'll run in the background while we chat, and when it's done we'll circle back, take a look at it, and I'll show you how it actually gets wired into our monitoring tooling as part of the build process. The final thing I'd say is: because this is going to be a long-running job on a wireless network — who brought the offering to the demo gods? — this could go really well, or, you know, not so well. But there you have it. Let's kick that job off real quick and we'll come back to it. I'll just move this over a little bit so it's not so distracting, and so if there is a failure it doesn't immediately become obvious.

All right, so who are we? We work in the cloud group. A couple of years back, Matt Haynes — who myself and a bunch of other folks in the cloud group worked with, and for, at HP — came over to Time Warner and started this group up. The first team to come in was Jason Rawls' team. Their primary focus out of the blocks was object storage, and then it grew, of course, to include all the other IaaS piece parts. It's been an ongoing process: they're 18 to 24 months into this deal now, that IaaS platform has solidified, and it's getting better every day, with more features and the things we need to be successful. Matt asked me to come and start the team to do application services. If you look at a company that's a classic enterprise shop, they're going to have a whole range of applications, and those applications are going to span pockets of modernity and also a lot of legacy stuff. So we wanted to put tooling in place that would be there when these teams start moving towards service-oriented architectures and looking to modernize applications. And it's a big footprint.
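The cluster build I just kicked off is a single Ansible job parameterized by node counts. As a rough illustration only — the playbook and inventory names below are invented, not our actual repo layout; only the shape (one job, three masters, four agents) comes from the demo — the invocation looks something like this:

```python
# Hypothetical sketch: playbook and inventory names are invented for
# illustration; the master/agent counts match the demo cluster.

def cluster_build_cmd(masters: int = 3, agents: int = 4) -> list:
    """Build the ansible-playbook invocation for a small Mesos cluster."""
    return [
        "ansible-playbook",
        "-i", "inventory/openstack.yml",   # invented inventory path
        "mesos-cluster.yml",               # invented playbook name
        "-e", "master_count=%d agent_count=%d" % (masters, agents),
    ]

if __name__ == "__main__":
    print(" ".join(cluster_build_cmd()))
```

The point of keeping it to one parameterized entry point is that anyone on the team can fire the same build the demo uses.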
So our job is to go do that work, and there are two pieces that we provide. One is what I call a general-purpose PaaS: a multi-tenancy platform where any engineering team can come in, and if they can containerize their package, we can give them a fast path to get out and rolling on Mesos, with a lot of these constituent piece parts available to them. The other work we'll be doing is providing some of the underlying key piece parts they need to be successful. They may have an application, but they may want to start using Elasticsearch, or Kafka, or Cassandra. Jenkins is implemented in many different places, but as a practical matter, for best efficiency, that should be a shared-service model. So that's the other thing we do. The third thing is that, like the OpenStack team, we're evangelists for transition. That's one thing you have to keep in the back of your mind when you do these projects: you're going to have to bring people along. You have to put the stuff out there that they can look at, access, and use, so they can start to see a path forward for their work, on both the business and the engineering side.

We are a small team — the PaaS team proper is myself and four engineers — but we have a background that I guess you could say is cloud native. Myself and two of the team members were at HP together; two of the other team members were with me at MapMyFitness in the software infrastructure and DevOps group. So we have a pretty skilled and experienced group in dealing with large-scale architectures, performance, and all that other good jazz. I was running production workloads on AWS basically when it launched, and it's amazing what that dynamic has brought to the table — if you've ever been the one writing the checks when you went from a co-location to the AWS model, you become very, very convinced about the efficacy of that approach. We all have public, personal, and professional projects that run all over the place in the cloud, and we've spent a lot of time dealing with OpenStack. One of the benefits we have is that we went through the entire build cycle at HP for public cloud, between Jason's group, my group, and Haynes'. We learned early on how to make this thing run, and that puts us in a really good position when we want to go put Mesos on top of it, for example.

So why did we choose Mesos? You don't want to get into the whole back-and-forth about this side or the other. We looked at this last year when we were starting to stand the team up. What we found is that there appears to be, for lack of a better word, a canonical architecture developing between OpenShift, Tectonic, and Cloud Foundry — they're moving towards that lean-OS / Kubernetes / container-runtime package. But for where we were, and the size of team we had, it was still a little bit early with some of the peripheral capabilities, and Google didn't give you everything when they pushed Kubernetes out the door. So we had to make a decision around not just the future but today, and what we could be most successful with — and for us, after we took a look at it, Mesos made a lot of sense. It's been around since 2009.
It's run at scale, big scale, in a lot of places. One of the things that was really fascinating was a case study that came out of last year's MesosCon: a similar small team, five guys, went through and deployed a fairly extensive application footprint. That resonated with us, and at that point we made our choice. So off to the races we went.

Now I'm going to circle back and talk a little bit about the bigger picture. You can't just throw the cluster out there and call it done — that doesn't help anybody. You've got to give the engineering teams the tools they need; you've got to be able to support legacy monolithic applications that can be containerized and moved forward; and you have to work with the teams to onboard them. My engineers will go out and work with teams as we get this ramped up, to get them in the door and make this thing a success for them, and I think that's a big part of any project like this where you bring new technologies into a legacy architecture. We want to be evangelists. Being a change agent has all kinds of interesting aspects to it: it requires patience, and it requires a management commitment — that's a big deal. I oftentimes feel like the engineering side is the easiest part; the hardest part is the people side, especially with people who maybe haven't been exposed to some of these approaches — true CI/CD, those types of things — where you're going to rock some people's world. So we have to be ahead of that, and we want to be ahead with respect to tooling: get out in front and have the stuff ready when they come in and start asking. But you've got to start somewhere, right?
So what do we feel is the compulsory tool set you need to provide for the engineering teams internally? It starts with IaaS underpinning it; there are functional aspects of IaaS that will get used, and that we use under the covers — that's just part of the package.

You've got to have a private registry. If you're in a startup it's a little bit easier, because you don't have the security guys breathing down your neck, and you don't have some of the process requirements that exist in larger shops — and that's a real thing. Obviously you can't go pull stuff willy-nilly off the public internet, so you've got to have your private Git repos and your private registries.

CI/CD tooling: de facto, that's Jenkins. But you want to give the teams an easy way to leverage Jenkins where they don't have to worry about running out of gas with respect to capacity on the build slaves. That's actually one of the most compelling things about this approach with Jenkins and Mesos: we can provide very elastic capacity for build tooling.

Credential management: we've got HashiCorp's Vault. We use it internally on the team, and there's a service we'll make available to the other engineering teams.

Load balancing: you absolutely need that for the teams. Sure, you can go do your own HAProxy or NGINX if you want to, but there's all the other stuff around load balancing — reporting, security, understanding how it works — and with the Mesos piece in there, that gets to be a more complicated picture. Avi had already been present at TWC, in Jason's group, for VM-based load balancing, so we took a long look at it and ended up with them for our purposes as well.

Monitoring: StatsD, and there's Monasca on the OpenStack side. We have stuff that we stand up to run our tooling in the admin namespace, and I'll talk about that in a little bit. But you might have teams that want to run their own StatsD and Graphite and do their own thing with their own dashboards, so we want to provide that. And ELK, obviously, for log-file analysis — that's something we find very useful. We feel this is the base tool set to walk in the door with, that the teams can take advantage of when they want to come onto the platform.

So, strategic choices — I mentioned the three primaries that we have, so let's talk about that. With Mesos you get a core cluster-management solution, but there's other stuff you need to run inside the enterprise. A great example is AD integration. Can you build it? Yeah, absolutely. But when you've got a team of four guys, are you going to want to go write AD integration when you'd really prefer to be out with the engineering teams? No. Mesosphere brings that to the table, plus a good ACL implementation around management and access that's getting better every day, and production-ready packages.
That last one is actually a pretty big deal. You do have to go through and implement the frameworks to deploy on Marathon, and some of that can get a little involved depending on how you want to do it. But one of the great things about what Mesosphere is doing is the Universe repo. They've worked with other shops — for example, they worked extensively with Uber on the latest Kafka and Cassandra packages — and that makes our life easier. What it means for us is that we can pretty quickly deliver a very robust solution to the engineering teams in some dimension of functionality, which is great.

Avi has a great multi-tenancy story around how they interact with Marathon and Mesos. When you deploy an application into Marathon, Avi just picks it up, does its thing, distributes the service engines, and off you go. They have a very good multi-tenancy solution on the UI side, and a really neat security implementation: you can very easily visualize whitelists, blacklists, and which services are talking to each other. Again, it smooths some of those mundane tasks that our team, or your teams, would otherwise have to build or deal with.

And then CoreOS for the private registry — Quay. We've been super happy with that. Out of the blocks it dropped into our CI/CD tool chain — you're going to see that in a little bit — and we've been zipping and zooming with it.

So, design considerations — the pro tips. Obviously every OpenStack implementation has dimensions that are unique to it, and ours is no different. The big thing for us, looking at running what could potentially be really big clusters on top of OpenStack, is: how is the network going to work out, and what do we need to be thinking about there? Where are we at with router reliability and all those things? Because as soon as there's a lot of traffic back and forth among cluster nodes and things go south on the network side, the cluster is pretty much useless. So those were all considerations.

HA is really key. We need NFS with the current version of Jenkins, because it stores state in the file system, so that's an aspect of our cluster build. Multi-region image sync is one of the little things that makes your life easier; we leverage the tooling the OpenStack IaaS provides and take advantage of that.

Then there's a decision on the topic of one cluster versus many, from a production-applications perspective. It turns out that when you sit down and talk to the hardcore, long-term Mesos folks, it's a one-cluster deal for production, as opposed to multiples — and you get a feel for that once you start digging in.

Bare metal: I mentioned this in the keynote this morning, and I called Jason out — I basically nag him about once a week for bare metal. With Mesos you really want to run on bare metal; VM virtualization just adds latency, as a practical matter. So we're going to be pushing real hard to get those guys moving on Ironic as soon as we possibly can.

Implementation architecture: so you've decided to go do this thing — you want to run Mesos. Well, you've got to run the cluster. What other stuff do you need? How are you going to turn it on, get it going, maintain it, monitor it? What is the actual operational footprint you'll have to have to make this thing go? That's a big deal. Our model and approach go something like this: we view the world in terms of network cores.
At TWC we actually have multiple network cores, so our namespacing starts at the network level — right now we're in the OpenStack network core. Then we think about regions; then environments — development, staging, and production; and then the functional namespace, for which we have admin and mesos. All the customer-facing workloads run on the Mesos cluster in the mesos namespace. On the admin side is where we have our bootstrapping services — the things we use to manage credentials. We have our own admin versions of Vault, admin versions of Jenkins, and admin versions of Quay, and that allows us to run the thing and present the cluster effectively from an operational side. That's actually a pretty big part of what you have to do to get this ready to go and manage it long-term.

I hope you can see this — I apologize that the text is a little small — but this is the best way to look at it. We don't expose any services in the admin namespace externally; those are all for our purposes. Everything the customers see is in the mesos namespace, and these are separate OpenStack projects. That's a key consideration: we have some isolation there, for a variety of reasons.

Some comments on the Neutron setup, going back to the networking. We started with a specific model, and the thinking around it was: hey, we really want some isolation in the networks, with the routers, and how they talk to the outside world — how do we want to make that work?
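The hierarchy above — network core, then region, then environment, then functional namespace — can be sketched as a simple composition. This is purely illustrative: the separator, region name, and exact labels are invented; only the hierarchy and the admin/mesos split come from the talk.

```python
def service_namespace(core: str, region: str, env: str, function: str) -> str:
    """Compose the hierarchy from the talk:
    network core -> region -> environment -> functional namespace."""
    assert env in {"dev", "staging", "prod"}, "environments from the talk"
    assert function in {"admin", "mesos"}, "functional namespaces from the talk"
    return ".".join((core, region, env, function))

# e.g. a customer-facing production namespace in the OpenStack core
# ("us-west" is a made-up region name):
print(service_namespace("openstack", "us-west", "prod", "mesos"))
```

Encoding the hierarchy once, in one place, is what lets the Ansible jobs and the monitoring wiring agree on where everything lives.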
We actually had multiple networks in our mesos namespace at one point, with routers in between them, all talking back into the admin namespace. Two things came of that. One, we ran into some quirky things under the covers with Neutron that made automated builds pretty challenging — it was really hard, and it flat out was a problem. The other is that the IaaS team has reworked their approach: they've got a kind of novel deal where they spread the virtual routers out over the compute nodes. In a traditional model, you've got everything running on a couple of router nodes, and if you lose a router node you lose a whole lot of virtual routers. Spreading that out took some of the pressure off of us in our network design, and as soon as you start simplifying your network design for deployment, life gets a lot easier on the automation side. That's how it worked out, and it's been really good. So that's one to think about, and you may want to chat with Jason and crew about how they're going about the process of spreading them out.

Let's move on to operational automation. When you've got a team of four, automation is the only way you're going to survive once you actually start running things in production. It will simply overpower you, especially if you're doing multi-region, which we will. So that is a huge deal.
We've been working on it almost since the get-go, and you have to do the work up front, because there's no way you'll catch up on the back end. I think we've all been on projects where you're under a ton of pressure to get the thing out the door, and meanwhile the automation you would have really loved to have done up front has to be dealt with by coming back around later — and that's never an easy discussion. We're fortunate that we've been able to take the time up front to do it; it's been a big chunk of our work package to this point, and that's why I'm running this thing for you in the background: the full cluster build.

We ended up with Ansible. The team has, as I'm sure all of you have, used everything out there, and we've been really happy with it. We're getting more and more well-oiled in how we use it, but already we're pretty fully automated on most major tasks in running the cluster. And that's a mandate. I put it on the table when we started: once we're ready to say "all right, come on board, customers," there's no more touching production manually. Anything we need to do there, we need to do automatically. That's driven the process; it's pretty well burned in, and we're on the right track.

This last one is kind of funny. Adam and I are both bike guys — we live in Austin, so we'll occasionally ride together. When we were first getting down the road, I said, "Adam, I want to see the automated cluster build," and he's like, "Yeah, I'm going to go ride my bike. One click; it'll be ready in 15-20 minutes when the job completes, and I'll be back in a little while, on the bike." So, 15-20 minutes later, I went in there, we were done, and I was happy. I think one of the key things about automation is engineer happiness. I'm sure everybody in this room on the engineering side is thinking: yes, make my life easier, don't make it harder and crappier; give me time to automate and then I can go be productive. These are anecdotes, obviously, but the thing to keep in the back of your mind across all of this is a full commitment to automation across the board.

All right, we've got plenty of time left — it's demo time. Let's take a little look over here and see how we're doing on the cluster build. We're almost there, so let me talk through this, and hopefully by the time I'm done with this slide it'll be wrapped up. Basically, the way it works is: there's a ticketing process to create an OpenStack project, because there are some administrative and, ultimately, charge-back mappings that need to get put in, plus approvals to consume the resources. Once that's done and the quotas are updated to accommodate what we want to do, we're off to the races. Right now we have a couple of different primary Ansible jobs. One is an OpenStack provisioning job that goes through and spins up everything — kind of like a Heat deal, but we keep everything in Ansible so we're consistent across the board. Once that job is complete, away you go into the Mesos cluster build. I've already run the provisioning, so we're just running the Mesos playbook right now. Once that's complete, the Mesos cluster build will build Avi as well and install it — but that pushes me a little past my demo allotment as far as runtime goes, so we chose not to run it at the tail end this time. But yes, you can build all the way out, and you're done.

So I have to be patient here... well, we're done — fancy that. At this point the cluster is running, and you'll notice data has started to flow. The cluster is automatically wired in: the Ansible job goes ahead, installs everything, and does all the plumbing to get data flowing. You build a cluster, you're wired into reporting right away, and away you go. That data is flowing now, as you can see — we're live. While we were talking, the cluster got built and inserted into monitoring, and you're off to the races. At that point you run another package to put Cassandra or whatever on there — or, if you're running the DC/OS model with Mesosphere, you can just do it at the command line. When we do installs, for Cassandra and whatnot, those are actually running DC/OS commands under the covers when the package gets installed.

That's not the only demo I have for you. I hope everybody at one point has had the experience — the aha moment — of doing a `git push` to a Heroku master and magic things happened: your application was deployed to production. It was a pretty foundational moment, I think, in true CI/CD automation, because if you'd been around on Amazon for a while, you were pretty much building your own clustering and dealing with all these things manually, and then all of a sudden: boom, done.
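The package installs mentioned above run DC/OS CLI commands under the covers. As a sketch of what that invocation looks like — `dcos package install` is the real DC/OS CLI command for Universe packages, though the options-file flag usage here is illustrative rather than our exact workflow:

```python
from typing import Optional

def dcos_install_cmd(package: str, options_file: Optional[str] = None) -> list:
    """Build the DC/OS CLI invocation to install a Universe package
    (e.g. Cassandra) non-interactively."""
    cmd = ["dcos", "package", "install", "--yes", package]
    if options_file:
        cmd += ["--options", options_file]  # per-install JSON overrides
    return cmd

print(" ".join(dcos_install_cmd("cassandra")))
```

Wrapping the CLI like this is how the Ansible plays can install Cassandra or Kafka onto a freshly built cluster without anyone touching a shell.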
As an engineer, that was a really powerful tool. Of course, you could very easily get yourself into trouble with it, but the point is this: if you go back to what we were talking about — the whole desire to enable engineering teams and make their lives better — this is kind of a mantra, and I preach it all the time to the team, because of that level of simplicity. You may not always get to the point where you can do a one-liner like that and be out the door, because there are complex packages and application footprints you need to get out. But that should always be the driver of what you're trying to deliver to the engineering teams when you do this type of service. I often ask, when we talk about it as a team: well, this could be harder — but then you've got to ask why, right? If you tear a problem down enough, you can find a way to make things a lot easier for the folks consuming your platform. So we make this a priority.

What I want to do now is show you what we call our single-service CI/CD toolchain. I'm showing you this presentation on Go present: a simple binary that points at some content files, with some TWC-branded stuff under the covers. I have this thing running internally, and I think ultimately we'll provide it as a service, because if you want to do a presentation, you just write a text file, magic stuff happens, and you get slides — it's awesome, right? But it's also a single service. You do have to get the binary from GitHub, make your update, go through a Docker build, out through Jenkins, onto Marathon, and restart behind Avi. It may seem simple, but it's the same thing you'd do for any single-service binary doing some other job.

So let's take it from here. At this point I just need to push this change, and once that happens, our friendly Quay will be listening and will start the build — in theory. Let's hope my demo luck holds... there we go, it's working its way through. Jenkins will pick it up when it's done — there you go, you've got jobs pending. The restart happens pretty fast, so if we miss it, I apologize. At any rate: I made a small change... success. Let's hope I didn't shoot myself with that one either. We're waiting for executors... that's finished, and somewhere there's a restart happening behind Avi — and there it is. That was a full CI/CD run, for what it's worth — woo-hoo. I'm really happy both those demos actually worked, because this could have gone really badly. And the presentation is updated.

So where do we go from here? We're pretty much all done and ready to go out the door, except for some destructive testing. We'll spend the next couple of weeks banging away at it, trying to make it do all kinds of bad things, and then at the end of this month — or May, I guess we're in May now, so the end of May — we'll put people on the platform and get going. At that point we'll spend a lot of time with the customers, get out there, and do our thing. So that's pretty much it.
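The tail end of that pipeline — Jenkins pushing the rebuilt image onto Marathon — comes down to posting an app definition. Here's a hedged sketch of what a Marathon app definition for a single containerized service like the Go present binary can look like; the image name, port, sizing, and instance count are placeholders, not our production values, though the field names follow Marathon's app JSON schema:

```python
import json

# Hypothetical Marathon app definition for a single-service deploy.
# Image registry hostname, resources, and ports are invented placeholders.
app = {
    "id": "/present",
    "instances": 2,
    "cpus": 0.25,
    "mem": 128,
    "container": {
        "type": "DOCKER",
        "docker": {
            "image": "quay.example.internal/paas/present:latest",
            "network": "BRIDGE",
            "portMappings": [{"containerPort": 3999, "hostPort": 0}],
        },
    },
    # Marathon only routes traffic to instances passing their health check.
    "healthChecks": [{"protocol": "HTTP", "path": "/", "portIndex": 0}],
}

print(json.dumps(app, indent=2))
```

Because Avi watches Marathon, deploying or restarting this app is enough for the load balancer to pick it up — which is the restart you just saw in the demo.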
I have Terry Howe and Adam McManus with me from my team, so if you have any questions, I may or may not be able to answer them — fortunately, these guys will. I guess they want you to go to the microphone so it gets recorded for the replay, so, Gavin, you'll have to head over there.

Q: Are you deploying a separate Mesos cluster for each customer?

A: No. Each customer gets a Marathon instance, and then Avi on top of that is multi-tenant. It's a single Mesos cluster.

Q: You were talking earlier about running traditional monolithic apps. Can you talk about any real-world issues you've run into doing that? A lot of shops have tried it and run into real-world constraints, but obviously it seems like the holy grail, so if you have any learnings...

A: We haven't really even started to dig into that hard, Gavin. There are absolutely some classic applications where that will be a problem. If you can get it into a container, then we can run it, and we'll just have to take those case by case. There are going to be some big architectures — for example, I know there have been some efforts internally around Adobe Experience Manager, because we use that for a lot of content management. It's a big footprint, and they've had challenges getting it in there, but we haven't started working with them yet. There are other opportunities we're aware of where we know at a high level what a particular thing does and we'll be able to get it in there; it's just a matter of time. But we're still not in production yet, so our focus has been on getting the cluster and all the operational tooling in place, and then we'll start really digging in with the customers.
Thanks.

Q: Yeah, hi, thanks. You started out with a fairly emphatic claim that you should begin with IaaS. My question is: if you didn't have one already, would you still begin there, or would you start with some other bare-metal provisioning system?

A: Well, you'd have to take stock of the resources you have, because what IaaS does for you is take the networking picture and all the physical hardware provisioning and selection out of the picture. If you have those resources and they can be committed to your project, then with Mesos, yes, you can run on bare metal. But for us it was important — and your team size is really critical there. I'd say that in a large shop you should have IaaS first, because not everybody is going to land on a container platform; you're going to have VM-specific workloads that have to persist for a while. Right now we can't schedule VMs with this tooling — Mesosphere has hinted about it, but that's not really what this is for, and we don't view it that way. It's not a high priority for us because we have IaaS. If you don't have it internally, you have to decide whether to build it internally or go outside to a public provider. But if you can take four people and kick this thing out the door in three or four months with a pretty high degree of operational readiness, that says a lot about the foundation IaaS gives you. Did I answer your question?

Q: Absolutely, thank you.

Q: Just a quick question regarding operationalizing your clusters. Remember, you're going to be dealing with so many teams later on in your organization. What is your plan for the future? Are you going to give them access to the Mesos clusters?
Q (cont.): Because remember, you're going to be running one cluster with multiple tenants in there, right? How are you going to manage all of this?

A: They have access to their Marathon instance, so they can run frameworks, whatever they desire, and the same thing with Avi; those are the multi-tenant interfaces for the customer. They don't really need access to the underlying Mesos cluster. Dealing with the Mesos side of things is really just making sure you've got enough capacity in available IaaS or bare metal to throw more nodes into the cluster and go to town. All the end users will have their own interface, and they'll be able to do what they need to do, is really what it boils down to.

Q: Cool, quick question. The network infrastructure underneath your cluster: what are you running there? Are you running just a pure layer-3 ECMP network? Do you have an overlay and underlay? How are you managing that to provide the connectivity for all the services underneath?

A: I'm going to let Jason answer that question.

Jason: It's OVS into Juniper.

Q: Okay, so you're basically using a pure layer-3 network on the infrastructure side to provide the OVS connectivity between compute nodes?

A: That was an affirmative answer, yes. To repeat it: the question was what the network plumbing underneath was. It's OVS at layer 3, and I think the hardware platform is Juniper.

Q: Do you have a monitoring system for the applications you deploy, or do you have any plan for monitoring?

A: Okay, so we will provide some plumbing for consumers of our platform to monitor their applications; that is a service. We have a separate monitoring ecosystem for our own purposes in operating the clusters, and Jason also provides Monasca as monitoring-as-a-service.
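As a concrete illustration of that kind of plumbing, Monasca's measurement API takes simple JSON documents. Here is a hedged Python sketch of building one; the metric name, value, and dimensions are made-up examples, and actually sending it would require a Keystone token and the Monasca endpoint, which are not shown here.

```python
import time

def monasca_metric(name, value, dimensions=None):
    """Build one measurement in the JSON shape Monasca's
    POST /v2.0/metrics endpoint accepts: a name, a dimensions map,
    a millisecond epoch timestamp, and a numeric value.

    This only builds the request body; posting it needs auth.
    """
    return {
        "name": name,
        "dimensions": dimensions or {},
        "timestamp": int(time.time() * 1000),  # Monasca wants milliseconds
        "value": float(value),
    }

# A hypothetical application-level metric an engineering team might emit.
metric = monasca_metric("checkout.latency_ms", 42.0,
                        {"service": "storefront", "env": "staging"})
print(metric["name"], metric["value"])
```

The dimensions map is what makes this useful in a multi-tenant setup: each team tags measurements with its own service and environment, and queries filter on those tags.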
So there's a variety of tooling that the people deploying applications on the platform can take advantage of, but we're not going to do that monitoring for them. Does that answer the question?

Q: Yes. So that means the people responsible for the application have to take responsibility for the monitoring themselves?

A: Yeah, and they should, right? This is kind of one of those things where the engineering teams need to own their app and be looking at it all the time. We've had this discussion internally: in a classic enterprise shop you've got a NOC somewhere, and they've got monitoring and all that, but I always say, if the NOC ever tells me that my platform is hurting before I already know about it from my own monitoring, alerting, and PagerDuty, then I'm probably going to be pretty irritated. So no, I think the responsibility is on the engineering teams, and what that does is assure that they're doing the instrumentation inside their application footprint, they know it, and they're in tune with the health of their application. So we stop at a very specific level, and they own the rest.

Q: Okay, first of all: will you provide a monitoring system, like a console, for them, or do they have to fend for themselves?

A: No, no; there will actually be several monitoring services that they can take advantage of. We'll have some on our side that are published as a service, and then Jason provides the OpenStack Monasca tooling. They can plug into each one and do both, so they have some redundancy there. So yeah, they'll have plenty of options for monitoring.

Q: Okay, I'm looking forward to your monitoring then. Thanks.

Q: [inaudible]

A: What's that?
A: Oh yeah, one of Jason's team members is doing a Monasca presentation tomorrow. I don't know what time, but you might want to check that out.

Q: Hi. Have you done anything to integrate persistent storage, or any of the IaaS offerings around block storage, into Mesos, as Docker volumes for example?

A: We're doing just NFS at this point, and then the EBS behind the instances themselves. That is a topic that we will get to, but we haven't started into it yet. That's an interesting one.

Q: Thank you.

A: Any others? I think we're good... go ahead.

Q: You mentioned that you're doing chargeback against the OpenStack project. Are you, and how are you, doing chargeback against the PaaS?

A: So there is a set of metrics that are emitted, and currently they get us most of what we need for that mapping. There's a little bit of subtlety to that, because we don't know, for example, the OpenStack tenant ID that all the chargeback actually flows through and is collected against, so we're working on how we're going to tackle that. There are also more chargeback metrics coming in an add-on package from Mesosphere, so the tricky part is going to be how we actually map into that, and we will get there.

Thanks, everybody. Have a good one.