Can everybody hear me? We're good? I'll get started here with a little story while people are coming in. Who here flew in to Vancouver — who is from out of town? Okay, so am I. I flew in yesterday, and I'm really grateful that I can be here today, because when I was going through customs yesterday, at immigration, the guy actually stopped me. He's like, "Hey, why are you here?" And I'm like, "I'm coming to this conference, the OpenStack Summit." He's like, "Okay, that's interesting. So what are you going to do there?" I'm like, "Well, I'm a presenter." He's like, "What are you presenting about?" And I'm thinking, hmm, he's starting to ask some interesting questions here — maybe he's just interested. So I'm like, "I'm presenting on some cloud infrastructure." He's like, "Okay, can you go into a little more detail about that?" And I'm wondering if I'm going to make it to this conference at all — something's not good here; maybe I'll have to go into some back room and give him the presentation. So I'm really, really happy that I can be here with you.

Thank you for coming and taking the time to see our session. We're really excited to tell you what we've been doing. We wanted to start by saying we're not here to tell you how to do things; we're here to learn how everybody else is doing them. We're telling a customer story — the road we took to get where we're at today, the experience we've had — but we're definitely still learning, as I think all of us are. We're here to collaborate, and this is a great venue where we can all collaborate, learn more, and make this product better for all of us. With that, let me introduce myself.
My name is Frans van Rooyen. I'm a cloud architect for Adobe Digital Marketing; I sit in the Tech Ops organization, in a team called the Compute Platform team. With me I have Tim Galter; he's a cloud engineer on the same team.

I wanted to start with a little context, because when we say we work for Adobe, most people think of one thing: Photoshop, or Acrobat Reader, or something like that. And yes, those are great products, and that's what we build. But we also have another side to the business, another business unit, called Digital Marketing. "I've never heard of a Digital Marketing business unit — that doesn't work for Adobe" is pretty much what we get all the time. So I'm going to give you a quick introduction to it, which will also give you context for the rest of our talk.

The Digital Marketing business unit provides digital marketing services to numerous clients worldwide. We do so with multiple products, and these products are SaaS-based — software as a service. They sit in data centers worldwide that we either colocate in or own ourselves. We in Technical Operations are the operational arm for Digital Marketing: we run the operational side of all of these servers. There are some numbers up there — petabytes of data, tens of thousands of servers, 13,000 VMs or more, 800,000 transactions, trillions of transactions done in 2014. The point of the slide, really, is that we deal with quite a bit of scale, and we deal with it every day from an operational perspective.
So we see some interesting things in how we work, how team dynamics play out, and how all of this comes together. We're faced with some unique challenges, and addressing those challenges has been a very good learning experience for me. Whenever I think about working for Digital Marketing, I'm always happy to tell everybody that I absolutely love my job. I get to go in every day and work with very talented individuals who have been running this since before most people knew what SaaS was, and who had to invent, years ago, things the rest of us are only trying to figure out today. Since I'm up here and can use the opportunity, I want to do a call-out to everybody within Technical Operations: thank you for giving me the opportunity to work with you.

So that gives you a little bit of an overview of Digital Marketing. Digital Marketing has mostly been physical machines, and a couple of years ago we decided we wanted to introduce cloud infrastructure — the idea of cloud — within these data centers that we have. As we started going down this road to the cloud, we made some discoveries while building out this infrastructure. So, to start things off,
I'd like to talk about that road to the cloud. The first thing we quickly understood is that we needed standardization. Now, when most people think about standardization, they think "only buy this type of switch" or "only buy this type of server." But that's not really where the standardization needs to happen. It needs to happen one layer above, in the abstracted software layer: if we're able to abstract and standardize on the technologies we use in software, the hardware underneath doesn't become as critical to standardize.

If you look at this graphic, we have a three-level triangle: standardization, automation, self-service. The area of each level is the effort that has to be put in to get to the next one. Standardization really is one of the hard pieces to get right to build a successful cloud infrastructure. Once you have that standardization layer in place, you can go and automate, because a lot of your variables have been eliminated — you're standardized.
You know what you're working with. And once you have that, self-service comes almost as a bonus feature: once standardization and automation are in place, you can have true cloud infrastructure.

Looking at some of the technologies we've used from a standardization perspective — I mentioned pushing standardization into the software layer, and Cumulus is a good example here. Who here is familiar with Cumulus switches? Okay, a couple of you. A brief overview: Cumulus is basically Linux running on a switch. So you can use many types of switches — Dell has switches, HP has switches, there are commodity switches available — but the operating system on the switch stays the same, in this case a Cumulus distro. So once again we're able to run on different hardware platforms but abstract the standardization into the software: it doesn't matter whether we're running HP or Dell, the configuration and all of the variables stay the same in that software layer using Cumulus. That's a good example of the standardization methodology.

For automation, some of the tools we use are Salt, Puppet, Chef — things that enable us to get to the next level, to ask: how do we do this at large scale without changing one thing on one server, over and over, for 30,000 machines? And then self-service: giving the user, the tenant, the ability to get those resources themselves, so they don't have to come to you with a ticket and say "please go ahead and provision it" — they do it.
They're doing it themselves. So these are some of the things we saw as we came to understand what the road to the cloud is.

Now I want to talk a little about where we're at right now, which will lead into how we decided on OpenStack. In our initial foray into cloud infrastructure, we tackled compute first. We looked at the compute environment and said: hey, we have some problems with compute. It takes a long time to get compute resources — it can take weeks or months to get a server spun up. If we abstract that out behind a hypervisor, we're able to pre-provision, and it becomes a lot faster. We also remove a lot of the complexity: no longer does the consumer, the tenant, have to be concerned with how to plug this thing in, what network ports need to be patched, what VLANs need to be trunked down. All of that is now abstracted out; all they need to ask for is a compute environment. The product teams become more agile because of that — instead of worrying about shipping hardware and tracking orders, all of that is taken care of. And once again, it provides a standard layer of compute, and we introduced some automation.

It was actually a big success. A lot of people really liked it; they're able to use it as an elastic component, stretching out to it when big workloads come in. They can ask for compute in a very rapid fashion and we're able to provision it for them. But from our team's perspective, we quickly saw that other things were holding us up. We're able to get them a VM — a compute unit, an instance — sometimes within 15 or 20 minutes. But as soon as that thing becomes available, it doesn't mean everything else around it is available. It doesn't mean networking is ready.
It doesn't mean that storage is ready. So we saw a weakness immediately in this offering, and I'd like to draw an analogy here to a steam train. A steam train consists of the locomotive — we can say that's the compute. There's the tender — who here knows what the tender is? Cool, thank you — it holds the coal and the water: the storage. You have a coach, which in this case is the tenants. And you have the tracks, which are the network.

The problem we actually solved was this: these teams were coming in and building their own coaches — bringing all the parts, coming to the tracks, putting a coach together — and saying, "well, this is taking a really long time." So all we did was build the coaches for them. We didn't fix anything else. They come in and say, "oh, this is nice — we just get on, the coach is done, and we don't have to worry about any of it." The thing is, it worked great until everything else started slowing down: the coal runs out, and the switchman can't keep up, because we don't have just two tracks to switch between anymore — everybody's moving faster, and now we have a hundred different tracks. The other pieces of the system aren't moving fast enough.

With that, we understood there was a bigger problem here. We really had to address the overall infrastructure, not just the compute: how do we do networking better, how do we do storage better, and how do we leverage the current compute environment within that? This is where we came up with the project we're currently working on, called Project Atom. Project Atom is very much still a POC.
It's not running in production today; we're still very much in a state of flux, changing things — but we've also learned a lot from Atom. The reason we named the project Atom is that we weren't dealing with the smallest pieces of infrastructure today, like containers, and we weren't dealing with things that are physical, either. We were dealing with something sitting right in the middle — the atom — which is the VM.

The idea was to include storage and networking alongside compute, like I mentioned. We looked at it and asked: how do we take storage, network, and compute and provide them as a unified layer that people can consume for both Platform 2 and Platform 3 applications? When we say Platform 2 and Platform 3, we mean non-cloud-native and cloud-native applications, and I'll dig into why that distinction matters. We wanted these consumers to see it all through one layer, which, after going through some iterations, we decided would be OpenStack. We did look at some vendor-provided cloud management platforms as well as open source ones, and OpenStack turned out to be a very good fit for us: it has a lot of community support behind it, it provides an open API, and it gives us the ability to consume that API through Heat.

The other pieces we put in are the operational aspects that we thought important from an operational perspective.
We want to be able to do showback and chargeback — to know how much this thing costs us. We want to be able to do rapid provisioning, with provisioning systems sitting behind it. And of course monitoring: as an operational team, we want to know when this thing is up or down. So like I said, we evaluated many turnkey solutions and finally decided to go through the POC with VMware — and we'll go down the rabbit hole of why VMware in a bit.

It's still very much an iterative process; things can change, and that's one of the reasons we chose OpenStack. We're able to go in underneath, change the complexity, rip and replace. If we decide on a different network technology, we can, and the consumer won't know — they're going through the OpenStack layer. If we decide on a different storage technology, everything from their perspective stays the same. From their standpoint it's a unified layer; underneath it, we deal with the complexity.

To dig in a little more on the Project Atom hardware: we started at the top of rack. Like I mentioned earlier, we were looking into Cumulus — it's a great solution from a standardization perspective, and we're able to switch between hardware platforms without having to change the configuration or the operating system on the switch. We did a spine-leaf architecture here. For our spine we use the S6000s — that's a Dell switch. For our leaf we use the S4810s, the -ON variant — ON standing for Open Networking, which is what enables you to run Cumulus on that switch. For our POC we actually ended up with one S6000 and two S4810s that we tested on. So why Cumulus?
I think I've addressed that a little bit already, but: standardization, the ability to do configuration management on a switch, the ability to have that complexity abstracted into software.

On the compute side, we realized there are two compute components: a management component, where the management clusters run, and the compute clusters. The management component doesn't have to be storage-dense, but it does have to be compute-dense. So for the management component we worked with Dell on a platform called the FX2. It's a great platform — a 2U server that provides four blades — so we get a very dense and very resilient compute environment there. For the compute clusters we went with the Dell R730s: a lot of disk space, so we can run our block storage there. From a storage perspective there's still a lot of work to do; we're looking at using the same hardware we use for the compute environment, which might be a good fit. On the object storage side there are still a lot of things we need to figure out about what the right hardware platform will be.

So here we go — why VMware? Why the VMware stack? Why bother keeping it around?
Well, there are a couple of reasons. The first one is resiliency. If you look at the stacks in that earlier graph, both cloud-native and non-cloud-native applications need to consume this infrastructure. When it comes to the non-cloud-native, Platform 2 applications, a lot of them aren't very resilient themselves: they require the resiliency to be provided by the hardware underneath, so they expect that hardware to be highly available. Versus a cloud-native, Platform 3 application, where the resiliency is abstracted out and provided in the application itself — it doesn't expect the hardware to be resilient; it actually expects the hardware to fail all the time. Having both, we have to be able to provide resilient hardware, and VMware is great at this: the vCenter platform gives us HA functionality, DRS, and a lot of the tooling that enables us to provide exactly that resiliency for Platform 2.

Also, there's a current investment in both the technology and the people: they're knowledgeable about VMware, they understand how to work it and how to support it, so we're leveraging current strengths right there. And they have some unique technologies as well: ESXi, a very good hypervisor; NSX, a great network virtualization tool; and vSAN, an up-and-coming block storage capability.

And then this last line right here. I went in and sat down with my director, and we were talking about how to build out this OpenStack thing. He asked what I needed, and I said, well, we need a lot of resources — most OpenStack teams are fairly large, and ours is not; we have a pretty small team that runs the cloud infrastructure.
And he said, "Well, find somebody else to paint your fence." I said, "What are you talking about?" And he said it's a Tom Sawyer reference — do you know the story? Tom Sawyer made everybody else want to paint his fence, so other people went and did his work. VMware is painting our fence here: they're helping us out where we don't have a big team, so a small team is able to get an OpenStack deployment set up and used by people without necessarily having to grow the team.

NSX and vSAN — I touched on these, but here's a slightly deeper dive on why. NSX gives us a distributed routing option: we're able to provide a true distributed router across the hypervisors, and that router sits within the kernel of the hypervisor. The big one is micro-segmentation. It's a big buzzword — everybody says micro-segmentation, micro-segmentation, it's cool, all this different stuff — but I actually found real value in it. If you remember, I mentioned earlier that Digital Marketing consists of different product groups. Those product groups all require tenancy: they have different maturity levels, they have different stacks that they run, and they want to access their resources only within their own tenancy. Micro-segmentation makes a lot of sense for us there. We can segment them out; they get security, and we can wrap a lot of the networking around them without having to create VLANs and the like, which takes time. As for vSAN: there are many other storage options available, but we used vSAN because it's easy to provision and easy to deploy. It's a good technology for us to use during the POC.
It makes a lot of sense. So, the compute layer here, as I mentioned earlier, is running the ESXi hypervisor. A lot of people still don't necessarily know all of the VMware technologies — and, since they sit below the OpenStack layer, they probably don't need to — but those technologies are what enable us to provide this. Things like HA, high availability, where we can provide true resiliency when any host in a cluster, or multiple hosts, go down: we keep those machines up and running, and with that we're able to reach our goal of providing a certain SLA. And for how these clusters are balanced — as physical compute resources are added, or as virtual resources are added to a cluster, how do we make sure all of this stays balanced, and that each machine is using the optimal amount of compute? DRS is a great tool for that. It balances our clusters.
It keeps things running — once again, a great tool. Right here you can see we use vCenter as the centralized management platform, ESXi, and then a shared-storage aspect — which is not SAN but vSAN this time; we purposely started moving away from SAN. With all of that coming together, we finally have the VIO product — VMware Integrated OpenStack — able to be launched on our infrastructure.

This is the logical view of what VIO is. There are three or four network segments. One is the API network: this is where you access Horizon as well as the API components. You have your management network, where all your management components run — vCenter, NSX Manager, the actual OpenStack components. You have your edge clusters and your compute clusters, which are for future tenancy. And then of course your transport network, where the floating IPs are attached.

A quick set of prerequisites for deploying VIO. Because we're using NSX, the MTU needs to be raised: the default MTU size doesn't work; it has to be 1600 or above. For our Cumulus switches we just pushed it up to jumbo frames and it worked just fine. We run the ESXi hypervisor, but we were looking at scaling and, once again, standardizing, and a great tool for that is Auto Deploy. We run all of our hypervisors in stateless mode: no hypervisor is ever installed on any kind of disk; it all runs in RAM. When the compute unit boots up, it pulls the image down into memory and loads it there. For any updates or changes, we only have to change one image and do a rolling reboot, and the updates go out. vCenter we've talked about already, but it manages the edge and compute clusters as well as vSAN — you have to make sure that's enabled. You have to have NSX deployed and your availability zones created, and then you're ready to deploy VIO via OVF.

All of this — mostly the prerequisites — was already in place. We had a good understanding of how to build all of this out from the VMware perspective; there was nothing new here. From a VIO perspective it was nothing more than deploying an OVF. So the next slides actually walk through the deployment process. Let me just check where we are on time — okay. I'm going to walk you through the deployment process fairly quickly, because I want to make sure I turn some time over to Tim and leave it open for Q&A.

So, a VIO deployment. We have the initial screen coming up — this is logged into vCenter, and we want to deploy an OpenStack instance using all the infrastructure I showed you before; all of that's in place. I authenticate with vCenter, and I select the management cluster that's available for deploying my management components to. I set up my management network — the red line you saw earlier on the logical view — where all of the management components will run; I give it an IP range, and all of those machines will be IP'd in that management network automatically. Then I configure the external network: this is where the API components are going to sit, as well as Horizon. And this is configuring the load balancer pair — we're going to front the interface with a load balancer pair. Then we're going to go ahead and add a Nova cluster.
We're going to back Nova with a compute cluster — the R730s I mentioned earlier — and then tie into a datastore, which in our case is a vSAN datastore. Then we select datastores for Glance: where do we want to keep our images? We can keep images in the management datastores as well, and move them between datastores if we want; in this case we just used all the datastores we could. Then we configure Neutron — in this case using NSX — and log into the NSX Manager. NSX has been deployed with a three-controller setup, so it's fully redundant: if one controller fails, another controller takes over. Then we set up Keystone authentication; we're just using a local database for authentication in this case, since it's a POC, but we do have the option of LDAP or Active Directory. Then we set up syslog — again, we want to make sure monitoring is in place, so we send this to a syslog server — and then review and complete.

That's pretty much it. After that we click Finish, and it uses the provisioning system in the back to start provisioning those machines. It builds out the management cluster, builds out all of the OpenStack components that are needed, and spins up initial NSX instances for edge devices. Those are preloaded, meaning that as tenants come in and start provisioning components, they don't have to wait — the resources are available. With that, we have a fully available OpenStack environment for tenants to consume.
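Once it's deployed, the environment behaves like any other OpenStack cloud, so tenants can drive it through Horizon, the CLI, or Heat. As a rough illustration — all of the names, the flavor, the image, and the network below are hypothetical placeholders, not values from our actual deployment — a minimal HOT template for a single instance looks something like this:

```yaml
heat_template_version: 2014-10-16

description: >
  Minimal example stack: one Nova instance attached to a tenant network.
  Image, flavor, and network names are placeholders.

parameters:
  image:
    type: string
    default: centos-base        # hypothetical Glance image name
  flavor:
    type: string
    default: m1.small
  network:
    type: string
    default: tenant-net         # hypothetical Neutron network name

resources:
  server:
    type: OS::Nova::Server
    properties:
      image: { get_param: image }
      flavor: { get_param: flavor }
      networks:
        - network: { get_param: network }

outputs:
  server_ip:
    description: First IP address of the instance
    value: { get_attr: [server, first_address] }
```

A tenant would launch this with something like `heat stack-create -f server.yaml my-stack`, which is the same stack-at-once workflow we use for larger multi-tier templates.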
So what I'm going to do now is turn the time over to Tim, and he's going to walk us through the rest.

Okay, so we got the green light that we're short on time, so I'm going to go fairly quickly through this, hopefully get some of the big ideas in, and then we'll take some questions. One thing I wanted to point out again: this is a user story session. So what's our story — why do you care what we have to say? Well, one thing is that the Digital Marketing business unit has made a ton of acquisitions just over the last five years that I've been there — at least a half dozen or so; it's hard to keep track. Each of those obviously has one or more applications, and so the number of applications with different software stacks, different infrastructure needs, and so on has grown to the point where we see ourselves in a lot of ways as service providers, just like a private or public cloud solution would be.

With that come some problems: overlapping IP space, differing needs for storage, network, compute, and so on. So we quickly decided we needed a multi-tenant approach. We can't treat everybody the exact same way — at least not yet. Frans mentioned standardization, and we're trying to get to that point, but until then we need to play the multi-tenancy game, which is often like Tetris: you've got a finite amount of space and resources, and you're trying to jam everyone in there and make something functional at the end of the day.

At the same time, from the operations side we have these requirements, and on the other side of the fence we have developers saying: hey, look, we need to go to market quickly.
We can't fuss around with installing operating systems and all that. We need image-based deployments. We need to be able to self-service — we can't do "ticket as a service," as we like to say. And we don't just want a web portal; we want command line, we want API. At some point we also want to be able to burst to a public cloud if we don't have the right resources, so we want to support that too.

So here's what we see as the power of OpenStack. I'm not going to go into as much depth as I was hoping to on our previous infrastructure service. We do a lot of bare-metal hardware installation — we use Cobbler, we have Kickstart, CentOS, etc. — the normal story a lot of you are familiar with. I won't go into that other than to say it's very much ticket-as-a-service: someone puts in a request, and some weeks or months later they end up getting their infrastructure. Frans gave an overview of Project Atom and what we're trying to accomplish there, but there are multiple components we want to focus on in order to get rid of the delay in getting those resources to our internal customers. And I just wanted to point out: we're not done, right?
Once we have OpenStack in production, that's not the end of the road. We're very much thinking about platform as a service — about ways we can get past the idea of just having an instance with an operating system, toward having something functional that's standardized for application development as well.

Frans mentioned that we're using Cumulus for our POC, the reason being commoditized top-of-rack switching. I just wanted to show a quick example of the manual configuration — just getting in and starting to play around with Cumulus was very easy. That right there is from a switch in our production data center that's doing beta work. That's the entire config: all I did was grep out the non-comment lines, and that's the entirety of the configuration. Very slick if you're doing it manually, but of course configuration management is an option as well. If you like any of these tools, or any others — it is Linux, so you can install whatever agent you want and manage it however you like. So, a shout-out to those guys. We don't have any contracts or anything, so we're not committed, but it has been a fantastic tool to work with.

I also wanted to quickly show an example of deploying a Heat template, because this is the end goal for this phase of our implementation of Project Atom. We want to get out of the business of doing single instances; instead we want to spin up an entire stack at once. For dev/test this is fantastic; for A/B testing, again, fantastic. And in general this is where we want to go: CloudFormation-compatible templates that work in our private cloud, in the public cloud, and so forth. This isn't anything new for anyone who's worked with Heat templates, but for anyone new to it I thought a quick demo would be nice. I'm going to speed this up just a little bit... it's launching... and at the end of the day we end up with a stack. So here's a simple three-tier application. We're doing a lot of experimentation in this space currently; we see it as a great way of going forward and getting out of the single-instance game.

And then some thoughts on future vision. Again, we're not done at OpenStack. We're very anxious to see what work gets done in the container space, within OpenStack and elsewhere. These are some of the buzzwords, the technologies our developers are screaming about, that we're starting to play around with as well. And with that, I guess we'll break for questions, if there are any. Go ahead.

[Audience question about YANG.] I'm not familiar with YANG — Frans? [Frans] Thank you. The question was: are you looking at using YANG-type modeling for some of these configuration systems? The YANG model is a standard system for building an XML template, or model, for deploying configurations or services, and it's starting to be used. There's also another model called TOSCA that's used for this — another standard approach, which covers not only the deployment of services but also the configuration of your downstream systems. [Tim] Yeah, so TOSCA in particular is on the roadmap. YANG I hadn't heard of before — it's something a little further out, but we're thinking about it; we're not involved in that currently.

[Audience] Have you started kicking around the idea of a lightweight OS in your container approach? CoreOS comes to mind, but there are some other options out there as well. Can you talk a little bit about that, please?
Yeah, so we have kept that on our radar. We've been looking at CoreOS; we've looked at Photon from VMware; there's Alpine; there's Ubuntu Snappy. There are a bunch out there, and we're still in the evaluation phase. CoreOS for now seems to be pretty mature versus the others, but we're also a big CentOS shop, so Red Hat's Atomic is very interesting as well. No determinations made yet.

I have two questions. One question is: how did you make the business case to basically buy everything from VMware? And if you do the full SDDC suite from VMware, why didn't you go for vRealize?

Both good questions. Let me take them in backwards order. vRealize we actually did try; we POC'd that product. We felt that, for our company and for some of the objectives we were trying to meet, it did not meet those objectives. It didn't have the functionality we wanted out of vRealize. As far as I understand it, it's very much focused on the provisioning aspect of machines, and we are already very mature in that aspect: we provision operating systems all day long, and we have a very mature provisioning system. Therefore we didn't think it was necessarily a good fit. The other thing I mentioned earlier is that when we selected the OpenStack layer, we didn't want a vendor-driven layer there. We wanted an open layer where we can change things underneath if we want to, without being inhibited by the vendor, as well as being able to contribute back to the community and make our own updates and changes that we see as valuable, versus having a vendor drive that roadmap. So that's why not vRealize.

From the VMware perspective, we already had infrastructure built out on top of VMware before we started this roadmap, this project, and therefore we wanted to be able to leverage a lot of what we've built out already, as well as the knowledge we've gained from that. And, in the same vein, we were able to leverage some of VMware's knowledge: VMware has a big team working on OpenStack, and we were able to leverage their knowledge and their work.

And then, just to reiterate, the other piece was the big difference between platform-three, or cloud-native, applications versus platform-two applications. We have a lot of applications that are highly dependent on infrastructure, and so having instances go offline is customer-impacting. We're trying to phase out those sorts of applications, but until then we need the resiliency offered by VMware for our virtual instances.

When do you anticipate your OpenStack moving from POC to prod?

The current plan would be later this year. We also have some reference material in the back if anyone wants to read more about it. Well, I guess VMware has more reference material in the back, if they want to read more about it. It looks like we've got about two more minutes; let's take one or two more questions.

Okay, thank you guys. Thanks for coming, appreciate it.
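The Heat demo described in the talk could be sketched as a minimal HOT template along these lines. All names here (the resource, image, and flavor) are placeholders, and the speakers' actual Project Atom templates stand up a full three-tier stack rather than this single server.

```yaml
heat_template_version: 2014-10-16

description: >
  Illustrative single-server stack; placeholder names only.

parameters:
  image:
    type: string
    default: centos-7       # hypothetical image name
  flavor:
    type: string
    default: m1.small

resources:
  web_server:
    type: OS::Nova::Server
    properties:
      image: { get_param: image }
      flavor: { get_param: flavor }

outputs:
  web_ip:
    description: First IP address of the server
    value: { get_attr: [web_server, first_address] }
```

With the CLI of that era this would be launched with `heat stack-create -f template.yaml my-stack` (later releases use `openstack stack create`), which is the "spin up an entire stack at once" workflow the speakers describe.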
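As an aside to the Cumulus discussion earlier: the comment-stripping step the speaker describes (grepping out the non-comment lines to show "the entire config") can be reproduced with a one-liner like the following. The file path and interface contents here are illustrative placeholders, not the actual production switch configuration.

```shell
# Hypothetical /etc/network/interfaces-style snippet for a Cumulus Linux
# switch; the contents are made up for illustration.
cat > /tmp/interfaces <<'EOF'
# The loopback interface
auto lo
iface lo inet loopback

# Uplink toward the spine
auto swp1
iface swp1
EOF

# Drop comment lines and blank lines to see the effective configuration,
# as the speakers did for their slide.
grep -v '^[[:space:]]*#' /tmp/interfaces | grep -v '^[[:space:]]*$'
```

On the sample file above this prints just the four active lines (`auto lo`, `iface lo inet loopback`, `auto swp1`, `iface swp1`), which is the point the speakers were making: the effective switch config fits on one slide.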