Okay, it's quarter after by my watch, so I think we'll go ahead and get the show on the road. Thanks, everybody, for coming. My name is Mark Voelker, I'm the OpenStack Architect at VMware, and we've got a great customer panel here to tell you a little bit about what these folks are doing and what problems they're solving with OpenStack. So let's take a minute and let you introduce yourselves.

Hello, my name's Peter Bogdanovich. I'm the Manager for Compute and Virtualization Services for Nike's Consumer Digital Technology group.

I'm Tim Gelter. I'm on the compute platform team of Adobe's Digital Marketing TechOps organization.

And I'm Prashant Rao. I'm with Wells Fargo, where I manage the engineering group responsible for deploying OpenStack.

Great, thanks. So let's start off with a little background. Obviously you're all looking at different business problems, and we've got a couple of different industries represented here. So why did you choose OpenStack? What was the business problem you were trying to solve, and how did OpenStack help you get there?

So for us, we didn't start out trying to build a private cloud necessarily, and we didn't say, hey, we want OpenStack. We started out just trying to change the relationship we have with managed service providers. We get a lot of services from managed service providers, and we got very slow responses from them for infrastructure changes, and we wanted to change that relationship. So we basically just wanted to manage our own vCenter and our own vSphere infrastructure. And then as we worked with the DevOps teams, they wanted APIs. As we looked at the ways we could deliver those APIs to them, we landed on OpenStack as the most obvious one.

Something of a similar story for us. Over the years we've had a ton of acquisitions, different companies, each with one or more products and their own way of doing things, right? Different operating systems, different network technologies in use, et cetera. So first of all we looked at a way of standardizing a lot of that; there's a lot of inefficiency when everyone's doing their own thing and not sharing with each other. And then frankly, time to market was very important. We wanted a way of allowing people to allocate resources in a non-ticket, as-a-service model. We want to give them API access, and we want it to be a stable, open API that we can count on for at least the next dozen years, as opposed to something that might go away.

Yeah, over the past ten years we had virtualized a large part of our infrastructure using VMware. About three years ago we developed a homegrown web app that allowed self-service access; it was just a .NET-based web application we called Dev as a Service. And then about a year and a half or two years ago, a real decision was made at the executive level to build out an enterprise-grade private cloud, and the decision at that point was to bring in OpenStack. So that's really what drove us from a proof-of-concept model of self-service via a web UI to a full-fledged, OpenStack-based private cloud environment. That's it.

Okay, so that's actually a good segue into talking about the workloads that are running on top of this.
Obviously, in the OpenStack space, we see a lot of different kinds of workloads. Some are platform 2, some are more platform 3; there's pets, there's cattle. Can you give us an idea of the kinds of applications you're running on top of this?

Yeah, so for us, the stuff that's in the data center is the legacy apps. One of the requirements we had going in, when we took more control over vCenter and the vSphere infrastructure, was that we didn't want to require any application changes; we didn't want to require anything of the application engineering teams. So these are much more like pets, unfortunately. But we're trying to treat them more like cattle, or at least bring some automation to the deployment of these things, in the hope of working toward making them shorter-lived. It was hard when it was a managed service and everything went through a ticket; there's a lot of resistance to ever giving up a VM because it took so long to get one created. So this is part of the progression. But we also have to live with apps that expect an enterprise-data-center kind of environment, and provide that kind of experience.

Gotcha, gotcha. And so in your case, the consumers for OpenStack are actually your internal app teams, both new and old?

Yeah, and really my consumer for OpenStack is just the release management group, because they own the path to prod: all the various environments that application packages are deployed to as they move to prod.

Gotcha.

So we have vastly different groups; we're on both ends of the spectrum, right? We've got the 15-year-old SaaS application that was born on bare metal, with very manual processes, where we buy the most expensive hardware and expect it to be up for five-plus years. And then you've got the other end, these new acquisitions that started in Amazon and expect redundancy in the application as opposed to the infrastructure. So we're trying to accommodate both of those workloads. And then you've got the others that are saying, hey look, we kind of see the point of the cloud, so we're going to try to move toward that model, but we're just not ready; we still need the infrastructure to be there in case something goes wrong, but we want to be able to spin up resources quickly.

So in your case, you've got the full spectrum.

Absolutely. We've got some really old stuff, some really new stuff. If you want to say platform 2, 2.5, 3, we've got it all.

Sure, okay.

Yeah, we're much more similar to Peter. We have a lot of legacy applications that are treated as pets, and we have to ensure we can provide a platform that supports those. And because we're in a highly regulated environment, we have to ensure integration with the asset management system, operational readiness, compliance, et cetera. So in that regard, our workloads are primarily legacy applications, and enabling them to take advantage of the cloud is almost secondary, because there are a lot of changes that need to take place just to get those app teams to onboard onto our cloud. There's a whole onboarding process we go through: do they meet the criteria? Do they have load balancing requirements, and can we satisfy those? What tier classification is the application?
Is it a critical application? If so, it's probably not a candidate for our cloud right now. And like Peter also mentioned, our end users, our customers, are primarily what we call build-and-release teams. These are essentially release engineering teams that act as proxies or surrogates for the application development teams themselves. So they're the primary users of our cloud, currently using the Horizon interface, though eventually we're hoping that migrates to API-driven usage.

Yeah, so that's a good segue as well. When we look at the spectrum of OpenStack users out there, we see a lot of different consumption models. Some people, like you say, are using Horizon and doing the point-and-click thing; some are using the APIs directly; some are using Heat. So it sounds like in your case a lot of folks are still in the earlier phases, where the numbers they're working with at any time are a little smaller. Peter, I know you and I have talked a little about some of the larger-scale stuff you're doing, so the consumption model is probably a little different for you.

So we only had this in our lab about six weeks ago. Now we've deployed it in this new vSphere environment we built out, and it's only been in there a couple of weeks. Maybe ten days ago I let the DevOps team at it, and they're immediately experimenting with scale and things like that. So we are using Heat, and they're also using a library called Fog, which is a Ruby library. We're trying to do everything through either Heat or Fog; we really are discouraging anybody from using the GUI for anything, or even the command-line tools. One of the principles we want to have is that everything should be checked in: any change we make to the infrastructure anywhere should be documented in code and checked into source code management. It's a very DevOps-y consumption model, and we're trying to push that design pattern all the way down to the floor of the data center.

Got it, okay. And how about you?

So again, it's the full-spectrum model, right? We definitely have teams for whom launching one instance at a time is absolutely all they need: they just want to get a VM, they do it, and off they go to the races. And then we've got the other side, where Heat is fantastic as a starting point. We want to start there, and the API is important as well. But we also want to move that outside of OpenStack specifically, in a way. We have a homegrown CMDB system that's pretty powerful, and ultimately we want all of our infrastructure configuration to come out of there. So you define your cluster, you define your application inside the CMDB, and it then goes into OpenStack via the API, using Heat templates or maybe not, right? We'll see how that works, and we'll define what that looks like in a way that lets us deploy to our private cloud, to the public cloud, whatever we want to do.
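[Editor's note: as a rough illustration of the everything-in-code consumption model Peter describes, where provisioning goes through Heat or the Fog Ruby library rather than the GUI, here is a minimal sketch using the Python openstacksdk. The cloud name and the image, flavor, and network names are hypothetical placeholders, not details from the panel.]

```python
# Minimal "infrastructure as code" provisioning sketch with the Python
# openstacksdk. Assumes a "nike-dev" entry in clouds.yaml and that the
# named image/flavor/network exist -- all hypothetical placeholders.
import openstack

conn = openstack.connect(cloud="nike-dev")

image = conn.compute.find_image("ubuntu-14.04")
flavor = conn.compute.find_flavor("m1.medium")
network = conn.network.find_network("app-tier")

server = conn.compute.create_server(
    name="web-01",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)

# Block until the instance is ACTIVE, then report its addresses.
server = conn.compute.wait_for_server(server)
print(server.name, server.addresses)
```

Because a script like this (or the equivalent Heat template) lives in source control, the "every infrastructure change is checked in" rule the panelists describe falls out naturally.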
Cool. So OpenStack's got this really big ecosystem. There are lots of different points of abstraction: different networking back ends, different storage back ends, different hypervisor back ends. You obviously all made some choices that are fairly similar, so let's talk a little bit about that. What got you excited about where you are today, and how did you go about selecting vendors and the back-end technologies you've been working with? I think we've got a couple of different use cases here, so let's start with you.

Yeah, okay. For us it was mostly comfort and experience with VMware. My management in particular has a long history working with VMware, much more so than me, and had very strong feelings that the most important choice was a really reliable hypervisor platform. So that choice was made before we even got started with anything. The idea that we'd build out a tried-and-true vSphere architecture was a done deal: we'd work with Cisco and VMware, and we chose NetApp instead of EMC, but whatever; these are big enterprise tools with very well understood architectures. Where we struggled was the automation layer, figuring out what we were going to do for orchestration. We looked at a lot of different orchestration tools. The DevOps guys basically hated all of them until we got to OpenStack. Now they're suspicious, but they don't hate it.

Tim, how about you?

So for us, believe it or not, virtualization is actually a very new thing within our organization. Two years ago you could have counted the number of virtual instances on two hands, basically. We've gone from that to, I don't know the latest count, I think 13,000 VMs in the last couple of years. So that was a big sell in and of itself. These other technologies are scary for our teams; we're very siloed. We've got a storage team, a networking team, a security team, et cetera, all very much in their own domains protecting their kingdoms. So if you start talking about software-defined storage, software-defined networking, and so on, they hesitate a lot. We had the experience with VMware, we were able to show business value, so we were able to say, hey look, we have technologies from VMware that we can trust to get started, so you can get a taste, stop being afraid of it, and then come along with us on the journey.

Okay. And Prashant?

Yeah, to a certain extent the choice of a hypervisor was really not a choice, and in a sense that's a good thing, because with the introduction of OpenStack and a cloud, that's enough of a process and cultural change that de-risking the choice of hypervisor was the smart thing to do. We'd already been working with VMware for a number of years and had virtualized a large part of our infrastructure, and our VMware administrators are all well trained, so we set that question aside. And then we made a distinction between infrastructure provisioning and application deployment. For the higher layers of the stack, at the app layer, we went through a process of looking at various software configuration management tools, Puppet, Chef, et cetera, and selecting one of those. Making that distinction allowed us to offer an infrastructure platform layer for provisioning infrastructure.
And then at the application deployment layer, we allow a little more flexibility and choice, whether that's Heat and Heat templates or Puppet or other tools for app deployment.

Got you, okay. So a couple of you mentioned some common themes. You talked about how it was an easier on-ramp to OpenStack, which in itself was kind of a scary technology at first but met the needs you were looking for. And you mentioned the ability to operate it over time; I think, Prashant, you mentioned you have an IT staff that was already well trained in that technology, so it's familiar ground for them. So let's talk about operations. Day two is obviously the thing everybody's more concerned with in the OpenStack space now; a lot of the day-one problems have been solved. Now we talk about how to operate it over time. How do we deal with all the log messages these distributed systems create? How do we keep it up and running, and how do we know when things are going wrong? Can you give us an idea of what you're looking at in that space?

So yeah, that's going to be really challenging for us, because we've relied so much on our managed service providers. Part of what we're doing, by taking more responsibility on ourselves, is that we now have to provide these services too; those operational roles have to be filled. There's a lot of work going into that right now: teams are building out our monitoring capabilities and our incident response capabilities. So it's yet to be seen how that works out. I'm working with them, I want them to be very successful, I don't want my phone to ring all the time, but that's something that still has to be proven in our organization. As part of our installation, though, we installed vROps: when we purchased the vCenter pieces we purchased the vCloud Suite licenses, so we installed vROps and the Log Insight tools. We have those wired into our manager-of-managers alarm panel, basically, so the basic guts are in place. The thresholds, the tuning, the runbooks are still in development.

So for us, one of the fortunate things with VMware in particular is that it's been a very operations-focused organization for a long time, which is where we're at. We work with the developers and so forth, but operations is what we care about most. So we know going in that you've thought about these things. We've got ways of upgrading, where in a lot of cases upgrading OpenStack is a can of worms you don't want to open, and the ability to roll back if there were some issue; again, that's not something that's easily done. And then the monitoring as well: we have dedicated monitoring teams, and we're still working with them on how that will work, but the things we've been able to do with Log Insight out of the box have been pretty incredible. You install the thing and you're good to go. So a lot of it is that we're able to view operations as kind of a boxed offering from VMware, whereas we'd need a much bigger development team to cook those things up on our own.

Cool, sure.
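[Editor's note: the monitoring the panelists describe is mostly packaged tooling, vROps and Log Insight wired into an existing alarm panel, but the same panels can also consume simple active checks against the OpenStack APIs. Here is a hedged, Nagios-style sketch in Python; the Keystone endpoint and credentials are placeholder assumptions.]

```python
# Nagios-style health check against the Keystone v3 token API.
# Endpoint and credentials are placeholders. Exit 0 = OK, 2 = CRITICAL.
import sys
import requests

AUTH_URL = "https://keystone.example.com:5000/v3"
BODY = {
    "auth": {
        "identity": {
            "methods": ["password"],
            "password": {
                "user": {
                    "name": "monitor",
                    "domain": {"id": "default"},
                    "password": "secret",
                }
            },
        }
    }
}

try:
    resp = requests.post(f"{AUTH_URL}/auth/tokens", json=BODY, timeout=10)
    if resp.status_code == 201 and "X-Subject-Token" in resp.headers:
        print("OK: keystone issued a token")
        sys.exit(0)
    print(f"CRITICAL: keystone returned {resp.status_code}")
    sys.exit(2)
except requests.RequestException as exc:
    print(f"CRITICAL: keystone unreachable: {exc}")
    sys.exit(2)
```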
Yeah, in addition to using some of the VMware-based tools like vC Ops to look at the VMware infrastructure in particular, we've had a great experience using Nagios and integrating with our existing network management systems, working with our NOC on that. And for log aggregation we use the ELK stack: Elasticsearch, Logstash, and Kibana. That's been another really great offering. So it's the combination of the existing out-of-the-box tools from VMware and some of the newer tools specific to the OpenStack environment; that's how we're bringing it together. There's a learning curve with those new tools, for sure, because they're open source rather than vendor supported, but it's letting us get the best of both worlds.

Okay, so you're getting the best of the open ecosystem as well as the products you already know.

Absolutely.

Gotcha, okay. So in terms of operations, we've talked about a couple of different-sized clouds here, a couple of different styles of deployment. How big are the ops teams actually running these clouds? Fairly big, fairly small?

I'd say pretty small. I think there are about five of us building this, and then we're going to plug into a bigger organization: they support the stores, there's a whole larger support organization out there that we'll plug this into. At that point you're dealing with much larger teams, dozens of people, maybe even more than that, because there's a whole support organization that reports to different management than the engineering side.

Gotcha.

Very similar for us. The idea of using OpenStack was born within the cloud team, but we've got those siloed organizations I mentioned, and we're working with those guys; they're coming along, and fortunately we've been able to get a lot of buy-in. We've got people here at the conference from our storage team looking into Ceph and Swift and those sorts of technologies; we've got a network engineer here, the architect, looking into what we do around networking from a physical perspective; and we've got a security guy here talking about how we secure this thing. So ultimately, yeah, our team's small, three people working on this actively, but we've got dozens within our grasp once we're looking at more of the day-two operations.

Gotcha.

Yeah, and for us, the operations team that supports OpenStack is about a dozen people so far, and that's primarily to support the OpenStack infrastructure itself. For the actual instances that get created, there's obviously a whole broader operations and support organization, tier one, tier two, et cetera. Right now I think we're doing okay with the current staff for supporting our OpenStack infrastructure per se, but it's going to continue to grow as we grow.

Sure. So it sounds like pretty small teams, for the most part, that pull into a larger organization as things scale up. Cool.
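[Editor's note: as a sketch of the kind of glue the ELK approach implies, normally Logstash's job, here is a hypothetical Python snippet that parses a standard oslo.log-format OpenStack log line and indexes it into Elasticsearch. The host, index name, sample line, and the elasticsearch-py 8.x client are all assumptions.]

```python
# Parse a standard OpenStack (oslo.log) log line and index it into
# Elasticsearch -- the job Logstash normally does in an ELK pipeline.
# Host, index name, and the sample line are illustrative assumptions.
import re
from elasticsearch import Elasticsearch  # elasticsearch-py 8.x assumed

LOG_PATTERN = re.compile(
    r"^(?P<timestamp>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) "
    r"(?P<pid>\d+) (?P<level>[A-Z]+) (?P<module>\S+) (?P<message>.*)$"
)

es = Elasticsearch("http://elasticsearch.example.com:9200")

line = ("2015-05-20 12:01:02.345 4242 ERROR nova.compute.manager "
        "[req-abc123] Instance failed to spawn")

match = LOG_PATTERN.match(line)
if match:
    # Store the structured fields so Kibana can filter on level/module.
    es.index(index="openstack-logs", document=match.groupdict())
```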
So you mentioned networking and security, which are two hot topics in the OpenStack space; we saw them come up in the keynotes earlier in the week. So let's talk a little about those. On the networking side there are obviously a lot of choices to be made in how you architect the networks that go into these clouds, and a lot of it is driven by what your workloads are intended to be, and we've got a couple of different workloads here. So give us a little overview of what you're doing on the networking side.

So we didn't want to require changes from the developers, and we didn't really want to require a lot of changes in the configuration management either. There were tools already being used, F5 load balancers, physical firewalls between the physical database servers and the app tiers, things like that, and we weren't going to require people to change them. We wanted to be able to deploy these apps essentially unchanged into this environment. So as we looked at our choices, we decided to use provider networks. We basically built a regular data center with the same tools, with F5 load balancers and an east-west firewall in there, then used provider networks to map VLANs in, and tried to keep relatively large, flat networks that we deploy lots of VMs into. And it's sprawled; now there are probably a couple dozen VLANs I have to keep track of. But I very consciously tried to say, we're going to do /21s or /20s, we're going to make it really big and flat. That was the choice we made this time around.

So in Adobe's case, you've got a lot of acquisitions, a lot of different teams working within the cloud, so maybe a little different networking model?

It is. And the other piece of it is the physicals; we can't ignore those. More than 75% of our infrastructure is physical, and we have to interact with those systems. So we're very much in the midst of making a lot of those decisions. There are some things we've come to terms with: if we try to extend what we're doing in the OpenStack world to the physicals, we're not ready; we're not going to be able to do that. So we're taking more of a greenfield approach. In certain data centers we're going to roll this thing out, and we have to decide: does it need to communicate with the other instances? If so, how? Does it go over the WAN? Is it a direct connect? What are we going to do there? The consensus we seem to be coming to is that we'll work with the network engineering team to get a very solid underlay network, then transport over VXLAN and hopefully do translations to VLANs. We're still cooking it up; this isn't production today, but it's going to be a complicated problem.

Cool.
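[Editor's note: Peter's approach, mapping existing data-center VLANs into large, flat Neutron provider networks, looks roughly like this through the API. A hedged sketch with the Python openstacksdk; the physical network name, VLAN ID, and /21 CIDR are illustrative assumptions, not values from the panel.]

```python
# Map an existing data-center VLAN into Neutron as a provider network,
# then give it a big, flat /21 subnet. Names, VLAN ID, and CIDR are
# placeholder assumptions.
import openstack

conn = openstack.connect(cloud="example")

net = conn.network.create_network(
    name="dc-vlan-301",
    provider_network_type="vlan",
    provider_physical_network="physnet1",  # maps to the underlay trunk
    provider_segmentation_id=301,          # the existing VLAN ID
    is_shared=True,
)

conn.network.create_subnet(
    network_id=net.id,
    name="dc-vlan-301-v4",
    ip_version=4,
    cidr="10.8.0.0/21",         # "really big and flat", per the panel
    gateway_ip="10.8.0.1",      # existing gateway on the physical VLAN
    is_dhcp_enabled=True,
)
```

The design choice here is that routing, firewalling, and load balancing stay on the existing physical gear (F5s, east-west firewalls), while Neutron only hands out addresses on the pre-existing segments.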
Yeah, when we made the decision about a year and a half ago to go with OpenStack for our private cloud in the enterprise, we actually decided to stay with Nova Network at that point, and we've stuck with it: it's basically Nova Network with the FlatDHCP manager, and in one case we actually need to support the VLAN manager within Nova Network as well. We're in the process of upgrading, or making the decision to go with Neutron and NSX, et cetera, and that's going to happen sometime this year. As such, that may require a forklift upgrade; we're still exploring options. With regard to things like load balancers, we still have a pretty big load balancer team within the company, and we're leveraging those capabilities outside OpenStack. So we're still taking the crawl-walk-run approach to the cloud: really just driving adoption and moving teams from plain virtualized infrastructure onto an OpenStack-administered virtualized infrastructure.

So, some of you have been around the OpenStack community for a while now, and some of you are relatively new to it. Give us a feel for what the OpenStack community needs to do next to make customers like you more successful. What are the big gaps we have left to close?

Yeah, I'm not even sure we really understand where the problems are yet. The first things we've found missing for us are around understanding API scale. We're just so new to it that we're having a hard time understanding where the edges are and how far we can push things. So I think the maturity piece, and this maybe isn't so much about OpenStack as about VMware Integrated OpenStack, is understanding where the limits are and the best practices for design. I think that's still an evolving thing, and we're going to be a part of those decisions. Yeah, I'm not sure I have a whole lot more than that, though.

So I'll answer in a couple of different ways: one from the VMware side of things, and then the community in general. We've got Dan over here, and we give him a hard time all the time; it's a fantastic product. One of the things we've been drilling on is that we want our developers to not really know or care that VMware is behind it. If they find something online, a Heat template or something they want to try out, we need that to just work, without any caveats of, oh, this doesn't work because the image has to be stored in Glance in OVA format, that sort of thing. Masking a lot of that, so developers don't care what's in the back end and we can tweak it as we need to, is a big thing for us. And from the community side in general, and this is not to hurt any feelings, but over the months we've been working with it, it seems like in a lot of ways the community tries to do everything Amazon's doing, to be everything to everyone, taking on too much without spending enough time stabilizing, getting rid of bugs, and working on performance so we can scale to the extent we need to. And along with that, there's a lot of confusion about what I can use today versus what's just incubating and I don't really want to touch yet, so we don't really know where to place our bets.

Yeah, it's interesting.
I mean, if you look at the keynote this morning, there's a lot of talk about Docker, and if you go to some of the sessions here, the Docker and container and Cloud Foundry sessions are completely jam-packed. I was reflecting on that and thinking, if we look at the whole pets-versus-cattle question, pets are here to stay. It's very unsexy and it's not really interesting to work on, but one of the things I would challenge the OpenStack community to do is continue to extend support within the platform for pets, because they're here to stay. That's one of the reasons we're with VMware: it's a great choice for us because you have that stable platform with all those underlying capabilities. And I think that's something that gets lost. With Docker and Mesos and all these cool technologies going on, there's still all the stuff that runs the day-to-day business of very large enterprises like Wells Fargo, and we need to continue to support it and provide capabilities for it within OpenStack if we're going to get adoption from those companies.

Cool, thanks. Okay, I think we've got time for maybe one or two questions from the audience. There's a microphone here in the middle if folks can make their way there, or just raise your hand and shout.

Yeah, so right now our cloud is internal; our consumers are internal-facing, so it's all internal business applications. It's not the external-facing financial transactional applications currently, but that could change in the future.

Oh yeah, I think that one was for me. So yeah, absolutely, they've been engaged since day one. We've had to go through a number of security reviews in order to bring in any new products. With VMware, of course, that was de-risked; it was well entrenched. But to bring in OpenStack and any of the ancillary technologies, they were embedded from day one. We have to get their sign-off, and there's a security review process that all new products have to go through, which this did.

How about the rest of you?

The security team's pretty involved. We work closely with the security teams as well, and they meet with us regularly. They're actually very excited about NSX; NSX is a big selling point for them. My boss, even before we thought about using OpenStack, was really hot for NSX, so everybody was like, we're going to get it anyway. From our perspective the question was how we'd manage deploying things: in some orchestration tool we'd have to reach into vCenter and also into the NSX manager, which gets really complicated. This actually simplifies a lot of those problems, and as we become more mature we'll start to use the security groups better. So there's a lot of enthusiasm in our security group around NSX.
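[Editor's note: security groups are the construct Peter expects the teams to lean on as they mature. As a minimal sketch of what managing them through the API looks like, again with the Python openstacksdk; the group name, port, and CIDR are hypothetical examples.]

```python
# Create a security group and open HTTPS from internal address space.
# Group name, port, and CIDR are hypothetical examples.
import openstack

conn = openstack.connect(cloud="example")

sg = conn.network.create_security_group(
    name="web-tier",
    description="Allow HTTPS into the web tier",
)

conn.network.create_security_group_rule(
    security_group_id=sg.id,
    direction="ingress",
    ethertype="IPv4",
    protocol="tcp",
    port_range_min=443,
    port_range_max=443,
    remote_ip_prefix="10.0.0.0/8",  # internal address space only
)
```

The same rules can be expressed in a Heat template, which keeps the security policy in source control alongside the rest of the stack definition.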
Yeah, to add to that thought, we had a similar experience. When we first went to our security team with this idea, they said, whoa, we're going to do a cloud? This is intense. And then we started talking about some of the things they could accomplish. NSX in particular came up as very interesting for policy application, and, surprisingly, so did images. Up to this point we don't do anything with images; everything is bare metal or an installation onto a VM, with the operating system laying down the configuration, users, and so forth. So they see images as an opportunity to standardize things and run them through a pipeline that checks for compliance and so forth before anything goes to production. They see a lot of wins there as well.

Cool, so in some cases it sounds like OpenStack is actually giving you better security than you might otherwise have had.

That's the hope, yeah.

Excellent. Other questions?

I can go first if you want some more time to think. So I think I already showed my hand a little earlier when I talked about the challenges we've had working with internal teams. A lot of the time it hasn't been a technology problem so much as a problem of sharing the vision, getting people to work with us and not against us. Fortunately that's changing. So making an effort at a high level to tackle this as a group would be the thing I'd change.

And I think we went at this backwards. We said, well, we want vSphere, and then it was, well, how are we going to run it? Then we'll need automation software. We were basically completely upside down: we should have said, what do our customers want? They want APIs to provision infrastructure. How are we going to provide that? That's a mistake that's sort of my fault, but it was kind of driven from above too. It's a very aggressive team in the CDT part of Nike, and there was a lot of pressure to just go do it, just get something done. So: slowing down.

I like that. Just do it. Just do it.

Yeah, and if I had to do it over again, I would probably have relaxed the controls we had for access into our environment. We had a pretty rigid onboarding process, and that perhaps stifled adoption a little. So I would have changed some of those policies and made this more of an elongated POC to begin with.

Right now, that's not us. It's really pets-type applications, or P2-type applications, in our environment, and they're highly vetted even before they get onto our platform. So if they need, say, some type of Oracle database and it's in a different data center, then that's just not even a candidate application. They're pretty highly vetted before they ever get onto our platform.

From our perspective, I mentioned the hardware footprint. We've been managing that sort of spillover with spare hardware that's always online and ready to go. So we're well aware of those types of applications, and we're looking at them specifically to give us feedback. But as I mentioned before, we're still in POC; we're working with them to see how well this actually works. We're starting to work with Dan and team on how we do autoscaling so the admins don't even have to think about it: Ceilometer triggers something, and instances get spun up and added into the infrastructure. Those are all roadmap at this point; no real experience to speak to there.

Within Nike, there are teams building apps that we consider cloud-native apps.
And the things that are cloud-native and deploying to public clouds have these capabilities built in and designed into them. We're dealing more with the legacy things. We believe there are opportunities for responding to workload here, but it's a maturity-level thing. We have code names for this: the old stuff we called Coke Classic, and Coke MX is the code name for what we're doing right now. The idea that we'd be able to respond to workloads and things like that comes more as we try to containerize things and move in that direction. That's more like 2016, and I think we're just calling it Rainbow right now, because it's over the rainbow.

Okay, in the back.

So from my perspective, a lot of it's just unfamiliarity. You have a lot of people who have been doing the same thing, and doing it quite well, for a long time. Anything new is going to be questioned, and that's not a bad thing; you absolutely should be doing that. So selling the idea that, hey, look, maybe there are some efficiencies to be gained here, has been the best way of going about it, along with trying to solve some of their problems with the tools. With security, for example, I mentioned we were able to show some value there, and we've been able to do the same with some of the other teams as well.

Yeah, for us it's somewhat cultural, but also, because we've already virtualized a large part of our server infrastructure, the question the app teams ask themselves is: why? So we've had to use sometimes the carrot, sometimes the stick. Eventually we're trying to get them to the point where, once you're on our platform, you can actually engage in application transformation, make the app cloud-aware, take advantage of our APIs, et cetera. But right now it's just about getting them on board.

We're not really seeing a lot of resistance. People are very enthusiastic about being able to have control over their infrastructure. They're used to opening a ticket and waiting somewhere between 24 hours and a week for something to happen; the idea that they can immediately make changes and get things worked on that same day is incredibly attractive. We're having a little harder time with the legacy Windows folks; we invited them to put their development stuff into the cloud, but so far they've not taken us up on that.

Any other questions?

I don't know, we're not moving there yet. That's why it's over the rainbow, right? It's over the rainbow; we're not there yet. There's no Docker container model in anything we're doing. We have our own homegrown application deployment tooling that's like ten years old; it's very mature, it's been around for years, it has a key-value substitution database kind of thing that it does, and it has its own package format. All we need to do at the infrastructure layer is get a VM image built with enough smarts in it to get Puppet running, which deploys those deployment tools, and then you can deploy apps there.

So we also see that as a big problem.
Containers, data proliferation, sprawl, and so forth. It's something that's becoming more and more apparent as we start to actually plan these apps. We've had discussions with developers and gone over all the different tools they want to use: they want physical machines with RAID, they want Mesos running on top of that, they want multiple copies of data in HDFS, and it's a problem, right? So we're hoping the community comes in and helps us out with that, because I don't think we're ready for it from an operations standpoint quite yet. Maybe that's just us, but that's how we see it.

Yeah, and we haven't embarked on containers, or our strategy for that, at all yet. It's too early.

Have you got a stand-alone security policy inside your private cloud architecture, for an easy example, PCI?

So we separate everything out in vSphere: we have completely separate tenants and separate networking, so it's completely isolated. I'm not sure we're going to put PCI there; this is still sort of an idea. We built the clusters for PCI and we built the tenants for it, but I'm not sure we're going to deploy there. We've talked to the security guys about it and they've given us the okay, but right now it's all in the dev space, and we just fake the PCI stuff there: we create the same interfaces for it, so you have to authenticate through a different tenant and so on if you want to deploy things into the PCI area, but it all lands on the same clusters in vCenter, at least in dev. We've modeled how it would work in the prod environment, but we haven't actually built it out yet, and we're not really sure we will.

I think we're just about out of time, so lightning round real quick: any other answers? No? Okay. All right, I think we're about out of time. Thanks, everybody, for coming. We've got a couple of case studies, so come see Trevor if you want to take one of those.