Good morning, almost afternoon. So we were going to talk to you all about OpenStack with Intel IT, but Guillaume from Digital Film Tree inspired us, so we're just going to talk about sweaters as a service. It's a new technology. I've got some of my cohorts here, Greg Bunts and Shridhar. I hope you're OK with that. It wasn't what we submitted as our abstract when we got accepted, but it's good to be on the leading edge. So we've been OpenStack users for quite some time, and Intel is obviously a major OpenStack contributor. What we're going to do is basically walk you through the life of three years. Greg and Shridhar are also going to do a talk, I think, tomorrow at 5:20 that goes into even more depth on the specifics. Of course, those slides will be available, and the video too. But are you going to serve beer at 5? OK, so we'll try to get some beer in their session; we'll see how we do. OK, let's jump right in. We're going to try to leave five minutes for Q&A, but we're also easy to find. You can find me, Das, at Intel, and I can connect you with Greg or Shridhar, and we'll be available for chatting. Just a real quick view: about 100,000 Intel employees, and of course we're across the world. If you don't know, we predominantly build silicon. That's the brains of most of the data centers, and there's still a pretty substantial client base out there. We have lots of data centers across the world, and we've been focusing heavily on driving that data center capacity to be highly optimized. Cloud is one of those technologies. All three of us have experience in our design grid environment as well as in traditional IT, and the talk today is going to be mostly on traditional IT. So just to give you some perspective: internally we look at this thing called domes, a term we coined inside. We basically segment our different types of environments based on what we're doing.
So very similar to how a business would look across their vertical segments. Design is the majority of our environment. Greg and I both grew up in the design space, supporting our grid environments, about 60,000 servers that basically handle silicon chip simulations. We have manufacturing; of course, we put a large data center basically next to every factory that's building silicon or doing assembly and test. And then what we're going to focus on today is office, enterprise, and services. OpenStack will actually affect all of these areas, but we're moving at a different pace for the different use cases, and I'll describe that in a bit. And just real quickly: we called Grid the uncle of Cloud, because Grid has most of the attributes you would think of from a cloud: self-service, resource pooling, multi-tenancy, elasticity. We focused heavily on the design computing grid a number of years ago, when we made the switch from Unix to Linux, and we drove massive optimization in the environment that saved us quite a bit of cash. We took a lot of those concepts and applied them to the private cloud environment. And then this cool thing called OpenStack came out; to us, Linux was for the host and OpenStack is for the data center. It brought an ability for us to continue to optimize, and that's what we're going to focus on. So we do run our IT shop as a business. You guys want to talk about our big three goals here? Basically, we're looking at velocity vectors: from 120 days to land a physical server, down to under 20 through the virtualization phase, and now under an hour to land individual instances. With platform as a service on top, we'd be driving toward idea-to-production in basically one day. Those are the kinds of velocity improvements for our end users. Always-on compute, right? So our customers never feel or see any downtime at all.
And then sustain our operations while dealing with a flat-to-down budget, increasing the ratio of servers to the engineers that support them. Yeah, we've used these three vectors for years now, and they work well at our CIO staff level and also for our technical teams to say, hey, how do I keep pushing the envelope on what we're doing? Just real quick: most of you probably have some ugly server-landing processes. This is way back in time, wow, a long time ago, Q4 2010, but this is what a day in the life of acquiring a server was, and I'm surprised to find that some enterprise IT shops still have these crazy processes. There are lots of people in all these steps. So in 2009 it was 90 days for physical and, as Greg said, 24 days for virtual. In 2010 we did our first private cloud, and a distinction we have from a lot of enterprise IT shops is that we believe in self-service for everyone: for dev/test, non-prod, and production. Fundamentally, give the app guys the ability to make things happen. So it was three hours for virtual, but two weeks for networks. And then in 2013, where we're running now, it's under 30 minutes for compute in most scenarios, with storage and network now on demand. And as Greg said, where we're going next is idea-to-production in less than a day, because everything is about enabling our app developers to try things really quickly and fail fast; they have to have those capabilities. So we not only drive OpenStack usage internally, but also at the platform-as-a-service level with Cloud Foundry today. A little bit on cost. We believe in this concept of own the base, rent the spike. We've done lots of math, being a fairly large enterprise shop willing to take a lot of cutting-edge risks in order to push down the cost envelope while increasing our feature parity with the public cloud, and we found that this model works really, really well for us.
And it doesn't mean we buy more capacity than we need, because we use the ability to rent the spike. Just quickly on our strategic direction: beyond the cost vector, there's agility. The reason we're doing this is that we want apps anywhere, anytime, on any device for our internal end users. They should be able to access their stuff, which means the software developers have to be able to build the solutions really, really quickly. On the cost vector, it's driving the large-scale automated hybrid cloud infrastructure. And we're also very interested in helping our peers in other enterprise IT shops by showing them, hey, this is what we did, and we use that dialogue to help us figure out the tough challenges. What we love about OpenStack is that it's created a community, so we can now talk with our peers openly, versus talking around our proprietary code, or not being able to talk about an NDA discussion we had with a vendor. OK, so you're the next one, right? History and path to the open cloud. I talked about the design grid; since the 1990s, 60,000 servers, and we call it Cloud's uncle. We did an enterprise private cloud in 2010; that's running about 13,000 VMs across 10 data centers, really just compute infrastructure as a service, but everything is virtualized in this environment and almost everything is on demand. And then we decided to do an open source private cloud, and I'll walk through these in a little more detail. So our goal is a federated, interoperable, and open approach. Federated, since we own the base and rent the spike, and we also use software-as-a-service solutions, so we need identities to be federated across all these different environments; from an end user's perspective, whether you're a software developer or an app user, it should be really easy to connect to these different services seamlessly. We strongly believe in interoperability. The standards bodies haven't caught up to the massive pace we're making in open source today.
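The "own the base, rent the spike" economics can be sketched in a few lines of code. This is an illustrative model only: the capacities, hourly rates, and demand curve below are invented for the example, not Intel's actual numbers.

```python
# Illustrative "own the base, rent the spike" cost model.
# All rates and capacities are hypothetical.

def hybrid_cost(demand, owned_capacity, owned_hourly, rented_hourly):
    """Total cost when we own `owned_capacity` instances outright
    and rent anything above that from a public cloud, per hour."""
    total = 0.0
    for hour_demand in demand:
        total += owned_capacity * owned_hourly        # the base runs regardless
        spike = max(0, hour_demand - owned_capacity)  # overflow goes public
        total += spike * rented_hourly
    return total

# A day with a steady base of 80 instances and a 3-hour spike to 200.
demand = [80] * 21 + [200] * 3

own_all  = hybrid_cost(demand, 200, 0.04, 0.10)  # size owned capacity for the peak
rent_all = hybrid_cost(demand, 0,   0.04, 0.10)  # everything in the public cloud
hybrid   = hybrid_cost(demand, 80,  0.04, 0.10)  # own the base, rent the spike
```

With these (made-up) rates, the hybrid strategy beats both sizing for the peak and renting everything, which is the intuition behind the slide.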
So the natural choice appears to be interoperability through open source. And again, open. It doesn't have to be open source necessarily; open standards are good for us too. But we do want open APIs that allow us to run public and private. Hoping you can see some of this wording back there. This is a journey map that we've used internally for our CIO staff, and the Open Data Center Alliance grabbed it and is using it to describe the path to this goal, the federated, interoperable, and open cloud. You don't have to use the years; you can just say versions, since people move at different paces. But I'm going to walk you through how we went down this path. For our consumers, we have IT ops, where we drove a lot of automation; the app owner and the app developer, the key distinction being that an app owner buys an app and installs it on a server, while an app developer writes code; and then our goal of enabling the end user to access any data, any apps, any time, anywhere, on any device. So all of our work has been focused on driving those value vectors we talked about and making sure our consumers, which are Intel employees, can get what they need. Pre-OpenStack, we ran this private Cloud Gen 1. All three of us have lots of scars and fun times from something as simple as providing self-service to app teams; we had disagreements and arguments with some of our folks about just turning on self-service. Pretty much daily. Yeah, it was a daily thing. I remember it was like 90 days, and we said, hey, we want it under three hours. Everybody said, you're insane, do it in three days. But we really just had to push through the concept and get it out there that self-service should be the norm. It is the norm in the world. If you don't do it as an IT shop, why wouldn't the app developers ignore you and go do something completely different? So we connected all of our available infrastructure.
We didn't just do greenfield. We built a proprietary software stack in-house that allowed us to expose basically our compute environment, virtual machines, on demand. We saved quite a bit in this model because everybody chose to use it versus buying their own servers. A lot of this story has been heard before and others are doing the same, but there were massive savings from resource pooling. What we found as we started working with these cloud-aware app guys, though, is that they needed more. Acquiring a virtual machine is definitely not enough. And when we really thought hard about how to do full private infrastructure as a service, meaning compute, storage, and network on demand, as well as higher-level services like load balancers and firewalls, there was just way too much technical debt to create the full solution ourselves. So we went out and investigated all the open and proprietary solutions in 2011. It feels like that was ages ago. We did a lot of analysis, and at that point in time, someone can correct me if I'm wrong, when we made this call I think there were something like 80 developers on OpenStack. But what we found in it were the signs of early Linux and a true community approach. We were very early Linux adopters too and helped drive a lot of that mainstream. So we made a decision that we were going to bet on OpenStack. We had a very small team, probably 10 people. We convinced our CIO in a closed room that this was the way we had to go. We built a small DevOps team to basically just go fast, and by June 2012 we were online for production cloud-aware apps. By the time we actually got into the environment it was Diablo, with the Essex Keystone; we had started with Cactus in the lab. But we also needed a public cloud solution; we didn't have enough capacity in this space to handle all the demand that was coming. And the legacy apps need love too. You see the pets-and-cattle concept a lot at OpenStack summits.
But you could think of it, the legacy apps are built not for design for failure. And they're going to be around for a while. Even with the projections at, say, by 2018, something like 70% of the environment will be these new apps, we still have quite a bit of an environment that's not. So what we did is where we're at right now, we're going to dig into is we have live migration enabled. We're moving forward with a single control plane. And the guys will talk about this. And we have two POCs going. Actually, we just completed one, where we're doing hybrid OpenStack in a public private environment. So from any user perspective, they can go to Horizon or the API. And we can basically treat a public cloud environment as a region. So we're very close right now to our five-year goal. Want to take over on the choices? In terms of our private cloud, I'll share some of our design choices that we've made, as well as some of the architectural choices that we have made, and share some of the architectures that we have. In terms of when we started off building our private cloud environment, one of the main goals for us is to be able to abstract the underlying infrastructure from as well as the cloud providers from the user. And that's why OpenStack was very useful for us in terms of achieving our end to abstract the underlying infrastructure. And another key aspect is you have multiple cloud instances at different regions on the public side, as well as the private side, to have a common identity store and be able to federate across instances that you're provisioning across those. And another conscious choice that we made was Open Source first, primarily because we wanted to minimize any proprietary API lock-in and to give us the choice in terms of having different solutions in the back end but have a common control plane that abstracts the user automation and basically orchestration automation thereof. 
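The "public cloud as a region" idea above can be reduced to a small placement sketch: the same control plane sees private and public regions, and instances overflow to the public region when the private one is full. The region names and slot counts below are made up for illustration; real placement would go through the OpenStack scheduler, not a helper like this.

```python
# Hypothetical region catalog: one private availability zone plus a
# public cloud exposed as just another region behind the same API.
REGIONS = {
    "private-az1": {"public": False, "free_slots": 2},
    "public-east": {"public": True,  "free_slots": 500},
}

def pick_region(prefer_private=True):
    """Return the region an instance should land in, preferring the
    private region (own the base) and spilling over to public (rent
    the spike) when private capacity runs out."""
    order = sorted(REGIONS, key=lambda r: REGIONS[r]["public"]) if prefer_private else list(REGIONS)
    for name in order:
        if REGIONS[name]["free_slots"] > 0:
            REGIONS[name]["free_slots"] -= 1
            return name
    raise RuntimeError("no capacity in any region")

# Four launches: the first two fill the private region, the rest burst out.
placements = [pick_region() for _ in range(4)]
```

The point is that the user never chooses a provider, only a region; the burst decision is a placement detail behind the one API.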
And as Das indicated, our goal is to support both cloud-aware apps and legacy apps, because legacy apps are still around in our environment and they'll be around for a while. Hey, just one quick key point on that. When we abstract the cloud providers, we also need to be able to expose key hardware features. I don't know how much you know about what's going on right now; even Amazon is now making it really easy for you to get at an instruction set. Most software developers, and sorry if I knock anybody here, aren't too concerned about performance when they first start building complex solutions, or about the gains you can get from instruction sets. But when you start really optimizing your software for speed and cost, you have to start using more sophisticated capabilities under the hood. For instance, Amazon today exposes key instruction sets for encryption as well as for massive graphics analysis. These are the types of things we have to be able to expose while still giving software developers that one API that lets them work across all environments. In terms of the high-level technical strategy, where we started off in 2010 on our OpenStack journey, in fact a private cloud journey, our first goal was to leverage as much of our own infrastructure as possible, because we had a significant amount of internal infrastructure, pretty much cloud-provider sized, if you will. So it was a conscious choice to use our own capacity first before going and paying an external provider. Where we have been using public cloud is for targeted purposes, for non-differentiated apps, primarily in software as a service and, over the last couple of years, more so in infrastructure as a service.
Where we want to be, where we are evolving toward, is essentially a smart orchestration layer where we can make policy-based decisions, whether cost-related, proximity to the end user, capacity-related, or based on certain security capabilities, and use them to choose where workloads run, public or private cloud, completely transparent to the end user. And while there are solutions on the market today that cover that orchestration layer, we strongly believe this is the next area that will be massively open-sourced; there are some open source solutions today, but we don't feel they have full community backing yet. So part of our goal here is to help enable that. First, it's OpenStack as the API layer, but some of the more sophisticated telemetry and scheduling required for provisioning across multiple environments isn't in OpenStack yet. There are people solving that across multiple clouds, but we think this is an area ripe for massive open source sophistication soon. Absolutely. In terms of why Intel IT picked OpenStack as our infrastructure-as-a-service control plane, we've touched on a lot of these items, but quickly: the APIs provided by OpenStack allow us to expose the underlying hardware infrastructure in a self-service manner, not just the compute aspects, which we had done some custom automation on, but the storage and network aspects as well. By exposing the infrastructure, it allows us to build higher levels of automation on top, which increases our velocity in delivering solutions to our customers. And by leveraging open source solutions, it increases our efficiency, primarily because we're able to minimize our internal technical debt.
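The policy-based placement decision described above, weighing cost, proximity, and security capabilities per workload, can be sketched as a simple scoring function. The clouds, tags, and weights here are invented for illustration; a real orchestration layer would pull this data from telemetry rather than hard-coded dictionaries.

```python
# Hypothetical cloud inventory: relative cost units, latency to the
# end user, and security capability tags each environment carries.
CLOUDS = [
    {"name": "private-dc", "cost": 3, "latency_ms": 5,  "tags": {"pii-approved"}},
    {"name": "public-a",   "cost": 1, "latency_ms": 40, "tags": set()},
]

def place(workload):
    """Pick the cheapest-scoring cloud that satisfies the workload's
    security policy. Lower score is better."""
    candidates = [c for c in CLOUDS if workload["required_tags"] <= c["tags"]]
    if not candidates:
        raise RuntimeError("no cloud satisfies the security policy")
    return min(candidates,
               key=lambda c: workload["cost_weight"] * c["cost"]
                           + workload["latency_weight"] * c["latency_ms"])["name"]

# A cost-sensitive batch job vs. an app that must stay on approved infrastructure.
batch_job = {"required_tags": set(),            "cost_weight": 10, "latency_weight": 0.1}
hr_app    = {"required_tags": {"pii-approved"}, "cost_weight": 1,  "latency_weight": 1}
```

The batch job lands on the cheap public cloud; the HR app is pinned to the private data center by its security tag, exactly the kind of transparent, policy-driven routing the speakers want the orchestration layer to do.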
If we had to build this on our own, it would take much longer than leveraging the community and participating in it to advance the platform. In terms of how we're shifting our strategy going into 2014: there is a significant amount of cloud and virtualization environment that we had based on proprietary hypervisors and our Gen 1 cloud solutions, as Das indicated. As we move to OpenStack and open-source-first cloud environments, what we're doing is using OpenStack as the control plane but provisioning to both environments, Gen 1 and Gen 2, so that we minimize the amount of migration we have to do. At the same time, it gives us choice in which providers we use for the various infrastructure pieces in the back end. Intel in general makes another conscious choice, which is to dual-source or multi-source our infrastructure providers, so this enables that strategy as well: a common control plane that can provision to multiple infrastructures. Yeah, we can't underestimate the value of this. This is a huge deal. If you took away one thing from us, it's the fact that OpenStack can actually be the control plane for all the investment you have today, with the ability to bring in new investment or try out new technologies, all while giving your end users the ability to write once and run almost anywhere. And the fact that we're starting to stitch this across the private and public environments with OpenStack as the control plane is massive. This has really never happened in the industry before, and what has happened is that all the current major players have basically decided to join forces with OpenStack and build plugins.
So it opens up a massive opportunity for the ecosystem to be very, very disruptive, and it forces everybody that maybe wasn't moving fast enough before to move very quickly to keep up with all these startups that are showing up; Inktank, who we use today in our environment, was bought by Red Hat for $175 million last week. There's an opportunity to cause massive disruption. So really, the one thing I would say is the single control plane is massive. And you guys are going to deep-dive on this tomorrow too? Yeah, absolutely, we go much deeper on this in our talk tomorrow at 5 p.m., OK? In terms of things that still need work, the key areas we see to close for the enterprise from an OpenStack community perspective: some of these we put up as challenges in 2013, and some we're actually leveraging now, for example shared block storage for boot volumes. We also have live migration in place, because that is one of the elements that is very key when you're supporting both legacy apps and cloud-aware apps. Live migration is not as necessary for the cloud-aware apps, but for the legacy apps you definitely need it because of the nature of those apps. Hey, just one thing on that. Actually, I want to ask: how many of you work in enterprise IT, or IT at all? Almost everybody. OK, cool. So this pets-versus-cattle thing is interesting, and live migration too. At Amazon today, if they want to take a host down, all the instances go down, so you basically have to build everything designed for failure. But Google did something pretty interesting not too long ago. They didn't make a lot of press about it, but they basically turned on live migration with KVM in Google Compute Engine. So there's a recognition that you don't have to go completely one direction, though we highly suggest everybody build scale-out apps.
The technology has the capability to solve things like live migration; you don't have to pretend it's impossible. It's totally feasible, both with our existing infrastructure and with what we're doing today with KVM and Ceph with shared block storage. It doesn't have to be one way or the other, right? Absolutely, absolutely. Another thing that's really key for us, especially when we're trying to support legacy apps, our traditional enterprise apps if you will, on top of cloud infrastructure, is the restart of instances when a host fails, primarily because those apps are not designed for failure the way cloud-aware apps are. Disaster recovery, and being able to abstract the infrastructure and work with the APIs, is another key aspect. Another thing, as we move along our journey toward the year-four and year-five target Das indicated, is the federated hybrid cloud environment. As we move toward that, having identity abstraction, whether through federation or otherwise, to work across the different environments and keep the user experience consistent is absolutely key for us, and we see that as the next level of evolution from an OpenStack community perspective. Another thing, which was already called out in the keynote today, is rolling upgrades being very key to making an OpenStack-based environment enterprise-sustainable, if you will. So rolling upgrades, making it secure, and adding more capabilities for us to audit the infrastructure and detect issues very quickly are absolutely key as we move forward on our OpenStack journey. These are some of the things that Das shared at the Hong Kong Summit as areas of challenge that need to be addressed from a community perspective.
We have hit some check marks on things we've already implemented in our environment, like boot from volume and live migration, but there's still more work to be done on restart on failure and some of the other things we just touched on. Hey, a quick check for all the IT people, or actually anybody: who would want restart on failure? A few of you. Yeah, so this is a thorny topic, and we're trying to get the PTLs to help us tackle it; hopefully we'll get some more time this week, but it's a tricky problem, especially because it requires OpenStack and all the components to be aware of different things that are going on. The way it's handled in some environments today is with massive intelligence about what the host is doing, what the hypervisor is doing, and what the guests are doing. We need that same type of intelligence to automate the concept of restart on failure. We're going to talk about this later, but there's going to be an enterprise birds-of-a-feather, and we're trying to get everybody involved in enterprise to meet up tomorrow. We have our list, I bet you have your list, and we want to help, as users, communicate back to the PTLs: hey, this is a really tough thing. I would like restart on failure to be solved by Juno. I don't know if that's possible, but it's definitely a big ask, and you have some more asks too, right? In terms of storage, we're definitely looking to leverage more and more of the open distributed block storage solutions; as Das indicated, we leverage Ceph in our environment, but there's more to be done in that space to harden the block storage solutions so they're ready for enterprise scale. Some of the capabilities we're seeing coming have evolved quite a bit, but there's more room to scale to enterprise needs from a block storage perspective.
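The restart-on-failure logic the speakers want the control plane to automate can be sketched as a planner: given which hosts are down and where instances live, decide where each lost instance should be restarted. This is a deliberately naive sketch (least-loaded survivor wins); all host and instance names are hypothetical, and the real problem also involves the host/hypervisor/guest awareness the speakers describe.

```python
def restart_plan(instances, down_hosts, up_hosts):
    """instances: {instance_id: host}. Return {instance_id: new_host}
    for every instance whose host is down, packing onto the
    least-loaded surviving host."""
    load = {h: 0 for h in up_hosts}
    for host in instances.values():
        if host in load:
            load[host] += 1          # count survivors' existing guests
    plan = {}
    for inst, host in sorted(instances.items()):
        if host in down_hosts:       # guest went down with its host
            target = min(load, key=load.get)
            plan[inst] = target
            load[target] += 1
    return plan

# host-a dies carrying vm-1 and vm-2; vm-3 is unaffected on host-b.
plan = restart_plan(
    {"vm-1": "host-a", "vm-2": "host-a", "vm-3": "host-b"},
    down_hosts={"host-a"},
    up_hosts=["host-b", "host-c"],
)
```

The hard part, which this sketch skips, is the detection: knowing reliably that host-a is actually dead, not just unreachable, before restarting its guests elsewhere.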
Another thing, from a networking perspective: we've seen the core capabilities become available via self-service, like the routing and switching elements, security groups, and some of the basic access control. But going forward, we want the load balancer service to have much richer capabilities than it has today, and the same for the firewall service, so that all pieces of the infrastructure are abstracted. Today, for the load balancer service, we actually leverage the proprietary APIs our hardware load balancer vendor provides and use those for self-service, but going forward what we'd want to see from the community is to expand those capabilities so we don't have to use those proprietary APIs. In terms of 2014-and-beyond focus areas, some of these we've touched on. Rolling upgrades: absolutely critical so we can upgrade our cloud environment in a sustainable manner, and very key for enterprise IT needs. Being able to connect to all existing infrastructure from a single control plane: there are a lot of elements we're already connecting via the single control plane, but as we indicated, load balancers and firewalls are a couple of examples of infrastructure elements we'd also want comprehended within the single control plane strategy. Restart of VMs, which we've talked about quite a bit. Hybrid cloud, to create that abstraction layer between private and public cloud instances, and expanding OpenStack to take on more traditional work like backup and recovery. Bare-metal provisioning is another area: from a virtualization perspective, the single control plane and the abstraction of the infrastructure work quite well, but going forward we'd want to provision physical servers the same way, so that the whole hosting environment, the whole data center environment, can be managed from a single control plane.
Greg, do you want to... Yeah, so we talk a ton about technology, but we actually probably should have put this slide first. We promised we'd talk about some of the tough challenges we had, and I'd say people was definitely one of them; it wasn't all technology, right? Right. Yeah, it turns out that people and culture will be some of the biggest barriers. We tend to come to conferences like this and talk about CI/CD or DevOps, and those words have some industry buzz to them, but we need to recognize that there are additional dimensions on the people side that need to be acted upon to effect these types of changes, right? So we'd be talking about workforce transformation: driving from a workforce with proprietary skills, people who can go very deep on particular tools but aren't very broad and pluggable in the sense that we can use them in a broad-spectrum DevOps type of approach. You can see on this foil that we've driven from some very centric roles on the hardware side and the tool side to a software-centric workforce: software engineering, large-scale systems, and management are the types of skills we're after. In addition to the workforce transformation, and maybe capturing a few unicorns, there are the support-model considerations. So driving from an established L1, L2, L3 support model to DevOps where applicable, right? There's a spectrum where it makes sense, but as we saw in our grid computing environment, once you're able to get to that end of the spectrum, you don't need to go into your data center and deal with every single host that's down; you wait until 10% of your hosts are down, and then you deal with them in batch. We'll talk more about these tomorrow, but there are additional factors that need to be taken into account. So drive from proprietary tools to scripting fundamentals, and put your developers on the front line, right?
So have them taking tickets; give them the opportunity to deal with problems on a longer-term basis and do actual problem management, instead of being on call and barraged by incidents. The key technologies, broadly, are OpenStack, Linux, and Python. You end up with some pretty pluggable people; around Intel we call them T-shaped resources: pretty broad across the top, typically able to go deep somewhere, and pluggable overall. So drive small teams of experts geared around feature teams, drive agile software development life cycles everywhere, and automate everything; get away from the pages of documentation and knowledge articles. This is, again, that front-line thing: put your dev people in the fire to automate the environment. And we still do like ITIL, right? Yeah. So we follow all the IT practices, but we basically had to go in and show that a lot of it can be automated. When you do your problem management, rather than just writing a knowledge-base article, just write the fix, right? With a script, and share that knowledge with your peers through the script. Less documentation that tells you how to point and click, and more documentation in code. Change or become irrelevant. Yeah, we were thinking of a lot of ways to show this as a picture, but that's probably good enough. So there's a slight disagreement, I think, in the industry about whether OpenStack needs that really easy, easy button, and it probably does, so that anybody can do it; sure, that needs to happen. But we believe it's really important that your IT workforce is skilled and is changing. It's very easy to become irrelevant now with the capabilities that are coming out. So you have to go deep, you have to be that T-shaped model, and really help your team become software developers, or at least be able to read the code and give feedback. Change or become irrelevant.
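The "write the fix, not the knowledge article" point above, in miniature: instead of a runbook page saying "if the nightly export hangs, check whether the lock file is stale and delete it," capture the fix as a small, reviewable script. The path and staleness threshold here are hypothetical examples.

```python
# A knowledge-base article turned into code: clear a stale lock file
# so the nightly job can rerun. Path and threshold are illustrative.
import os
import time

DEFAULT_LOCK = "/tmp/nightly-export.lock"
STALE_AFTER = 2 * 3600  # seconds; anything older is considered abandoned

def clear_stale_lock(path=DEFAULT_LOCK, stale_after=STALE_AFTER, now=None):
    """Remove the lock file if it is older than `stale_after` seconds.
    Returns True only when a stale lock was actually cleared."""
    now = time.time() if now is None else now
    if not os.path.exists(path):
        return False                 # nothing to clean up
    if now - os.path.getmtime(path) < stale_after:
        return False                 # a real run is probably still holding it
    os.remove(path)
    return True
```

The script is the documentation: it encodes the threshold, the safety check, and the action, and it can be reviewed, versioned, and wired into monitoring instead of being re-read by a human at 3 a.m.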
So just to kind of wrap up: our Intel IT open cloud, with the three vectors I showed earlier, velocity, agility, and automation, helps us drive down cost while scaling and improving efficiency; a lot of that is the resource pooling concepts. We've been pretty successful both with our Gen 1 environments and with what we're doing with OpenStack. We did say we'd talk about some ugly stuff as well, so I don't know if we want to give any examples of really terrible situations that have happened over the last year with OpenStack. Can you guys think of anything really ugly? It's tough, because OpenStack mostly works. For block storage, I mean, it's not really OpenStack itself, but when we had failures of nodes, it basically caused memory and CPU to spike, which caused an impact from that perspective. That's what we meant by hardening the block storage aspect. And that's actually a good example. For us it was a learning experience: we basically had some failures, with not enough distribution of nodes, and nodes too large, for handling that type of failure zone. But what we learned from it was, hey, we can slow down the recovery of the technology, this one was Ceph, and keep it up. We're still dealing with some performance issues, but nothing catastrophic. So we were hoping to tell you a lot of ugly stuff, but OpenStack actually works pretty well, and probably most of our struggle was just getting the culture change out there and getting people to accept that this is code and you can work with it. So we want all IT shops involved. The analysts and the press, I think, are starting to pick up that OpenStack is real. We kind of think it's a done deal: OpenStack is the cloud operating system for the data center, just as Linux was for hosts. So you can join us on Wednesday; we're going to do a birds-of-a-feather as the kickoff.
There's a special focus from the OpenStack Foundation to help drive a very clear direction to make sure enterprises can utilize OpenStack. Shridhar and Greg will be talking about Intel IT in more depth, a lot more on the control plane and a lot more on our CI/CD processes. And if you're not writing code, a way to help us is with blueprints. What you'll find if you go talk to some of the dev guys is that they're very, very nerdy, and they really want to deep-dive into the quality of the code or the concepts of the code, but we need people in IT who think solution scale and can help us make blueprints that make a lot of sense, and help the developers, because not all the developers actually have massive experience running a large-scale environment. So we need those of you that do to get involved and help us create these blueprints. So just to wrap up: we're federated, interoperable, and open, with strong success; we're going forward with a single control plane, which is a key thing to take away; and there are lots of changes required to run at scale: culture, skills, processes, and technology. So with that, we have a few moments for questions. Four minutes. Oh, I've got to do one plug. Who wants a MacBook Pro? A new one? Over in our booth, it's a little teeny booth, but we have two clusters running. The fastest person to get OpenStack up and running and launching VMs will get a MacBook Pro. There are also prizes for some other competitors, so if you want to try it out, see how fast you can do it. We're hoping there are some wizards out there, and we're going to run these at every summit, not just Intel, lots of people in the community, to basically say, hey, here's a really tough thing. Everybody wants a really easy way to install OpenStack, so we just want to basically prove it. So if you want to show your chops, go over there and you can win a high-end MacBook Pro. Question, Mr. Helion. Nice shirt. Feel free to go for the short answer on this, especially given the timing. Sure.
For doing spikes where you're sending them off to other vendors that are not necessarily OpenStack-based, I'm assuming it's not just simply pressing a button. I'm assuming the customer, who's an internal customer for you, is going to need to recode a little bit, and that there's going to be some work, but that it's still a compute instance, or can be a compute instance. It's probably not auto-migrating or anything; it's more, oh, let's try and move these, let's shut these down, start them up over there, start resetting them. I mean, basically, how much pain is there? Yeah, so basically we chose not to put any magic in there. We actually had an orchestration layer before that was internal debt; we've shut that down, because we believe the model is OpenStack-interoperable. So today, if you want to run something that's not OpenStack, you basically have to run in both locations and be able to scale out. You keep a small environment; there's no magic with cloud workload movement. Everybody thinks that's a nice buzzword, but we actually have to have active instances, and then we can scale out in the other one if you're somebody that needs to burst for the spike. I guess when you moved workloads to OpenStack, did you have some kind of criteria for what kind of workloads you wanted to migrate to OpenStack? Yeah, so let's be really clear there: we're not migrating the workloads. What we're doing is taking OpenStack and importing all the VM metadata, so OpenStack becomes the control plane. Everything that's already running today, and this is one of the key points we want to get across, we're not moving it; we're going to absorb it into OpenStack so OpenStack can control it. So you can get all the basic functionality out of OpenStack with everything that's already running today.
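The "absorb, don't migrate" step described here is essentially a metadata import: running VMs stay where they are, and only their descriptions are registered with the new control plane. As a rough illustration, with the `HypervisorVM` record and all field names being hypothetical rather than Intel's real schema, it might look like:

```python
"""Hypothetical sketch of absorbing existing VMs into an OpenStack-style
control plane: nothing is moved or rebooted; each hypervisor inventory
record is translated into instance metadata the control plane can track.
Record and field names are illustrative assumptions."""
from dataclasses import dataclass

@dataclass
class HypervisorVM:
    """One record exported from the legacy virtualization layer."""
    name: str
    uuid: str
    host: str
    vcpus: int
    ram_mb: int

def to_instance_metadata(vm: HypervisorVM) -> dict:
    """Map one legacy VM record to a Nova-like instance description.

    The running VM is untouched; only its metadata is registered, so the
    control plane can operate on workloads that predate the cloud."""
    return {
        "uuid": vm.uuid,
        "display_name": vm.name,
        "hypervisor_hostname": vm.host,
        "flavor": {"vcpus": vm.vcpus, "ram": vm.ram_mb},
        "adopted": True,  # flag: imported, not launched through the cloud API
    }

def import_inventory(vms):
    """Absorb a whole inventory, keyed by UUID, rejecting duplicates."""
    inventory = {}
    for vm in vms:
        if vm.uuid in inventory:
            raise ValueError(f"duplicate VM uuid: {vm.uuid}")
        inventory[vm.uuid] = to_instance_metadata(vm)
    return inventory
```

Once adopted, those instances can be listed, scheduled around, and eventually managed through the same API as instances born in the cloud, which is the single-control-plane point the speakers keep returning to.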
Did you need to, so you talked about live migrate, did you need to build some custom tooling on top of it to get higher SLAs on some of the workloads, such as evacuate or something, or did it just work for you? So, evacuate and restart-on-failure don't exist very well in OpenStack today. This is why we firmly believe in, we have live migration working in our existing giant private cloud environment, so again, we can keep that as it is, with restart on failure; we're not throwing it away, and we control it with OpenStack. And what we want in the pure open-source environment is the ability to do restart on failure and live migration. We do live migration today fairly well; it's not as fast as we would like, and it's not always perfect, but it does work. So you can do a host maintenance mode with Nova, and it'll schedule out across the cluster if you have shared block storage behind it. Cool, I think we're out of, oh wait, one last question, and then we'll wrap up. So, he actually kind of touched on that, and you kind of did already; I was wondering about the live migration piece. You mentioned it not necessarily working well for some workloads. The other thing I wanted to mention is, I totally feel your pain on the set defaults. All right, hey, thanks everybody, and have a good rest of the summit.
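The host-maintenance flow mentioned in that answer, disable a host and live-migrate its instances across the cluster, with shared block storage as the precondition, can be sketched as a planning step. This is a deliberately naive toy planner under stated assumptions, not Nova's actual filter scheduler:

```python
"""Hypothetical sketch of a Nova-style host maintenance drain: given a
host going into maintenance, plan live migrations of its instances onto
the least-loaded remaining hosts. Assumes shared block storage behind
the cluster, as the speakers note; placement here is a toy heuristic."""

def plan_maintenance(host_down, placements, capacity, shared_storage=True):
    """Return {instance: target_host} moving everything off host_down.

    placements: {instance: current_host}
    capacity:   {host: free_instance_slots}
    """
    if not shared_storage:
        # Without shared storage, block-migrating disks is a different,
        # slower operation; this sketch only models the shared case.
        raise RuntimeError("live migration here assumes shared block storage")
    free = {h: c for h, c in capacity.items() if h != host_down}
    plan = {}
    for inst, host in sorted(placements.items()):
        if host != host_down:
            continue
        # Naive placement: pick the host with the most free slots.
        target = max(free, key=free.get)
        if free[target] <= 0:
            raise RuntimeError("not enough spare capacity to drain host")
        plan[inst] = target
        free[target] -= 1
    return plan
```

In a real deployment the equivalent would be driven through Nova itself (disable the compute service on the host, then live-migrate its instances), with the scheduler making the placement decisions this toy loop approximates.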