So, what I wanted to do today: this talk was originally positioned in the product track, and our product manager probably wanted a product walkthrough. But I don't know about you, that's kind of boring to me. If you can find it on our website, why should I present it to you here? It doesn't really make sense. So I've morphed this into a talk about the design philosophies behind our product, bigger-picture items that I hope will still be of interest to you, and I'm hoping you'll ask me some good questions at the end. There are a few pieces I'll go through quickly because they're not super interesting. The important thing is to talk about why Cloudscaling, out of so many OpenStack companies, cares about making OpenStack look a lot like Amazon Web Services.

Real quick, for background on me: I've been on the OpenStack Foundation Board of Directors since the Foundation was formed. I'm used to building fairly large projects, the largest of which were three 100,000-square-foot data centers in Hong Kong, Taipei, and Seoul in 2001, which were fairly cutting edge at the time. Cloudscaling was part of the original OpenStack launch in summer of 2010, and we racked up a number of firsts, including the first public compute cloud, the first public storage cloud outside of Rackspace, and the first public storage cloud in Korea. Sometimes people give me a hard time because we don't contribute as much code back, although we contribute more code back than any of the startups our size that we compete with. Part of the reason is that we are very focused on building and deploying clouds, and you'll see more of that in a second. One thing I think is really important for people who are new to OpenStack: I was working on and building clouds before OpenStack existed, and OpenStack became a vehicle for what we believed a cloud needed to look like, hence the focus on Amazon Web Services.

If I'm really honest, our first wave of customers didn't go so well. I'm not going to get into the history on that. Over the past year we've had a second wave of customers on the 2.0 product, and it's gone fairly well, as you can see. We've been very focused on production deployments, and I don't really want to do a comparison against other people. I don't know how other people are doing, but when I say that we're actually deploying, managing, and supporting large production clouds, you can see we're doing it at a pretty reasonable scale for a small 30-person startup: three, three and a half support people supporting 600 servers with 8,000 cores and roughly eight and a half petabytes, across more than 10 production clouds. So I think it's going pretty well, and two of those customers are Fortune 15 companies. As far as I know, we're the only startup this size that has been able to capture and hold on to a Fortune 15 company, for better or worse.

So today we're going to talk about why Amazon, why Amazon-like, why what I call an elastic cloud. I have to talk about hybrid cloud interoperability to really bring that home, and then I'll walk through the pieces that make OpenStack more flavored like Amazon Web Services.
Pretty much anybody can do it. It's not just us, and I don't necessarily want to hold any of the secrets close to my chest. I just want to highlight what it takes to get there. People have probably noticed this, but the majority of the big public clouds are not running OpenStack and probably never will be. And yes, I might get some pushback from some of the folks who are, but when's the last time you saw a company growing as fast as Amazon suddenly plateau and then lose all of its market share? That just doesn't really happen. So we have to be realistic about the fact that Amazon is the market leader in public cloud, and that has implications for any hybrid cloud scenario that any enterprise is thinking about.

Digging in a little more, what is an elastic cloud? Well, it's something designed along the lines of Amazon Web Services or Google Compute Engine. It's got a scale-out model. It's designed for smaller failure domains. It's the cattle cloud, not the pets cloud. It's a foundation for cloud-native applications. People have probably seen the Netflix use case: they've designed a three-, four-, five-nines app that runs on a two-and-a-half-nines Amazon Web Services infrastructure, and that's because the app routes around failure. That's the lesson learned from Google. If you look at how Google runs their normal business, 10 to 15% of their capacity is down at any given time, and they still run a four-nines business, because their applications are designed to expect failures in the system, route around those failures, do their own data replication, and so on. And finally, I think it's a side effect of making these decisions: when you move responsibility for uptime into the application, the infrastructure can be less expensive. And what that means is that most elastic clouds are very inexpensive.

So, this pets-versus-cattle meme that some people have... Who's heard of that? Excellent. I stole it originally from Bill Baker, who was using it to talk about scale-out versus scale-up, and I was the first to apply it to cloud. Then it took off. The reason it really worked out for me is that I struggled and struggled to explain to IT staff and CIOs why you wanted a cloud that looked like Amazon. The problem was that every time I walked into a meeting, the folks there were all on the pet side, and they didn't understand why you would want a cattle-style cloud. We would always get into "What do you mean I don't have NIC bonding to the switches? What do you mean I only have one top-of-rack switch?" It always came back to the pure technical aspects of how you build one of these things. So I found this to be the best analogy, and it basically goes like this: we're moving from the era where we treated servers like pets to the era where we treat them like cattle. In the era where we treated them like pets, we gave them cutesy names like Bob the mail server, and when it got sick, it was all hands on deck to nurse it back to health. In the new era, we treat servers as disposable. We treat them like cattle: we number them, www001 through www100, and when one gets sick, we take it out back to the woodshed and replace it on the line.
And that's the key change in mentality that has to happen when you're thinking about an elastic cloud. Your servers are disposable; they can go away at any time. Your block storage is disposable; it can go away at any time. Your data can be lost; you've got to take care of it. It's your responsibility at the application layer. And that's the thing I think people don't get: when you take the responsibility and shift it off the infrastructure, when you take a two- or three-nines application and make it a three-, four-, five-nines application because it's taken responsibility for itself, the underlying infrastructure no longer needs to be gold-plated.

In 2010, before OpenStack existed, I spent most of the year in Seoul, Korea, where we were doing one of the largest cloud deployments of the time, for Korea Telecom. When I first walked in to start talking to Korea Telecom, they said, we need a five-nines cloud. And I said, do you have any idea what a five-nines cloud will cost? I mean, I can build you a five-nines cloud, but that is going to be really, really, really expensive. They didn't. And so we tried to walk through it, and they still didn't get it. So I said, OK, what are the applications you have that have five-nines requirements? And they said, well, our billing system. I said, OK, great. What was the uptime of your billing system last year? Three nines. The year before that? Three nines. The year before that? Three nines. So I said, you want to take a three-nines app and put it on a five-nines infrastructure? And they were like, yeah. And I said, well, it's still going to be a three-nines app. (The arithmetic behind that is sketched below.)

Joe Tucci, CEO of EMC, recently did some presentations, and he noted a very interesting shift: if you look at the applications going onto Amazon Web Services, at where the enterprise is spending money developing applications, it's all around net-new cloud-native applications. And you can see there's a 10x difference in growth between the legacy traditional apps that can't route around failure and those that can. This is the key that I think most enterprises are starting to wake up to: cloud is not about cost savings or server consolidation or virtual machines on demand or any of that kind of stuff. It's really about driving business agility into the enterprise, and the way to get that agility is via cloud-native applications.

Now, an application that routes around failures needs extra resources to do it. Say your application needs 100 servers of capacity; 10 disappear, and you spin up 10 more. You always run 110 or 120 so that you have as much as you need to meet demand, so you're a little bit over-provisioned, and you're managing all of that dynamically. Well, if the unit cost of each of those servers is very expensive, then you're going to be highly incentivized not to use additional resources. What's funny is that there's this thing called the Jevons paradox, a law we've learned over time: when a resource's per-unit cost drives toward zero and it becomes more of a commodity, like oil and gas, electricity, minutes, bytes on the wire, usage actually takes off. So part of elastic clouds being less expensive is less about cost savings. It's more that you can do something fundamentally different, right?
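To put numbers on that three-nines-on-five-nines story, here's a minimal back-of-the-envelope sketch of my own, not a slide from the talk; it assumes the app and the infrastructure fail independently, so their availabilities roughly multiply:

```python
# A three-nines app on five-nines infrastructure is still a three-nines app.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_minutes(availability):
    """Expected downtime per year at a given end-to-end availability."""
    return MINUTES_PER_YEAR * (1 - availability)

app = 0.999           # the billing system's historical three nines
gold_infra = 0.99999  # the five-nines cloud they asked for
cheap_infra = 0.999   # a far cheaper elastic cloud

print(downtime_minutes(app * gold_infra))   # ~531 min/year
print(downtime_minutes(app * cheap_infra))  # ~1051 min/year -- the app dominates
```

The app alone is already losing over 500 minutes a year, so gold-plating the infrastructure under it barely changes what the user sees.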
If you go to your developers today and you say, look, I can give you 10,000 servers for an hour for 100 bucks, can you do anything with that? It turns out that there's all kinds of stuff they can do. So we should care, because enterprises are in this tough spot that's emerging: they are starting to die out as they prove unable to adapt to change. As the internet and cloud force us to move faster and faster, the only way to remain competitive is to be one of those businesses that get faster as they grow larger, not slower. And that's something we haven't seen. Look at businesses like Amazon, Facebook, Google, Twitter, Apple, even Microsoft in the midst of making this change, and compare them to a classic General Electric or something like that. In the old style of enterprise, as the business gets larger, the communication overhead gets very high, and projects get very costly, very expensive, and take a very long time to lead to success. The new enterprise, to be competitive, will have to move much, much faster, and it will leverage cloud to get that agility, both elastic infrastructure and the cloud-native applications that run on top of it.

So OpenStack is a platform. It's a framework you can use to build a lot of things. Think of it like the Linux kernel, right? With the Linux kernel, you can run an Android handset; Android itself is an operating system that uses the Linux kernel but has none of the normal Linux toolchain or userland. You can also run the Linux kernel on a Cray supercomputer. Those are two different things: same code base, completely different reference architectures that don't look anything like each other. Therein lies both the strength and the weakness of OpenStack. It can be used for anything, any kind of cloud, any kind of cloud service. But if it's like that, then every single deployment of OpenStack looks different. It looks like a snowflake.

Given all that, we still think OpenStack's best fit is as an elastic cloud. For those who don't know, of the very first two projects, Nova and Swift, Nova was originally built as a clone of Amazon's Elastic Compute Cloud and had the EC2 APIs in it from its very inception; the OpenStack APIs were only added after the fact. And Swift was built by Rackspace as a competitor to Amazon's S3. So both of those services were designed from that elastic cloud reference architecture model from the very beginning. We think OpenStack is the best fit for that, even though it can be used for many different purposes.

One of the keys to getting agility for a business is maximum flexibility: using the right tool for the job. And the reality of the situation is that there are going to be use cases where you want to be on private cloud, use cases where you want to be on public cloud, and use cases where you want to mix them or burst from one to the other. I've seen people doing it every single way I can imagine, and that has really shown me that pretty much everybody is looking at hybrid cloud as a key enabler of this new business agility. Why would you use up the infrastructure inside your business for a launch that's only going to run for three days? Go put that on Amazon. I can give you a dozen other examples.
So as we see enterprises adopting DevOps and building these cloud-native applications, we really see that hybrid cloud is the go-forward solution. People like to talk about interoperability in OpenStack land, but they don't like to talk about what interoperability means, and that's part of what this talk is about, although not the whole of it. I think when people talk about interoperability, what they really mean is application portability. They want to know that they can design an application in one place and move it to another place, or replicate it. For example, the financial services guys all want to run QA, dev, and test outside the firewall so they don't have to pay for that capacity, but they're never going to run their production workloads outside the firewall. They want hybrid clouds so they can actually do that kind of exercise, but they don't want their application to look or smell different between their QA, dev, and test environments and their production environment. That would not be a good idea. So most of the time, what people want is application portability, because they want independence and maximum flexibility.

Portability requires interoperability, and I think that's where people get a little tied up. They also get tied up in thinking that having the same API in two locations gives you compatibility or interoperability. It does not. Again, why this matters: I want OpenStack to be as interoperable with the major public clouds as possible, because I believe OpenStack will win on the private cloud side of the equation and will only get partial penetration into the public cloud. It will never be dominant in the public cloud. And if that's the case, then it's very important to me that there are flavors of OpenStack that are able to hybridize and interoperate with the major public cloud winners. In fact, I think that if we don't do that, it could actually impinge on OpenStack's ability to win in private cloud, given that pretty much every enterprise I've talked to so far, no matter how big, how little, or how security-conscious, is talking about hybrid cloud.

The problem with a focus on API compatibility, "oh, we've got the EC2 APIs in OpenStack already," is that it misses the forest for the trees. You can have two systems that have the same API and don't behave anything like each other. Take two cars with the same API: you've got a steering column, gas, brakes, clutch, transmission. You've got mostly the same bits, but two fundamentally different systems. They behave differently, they have different performance characteristics, they have different service level agreements. They're just different systems with the same API. Look, an API is easy to code. I can get an intern straight out of a CompSci degree to come in and implement the Amazon Web Services API against something over a weekend. That is a trivial task. An API is an interface to the system. The system is what creates interoperability. Who here used IPsec VPNs in the mid-to-late 90s? Excellent.
So the thing about IPsec VPNs is that in the early days, everybody implemented IPsec to the standard, except there was so much looseness in the standard that they all interpreted it differently, and none of their shit worked together. You couldn't do IPsec VPNs between vendors. There was no interop: same standard, same protocol, no interop. So it's about the system. It's not about the APIs.

These are the six requirements I think you need in order to get interoperability. If anybody comes up with more that I missed, that would be absolutely fantastic; I'm looking to continue updating this list. And I think we're going to talk about these in painful detail, or well, maybe not too painful detail.

First, matching SLAs and availability guarantees. You want two clouds to provide the same basic guarantees around the availability of the API endpoints, of the virtual machines, and so on. A lot of the frameworks that talk to a given cloud's API, like boto talking to Amazon Web Services, have timers and other things in them that are tied to the behavioral compatibility I'll talk about at the end. So you really want the same SLAs and availability guarantees, particularly around API endpoint availability.

Second is performance. If you go from cloud A to cloud B, and you're using a large instance on cloud A and a large instance on cloud B, and the large instance definition on those two clouds is fundamentally different, or even if it's the same, the performance is awful because there's a huge amount of oversubscription on the back end that you're not aware of, that's a problem, right? You pick up an application that requires 10 VMs here, and it thinks that's its key scaling multiple, and then you stick it in another cloud and it needs 20 or 30 at a time. That's really an issue. You don't want surprises. Application portability means pick up the app, run it over there, and have it look and act the same. (There's a small sketch of this flavor-matching idea below.)

Third, you want roughly the same set of services, at least the core infrastructure services, in both clouds. They don't have to be called the same thing; as long as they behave mostly the same way, you're good to go. Block storage, object storage, virtual machines on demand, whatever the bits are that you're using.

Fourth, you need similar costs, a similar TCO. If you're running a private cloud that's 10x the cost of Amazon Web Services and you're trying to hybridize, the people inside the business who are going to pay for it will gravitate toward the other cloud. What you're really looking for is to land somewhere between half and twice the cost of the public cloud; you don't want to be too far above or below, because that makes too large a disparity between the systems.

Fifth, you need basic API compatibility. It's table stakes, because the tooling around the different ecosystems, Amazon, OpenStack, Google, is all a little different, and you want to be able to support all of that tooling.

And the last one, the thing I think people forget all the time when they get hung up on API compatibility, is the behavioral compatibility component.
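Before digging into behavior, here's the flavor-matching sketch promised above, a minimal illustration of my own rather than anything from the talk's slides; the AWS figures are the classic m1 sizes as I remember them, so treat them as placeholders to verify:

```python
# Like-for-like instance definitions: "large" has to mean the same thing on
# both clouds, or the app's scaling assumptions break the moment it moves.
# Names and numbers here are illustrative, not authoritative.

AWS_REFERENCE = {
    "m1.medium": {"vcpus": 1, "ram_gb": 3.75},
    "m1.large":  {"vcpus": 2, "ram_gb": 7.5},
    "m1.xlarge": {"vcpus": 4, "ram_gb": 15.0},
}

# The flavors you would define on the OpenStack side to mirror them.
OPENSTACK_FLAVORS = {
    "m1.medium": {"vcpus": 1, "ram_gb": 3.75},
    "m1.large":  {"vcpus": 2, "ram_gb": 7.5},
    "m1.xlarge": {"vcpus": 4, "ram_gb": 15.0},
}

for name, spec in AWS_REFERENCE.items():
    ours = OPENSTACK_FLAVORS.get(name)
    assert ours == spec, f"{name}: {ours} != {spec} -- portability at risk"
print("definitions match; now make sure oversubscription matches too")
```

Matching definitions is necessary but not sufficient: as the talk notes, identical flavors with wildly different back-end oversubscription still break portability.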
If I move from cloud A to cloud B, and on cloud A a VM spins up in five minutes while on cloud B it spins up in 60 minutes, the way the application manages auto-scaling is going to be really, really different, right? Drew Smith, who's on my team, I don't know if he's in here, is giving a presentation tomorrow on hybrid cloud landmines, and he made this great point to me as we were doing run-throughs of the decks. He basically said: a developer goes into an environment, builds the application in that environment, and makes all of their assumptions based on the way that environment operates. So if you want application portability, if you're going to try to move that app to another cloud, the new environment has to be very similar; it's got to behave really similarly. If it doesn't, you're going to run into all kinds of surprises. One example I love to give there besides the VM spin-up one: if you go to Amazon Web Services and you spin up a VM, every VM gets a public IP address by default. On OpenStack, it doesn't. And that's OK, it's just a difference, but it's fundamentally different behavior, even though both have the EC2 API.

One last thing about behavioral compatibility: it typically isn't modeled in the API. That's the big difference, right? There's no way to tell most of these APIs, spin up this VM, and if it doesn't spin up in five minutes, kill it and start a new one. That's actually not modeled in the API. And the reason is that systems are gigantic and complex, and APIs are these little teeny windows into them. They can't model the whole thing. That's part of the problem I have with Neutron, but that's a different story.

So when we're talking to customers, we like to point out that they need to move to a hybrid-first cloud strategy. Whether you're building a hybrid cloud now or later, you want to make sure you're thinking about it, and about those six elements, from the very start. So how do we get Amazon Web Services-flavored OpenStack? We need those six elements. Let's walk through them really quickly, because I want to make sure I can take some questions.

First, we need to match SLAs and availability guarantees. I can't give examples for every single thing you should do here, so this is just one example. You have a two-and-a-half-nines infrastructure like Amazon; that's your guarantee. OK, great, perfect. However, you want your application to be more than a two-and-a-half-nines application, which means the API can't be a two-and-a-half-nines-availability API. It needs to be three, four, five nines, because when your application detects the failure of a server or something like that and needs to spin a new one up, it needs to talk to the API. I mean, that's the whole benefit here. So you need to make sure those API endpoints are highly available. One thing I see happening a lot in OpenStack land is a lot of focus on things like HA failover. The problem is that most HA failovers take 45 to 60 seconds; master elections take 45 to 60 seconds. It depends on your environment. But that's a lot of suckage if you're trying to get close to three, four, five nines, because five nines allows only about five minutes of downtime a year, so six of those failovers and you're already down from five nines to four nines. It doesn't take much.
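Here's that downtime-budget arithmetic worked out as a minimal sketch; my numbers, using the talk's 45-to-60-second failover figure:

```python
# How quickly 45-60 second HA failovers burn an availability budget.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def budget_minutes(nines):
    """Allowed downtime per year at N nines of availability."""
    return MINUTES_PER_YEAR * 10 ** -nines

for n in (3, 4, 5):
    print(f"{n} nines -> {budget_minutes(n):6.2f} min/year of allowed downtime")
# 3 nines -> 525.60, 4 nines -> 52.56, 5 nines -> 5.26

# Six master elections at ~50 seconds each eat the whole five-nines budget:
print(f"six 50-second failovers -> {6 * 50 / 60:.1f} min")  # 5.0
```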
What's funny is that we actually use this other pattern in our version of OpenStack, called load balancing. You're probably all using load balancing today, I'm guessing, and load balancing is awesome because you can run active-active-active till the cows come home. It's a much, much better pattern if you want high uptime for the API endpoints. And the thing is, unlike websites, APIs are very transactional: you make a call, you get a response, boom, done. There's no need for sticky session tracking or cookies or any of that complicated stuff you would normally have to use with a load balancing service.

Second is performance and quality of service. When I was working at GoGrid, the way the scheduling system worked was just so obvious. If you look at a public cloud provider, if your scheduler just sprays VMs wherever there's availability, you run into a problem: you fill up boxes partially. You'll have a box that's run out of CPU but still has RAM left, and for a public cloud provider, that's just unacceptable. You have to sell every plot of land. When we've done the modeling, the most extreme example is that you might only be able to sell 50% of your capacity if your scheduling is really bad and you've chosen bad instance sizes. So what Amazon and Google do, what pretty much every public cloud provider I'm aware of that knows what they're doing does, is bin-packing scheduling. They treat every hypervisor node as a single plot of land and subdivide it into acres or hectares, depending on where you're from. A given virtual machine is one, two, four, or eight of those plots, and the sizes are even multiples of each other: one core, two cores, four cores, eight cores. If you go look at Amazon Web Services' M1, M2, and M3 instances, you'll see all of this; if you look at Google, you'll see the same. What that allows you to do is assign a class of instances to specific hardware designed for it. You can evenly bin-pack all the virtual machines, which gives you fixed proportions of resources, which lets you manage oversubscription rates and know exactly what the performance will be even when all of those systems are banging full out. And you can do this today in OpenStack fairly easily with host aggregates or a custom filter. It's really not a lot of logic, either. I'm giving away secret sauce now.
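Here's a toy sketch of that plots-of-land bin packing, an illustration of the idea from me rather than Cloudscaling's actual scheduler code; in a real deployment you'd express it through host aggregates or a custom Nova scheduler filter, as mentioned above:

```python
# Toy bin-packing placement. Hosts are plots of land subdivided into units
# (say, one unit = one core plus a fixed slice of RAM); instance sizes are
# powers of two, so they always tile a host exactly and nothing is stranded.

HOST_UNITS = 8  # e.g. an 8-core hypervisor node = 8 plots

class Host:
    def __init__(self, name, units=HOST_UNITS):
        self.name, self.free = name, units

def place(hosts, size):
    """Best fit: pack onto the fullest host that still fits the request."""
    assert size in (1, 2, 4, 8), "sizes must be even multiples that tile a host"
    candidates = [h for h in hosts if h.free >= size]
    if not candidates:
        return None  # capacity exhausted
    best = min(candidates, key=lambda h: h.free)  # fill partial hosts first
    best.free -= size
    return best.name

hosts = [Host("node1"), Host("node2")]
for size in (4, 2, 2, 8, 1):
    print(f"{size}-unit instance -> {place(hosts, size)}")
# node1 absorbs 4+2+2, leaving node2 whole for the 8-unit instance.
```

A spray-anywhere scheduler would have split the small instances across both nodes and stranded the capacity needed for the 8-unit request, which is exactly the sold-only-50%-of-the-cloud problem described above.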
Third, we need the set of services these elastic clouds have. This slide, and I meant to get the Amazon Web Services logo up here, I apologize, is the set of Amazon Web Services. You can see Amazon does a lot of stuff, but really all we care about is that box at the top, right? Because most of this other stuff, and I'm going to get myself in trouble here, but you're a fool if you use DynamoDB. You're locked into Amazon, and the worst thing in the world is to be locked into a big public cloud provider with all your data in something like DynamoDB. How are you ever going to get out of that? So of the stuff there, the core infrastructure services are mostly already in OpenStack. We don't always have all of the Amazon Web Services APIs, but we have quite a few of them: EC2, S3, CloudWatch, CloudFormation, and I'm probably forgetting some. We're working on adding Virtual Private Cloud to the EC2 API. So there's a bunch of good stuff there; most of what we need is already in there, and that's awesome.

Fourth is total cost of ownership. What I find really humorous about this is that I see the same problem continuing to happen everywhere I go: people don't quite get what the costs of running a cloud look like. These numbers are from Amazon Web Services; this is James Hamilton basically coming out and publicly saying how much it costs to run the Amazon cloud. For a big, gigantic, public elastic cloud that buys in bulk, drives all of its costs through the floor, has no gold-plated infrastructure, and has minimal redundancy in every element of the system, servers are still 57% of the overall cost. So if you say, well, I'm going to run OpenStack on my super-high-end blade servers, that's great, but that pie chart starts to look a lot different. That blue becomes like 80% or 90%. Run the arithmetic: if your servers cost three times as much, the server slice goes from 57 out of 100 to 171 out of 214, about 80% of the pie. And your ability to have a total cost of ownership that looks anything like Amazon's basically dissolves. I shit you not, there is a customer I've talked to, in the financial services business on the East Coast, that is running their Hadoop cluster on Cisco UCS B-series blades attached to a Fibre Channel SAN. That is the wrong way to think about solving cloud problems. When we've modeled out our total cost of ownership, power, cooling, space, labor, software, hardware, Amazon Web Services Direct Connect, anything and everything, what we've seen is that we can build a cloud that's basically half the price of Amazon Web Services' wholesale cost, meaning one-year reserved instances. There's this mythology that Amazon has more buying power than the average enterprise, and the reality is that you can get most major hardware vendors down to very low margins with just moderate buying power, a few million dollars.

Fifth, OpenStack has already got most of these APIs. The AWS APIs have been in there for a while; we worked on the Google Compute Engine APIs, which are in Stackforge. Thank you, Alex. And we welcome anybody participating with us there.

So this is the last and most important thing: behavioral compatibility testing. This is the only way to figure out whether two clouds are interoperable or not. It's testing, testing, testing. That's the way the IPsec problem was solved. They got into these big, huge IPsec VPN bake-offs between vendors; they'd basically get all the vendors in a room, figure out what the heck wasn't working, and then hammer through it and figure out what needed to happen. It turns out that we've already been modeling and testing for behavioral compatibility with Tempest, and we've started to create some more Tempest tests that we're going to push back upstream. What I'd love to have is a suite of Tempest tests you can run against your cloud if you care about being Amazon Web Services compatible, so you've got a checklist. You can run that same set of tests against Amazon, against your cloud, against vanilla OpenStack, whatever that is. I still don't know what that is; if anybody can ever figure out what vanilla OpenStack is, please let me know. These are the Tempest tests that are in there today for Amazon Web Services behavior. You can see it's lit up green pretty much everywhere, with just some minor differences. OCS, our product, is there in the middle, just for comparison.
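For a flavor of what such a behavioral check looks like, here's a minimal sketch of my own, not one of the actual Tempest tests on the slide, using boto 2 against an EC2-compatible endpoint; the AMI ID is a placeholder, and the five-minute budget is the behavior under test, not any official SLA:

```python
# Behavioral-compatibility check: same EC2 API everywhere, but we assert on
# behavior -- time-to-running and default public IP -- which the API itself
# never promises. Run it against AWS, then against your OpenStack EC2 endpoint.
import time
import boto.ec2

AMI_ID = "ami-xxxxxxxx"   # placeholder image ID for the cloud under test
SPIN_UP_BUDGET = 5 * 60   # seconds; the behavior we expect, not a published SLA

conn = boto.ec2.connect_to_region("us-east-1")  # or your OpenStack region
instance = conn.run_instances(AMI_ID, instance_type="m1.large").instances[0]

deadline = time.time() + SPIN_UP_BUDGET
while instance.state != "running":
    if time.time() > deadline:
        instance.terminate()
        raise AssertionError("behavioral mismatch: not running within 5 minutes")
    time.sleep(15)
    instance.update()  # refresh state from the API

# AWS gives every instance a public IP by default; stock OpenStack does not.
assert instance.ip_address, "behavioral mismatch: no default public IP"
instance.terminate()
```

Note that the run_instances call itself succeeds identically on both clouds; it's the assertions after it that tell the clouds apart.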
This is the new set of tests that we're in the process of cleaning up, and we're going to work to upstream them. They'll probably have to be turned off in the CI system, but at least they'll be in there so that people can turn them on as they need to. You can see there's a whole bunch of behaviors we can test on an OpenStack installation that's configured to look and smell and taste like Amazon Web Services, tests that effectively tell us whether we've gotten more of the AWS behavior or not. And what I would really love to see happen with OpenStack is that there are many different flavors of it. Some of them are like this, some are not, but we can test those flavors, hopefully with the RefStack work that Josh McKenty and others have been working on. I'd love to see something like: this is OpenStack for Enterprise, this is OpenStack for AWS, this is OpenStack for HPC, and you've got a reference for how each gets turned on, and a reference for how you test and guarantee that it's the flavor you want.

So this is pretty easy. How do you know you're done? Same availability, same performance, similar services, equivalent TCO, same APIs, and it behaves identically, with that sixth one being the unifying principle. This is a new architecture diagram for our system, which is not on our website. I'm not going to talk about it because we're running out of time, but this is sort of the product track, so I've got to show my product here. You can see we've got a bunch of these pieces in it. It's not perfect; no product's perfect. I'm pretty proud of what we've done, though. We worked on solving certain hard problems, and the response yesterday when we were talking about our Layer 3 networking plug-in in this room was actually pretty astounding. I didn't know people were jonesing so badly for a functional network stack, but I did get the message to get that upstream sooner rather than later, and I will work on that.

So: business agility is what enterprises give a shit about, and it's the thing that's existential for them. If they don't have it, they're not going to survive. The path to it is through cloud-native applications, which require moving to that DevOps model. And the elastic cloud architecture, no matter what the software behind it is, whether Amazon, Google, or OpenStack, is a foundational component for making yourself successful there, largely because we want to hybridize the private installations to the large public clouds. So my main takeaway for you would be: we'd love more people to be interested in this effort, and there are a lot more people in the room than I expected, so that's great. I don't want to be the lone voice in the wilderness. I'm used to that, but I don't want to be that crazy old hermit talking about this when nobody else is. So I think that's my last slide. Why is this not advancing? So, I'm happy to take questions, and could you please come to the center mic so you're picked up on the video.

Hi. I have a question. What's your view on PaaS with respect to this question of hybrid cloud?

Well, the way that we look at the world is that we're going to solve the infrastructure problem, and there's an ecosystem of people who run PaaS on top of Amazon, OpenStack, and so on, and we'd rather have those people come and work on top of our system.
So it makes more sense to use Cloud Foundry unified across those three different systems, or something like that, than it does to bind to Amazon's PaaS, or for me to deliver a PaaS. And I'm an infrastructure guy, not an application developer guy.

There was a recent court ruling with Oracle and Google, where Google was found liable for re-implementing the Java APIs. How does that affect your thoughts here about embracing a proprietary public cloud API?

So that's a good question. Essentially, my stance before talking to Google was that this would not be overturned and that we would be in the land where you could copy APIs. In talking to Google further, their position is that the battle's not over. I don't know whether it is or not. But what I do know is that, and I hope everybody takes this in the best way possible, Amazon is totally indifferent to OpenStack. I mean, they just don't care, and I don't think it's going to become a big enough threat that they will care. So there is some need to try to clean that up from a legal perspective, and maybe over time they'll care enough that we can get them into the fold. One of these Fortune 15 companies that I'm working with does care about the Amazon Web Services APIs and is a big Amazon consumer. So I think over time some of the customers will help us get maybe a blanket indemnification or an agreement between Amazon and the OpenStack Foundation. That's certainly a risk right now. But if you're really uncomfortable with that, use the Google Compute Engine APIs, because Google is not going to sue you. They don't care.

Any more questions? Oh, come on, I've got extra time here. I've got two minutes. That's two questions; I usually like to get at least three before I leave the stage and stop haranguing the audience.

I didn't quite hear all of that, but yes, OpenStack metering and cost measurement are extremely immature and not designed to scale.

I was wondering about the QoS you're talking about, things like CPU limitations. Are you using the KVM hypervisor, and if so, how are you managing those limitations?

Yeah, so it's basically through managing the oversubscription rates right now. We don't actually use any baked-in quality of service. Think of it as a manual way of doing quality of service that falls out of the way the bin-packing scheduler works. If I put four VMs that each have one virtual core on a box with four physical cores, I know I'm going to get most of one core for every VM. I can even potentially pin the VMs to a particular core, which Amazon does in certain cases, and I can get rough guarantees that I'm going to get what I'm going to get. There is overhead for the hypervisor, and some hypervisors are better than others at dealing with high oversubscription rates, so there is some variance. But the general way I think about it is that one of the problems with a more sophisticated quality-of-service system is that a lot of the time it manages everybody down to a really common baseline and then allows certain people to burst above that to provide guarantees. The problem is that in a more dynamic world with cloud-native applications, you sometimes don't know what's going to take off or not, and you don't want to throttle everything by default. It's still a very valid technique to deal with quality-of-service problems by throwing brute force at them: bigger, fatter pipes; bigger, fatter boxes; more cores; more RAM.

OK, thank you very much.