All right. Hi, everybody. Thanks for having me. So today I was going to talk about bringing applications back from Amazon. But it might be more accurate to describe this talk as being about bringing applications back from the public cloud generally. And even more specifically, it's really about applications for hybrid cloud. Because when we're out there talking to customers today, helping them figure out their cloud strategies and build an OpenStack-based cloud, every single customer we've talked to is interested in doing something along the lines of hybrid cloud. They all think about that, even big financial services companies that are trying to figure out what their actual go-forward strategy is. They know they'll only be in private cloud in the short term, but they're still thinking about hybrid cloud for the long term. So I apologize. I normally have a clicker, and it died on me. And I like to wander, so I'll probably be wandering away from the laptop and then coming back to hit go on it fairly regularly. So just by way of background and introduction: I'm on the OpenStack Foundation Board of Directors. And Cloudscaling was one of the original pioneers in doing large-scale OpenStack deployments. We were responsible for the first public compute cloud in the United States based off of Nova. And we were responsible for the first OpenStack storage cloud in the United States outside of Rackspace. And the first public storage cloud in Korea. I apologize for the technical difficulties. We're still going. See if this will do this for us. All right. So I'm an advisor to dotCloud, which are the folks behind Docker, the containerization system that some of you may be playing with. And I've been called one of the top 10 cloud computing pioneers by InformationWeek, if that means anything to you. So I not only am biased, but I have a bias. I think all presentations have a bias.
This is inherently mine. I run an OpenStack product company. I'm expressing my own opinions. You could associate those with the product company since I'm the founder; I think that's fair. But otherwise, I'm not representing the Foundation or the community. This is all just the way that I think about things. I really believe that the pioneers to emulate are Amazon Web Services and Google Compute Engine. I mean, that's been my experience. And in the past, I've done a lot of large-scale systems: large data centers that have thousands of servers and hundreds of switches. So I tend to have my viewpoint informed by that background. Just a quick FYI, some housekeeping. There was a presentation I did in spring of this year called State of the Stack at the Portland Summit that was very well received, standing room only. And that's sort of a whole timeline of OpenStack history: who all the players are, what all the politics are, where all the components are in OpenStack. Really kind of an OpenStack 101 in about 45 minutes. And we're doing a version two of that. It was accepted as a talk for this week, but due to some miscommunications and snafus, the co-presenter was somebody who was not prepared to give that talk with me. So instead, what we're going to do is a live webinar of that talk. So if you want to check out State of the Stack version two, live, we'll be doing it via BrightTalk. You can go to that URL, cloudscaling.com/stack, and that'll be on Thursday morning. OK, so now let's get into the good stuff. So this is the basic story arc. We're going to talk about why you would bother repatriating, what repatriation means, and how that impacts the architecture of the clouds that you're using. How does the application layer talk to the infrastructure layer, and why does that matter? And then we're going to talk about compatibility and interoperability, because I think that's fundamentally what it gets down to.
And hybrid cloud, because I think that's the mechanism by which people are trying to do repatriation or expatriation. And then we'll wrap it up. So why repatriate? Fundamentally, private and public clouds are two sides of the same coin. I think sometimes people get a little bit confused, Amazon, Stackers, other people, because they want to believe that it's an either/or situation. And I think we all know at this point that it's not an either/or situation. It seems pretty clear that private and public clouds solve very different problems. When you look at a public cloud, it's really a general-purpose system that's been designed for a number of different workloads, that's got multi-tenancy at its heart, and that is really designed around profit-making opportunities for the public cloud provider that actually built it. And perhaps most importantly, it's really designed around renter economics. I pay by the hour, I pay by the month, so to speak. Private clouds are quite a bit different. It's more like buying a house or buying a car rather than getting in a taxi. You're actually going to invest in your private cloud. You're not trying to make a profit off of it. But almost always the reason that you've chosen to buy a house or a car is because you want a very specific kind of house or a very specific kind of car. So private clouds are optimized for a particular purpose, almost always. And then most importantly, you want to have direct control. And that's something we hear over and over again with customers. And then, of course, there's a number of drawbacks for both private and public cloud as well, which I'm going to skip over. But the key point there is that these are just two very different systems that serve very different purposes. So if you're looking at bringing applications back from a public cloud provider, you're probably being driven by one of these three key factors. At least that's what we've seen.
It's typically either cost, control, or compliance. So by cost, what we've seen is that when you use a public cloud for long enough, as you scale up on it, as you grow up on it, what you'll find is that the public cloud gets progressively more expensive. And at some point, you start to realize that, hey, if I'm spending $300,000, $400,000 US a month on this public cloud, maybe I should look at building my own system. I'm already spending millions of dollars per year. And that's very common. So Zynga, the games company in the United States, they had many tens of thousands of virtual machines on Amazon Web Services. They were spending many millions of dollars per month on Amazon. And they decided to build their own private cloud and basically run on there so that they could cost-optimize their system and reduce their overall public cloud costs. So that's great, and it made a lot of sense. But it only makes sense for the baseline load, the so-called "own the base, rent the spike." It's combining owner economics with renter economics. I'm going to have a house that I own, and then I'm going to have a house that I rent when I go on vacation. So Zynga did that same kind of thing. They basically built a private cloud. They deployed their own private cloud on their premises. They made it look as much like Amazon Web Services as possible. And then they designed it so that they could run somewhere between 50% and 70% of their workload, their steady-state workload, on that system and reduce their cost for it. And then they moved all the elastic and peaky workloads out onto Amazon Web Services. New product launches went on Amazon Web Services and so on. The other aspect is control. And there's control in a bunch of different ways; I'm just giving one example here. When you go to a public cloud, you don't really have control over that system. It's not an open system. It doesn't matter if it's using open source software. It doesn't matter if it's using OpenStack.
It's somebody else's service that they're running on your behalf, and you're beholden to whatever choices they've made. And one of the choices they probably made, at least in the case of Amazon, is that they've said, look, you get these size boxes. Every VM is a fixed size. So if you have a workload that uses up an entire box of resources in terms of CPU and network I/O but not disk and RAM, you've paid for that disk and RAM even though you didn't need it. So a lot of companies, especially web companies coming off of public cloud, want to best-fit the hardware around the workload. They know the workload really well. I'm running Hadoop with HBase and HDFS, so I want 64 gigs of RAM and 16 cores and 16 spindles. That's exactly it. I'm going to use all those resources. So that's part of what control means. And then in terms of compliance, I mean, public clouds have come a long way. Systems like Amazon Web Services are extremely secure. I'd say they're much more secure than the average enterprise. However, security isn't just about how secure a system is, but also how compliant it is. And what's difficult for an Amazon or a Google Compute Engine to ever deal with is the fact that regulatory requirements change so dramatically from country to country. So if you're in Belgium, for example, and you've got a requirement that you can't allow personally identifiable information, PII data, outside of Belgium for Belgian citizens, then you can't use Amazon Web Services' EU region, because it's in Ireland. You're just not allowed to do that. So there's always going to be a bit of a gap there. And that's another reason why you want to have private cloud to enable a hybrid cloud solution. So that's why hybrid basically allows us to get repatriation done. If I've got a public cloud here and I've got a private cloud here and I can combine them together, now suddenly I have choice and flexibility. I can decide where my apps are going to go.
I can decide that I've made a mistake about where an app should go. Now I need to bring it back in-house, repatriate it, or I need to take it out of house, expatriate it. In some cases, we've seen customers like a financial services company say, we are never, ever going to put production workloads on the public cloud, but we want to use a public cloud for dev, test, and QA. For them it makes sense, because even though they have non-elastic apps, they can put the places where they've got some elasticity, the number of different development and QA environments, out in the public cloud, because they only use them periodically. In other cases, we've seen cloud customers like Netflix, for example, who want to put all their production workload on something like Amazon Web Services, but then keep dev, test, and QA back behind. But the key there is that you need a hybrid cloud that allows you to pick and choose where your application goes: public, private, or both. So what does that mean? What are the requirements for building a hybrid cloud that can solve the repatriation problem? First of all, repatriation is not automatic, right? You can't just take an application and move it from one cloud to another. There's all these factors that come into play. That's why that private cloud needs to look as much like a public cloud as possible. And by look like a public cloud, I mean it's gotta be semantically, architecturally, and behaviorally equivalent, right? If your private cloud costs three to five times your public cloud, then what was the point, right? You've just blown the cost driver out of the equation.
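The "own the base, rent the spike" economics above can be sketched with simple arithmetic. All numbers here are illustrative assumptions, not figures from the talk; the function name and rates are made up for the example:

```python
def blended_monthly_cost(vm_hours, base_fraction, private_rate, public_rate):
    """'Own the base, rent the spike': the steady-state fraction of the
    workload runs on the owned private cloud, and the bursty remainder
    rents public cloud capacity by the hour."""
    base = vm_hours * base_fraction * private_rate
    spike = vm_hours * (1 - base_fraction) * public_rate
    return base + spike

# Illustrative numbers only: 1M VM-hours/month, public cloud at
# $0.10/VM-hour, a well-built private cloud at half that, and 60% of
# the load steady-state (inside the 50-70% range Zynga targeted).
all_public = blended_monthly_cost(1_000_000, 0.0, 0.05, 0.10)  # $100,000
hybrid = blended_monthly_cost(1_000_000, 0.6, 0.05, 0.10)      # $70,000
```

Under these assumed rates the hybrid split saves about 30% a month, and the saving only holds if the private cloud really does come in near half the public price, which is exactly the design constraint being described.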
So in order to get the right economies of scale around operational costs and cost of hardware, to allow your application to be portable, and to build towards an elastic cloud model, which I'm gonna talk about here in a second, you've gotta actually think very carefully about how you build and design your private cloud. So I always think of there being two major flavors of cloud. Some people will disagree with me; they'll tell you there's dozens. I think there's two big buckets. I usually think of there being virtualization 2.0 clouds, which are essentially focused on legacy applications, SAP, Oracle, and so on. Those applications are very static, they're manually built, and the infrastructure has to be resilient underneath them. And then I think of there being a new model that's arisen with Amazon Web Services and Google Compute Engine that I call elastic cloud. Elastic cloud is focused on the next-generation, net-new applications that use DevOps and manage themselves: the application deploys itself, it scales itself, it heals itself. And that's a very different kind of environment. If you were to take something like OpenStack and put it on a VCE Vblock and try to build your own private cloud, you'd wind up with a cloud that's three to five times more expensive than Amazon Web Services street price. Some people might think that's a useful spend of dollars, but if you developed your app on Amazon Web Services, you don't need what a Vblock provides. It doesn't make a lot of sense. If instead you take OpenStack and you deploy and design an OpenStack-based private cloud that looks and smells and tastes as much like an Amazon or a Google or a Rackspace or Azure as possible, and you bake in all those design principles, you're gonna wind up with something like half the cost of a public cloud over three to five years, including power, cooling, data center space, labor, the whole shebang, okay?
I'm gonna skip that slide. So when you look at the major clouds out there, the guys who are the main players for these next-generation applications, I like to look at the RightScale State of the Cloud report that went out earlier this year. They talked to their user base. RightScale was the first cloud application management tool that ever came out. They've been in the game for a very long time. They have hundreds of thousands of customers, and they went and did a user survey with hundreds of responses. And you can see that the top clouds that people are interested in are Amazon, Rackspace, Google, and Azure, right? It's not the virtualization 2.0 clouds. Every single one of these is what I'm calling an elastic cloud, right? So that tells us something right there. So if we wanna make that private cloud look as much like a public cloud as possible, we are gonna assume that the application manages its own fate, right? It's gonna deal with the fact that, you know, VMs or physical servers could go down at any time. We're gonna design around commodity hardware because we are fine with there being failures all the time. I mean, for example, Google is known to run with 10 to 15% of its data center capacity down at any given time. It's hardware that's just broken, waiting for somebody to repair it. You're gonna take that operational model. I don't know if people have seen that whole meme around cattle versus pets, but if you treat your servers like pets, then you name them cutesy names, and when they get sick, you know, you nurse them back to health. Oh, Fred the mail server's down, you gotta fix Fred. Whereas if you treat your servers like cattle, then when one gets sick, you take it out back, you do what you gotta do, and then you add another one to the lineup, right? It's treating servers as being disposable. So all of these top public clouds are elastic. They all follow this model.
So when you're thinking about running an application, building an application and an application deployment framework that can run in multiple places, in multiple clouds, the very first thing you're gonna start to stumble into is differences in behavior from one cloud to another. Abstractions and APIs can only do so much, right? If you call an API on cloud A, and you call an API on cloud B, and it's the same API, and something different happens, how do you handle that? The only way that you can handle that is by putting more business logic in the code that checks for these differences in behavior and then deals with them. This is actually very common. I'm gonna give a whole bunch of examples here in a second, but I think what I find very interesting is that when you look at Google Compute Engine, which I think will soon be the number two cloud in the United States, and you compare it to Amazon Web Services, they are almost 100% identical in terms of semantics, architecture, and behavior. Why is that? It's two software stacks that are proprietary software, hand-built, and they're more similar than any two OpenStack deployments that you might compare to each other. How did that happen? So if we look at behaviors, you'll start to see that there's a real difference, and that it has a real impact on your application, the deployment framework, and the way that your application manages itself. Remember, on elastic clouds, applications control their own fate; they manage themselves, right? So if on cloud A a VM spins up in five minutes, and on cloud B it spins up in 60 minutes, there's a real problem if I develop my application deployment framework on cloud A, because when I do that, I'm going to start checking as soon as I hit the five-minute mark: is the VM up, is the VM up, is the VM up? And if it's not, I'm gonna take action, like start another VM and kill that one, right?
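To make that check-and-react logic concrete, here's a minimal sketch of the kind of polling loop being described. All the names here (`wait_for_vm`, `is_up`, `on_timeout`) are hypothetical, and the timeout you pick is exactly the cloud-specific implicit contract at issue:

```python
import time

def wait_for_vm(is_up, timeout_s, poll_interval_s, on_timeout):
    """Poll is_up() until the VM reports ready or timeout_s elapses.

    The 'right' timeout_s is an implicit contract of the cloud the
    framework was developed on: ~5 minutes on cloud A, ~60 on cloud B.
    on_timeout() is where the framework takes action, e.g. kill the
    stuck VM and start another one.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if is_up():
            return True
        time.sleep(poll_interval_s)
    on_timeout()
    return False
```

A framework tuned for cloud A would call this with something like `timeout_s=300`; moved unchanged to cloud B, it would kill and relaunch VMs that were in fact still booting normally, which is the behavioral mismatch the talk is pointing at.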
That's the business logic embedded inside tools like enStratius, and RightScale and Scalr, and Bodo, and all those different frameworks, right? So if I do the development on cloud B instead, and I build into the framework that it waits 60 minutes every time a VM starts up before I start checking to see whether it's actually up or not, because that's how long it can take, then I sort of have a problem, right? Because who autoscales at a 60-minute interval, right? That's a problem. For these next-generation apps that manage themselves, you don't want 60 minutes of lag time. So there's an inherent difference in behavior between clouds, even for some standard thing like starting up a VM. The same is true for block storage, right? On some clouds, you get incremental snapshots, which means that every time you take a snapshot, it's very, very quick. And on others, every time you take a snapshot, it's a full snapshot. So if you have one terabyte of disk, it could take several hours to take a snapshot. So if you're trying to use the persistent block storage systems as sort of an application deployment acceleration tool, as Engine Yard, for example, does on Amazon Web Services, then that full snapshotting is actually very problematic. But most interesting here, to me, is that if you take two versions of OpenStack, the exact same versions, and you configure them slightly differently, one command line flag or one configuration parameter in nova.conf, then suddenly you get extremely different behavior. So there's an option to do auto-assignment of floating IPs in Nova. When you turn that on, every time you spin up a virtual machine, you get a public IP address, right?
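For reference, the Nova option being described, as best I recall it from the nova-network era this talk dates from, is a single line in nova.conf; treat the exact spelling as an assumption and check your release's configuration reference:

```ini
# /etc/nova/nova.conf (nova-network era)
# When enabled, Nova allocates and associates a floating (public) IP
# with every instance at boot. When disabled, instances get only a
# private address until a floating IP is attached explicitly.
auto_assign_floating_ip = True
```

One line, and two otherwise identical OpenStack clouds now behave differently to every client that assumes instances come up with, or without, a public address.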
Great, so you build an application deployment framework on cloud A, again, you have that behavior, you never worry about public IPs. You take that application framework and you move it over to cloud B, and suddenly it doesn't work because your virtual machines don't have public IP addresses. You have to go in and actually add new business logic now. Two different OpenStack clouds, same software, same release, same version, same everything, except for one configuration parameter, and the behavior is different. Same APIs, same abstractions. So this is what's important to understand: the behavior is what matters, not the APIs. So just to bring this home, right: repatriation, being able to take your apps and put them on public or private, wherever you want, makes sense a lot of the time for those reasons we discussed. It's not automatic; you need some level of compatibility and interoperability, which means that the cloud behavior from cloud A to cloud B always matters. So if we want to enable compatibility and interoperability with OpenStack in a hybrid cloud world, how do we go about that? In order to talk about that for a little bit, I first have to talk about systems, because people sometimes don't understand what a system is, and I've got to explain it. This is a little academic, but it's not very complicated, so I'm sure you'll follow. A system, in my mind, is a set of components that have been put together in a specific architecture for a specific purpose, and when you do that, they're greater than the sum of their parts, right? An example of that is a car, right? It's a system of a bunch of components, gasoline, driver, a bunch of interfaces, and when you put it all together, you have something that takes you from point A to point B. But when you have all the parts disassembled, you know, even if you have them all, it's a very different experience.
So when you look at any system, pretty much universally, whether it's a computer system or another kind of natural system, there's some kind of input or API that allows you to control or interact with the system on the top, and on the bottom, there's some kind of behavior that comes out. When you poke it at the top, something happens at the bottom. And in the middle, there's an architecture, which is how all the components get put together, and there's a naming scheme, a set of semantics that describes what these things are. So an example for a car would be: you've got the gas, you've got the brake pedal, you've got the steering wheel. That's the API; that's the interface to that system. And in terms of the behavior, the car accelerates, it decelerates, it turns left, it turns right. And then you've got all the pieces that make that actually happen, the core architecture, how the components are put together, and the names of the subsystems like the transmission, the engine, the steering column, and so on. Clouds are no different, right? There's an API in the front, there's a set of behaviors on the back end, there's an architecture that's embedded in there, and there's a set of semantics that describes all the different components. Just real quick here: as you're looking top down, you'll recognize that the API is the thing that gives you a window into the set of semantics, which, combined with the architecture, drives the behavior. So there's a top-down flow in terms of the way it works. So that brings me to contracts for systems. When you're interacting with a system, there's an explicit contract. When you grab that steering wheel in your car and you go left like this, you know the car's gonna go left, right? You don't know which way the tires are gonna turn; that's not important. All you know is that when you turn it left, the car goes left, right? There's also a set of implicit contracts, right?
Which are the things that you learn kind of through experience with the system, of how it actually behaves. And the implicit contracts are actually the piece that I think everybody overlooks. You can have the same APIs, but if you don't have the same behavior, you don't have the same system. So here's an example: nova boot. Boot up a VM; that's a common API command. You can see that across many different clouds. But the implicit contract here might be that the VM usually starts up, 95% of the time, in four to five minutes or less. I mean, certainly that's the way it is with Amazon, for example, and for many OpenStack systems it's probably similar. So when you're starting to think about compatibility and interoperability, you have to really think about the fact that systems can be compatible, they can be interoperable, or they can be compatible and interoperable. Both matter, and compatibility is typically driven by the APIs, interoperability by the behaviors. So if you want something to be mass adopted, you need to make sure that the APIs are the same everywhere, because you don't wanna retrain people. You don't wanna rewrite your application when you go from cloud A to cloud B, right? That's not what you wanna do. When you've got compatibility, you as a person can get into any car, from a Ferrari to a Toyota Corolla to a Mazda, and basically the interface is the same with some minor changes. But if you get into the cockpit of a jet plane or a space shuttle, and then you try to transition to some kind of orbital spacecraft or to an armored personnel carrier, completely different APIs, completely different explicit contracts. Now, a car and a semi-truck, or a lorry, those can be compatible but not interoperable. They've got the same basic interface, right? But they're not the exact same system.
You can't take somebody who has learned a car and have them immediately drive a lorry, because some of the behaviors are actually different in that lorry. Whereas two cars of the same kind are both compatible and interoperable. So again, when you're looking out there at those major public clouds, most of the ones I've already listed before, we'll leave Microsoft out, they're their own beast to a certain degree. Like I said, Amazon and Google are highly overlapping in terms of their semantics. Their APIs are completely different, but their semantics are very, very much the same. And the reason is that almost all the calls for both APIs exist in both places, all the expected behavior is essentially the same, and many of the semantics are the same. Like, I have regions, I have availability zones, I have nodes, I have networks, and so on. Then you look at somebody like Rackspace, and we would call that an elastic cloud too. That's also very highly overlapping with Amazon and Google Compute Engine, but a little bit different. Again, pretty much an elastic cloud. And then if you look at something like a VMware-based system, vCloud Director-based, Vblock, whatever, that's a very semantically, architecturally, behaviorally different system. It's got variable-sized VMs, and there's all kinds of other differences in the way that they actually operate in the real world. So this is a key thing to take home: OpenStack is not a system. Again, going back to our definition of a system, right? A set of components that can be put together for a specific purpose that's greater than the sum of its parts. This is not to say OpenStack isn't awesome. OpenStack is awesome, but it's not a system, because it only gives us a set of components. It doesn't have any kind of reference architecture. It doesn't tell us how to put them together.
And when you just slap OpenStack together, if you just download OpenStack and turn it on, you're not gonna get something that's greater than the sum of its parts. And part of the reason is that as soon as you download OpenStack, you've got this toolkit of stuff that you can use to build a system, which is great. That's the power and flexibility of OpenStack. That's why it's amazing. But as soon as you download it, you have to start thinking about what it is. Am I gonna use the dashboard? Am I gonna use command line tools? What hypervisor am I gonna use? What networking model am I gonna use? What storage am I gonna use? Am I gonna use block storage or object storage? Am I gonna snapshot from block storage into object storage? Like, there is a gigantic laundry list of decisions that you have to make in order to turn OpenStack into a system. Which means, fortunately, that you can use OpenStack to build all kinds of things. You can use it to build private clouds. You can use it to build public clouds. You can build platform as a service. You can build a metering service on demand. You can build a cloud application management framework. You can build all these things, and it would all be OpenStack APIs and the OpenStack code base. But the key is, you gotta take all those pieces and integrate them into a system, right? There's a big difference between OpenStack software and OpenStack running as a system. There's a reason that the HP cloud does not use Keystone, and that Rackspace doesn't use Keystone in their public cloud, and that this other new storage cloud that gets announced next week also doesn't use Keystone. And I'm not trying to diss on Keystone, but that same storage-only cloud, it's Swift only. It's not using Nova or Glance or any of those other pieces either. So the point is that you have to take the software, the components, and turn them into a system. And when you do that, you're gonna make choices.
You're gonna make choices either directly or indirectly. So if we want some of those versions of OpenStack that people are gonna deliver to the market, like the product that I build, to be compatible with public clouds, we need to embrace the architectures, the semantics, and the behavior of those major public clouds, the ones who are the actual leaders in the space. Of those four or five that I listed, only one is an OpenStack-based cloud. The rest are something else, but we still care about compatibility with them. And the reason is, we wanna reduce the friction. We wanna allow applications to move back and forth between those clouds and our OpenStack-based clouds. That's repatriation. And then we wanna reduce the friction for people who are building application portability frameworks and application management systems like RightScale and enStratius, all the folks who are trying to build an abstraction layer so that you can easily move an application from one cloud to another. So fundamentally, the API is the track gauge, the railroad track gauge, that we're building all of this on. What I mean by that is that in the United States in the 1800s, when trains really came about and people started building all of their different railways in the very early days, everybody built completely different gauges. It was not interoperable or compatible. You couldn't take the steam engines and cars from one railroad and interchange them with another railroad. And then over time, they became highly standardized. So cloud is here. We're in the early days. We have no idea what the emerging standards are gonna be in private and public clouds around APIs or behavior or semantics in the next five years. We have no idea. I mean, the best guess I can make is that Amazon remains dominant in public cloud and OpenStack becomes dominant in private cloud. That's my best guess.
But we also know that hybrid cloud is the future. I think most people would agree with that, but maybe not. So I see OpenStack as being the key to building that hybrid cloud future. The fact that it's sort of a framework without an integrated reference architecture is a perceived weakness, because that doesn't give you interoperability and compatibility off the cuff. But I really see that as being its greatest strength. We can build a lot of different systems from OpenStack that serve different purposes: for HPC, public cloud, private cloud, and so on. And some of these will be designed to be public cloud compatible. So again, public cloud compatibility and interoperability means having the same APIs and the same behavior. The implicit and explicit contracts must be the same between cloud A and cloud B, regardless of the software underneath them. In order to get there, we have to have the same semantics and the same functional architecture, which gives us a hybrid cloud and brings those two worlds together. Now I can build an application, and the application doesn't need to worry so much about where it runs, because whether it runs on cloud A or cloud B, the same things happen. So OpenStack really needs to start thinking about embracing what I'm calling the hybrid-first cloud strategy, which is that we need to accept that not all the public clouds that are winners will be running OpenStack. And even if they were, they might be fairly different. We need to continue to make OpenStack as compatible and interoperable with those public clouds as possible, so that OpenStack can win the private cloud game, right? Because as you saw back on the track gauge diagram, we're in these early days where there are no standards. So if OpenStack can be as flexible as possible, then whatever the emerging standard is, OpenStack's already there.
So there is work in OpenStack to really focus on compatibility with Amazon Web Services and soon Google Compute Engine. And I'm hoping Azure and some other systems as well. In particular, you can see all the testing frameworks in Tempest and RefStack around that. And then we can also start looking at the test frameworks out there that test both the implicit and explicit contracts for these public clouds and start to roll some of that into the OpenStack testing, so we can make sure that people can build versions of OpenStack that are focused on hybrid cloud. So again, just to wrap this up: if you wanna be able to take your applications and put them wherever you want, private or public, you need to consider the fact that private clouds can be significantly more cost effective, half the cost of the public cloud. So that's number one. Number two is that a hybrid cloud solution allows you to have the best of both worlds. It may be half the cost, but it's not half the cost if you have to overbuild your capacity, right? You don't wanna build your private cloud as big as Amazon Web Services. So you really do want that hybrid cloud model, where you can have the capacity that's bursty and more elastic running on the public cloud, and the stuff that you wanna cost optimize on your private cloud, in a hybrid solution. In order to get that hybrid solution to work without a lot of pain, you've gotta have that behavioral equivalency, so that cloud A and cloud B behave the same, which means that OpenStack as a whole needs to really be thinking about that hybrid-first cloud strategy and how we set it up so that OpenStack can be the winner in terms of building the bridge between private and public clouds. And I powered through that a little bit faster than I meant to, so I've got time for questions, which could be good or bad. Questions? I have to take at least three questions before I can let any of you out of the room. We're gonna lock the doors. 
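The idea of testing that the implicit and explicit contracts match between two clouds can be sketched very simply. This is a minimal illustrative sketch, not Tempest or RefStack code: the client classes and their `run_instance` method are hypothetical stand-ins for real EC2-style API bindings, and a real suite would exercise live endpoints instead.

```python
# Behavioral-equivalence ("contract") testing sketch: run the same
# operation against two clouds and compare the observable behavior.
# FakeCloudA/FakeCloudB are hypothetical stand-ins for API bindings.

class FakeCloudA:
    """Stand-in for cloud A's API binding (e.g. an OpenStack cloud)."""
    def run_instance(self, image_id):
        # Explicit contract: returns an instance record with these fields
        # and this initial state.
        return {"image_id": image_id, "state": "pending"}

class FakeCloudB:
    """Stand-in for cloud B's API binding (different software underneath)."""
    def run_instance(self, image_id):
        return {"image_id": image_id, "state": "pending"}

def check_contract(cloud_a, cloud_b, image_id="ami-12345678"):
    """Issue the identical request to both clouds and check that the
    observable behavior matches: same response fields, same initial state."""
    a = cloud_a.run_instance(image_id)
    b = cloud_b.run_instance(image_id)
    same_fields = set(a) == set(b)
    same_initial_state = a["state"] == b["state"]
    return same_fields and same_initial_state

compatible = check_contract(FakeCloudA(), FakeCloudB())
```

If cloud B returned, say, a different initial state or an extra required field, the check would fail; that's the kind of implicit-contract drift that breaks applications moving between clouds even when the API syntax matches.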
We're close, that's the definition of it, though. It's one way, but it might be multiple directions. So what do you think about that: some type of standard for metadata, so that whatever the cloud is, it can recognize the characteristics and the needs of the application, so that it can move back and forth as needed?

Right, so there's expatriation, which is the opposite direction, from private to public, right? And then you're asking about whether there should be a metadata model attached to the application or the cloud? To the application, not the cloud. Okay, so can you attach metadata to the application to express what its requirements are for the underlying system, and then the underlying system can either meet those or not? Sure, I mean, I think it's fair. I think metadata is messy, and I think that it's hard to manage, and it's hard to get people to agree. You look at things like XML standards, and it can get really messy very quickly. And building one single metadata framework for, say, HPC apps as you would for a web-scale app as you would for some kind of custom financial services app, I don't know, it feels a little difficult, and I don't think we've moved things far enough that we actually know what all the sets of behaviors are that an application requires. There's a lot of interest in aggregators and things that enable that kind of broker layer. You can't have aggregators until you've got a standard, until we've standardized. I mean, that's all there is to it. If all of the cars used different gasoline, we wouldn't be driving.

Anybody else? Oh, you waved back there. So you had a question? Anybody? So this may not relate to this topic, but as a cloud pioneer, what do you think is the biggest challenge of OpenStack today? Can you ask the question one more time? Yeah, what's the biggest challenge of OpenStack today? The biggest challenge of OpenStack today? 
I'd say that the biggest challenge is that there's a disconnect between the user base that wants one thing, the developers who want another thing, and the operators who want a third thing. That's the single biggest disconnect. Hopefully the summits will help us get through that. The users want interoperability and standardization. They want a common reference architecture. But the problem is that the vendors and a lot of the developers don't want that. They want it to be a very pluggable solution. And part of that's driven by the fact that OpenStack has been a very inclusive environment. If you suddenly go out and you say, no, we're not gonna do networking like that, then you shut the door on some people who wanna make networking work a certain way in OpenStack. So the inclusivity, the thing that's made so many people get around the table and made it go fast, is also the thing that makes it more of a framework than a system, right? And then you've got the operators, who also are looking to some degree for interoperability, but many times they drive requirements into the developers that don't necessarily make sense. Like, I'm a VMware shop, we only use VMware. The user does not care what the hypervisor is; only the operator does. But now he's forced the developers to figure out how to make VMware work with OpenStack. And it works like crap with OpenStack, right? And it's not ever gonna get any better, because of the VMware architectural model. So that's the biggest challenge: the disconnect between those three constituencies.

So, when you're talking about hybrid clouds, what do you think will happen to the data? Where does the data sit if you're computing in the public cloud as well as on-premise? How do you make the data transfer work? What do you think will be the future of where the data resides? 
The data resides wherever your application decides it resides. And I'm not trying to sidestep the question, but you're gonna design it around particular goals. So, going back to the example I gave of somebody deploying a private cloud in Belgium who wants to use Amazon Web Services, they can split the application into pieces, right? I mean, we all use a kind of service-oriented architecture, and they can leave the data primarily in Belgium and have the front end, or maybe data processing services with anonymized data, running on the Amazon Web Services cloud. So that's an application design decision kind of thing.

Hi, Randy. My question is, which role do you think new technologies like containers, well, a container is not a new technology, but Docker and containerization, will play in transforming what you call the car into another car on top of the infrastructure? What if the cloud provider is not providing the car, but just the fuel, and then you have a car on top of it? I don't know if my question was clear.

I think I understand the question. If I don't and I'm not responding correctly, then you can ask it again or ask a variation. Containers are going to make a very large difference, and the reason is actually something very simple. The reason is that virtualization is completely worthless for many workloads. You look at something like Hadoop, look at many web-scale applications. Many customers that we've helped come off of Amazon Web Services, the first thing they said is, do you have an option for bare metal? Because my Java app is tuned to run at 128 gigs of RAM and 16 cores on 15 spindles. I'll use the whole box, right? Zynga did this. They moved from using three virtual servers to using one physical server at a time. So in that case, virtualization doesn't add anything, and the virtualization performance penalty and the complexity do not make sense. 
But taking a bare metal system and putting containers on it, and using containers as a way to package and maintain the image that goes on the bare metal, is a very valid way to actually make bare metal work, because there's almost no performance penalty for a container. So I think we're gonna see the emergence of containers at the very least because they make bare metal more viable. And then in terms of how people are gonna be more or less clever with them, and the different platform-as-a-service systems, that's a bigger topic that I wouldn't wanna get into right now. Does that answer your question?

Yeah, you almost answered the question. I saw, three weeks ago, that CoreOS has been funded by Sequoia. So there's a large market for... Who got funded? Sequoia funded CoreOS. And they are developing an operating system for bare metal, based on Chrome OS, with Docker and API stuff. Exactly, yeah. So they are going exactly in that direction. So as far as I see, in the meantime, while all this stuff gets sorted out and understood, developers are organizing in some other way, setting up their minds and doing things some other way. So containers may be another option they have to move workloads.

Absolutely. I mean, Docker's in OpenStack as of Havana, right? There's a renaissance happening around containers. It's an old technology. It's always been too clunky. And what somebody like Docker is doing is making it so you have to worry less about the technology and you can just use it. You don't have to understand how it works, and I think that's the big breakthrough. All right? Thank you. Any other questions? Am I out of time? Am I still going? Yes. Yeah, again, I got a lot of good comments; even people at the OpenStack Foundation, the staff, steal some of my slides from the State of the Stack, the earlier version in springtime of this year. So I'm gonna do the reprisal on Thursday morning. 
I've got a ton of new data. I've integrated a bunch of select data from the OpenStack user survey; you'll get a chance to see the user committee from the board give that presentation. And I'm also integrating more data from the RightScale State of the Cloud report. So, you know, I like graphs as much as possible; I sometimes wind up with a lot of text. But the State of the Stack is pretty well baked, and I think it'll be worth your time to spend some time watching that talk. So just sign up there. All right, thank you.