OK, so let's go ahead and get started. So thank you for coming. This is the panel on application portability, of course. My name is Everett Toews. I'm a developer advocate at Rackspace. And I'm also on the project management committee and a committer on Apache jclouds. jclouds is a multi-cloud Java SDK that works across Amazon, Google, OpenStack, Rackspace, HP. And so one of its purposes is to enable application portability. And I'd like to introduce you now to our fine panelists here, I guess, starting with the man with the mic. I'm George Reese, the former CTO of enStratius, which was acquired by Dell a few months ago. So now I'm a director there and focused on multi-cloud management applications across multiple clouds. Randy Bias, CEO of Cloudscaling, a long-time cloud guy, both building infrastructure and deploying apps on top of it. I'm Yaron Parasol. I'm VP of product management for GigaSpaces. We have Cloudify, a product that orchestrates and manages applications over different clouds and heavily uses jclouds. Hi, my name is Hunter Nield. I'm with Morphlabs. And my experience is really along the lines of using the Fog library and contributing a lot of the code that came around through Essex and improving that over time to really enable movement between clouds on the Ruby side. Cool. Thanks a lot. OK, so I think there are many aspects to application portability. There's API, network topology, cloud provider capabilities. Even billing could be considered in application portability, and the kinds of images that are available on the various clouds as well. So I invite you to start thinking about your questions across those aspects or others and work it down to a question you can get out in 30 seconds or so. And I'm going to start things off here with a bit of a softball, just to warm us up, and that is application portability across different clouds, whether it's public or private. I mean, is it really a practical concern? Is it achievable? 
What have been your experiences actually porting applications between clouds? I think it's actually a very difficult problem. I don't think there's necessarily a simple solution at the moment. I mean, I think we do have a lot of good libraries out there, like Fog and like jclouds, that are really helping enable sort of a common compatibility layer. But there are certain things that are really not simple or easy to do when you're dealing with applications at various different levels of complexity, and features that are provided across the clouds. So I agree that it's not an easy task. We've been doing that. We've been dealing with that for the last three years or so. I think it goes much beyond API, and there are some things that you must follow if you want to have cloud application portability. So one of them, I think your application code should not be familiar with the cloud at all. It should be completely decoupled from the cloud. Otherwise, there is no chance for portability unless it's a one-time exercise, and then you spend a lot of effort, and it might happen. Yeah, I wouldn't consider that particular aspect part of application portability if you're picking a whole thing up and moving it over entirely. I mean, that's more of a migration than a port. Exactly. So we're talking about other use cases, such as DR, or cloud bursting, and hybrid clouds, all of those use cases. You can sum them up as hybrid cloud. They have different flavors. So I think one rule of thumb would be to decouple your application completely from the underlying environment. And so for the application, the cloud should be nothing more than an operating system, in my opinion. And then if you want to consider using some layer of abstraction, it should be one aspect of your application. So it should be concentrated in one place. And even in that case, jclouds would do a great job, but would not cover all of it. It's not going to provide you with 100% compatibility. 
So you'll always have to go to a certain extent to the native API. And so if you're doing it with one aspect, it becomes much less of a painful exercise. It's much more practical. There are other aspects, but I'll let other people speak. Well, so I don't think first off you can separate out application migration. I think application migration is a use case for application portability. You can't have application migrations, at least not in an automated fashion, unless you've got a portable application. There are many different elements, depending on what you're trying to do with your application. You don't have to fully decouple your application from the underlying cloud. You can, for example, use Amazon RDS and have mechanisms in place so that you're not necessarily relying on RDS if you want to have that application port to other clouds. If you want to do some sort of automated migration, then you need orchestration on top of that. But one of the, I guess, important things is that a lot of the reason this comes up in the context of OpenStack is because, first and foremost, a lot of people originally got excited about OpenStack as if it were going to be the thing that delivered portability between clouds. And OpenStack no more does that than VMware does that between VMware clouds. And there are complicating factors within OpenStack. Sometimes I'll say that OpenStack isn't compatible with itself, and that creates problems just within OpenStack. And then the other thing is people look to API standardization as maybe some sort of magic tool. But a library, and I actually have written one, it's part of the Multi-Cloud Manager enStratius product, a Java abstraction library for clouds, Dasein Cloud. You can write against that and work against any cloud, but that still doesn't give you application portability. 
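The "concentrate the cloud in one place, with an escape hatch to the native API" idea the panelists describe can be sketched roughly as follows. This is a minimal sketch with hypothetical names; the fake OpenStack driver and its dictionary "native client" are stand-ins for a real SDK, not any actual library's API.

```python
from abc import ABC, abstractmethod

class CloudCompute(ABC):
    """Portable surface: only the operations every supported cloud offers."""

    @abstractmethod
    def create_server(self, name: str) -> str: ...

    def unwrap(self):
        """Escape hatch: hand back the provider-native client for the
        calls the portable layer can't cover."""
        raise NotImplementedError

class FakeOpenStackCompute(CloudCompute):
    """Stand-in for a real driver; _native mimics a native SDK client."""
    def __init__(self):
        self._native = {"keypairs": []}

    def create_server(self, name: str) -> str:
        return "openstack-server-" + name

    def unwrap(self):
        return self._native

# All cloud-specific code is concentrated here; the rest of the
# application depends only on the CloudCompute interface.
compute: CloudCompute = FakeOpenStackCompute()
server_id = compute.create_server("web-1")
native = compute.unwrap()  # drop down to the native API only when unavoidable
```

The point of the sketch is that porting then means swapping the one concrete driver, while the occasional native-API call stays visibly quarantined behind `unwrap`.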
So if we solve the API issue, we still don't get application portability, because I think the secret to being able to operate an application in any cloud is really to, first and foremost, start with best practices around developing shared-nothing, horizontally scalable, distributed applications. The nuances of APIs and images and stuff are secondary concerns. Not to plug myself, but I'm going to. I went into this topic on hybrid cloud on Tuesday during the AWS repatriation talk I gave. The video is up. You should go look at that, because I'm probably going to repeat a lot of what I said there, and that's going to have more detail. I think that this is really simple. If you want to solve the application portability problem, you have to accept the fact that all systems, especially cloud systems, basically have three key components. At the top, you've got the API. In the middle, you've got semantics combined with architecture, and at the bottom, you have behavior. And what I mean by that is that when I say spin up a VM on Cloud A, and I say spin up a VM on Cloud B, they need to do the same thing. They need to have the same behavior. If on Cloud A, the VM spins up in five minutes, and if on Cloud B, it spins up in 60 minutes, I have a problem, because when I designed my application deployment management framework on Cloud A, it assumed that VMs come up in five minutes, or it takes action. When I go and I run that on Cloud B, it doesn't behave the same way. And it doesn't matter what APIs are in place. The problem is that the behavior has to be the same, and that's why when you look at conformance tests, they test the behavior of an API or a system, they don't just test the API itself. And I think the thing that people really get stuck on is that they think about the APIs, they think about abstraction layers, and what they don't realize is that the application is not running in a vacuum. The infrastructure it runs on matters; the behavior of the infrastructure matters. 
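The spin-up-time example boils down to a hidden behavioral assumption baked into the deployment framework. A rough sketch of making that assumption explicit and tunable per cloud; the function names are hypothetical and the lambdas simulate status polling, not any real SDK:

```python
import time

def wait_for_active(poll_status, deadline_s: float, interval_s: float = 0.01) -> bool:
    """Poll until the server reports ACTIVE, or the behavioral deadline
    for this cloud expires. Returning False instead of hanging makes the
    spin-up-time assumption explicit, so it can be reconfigured when the
    same framework is pointed at a slower cloud."""
    start = time.monotonic()
    while time.monotonic() - start < deadline_s:
        if poll_status() == "ACTIVE":
            return True
        time.sleep(interval_s)
    return False

# Simulated clouds: A is ACTIVE immediately, B never gets there in time.
cloud_a_ok = wait_for_active(lambda: "ACTIVE", deadline_s=0.1)
cloud_b_ok = wait_for_active(lambda: "BUILD", deadline_s=0.05)
```

Here `cloud_a_ok` is `True` and `cloud_b_ok` is `False`: the framework notices the behavioral mismatch instead of silently breaking on Cloud B.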
OpenStack Cloud A, I have floating IPs with auto assignment. OpenStack Cloud B, I have no floating IPs with auto assignment. And so now when my framework on A is basically designed to assume that every VM that spins up has an IP, when I take it to OpenStack Cloud B, it breaks. And that's the fundamental problem. We have to come up with some reference architectures, and then we need to be able to test them, so each flavor of OpenStack behaves the same way. And then application portability between those flavors will actually work. Okay, so it sounds to me like, obviously, there's no silver bullet, and I don't think anyone was coming here expecting to get a silver bullet for application portability. And it is the kind of thing that you need to, I would say, consider almost upfront, if possible. If you're building a cloud native application and you don't wanna be locked in, you gotta give some thought to application portability upfront. Do we have any questions from the audience? Would anybody like to pick the brains of our panel here? Sure, oh, just wait, we'll get your mic there. I didn't have a question, but I had a follow-up comment on the application portability. So as you said, application portability is a really hard problem, especially when you are talking about portability between different kinds of cloud. For example, OpenStack to AWS. It's a very different and very complex game. It's hard enough. I think a more achievable target is to achieve portability within clouds of the same nature. For example, portability between different OpenStack clouds from different vendors, or between a private and a public OpenStack. If we achieve that as step one, that would be a huge gain from a practical portability perspective. Yeah, I mean, that's not exactly true though. That's my point, right, is that I can take OpenStack and I can make it look exactly like Amazon Web Services. 
I can make it behave almost identically. That's my product, that's what I did, right? I can take two versions of OpenStack and I can make them behave completely differently, right? The problem is in the behavior of the clouds themselves. It's not in the software stacks. If you look at Google Compute Engine and you look at Amazon Web Services, they're 95% semantically, architecturally and behaviorally overlapping. It's very, very easy to port an application from one to the other. You support both of those, right, George? So how easy is it to take a deployment on the Multi-Cloud Manager and deploy that same deployment on either Amazon or Google and have it act the same? Those are easy. It's when you try to take something, to your point, from Amazon to IBM's SmartCloud, which is completely indeterministic in how long it takes; it could take two minutes to deploy a VM, it could take literally 24 hours for a VM to deploy, and it's one of the reasons it's going bye-bye, I guess. But the point, to Randy's point, is that if you've got orchestration logic that times out after 15 minutes, then you can't rely on SmartCloud as being a functional cloud, period, and application portability goes out the window, whereas if you've got Amazon and Google and they have the same behavioral semantics, who cares that the APIs are completely different? That's a completely irrelevant issue. It's that you can rely on them having the same behavioral characteristics. Yeah. So given the fact that we can agree no cloud is ever gonna perform exactly the same between multiple different vendors, for the mission of providing application portability, do you think it would be better if there were an abstraction layer similar to the Google App Engine that everyone could agree upon? And so for the purposes of app development, develop your app against that abstraction, where something below that handles the distribution and portability. 
So sorry, just to clarify, are you kind of suggesting or talking about platform as a service, effectively? Well, not exactly platform as a service, because that's a little bit different. What I mean is actually a way of, if you look at Google App Engine, not Compute Engine, but the App Engine, when you develop an app against that, you literally put it into the cloud, and the whole idea is that it abstracts the access to the database behind it and everything else, so you have a common set of tools that would, in theory, if the abstraction worked properly on different clouds, provide the same set of resources. So it shouldn't be taken from the fact that, so we've put this, Google and Amazon behave alike, good; SmartCloud, different, bad. And that's not necessarily a good breakdown, because different clouds should behave differently, otherwise they add no value. No, I agree. And so, when you look at it from an individual application perspective, an individual application requires certain performance characteristics. Now, in some cases, the best way to deal with that is to leverage a PaaS like Google App Engine. In other cases, the best way to deal with it is to leverage an orchestration tool like a Multi-Cloud Manager or any of the competitors that will worry about the behavioral similarities of different clouds for you, so that you don't have to. So orchestration is another way to get a guarantee of similar performance characteristics across different clouds. So I definitely agree about orchestration, and not about the PaaS, because, first of all, PaaS is quite simplistic. It's a black box model. Usually it doesn't suit many applications, many mission-critical applications. And if you look at Java application servers as an example of, let's say, a previous generation kind of application portability experiment, it never worked. 
So even with Java, I think a lot of people would agree that if you write once, you can run it anywhere, at least from an API perspective, but even across JDKs, the IBM implementation never worked quite the same way. But more than that, the different containers never worked in a fashion that you could have ported an enterprise application from one application server to the other without any effort. I would say that I tried it a few times and it was quite painful. Hunter, did you have anything to add? I think the biggest issue that we're sort of all really talking about here is consistency in that behavior. And if we're really needing to look at the consistency you see in terms of launch time or in terms of functionality that's being provided by a cloud, you may not be able to abstract all of those sorts of things away into a platform container or anything else. So finding a way to understand your application and see the requirements it actually has, and being able to choose the providers you're actually using, is still a very fundamental part of it. Do we have another question over here? Randy's got something to say. Oh, sure. Maybe we'll move it back to the back. Yeah, so I got to call bullshit on some of this. I mean, the reality is that there's only going to be a few flavors of cloud, right? I mean, imagine if every power company provided you a different kind of power. Like, you'd be totally screwed, right? I mean, look at the diversity of applications that consume electricity. The reason there's a diversity of applications and use cases is because the electrical system is standardized. It's because of that. You have to have standardization at the bottom. That's why the highway system works. That's why TCP/IP works, right? What's the level of interoperability between any two load balancers, right? It's crap, right? 
But what's the level of interoperability between two IP routers? It's very, very good. The reality is that over time, we're going to have standardization occur at the infrastructure layer. That's the only way to make this stuff work. And behavioral, copying behavior is really not as hard as it seems. It's a task, but let me give you an example. How many people are familiar with Android? OK. How many people know that Android runs something called Dalvik, which is their virtual machine that's a replacement for Java? How many people know that it is 100% bytecode compatible with Java? It's a complete and utter copy of the Java virtual machine, behaviorally and in API. And Java code runs on it natively with no problems. And there's no way that Google could just rip off the source code. They had to rewrite the thing from scratch, measure the behavior of the Java VM, and recreate it. And that's how you do it. That's how you make it happen. So then we just have to decide, what are the reference models? Is it AWS and Google Compute Engine? Is it vCloud Director? Is it some other thing? I think you know where my bet is already. I don't know. So maybe just a quick, quick answer. Can I plug this into 50 amps? No, you can't. If I go back to your metaphor of electric power, so the interface would be the same, right? It's the same socket that you're using. It's the same voltage. But first of all, it's quite a simplistic service compared to, I would say, enterprise applications. And from an SLA perspective, I would claim that different vendors would not give you the same SLA, even with electricity. And there were a lot of problems with power outages, I think especially in California, and in other places where they have different vendors providing electricity. They had problems with that, because you cannot rely on some companies to provide the same quality of production of electricity as other companies. So it would be the same thing with the cloud. 
The cloud is a virtualized data center at large scale. It's a very complex operation. You can never make it be exactly the same, even with the same code stack, like OpenStack, when one company operates it and another, different company operates it. It's different hardware, different people. It will always be different. Absolutely not. One deployment of Open Cloud System, my OpenStack product, looks exactly like the other one, exactly. 100% down to the nuts and bolts, regardless of the hardware actually used under it. And if you're talking about the California power outages in like 2000, 2001, those had nothing to do with the transmission system. They were directly related to Enron and the deregulation of the power industry and the inability of the power companies to actually buy power. If you look at the way the power grid works, it actually operates within a fairly narrow range and it's very standardized. So an interesting tangent. Maybe we'll head back to a question in the back here. Sorry, we have a gentleman back here first. So Randy, people are going to buy solutions other than yours, in addition to yours, right? So how do you get other people on the same page as you as far as showing behavior dynamics? Is there some sort of behavioral description language that shows SLAs, or what are you thinking as far as kind of getting everybody on the same page? Yeah, that's a great question. So the way I look at it right now is that OpenStack is a framework. You can use it to build any kind of system. That's great. Some people are going to use it to build HPC clouds, right? Party on. Go do that. So what we need to do is we need to start thinking of the different sort of flavors of OpenStack. That's what I mean. It solves specific problems, right? What Argonne National Laboratory is doing with OpenStack, which is all on InfiniBand, is quite different than what you and I are doing with OpenStack, right? So my flavor is what I call elastic cloud. 
It's meant to look like Amazon and Google; there are other completely legitimate flavors. And so we need to have, there's a project right now called RefStack that's trying to have a set of Tempest tests that can do basic conformance testing of one of these kinds of flavors. And so what we need to do is we need to start adding more and more tests in there. And we need to figure out what the handful or dozen of different flavors are. And then the marketplace and customer adoption will cause some of these to succeed or not. And then OpenStack will still be standing tall because it is flexible. So are we kind of talking about the emergence of a de facto behavioral standard with RefStack and the testing in it? I mean, that's what it does right now. RefStack and Tempest test EC2 behavior, for example, as well as the OpenStack behavior in the system. They do things like spin up a virtual machine, make sure it comes up, make sure it's got networking. But it also does the timing on that as well. I don't know if that's currently a test or not, but that's an example of the kind of tests that need to be added to expand the testing in the current systems. Sure. I don't want to bogart them. No, yeah, yeah, I was about to say. Did anybody want to expand? Or we'll take another question here. We'll leave that mic up here if we could, just a moment. I think we need to, Randy, this is actually a response to you. I tend to agree with him that there can be differences. What I mean by that is we could say, hey, look, all these OpenStack implementations are similar. But we have to bear in mind that that's just the API. The underlying hypervisor itself, when that changes, and I see us forgetting that a lot. We all talk about moving to OpenStack, whereas we're just moving to the OpenStack API. And the underlying hypervisors can be different. And we all know from experience that each hypervisor has its own pluses and minuses, depending on the distros and everything. 
So if you were to run an application on, say, Ubuntu-based VMs and they work a particular way, tomorrow when you move over to, let's say, Red Hat's KVM, that has to be tweaked and tuned in a different way. So even when you're migrating your app between KVMs, but from different vendors, you'll have to pay a bit of attention there. The OpenStack API would be the same, but your application now starts to... And your question is? And my point is that when you talk about app portability, you have to bear in mind that it's not enough that just the API behavior is the same. Your application performance characteristics will vary depending on the underlying hypervisor. So it does vary. You cannot say that it will be the same. So I think that you do want different clouds that have different behavioral characteristics for different types of applications. And like I said, provisioning in 24 hours is bad any way you look at it; there's no scenario in which that's what you want. But you may have, so one of the things you encounter, a lot of times people will say, why do we need to worry about multi-cloud? We're dedicated to OpenStack. Well, in an enterprise with thousands of applications, you probably don't want one big giant OpenStack cloud. You probably want multiples, you want geographic redundancy, and you probably want some with higher performance characteristics, some more commodity generic ones. And that's all good. Applications are not all alike. The important thing is, when you are building a cloud, and the value proposition of what Randy does is he can give you a cloud where he can tell you what those behavioral characteristics are and you can expect them. And that's a really important thing to expect out of your clouds. I mean, you look at something like Joyent. Joyent doesn't do a lot, but what they do do, you can predict how they're going to behave and act. 
And the predictability is the important thing, because you can't orchestrate, you can't run a PaaS, and you can't just plain put an app out there and appreciate how it's going to behave. You can't predict it if the provisioning time is between two minutes and 24 hours, or you get these IO hiccups that last for 10 minutes at a time, randomly out of nowhere. You wanna get in there, Hunter? Any interest in adding to the behavioral? The reality is that I think cloud behavior is not even well defined yet in the API. So for example, network SLAs, I don't think they're that well defined, affinity, anti-affinity, and so on. Yeah, exactly. And that's a major aspect of application portability. So I think the good news for the SIs is that there's always going to be work for them. And like I said, I still think that the application itself cannot take care of those differences. And there should be an external tool that inspects that, and even with dynamic changes over time, with peak loads and stuff like that, you need to scale your application, change it according to the performance. The performance may vary. I think just briefly going back, there was a mention of one cloud running Ubuntu and one cloud running Red Hat or something along those lines. And I think that adds so much more complexity to the mix and is obviously to be avoided at all costs. I think the comment there was the underlying hypervisor and the differences. But there is, I mean, the differences there, and that's where the standardization on behavior, and I think RefStack becomes a really good part of that, to enable that consistency and certainly help out the OpenStack providers to offer something that is actually going to be reasonably consistent, and whether or not it's going to be performant, and offering benchmark testing and those sorts of things as well. You know, it's probably a very important consideration in this discussion. 
Let's go back to the audience here. Before I ask my question, I'd actually like to preface it with a couple of statements of assumption: assuming for a moment that not all of the cloud providers that I'm wanting to write an application for are completely homogenized across each other, and assuming that I am not entirely decoupling everything that my application is doing from whatever cloud I'm wanting to put it on eventually or port it to eventually. What can I, as an application developer, do to leverage not just the common functionality across all cloud providers but actually leverage explicit advantages that each cloud provider might provide me? So in particular, for jclouds, I know that we do have some portable abstractions. There's an abstraction layer that does the commonalities across the supported cloud providers, but it actually makes it quite easy to drop down a layer. It's just another call to get the specific, say, OpenStack API, and you can start making OpenStack-specific calls. And then if there happens to be some very specific cloud provider functionality even within the OpenStack ecosystem, you can take yet another step down, just another call, and you can get a hold of the handle on the API for that very specific functionality. Or a follow-up? Following up on that a little bit, is there anything in existence right now that would let me as an application developer be able to reference common functionality trivially, but also request from what is provided, what cloud it is that I am on, and based on that, selectively? So you're talking a bit about, like, discoverability of what features the cloud is providing? To essentially ask, what cloud am I on, and based on that, if I'm on an Amazon cloud, then I know that I can improve my performance by making these additional calls. If I'm on Azure, then make these calls. Is there anything existing that could help me with that? 
The Fog library itself will have certain capabilities, and it can differentiate between vanilla OpenStack running on HP or Rackspace or any of those sorts of things which have different capabilities, but there's no intelligence behind that. You just need to know what you're running on. So I don't think there's necessarily a magic solution for that at the moment. I don't know if that's something that really makes sense, does it, I mean... So the key to that is first to start off with abstraction. So, for example, let's say you have an application that is interacting with a key value data store, and so you're using SimpleDB in Amazon. In most other clouds, you don't have an analog to that. So you could do something like, I mean, Dasein Cloud has a metadata capabilities description thing that would allow you to know, okay, this cloud I'm in doesn't have a key value database, but that doesn't help you if the cloud doesn't have a key value database. What you really need is to have your application talking to an abstraction, some sort of basic interaction API, that if it's in Amazon, it will talk to SimpleDB. Otherwise, it will provision a VM and bring up Riak or something else and talk to Riak. So the application itself is completely ignorant of the fact that it may or may not be using a cloud key value data store. In my opinion, it's better to have, I mean, this may be a little self-serving, I think you need orchestration up there so that you have less logic sitting in the application, but without orchestration, you need that abstraction there that makes it not care about that. I would like to differentiate between most of the cloud APIs, which are more administrative in nature, okay, provisioning stuff, configuring stuff, and some of what you're referring to, which is more like programmatic APIs for actually accessing data, et cetera. 
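The SimpleDB-or-Riak pattern just described can be sketched minimally like this. The store classes here are in-memory stand-ins, not real SimpleDB or Riak clients, and the factory is a hypothetical name; the point is only where the cloud-specific decision lives.

```python
class KeyValueStore:
    """What the application codes against; it never names a product."""
    def put(self, key, value): raise NotImplementedError
    def get(self, key): raise NotImplementedError

class SimpleDBStore(KeyValueStore):
    """Would delegate to SimpleDB on AWS; in-memory stub here."""
    def __init__(self): self._data = {}
    def put(self, key, value): self._data[key] = value
    def get(self, key): return self._data.get(key)

class RiakStore(KeyValueStore):
    """Would talk to a Riak VM an orchestrator brought up; stub here."""
    def __init__(self): self._data = {}
    def put(self, key, value): self._data[key] = value
    def get(self, key): return self._data.get(key)

def make_store(cloud: str) -> KeyValueStore:
    # The only place that knows which cloud we're on.
    return SimpleDBStore() if cloud == "aws" else RiakStore()

store = make_store("openstack")
store.put("user:1", "alice")
```

The application calls `put` and `get` and stays ignorant of whether a hosted key-value service or a self-provisioned one sits underneath, which is exactly the decoupling being argued for.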
So I think there's nothing new in that kind of exercise of abstracting, for example, databases, or even abstracting relational databases for object-oriented applications; ORM tools have existed for a long time. And as a developer, you should know that there's always a price, there's always a penalty and a trade-off in using layers of abstraction, and that is performance. And again, as I think was said before, not all applications are created equal. From an administrative perspective, I think all of what we discussed in this panel applies. I think external orchestration that can handle the differences in behavior applies both during setup and later for scale, for remediation, and so on. Randy, did you wanna add anything to this discussion, or? I feel like I'm lighting a fuse. Discoverability is not what it should be in these systems. There's a challenge there because the API is basically an explicit contract about what you can expect from the system, but because of the nature of any kind of system, you can't model the entire system in the API. It's only a snapshot or a window into behavior that you've decided to expose; things may not be in there. So an example is, why can't I call a cloud API right now and say, spin up a VM in five minutes, and if it doesn't spin up in five minutes, kill it and give me a VM in five minutes again? You can't do that. You can't actually bake the SLA into the API call. You have to learn a bunch of the behaviors through trial and error. It's like a discovery process. And so a lot of the cloud providers and a lot of the cloud systems don't actually provide the discovery process, to provide the discoverability that would be necessary so that you can do what you're asking in a way that is more programmatic and makes sense. Instead, it's a manual trial and error process. You've got to use jclouds or Fog or something like that. 
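The "spin up a VM in five minutes or kill it and give me another" call that no cloud API offers natively could be approximated client-side, roughly like this. All names are hypothetical, and the lambdas simulate a cloud where the first VM misses the deadline; nothing here is a real provider API.

```python
def provision_with_sla(provision, wait_active, destroy, deadline_s, max_attempts=3):
    """Request a VM; if it isn't ACTIVE within deadline_s, kill it and
    ask again, up to max_attempts. This is the SLA 'baked into the API
    call' that the panel notes today's clouds don't offer natively."""
    for _ in range(max_attempts):
        vm = provision()
        if wait_active(vm, deadline_s):
            return vm
        destroy(vm)  # missed the deadline: reclaim and retry
    raise TimeoutError("no VM met the SLA after %d attempts" % max_attempts)

# Simulation: the first VM never comes up, the second one does.
launched = []
vm = provision_with_sla(
    provision=lambda: launched.append(len(launched) + 1) or launched[-1],
    wait_active=lambda vm, deadline: vm >= 2,  # VM #1 misses the SLA
    destroy=lambda vm: None,
    deadline_s=300,  # the behavioral assumption, stated explicitly
)
```

In the simulation, two VMs get launched and the second one is returned, which is the kill-and-retry behavior the speaker wishes the API contract itself could express.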
And you've got to sort of figure out how to determine which cloud you're on and then change your behavior through business logic. And you have to do each of those little pieces one at a time depending on what you're trying to accomplish. It reminds me... Let me take the mic over there. Reminds me a bit of writing a bash script for Linux when you're not even sure what flavor of Linux you're gonna run it on. And so you have to do a bunch of little tricks to find out, you know, maybe using uname or whatever, or looking at some particular files for particular distributions. If they're there, okay, well, now I know I should use apt-get or I should use yum or whatever. It's obviously probably a little more complicated in the cloud, but it just made me think of that. So we have time for one more question, and go right ahead. And it's a dumb question, you know. Ha ha, perfect. I have a customer in Vietnam and they are using CloudStack, and they're telling me that OpenStack is good, but it doesn't matter because in the future they will probably move from CloudStack to OpenStack. What I find is that jclouds actually doesn't support CloudStack. So what does it take for CloudStack to be in jclouds? jclouds supports CloudStack. It does? It does? Yeah. Oh, I couldn't find it on the website. So actually, by using the jclouds layer, they can move from CloudStack to OpenStack easily. If they're using that portable abstraction there, but go ahead. And if they're using the common denominator. One of the things I don't like about jclouds is that you can write against an abstraction or not. But, you know, Dasein Cloud supports both and it's fully abstracted. So you write it and it works against one, it'll just work against the other. There's no, as long as you make, you know, the proper metadata checks for capabilities. 
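The bash-script analogy above, probing for apt-get versus yum and branching, maps directly onto the trial-and-error cloud detection being described. A rough sketch, with hypothetical names and capability strings; the sets stand in for PATH lookups (e.g. `shutil.which`) or a cloud's capability metadata:

```python
def pick_package_manager(available):
    """The shell trick from the analogy, in Python: probe for what's
    present and branch on the first known tool."""
    for cmd in ("apt-get", "yum", "zypper"):
        if cmd in available:
            return cmd
    raise RuntimeError("unrecognized distribution")

def pick_ip_strategy(capabilities):
    """The same idea one level up: since clouds expose little formal
    discoverability, business logic asks what this cloud supports and
    branches (the capability names are invented for illustration)."""
    if "floating-ips" in capabilities:
        return "assign-floating-ip"
    return "use-fixed-ip"

manager = pick_package_manager({"yum", "rpm"})
strategy = pick_ip_strategy(set())
```

Here `manager` comes back as `"yum"` and `strategy` as `"use-fixed-ip"`: each probe result feeds a branch, one little piece at a time, exactly as described.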
Yeah, I think — like the way you summed up the difference between jclouds and then the multi-cloud manager — I think there are just different use cases. Well, that's Dasein Cloud, not multi-cloud. Dasein. Dasein Cloud. Yeah. Dasein Cloud is an open-source product. Right. Yeah. And you're welcome to try using Cloudify to port between CloudStack and OpenStack.

Go ahead — just one more. So I have no idea about jclouds, and obviously it's my first time at OpenStack. I know it's mostly about compute, network, storage, right? So my question is: you talked about application portability at the beginning, and you said we should decouple it from the specific clouds that we expect, right? Normally, our applications are not directly communicating with network, compute, and storage. They are communicating with things like queuing and, you know, an execution layer and so on — the kind of RDS and all this kind of stuff which goes into PaaS, right? So mostly what I heard here were issues around behavior, around non-functionals, effectively — for porting applications, for actually bringing them up, right? Do you guys think that applications in the future — I'm talking about business-logic applications — will actually query and execute these calls by themselves? So are we actually talking about the standardization or non-standardization in the application code itself? Or are we talking about the existing non-standardization in the behavior of the clouds that we have today? Of course, at the end of the day, if you talk about a service — and I mean IaaS, right, infrastructure as a service — the service is always defined, as I guess you said, by functionals and non-functionals, right? And if we don't test against that, if we don't make policies and basically promises on how quickly these things execute, then obviously we are not compliant, right? Right.
If you would do that — if you would ever be able to do that, right? Like, let's say provisioning, as you said. I mean, if the application code is decoupled from these specific IaaS calls, then wouldn't we have portability?

I think, you know, in a world that did have common behavior and this sort of testing — perhaps in a pure OpenStack world — you shouldn't have to really worry about those sorts of things, but I think it's never quite that ideal. And if you're moving between AWS and an OpenStack or CloudStack or anything else, I think a lot of testing and a lot of consideration needs to go into an application designed to run across multiple clouds. I don't know if there's any way to get around it. Last comments from anyone else?

I guess I'd say, especially if you're running legacy-type applications, the differences in networking among all the clouds are enough to screw you up big time, you know? I mean, with properly architected, designed-for-failure applications, you can get away with it. But if you're talking about deploying SAP or Exchange or something like that, the indeterminacy of the structure of the network and the behavior of the network alone makes portability really, really problematic.

And Randy, you can play us out. So I think what's key is to understand that it's about the application management framework. Like, George's job is easier, or Dasein's job is easier, or jclouds' job is easier, if there are some flavors you can rely on, right? I mean, who's running Slackware on their servers today? Mandrake Linux? Gentoo? Okay, right. So you're running Ubuntu, you're running CentOS or Red Hat, you're running SUSE, right? I mean, that's the deal. There are emergent winners, and then we all standardize on that. Now, I'm a BSD guy. I wasn't happy that Linux won. I was pretty unhappy about that, right?
But I gave it up because I knew the value of everybody running off basically the same flavor of stuff, right? And so you can build these abstraction layers and get application portability. If you know you've only got two or three things that you need to make it work against, then you can have your "if Red Hat, yum; if Ubuntu, apt-get" — hope I got that right. Yeah, I'm pretty sure I did. Hasn't been that long since I touched Linux. But the point is that then you don't have this combinatorial matrix of problems that the application deployment and management framework has to solve. I mean, George's problem — and he and I commiserated about this before; I'm putting words in your mouth, George — is that he'd go from one CloudStack deployment to another, and the network would work differently even though it's the same software, and stuff would start breaking. But if we made CloudStack or OpenStack or whatever work by defined flavors, then you'd know that when you went to the AWS flavor, you'd get AWS networking, and that would be the same from one cloud to the next. And then you'd just have to check: am I on a cloud with AWS networking? Oh, I am — okay, great, expect this. And I think that's just what I want people to understand. We're almost there. It's just that people have to stop making snowflakes, right? As long as all the clouds are snowflakes, we're kind of screwing the pooch, right? It's like everybody making and building their own car. I mean, we don't each do our own Linux distribution, do we? It just doesn't make any sense if you stop and think about it. But we're all such special snowflakes. We are.

So we're out of time here. Thank you so much, guys. I think one of the things that I've personally really taken away from the panel is the behavioral aspect of application portability. I find that really valuable, and we'll see what happens with RefStack and OpenStack and what emerges. So, very interesting. Thank you very much, guys.
And thank you all for joining us.