All right, good morning, everyone. I was told to speak a little bit louder because apparently this room has a weird echo, and I can tell, because I can hear myself speaking, and that's kind of disruptive. Good morning, everyone. I'm very excited to be presenting and moderating this panel on hybrid cloud. My name's Sebastian. I founded an open source project called Scalr, which is an open source replacement for things like Horizon and Heat, as well as a lot of other hybrid cloud management, so I've got a lot of experience working with large and small companies deploying hybrid cloud applications. With that, I'd like to introduce our panelists: Randy, Mark, and Jena. I'll let you introduce yourselves, and then we can kick it off.

I'm Randy Bias, the CEO and founder of Cloudscaling. I'm on the OpenStack Foundation Board of Directors, and I guess I've been a cloud guy, blogger, and pundit since December 2006, when I put my first application up on Amazon's Elastic Compute Cloud.

I'm Mark Williams, the CTO of Redapt. Redapt is a systems integrator that enables large-scale data center infrastructure and turnkey cloud solutions, both private and hybrid. Before that, I was actually a customer of Redapt at Zynga, where between 2008 and 2012 I was responsible for infrastructure operations, private data centers, our expansion into a lot of Amazon consumption, and then ultimately the private cloud we built, called zCloud, where we moved everything back out of Amazon.

Hi, I'm Jena Hossain. I've worked at Google for about 10 years. I'm currently an engineering director leading the cloud networking efforts; I've been doing that for about two years. Before that, I spent the previous eight years working on site reliability and production engineering, so I was part of the team that originally productionized Borg and shipped a bunch of the cluster management and networking components that we use today and that we're now trying to turn into products for public cloud.

Excellent. So hybrid cloud is a pretty hot topic these days, and it's on the checklist of just about every IT department. Why is that?

Well, when we're talking to customers, they just want flexibility. What I've seen so far is that pretty much everybody is trying to look at both public and private. They want to hedge their bets and have a little of both, and they're not sure if they're going to wind up 10% private and 90% public, or 10% public and 90% private. They're not sure if they're putting dev/test and QA outside or inside, and it seems to change vertical by vertical. The financial services guys say, we're never putting production out on the public cloud, but if we can put dev/test and QA out there, that's great. And then people like Netflix are saying, we're all in on public cloud; maybe we've got a little bit of on-site capacity, but that's it. So I think it's about flexibility and choice more than anything else.

I think another thing influencing it is that price drives a lot of change in how you prioritize the standards you might have around governance. I've seen a lot of relaxation over the past several years of those primary concerns about public cloud and the safety of data. You don't hear about catastrophic exploits in public cloud caused by hypervisors being shared by multiple tenants.
So I think the lack of horrible news there has influenced that. And I agree with Randy, too, that just having the agility, the fungibility of resources to handle unexpected surprises, certainly that was advantageous for us in deploying unpredictable workloads. So having the ability to do both is a natural conclusion.

I think a lot of it comes down to both trust and cost, and not just outright cost but also outlay: whether your primary motivation is your capital expenditures, or you want to focus on OPEX. You look at a company like Netflix, and they really want to put all of their money into serving customers, right? So they would rather pay as they go on OPEX and not invest.

No, that's not accurate. I talked to Adrian about this extensively, and what he said is that they didn't know what their growth was going to look like. They didn't want to try to build a data center and then blow through that capacity. That was number one. And number two, they wanted to enter new markets, like the European Union, without having to take down data center space, buy servers, or hire boots on the ground. That was the primary motivator for them. It wasn't so much the conversion of CAPEX to OPEX. You're making good points, and for some people that's true, but it tends to be startups, not established $2 billion companies.

OK, the last thing I want to do is get into a debate. Why not? Well, a lot of the things you just mentioned, the things they didn't want to spend on, are CAPEX. Boots on the ground, pouring concrete: that is hard work, that is outlay. It's not just money; it's also the agility to do it. But I think it's all wrapped up together, and there are definitely different kinds of motivations. In my case, initially Zynga was very happy consuming a lot of public cloud. Then, being pre-IPO, looking at that OPEX line item and how it fed into the valuation of the company, versus the ability to spend cash on hand on private infrastructure, was actually part of the catalyst that led us to build a private cloud. So again, it definitely depends on the economics of CAPEX versus OPEX internal to the business.

Yeah. I think the thing that's important to understand is that when you ask, is it CAPEX or OPEX, like you just suggested for Netflix going into, say, the European Union, sometimes it's not really about that. It's about the ability to experiment with very minimal consequences. If Netflix went into the European Union using the Amazon region there and nobody in the European Union signed up, they could just turn it all off. No harm, no foul; it's a few thousand dollars, maybe tens of thousands of dollars, to run that experiment. Whereas if you have to go take down data center space and sign long-term leases, it becomes a much bigger thing. So it is about cost, and there are factors related to money, but I think business agility is the primary driver, and cost winds up being dragged along as a consequence or a side effect.

So since we're talking about cost: I've often heard that it can be economical to own the base and rent the spike. With the recent price wars, how has that changed that model?
So I've helped a lot of customers do the ROI analysis; typically they come to Redapt looking for a path out of Amazon, feeling the cost has been incredibly high. After April 1st, the modeling I'd often do got a lot worse. That said, there are still plenty of valid requirements for people who have performance needs, et cetera. But the concept of own the base, rent the spike, I think is great when you think of cloud as just the infrastructure. If I think about one application living across that particular model, I get concerned, because we actually did this at Zynga; we were the ones broadcasting this capability, right? Most of what we were doing was taking workloads wholly off of Amazon and wholly into our zCloud, and the reasons we were doing that were cost and performance: we had built something in our private cloud that was ideal for our workloads. After we had done probably 75% or 80% of that shift back into our private cloud, we wanted to prove that we could actually burst back. So we took one game, a very stable game, not very risky in terms of value to the company, and we proved that we could do it. What we found was that doing this with one application gets you kind of the worst of both worlds performance-wise, especially with that user-facing web tier, which is exactly what you want to be able to scale and burst, right? We had built something ideally performant for our workloads in the private cloud, and when we mixed that with a public equivalent, a c1.xlarge, for example, you get all the noise that's in there, and it drags into your private cloud environment. There wasn't really much value in that. So I think own the base, rent the spike is valuable in terms of being able to move whole workloads between different clouds, but operating one distinct application across them has real disadvantages, depending on how your performance requirements express themselves through those layers.

Mark and I like to disagree on this one. The thing is, I remember when the whole notion of hybrid cloud first came out and people were talking about it, and my problem with the way they were talking about it was that they were saying, we're just going to jam any two clouds together. Well, to Mark's point, that doesn't work out very well. You've got a big gigantic VMware vSphere cluster running on Cisco UCS with a Fibre Channel SAN, and you connect down to Amazon, and it's apples and oranges; they just don't make a lot of sense together. But if you design a private cloud so that it looks a lot like the public cloud you're going to interconnect with, and there's very low latency between them, it can look like an extension of your data center. Also, it's very use-case dependent. There's a company called Teradata, a data warehousing company from the '70s, that's trying to move to a SaaS-based model. 90% of their traffic comes in during one month a year. For something like that, even if the performance is wildly different, it still makes sense. Why would you overprovision 10x for the other 11 months of the year? It just doesn't make sense.
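As a back-of-the-envelope illustration of the own-the-base, rent-the-spike economics Randy is describing, here is a small sketch. All prices and capacities are made up for illustration; they are not Teradata's numbers or actual AWS pricing.

```python
# Rough cost comparison: own peak capacity year-round versus own the base
# and rent the spike. Every number here is a hypothetical illustration.

OWNED_COST_PER_SERVER_MONTH = 150.0   # amortized private-cloud cost (assumed)
RENTED_COST_PER_SERVER_MONTH = 450.0  # on-demand public-cloud cost (assumed)

base_servers = 100     # steady-state demand for 11 months of the year
peak_servers = 1000    # demand during the one busy month (a 10x spike)

# Option A: overprovision a private cloud for peak, and carry it all year.
own_peak = OWNED_COST_PER_SERVER_MONTH * peak_servers * 12

# Option B: own the base year-round, rent the spike for one month.
own_base_rent_spike = (OWNED_COST_PER_SERVER_MONTH * base_servers * 12
                       + RENTED_COST_PER_SERVER_MONTH
                       * (peak_servers - base_servers) * 1)

print(f"Own peak year-round:   ${own_peak:,.0f}")        # $1,800,000
print(f"Own base, rent spike:  ${own_base_rent_spike:,.0f}")  # $585,000

# Even at a 3x on-demand premium, renting a one-month 10x spike comes out
# far cheaper than carrying peak capacity for the other 11 months.
```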
Yeah, I think where you have users not experiencing those potential differences, and you have an operations team that isn't going to have so many layers of technology and difference between two environments to dissect when there's an observed problem, that actually does work. For games, in our particular use case, because user experience mattered and the performance and responsiveness of the server mattered, that's where it broke down for us. And just at steady state, as an operations person, when there's a wobble, I want as few things to step through and eliminate as the fault as possible. You get a lot of noise in a blended environment for one app.

Jena, do you have a view? Well, I think you're both right. You brought up one important point: you mentioned latency. We keep bumping into this at Google internally in the way we run our workloads; it's always about data locality. People build apps and expect them to behave a certain way based on where the data is, so if you want to move the app, you tend to have to move all of the data. It's very difficult to build an app that is actually able to span. These days, even more than a couple of milliseconds is enough to really throw off a lot of workloads. So even if you have a hybrid cloud environment where you're on-prem in the same metro as a public cloud, you run into challenges where you have to build the app to be able to span that latency boundary.

Yeah, I know developers are really keen to get to a nirvana where they don't have to worry about storage and network and processors and that stuff, and they just have a magical container that auto-scales, and they put their code on it, and everything's wonderful. But latency is a key problem, and it winds up showing up in the app. There was a company called Triton Containers in San Francisco. They built an app in their data center with zero milliseconds of latency, and then they stuck it out in Singapore and found out the thing was really chatty; all of a sudden it was 100 times 200 milliseconds' worth of latency every time they did a transaction. And they're like, why is it slow? Well, the network isn't always zero milliseconds, and it doesn't have infinite bandwidth. Applications actually need to be designed to be sensitive to latency for the disk, for the network, and so on. That is totally key.

So in which cases does it make sense? What types of applications make sense to put on a hybrid cloud? Mark, you were saying it makes more sense to move a single workload wholesale. Oh, we have a surprise. Sorry about that. All right, Ariel, go ahead and introduce yourself.

Yeah, sorry, guys; for some reason I thought we were on at 11:15 instead of 11. My name's Ariel Tseitlin. I'm a venture partner with Scale Venture Partners, a mid-stage venture investment firm in cloud, IT infrastructure, mobile, and SaaS. Before that, I was running operations and cloud platform tooling at Netflix, which I think is the reason for my participation.

We value your opinion on investments too. So, we were talking about what types of applications make sense on a private cloud, what types make sense on public cloud, whether there's any use case for running an application on both simultaneously, and what types of applications are contentious there.

So, some of the use cases I've seen that I would consider successful or well-balanced:
For a company that's never done anything with hybrid before, one easy path is taking advantage of, say, S3-like or object-storage-like functionality, where you're already counting on latency being a factor in your application. It's a very reliable type of service from Google and Amazon, and building your application so that it doesn't impact user behavior or experience there is a low bar. And it's a lot harder to build your own object storage with those same capabilities around geo-redundancy, multiple copies, et cetera. So that's one that I think works well. On the opposite side, where I've seen customers get into trouble: one particular customer I've worked with had a MongoDB cluster. They started everything in public cloud and ran into a real pain point around the performance of the instances supporting MongoDB; I think it was mostly EBS being the bottleneck. So they had a great problem, a successful application. They moved MongoDB to their private data center with a Direct Connect between the two. Great. Well, now what? When there's a failure and they fail the MongoDB primary master node over to an Amazon one, everything grinds to a halt, because you're back to that performance state. So you have to be very careful with how you lay out the failover capabilities of your clusters, et cetera.

We ran a hybrid cloud for probably about two years at Netflix, and really the reason was that we were going through the public cloud migration. While we were doing the migration, we would take small pieces of functionality, break them out from this monolithic Java app that we had in the data center, and push them out into the cloud. And like Mark was saying, we created a Direct Connect, a tunnel back into the data center, where it would go back and fetch data or push data to the source of truth. Then slowly, as we migrated more and more functionality, we deprecated everything that was in our data center, and now we're running all public cloud. The other piece of that, I don't know if I would call it hybrid cloud, but it was multi-cloud, and that was for backup. We were running all of our Cassandra clusters, our source of truth for all the production data, in the cloud, in Amazon. We would back up all the SSTables into S3, we would back up all the S3 buckets into another Amazon region, and then we would back up the other Amazon region into Google Compute, basically for business continuity purposes.

So one pattern that shows up a lot: since backup is a common use for public cloud, there's also a strong case there for doing long-term data analysis, places where you want to apply compute to data but don't need it in immediate real time, and where you can use burst capacity. You've already pushed the data over there, and again, this comes back to data locality: you've already pushed the data there, you don't need to use it in real time, and it doesn't need to be completely synchronized with what's in the private cloud. And then you can find new workloads to chew on the data a little more deeply.

It's a double-edged sword, though, because on one hand it's great to have it in a system like Amazon or Google, where you have tons of computing power to apply to large data sets. On the other hand, it creates data gravity there, so it's hard to get off if you need to.
So if you've got multiple petabytes, pushing that across the wire is very challenging, right? Yeah, my point is that because it's the backup use case, that's already part of the business plan, right? The data is already being pushed, so now there's an opportunity to apply compute to that data that you wouldn't have otherwise. Data analysis on the backup set. Yes.

Are there any examples that are clearly a bad fit? I've been hearing about cloud bursting quite a bit, and I personally don't believe in it; I've never seen it work. What's your experience with the cloud bursting use case?

I totally think it can work, but it's a technologically sophisticated thing to do. You can't just ram two arbitrary clouds together and burst from one to the other; latency is going to matter. But as an extreme example, if I had an AWS-run private cloud in the same data center as the AWS public cloud, with zero milliseconds of latency and tons of bandwidth between them, why couldn't I burst between them? Same software set, same general performance. It's technologically doable. I think the problem is that it's not trivial, and all the issues that Mark and others brought up come into play. It's one of those places that's dangerous for the uninitiated in cloud, who just make a lot of assumptions about the way that infrastructure works.

Sorry. I have a little bit of tunnel vision, having run Google's front-end serving infrastructure for a while, but I do find that large-scale web serving workloads are pretty amenable to hybrid cloud, especially if you've built it so that you've got a lot of your back-end serving store and data sets in private cloud and you're used to serving from a web front-end setup in private cloud. You can move the user-facing component, the stuff that's actually terminating TCP, out to public cloud pretty easily, and often take advantage of geographic diversity, low-latency termination to the end user, and sometimes caching. So you can drastically increase your serving surface area via public cloud while keeping the actual back tiers and backends in private cloud.

I think it's theoretically very possible, but I haven't really seen anybody doing it. It's a great idea in principle, but it takes a certain degree of sophistication in the way you architect your deployment pipeline and your application topology in order to abstract away all the underlying clouds. And unfortunately, we don't have clouds that are compatible with the same APIs and the same behaviors; there was a great talk Randy gave yesterday about the things we would need in order to have those capabilities. Today you'd have to build a lot of that logic and a lot of that deployment tooling yourself, and that's a pretty major undertaking for, I think, relatively small value.

What would be some rough guidelines for the types of applications that could support those use cases, and the types that absolutely can't?

It's less about the application specifically and more about the strict, religious discipline of treating infrastructure as code and having no humans in the process, especially with the complexity of hybrid cloud. If you're really going to be moving workloads back and forth between multiple distinct environments like that, you don't want to be counting on humans. So I hope everybody in this room has already wrapped their head around that conviction, because without it you can't do this.
I have a lot of customers who are just hearing the marketing of hybrid cloud and thinking it's this panacea, that it's just going to solve all their problems, but they don't have these disciplines already, and I think those are necessary prerequisites regardless of the application. I've seen it done; I've seen Puppet on Windows work pretty well with some advanced customers. But that's a necessary prerequisite, in my opinion.

Well, if you didn't see the presentation yesterday, you should go check it out, because I cover what I consider the requirements for achieving interoperability with hybrid cloud in some reasonable detail. It's really hard to pin down specific workloads. I think we've already identified that they need to be as latency-insensitive as possible, or built without baking assumptions about latency into them. An example: if you were using S3 in one public cloud and you wanted to have your front end in another public cloud, and you put a caching service, say Varnish caches, in front of that object store, that works out pretty well, because you're mostly going to be hitting the cache. Occasionally you miss the cache and wind up reaching out to that other cloud; maybe it's 100 milliseconds of latency, maybe it's even on another continent, but that happens so infrequently that it's not really a problem. So it has to do with a combination of the workload and the application architecture. If you design it right, if you think about the fallacies of distributed computing, I think they're called something like that: the network is not always zero milliseconds, you don't always have infinite bandwidth, the other side is not necessarily going to receive your connection request. If you build that in, if you use the circuit breaker pattern, like Netflix did in their application, then I think you can be very successful at this. But you should probably put a toe in the water first, then a foot, then a leg; trying to jump straight to very sophisticated cloud bursting is probably going to end in a lot of pain.

Thanks for mentioning the circuit breaker. Ariel, could you talk a little bit about that pattern?

Sure. I think the point being echoed is that if you design your application with the right architecture in mind, not even for doing hybrid cloud, but just the way a good distributed, service-oriented architecture should be, then some of these use cases are enabled. The way the circuit breaker pattern works: think of a circuit breaker that every service invocation goes through. The circuit breaker monitors the latency and the response codes (200s, 300s, 400s, 500s) of the returning requests. If everything goes well, the request from the client goes through to the service and gets returned, following the regular loop. Whenever a problem is detected, and what counts as a problem is configurable, but it can be either a high error rate or some latency threshold being crossed, the circuit is opened. Once the circuit is opened, the downstream service is protected from further requests: the client no longer actually makes the request to the service, but instead gets a fallback.
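A minimal sketch of that pattern in Python; class name, thresholds, and helper names are illustrative, not Netflix's actual implementation (their Hystrix library is Java, and it also tracks latency, which this sketch omits for brevity):

```python
import time

class CircuitBreaker:
    """Wraps a service call; trips open after repeated failures and serves a fallback."""

    def __init__(self, call, fallback, error_threshold=5, reset_timeout=30.0):
        self.call = call                    # the real downstream service invocation
        self.fallback = fallback            # degraded response while the circuit is open
        self.error_threshold = error_threshold
        self.reset_timeout = reset_timeout  # seconds before re-trying the real service
        self.failures = 0
        self.opened_at = None               # None means the circuit is closed

    def request(self, *args, **kwargs):
        if self.opened_at is not None:
            if time.time() - self.opened_at < self.reset_timeout:
                return self.fallback(*args, **kwargs)  # open: protect the service
            self.opened_at = None                      # half-open: try the service again
            self.failures = 0
        try:
            result = self.call(*args, **kwargs)
            self.failures = 0
            return result
        except Exception:
            self.failures += 1
            if self.failures >= self.error_threshold:
                self.opened_at = time.time()           # trip the breaker
            return self.fallback(*args, **kwargs)

# Usage, mirroring the Netflix example below: personalized recommendations
# with a cached top-10 list as the fallback (both functions hypothetical).
# breaker = CircuitBreaker(fetch_personalized_movies, cached_top10_movies)
# movies = breaker.request(user_id)
```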
And so you need to configure a reasonable fallback. One example from Netflix: we had the movie personalization service, so whenever a user logs in, you show them their personalized list of recommended movies. If for whatever reason that personalization service becomes unavailable, the fallback is to display a top-10 list, or whatever top list, that gets cached by the client service. What happens is you prevent these thundering herd problems: once a service gets restored, you'd typically get this backlog of backed-up requests that floods the service, whereas instead you've gracefully degraded the experience while your downstream services are down.

So, since this is the OpenStack Summit: if you were to give advice to folks in the audience who are building an OpenStack cloud with the idea in mind that it's going to connect to public clouds and run hybrid cloud workloads, what would be the prerequisites? Randy, you gave a good example yesterday, the truck versus the car; can you start off with that?

Yeah, I had one slide yesterday where I talked about API compatibility and how that's not really enough. You can have clouds with the same APIs but fundamentally different behaviors. The analogy I gave was a sports car versus a semi truck. The basic human interface, the API, is the same: there's a steering wheel, gas and brake pedals, basically the same interface, but they're two fundamentally different systems. To bring it home to OpenStack, and to be really clear: out of the box, OpenStack does not assign a public IP address when you spin up a virtual machine, but if you go to Amazon and spin up a virtual machine, every one gets a public IP address right out of the box. Both of those can expose the EC2 APIs, but there's a fundamental difference in what happens by default when you spin up a VM. The problem is that if you're an application developer and you develop on Amazon, or on a private cloud like OpenStack, you bake assumptions into your application framework based on how that system operates, and when you try to port or burst into another cloud, all those assumptions break because they're no longer valid. So what I have found, what I'm promoting, and why I have my whole business, is a focus on making a version of OpenStack that looks, smells, tastes, and acts like Amazon, and is also provably, testably as similar as possible. You can't ever be 100% similar, but I always remember the days of IPsec VPNs, where you had 20 different vendors with no interoperability whatsoever, even though they implemented the same standard, because there was so much looseness in the standard. The only way they fixed that was through testing, testing, testing, until they got to the point where they could make sure everything worked together. So I feel like the only way forward is to have reference architectures that we agree on, whether it's Amazon, Google, HP Cloud, or something else, and have testability around them, so that we can make sure there's interop between those clouds.
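As a sketch of what that kind of behavioral interop testing might look like, here is a hypothetical cross-cloud probe using Apache Libcloud; the credentials, endpoints, and provider setup are placeholders, and the public-IP check mirrors Randy's example rather than any real test suite:

```python
# Hypothetical cross-cloud behavior test: make the same API call on two
# providers and assert the *behavior* matches, not just the API surface.
# Assumes Apache Libcloud; all credentials and endpoints are placeholders.
from libcloud.compute.providers import get_driver
from libcloud.compute.types import Provider

def boots_with_public_ip(driver):
    """Spin up a VM and report whether it gets a public IP by default."""
    image = driver.list_images()[0]   # any bootable image, for illustration
    size = driver.list_sizes()[0]     # smallest flavor is fine for a probe
    node = driver.create_node(name='interop-probe', image=image, size=size)
    try:
        node = driver.wait_until_running([node])[0][0]
        return len(node.public_ips) > 0
    finally:
        driver.destroy_node(node)     # always clean up the probe instance

ec2 = get_driver(Provider.EC2)('ACCESS_KEY', 'SECRET', region='us-east-1')
ostack = get_driver(Provider.OPENSTACK)(
    'user', 'password',
    ex_force_auth_url='https://keystone.example.com:5000',
    ex_force_auth_version='2.0_password',
    ex_tenant_name='demo')

# On EC2 this is True out of the box; on a stock OpenStack cloud it is
# typically False until you allocate a floating IP -- exactly the kind of
# default-behavior difference an interop test suite needs to surface.
assert boots_with_public_ip(ec2) == boots_with_public_ip(ostack)
```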
I think the other thing to ensure, whether you're starting public and adding private or vice versa, is to recognize that you have an internal customer who's going to be consuming these infrastructures, and that things are going to change over time. Whether you're starting with one private cloud and adding multiple different private clouds, or starting with one and needing to upgrade it, you want the user experience of interacting with that infrastructure to be consistent. That's the interoperability point; something like Scalr really helps unify that, abstract it away, and offer a way to parameterize the differences between the clouds that ultimately get instrumented. The other caution I would offer for those already in public cloud and looking to go private is to be very careful about the most evolved, more recent services, database-as-a-service, ElastiCache, those kinds of things that are proprietary and exclusive to those public clouds. In our case, we knew as we were going at Zynga from a non-cloud private data center into EC2 that we only wanted to use instances and basic networking. We did not use ElastiCache or Elastic Load Balancing; we didn't use anything specifically proprietary, with the known expectation that we wanted to control our own destiny, still do all the automation and configuration management through our own tools, and not get stuck having to replace a proprietary technology on the private cloud.

Yeah, I'll echo all those points. I think the first thing you've got to do is start with an OpenStack distro that is actually compatible with public clouds; Cloudscaling's is a great one, and there are others. But that's just the base requirement, and I don't think it's enough. What you really need to do then is think about all the non-development activities you're going to have to go through in order to actually deploy to and manage a multi-cloud environment. At Netflix, for example, we built and released a whole bunch of different open source tools for how we operationalized our software. For example, we have Asgard, the tool we use to deploy and manage software in the cloud. I don't think there's an OpenStack implementation of it to date, but I know Eucalyptus, for example, took it and ported it to Eucalyptus. So if you're in a Eucalyptus environment plus a public cloud environment, you can use something like Asgard, and the different clouds really just show up as different regions: you deploy your code to different regions, and they just happen to be different clouds.

PayPal ported it. They did; PayPal did port it to OpenStack. Unfortunately, they didn't contribute it back; they just did it one way. So that doesn't help most of these people. But yes, you're right. Whether you use Asgard or any of the other tools out there that give you that deployment and operational automation, monitoring, managing, auto-scaling, all of the different things Mark was mentioning, where you really don't want humans in the process of operating the software; you want them managing exceptions rather than managing normal operations.
That's really what I think the key is to making a hybrid cloud successful or any kind of multi-cloud successful. Here's the GitHub repo. Here's the GitHub repo for the PayPal open-source Aurora project that is a port of Asgard to open stack. I just, I can't, I'm an open stack guy and we're in an open stack conference. I just want people to know Eucalyptus does not have any superiority in this area in any way. Sorry. It's there. It's there. You were talking about people need a framework, so one exists. Yeah, just like it comes up throughout all of this, but if you really want to maintain that flexibility and actually not accidentally lose it, it's going to come down to testing, right? You have to actually build systems that can run these workloads on public and private and multiple public clouds, multiple private clouds and actually be willing to move those around with some frequency, maybe not the production stack, but at least the dev stack, the staging stack and the more automated unit testing you have where you can just have a stack come up, give you a full green and then move that and constantly be running it on different environments. That's the only way you're going to catch regressions. Randy. I want to turn this around to you, Sebastian. I mean, you're the only one here who actually is regularly talking to a ton of different clouds. I mean, does it, I know that you can take a framework like Scalar and you can somewhat patch over some of the differences in the clouds, but how much grief and how painful is it for you, how much time do you spend sort of dealing with differences between clouds where if they were sort of somewhat more standardized, you could spend more time adding value to your management system. Yeah, so to Mark's earlier point, it really depends, if you're going to choose a cloud and even if it's open stack and you're missing things like Cinder, if you're missing things like Quantum, then like when you're going to build your application, you're going, even if you want to port it, even if the architecture is good for it, there's going to be pieces missing. And that's one of the challenges that we have when we build the Skater Management Platform on top of multiple clouds. And that's one of the reasons we don't want to work with the vSphere or like any of the things that are totally not cloud, is they just don't have the same objects, the same constructs. And so that makes it very difficult to build a common management layer and enable application portability because the highest common denominator is really, really low. We have a really distinguished panel here. At this point, I'd like to open it up for questions. We've got one coming. Hi, I'm Kamesh with Dell. So underlying all of the discussion that's going on here, there's this concept of application mobility, right? And it's not just the VMs that you're moving from one cloud to the other, but the underlying network and the data that's associated with it. It sounds like every one of you has solved this in a different way or in a customized way. Is there like a standard toolset or somebody building something that enables application mobility? Are you talking about how you kind of lift and shift in application from one place to another like this? Isn't that kind of one of the use cases where you want to move an entire workload with all its components, whether it's VMs or databases or networks or load balance or what have you. I mean, it's not just one VM, right? 
We all know a workload is complex pieces that all glue together. So that's my question. Jena, you were talking about data gravity; I don't know if you want to expand on that?

Well, I don't think it's moving workloads that we're talking about with this hybrid cloud use case. It's having multiple execution environments that a single application can run in. So you have one application or one service that might be talking to another service that's in a different place, in a different cloud, and you abstract that away into something that looks like reusable pieces, reusable deployment environments that you can just deploy code into, with all the glue around discovery so you can find the other services you need to talk to, all the operational tooling so you can monitor when things go wrong, and the deployment toolchain so that when you have a new piece of code, you can get it from your source control system out into these different execution environments. That's the use case we're talking about, I think.

I understand that use case. I'm specifically talking about the Zynga example, right? You had a whole bunch of stuff running on AWS. At some point you said, this is not going to be viable for us, I want to bring everything back in-house. And maybe after some time you say, OK, I'm going to put some stuff back in Amazon. To me that means you have to move stuff.

Right, and I don't believe at all in fundamental lift-and-shift technology. To describe more deeply exactly how Zynga moved applications: it was at a minimum, especially when we started, about a seven-week project, with all the developers who knew the underpinnings of the game and my operations team, who were providing infrastructure as a service, collaborating for seven weeks with a project manager. With a LAMP stack, in our particular case, here's what that involved. Because this is a game, right? People expect their freemium purple cows, et cetera, and you can't take that away from them without impacting customer service. So the way we did it: with the MySQL databases, you add another slave in the private cloud that's streaming everything in real time, and you make sure that's stable. Then you add the memcache layer inside the private cloud and pre-seed that memcache. Then you can light up all the web servers and be ready for what is effectively a DNS cutover, or we would re-register the game URL in Facebook. We got it down to seven minutes of downtime to actually flip it over. So it's not lift-and-shift technology; that's a lot of hard work to pivot it. That's my point. Excellent question. I think I made my point, right? Yes, that's the difficulty of it. Yeah, it's never migrating VMs in our case, and I don't think that's sustainable or repeatable. You want to have the recipes, and you need to think about that hard cutover and the fallback process as well. Wide-area migration is a fool's errand.

I think about this in two ways, Kamesh. The first is that there's a variety of frameworks, and there's no way to get away from that, because if you want a lot of control, you're going to use Chef and Puppet or build your own framework that gives you very low-level control.
If you want less control and more containment, you're going to use a platform-as-a-service system, or possibly something like Scalr or RightScale, so that you can have templatized deployments. Those templatized deployments, setting aside the problems of data synchronization, should be the same when you deploy the application on cloud A or cloud B; you want the same thing to happen on both. So you're going to use a variety of frameworks; there's no way around that, depending on where that application sits on the spectrum of control versus ease of use. The second thing is that we're moving toward a model where IT is more like the electricity model, and we've never gotten to the point where the individual appliances at the edge of the electrical system don't have fundamental differences in how they plug in. They all have different power adapters, because they all have different power needs depending on the application of that appliance. But because all the electricity looks the same, they can all build their interface to that same standard. That's really what I try to promote: I think Amazon and Google really have made a de facto standard, and if we all hub off of that reference architecture, a lot of the problems with the application's interface to those clouds start to go away, because you build one application interface for that one reference architecture, and as you go from cloud to cloud to cloud, there's a lot less variance.

Any other questions? Hi, Trevor Hamlin from RMS. We're going down the hybrid cloud path right now, where we'll have the majority of our stuff in private cloud and then burst out into public. I know it sounds like there are a lot of intricacies in doing all this, but I'd appreciate it if some of you could share war stories, the failures you've seen, or maybe the successes where you've seen this work.

Excellent question. I hope we didn't ruin your day, Trevor. So, some war stories. This particular one isn't mine personally, but I mentioned it briefly before: an acute workload drove the data persistence layer into a private cloud to take advantage of high-end hardware. But to preserve the public web servers, which are customer-facing, you still needed the caching layer and the read slaves for the Mongo cluster out in the public cloud. What became volatile and broke down on occasion, especially with Mongo, is that this particular customer was using lock-heavy writes, which would lock the read slaves. So if there's a wobble in the Direct Connect, or in anything feeding the write master on the private side, things can clog up on the public side. And if there's a genuine failure and a master needs to be promoted on the public side, the performance difference is going to crush the write performance, the application is going to go down, and you get the thundering herd problems that Ariel mentioned. So there are a lot more moving parts there, and as an operations person responsible for availability, I always try to keep the environment as simple as possible, which for me doesn't usually mean hybrid distribution of a single application.
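One common guard against the failure mode Mark describes is to make the public-cloud replicas ineligible for promotion, so a primary can never land in the slower environment. A hedged sketch using standard MongoDB admin commands via pymongo; the host names are hypothetical:

```python
# Sketch: give the public-cloud members of a MongoDB replica set priority 0,
# so they keep serving reads but can never be elected primary. Uses the
# standard replSetGetConfig / replSetReconfig admin commands; host names
# are illustrative placeholders.
from pymongo import MongoClient

client = MongoClient('mongodb://mongo-private-1.example.com:27017')

config = client.admin.command('replSetGetConfig')['config']
for member in config['members']:
    if member['host'].startswith('mongo-public-'):  # the EC2-side read slaves
        member['priority'] = 0   # still replicates and serves reads, never primary
config['version'] += 1           # a reconfig requires a bumped version number

client.admin.command('replSetReconfig', config)
```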
Is this still true with dedicated instances, and being able to get all-SSD instances as well? Shouldn't that get rid of a lot of the noisy neighbor problem? This occurred right before those became available, and I believe they have since solved their problem with that technology, though you still have to keep the data persistent beyond the SSD node. So yeah, I guess they've now got clusters that have solved that.

Jena? So, leading the cloud networking group, I obviously have a particular lens, but I do see a reliance on specific network topologies as a real impediment to spanning environments. Often I also see a strong reliance on using IP numbering as a grouping mechanism, knowing that a given CIDR is going to represent a given set of machines. When you move into cloud, even into private cloud, you start to tear away those beliefs, right? They get abstracted away; you define groups with new mechanisms. A lot of times I've seen that baked into applications, and it makes it very, very difficult to move even into private cloud.

Yeah, the one thing I've seen proven so far is when you have multiple applications and you pull one off at a time. You're using hybrid cloud in the sense that you're not trying to stretch one application across two different clouds; you actually have different applications in different locations, like Zynga or Ubisoft, where they basically said, OK, this game has flatlined, let's take it back to private cloud where we can cost-optimize it. That works pretty well, because we've already got lots and lots of applications out there that communicate over the internet, so there tend to be no hiccups. The latency issues tend to be within an application grouping.

All right, we'll take one last question. Oh, are we out of time? OK, they're throwing things at me. All right, let's give the panelists a warm thanks.