Good afternoon. For those of you who might be lost, you're wandering into a talk on hybrid clouds and landmines — things that happen in the real world that cause problems when you're building hybrid cloud applications. Given that we only have about 40 minutes and I tend to over-talk, I'm going to get right into it.

My name is Drew Smith. I'm a cloud applications engineer with Cloudscaling. You've probably heard of Cloudscaling before, but if not, we build a distribution of OpenStack. We are very much an infrastructure company, which makes it a little bit weird that we have an applications engineer on staff, given that we don't actually build any applications. But it's a super exciting position for me, because what I end up doing is mostly exploring the different technologies and the different ways that people interact with OpenStack, and becoming a sort of subject matter expert for the team so we can guide the way development moves forward. I liken it to this: if you're a team full of plumbers, you should probably have somebody trained up as an electrician if you're going to go cutting into walls all the time. So yeah — cloud applications engineer, Cloudscaling.

Today we're going to talk about hybrid cloud architecture and the problems you run into with it. Now, I see some of you are already pulling out your cameras. Don't worry about it: slideshare.net/drewmulonimbus. That's "cumulonimbus" like the cloud, only I'm Drew, so that's how it works.

This is our story arc for the day. We're going to go through the whats and whys, what enables hybrid clouds, understanding your application, and some usual approaches to hybrid cloud design. Then we're going to dig into some of the landmines — the stuff you will probably run into, depending on the level of detail you go into with your hybrid cloud.

So, the big question, which we're not going to spend too much time on because we've been through it all day: what is hybrid cloud? The real answer is that it's different for everybody. Everyone has a different idea about it, and we agree — it can be a lot of different things. It can be your application tier in a public cloud and, maybe for PCI compliance or other regulations, your data tier in a private cloud. It could be a containerized app running in a couple of different environments. It could be your app in one cloud and a hot/cold failover in another cloud. There are all kinds of things you can be doing with hybrid cloud, but we've settled on a definition: hybrid cloud is your apps leveraging the functional stack of multiple cloud infrastructures.

And as an aside, the "Hybrid Cloud for Dummies" cover on the slide is not an insult. I went looking for a good image, and that's actually what came up when I searched for "hybrid cloud application design." As a second aside, my CEO, Randy Bias, is actually quoted in that book.

So: hybrid cloud is your apps leveraging the functional stack. What's a functional stack? A functional stack is a collection of services that make up a cloud environment. You can think of it like the LAMP stack, for instance — Linux, Apache, MySQL, and PHP.
Now, that stack — and we're going to say "stack" a lot here today — is something that enabled an entire generation of web programmers to build the applications that took the web to Web 2.0 and really pushed the technology forward.

So let's talk a little bit about the cloud functional stack. What we've identified is this list of things right here — things we feel are in any cloud-based application, anything that uses what we call cloud native design. That's a style of application design built to route around failure: the Netflixes of the world, and Google to a certain extent at the hardware level. With cloud native design, you're basically addressing all of these points. And remember, we said hybrid cloud is your apps leveraging the functional stack in multiple environments. What that means is that you're going to have to deal with every one of these things in one or more environments in order to get to a hybrid environment.

So what does that really mean? And I guess there's a little too much of the word "stack" around here, but you can see I've drawn some red arrows between the functional stack on OpenStack and on AWS. At the orchestration layer, for instance, on the OpenStack side you have Heat; on the AWS side you'd have CloudFormation. On the data storage side you'd have Cinder and Swift; on the other side, EBS volumes and S3. But the reality is that those arrows aren't as straightforward as we'd all like to think. Building a hybrid application really is about mapping one to one — but the mapping is where the details are. It's not so much a negotiation between the two; your hybrid app design is figuring out exactly where the safe point is between those two functional stacks.
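To make that concrete, here's roughly what one of those red arrows — "create a block volume" — looks like on each side. This is a minimal sketch assuming the 2014-era boto 2.x and python-cinderclient v1 APIs; the region, credentials, and endpoint are all placeholders:

```python
# Creating a 10 GB block volume on AWS (EBS, via boto)...
import boto.ec2

ec2 = boto.ec2.connect_to_region('us-east-1')      # credentials from the env
ebs_volume = ec2.create_volume(10, 'us-east-1a')   # size in GB, availability zone

# ...and the "same" operation on OpenStack (Cinder, via python-cinderclient).
from cinderclient.v1 import client as cinder_client

cinder = cinder_client.Client('myuser', 'mypassword', 'mytenant',
                              'http://openstack.example.com:5000/v2.0')
cinder_volume = cinder.volumes.create(10)          # size in GB
```

It's one call on each side, so the arrow looks clean on a slide — but the auth model, the arguments, and the objects you get back all differ slightly, and that's exactly where the details live.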
So let's talk a little bit about what enables hybrid cloud. This is a spectrum image that one of our guys came up with, labeled "abstraction." On one end of the abstraction spectrum you have control; on the other end, ease of use. Where you personally fall on this spectrum is going to depend on a lot of things — mostly your level of comfort and your level of technical prowess — and a little bit on the tools you've chosen. The tools you've chosen in advance determine where you sit on the level of abstraction, but it also feeds right back into itself: if you're just coming into this, where you're comfortable on the abstraction spectrum is going to determine which tools you actually choose going forward.

So which one is best for you? The real answer is: it depends. Every application, unfortunately, is a unique and beautiful snowflake these days. We talk a lot about the idea of cattle versus pets, but that's at the infrastructure layer; today we're talking about applications, and those sit on top of the infrastructure. Picking your tools and figuring out where you're going begins with understanding your application. This really is unique to everybody, and you're going to have to put your time in here. I'd recommend sitting down and starting on paper, even with something as simple as a pros and cons list.

Some things to ask yourself before you head toward building a hybrid application: why are you going there in the first place? What is it about your application that you feel makes it suited to leveraging resources in multiple environments? What are your expectations — what do you think you'll get back from having this application running in multiple places? If you're looking for increased security, or increased availability, or decreased cost: what is leading you there, and what do you hope to get back? And lastly, what are your likely bottlenecks? This is really basic.

Actually, you know what I forgot to ask earlier: in the audience here, how many people consider yourselves application developers? Nice. How many, roughly, are infrastructure people? Way more. Cool. And how about people who find themselves making decisions about technology but don't really count in the first two groups? Yep. OK, so the first and third groups are probably roughly equal, and it's mostly infrastructure people. I've aimed to have information in here for all three groups, wherever you're coming from.

Now, this is what Randy likes to call secret sauce — this is a really great tip. If you haven't read this book yet, it will really help you understand not only your environment but your workflow. It's really a book about DevOps and how DevOps moves forward.

Once you've understood your application, you've got everything written down, and you're ready to move forward, it's a good idea to determine really early what success looks like. The reason is that this technology has a tendency to move really fast and enable people in ways that are kind of unprecedented. If you start off your month saying, "OK, our goals for the month are A, B, and C," and you've met them by the end of the first week, there's a tendency to add M, N, O, P and X, Y, Z onto the goals for the end of the month. Then by the end of the month you've hit M, N, O, P but not X, Y, Z, and your boss says you failed. So basically, success looks like writing everything down first and determining exactly what your goals are.

We like to think of it like this: able to deploy an app into or across multiple environments, with common operational tools or processes, and consistent performance. Does that sound like success? Pretty much. Now the real question is: is all of that necessary? Do we really need to nail it quite that hard to call it success?

I consulted for a company a little while back that did a lot of image processing, all on their own hardware — basically a PHP application doing a lot of ImageMagick work, so they were really CPU-bound. In the end, the solution they built, and were very happy with, was bursting into the AWS cloud manually. I'm going to simplify this a lot, but they kept a couple of m1.small machines running full-time in AWS — one was a database, one was a PHP image-processing unit. When they needed to burst, they manually went into AWS, cloned the database and spun it up as an m3.xlarge, then manually cloned the PHP machine and spun up 25 of those as c1.medium, or something along those lines. They had their cluster manually built within about an hour, and that was good enough for them.
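Their real process was point-and-click in the AWS console, but to give you a feel for the scale of it, here's a rough boto 2.x sketch of that burst. The instance IDs are hypothetical, and in real life you'd wait for each AMI to become available before launching from it:

```python
import boto.ec2

ec2 = boto.ec2.connect_to_region('us-east-1')

# Snapshot the running database box into an AMI, then launch a bigger copy.
db_ami = ec2.create_image('i-1a2b3c4d', 'burst-db')        # returns an AMI ID
ec2.run_instances(db_ami, instance_type='m3.xlarge')

# Snapshot the PHP worker and fan out 25 copies as the processing cluster.
worker_ami = ec2.create_image('i-4d3c2b1a', 'burst-worker')
ec2.run_instances(worker_ami, min_count=25, max_count=25,
                  instance_type='c1.medium')
```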
That took away the whole workload they were facing and gave them the end result they were looking for: suddenly they had ten times their capacity for two or three days, and then they could tear it all down again. That counts in my mind as a hybrid application — but they didn't need to nail it perfectly to win.

Now, on the other hand, what does failure look like? To go off on a different story: when I was in high school, I delivered pizzas for a living, and all the guys who were really getting laid were driving nice cars — their parents had bought them Mustangs or something. It's a very small town on the East Coast, and everything gets around. So I figured I needed a car, and I did my research, and I found a farmer on New Brunswick's equivalent of Craigslist selling a 1974 Mercedes 280 SL for $3,500. I borrowed my mom's Mercury Tracer and drove out to his farm, and he opened the barn, and there it was. But he looked me up and down and said, "Son, I won't sell you this car." I was flustered, of course, and said, "What? Why?" He said, "Son, the most expensive Mercedes you can buy is a cheap Mercedes." I got really angry about that, stormed off, and spent the $3,500 on a very fast 486 with 16 megabytes of RAM. To this day I've never actually owned a car — but I do know what he was talking about now.

We've probably all seen this: we implement something that's supposed to save us time, and it ends up taking more time. If you implement a hybrid cloud environment and it introduces more complexity than you started with, you've probably failed. If you implement it aiming for better performance — spreading out for a net gain in application speed — and the end result is a net slower application, you've failed too. Randy talked about an example in his session earlier today, with Korea Telecom: they built an application in a data center with zero-millisecond network latency and then deployed it to data centers with 200 milliseconds of latency, and it turned out the application was a lot chattier than anyone realized. Every API call took ten times longer, and the sum of it was that each action within the application now took 30 seconds. They were not happy with that.

And what if it takes workarounds or hacks? We'll talk in a bit about ways you can actually get to hybrid, but if all the places where you're scaling or talking to multiple clouds at once are functionally a series of bash scripts, and every time there's a change to any one of your clouds you have to go back in, rewrite your scripts, and figure out what changed and how to work around it — you're not winning.

All right, let's talk about the usual approaches. We're back to the spectrum again: control on one end, ease of use on the other. We're going to go through each of these three ways to approach hybrid cloud. We'll start with DIY app management — and it's important to note that I'm not talking about DIY OpenStack here, I'm talking about DIY application management. What does that look like? Well, if you've got a really great team — if you're extremely technical and you have a bunch of Python or Ruby people who can speak to APIs in their sleep — it's a viable option. You can absolutely script out your own stuff, and with the boto libraries for Python, spinning up machines in AWS is — God, it's like three lines of Python. It's nothing.
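I'm barely exaggerating. A minimal sketch, assuming boto 2.x and a placeholder AMI ID:

```python
import boto.ec2

conn = boto.ec2.connect_to_region('us-west-1')   # credentials from the env
conn.run_instances('ami-12345678', instance_type='m1.small')
```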
So doing it yourself: not only is it easy, it's actually pretty fun for the right people. But — and this is an aside, so my flow's a little broken — one problem is that people tend to use system images, AMIs, as a kind of change control, and that's not a great idea. We'll get back to that in much greater detail in a little while. The other thing is that even if you're building your stuff completely from scratch in Python, even if you really know what you're doing, orchestration is often going to be pretty tricky. If your libraries work really well on AWS, they'll probably work mostly well — and we'll talk about what "mostly" means in a little while — on OpenStack. But you're going to have to figure out something else entirely for Rackspace, or, heaven forbid, Microsoft Azure, et cetera. So it can be a little bit tricky.

On the second tier, you've got your pre-baked abstraction layers. If I'm being honest, this is what I find myself recommending to most people; the more people ask me about this, the more I lean in this direction. These are third-party applications that run above the cloud and do all your orchestration, your image management, your metrics, your monitoring, everything for you — but it comes at a bit of a cost. There's less effort, but you also have less fine-grained control over everything that's happening. And if your team is a bunch of people who compile their own kernels, they're probably going to balk a little at this. You can swing them over if you show them all the benefits of going this route, but it can be tricky to convince them. The other downside is that it's a lot more expensive. Actually, Scalr has a nice free tier, which is much appreciated, but you can be looking at 30 to 50 percent more in up-front costs for running machines in the public cloud. And in the private cloud, you're used to those VMs already being paid for — now you've got to send somebody a check at the end of the month to manage stuff that's on your own cloud, and that's a hard pill to swallow sometimes. There are a lot of options, as you can see; we've put a bunch of them up there. I had a guy talk to me in great detail about Enstratius last night, and I'm really interested — now I've got to go home and dig into it.

On the third tier, you get your platform-as-a-service frameworks — the Cloud Foundries and OpenShifts of the world — and these abstract away way, way more. (My slide build's broken.) They basically give you options like this: rather than looking at a database as a server unit, you can look at that database as a service. You just say, I need a database; I want to look at this endpoint and run SQL queries at it; I don't want to worry about scaling it or taking care of it — go. These things take care of that kind of stuff for you, and they'll get you where you want to go. But you have the least granularity and control compared to the DIY level, and the more you build into this, the more you're locked in, and that's kind of tricky.
And of course, depending on which direction you go with this — I see OpenShift also has a free tier — I'm guessing that if you go really enterprise-level with this kind of stuff, you're looking at a costly long-term contract.

All right, let's jump into the landmine stuff. This is the bulk of the presentation, but this slide is again just a table of contents; we're going to go through all of these really quickly. And if you missed it earlier, all the slides are already available online at slideshare.net/drewmulonimbus, and they'll be up at the end too, so don't worry about snapping pictures every time.

All right, there we go. What about feature coverage and gaps? What I'm talking about here is features that might exist on one cloud platform but not necessarily on another. For instance, say you've built an application that depends on Amazon's SQS service, and DynamoDB, and Route 53, and now you're going to try to take that application hybrid. You're going to have a really bad time: you have a lot of migration to do before you can get that to work on, say, Google Cloud. Now, there are some features where they've made a real effort to stay completely compatible — with AWS's S3 service, switching from S3 to the Google Cloud platform is literally just changing the endpoint, and everything works exactly as it should. But, for instance, has anybody here ever tried to implement the Netflix OSS suite on OpenStack? Nobody. Yeah, it's a tricky one. I spent a couple of weeks trying. There are a few of the applications you can get to run — some of them are written straight-up in Java — but Netflix wrote a lot of underpinning stuff, and they use the AMI subsystem really heavily: they build their own AMIs, they have glue under there. It's tricky to get it running. They really did build into a couple of the Amazon services deeper than I would have liked. That said, it's amazing software and their team is just on top of stuff.

So: depending on cloud-specific services. That's what I was just talking about — if you depend on Route 53, or SQS for queuing, or something like that, you're going to have a rough time getting to a hybrid environment.

Differences in cloud features is another thing that can affect this. Take EBS volumes in AWS: you create a volume, you attach it to a server, and that works just great. If you want to do that on Google Cloud Platform, they call it a persistent disk, or PD, and they work almost identically — except for a nice little trick on Google where you can take one of those PD disks and attach it read-only to multiple instances. I find that extremely valuable; honestly, when I started with AWS, that's how I kind of assumed things would work. It's read-only, but if you've architected an application around the understanding that you can take a persistent disk, attach it to multiple instances, and read from it everywhere, and then you try to move that over to AWS or OpenStack — you're out of luck. They just don't work the same way.
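For what it's worth, here's a sketch of that Google-only attach, assuming the google-api-python-client Compute Engine v1 API with default credentials; the project, zone, and resource names are placeholders:

```python
from googleapiclient.discovery import build

compute = build('compute', 'v1')   # assumes application default credentials
disk_url = ('https://www.googleapis.com/compute/v1/projects/my-project'
            '/zones/us-central1-a/disks/shared-assets')

# Attach one persistent disk, read-only, to several instances at once.
for name in ('web-1', 'web-2', 'web-3'):
    compute.instances().attachDisk(
        project='my-project', zone='us-central1-a', instance=name,
        body={'source': disk_url, 'mode': 'READ_ONLY'},
    ).execute()
```

That `READ_ONLY` mode is the piece that, at the time, had no direct EBS or Cinder equivalent.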
And lastly, even similar clouds might not have the same stuff enabled. Having dealt with a whole bunch of different OpenStack clouds over the last few years, I can tell you that even OpenStack clouds these days are pretty much snowflakes. You might have Ceilometer, you might not; you might have Heat — well, not doing much without Ceilometer. Even if two clouds are the same technology, they might not have the same features, so that's a landmine you're going to have to worry about.

Behavioral compatibility is a different thing. (Can this actually be read? Oh, nice.) The thing about behavioral compatibility is that application developers tend to treat whatever they're developing on as a reference architecture, and that doesn't flow smoothly into a public cloud. I talked to — I believe it was Jamal from NetApp — this morning, and his argument is very much that Docker is the way to go here: obviously, if you can just put your application in a container, that's the answer, and it just runs everywhere. But I'd argue back that Docker gives you a very common environment — it doesn't give you a common architecture underneath. And that's really where the headaches are. This comes back to the difference between developers and infrastructure people again. Infrastructure people: how many times in the last couple of years have you heard, "well, it ran in the dev network," or "well, it runs on my desktop"? It's a very common developer way of thinking: OK, it runs in this little space, so just take that little space and make it public. And that's just not how things work in the real world.

The image on the slide is about partially implemented, or partially compatible, APIs. The joke is that the AWS EC2-compatibility API in OpenStack will respond to a request for a list of volumes — but it won't respond to a request for a list of volumes with a filter. So, yeah.

Configuration differences between similar clouds: think of it like this. Say you have two OpenStack installations, and one of them has floating IP auto-assignment turned on — every time you bring up an instance, it automatically gets a floating IP; it can talk to the internet and the internet can talk to it. Another, almost identical OpenStack installation has that turned off. This is literally a true-or-false in the Nova network config, but it goes to show: even if a second cloud has all the same components, one little switch in a configuration file can make the difference. If your application expects to be able to talk to the internet when it comes up, and you drop it into the other cloud that doesn't hand out floating IPs automatically, you'll have to add the extra step of issuing a floating IP. Maybe that's just an extra line in your script — or maybe that's two days of figuring out why you can't talk to these machines, if your technical team is too busy to dig into that kind of stuff.
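That "extra line in your script" looks roughly like this — a sketch against the old python-novaclient v1.1 API, with placeholder credentials and server name:

```python
from novaclient.v1_1 import client

nova = client.Client('myuser', 'mypassword', 'mytenant',
                     'http://openstack.example.com:5000/v2.0')

server = nova.servers.find(name='app-01')
fip = nova.floating_ips.create()   # allocate from the default pool
server.add_floating_ip(fip)        # the step the other cloud did for you
```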
The last one is variable performance from one cloud to another. As we get more and more public clouds, with people deploying OpenStack willy-nilly and starting to sell it, what do we actually know about what's going on underneath? Is a given cloud oversold? In a certain sense it almost should be, for a budget cloud: the hotel industry has known for years that you can safely oversell by 10 percent, because people don't show up — it's super rare that you arrive at a hotel and your room is gone. The same thing happens with clouds. But if you don't know what they've sold, you don't know who else is on the same network link as you. Maybe they're some sort of gossip site. Maybe they're getting a lot of traffic. That's a thing.

All right, what about image management? This is the one I mentioned a little earlier with the AMIs, and it's a really big mistake that a lot of people make, especially starting out. The pattern is this: as you're building your first or second app, or even a bunch of apps in, you have a tendency to take a system image of Ubuntu or Red Hat or whatever your preference is and configure it — set up your SSH keys, set your /etc/motd, configure Postfix, get it up to your specs — then snapshot it, call that the gold master, and throw it on the image server. That's really normal and really typical. The problem, when you start to move into a hybrid cloud environment, is that even within AWS you have to copy that image between regions: you can't use the same image in us-east-1 and us-west-1. You have to copy it to the other side, where it gets a new AMI ID — now you're maintaining three. Then, once you've brought up, say, 20 VMs with that image, a patch comes out. It's important, so you patch everything live, and you tell yourself, "OK, I'll spin up an extra one, patch that, and remake the new master." But because these machines are each configured slightly differently with Chef or something, now you've got to back the Chef changes out, put the new thing in, and snapshot again. The problem is that staging and patching across multiple environments takes a hell of a lot of time, and it gets out of hand really quickly: the more environments you add, the more images you're maintaining.

The real trick here, at the end of the day, is configuration management — rather than using image management as your change control. Start with an extremely stock Ubuntu or Red Hat image, one that comes directly from the vendor, and apply everything through change control. Pick what you want — Puppet, Chef, Salt, Ansible, CFEngine, whatever you like — stick to it, and dig really hard into it. It really does seem like the best way to do things. There's a temptation when you're starting out to make your own gold master; when I started out, man, I built all my AMIs literally from scratch. You'd spin up an instance, install a new version of the operating system into a temp directory, clone that out into S3 buckets, then pull it back down as an image. Don't do that.
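The shape you want is something like this instead — a minimal sketch assuming boto 2.x, a placeholder stock Ubuntu AMI, and Puppet as the stand-in config tool (swap in Chef, Salt, Ansible, or CFEngine as you like):

```python
import boto.ec2

# cloud-init runs this on first boot; all real configuration lives in Puppet.
user_data = """#!/bin/bash
apt-get update && apt-get install -y puppet
puppet agent --server puppet.example.com --onetime --no-daemonize
"""

ec2 = boto.ec2.connect_to_region('us-east-1')
ec2.run_instances('ami-12345678',          # stock vendor image, never modified
                  instance_type='m1.small',
                  user_data=user_data)
```

The image never changes, so there's nothing to re-snapshot per region or per cloud; a patch becomes a change in your Puppet manifests, rolled out everywhere at once.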
All right — monitoring and auto-scaling is a really tricky one too. Lots of people don't draw the line between these two, but the trick is that you can't have auto-scaling without monitoring. If you want to auto-scale based on CPU use, how do you know what the CPU use is, from a central location, without monitoring? And the problem is that there's no one standard that spans multiple clouds. Amazon's got CloudWatch; OpenStack's got Ceilometer; Google's got a really weird gap right there, actually, that I don't quite understand yet. They have a reference model for auto-scaling that's a Java application running on Google App Engine, and then you run a little agent on each of your VMs that listens and reports metrics as that Java app connects to each one. But they don't have a single back end — a single "this is how we talk to all of our instances and report back." And that's critical. It's something every cloud is going to need, and it's all going to have to be reconciled into one place if you want to auto-scale across a bunch of environments.

Here's where abstraction comes in, and it works out really nicely. The RightScales and Scalrs of the world do have the same kind of thing: an agent that runs on each of your VMs. But what they provide is pre-baked images that already have all those agents up and running, and the agents report back to their central web interface, which shows you all your metrics and also handles all the auto-scaling information needed to spin up extra capacity. If you're doing it yourself, though, you're on your own; there's no one single way. Amazon has a really nice way of exposing CloudWatch, and there's a patch on GitHub you can download that pulls those CloudWatch metrics into Nagios or Sensu. But really — it's 2014. Do we really still need to use Nagios? Is that it?
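If you do go DIY, the reconciliation step basically means pulling every cloud's metrics into one store of your own. Here's what the CloudWatch side of that looks like as a boto 2.x sketch, with a placeholder instance ID:

```python
import datetime
import boto.ec2.cloudwatch

cw = boto.ec2.cloudwatch.connect_to_region('us-east-1')
end = datetime.datetime.utcnow()
start = end - datetime.timedelta(minutes=10)

# Average CPU, in 5-minute buckets, for one instance.
points = cw.get_metric_statistics(
    period=300, start_time=start, end_time=end,
    metric_name='CPUUtilization', namespace='AWS/EC2',
    statistics='Average', dimensions={'InstanceId': 'i-12345678'})

for p in points:
    print('%s %s%%' % (p['Timestamp'], p['Average']))
```

Then you'd do the equivalent against Ceilometer, normalize the units and timestamps, and make your scaling decisions from the merged view — which is exactly the work the abstraction vendors are selling you.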
Security and access is a big one, and we're not just talking about locking machines down to SSH keys; we're talking about a broader range of access — security groups, access to resources, how each machine can reach drive shares and network links. This is a really big thing, and there's no way to bridge it across clouds right now. There is no single model of security management that works across multiple environments. If you're doing it yourself, maybe you could use some kind of LDAP-plus-Kerberos thing — but I'll see you in a year or two when you get it done, and we'll talk then. It's an insane headache unless you're already really good at Kerberos. Google has something really nice: they've basically tied access right into Google Apps for Business. At Cloudscaling, one of the things we're already doing for our internal office network is a custom RADIUS plug-in that authenticates against Google, so we can authenticate internally against different resources in our labs. But even that's pretty hacky. It works, and it works great — but what are you going to choose when it comes to managing access across multiple environments? One thing the abstraction layers do pretty well is key management: you can declare user groups and such, and they'll put the appropriate SSH keys onto the different environments. But once you abstract outward into user groups or access to resources, it starts to get fuzzy, depending on which environment you're in.

VPN and VPC — network-layer security. It exists and it's good, but it's not the same in all clouds. What I really like about Google is that they go straight to VPC; they don't have a traditional Layer 3 networking model at all. In OpenStack we have both — and I'll throw a little pitch in there: it's actually Cloudscaling that has both.

The other one is managing security incidents. I like to say there are two kinds of sysadmins — and I don't mean folks who just walked out of IT school with a diploma that says "sysadmin" and are knocking on your door; I mean people who've been actively doing this for eight or ten years. Those two kinds are: the ones who have been hacked, and the ones who don't know they've been hacked. When you're looking at this kind of stuff, you really need to think about how quickly you can respond to a security incident. This deck was actually written a month and a half ago or so, just before Heartbleed came out, and every now and again I'd get excited and jump up and down and say, "What if there were a security problem that affected everybody?" And people would say, "Oh, that's nonsense." You really have to think about this kind of stuff when you design your app: what happens if there's a kernel-level exploit and you need to roll out new code to your entire network at once? If your entire application depends on one particular server being up all the time — good luck. You're going to have some downtime.

Let's go really briefly through some of the other landmines on the list. Data staging and replication: this is a tricky one, because latency and bandwidth issues are a really big problem. It's the same as the Randy example from earlier — deploy into Korea Telecom, latency goes up, application goes down. If you're keeping up replication between databases across multiple clouds, and you're used to that replication always keeping up, what happens when live replication to another cloud can't keep up? Does your application just bomb and fail, or can you route around the problem? The other thing is that this can get expensive really quickly, since you're paying for data transfer. A lot of web-scale applications have almost as much data traveling back and forth on the back end as they have traveling out to the world, and that gets really expensive when you're paying per byte.

App messaging — here we're talking about queuing systems, like SQS or RabbitMQ. If you need a common one across environments and you've invested in a cloud-specific one, you're going to have a rough time. You should lean in early toward well-known, well-understood, open source, lowest-common-denominator tools: RabbitMQ, or ZeroMQ if that's the kind of thing you're into. Messaging has to be a tool that can spread across multiple environments. And also: can you do it securely? That's another secret-sauce thing — if you haven't checked out consul.io yet, you really should. It's fascinating where that world is going.

Networking and network management we've covered a little, but here's a different example: variations in NICs. In one environment, you bring up a VM and eth0 is the public interface. In another, eth0 is the internal private network and eth1 is public. In another, eth0 is internal but NATed out to public, so you can talk in both directions. If your app is architected to look for eth0 and configure itself that way, that's going to be a headache.
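One way to route around that, rather than hard-coding eth0, is to ask the operating system which local address it would actually use to reach the outside world. This stdlib-only sketch behaves the same on any of those NIC layouts:

```python
import socket

def outbound_address():
    # A UDP connect() sends no packets; it just resolves the routing decision,
    # so this works even when the address is NATed rather than truly public.
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.connect(('8.8.8.8', 53))
        return s.getsockname()[0]
    finally:
        s.close()

print(outbound_address())
```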
And again, VPC for isolation — sure, but it's different across all the environments. Same thing with HA and DR, high availability and disaster recovery. Think about VMware: one of the ways it does high availability and disaster recovery is that if you mark a VM as highly available and the physical compute unit it's sitting on goes down, the system automatically brings that VM back up on another compute node. If you know that, great. But if that's not what you've understood your app to rely on and you're trying to build on top of it — or if that's what you've always known and now you're trying to build on top of OpenStack — that's going to bite you.

And common tools and processes. Here we're really talking about the fact that you don't have a view into your network from one spot if you're doing it yourself. The abstraction layers — the RightScales and Scalrs of the world — give you a view of your network that you can use as a common portal into all of your clouds. If you don't have that, you have, what, Horizon? Horizon's not going to show you AWS. Aurora is pretty cool, actually — Aurora is what happened when PayPal rebuilt Asgard, Netflix's auto-scaling portal interface, and it works across multiple OpenStack environments — but you're out of luck if you want to talk to Google as well. Also on the list: high-level tools with abstraction. I'm going to skip ahead, seeing as I've got three minutes left.

So — summary. The steps toward building an application that will work in a hybrid environment, the things you need to think about. Understand and document your application: start from the ground up, understand everything about it, and write it down. Figure out where your headaches are going to be — even if you're wrong, if it's on paper and you understand it, great. Employ cloud native design, which, as we said before, is the idea of building applications that route around failure — the Netflix and Google style of building. Use well-understood open source tools — lowest common denominator, stuff with a big user base: RabbitMQ, Puppet, Chef. Abstract everything, or at least to a level you're comfortable with. Automate everything — at this point we're basically talking about Puppet, Chef, et cetera. And ensure behavioural compatibility. (That's not misspelled, just so you know — I'm Canadian, so I'm legally allowed to spell it that way.) What I actually mean by "ensure behavioural compatibility" is testing: unit tests, Tempest tests, test everything. Test the speed of your connections to the database, and don't expect it to be the same across multiple environments.
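To give you an idea of what I mean by testing the environment and not just the code, here's a sketch of that kind of test; the driver, connection details, and the 50-millisecond budget are all assumptions you'd adapt:

```python
import time
import unittest

import MySQLdb  # or whichever driver your app actually uses


class TestEnvironmentAssumptions(unittest.TestCase):
    def test_db_round_trip_latency(self):
        conn = MySQLdb.connect(host='db.example.com', user='app',
                               passwd='secret', db='appdb')
        try:
            cur = conn.cursor()
            start = time.time()
            cur.execute('SELECT 1')
            cur.fetchall()
            elapsed_ms = (time.time() - start) * 1000
        finally:
            conn.close()
        self.assertLess(elapsed_ms, 50, 'DB round trip blew the latency budget')


if __name__ == '__main__':
    unittest.main()
```

Run the same suite in every environment you claim to support; when a cloud quietly violates an assumption, you find out from a red test instead of a 30-second page load.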
And that's me, with one minute and 35 seconds left. I saw you standing — I assume that's a question.

OK, so the question was: why would somebody want to do hybrid in the first place? The answer is, really, that it depends on the application. Here's a good one: say you're a tax company. You have your internal environment, and you've proven to yourself that OpenStack is the way to go — you've proven you need a cloud for your internal stuff — but for three months of the year you need five times the capacity. It makes a lot of sense for you to burst into a public cloud; it doesn't make a lot of sense to buy five times the capacity and sit on it for the rest of the year. Every application is different, and every business is a unique and beautiful snowflake with different requirements.

That's very valid, but then we get into the next question — and I assure you that that is not the case. OpenStack deployments are currently not very interoperable. You will actually get more compatibility between Amazon and Google, which are completely different proprietary software stacks, than you will get between most OpenStack deployments.

Excellent — and I think that's me coming to an end. Thanks, everybody, for coming, and I appreciate your time.