I cannot see anything. OK, I can hear myself now. So welcome, everyone. My name is Miguel Suniga, I'm with Symantec, and we're going to talk a little bit about how to create a hybrid cloud. First, a little bit of our history. A little over two years ago we started doing private cloud, and we decided to go with OpenStack. Right now we have four data centers running OpenStack, with pretty much a mix of all the different release flavors: Keystone is going to be deployed on Liberty, Nova is running on Kilo, Horizon is going to be deployed on Mitaka. So it's a mix of everything, but it works. However, one of the things we need to start looking at for the future is that while it works for our purposes, sometimes you need to push stuff out into other clouds, whether you like it or not. Maybe there's a reason you need to move forward, maybe you're looking at specific locations, or some other specific reason. That's what this talk is about. Here's the agenda. We're going to go through extending your cloud, covering all those reasons we just talked about. Then we're going to talk about something that is genuinely difficult to deal with in hybrid cloud: the divergent user experience. Getting into one cloud is easy. Getting into Azure, AWS, Google Compute Engine, and OpenStack is a whole different story. There are different features all over the place, so we'll talk a little about how you can mitigate the risk around them. Then we'll get into: do you need only one public cloud, or can you actually jump into many of them?
Whichever public cloud provider it is, a lot of different reasons will dictate where you actually land. After that, we're going to talk a little bit about how you secure your stuff. We're Symantec, so we put a lot of emphasis on the security section. A lot of it. And it's not only the kind of security you might think of, like putting in a VPN and connecting to it. No, it goes all the way up to the application layer: who is going to be able to access the application, who is going to be able to do something on the different clouds, who is going to be able to provision all the different things there. After that, we're going to jump into how you maintain the whole thing and keep your sanity without going crazy, now that you're not only taking care of your private cloud but of one or more public clouds, where every single cloud works differently. Some of them will give you monitoring, some will not, and for some you'll basically have to go and build it yourself. After that, we're going to talk about multi-cloud tools. By multi-cloud tools I mean any type of orchestration engine you can use to manage all the clouds and put an abstraction layer on top, so that your users basically don't care where they're deploying. It's the same tool, it doesn't matter where it is, it just works.
After that, we're going to talk a little more about architecting applications for hybrid cloud: which types of workloads should be where, which types of applications are actually a good fit for certain specific types of clouds. Some of them, let's put it this way: in our case, we're not running only Linux, we're not running only Tomcat servers, we have a huge number of servers that are Windows-based. So it really depends on where you're going to drop the thing: do you have to re-architect your application, or is it just drag and drop, you grab it, zip it into something, and move it over? And at the end, we'll talk a little about the cost of scalability on demand: how your workloads need to run, and how you pretty much have to decide what is going to run where from a cost perspective. So that's the agenda, a really small intro, and let's start. First of all, extending the cloud. Why would you actually jump out there? First of all, what is hybrid cloud? There are a lot of different definitions out there. Some people will say hybrid cloud is just "I have my cloud here and I'm going to go use AWS." Others will tell you hybrid cloud is because we're using something here and then we're using Box to provide storage services. Others will tell you hybrid cloud is multiple private clouds connected together. There are a lot of definitions of hybrid cloud out there. What we're going to focus on here is the first one: you run your private OpenStack, and then you need to extend it outside into some other public provider. Now, jumping into the reasons why you need to extend it. There are usually these three.
You can have more of them, but these are the most common ones, put it that way. The first is price: you go into public cloud because it's going to be cheaper, you're going to run some specific workloads for a really short time, and you don't have enough capital to build another data center or extend your private cloud. The second is location and governance. Why location? There are a lot of countries out there that don't let you grab your data and just send it back to the US. And after the whole "you know what, the NSA is actually looking at us" thing, a lot of them basically said nothing gets out of our country, much less gets sent back to the US. So that's another reason you usually go into public cloud: you don't have time to say, I'm going to create a private data center in some other location and go through all the costs, when you can just jump into any of the public cloud providers that are right there and get your business running in minutes. And the last one is speed. Creating a private cloud takes time. It's not just "I'm going to install this and there you go." You have to focus on networking. You have to focus on what type of servers you're buying. The servers take a long time; if you're lucky, you'll get the server within a week. If not, you're basically just sitting there until the purchase order comes in, and then you wait until the data center team racks it. So it takes a lot of time.
Sometimes you're just going to run the workloads for a season. Put it this way: if you have something related to Christmas, you're only going to have a huge workload at Christmas, so why are you working on extending something that is only going to be used for a limited amount of time? So, digging a little deeper into the first reason, price. It's not only "this is going to be cheaper" or "this is going to be more expensive." Once you put something into the public cloud, you can lose control of the cost really easily. You have to keep track of how much you're going to be spending, which workloads you'll be running there, and whether it's actually viable to run them there. Sometimes you'll say, yes, I can just jump in and deploy my VM right away on this small instance. And out of nowhere, you're not getting better performance than what's sitting in your private data center, but at the same time it's costing you even more. So you really have to be careful about what exactly you put there. The second piece is that cost is, well, something cloudy in the clouds themselves. You'll go to Google Compute Engine and they'll tell you, OK, it's going to cost you, I don't know, maybe 10 cents per hour or something like that. But then out of nowhere you start getting billed for extra costs: if you're using some specific storage, or if you're using too much bandwidth. Some public cloud providers will charge you for what you send into the cloud rather than what you send out. Some of them will charge you for both.
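To make the hidden-cost point concrete, here is a rough sketch of a bill estimator. All the rates and numbers are hypothetical, and real bills have many more line items (requests, snapshots, load balancers, cross-zone traffic), so treat this as illustration only:

```python
def monthly_cost(hourly_rate, hours, egress_gb=0.0, egress_rate_gb=0.0,
                 storage_gb=0.0, storage_rate_gb=0.0):
    """Estimate a monthly bill: compute + data transfer + storage.

    All rates here are made up for the example; the point is that
    the per-hour sticker price is only one of several terms.
    """
    return (hourly_rate * hours
            + egress_gb * egress_rate_gb
            + storage_gb * storage_rate_gb)

# "10 cents per hour" looks cheap until transfer and storage show up.
compute_only = monthly_cost(0.10, 730)                       # sticker price
with_extras = monthly_cost(0.10, 730,
                           egress_gb=2000, egress_rate_gb=0.09,   # 2 TB out
                           storage_gb=500, storage_rate_gb=0.10)  # block storage
```

With these made-up rates, the "10 cents an hour" instance goes from about $73 a month to about $303 once transfer and storage are counted, which is exactly the kind of surprise the talk is warning about.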
But the thing is that you can only see the actual cost once you're up there. So you have to be really careful about cost when deciding whether to jump in or not. Now, the next point: location and governance. The reason is pretty much that you have to go and see where your data is. Usually the business centers around the data, and if you have data in locations that you cannot take it out of, like I mentioned before, you're going to be restricted there, or you cannot scale into that specific location. In some other countries, the telecoms build their own private data centers and their own private clouds, and that's what people use there. The other thing about location is that after what happened with the NSA and all of that, people pretty much lost confidence in a lot of things. So for instance, just here in the Americas, Brazil decided to say, I'm not going to let you ship any data back to the States. Mexico decided to do the same thing. Argentina is doing the same thing. Each of those countries made its own decision. AWS has a region sitting in Brazil, so there you can deploy locally. But in the case of Mexico and Argentina, they decided to start building their own clouds, and the biggest telecom over there decided to start providing that. So now it's a little more difficult to actually ship everything out, especially if you have petabytes of data, like we have internally at Symantec. And the last piece is deployment. Cloud changed a lot about deployment overall. Before, you used to say, I want to deploy an application, and it takes me a week.
Even now, in some OpenStack environments, if you don't have the right orchestration, it will take you a day or a couple of days. Whereas if you say, we need this right now, the answer might be: OK, we have enough capacity. But the time it's going to take to add more capacity to our private cloud is going to be a while. So you pretty much have to decide based on how fast you need it and how long you're going to be sitting in the public cloud. And this is out of experience. Way, way back, I was working for a video game company before coming to Symantec. There, we reduced the time to provision a physical server from a day to 10 minutes, and that was only physical servers, not even using cloud. As soon as we got into private cloud, we reduced that even more. But what happens when you try to burst? If you want to burst your application, you're not waiting five minutes for the VM to come up. You need to add the capacity as fast as you can. That's the reason a lot of people say, OK, we need to expand into a hybrid cloud. So, oh my god, I'm trying to go a little faster because I'm running out of time here. Second section: the user experience. One of the main problems is that you have multiple APIs. You go to this cloud provider, here's an API. You go to that cloud provider, there's another API. You go to OpenStack, another API. Even among OpenStack implementations: Rackspace has its own API, HP used to have its own API, and other OpenStack deployments will sometimes modify things, because, I don't know, they don't want to give you access to the whole API that's sitting there.
And once you go into public cloud, you also have to deal with the different versions of the APIs you have there. How do you mitigate this? First of all, you can abstract everything away and tell the users, this is the cloud tool you're going to use, and in the back it translates to all the different APIs you're actually talking to. There are a lot of projects out there, and a lot of businesses and companies that do that. And you will need it. Whether you like it or not, from the user's perspective it's way easier to go to a single point and manage everything from there than to say, oh, now I need to run the OpenStack client; no, now the AWS client; now the Google Cloud client. That's pretty much the core of the user experience problem. Now, this is also something you have to be aware of, because some features are available in some public clouds and not in others. You might even have more features in your private cloud than what the public cloud can give you out there. So the abstraction layer at that point is not even for the user; it's for your own benefit. Sometimes you say, OK, we're going to need some kind of Swift out there, because all our deployments use Swift. Or we're going to need some type of VPN connection. So how do you manage creating a VPN connection to Google Compute Engine, a VPN connection to AWS, and a VPN connection to Azure, everything coming from your data center, so that OpenStack can actually talk to those clouds? Those kinds of abstractions are the ones that will make your life easier, because even on that point the clouds differ: on Google Compute Engine, you don't get IPsec tunnels.
On AWS, you get IPsec tunnels. On Azure, you basically get some instance out there that F5 or Cisco provides, and then you run your IPsec tunnel against it. So it's really different, and that's one of the biggest problems you usually have when you want to jump into the hybrid space. So let's move to the next one real quick: the "or" section. What exactly do you need? Do I really need more than one? That's one of the questions you have to work through. And like I said before, it really depends on where your data is, your locations, and where all your customers are going to be. Sometimes you're fine just deploying into one of them. You also have to consider the technology stack you're using. A lot of you might already be running Windows on OpenStack, and a lot of you might think, OK, we can just do the same thing outside. But it's not the same. Unfortunately, whether I like it or not, Azure is one of the best platforms for running Windows. The reason is that they based their cloud on Windows technologies; they're using Hyper-V. Outside of that, even AWS sometimes lets you select Hyper-V as a specific hypervisor, but it's not the same deal. So between where your data is and the type of stack you're running, you have to pick: do I go with one provider that gives me everything in all the locations, or do I need to jump into multiple? And after that, you have to make other decisions, especially the one down at the bottom of the slide: the features of each cloud.
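The single-entry-point idea from the user experience section, one tool in front and per-cloud translation in the back, is exactly what libraries like Apache Libcloud do. Here is a minimal sketch of the pattern; the class names and the fake driver are hypothetical, not any real project's API:

```python
from abc import ABC, abstractmethod

class CloudDriver(ABC):
    """Provider-neutral interface: users only ever see this;
    each concrete driver translates to one cloud's real API."""

    @abstractmethod
    def create_instance(self, name: str, size: str) -> str: ...

    @abstractmethod
    def destroy_instance(self, instance_id: str) -> None: ...

class FakeOpenStackDriver(CloudDriver):
    """Stand-in driver that keeps instances in memory, so the
    pattern can be shown without talking to a real cloud."""

    def __init__(self):
        self.instances = {}

    def create_instance(self, name, size):
        instance_id = f"os-{len(self.instances)}"
        self.instances[instance_id] = (name, size)
        return instance_id

    def destroy_instance(self, instance_id):
        del self.instances[instance_id]

def burst(driver: CloudDriver, count: int, size: str):
    """User-facing operation: works against ANY driver,
    so the user never cares which cloud is behind it."""
    return [driver.create_instance(f"web-{i}", size) for i in range(count)]
```

A second driver for AWS or Azure would implement the same two methods against that provider's SDK, and `burst` would not change at all; that is the whole value of the abstraction layer.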
When you try to make something portable from one place to another, if you don't have the exact same features, it's probably going to be really difficult to implement, unless you build the abstraction layer I was mentioning and abstract all your applications to a higher level. If you try to use IaaS only, you might say, OK, on AWS I'm going to use S3. On Google Compute Engine there's nothing that works like S3; even though they have their own version of object storage, it doesn't work the same way. If you go to Azure, it's yet another separate story. So you pretty much have to find, and we'll talk about this a bit further on, the minimum denominator that you're actually going to use. Maybe you standardize across all the clouds on compute, networking, and storage, and that's it. If you start going a little higher level, say some type of CDN, let's use that example: if you jump into AWS with CloudFront, who else is going to provide you that? Now you're locked in, because you're using a feature specific to one public cloud provider. So if you're going to need multiple clouds from different providers, the best way is to grab the standard pieces and standardize on what you can also provide in-house, so it works the exact same way inside and outside. The reason I'm saying this is that once you get out there, there's going to be a point where you faceplant into "oh my god, this is costing a lot," and you'll probably have to bring everything back. As soon as you start using a specific public cloud feature that you don't have in-house, you're locked in. Getting out will take a re-architecture, or pretty much rebuilding the application from scratch.
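Picking that minimum denominator is literally a set intersection over what each provider, plus your private cloud, can offer. A trivial sketch with made-up feature names:

```python
# Hypothetical feature inventories; a real exercise would list
# the services your applications actually depend on.
features = {
    "private-openstack": {"compute", "block-storage", "networking",
                          "object-storage"},
    "aws":               {"compute", "block-storage", "networking",
                          "object-storage", "managed-nosql", "cdn"},
    "gce":               {"compute", "block-storage", "networking",
                          "object-storage"},
    "azure":             {"compute", "block-storage", "networking",
                          "object-storage", "managed-ad"},
}

# Safe to build on everywhere: the intersection across all clouds.
portable = set.intersection(*features.values())

# Anything outside `portable` that you depend on is a lock-in risk.
lock_in_risk = {cloud: sorted(feats - portable)
                for cloud, feats in features.items()}
```

In this toy inventory the portable set is exactly compute, networking, and the two kinds of storage, which matches the talk's point: depend on CloudFront or DynamoDB and you have stepped outside the intersection.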
So moving forward, let's say you've already picked, and you're going to the three major clouds out there, and you have your private data center too. Your private data center you control: you define who actually touches your servers, you can put in a firewall there if you want to. On the public cloud providers you cannot, especially if you're jumping into multiple; all of them are doing their own type of security. You can get something into the contract saying, OK, explain to me exactly how you do everything in the back, but it's not the same thing. And then there's the other piece: communication back to your private cloud. How do you do that? The usual answer is, let's just build a tunnel and send everything through it. And the tunnel is fine; it will encrypt everything you need. But is it going to give you enough performance? Will it give you enough bandwidth? That's one of the things you have to think about. Sometimes putting in a tunnel is pure overhead for your application; sometimes it's not. I usually try to set up one of these three types of communication. The first: users go into the public cloud, which you use for bursting, and all the traffic goes back through your tunnel and out of your data center; the public instances don't have direct internet access. That's one way of securing it. The second: you pretty much isolate your public cloud provider, and the communication is not done through a tunnel, but you encrypt your services going back and forth, and if traffic wants to come back home, it has to hit the firewall of the data center, the firewall of your company, and go through it.
And the last one: you give them outbound access only, and you also close off access back to your data center. Those are usually the three types of communication you open up. Then, how do you secure against the people out there who say, I'm just going to scan and map everything across the network of AWS or Google and see what's out there, or see where I can jump in? Getting attacked on public cloud providers is really common; an exposed instance typically gets hit within about 10 to 15 minutes. And the problem is that if you don't have security, that's pretty much a free entry into your data center, into your cloud. So once you've defined how you're going to secure the communication, you have to define, from a network perspective, how you're going to secure the public cloud itself: whether you put firewalls at the host level, whether you use some type of firewall from the cloud provider, or whether you just don't allow access at all. There are a lot of guides out there that will tell you, just drop the SSH keys in there and that's fine. But if you're running Windows, how do you do that? So you have to think about it not only at the OS level, not only at the networking level, but all the way up to the application itself. In our case, everything that goes into the cloud uses single sign-on. It doesn't matter where the application sits; it goes back to Symantec authentication and tries to authenticate the user, and only if that succeeds do they get in. And we're taking it even further: one of our teams is working on who is going to be able to provision things on the public cloud provider at all.
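Given that an exposed instance gets scanned within minutes, one simple operational habit is to audit your provider-side firewall rules and flag anything open to the whole internet that you didn't deliberately expose. The rule format and helper below are hypothetical, not any real cloud API:

```python
def audit_ingress(rules, allowed_public_ports=frozenset()):
    """Flag rules that expose a port to 0.0.0.0/0 unintentionally.

    Each rule is a dict like {"port": 22, "cidr": "10.0.0.0/8"};
    this shape is made up for the sketch. A real audit would pull
    rules from the provider's API and normalize CIDRs properly.
    """
    findings = []
    for rule in rules:
        if rule["cidr"] == "0.0.0.0/0" and rule["port"] not in allowed_public_ports:
            findings.append(f"port {rule['port']} is open to the world")
    return findings

rules = [
    {"port": 443, "cidr": "0.0.0.0/0"},    # intended: public HTTPS
    {"port": 22, "cidr": "0.0.0.0/0"},     # accident: SSH to everyone
    {"port": 3389, "cidr": "10.0.0.0/8"},  # RDP only from the corporate range
]
problems = audit_ingress(rules, allowed_public_ports={443})
```

Run on a schedule against every cloud account, a check like this catches the "free entry into your data center" mistake before the scanners do.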
If they don't get through the actual SSO authentication, they're not going to be able to do anything there. The reason is that once somebody else gets access to your public cloud account, they can spin up instances and you wouldn't even notice. So those are the points I wanted to make on this slide. Like I said, you have to take care of how you're going to secure it from the highest point of view, how it's going to connect back, how things communicate with each other and which options you have for application communication, and at the end, who is actually going to be able to do anything. Once you've done that, OK, now we have it secured, now we have it up and running, now we're going to start deploying things. Then: how do you keep your sanity while managing the thing? Public cloud is really dynamic. If you lose something in OpenStack, you can still recover the application itself. I mean, it's just a VM; you can still go into your hypervisor and grab it. What happens on the public cloud is that the instance is gone, and that's it. You cannot say, oh, restart it or bring it back. It's gone and it's gone. How you do operations in private cloud probably still resembles traditional IT operations. How you do operations in public cloud is completely different. That's the whole reason a lot of companies focus on DevOps: in public cloud, you have to have so much automation that you can say, if I lose something, I don't care, because I'll just bring it back.
Way back in 2010, when I was jumping into these types of workflows, a friend of mine made a comparison between cloud instances and paper plates: you use the paper plate and you throw it away. If it breaks or bends or anything, you just get another paper plate. That's pretty much the mentality you have to bring, and the same goes for the operations side. You cannot just carry over all the operational pieces you're using in your private cloud. Sure, you can use Ceilometer to grab data out of instances in the public cloud, and you can use Keystone to authenticate to your instances in the public cloud. That's fine. But even if you move all your operations forward the same way, there are still pieces that will require something a little different, a little more specific to that particular public cloud provider. The reason is the different APIs they have in the back. What I mean is, let's use the example of Azure. Say I jump into Azure. How do I keep track of everything I have in Azure specifically? I cannot just run our existing tooling there, because the workloads we're deploying there are all Windows. So like I said, even though it might look similar, you cannot just deploy the same thing. On the Windows side, you'll say, I'm going to use this domain controller to manage all the other applications through AD, or I'm going to use these security groups they deploy there. Windows is actually really good for automating a lot of things. But even at that point, you have to take care with what you're actually pulling out of it.
One thing from previous experience, and this is part of Symantec's past experience: even when we're deploying something on Windows, we're deploying Linux tools inside of Windows. The reason is that once you focus on one stack and want to jump to another, it was pretty much impossible for us to use certain tools. For instance, Sensu. Sensu is a monitoring tool that works pretty well for Linux services; it's actually designed to run in the cloud and monitor things in the cloud. But once you want to run it on Windows, it's a little tricky. First of all, you have to deploy Ruby, which is already tricky enough to run on Windows. So sometimes, whether you like it or not, you'll have to separate the way you do ops in-house from the way you do it outside. And one of the big reasons it's different is that you don't have access to the hypervisors. That's one of the biggest problems. In your private cloud, if you try to get something out of the hypervisor, you can. As soon as you jump into public cloud, you won't be able to access it. It doesn't matter what contract you have with the public cloud provider or what exactly you're paying them; you won't get that access. The last piece: once you have both sides identified and figured out, for this I'm going to use these tools, for that I'm going to use those tools, base everything on data. Why am I saying data? Collect as many metrics as you can, from every single point of view. It's a whole different story when you sit in your own data center and can just consume resources; you can keep grabbing resources and adding more.
On public cloud, those resources cost you quite a bit more. So you have to put all the workloads under the microscope: I need to see how much I'm actually using the system, because otherwise you end up with a bunch of instances in the public cloud being used at maybe 50% at most, and I'm not even talking about 50% of actual CPU utilization. So that's one of the biggest pieces: gather as much data as you can, because, as we'll see a little further down the road, there are different ways to deploy in hybrid cloud. Sometimes you push a thing into the public cloud just because you need the VM size they provide there; you don't have enough hardware in-house to provision a VM with, I don't know, maybe a two-petabyte disk. So you really have to know your application, and having operations collect that data will give you what you need. Moving forward, because I'm running out of time and I talk too much: multi-cloud tools are something you'll have to focus on. Like I was saying, you're going to need something working on the private cloud and something working outside, and once you start jumping into multiple clouds, it's going to be easier if you have a multi-cloud tool. The holy grail: one tool to rule them all. A lot of them claim they can do it. A lot of them say, yes, we can do all of this exactly the same everywhere. It's not always the same. But you can either go with something that's out there, or pretty much code it yourself. In the end, what you're doing in hybrid cloud is the same thing everywhere: you're provisioning VMs, you're giving the users IaaS, nothing else.
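The underutilization point above, instances sitting at 50% or less while you pay full price, can be caught with a trivial pass over whatever metrics you collect. The threshold and the sample data here are made up:

```python
def flag_underused(cpu_samples, threshold=50.0):
    """Given {instance: [cpu % samples]}, return the instances whose
    average CPU is below the threshold (hypothetical cutoff), i.e.
    candidates to shrink, consolidate, or bring back in-house."""
    return sorted(
        name for name, samples in cpu_samples.items()
        if samples and sum(samples) / len(samples) < threshold
    )

# Fake metrics as they might come out of Ceilometer or a provider's
# monitoring API; real samples would be timestamped series.
metrics = {
    "web-1":   [80.0, 75.0, 90.0],
    "batch-1": [5.0, 12.0, 8.0],   # mostly idle: paying for nothing
    "db-1":    [55.0, 60.0, 58.0],
}
wasting_money = flag_underused(metrics)
```

It is deliberately simple; the point of the talk is not the analysis, it is that without collecting the metrics in the first place you cannot even run a check this basic.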
If you go with a multi-cloud tool, some of them will say, OK, I'm going to give you access to provision IaaS, manage the networking, manage load balancers, manage a lot of different stuff. But the reality is that there are multiple platforms, and they change, constantly. So whether it's a multi-cloud tool or something you built yourself, you'll constantly have to update it so it can talk to the latest APIs, on and on and on. Sometimes it's better just to grab something that's out there. Here in Austin, one of the tools that was really common was RightScale. RightScale's pitch was, we do deployment for you across all the clouds you want. It doesn't matter if it's public cloud, private cloud, whatever you want; you can put it there. And it works; that's one of the things that does work. There are a lot of other options too. Some people say, OK, I'm going to use Puppet, or I'm going to use Chef, and they just create a small plug-in for knife and start deploying all over the place. Same deal. Others will just use Ansible. You have options. The only thing is that you really need to standardize on one of them, because otherwise this team is working with one tool, that team is working with another, and it gets messy. Instead of actually helping you out, it becomes a mess. So, moving on to the minimum denominator versus the maximum. This relates to what you're actually going to be running. If you decided to go with one public cloud and that's the provider you're sitting on, you stick to it. You don't keep moving around saying, oh, now let's give this other one a shot.
Because at that point, you're getting locked in. If you go with the maximum denominator, you start using all the features they have. And I'm not saying that's wrong; it can actually help you a lot. But let's say you go into AWS and you use DynamoDB. How do you get out of there? Say you try to bring that workload back. You'll have to recode the thing to use Cassandra or some other system that resembles what Dynamo does. In the end, it takes you to: I have to re-architect my application, I have to recode something. So you have to look at where exactly your application is going to sit, and whether you're going to be able to deal with it after the fact. Because moving workloads back and forth does work, and it's actually awesome. But you're only going to be able to do it if you stick to the minimum denominator. If all you provision is instances, networking, and storage, you can do that in-house, you can do that on Azure, you can do that on AWS, you can do that on Google Compute Engine. But as soon as you start adding a few more pieces, you're stuck in there, and getting out is really difficult. It's going to take you at least a year to migrate an application out of it. And then the last piece: let's say you go with the minimum common denominator. That's a way of doing things, but it has some problems too. To get portability of all the features you need from one cloud to another, you end up building, say, your own database as a service and running it on top of all the clouds. That lets you port from one cloud to another without any problem, because you're providing that service yourself instead of relying on something that's already there.
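A sketch of what "coding to the minimum denominator" looks like in practice: the application depends on a generic key-value interface, so swapping DynamoDB for Cassandra (or anything else) means writing a new adapter, not re-architecting the application. The class and function names here are illustrative, not any real SDK:

```python
# The app codes against a generic store, never against DynamoDB directly.
class KeyValueStore:
    def put(self, key, value): raise NotImplementedError
    def get(self, key): raise NotImplementedError

class InMemoryStore(KeyValueStore):
    """Stand-in backend; a DynamoDB or Cassandra adapter would expose
    exactly the same two methods."""
    def __init__(self):
        self._data = {}
    def put(self, key, value):
        self._data[key] = value
    def get(self, key):
        return self._data.get(key)

def record_order(store: KeyValueStore, order_id: str, total: float):
    # Application logic only ever sees the interface.
    store.put(order_id, {"total": total})

store = InMemoryStore()
record_order(store, "order-42", 19.99)
print(store.get("order-42"))  # {'total': 19.99}
```

The trade-off is the one described above: you keep portability, but you now own the adapters and, if the backend is something like a database cluster, you own running it too.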
But in the end, that requires a lot of coding and a lot of other work. It's not just "I'll go build the database, and there you go." It really requires a lot of thinking on the operations side. As soon as you jump into services and try to abstract that layer, it forces you to say: OK, I don't have a DBA. Now I need a DBA to figure out how to build a cluster, keep it up and running, and make sure that when the public cloud provider loses some instances, it doesn't crash everything. So, like I said, there are pros and cons on both sides. Either you get locked in and just deal with it, or you use the least amount of features and build the other ones you need yourself. It's up to the company to decide; like I said, it depends on what you're looking for, where your data is going to sit, and what you're actually required to do. So, I have eight minutes. OK. Architecting: now you're in there, and now you say, I actually need to move the workload. So, the difference between private and public cloud. On private cloud, sometimes you're still thinking in terms of pets. Oh, my instance went down; just submit a ticket and bring it back. OK, you can do that, that's not a problem. But once you start moving into public cloud, instances become cattle. And I'm not saying the instances will constantly die; some instances will be up and running for years. The problem is the unknown: you don't know when one is actually going to go. So you pretty much have to re-architect your application to deal with that. Now I have to make it HA for real. Not HA as in "put a load balancer on top of it." No, real HA.
And it's not just bringing the instance back. It's: bring the instance back, automated; deploy the code onto it; and put it back into whatever big cluster it belongs to. That's one of the things a lot of businesses hit when they try to jump in. If it's your first time running in public cloud, that specific point is going to hit you. The other thing is moving completely into services. Why is everybody saying, you know what, you need to use microservices? The reason is that once you've reached that point and your application becomes a service itself that just talks to other services, you can cluster them. You're not looking at a monolithic application anymore; you're looking at groups of applications that interact with each other. So if you lose a piece, or a whole data center at the public cloud provider dies, you're not going to lose everything. You can still grab one of those pieces and roll your updates into it, one by one, because in the end you're putting a service out there. And I'm not saying go use some external service. No, I'm saying: if your application is something simple, like a shopping cart, you have one service that handles the users, one service that handles the billing, one service that handles, for instance, emails, and one service that handles your catalog. You try to see how your application can work as those smaller pieces instead of one huge thing. Now, on the other side, if you have one monolithic application that is completely stateless, then you can put everything into it, and if it dies, you just bring it back.
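The shopping-cart decomposition above can be sketched very simply: each concern is its own small service, so losing one piece (here, "emails") degrades that feature without taking down the rest. Service names and behavior are illustrative only:

```python
# Toy registry of the shopping-cart services described in the talk.
services = {
    "users":   lambda req: f"user {req} ok",
    "billing": lambda req: f"charged {req}",
    "emails":  lambda req: f"mailed {req}",
    "catalog": lambda req: f"catalog for {req}",
}

def call(service: str, request: str) -> str:
    handler = services.get(service)
    if handler is None:
        # Degrade gracefully instead of failing the whole application.
        return f"{service} unavailable, queued {request}"
    return handler(request)

del services["emails"]              # simulate losing one piece
print(call("billing", "order-42"))  # still works
print(call("emails", "order-42"))   # degraded, not a total outage
```

In a real system the handlers would be network calls and the "queued" branch would be a retry queue, but the failure-isolation property is the same one the monolith doesn't give you.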
But like I said, that's pretty much up to how you're going to move forward. Now, the other thing is where you should deploy this stuff, and this is something I want to put a little bit of emphasis on. The reason is that your applications and your instances need to sit right next to the data. In the end, the data is everything. The data tells you how your business is going. The data tells you, OK, I need to jump into this specific region because my customers are there. And I'm not even talking about governance, about "I cannot take it out of the country." Sometimes you just say, OK, my application requires maybe a 10-millisecond response. Why would you deploy it in Japan if your customers are sitting in Europe? So you have to know where you're actually going to deploy, because every single piece of that decision is going to impact the performance of your application. You can't just say, OK, you know what, I'm going to deploy it over here and then use some CDN to serve the content everywhere else. That might work in some cases, and it might not work in others. So you really have to be aware of which place you're deploying to and exactly which pieces you're deploying there. Also, a lot of the public cloud providers have different models for charging users. Some of them will tell you: this is the amount you're charged per hour. If you reserve for a longer time, you get a different cost. If you say, OK, I'm going to bid for it, you get a different cost again.
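The placement argument can be reduced to a tiny decision function: given a latency budget, pick the region closest to your customers, and notice when no region meets the budget at all. The latency numbers below are invented for illustration:

```python
# Hypothetical latencies measured from a European customer base, in ms.
LATENCY_MS = {
    "eu-west":   8.0,
    "us-east":  85.0,
    "ap-japan": 240.0,
}

def pick_region(latencies, budget_ms):
    """Return the lowest-latency region, or None if none meets the budget."""
    region, latency = min(latencies.items(), key=lambda kv: kv[1])
    return region if latency <= budget_ms else None

print(pick_region(LATENCY_MS, budget_ms=10.0))  # eu-west
```

The same shape of check works for the CDN question: if even the best region misses the budget, serving static content from an edge network might close the gap, but for chatty application traffic it won't.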
So you have to think really carefully about what exactly you're going to deploy into the private cloud versus the public cloud. Sometimes you say, you know what, PCI: I don't have any other option, keep it inside, and then you just open up the billing services to your applications sitting outside. Sometimes you say, you know what, I can't do that; I'll deploy my credit card and PCI stuff into AWS, into some specific region, for governance or some other external reason. But let's say that PCI workload is running some massive database, I'm not going to say Oracle, but some massive database that keeps your instances up and running constantly, over and over. That's when you say: maybe it's better to move it back into OpenStack, because the cost there is not going to be as high as running it out there. And yeah, you have to make the decision about where you push each thing: something that's critical, something that's not really critical, or something you're just trying to burst. The most common use of hybrid cloud, outside of "I need to be near the data" or "I cannot move my instances to that country," is bursting. Why bursting? Because you deploy small workloads, or big ones, for just a specific, really short amount of time. That lets your application survive the peak, and then you pull it back in when you don't have that kind of workload anymore. And let me jump to the next one before we actually run out of time. So, the last piece is the cost of scalability on demand. I already talked about this a little, but you have to think about which pieces are going to be the most critical.
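A back-of-the-envelope version of that "always-on goes private, bursts go public" argument, with all prices invented for illustration:

```python
# Hypothetical hourly rates: on-demand public cloud vs. amortized private.
PUBLIC_PER_HOUR  = 0.50
PRIVATE_PER_HOUR = 0.20

def monthly_cost(rate_per_hour, hours_used):
    return rate_per_hour * hours_used

always_on = 24 * 30  # the massive database: runs every hour of the month
burst     = 6 * 4    # a burst workload: 6 hours, 4 times a month

print(monthly_cost(PUBLIC_PER_HOUR, always_on))   # 360.0 -> expensive outside
print(monthly_cost(PRIVATE_PER_HOUR, always_on))  # 144.0 -> cheaper inside
print(monthly_cost(PUBLIC_PER_HOUR, burst))       # 12.0  -> bursting is cheap
```

With these made-up numbers the constantly running database costs 2.5x more in public cloud, while the burst workload barely registers, which is exactly why the burst is the piece you push out and the steady-state load is the piece you pull back.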
Sometimes it's better to keep a monolithic application where you can bring it back inside your private data center on OpenStack, rather than reworking everything that's really critical and putting it on a public cloud provider, where you have to say, oh, you know what, if that thing dies, I'll have to cluster it somehow; or, if that thing dies, I'm not going to be able to just grab it back. The second piece is to use the multiple billing models you have. This is something AWS provides, and a lot of the people who go into AWS for hybrid cloud put specific workloads on one type of instance, other workloads on another type, and, in the end, spot instances for anything they're bursting. Keep track of all your resources, and enforce governance: restrict some features for your users. Like I mentioned at the beginning, it pretty much depends on what you give users access to do, where you can say, OK, you're good to go, or, you know what, you cannot do that. And I ran out of time again. Questions, if there's any time at all? If not, I think that's pretty much it.