Hello and welcome to our webinar on designing a multi-cloud strategy. My name is Dave Blakey and I'll be guiding you through our experience and learnings on multi-cloud and what is driving the adoption of multi-cloud and multi-location deployments today. When I speak about multi-cloud, and when companies like SNAP talk about multi-cloud, we're really talking about the idea of having your business-critical production workloads running in more than one location. So it's everything from co-location to true multi-cloud to hybrid cloud to public and private cloud, and everything involved in the idea of having more than one provider for the services you use, for reasons ranging from redundancy to picking the best-of-breed service and everything in between. So really, multi-cloud to us, and enabling multi-cloud, and the challenges of multi-cloud, are about these different platforms with different apps at different stages and different locations around the world. What we're going to talk about today, the main focus of the webinar: what to use multi-cloud for, how to choose between multiple cloud providers, how to deploy to multiple cloud providers, how to then secure them and think about security across different providers, and then a little bit of insight into how SNAP does it in our organization. So the first step: what to use multi-cloud for. What are the benefits of multi-cloud, right, and why do you need to be looking at it? What is the low-hanging fruit, if you will? Obviously we're of the opinion that all organizations should really be preparing for, or already using, a multi-cloud strategy. As you saw, some 81% of organizations are already in multiple locations and clouds, but the common benefits, the easiest upsides, are what I'd like to focus on first. And the first one, by far probably the biggest, where we see clients going to multi-cloud first and getting the most immediate return, is redundancy and availability.
Now, I think the most important point here is that clouds fail and network paths fail, right? A cloud availability zone could go offline, and an entire cloud provider could in theory go offline. You can see in the figure on the top right that out of 1,200 respondents, only 27% said they have no downtime per month from cloud providers. So the vast majority experience some amount of downtime each month from cloud provider outages. And when you look at large organizations, you might not just be looking at outages in terms of availability zones or data centers going down, but also just system outages, right? Losses of virtual machines or instances or services do happen. Clouds have roughly a 1 to 2% instance failure rate per year, so if you've got hundreds of systems, you're looking at a situation where things will be failing throughout the year, almost on a daily basis if you're large enough. And that's the critical component here. I think in the last year we've seen a lot of this, right? A lot of outages at clouds, from fires at providers to network outages to power problems and the like. And I think a real high-availability strategy today must include a multi-vendor strategy. It's very hard to say that your application can guarantee 100% uptime if you're dependent on one single provider or one single network, or even worse, one single data center. So some of the challenges that we'll look at, when we really talk about multi-cloud challenges, especially things like security, we're talking about using Amazon and Azure and GCP and DigitalOcean and Linode and all of these cloud providers, or Amazon and your own VMware data center, or whatever it might be. But even within one provider with multiple availability zones, there are challenges around redundancy and availability. It's very common for us to see clients that have a deployment in Europe and one in the US.
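To make that failure-rate point concrete, here is a quick back-of-the-envelope sketch. The fleet sizes are hypothetical; the failure rate is just the 1 to 2% range mentioned above:

```python
# Back-of-the-envelope: expected instance failures per year for a fleet.
# The 1.5-2% annual failure rates below are illustrative, taken from the
# rough range quoted in the talk, not measured data.

def expected_annual_failures(fleet_size: int, annual_failure_rate: float) -> float:
    """Expected number of instance failures per year across a fleet."""
    return fleet_size * annual_failure_rate

# A hypothetical 10,000-instance estate at 2% sees ~200 failures a year,
# i.e. better than one every other day -- which is why redundancy has to
# be designed in rather than hoped for.
print(expected_annual_failures(10_000, 0.02))  # -> 200.0
print(expected_annual_failures(300, 0.015))    # -> 4.5
```

Even a modest 300-instance fleet should plan for a handful of failures a year; at real enterprise scale the failures become routine.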
And if the US should go down, American users should be sent to Europe. But that still doesn't come for free, right? There's a design element that's really important there. The next biggest driver for multi-cloud that we see is geographic considerations. More and more workloads today are very latency sensitive, and getting traffic to consumers in the lowest amount of time possible is critical for all sorts of use cases, from finance to online gaming to gambling and sports betting, even just websites really. But when you look at 50-millisecond-or-below latency, you're basically talking about geographic considerations, right? To give you an idea, to get to the west coast of America from South Africa takes around 300 milliseconds even with no processing delay. It's just that far. So there's simply no way that an Amazon deployment on the west coast of the US can provide, say, a 200-millisecond SLA to clients in Southern Africa. They would have to deploy in South Africa. Now, until recently, the only public cloud available in South Africa of the big three was Azure. So you could be an Amazon shop, bring on a big client in Southern Africa who requires a sub-200-millisecond reply on an API of yours, and all of a sudden you have to deploy into Azure as well, because there's simply no Amazon data center there. We see this kind of thing a lot. It's not just a global problem, it's an in-country problem as well. In-state, for example: how do you deploy this workload in that specific state? And that really drives people towards multi-cloud. And remember, the way I talk about multi-cloud is deployments in many places, right? You might find that there's no feasible cloud in a state you need to deploy in, but there is a data center. So you ultimately wind up with the same problem, and this is rising big time.
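Physics sets that latency floor, and it is easy to check. Light in optical fiber travels at roughly 200,000 km/s (about two-thirds of the speed of light in vacuum), so round-trip time can never beat twice the distance divided by that speed. The distance used here is an assumed great-circle figure for Johannesburg to the US west coast:

```python
# Why geography sets a hard floor on latency: propagation delay alone,
# before any routing hops, queuing, or server processing.

SPEED_IN_FIBER_KM_PER_MS = 200.0  # ~200,000 km/s, expressed per millisecond

def min_rtt_ms(distance_km: float) -> float:
    """Theoretical minimum round-trip time over fiber, in milliseconds."""
    return 2 * distance_km / SPEED_IN_FIBER_KM_PER_MS

# Illustrative distance: Johannesburg to the US west coast, ~16,700 km.
# The floor is ~167 ms; real cable paths are longer and add queuing,
# which is how you land near the 300 ms figure -- and why a sub-200 ms
# SLA forces an in-region deployment.
print(round(min_rtt_ms(16_700)))  # -> 167
```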
The need to have systems within what we used to call, in the old days, the last mile of the consumer (obviously not a physical mile) is a big trend and a big concern for a lot of application types. The next driver is cost and feature options: the best possible cloud for each workload. It's easy to say, but really it's about using the most optimal solution for each problem that you have, right? Problems typically being your applications, but an application is many things. It can be a SQL database and a key-value store and a bunch of web servers, or it can be serverless, or it can be general-purpose workloads. But you can see at the top right, on costs, that no one cloud is cheaper than the others. It depends on what you need, what instance size you're launching, what services you need to consume, and so on. So really, if you have a truly multi-cloud workload or application delivery strategy, there's no reason why you can't always be deployed in the cheapest cloud. It's something of a shocking example to people, but we have a client, for example, that spends $5 to $6 million a month on cloud, and they will move their workloads between the big three clouds multiple times a day depending on the rates they get, their spot pricing, their instance requirements, and things like that. That's obviously an extreme example, but there's really no reason why you shouldn't be able to benefit from that. The other thing to remember is that cloud can become a platform for you. You might have an entirely containerized workload that you really can deploy anywhere, and then there's no reason not to just say, well, run more containers where it's cheaper. And the final point, and this is an important one, is avoiding vendor lock-in. When you design for a single cloud, you tend towards a vendor lock-in problem.
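For a portable workload, "always run in the cheapest cloud" reduces to a comparison over current rates. A minimal sketch, with made-up provider names and hourly prices (real pricing is per-service, per-region, and changes constantly):

```python
# Sketch: choosing where to run a portable (e.g. fully containerized)
# workload based on current rates. Providers and prices are hypothetical.

def cheapest_provider(rates_per_hour: dict[str, float]) -> str:
    """Return the provider with the lowest current hourly rate."""
    return min(rates_per_hour, key=rates_per_hour.get)

current_spot_rates = {
    "cloud_a": 0.096,  # hypothetical $/hour for the instance shape we need
    "cloud_b": 0.083,
    "cloud_c": 0.104,
}
print(cheapest_provider(current_spot_rates))  # -> cloud_b
```

In practice a scheduler would re-evaluate this on a timer and factor in migration cost and egress fees, but the decision itself is this simple once the workload is provider-neutral.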
You know, it doesn't mean you definitely have lock-in by working on one cloud, but you start to utilize local cloud services and applications, right? Products that only Amazon has, or an API that's very specific to Azure or GCP or whatever it might be. And it then becomes very difficult to get away from them. So what happens then is, say, your business is looking to be acquired by someone, but they need it to run in a different cloud and it's not easy to migrate. Or that cloud's pricing changes and all of a sudden it's not the best option for you anymore, but you cannot move your workload off because it's become so dependent on it. These are lessons that we as an IT community learned long ago with things like SQL providers that used to lock you in and you could never leave them; no matter what happened with the pricing or the infrastructure, you were so deeply involved with their specific APIs and services that it became impossible to get off of them. There are deployment models that help with this natively, like being able to deploy things into Kubernetes, which makes it much easier to lift and shift. But designing your application with the intention of running it in more than one provider really forces you not to become locked into a vendor. So this is more of a side benefit, I think, but it is a big concern. You'd be surprised: I'd say more than 50% of enterprises we speak to are actively working against being locked into a specific vendor or provider. It also helps with just negotiating deals and prices, right? If they know you cannot leave, it becomes much harder. So the next topic is choosing cloud providers. How do you choose a cloud provider for your business when you're looking at these multi-cloud strategies?
You know, often you will already have one, but how do you choose the second one, or how do you choose many? And my first, slightly tongue-in-cheek point would be: don't design for specific clouds. Don't choose two. It doesn't mean you can't pick two to run in today, but don't choose two to build for. Building in such a way that you could run in any environment is really the key, I think, because that helps you avoid these lock-in dangers. And then it also helps you to build for non-restrictive environments and platforms, right? Like I said, Kubernetes, where you can ultimately design your workload to run anywhere, or containers, or even virtual machine images. It all depends. Can you use relational database services in a public cloud? Yes, because almost all public clouds have them. But as you go down that vendor-provided set of services, you obviously become more and more locked in. So the first thing is, I think, don't choose cloud providers as part of your application design or deployment architecture. You can choose them based on which is cheapest, which is closest to your location, what the latency is like, what the quality of the support is like, or what contract you got with them. I don't think you should choose them based on technical requirements whenever possible. The next point is to plan your architecture. This is a big, important part that I mentioned: location and requirements. Where do you need to physically exist? Where will it benefit you most to exist? Maybe you're looking at a multi-cloud strategy for redundancy and high availability reasons, but your business has clients primarily on the West Coast and East Coast. Then you should deploy on the West Coast and East Coast.
If you've got a lot of clients in the UK, maybe it would be a good idea to have a data center in Europe. With a lot of these things, you get benefits for free, right? You need two data centers for reliability, but you also then get better performance in Europe because you're sending Europeans to a local data center. So I think that's the big thing. And then, depending on your workload, you really should look at the costs. There are wildly different costs between cloud providers, especially when you start to look at an enterprise workload. Egress fees can be extremely expensive. The fees on ingress, WAFs, or load balancers can be extremely expensive. This is not an example specific to just Amazon, but by way of example, if you had, say, 100,000 new connections a second to an ALB in Amazon, just the load balancing and WAF of that could cost you $20,000, $30,000, $40,000 a month. So as you look to scale up, you can get a fright, basically. And you should really look at the underlying costs: not how much is my VM per month, but how much data do I transfer and will I ultimately get charged for that, how much database storage do I need, the backups I'm going to be keeping, the size of my files. That's where people often get caught out. The other thing this really allows you to do is what we call staging clouds. This can be a big cost saver for an enterprise. When you have an environment that's not vendor neutral, cloud neutral, you typically have to have your test, staging, and pre-production environments in the exact same cloud. Because if you're using their services, you have to be in that cloud to use those services. Well, you don't technically, but you have to be using those cloud services at the very least, right?
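That ALB figure checks out on a napkin. AWS prices Application Load Balancers in "LCU" capacity units; one LCU covers 25 new connections per second, and an LCU costs on the order of $0.008 per hour. Treat the exact rates as illustrative (they vary by region and change over time); the point is the order of magnitude:

```python
# Rough sketch of why load-balancer fees surprise people at scale.
# Rates below are illustrative approximations of AWS ALB LCU pricing,
# counting only the new-connections dimension (WAF fees come on top).

LCU_NEW_CONNS_PER_SEC = 25    # one LCU covers 25 new connections/second
LCU_PRICE_PER_HOUR = 0.008    # illustrative $/LCU-hour
HOURS_PER_MONTH = 730

def alb_monthly_cost(new_conns_per_sec: float) -> float:
    """Approximate monthly ALB cost driven by new connections alone."""
    lcus = new_conns_per_sec / LCU_NEW_CONNS_PER_SEC
    return lcus * LCU_PRICE_PER_HOUR * HOURS_PER_MONTH

# 100,000 new connections/second -> 4,000 LCUs -> ~$23,360/month,
# squarely in the $20-40k range quoted above.
print(round(alb_monthly_cost(100_000)))  # -> 23360
```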
Having a cloud-neutral strategy means that you can deploy staging or test environments onto totally different clouds. Now, of course, you will typically want some sort of pre-production environment in your actual production environment, but there are many companies who will deploy their staging systems and CI/CD infrastructure somewhere like DigitalOcean, where the infrastructure might be two to three times cheaper than AWS or GCP or Azure. Or maybe there are other options, right? There's Linode, there could be VMware, your own infrastructure. You could do it on hardware locally; you could have your local Kubernetes instance that you run everything on, because you're not tied to a single cloud. So staging clouds can be a big cost saving as well when you're planning your architecture around these types of things. The next thing is to ensure your requirements are met. I've mentioned some of these already, but you need to look at databases. Many clients rely on the cloud provider to provide relational database services as well as NoSQL, key-value stores, things like that, and they vary quite a lot; there's a good 20 to 30% variance between providers. Egress data fees can be very different, right? That's the cost of sending data out of your instances, and it costs a lot in cloud if you're sending a lot of data. It can catch you by surprise. Then there's your staff's abilities and training. This is something people don't talk about enough when they talk about multi-cloud, in my opinion, but if your staff are familiar with Azure and GCP, then those are the two clouds I would pick first. Those would be the two low-hanging ones. There's no need to deploy to a cloud where troubleshooting will be hard or where you need to upskill your staff and train them and things like that.
More so, I think, you really want to have a neutral experience where you can use whatever clouds are appropriate, but there's no need to pick the one that no one understands. And then you also need to consider, especially at larger enterprises, regulatory requirements. For example, maybe you're working with government and you need a cloud provider that has an acceptable environment for you to deploy into for federal work. Another big thing we're seeing lately is data sovereignty. American companies want to know that their data is stored on servers that are in America. European countries might want to know that their data is stored in Europe. In Australia, you may need to keep it in Australia, right? So a lot of the time this last-mile deployment style can also be driven by that: if you want to do business with the Australian government as an American company, you may well need servers that are deployed in Australia. So consider what the regulatory requirements are if you plan on doing government business in particular, but you may also have regulatory requirements in your own business that only certain clouds satisfy, like PCI DSS, or how easy it will be for you to get SOC 2 compliance on a cloud. All of those types of considerations are important. It's another good argument for not being tied into a cloud: if you ultimately do need to move for some outside consideration like that, it makes the whole process much easier, of course. That takes us to looking at some of the commercials and the differences, right? Contracts and commercials between the different clouds. It's very common for large consumers to get better deals with different clouds by signing contracts, but also the various types of support you get per cloud are very different. SLAs might be different.
The fees might be different. You can see that they can scale up a lot, right? This particular cloud example is $15,000 a month for 15-minute support. Now, if you've got a 100% SLA guarantee or, you know, five nines, you need 15-minute support, so you need to account for that. What are the liability and performance guarantees they give, and how does that affect the SLAs that you ultimately have to cover? Because as you saw on one of my very first slides, only 27% of respondents said they have no downtime from public cloud, and those are enterprises, right, with big workloads. So you really need to consider that; I think it's quite important. The final point in my main tips for choosing cloud providers is to consider two clouds versus one cloud plus X. What I mean by this is: do you want to run your workload in two different public clouds, what you think of when you think GCP, Azure, AWS, DigitalOcean, et cetera? Or one cloud plus X, hybrid environments? We see this a lot, and it went through a bit of a wave, or should I say a valley. In the early days of growing cloud consumption, hybrid was very common, hybrid meaning that you had physical data centers or on-premises workloads and you also had public cloud workloads. What I mean by the valley is that that started to shrink, and now we're seeing a big uptick again in cloud plus X, and often it's more like cloud plus A, B, C, D, E, F, G, much more than just X, because people are putting workloads on edge deployments, or they need custom hardware for certain types of workloads where hardware is much better, or they need GPU workloads, whatever it might be.
But like I said, I think multi-cloud and the story around multi-cloud is a good one, because some 70% of people will just be in multiple clouds anyway, but don't necessarily restrict yourself to just clouds when you're looking at deployment types. Data centers, app-based services, places where you can deploy apps as a service and they scale them for you, infrastructure-as-code offerings, serverless things: there are many different ways of deploying an application in multiple locations and handling that kind of thing. So the next topic is how to deploy to multiple clouds and what to think about. And the first thing, again, is to consider your architecture. VMs are very different from containers, Kubernetes versus monolithic environments, right? When you're deploying VMs to multiple locations, they obviously need a lot more consideration than containers. Well, sometimes containers can actually be much more work, I suppose, but it really depends on what you're deploying into what environment. A lot of the time, we got told this kind of story long ago that hardware was dead and everything was going to be virtual machines. Then the next part of the story was that virtual machines were dead and everything was going to be cloud. And now the next part of the story is that cloud is dead and everything's going to be containers, and plain container orchestration is dead and everything's going to be Kubernetes. And the reality, across all of our clients, is that what we see is a spectrum. They've got hardware workloads, VMs, mainframes, containers, Kubernetes workloads, serverless, cloud instances. And I think that's here to stay for a long time. And like I said, we are seeing more and more hardware workloads and more and more edge workloads. So it's not just that everyone is slowly getting to cloud.
I think the real key is: what does your infrastructure require? How does it get deployed? And how do you ultimately deploy to it? The next point is to automate absolutely everything you can around deployments. For us, for example, we have a CI/CD environment, so we continuously deploy, and it handles our entire deployment chain for all of our applications and systems and servers and sites around the globe, all automated from our repositories. That's not always the case for every business. But as best you can, look for tools that allow you to automate the deployment process into environments, especially where you can define those environments through config files and code and APIs, where you can integrate them into your test environments and continuous integration environments and deploy automatically into these different cloud providers. Things like Terraform, for example, make it very easy to take complex systems like full-stack VMs, deploy them into two different cloud providers, and configure them to use the local deployments and services. So it's not just that containers are the answer, for example. The automation process is very important. And knowing that everything has to be automated really steers your team towards building and designing your application infrastructure, and even your developers towards developing it, in such a way that it allows automation and easy testing and easy deployment. Automation is also incredibly important when you scale a business. Ours, for example: six or seven years ago we maybe had 10 servers running our business around the world, and now we have hundreds. When you need to scale up rapidly, it really helps a lot.
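One way to picture that provider-neutral, config-driven approach: keep a single neutral spec for each app and translate it into per-cloud parameters that a tool like Terraform could consume as variables. Everything here (the spec shape, the instance-type mapping) is a hypothetical sketch, not a real deployment pipeline:

```python
# Sketch: one provider-neutral app spec, rendered into per-cloud deploy
# variables. The mapping from (cpu, memory) to instance types uses real
# AWS/Azure type names, but the overall structure is illustrative.

NEUTRAL_SPEC = {
    "app": "api",
    "cpu_cores": 2,
    "memory_gb": 8,
    "replicas": 4,
}

# Per-provider translation of the neutral shape into a local instance type.
INSTANCE_MAP = {
    "aws":   {(2, 8): "m5.large"},          # 2 vCPU / 8 GiB
    "azure": {(2, 8): "Standard_D2s_v3"},   # 2 vCPU / 8 GiB
}

def render(spec: dict, provider: str) -> dict:
    """Translate a neutral spec into provider-specific deploy variables."""
    shape = (spec["cpu_cores"], spec["memory_gb"])
    return {
        "app": spec["app"],
        "instance_type": INSTANCE_MAP[provider][shape],
        "count": spec["replicas"],
    }

print(render(NEUTRAL_SPEC, "aws"))
# -> {'app': 'api', 'instance_type': 'm5.large', 'count': 4}
```

The point of the indirection is that nothing upstream of `render` knows or cares which cloud is the target, so adding a provider is a mapping change, not an application change.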
The next point is to utilize auto-scaling functions where you can. One of the problems with costs around high availability and multi-cloud strategies is that if you deploy the same environment in two places, generally speaking, you will be paying twice for it. And it gets much worse when you deploy it into 50 places. We've got clients, for example, that have somewhere between 12,000 and 14,000 applications running in 60 countries. They've got 450 ADCs of ours, and behind each ADC they have on average between 10 and 20 servers. So do the math, right? It becomes a real nightmare in terms of cost and scaling and management. Auto scaling is something that exists in all orchestration platforms and in most cloud platforms, and if it doesn't exist, it can be scripted and tooled quite easily. What it's really saying is, well, let's take a simple example. You've got a website, and how busy your website is determines how many servers you need. Now, that website can no longer afford to fail, or you can't afford to have an outage, so you decide to deploy it into two different locations. But in this example you send all of your traffic to your New York data center, and only if it's down do you send it to the New Jersey one. There's no need to have the same number of servers running in New Jersey as you do in New York while the New York system is online. And this is a key consideration of multi-cloud deployments, even if you've got an active environment like we do, where all of your servers could receive traffic at any point in time. Our workload right now is largely in the U.S. But two, three hours ago, it was split between the U.S. and Europe. Six or seven hours ago, it was largely Europe. So your systems can scale up and scale down as needed during those times. A simple example is something like Black Friday or Cyber Monday. Do you really need to run as many servers as you need on Cyber Monday all year?
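The New York / New Jersey example can be sketched as a sizing function: the active site scales with traffic, while the standby keeps only a small warm floor it can scale up from on failover. The per-server capacity and floor here are illustrative numbers:

```python
# Sketch of standby sizing: active site scales to load, standby site
# runs only a warm floor. Capacity and threshold values are illustrative.
import math

def desired_servers(requests_per_sec: float,
                    capacity_per_server: float = 500,
                    warm_floor: int = 2,
                    active: bool = True) -> int:
    """How many servers a site should run right now."""
    if not active:
        return warm_floor  # standby (New Jersey): pay for a warm floor only
    needed = math.ceil(requests_per_sec / capacity_per_server)
    return max(needed, warm_floor)

print(desired_servers(12_000, active=True))   # -> 24  (New York, serving)
print(desired_servers(12_000, active=False))  # -> 2   (New Jersey, standby)
```

The same function, evaluated continuously, is what lets capacity follow the sun between the US and Europe, or balloon for Cyber Monday and shrink afterwards.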
Obviously not. But clouds support these types of functions, and I think that's important. It also allows you to make sure that, should, say, AWS fail, your GCP environment is ready to scale up to potentially double the size it's ever been, to handle the incoming users that can no longer get to AWS. So it's really a high-availability function as well, and very useful. And then it ultimately saves you a huge amount of money, because most of the time your environment hasn't failed, of course. The next learning and advice of ours is to route traffic with GSLB. GSLB is an acronym; it stands for Global Server Load Balancer. It's quite a simple concept, really. What it comes down to is having an intelligent DNS server or service. There are many companies that provide it; most clouds provide it; SNAP provides it, for example. It will route people to different locations around the world, or data centers, based on some information about them or about the place they're trying to go. So DNS normally is the process of saying, okay, I'd like to go to www.snap.net, and you leave your house or office and you go directly to the IP address that snap.net resolves to, in a data center somewhere. Once you arrive in that data center, it's the first opportunity that we, SNAP, have to change your direction and send you somewhere else. So if you've arrived in US West, it's too late now to send you to US East, right? Because the first time we see your traffic is when you arrive. It's like trying to manage a traffic jam at the toll gate. It's very difficult at the toll gate to move people to a different road or interstate or highway, or at the front of the traffic jam. If there's been an accident on the road, by the time people get there, it's very hard to move them. What GSLB is about is changing where they're going when they leave the house. It's telling their GPS to send them on a different road, right?
And it lets you respond to DNS queries for a site, or many sites, based on the health of the destination. So you can ask: is the destination online? How busy is it? Is it saying that I should shift traffic elsewhere? Is it dead, right? And also based on the source of the user: where are they from? What's their network range? What country are they in? What city are they in? So you can say things like, okay, someone has come in and is trying to get to api.snap.net. Where is that person from? San Francisco. So we want to send them to our West Coast US data center. Then let's also ask, is the West Coast US data center online? If so, cool, we're good. If not, send them to the next closest data center; New York, maybe. Or we might say, well, this user is not from America, where do we want to send them? Things like that, right? So GSLB is a very powerful system for being able to move between multiple clouds. It's a great enabler for doing that. And it allows you to do things like test deployments very easily, and to weight traffic very easily. So you might be saying, okay, how do I get to this multi-cloud dream that Dave keeps going on about? I'm going to set up my infrastructure in Azure as well, so we're not just in AWS anymore; we're now launching in Azure. How do I send some traffic there? Testing is obviously testing, but when you start to go to production, what do you do? GSLB makes it very easy to say, cool, I'd like to send 5% of my users there. Or just users from our office, right? You could put in your office's IP address range and say, send all my office people, without even telling them, to the new data center and see if they complain, see if I get any alerts or things like that. So it really makes the function of interacting with multiple clouds much easier. The next thing to talk about is how to secure multiple clouds. And this is hard.
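The decision logic described above (health of the destination, location of the source, plus an office-range canary) fits in a few lines. This is a minimal sketch, not a real GSLB implementation; the site names, networks, and health table are all hypothetical:

```python
# Sketch of GSLB-style decision logic: answer a DNS query based on site
# health and client origin, with an office-IP canary for new deployments.
import ipaddress

SITES = {
    "us-west": {"healthy": True},
    "us-east": {"healthy": True},
    "eu-west": {"healthy": True},
}
# Closest-first preference order per client region (illustrative).
PREFERENCE = {"us": ["us-west", "us-east", "eu-west"],
              "eu": ["eu-west", "us-east", "us-west"]}
OFFICE_NET = ipaddress.ip_network("203.0.113.0/24")  # hypothetical office range
CANARY_SITE = "eu-west"                              # the new deployment

def resolve(client_ip: str, client_region: str) -> str:
    """Pick a data center for this client, honoring canary and health."""
    if ipaddress.ip_address(client_ip) in OFFICE_NET:
        return CANARY_SITE           # staff silently routed to the new site
    for site in PREFERENCE[client_region]:
        if SITES[site]["healthy"]:
            return site              # closest healthy site wins
    raise RuntimeError("no healthy sites")

print(resolve("198.51.100.7", "us"))  # -> us-west
SITES["us-west"]["healthy"] = False   # simulate an outage
print(resolve("198.51.100.7", "us"))  # -> us-east (automatic failover)
print(resolve("203.0.113.9", "us"))   # -> eu-west (office canary)
```

A production GSLB adds health probes, TTL management, and weighted percentages, but the core is exactly this: decide the answer before the user leaves the house.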
It's hard when your security is advanced. It's easy when your security is basic, but all security is having to become much more advanced. So why do I say it's so hard? Because security solutions now are no longer about layer 4 traffic control. Layer 4 would be the IP layer, right? Is this IP address allowed to go to that IP address on that port, like the classic firewalls? Much more now, they're about layer 7, which is: what is the user actually asking for? And then above that would be all of the security techniques that companies like us are employing now, which would be machine learning, anomaly detection, et cetera, for traffic patterns. As you scale to multiple clouds, it becomes extremely difficult to do that kind of thing. Because in the example I used of the very big client, they have 450 ADCs around the world. It's almost useless for those to operate as individual islands, each looking at just the data that it sees. Each one is only seeing one 450th of the data, right? And when you move to multiple clouds, this can be a big problem, especially as you scale up. So the first point I have is to look for cloud-neutral solutions for layer 7, specifically layer 7; you'll see I suggest native cloud services for layer 4. What I mean there is the application layer: things like web application firewalls, intrusion detection systems, SIEMs, all that kind of stuff should be able to run in any cloud, container, edge device, or hardware solution, anything. It's critical, if you want to actively manage the threat profile and risk of your network, that you are running that solution in every location in your network. It's not much good having a very intelligent solution in one cloud when the other one doesn't, or when the two clouds cannot communicate with each other about threats, or when no one knew that half of your office was sending data to ransomware sites for the last week anyway.
You really need something that's neutral, able to run anywhere. It should have full support for cloud-native environments and monolithic environments. This is such a problem that I see in the industry: people either go to the more tried-and-trusted vendors, who are more likely to run on monolithic environments, or to these new shiny solutions that only really work well in containers, for example, or in Kubernetes. And the reality of corporate workloads is that they're all types of things. And then I think it's very important to support your CI/CD environment, your disaster recovery and test environments, et cetera. Like I said, it should be able to run in all of those locations. That brings me to, I think, the biggest learning we've had building a large-scale multi-cloud organization: centralized monitoring. The ability to track, visualize, and verify everything in one location. Think of the nightmare, when you have a large network like we do, of someone saying, some of our users in South America, or even worse, we've got a user in South America who says they're getting a 500 error when they try to log in. That's kind of odd. And then, lo and behold, two more tickets come in from two more users in South America having errors. But you look through the system and you notice that there are, say, 300 South American users online that seem to be working fine. Where are those users even going? You don't have servers in South America; are they going to the west coast of the US or the east coast? And then you begin grepping through logs, and it leads to things like what we call tribal knowledge, the idea that there are people in your team that have knowledge that other people in the team do not have. And staff retention is crazy low in DevOps and ITOps, all the way up to CIO level, in our industry.
In the valley, for example, among our client types, staff retention is about five and a half months. So it's very difficult to have people with scripted tools that are not accessible across the business. Central monitoring is key between all these environments, which really does lend itself to a cloud-neutral solution, because ultimately you're going to be deploying to multiple environments and locations, so you need something that can monitor across all of it. That's a key learning of ours. And then I believe you should use the local cloud services for layer four solutions. There I'm talking about firewalling and things like that: which security groups can access what, which IP addresses are allowed in on which ports, which ports are never allowed in no matter what, et cetera. Your standard layer four firewalling, A, is really a commodity and very cheap to do in public cloud, and B, is very easy to do with security groups and services like that. It's the things that require constant maintenance where you have problems between clouds. What I mean by that maintenance is, the worst is when you have an SSL certificate expiring and you have to update it: in all the places this thing is used, which cloud do you ultimately forget to put it in, so that at 12:01 tonight there's an expired certificate sitting where you didn't update it? The beauty of layer four is that typically you are basically just saying: deny outbound access from my systems unless it's a reply or an update, allow in port 80 and port 443 (HTTP and HTTPS), maybe allow in some other port from your office IP range, and it's pretty static and you kind of leave it at that. So it doesn't become this nightmare to manage.
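The certificate-expiry problem above is one of the easiest things to automate away once you have a central inventory. A sketch, with an entirely hypothetical inventory format; in practice the entries would be collected automatically from every cloud and data center rather than maintained by hand:

```python
from datetime import datetime, timedelta, timezone

def expiring_soon(inventory, days=14, now=None):
    """Return certificates, across all locations, that expire within `days`."""
    now = now or datetime.now(timezone.utc)
    cutoff = now + timedelta(days=days)
    return [c for c in inventory if c["not_after"] <= cutoff]

# Hypothetical inventory: the same certificate deployed in two clouds,
# which is exactly where the "which one did we forget?" problem lives.
inventory = [
    {"cn": "www.example.com", "location": "aws-us-east-1",
     "not_after": datetime(2025, 1, 10, tzinfo=timezone.utc)},
    {"cn": "www.example.com", "location": "azure-westeurope",
     "not_after": datetime(2025, 6, 1, tzinfo=timezone.utc)},
]
```

With `now` pinned to 2025-01-01, only the `aws-us-east-1` copy shows up as expiring, so the alert names the exact location that would otherwise be forgotten at 12:01 tonight.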
Layer seven is where that maintenance really lives. For example, take that large client again, with 450 devices around the world: they decide that as of the beginning of next month, they're not supporting TLS 1.2 anymore. It can be a huge process to disable TLS 1.2 across an entire organization. So that's the layer seven consideration, where I say you should use cloud-neutral solutions, and then for layer four, when you're talking about pure IP, I really recommend using the native cloud solution. The next thing I want to do is give a small preview of how SNAP does it. The first thing is our live network. This is from yesterday, but these are our live systems and deployments for our organization. There are about 500-something online servers or containers managed directly by us in order to deliver the services that we do. So you can imagine the complexity there. You can see in South Africa, for example, at the bottom middle of the map, three different locations with three different providers, only one of which is a cloud provider. Across Europe, a lot of those are data centers; in the US, a lot of data centers and a lot of public cloud as well; and you can see South America, Australia, et cetera. There are many, many organizations we work with that wind up looking a lot like this and needing to cope with these types of challenges: how you manage an organization at this scale. So our key takeaways, and I've mentioned a lot of this already in the presentation, are that we use automation for absolutely everything. There are no manual deployments of any websites, applications, APIs, services, anything we provide at all; it all goes through the CI/CD environment and is then deployed automatically. And we avoid cloud-specific tooling completely: we don't have a single script that interfaces directly with any one cloud provider's API.
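One common way to keep deployment scripts free of any one cloud provider's API, as described above, is to put a thin provider-neutral interface between the pipeline and the clouds. This is a sketch under that assumption, with hypothetical names; SNAP's actual tooling is not shown in the source:

```python
from abc import ABC, abstractmethod

class Deployer(ABC):
    """Hypothetical provider-neutral interface: the CI/CD pipeline only ever
    talks to this, never to a specific cloud provider's API directly."""
    @abstractmethod
    def deploy(self, artifact: str, location: str) -> str: ...

class RecordingDeployer(Deployer):
    """Toy backend that just records what would be deployed where.
    A real backend per provider would live behind the same interface."""
    def __init__(self):
        self.log = []

    def deploy(self, artifact: str, location: str) -> str:
        self.log.append((artifact, location))
        return f"deployed {artifact} to {location}"

def roll_out(deployer: Deployer, artifact: str, locations):
    # The rollout logic is identical no matter which backend is plugged in.
    return [deployer.deploy(artifact, loc) for loc in locations]
```

Swapping providers then means writing one new `Deployer` subclass, while every script built on `roll_out` stays untouched.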
We do in our product, obviously, because we provide that service to clients, but our deployment process is not tied to any specific cloud. So it is possible to do that relatively easily with the right design. And we secure everything with the same product, and then we use that product for monitoring, visibility, et cetera, so we have a central dashboard where we monitor basically the entire infrastructure. We think the most important lesson we have, the most important advice to give here, is really that we do not develop anything that has a specific cloud provider's or platform provider's name in it, that we avoid their direct APIs wherever possible, and that we automate absolutely everything that we can. Thank you all for watching. We'd be happy to assist with any questions, so please reach out to us at snap.net should you need anything. Have a good day.