Okay, good afternoon, everybody. Thank you for coming to today's session. Hopefully, three days into the OpenStack Summit, everybody now knows that Fujitsu has an OpenStack public cloud called K5, or Cloud Service K5 to be precise. Did you know that we have eight regions around the globe? Sixteen different availability zones, 110,000 cores and growing. And that's just the public cloud side of things. In a few minutes you'll see the architecture of our public cloud, what we can do, and how we differentiate ourselves from other players in the market.

Here you can see where we are today in the world. In Japan we have three regions. In Europe we're in Germany, the UK, Finland and Spain. And in the US, coming online at the end of this month or early next month, we'll have another region.

There's a lot of concern, a lot happening in the industry, and a lot of people running away from OpenStack. This slide is just to show our credentials: Fujitsu has been in the open source world for a long time, so this is nothing new to us. We moved onto the OpenStack platform in 2015, but we've been contributing to open source for much longer; hopefully you'll recognize a lot of the different logos at the bottom of the screen. This is an example of all the lines of code and contributions we're giving to OpenStack, and you can see that growing as our platform grows. As we enhance OpenStack, we also push those enhancements upstream. We're bought into OpenStack and into the open source ecosystem.

And why did we choose OpenStack? For the same reasons a lot of you probably chose it, and the same reasons a lot of our customers want to use it. I think we can all agree it is now the de facto private cloud platform. It gives you interoperability: we saw the Interop Challenge last year in Barcelona, where 17 different distros of OpenStack all worked together running the same Heat stacks, and we've seen it again this year. It also minimizes vendor lock-in. Some people say it gets rid of vendor lock-in; I'd say it minimizes it, because even when different vendors are rolling out OpenStack, they'll each add different features and capabilities that can effectively tie you to their flavor of OpenStack, so we try to reduce that. It's community-powered: you can see from this conference, with over 10,000 people attending, what a powerful statement that is for the software, and it helps us all. At the moment we're focusing on Ironic, Monasca and Neutron, but while we target those areas the rest of the community targets and adds to the other projects we haven't covered. Some people now call it multi-cloud, as the trendy name, but I think we can all agree the industry isn't going to settle on one solution that fits all. By running OpenStack in the public cloud and OpenStack in the private cloud, you have the same APIs and the same endpoints to target, which makes it easier to work with your workloads and your CI/CD pipelines.

Okay, just a quick slide about me. Who am I, and what have I done for the last 20 years? I've been working in data centers and data center automation; I'm a cloud architect.
It's all been data centers and data center automation. I'm in a privileged position in that I've worked with a lot of the big enterprises, and through them I've also met a lot of different customers across the industry. They're all pretty much experiencing the same challenges, but they're all operating at different speeds, so you'll see different maturity levels in your customers when you're trying to move them into cloud.

Hopefully this is the agenda everyone is here for. I'm going to give you a little bit of background on Fujitsu and the rest of our cloud portfolio. Then we'll go into OpenStack, which is obviously famous as a private cloud platform; we're using OpenStack to deliver a public cloud offering, and that comes with various constraints. Anyone who's tried to roll out OpenStack across multiple data centers will be aware of those constraints, and I'll cover some of the building blocks we've had to enhance in OpenStack, and use in conjunction with OpenStack, to let us roll out a public cloud on an OpenStack base. And finally I'll talk a little bit about enterprise migration. It's just a simple one-slider, but in a lot of the architectural discussions I have, customers come from every different starting point, and a simple question they ask is: how do I actually migrate a thousand servers from one data center into the cloud? So we'll have a quick overview of that.

First of all, I need to launch the demo. It's a very quick demo of how to launch all the components I'm talking about today. We've also enhanced the APIs in OpenStack, and enhanced Heat, so that you can deploy them using Heat stacks as well. It takes about 20 minutes to build, so I'm going to kick it off now. This is our interface here. As I said, it's a multi-region cloud, so you can select all your different regions; I'm enabled on the regions you can see there: Germany, Spain, Finland, Japan East and West, and the UK, with the US coming online soon. All I'm going to do is drop a Heat stack onto this. If everyone knows Heat, it's the orchestration component within OpenStack, infrastructure as code; I'm sure we all know that. Live demos are never a good thing to do, but this has worked eight times for me, so let's go ninth time lucky. I'll just let that run in the background. That's building up a highly available, traditional LAMP-stack architecture across two different availability zones, effectively two different data centers, using the components you're about to see next. Okay, we'll let that run in the background and I'll jump back to my presentation.

So, Fujitsu's digital business platform. If anybody searches for Fujitsu and K5, you'll also see a lot of information come up around MetaArc, so just to clarify where Fujitsu Cloud Service K5 sits within the MetaArc platform: MetaArc is what we call our digital business platform, and what I mean by that is that it's effectively the tooling that we use.
In layman's terms, it's the branding we use to encompass all our tooling, all our professional services, and all the different software and components we use to help a customer transform or digitize their business from where they are today into a more modern world. You'll see we talk a lot about fast IT and robust IT. Your fast IT is your web front-ends, your systems of engagement; those are the low-hanging fruit that are quite easy to migrate into cloud because they already have a public-facing presence. Your robust IT tends to be your big back-end databases and your core business systems. We have lots of different tool sets to help customers with those and to enable them in today's market. We address all the key areas, of course: we have an IoT platform and big data platforms integrated with AI. But the big thing I want you to see is in the bottom left-hand corner: everything you see there, the whole MetaArc platform, is based on Fujitsu Cloud Service K5. All the tooling we develop today, and all of the businesses we run today within Fujitsu as an enterprise, and we're a very big enterprise, around 150,000 employees, all run on Fujitsu Cloud Service K5. So we're drinking our own champagne: we're using our own OpenStack platform to move our enterprise onto it, so we're aware of the pain points that enterprises have today when they're trying to migrate to cloud.

To dig a little deeper into what Fujitsu Cloud Service K5 looks like: today it has an OpenStack back-end. Everyone here is hopefully familiar with OpenStack and infrastructure as a service, but we also have a platform as a service built on top, and again we've stuck with the open source theme, so we're using Cloud Foundry as our PaaS offering. We also have some other components attached for API management and the like; we use Apigee to help customers with that.

Now, this is a very important slide, and a big differentiator when people ask how we're different from our competitors. With Fujitsu Cloud Service K5 you can have the traditional public cloud model, which you see on the left-hand side of the screen, where you simply consume a shared virtual environment. You can also have a dedicated hypervisor, so if you're worried about noisy neighbors or want to guarantee your compute performance, you can select what we call Type 2. The third option, if you want a full region of your own, is that we will host a public region for you on Fujitsu Cloud Service K5 and manage it for you, with no users on it other than your business or whoever you give access to; it's still a public cloud, but it's your own public cloud, run and managed by Fujitsu. And finally, data sovereignty is a big issue now as you go around the globe: various governments are saying that data must stay in their country and not go anywhere else.
And the last one there, Type 4, we call dedicated on-premise: we give you a public cloud offering and we land it in your data center. That way you know exactly where the data is and where your users are bringing data to and from, all visible within your data center but managed by Fujitsu. That's a big differentiator for a public cloud offering: the ways you can have it and the ways you can run it. And of course we do also offer private cloud; you can see it on the right-hand side of the screen. Some people refer to that as Type 5, and that's where we can give you a private cloud if you just want the hardware stack with your own distro on it; we can give you a Red Hat distro or a SUSE distro to run on the hardware. You have that option with Fujitsu as well.

Okay, so the next piece, the piece that everyone is hopefully here for: architecting the public cloud. How many people here have installed OpenStack across multiple data centers? Was it easy? The biggest challenge we had, and the biggest challenge anybody will have working with OpenStack, is when you're trying to split your control plane. We have availability zones for compute; that's easy enough. We have availability zones for storage and for Cinder; again, that's relatively straightforward. Swift already has built-in replication between data centers. However, the control plane tends to sit across your compute, so you've got your Neutron services and the rest of your control plane services sitting on top of those availability zones.

I'll just quickly show you. This is what a traditional OpenStack looks like: we have our region, we have our availability zones, and in each of those availability zones you have your servers and your storage. However, when you want to go into a public cloud scenario with separate regions and separate availability zones, and there's a big distance between those availability zones, you have a lot of problems getting the control plane to stretch across them. What we're looking for is effectively a full installation of OpenStack within each availability zone. But what's missing in this picture? It's like having lots of separate public clouds: where's my central login, where's my central control? That component is missing; otherwise it would just be a set of separate OpenStack deployments. That's what we're trying to solve here.

And this is how we solved it. It's obviously heavily Neutron-focused. We needed to add the concept of availability zones into Neutron, and to do that we created what we call the manager of managers, or multi-availability zone manager, MAZ, within K5. It sits on top of the OpenStack deployments and synchronizes all of your OpenStack resources, so that you can come to one console, decide which availability zone you want to deploy into, and it will handle the networking for you. The big challenge, obviously, is that when you bring in a manager of managers on top of OpenStack while Neutron still sits within each availability zone, how do you do your load balancing? How do you connect between the availability zones?
So it brings a few more challenges, and we're going to cover those components now. First, the multi-AZ manager. Again, this was developed by Fujitsu. Okay, that's failed; sorry, I've crashed out of my system. Okay, so this is our multi-AZ manager, and effectively this is what we did: we created another database and another layer on top of each OpenStack deployment, which we call the multi-AZ manager, and we use it to synchronize the resources in each of your OpenStack deployments. Within each region we synchronize all of those OpenStack deployments, so that when users come to the platform, all they have is a drop-down box where they pick the availability zone they want to deploy to. You create your resources in that availability zone, and if you then want to deploy into the second availability zone, it's just the same drop-down box; the manager handles all of that, and users are not logging out and into another OpenStack environment.

At the top here you can see that traditionally you would have fully separate deployments: you'd create all your resources, your security groups and so on, in one availability zone, and if that zone had any issues you'd have to repeat the same effort in your other availability zone, so you've got duplication of effort. With the multi-AZ manager we synchronize your security groups across all the availability zones, so you only have to create them once. And you'll also see this thing called interAZNet in the diagram. We had to create a new component within OpenStack, and enhance OpenStack and its APIs, to give you that connectivity. By building that model, obviously with DNS access on top of it, you can lose a complete availability zone and your control plane, from the user's point of view, will still be fully operational: they can deploy to the other availability zones and to other regions.

So you saw there the inter-AZ network connector. This is another component we had to build. Because we moved up a tier with our multi-AZ manager and Neutron now lives in each individual availability zone, we had to develop a way of connecting those availability zones so customers can transfer data quickly between them. What we created is a new component that gives you layer 3 routing between your subnets: you create a network connector using Neutron, you create an endpoint in each availability zone, and then you connect them; underneath, instead of going over the internet, it uses a private connection between your two availability zones for data communication, layer 3 routed. For the techies in the room who want to know what it looks like at a very high level, these are the API calls you can use, and I'll show a rough sketch of driving them from code in a moment. Again, we've enhanced OpenStack: these are all in addition to the existing API calls you have in OpenStack, and we've added these components with the necessary calls. We've also enhanced Heat and the Heat templates, so you can incorporate all of this into your templates when you want to do infrastructure as code. And this is just the basics; we've made it all seamless to the customer, to the end user.
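As a rough illustration only: the base URL, the "network_connectors" and "network_connector_endpoints" resource paths, and the "connect" action below are placeholders inferred from the description above, not the documented K5 API. This is plain python-requests against a Neutron-style endpoint with a Keystone token, just to give a feel for the shape of the calls.

```python
# Illustrative sketch only: URLs, resource paths, AZ names and port IDs are
# placeholders inferred from the talk, NOT the documented K5 API.
import requests

NEUTRON_URL = "https://networking.example-region.example.com/v2.0"  # placeholder
TOKEN = "KEYSTONE_TOKEN"  # obtained from Keystone separately
HEADERS = {"X-Auth-Token": TOKEN, "Content-Type": "application/json"}

# 1. Create the network connector itself: the layer 3 routed link between AZs.
resp = requests.post(
    f"{NEUTRON_URL}/network_connectors",
    json={"network_connector": {"name": "demo-connector"}},
    headers=HEADERS,
)
resp.raise_for_status()
connector_id = resp.json()["network_connector"]["id"]

# 2. Create one endpoint per availability zone, then attach each endpoint to a
#    port on that AZ's subnet (the port IDs come from ordinary Neutron calls).
for az, port_id in [("az1", "PORT_ID_IN_AZ1"), ("az2", "PORT_ID_IN_AZ2")]:
    resp = requests.post(
        f"{NEUTRON_URL}/network_connector_endpoints",
        json={"network_connector_endpoint": {
            "name": f"demo-endpoint-{az}",
            "network_connector_id": connector_id,
            "availability_zone": az,
        }},
        headers=HEADERS,
    )
    resp.raise_for_status()
    endpoint_id = resp.json()["network_connector_endpoint"]["id"]

    # Connect the endpoint to the port; traffic between the two subnets is then
    # layer 3 routed over the private inter-AZ link rather than the internet.
    requests.put(
        f"{NEUTRON_URL}/network_connector_endpoints/{endpoint_id}/connect",
        json={"interface": {"port_id": port_id}},
        headers=HEADERS,
    ).raise_for_status()
```

The same wiring can equally be declared as resources in a Heat template, as mentioned above.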
So when an enterprise user wants to connect two data centers, it's just a few API calls like these, and in the background we stand up the physical network ports and everything else needed to make that connectivity active. As we were doing this, we also addressed one of the weaknesses within Neutron, which was monitoring. We've enhanced the monitoring so that once we create that network connection, it stays healthy: if there is an issue with the connection, we can re-route it in the background using our monitoring, before the customer ever notices a problem.

Next, the K5 load balancers. The challenge here was that Neutron now sits down below and we have the new MAZ layer on top, so how do we load balance between the availability zones? We created a load balancer manager, and it looks like this. We use VMs: when you stand up the load balancers and you're working across the two availability zones, it creates two load balancer VMs across those availability zones to make sure you have a highly available connection. We did look at LBaaS v1 and LBaaS v2, but they didn't match our requirements because we're working on top of OpenStack, so we built on the APIs of LBaaS v1 and v2 and the Neutron APIs, and created a new service that sits on top of them. That's what it looks like at an abstract level. When we spin it up, it is HAProxy, but HAProxy inside a VM rather than running as a service. And everything that follows from that, your monitoring and your logging, needed a lot of enhancement for auto scaling as well. Anyone who has used Ceilometer with Heat stacks knows that was quite immature when we launched this platform, so Fujitsu did a lot of enhancements there and integrated those capabilities with the load balancer, so you can write auto-scaling Heat stacks that ramp up on demand and shrink back again.

Again, we did look at LBaaS v1 and v2; they just weren't mature enough at the time we were testing them. The advantage of the VM-based model, as anyone who has had to upgrade OpenStack will appreciate, is that it made upgrades very easy for us: because the load balancers run as an HA pair, we can take one out of the load balancing pool, upgrade it, put it back in, and then take down the other one. And when we had to do OpenStack upgrades, because the load balancers are just VMs, we could simply move them onto another compute node and upgrade the node underneath.

Okay, and along with this comes enhanced logging. I don't know if anybody here has left security until the last minute and then gone to the security team to get an OpenStack deployment signed off, but the first thing they say is: show me the logging. How can I see my security group logging? How can I see what you're doing within the firewalls?
By default, OpenStack doesn't log that traffic. We all know it uses iptables underneath, and you can turn on iptables logging, but that has a good chance of killing your server very quickly by filling up your logs. We had the extra challenge that we'd effectively pulled the load balancer service up above Neutron, so we had to correlate those logs too. So we improved the logging system. As you can see, it all still comes from iptables; there's no magic there. But we centralize it: we take all the information from iptables, put it into a separate storage system, and then align the iptables rules sitting outside Neutron, for the load balancers, with the ones coming from inside Neutron, from your security groups. We do that for quick fault finding and for troubleshooting any networking issues on the platform. And we've published a lot of that information upstream. A lot of the work we do isn't necessarily relevant to the OpenStack project, but anything we feel is relevant, or any features we'd like to see in OpenStack, we push upstream to the community. If they get adopted, great; if they don't, we look at how OpenStack is solving that problem and see if we can bring our platform in line with it. There are a few links there you can follow to see what we're doing.

Now, back to the demo. This is what we deployed; lovely colors. If anybody recognizes it, yes, it's the wonderful WordPress, and lots of people say, oh no, I can't believe you're showing us WordPress. The reason I'm showing you WordPress is that, although I've called this an enterprise cloud, this is a typical stack an enterprise would build, and everything you see can be deployed automatically with infrastructure as code. At the start of this demo I dropped the Heat stack, and what it gives us is that network connector between our two availability zones: a high-speed private link, layer 3 routed. It's also got the load balancers, including a public-facing load balancer on top of the solution, and database synchronization happening underneath, so I can lose a full AZ and that WordPress site is still up and running. Hopefully it has built; let me just check quickly. Okay, create complete. Woo-hoo. So if we go down to the load balancers, let's grab this. Our load balancers work off DNS. Oh, sorry, I'm still in presentation mode; my apologies. Okay, a few technical difficulties there, sorry.

So, just showing you here, we've dropped back to the infrastructure-as-a-service GUI. Again, we don't roll with Horizon; Horizon didn't have the multi-region capability we needed at the time, and didn't really have the scalability we needed either, so we've rolled our own portals. We effectively have a central authentication portal where you add all your users, and then you authorize them on each of the regions.
So they'll have single sign-on access through to the IaaS portal, and then they can be authorized within each of those regions. Here I am: in the top right-hand corner you can see I'm in the UK region, and this is the project I'm in, in OpenStack terms, the Boston Summit project. I've just got the LAMP stack, and hopefully we'll see a WordPress installation up here. There you go: it has successfully deployed a highly available WordPress system. And to the user it's very, very simple. All they'll see on their subnet is a port that's connected, the network connector port that comes in and attaches to it. For the enterprise market it sounds simple, but it's all fully automated, easy to do from K5, and you can do it with infrastructure as code using Heat templates; I'll show a rough code sketch of that in a moment. The networking is easy too: all you'll see is a network connection there, and that connection goes out of OpenStack into our network connector component and is layer 3 routed across to the other side. Okay, we'll drop back to the presentation now, hopefully.

This next piece moves away from the global cloud architecture. It comes from the many meetings and customer workshops I've been in, and the big challenge you have in the industry nowadays: everyone goes to cloud having seen the POCs and the demos on greenfield sites, and that's easy. I put brownfield there for the real question: okay, how do we actually do this? How do we really make this happen? You'll see all the challenges there of moving to cloud in any enterprise today. They have one person who's expected to do it all and know it all, or they take an old Windows guy and say, okay, now you're our cloud guy, and he doesn't know anything about Linux or about driving APIs. All these typical things; unfortunately in certain enterprises there's a lack of budget and funding to train people. They just expect cloud to be easy: just deploy to it and it should happen, right? So it leads to a lot of frustration, and those pain points are what we're trying to address.

There are two main routes to cloud. There's the traditional lift and shift, or your "mess for less" as it used to be called in the outsourcing world, and there's a lot of that applicable to cloud today: people look at what they have in their enterprise and try to migrate it in as quickly as possible. They're not looking for transformation, they're not worried about cloud native, they just want everything in the cloud as fast as they can get it. Then there's the model we'd love them all to take, application transformation, where they actually go properly cloud native, your twelve-factor approach to applications, which makes using a cloud-native platform like OpenStack much easier because they understand the concepts. The Fujitsu Cloud Service K5 platform supports both, and the components I showed you today help them with the lift-and-shift model coming into K5. Which one is the correct approach? There is no single correct approach; it depends on budgets, on the application, and on how much life is left in the application.
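To give a rough feel for the infrastructure-as-code side I promised, here is a deliberately tiny, hypothetical example of posting a one-server HOT template straight to a Heat-compatible orchestration API. The orchestration URL, project ID, image and flavor names are placeholders, and this is nothing like the full WordPress stack from the demo:

```python
# Hypothetical sketch: endpoint URL, project ID, image and flavor names are
# placeholders. Posts a minimal HOT template to a Heat-style API and prints
# the new stack's ID.
import requests

HEAT_URL = "https://orchestration.example-region.example.com/v1/PROJECT_ID"  # placeholder
TOKEN = "KEYSTONE_TOKEN"  # Keystone token for the target region
HEADERS = {"X-Auth-Token": TOKEN, "Content-Type": "application/json"}

# HOT expressed as JSON: a single server pinned to a chosen availability zone.
template = {
    "heat_template_version": "2013-05-23",
    "parameters": {
        "az": {"type": "string", "default": "az1"},
    },
    "resources": {
        "web_server": {
            "type": "OS::Nova::Server",
            "properties": {
                "image": "EXAMPLE_IMAGE",    # placeholder image name
                "flavor": "EXAMPLE_FLAVOR",  # placeholder flavor name
                "availability_zone": {"get_param": "az"},
            },
        },
    },
}

resp = requests.post(
    f"{HEAT_URL}/stacks",
    json={
        "stack_name": "demo-stack",
        "template": template,
        "parameters": {"az": "az2"},  # pick the target AZ via the parameter
        "timeout_mins": 20,
    },
    headers=HEADERS,
)
resp.raise_for_status()
print(resp.json()["stack"]["id"])  # then poll the stack until CREATE_COMPLETE
```

A real deployment, like the demo, would instead use a full multi-AZ template with the load balancer and network connector resources described earlier.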
But this is the last main slide: just five steps. This is where people would love to know, well, how do I get into K5? How do I migrate into the cloud? If they've got 600 or 700 servers, how do they get them into the cloud without shutting their data center down? Maybe a lot of people in the room are familiar with this, but we find lots of people aren't: they've been working in cloud, working on greenfield sites, and they've never actually worked out how you get from one to the other. Any network guys in the room will laugh at what I'm showing you here; it looks very primitive, but it works, and it's what people have used before. What you're creating, effectively, is an overlay network.

If you look at step one, on the right-hand side of the picture is your new cloud deployment; on the left-hand side is the infrastructure you're going to lift and shift. So step one, you create your new infrastructure. Step two, you sort out your networking, which in our case means you can create a network connector. Actually, I left that part out when I was talking about our network connector: it doesn't just reach between availability zones, it can also reach back into a customer's data center, so you can have layer 3 routing from a customer's data center into the public cloud. It's a very important component, and that's what we use here in step two: the layer 3 network connector into the customer's data center. Step three, down at the bottom, is that with any migration you need to get the data across. You can use your own tooling, and we have tooling we can provide that helps you copy your data across, but it's not to be underestimated: depending on what your data and applications are and how big the data is, moving it into the cloud is a big effort. Once you have your data in there, you're ready to ask, okay, how do I go live? How do I cut over a server? And you want to keep the existing estate running. So assume on the left-hand side you've got 600 or 700 servers, and on the right-hand side you're going to build up to 600 or 700 servers.

When you want to bring your first server online, step four is where you'll see iptables. Has anybody played with iptables here? Is anybody familiar with it? In Linux there's a great thing called iptables. This could be any proxy, but iptables is normally quite inexpensive; if you buy an appliance to do this it can be quite expensive, whereas with a good network team and good network engineers you can very quickly stand up a couple of iptables servers, one in your old data center and one in your new data center, and create an overlay network using them. So when you want to switch off one of your servers in the old data center on the left and go live in the new data center, you use iptables to redirect all the traffic for that server and pump it over to the new data center on the right. And you'll have a matching list on the right-hand side so traffic to the old servers can route back. That list grows on one side and shrinks on the other as you progress, until you have no servers left in your old data center and everything is sitting in the cloud. A minimal sketch of that redirect follows.
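This is only a sketch of the step-four redirect, assuming a Linux proxy host in the old data center; the addresses are placeholders, and a real cutover would add and remove these rules per migration wave rather than hard-coding them:

```python
# Minimal sketch of the overlay redirect, run on the iptables proxy host in the
# old data center. Addresses are placeholders; real cutovers would manage these
# rules per migration wave and remove them once the server list has drained.
import subprocess

OLD_IP = "10.10.0.25"  # server being retired in the old data center (placeholder)
NEW_IP = "10.20.0.25"  # its replacement in the cloud (placeholder)

def run(cmd):
    """Run a command, echoing it first, and fail loudly if it errors."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Let the proxy host forward packets at all.
run(["sysctl", "-w", "net.ipv4.ip_forward=1"])

# Anything still addressed to the old server gets rewritten to the new one...
run(["iptables", "-t", "nat", "-A", "PREROUTING",
     "-d", OLD_IP, "-j", "DNAT", "--to-destination", NEW_IP])

# ...and is masqueraded so replies come back through this proxy instead of
# returning asymmetrically.
run(["iptables", "-t", "nat", "-A", "POSTROUTING",
     "-d", NEW_IP, "-j", "MASQUERADE"])
```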
It sounds very simple, and I liked it when our network team showed it to me in one of our meetings and explained it. It's a very nice concept, and a very easy way for people to see how you can migrate live systems and live services from one data center into another. There's nothing new here, no magic; it's for people who are new to cloud and haven't had to do this kind of thing before. It helps you see that it's actually relatively straightforward to do, although you do need the iptables expertise.

Okay, well, thank you very much, it's been an absolute pleasure, and now is the time when you can ask me a few questions. Do we have any questions?

That will really depend on the applications themselves. In that very simple model there would potentially be downtime, but it would be very minimal, because you have the server working in two places. Depending on what that server is doing and how many transactions are going through it, it really will be case by case. What it lets you do, out of say six or seven hundred servers, is take two or three of them, get that whole process working, and then automate the whole process: you can take a server and say, okay, we're going to lose half an hour on this server while we switch over, and build that into a controlled process. So it gives you a very easy way to automate that cycle. As for whether you can have zero downtime, it really just depends on the applications you're using and what they're doing.

No, I'm going from a private IP address to a private IP address, on the subnet there, in that particular model. Obviously Fujitsu has a range of public IP addresses that it owns and controls; we don't have a mechanism today to bring in someone else's public IP addresses and attach them to the platform. If you're going for Type 3 or Type 4, let's have a chat offline about that, because there we would have the potential to bring in your own subnets.

For the data migration, you can do it whatever way you want; it depends again on your application and what you want to use. We have some products we use in-house: UForge is one of our tools, and rather than just migration it does full server lifecycle management, so we can scan your server and then deploy it into Azure, Cloud Service K5, or AWS. There are lots of different tools out there for the migration component, but we don't have anything built into K5 to do your migration, and some people just use dd, a straight dd across. If you're looking for good migration tooling, the best ones let you do the bulk copy offline and then just do async or delta copies of the differences. There are lots of tools out there, and we do sell tooling that can do that if you're interested.

Any more questions? What I'm trying to show there isn't migration itself; it's the building blocks you would need to enable a migration. Oh, the database replication? Correct, yes: all those databases are the same, the same database across.
So in each of those availability zones, when you target a particular availability zone, you'll see all the other availability zones and all the resources they have in there. And if we did lose an availability zone, you could disable it at the higher level and say, okay, don't present this availability zone anymore. But it's the exact same data; they're all kept in sync across all of them. Keystone is centralized for authentication; authorization is managed at a regional level. Okay? So authentication is central. Any more questions? Okay, thank you very much. One minute and 40 seconds left.