Okay, so welcome everyone to the OpenStack Summit and to the first session today. I'm from Trilio, a Solution Architect for the EMEA region, and I'm glad to be joined by Ian Pilcher, Product Manager at Red Hat. So, the session for today: Migration Without Migraines. We all know this topic, especially when management is involved — management has a high-level view of these things and then tells us to just do it.

So, I was tasked, with my old team, to migrate a VMware and KVM environment to OpenStack, and I was greeted by some really strange assumptions. This manufacturing company thought, first of all, that it's just a lift and shift, that we only slightly change the responsibilities. They thought that the administrator is simply no longer responsible for the virtual machine, and that this is now the responsibility of the application owner. This view is true in terms of responsibility, but there is more to it when you look at what actually changes between a legacy environment and OpenStack.

The second wrong assumption I was greeted with was that multi-tenancy is just mirroring the hypervisor clusters. They had a hypervisor cluster for each department and thought that with OpenStack they could simply create a tenant for each department and be done. So, again, a lift-and-shift expectation, without any of the necessary changes to the virtual machines and workloads — which, as you all know, can't work. We need to do more. And why is that? Because there is no one-to-one translation. When we look at how a virtual machine is defined in the legacy world, the administrator, according to what he has been told, picks the exact size of the virtual machine for its use case, creates ports, assigns them to the correct networks, and so on. As I said, all of this is specifically created for the use case of that one virtual machine.
When you look at the OpenStack side, on the other hand, you have the flavors, the public network, the private network. Those are generalized building blocks — that's where you start, and then you move on to your specialized use case. So we can't directly take the legacy world and bring it into the OpenStack world without some kind of transformation.

As I said, I was at this manufacturing company and they told me: I don't care what the issue is, just do it. So I started to talk with the application owners. What kind of applications do you have? And they said, yeah, we're running on Linux — Red Hat, Canonical, the whole bunch — and more: databases, SAP, EAP, everything. I looked at them and said, okay, let me ask you a question. What is it that you actually care about? Do you care that your application and the application data are carried over to OpenStack? And they said yes. Okay, then we can at least start with something, because, as I just told you, it is not possible to simply take the virtual machine and move it over to OpenStack. There needs to be something more.

So, since they wanted an easy and quick solution, I first proposed a static approach. I told them: you can do it that way, and it will result in a working solution, but it's not what I would recommend. In this approach you change the virtual machines on the hypervisor to include cloud-init, reconfigure them to use cloud-init when they are spun up, and then take those virtual machines and convert the images. It works, but you will end up with a ton of images inside your OpenStack eating space in your storage, each of them a one-off — which is a disadvantage. It's not repeatable. You can't take such an image and spin up a second, a third, a fourth instance from it, so when you talk about scaling, you simply can't scale out with that. So I told them: when we have the time, there's a far better approach.
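To make the static approach just described concrete, here is a rough sketch of what it looks like on the command line. The tool choice, disk names, flavors, and networks are illustrative assumptions, not details from the talk:

```shell
# Hypothetical sketch of the "static" lift-and-shift approach.
# 1. Enable cloud-init inside the legacy guest disk (libguestfs tools).
virt-customize -a webserver01.vmdk --install cloud-init

# 2. Convert the disk to a format OpenStack handles natively.
qemu-img convert -f vmdk -O qcow2 webserver01.vmdk webserver01.qcow2

# 3. Upload the converted disk as an image -- one image per migrated VM,
#    which is exactly the storage-hungry, non-repeatable part.
openstack image create --disk-format qcow2 --container-format bare \
    --file webserver01.qcow2 webserver01-migrated

# 4. Boot a one-off instance from it.
openstack server create --image webserver01-migrated --flavor m1.large \
    --network private webserver01
```

Because each image is tied to one specific machine, scaling out means repeating all four steps per instance — the disadvantage called out above.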
And that approach was this: we extract the data from the virtual machines on the legacy hypervisor onto specifically created volumes on that hypervisor, so that the database data and the application data live on those volumes, and then we extract only those volumes out of the hypervisor. We move those volumes over to OpenStack, where we spin up a freshly created instance, do a fresh installation of the application, and attach the migrated data from the legacy world to this newly created instance. That is now a repeatable state — a state you can even move to anything else — because you have made a clean separation between the application data on the one hand and the actual application and instance running on the other. So that's how we approached it, and we developed a complete CI/CD pipeline around it. And that's how this manufacturing company was able to migrate, without a migraine, from the legacy world to OpenStack. But there is, of course, still the question of how to go from OpenStack to OpenStack, and here I would like to hand over to my colleague from Red Hat.

So, thank you. The reason Red Hat is here, and the reason that we're working with Trilio, is that we see this big need within OpenStack for a tool set that is able to do backups, restores, and migrations of actual workloads within OpenStack. It's relatively easy to back up a volume or an image — a blob, basically. We've been doing that in this industry for years and years. But what we miss when we do that — be it in a backup/restore scenario or in a migration or site-to-site scenario — is all the metadata. We don't know what this thing is, this big blob of bits that we've backed up. We have no idea what it is or how all these things are wired together. From the OpenStack point of view: what flavor does this go with? What network is it on? What tenant does it belong to? Who owns this stuff? What image is it based on?
Et cetera, et cetera — what security groups are associated with it, and so on. And that's what Trilio gives us: the ability to connect to an OpenStack environment, in addition to other types of legacy environments, understand that information, persist it, and, if necessary, restore those workloads back into another environment. That is incredibly useful to us — not just in the ways you might think, in terms of backup and restore, but also potentially in an upgrade scenario.

To talk about this part, we need to talk about the way Red Hat manages the life cycle of our OpenStack releases. I imagine everybody in this room is familiar with the OpenStack release cadence, where approximately every six months we have an OpenStack release — the current one is Rocky, moving on to Stein, and I'm not sure whether they have named the T release yet or not. If they haven't, my vote is definitely still for Train. What Red Hat does with this is we have a corresponding release of our product, Red Hat OpenStack Platform, for every upstream OpenStack release. So, as you can see on the chart here, our current GA release is Red Hat OpenStack Platform 13 — you'll often hear us call it OSP 13 — which is based on Queens, and very soon now we should be releasing the 14 version based on Rocky, et cetera. What we've done from a life cycle point of view is identify what we see as two different usage patterns for Red Hat OpenStack, for OSP. The first one is what we sometimes colloquially call the fast train customers. We're not really supposed to say that, because it implies that someone else is on the slow train — we shouldn't say that.
But anyway, the fast train customers: these are folks who are comfortable with that six-month release cadence. They want the new features, they want to stay relatively close to the upstream releases, while still taking advantage of all the integration, engineering, testing, and support work that Red Hat can give them. For those folks, every Red Hat OpenStack Platform release is supported for one year. So, as an example, our OSP 11 release, based on Ocata, which came out 16 or 17 months ago, is now end of life, and our Pike release, OSP 12, is coming up on its end of life. Every third release — so every 18 months — we release what we call a long life release. You can see here that OpenStack Platform 10 was our first long life release, and OpenStack Platform 13, Queens, is our second. Those releases are supported for three years. And for folks who want to go even beyond that, and want to write us an even larger check, we have what we call an ELS offering — extended life support — that extends it out for another two years. So that's a total of five years on a particular OpenStack release, which is actually pretty impressive, given that it's basically ten times the life cycle of an upstream release. For those two usage patterns, we have different ways of moving along the release stream. For the quote-unquote fast train customers — the people who want to go release to release to release — we have what we call major upgrades, or version-to-version upgrades, or N to N+1. And this is a feature that has been in Red Hat OpenStack Platform really since OpenStack Platform 8, our Liberty release.
So when we released Liberty, we provided the ability to do an in-place, non-disruptive — or minimally disruptive, depending on whether you're in marketing or not — upgrade from Kilo to Liberty, and so on all the way up through Queens and soon Rocky. We call that the major upgrade, the N plus one. These N to N+1 upgrades take advantage of the fact that the OpenStack projects themselves support this. At a very high level: we upgrade our control plane services and leave the hypervisors, the compute nodes, at the old version, and they can still talk to each other. That allows us to very quickly upgrade the control plane and keep things working, and then, once we've upgraded everything else, we can bump the RPC version between the services, between the boxes.

When we do a three-version upgrade, though — for example OSP 10 to OSP 13, Newton to Queens — we don't have that luxury. So we have developed a technology that we call fast forward upgrades. This is the ability to orchestrate an in-place, again minimally disruptive, upgrade — all your VMs, virtual networks, et cetera keep running — but we do have to take the control plane, the API services, down to do it. The very high-level view: we take all the OpenStack services down while leaving the VMs and virtual networks running; we run the database migrations to go from Newton to Ocata to Pike to Queens; we deploy the updated bits and configuration files for the new versions of all the services; then we start the control plane services back up; and then we upgrade and reconfigure the services on the other types of nodes — storage nodes, compute nodes, et cetera. So, like I said, it's non-disruptive to the workloads, but it does involve a fairly significant outage of the control plane — your API services are going to be down for a while. Very cool technology, though. In fact, I've glossed over it pretty quickly.
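For orientation, the orchestrated flow just described maps roughly onto the director/TripleO fast forward upgrade CLI of the OSP 10-to-13 era. This is a sketch from the general shape of that workflow — exact command names, options, and environment files vary by release, so treat it purely as an illustration and consult the release documentation:

```shell
# Illustrative outline of a fast forward upgrade (Newton -> Queens) with
# director/TripleO. Environment file names are examples.

# 1. Prepare: rewrite the overcloud deployment plan for the target version.
openstack overcloud ffwd-upgrade prepare --templates \
    -e my-environment.yaml

# 2. Run the fast-forward steps: the control plane APIs go down here while
#    the Newton -> Ocata -> Pike -> Queens database migrations are applied.
openstack overcloud ffwd-upgrade run

# 3. Upgrade the nodes role by role; VMs and virtual networks keep running.
openstack overcloud upgrade run --roles Controller
openstack overcloud upgrade run --roles Compute

# 4. Converge: finish up and re-enable normal stack operations.
openstack overcloud ffwd-upgrade converge --templates \
    -e my-environment.yaml
```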
There are a lot of moving parts in an OpenStack deployment, as you know, so orchestrating all of that is very impressive work by our engineering folks, and it's pretty cool stuff when you see it work. But there is a third option, and because fast forward upgrades are new and cool and shiny, we probably haven't talked about this one enough: what we call a parallel cloud migration. The idea: I've got, say, Newton over here, and I want to install Queens — or Rocky, it doesn't really matter in this case. And this is really a process, not a technology. I spin up a new control plane with my new version and a few compute nodes, I start moving workloads over, and as I move workloads over, I decommission compute nodes on the old OpenStack installation and bring them up on the new one. I'm scaling down over here and scaling up over here, so I don't need a complete second environment standing side by side with my existing one — it's a more gradual migration. This was our long-life-to-long-life strategy before we developed fast forward upgrades; it was all we had.

So, just to summarize the various kinds of upgrades. The two columns on the right are the interesting ones, because if you're doing a one-version upgrade, you're almost certainly going to be in the N plus one column; but if you're going long life to long life, you've got a choice to make: am I going to use the fast forward upgrade, or am I going to do the parallel cloud migration? As you can see from the table, there are advantages and disadvantages to each, so which one is the right solution depends very much on the specific deployment, workloads, et cetera. The fast forward upgrade is long life to long life only, which right now means OSP 10 to OSP 13, Newton to Queens.
With the parallel migration, it can be any version to any other version. It could also, obviously, be a non-OSP OpenStack to OSP — or even the other way around; we'd advise against that, obviously, but you could do it. As for in-place: the parallel cloud migration is not in place, and it usually requires additional hardware. We do support running the control plane in a virtualized environment, so in that case you might not need additional hardware — you would just spin up some additional virtualized controllers. In terms of workload outage, neither of the two left-hand columns has one: as I said, VMs keep running and virtual networks keep running, but the APIs go down. The API outage in the fast forward case is quite a bit longer — in the N plus one upgrade case it may be 10 or 15 minutes, in the fast forward upgrade case maybe an hour, an hour and a half; it really depends on how fast your systems are. In the parallel cloud case, whether there's an outage depends entirely on the workload itself. If the workload has the ability to move non-disruptively from one place to another via load balancing or some sort of app-level HA, then you won't have an outage; if not, you're probably looking at a maintenance window for that particular workload: take it down over here, bring it up over there. As for fallback: in the first two cases, you need to restore from a backup of your OpenStack, which is a fairly complicated procedure in and of itself. In the parallel cloud case, because your old cloud is still there, if you move a workload over and for whatever reason it's not compatible — it's doing something with control plane APIs that have changed — you still have the older version running, so you can migrate right back.
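Moving a single workload by hand between the old and new cloud during a parallel cloud migration might look like the following sketch. The instance names and `clouds.yaml` entries are made up, and notice that none of the metadata — flavor mapping, networks, security groups — carries over; it all has to be rebuilt manually, which is exactly the gap a metadata-aware tool fills:

```shell
# Manual workload move between two clouds, defined as "old-newton" and
# "new-queens" entries in clouds.yaml (hypothetical names).

# On the old (Newton) cloud: snapshot the instance and download the image.
openstack --os-cloud old-newton server image create --name app01-snap app01
openstack --os-cloud old-newton image save --file app01-snap.qcow2 app01-snap

# On the new (Queens) cloud: upload the image and re-create the instance.
# Flavor, network, and security-group wiring must be specified again by hand.
openstack --os-cloud new-queens image create --disk-format qcow2 \
    --container-format bare --file app01-snap.qcow2 app01-snap
openstack --os-cloud new-queens server create --image app01-snap \
    --flavor m1.large --network private app01
```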
So, about that workload migration: from a Red Hat point of view, we would say this is where we bring our consultants in, and they'll figure out a way to move a particular workload from one environment to another. And that brings us full circle, back to the value that we get out of the Trilio tool set and its ability to take a workload out of one OpenStack environment and move it over to another. And with that, I will hand it back to Robert.

And with us, you don't need to be more specific in step two, because we are step two. Trilio, as a company and as a product, provides you with a backup and recovery solution for your OpenStack environment — it doesn't matter whether you go with Red Hat, Canonical, Mirantis, SUSE, or upstream. And since we follow the OpenStack principles, and we know what challenges OpenStack owners and users are facing, we also provide the migrate process. When you take Trilio as a tool, you can take a backup of a complete tenant in OpenStack cloud one and restore it into a completely different tenant in the same cloud, or — which is why we are sitting here — into a completely different cloud, or move it between availability zones. When you have a big OpenStack environment that is spread out — let's say one cluster is in Frankfurt and another in Berlin, both the same OpenStack but different availability zones — you can migrate between those with us. Or, coming back to the upgrade process: the process Red Hat developed is great, but not everyone is using Red Hat, so you can take a backup with us from your Ocata and go directly to Queens. That gives you a safe way to upgrade, or to switch between OpenStack distributions. You can destroy your whole cloud if necessary and start from scratch, secure in the knowledge that your data will be available once your cloud is back up, because we will restore it for you. And with that, we thank you for your time and for being here, and if you have any questions, we are here for you.