My name is Jan Šafránek. I work at Red Hat, where I am a SIG Storage tech lead upstream, and I am also the team lead of a small team at Red Hat that manages OpenShift storage. So I wear many hats, partly upstream, partly downstream. I will talk here about a recent feature we have been working on upstream, where we are removing code from Kubernetes and moving it somewhere else, hopefully without anybody noticing, if we did our work right. Down here you can find a link to the slides if you find them useful; there are a couple of commands in them that you may need.

Before I start, let me give you a brief intro into Kubernetes storage history. It's long. In July 2015, we released Kubernetes 1.0. I was there, I have my code there. Our storage story back then was called volume plugins, and we had 10 of them. The name would suggest that they are plugins, that you can plug them in and out at runtime, but that's false. They are hardcoded in the Kubernetes GitHub repository; you can't link them dynamically. Does anybody want to guess how many volume plugins we had two years later, in September 2017? Any number? Come on, everybody. We had 10 in 2015, so how many did we have in 2017? 18? Close. 20? 25? We got to 27. So in two years we gained 17 new volume plugins for various storage backends: clouds, bare metal, software-defined storage, anything. And we knew we couldn't go on like that. We wanted something that is pluggable at runtime and that we don't have to maintain inside Kubernetes. The first attempt was Flex volumes. They were kind of clunky, both on the Kubernetes side and on the storage provider side. Then, in late 2017, we introduced the Container Storage Interface (CSI) as alpha, and that was our future for storage in Kubernetes. As a carrot on a stick, to lure everybody to CSI, we started implementing new features like snapshots and cloning only in CSI. So if you want snapshots, you must use CSI.
In-tree volume plugins will never get snapshot support. CSI got extremely, extremely popular. Yesterday I did the numbers: I found 140 CSI drivers that I know about, and I'm pretty sure there are many other proprietary drivers that I don't know about. So, 140 drivers. As we developed CSI drivers for the clouds and storage backends that we knew, we ended up with an in-tree volume plugin that has some features, and a CSI driver that has the same features plus new ones, which meant two codebases that we needed to maintain. So early in 2018, we decided we want to get rid of the in-tree code. The same applies to in-tree cloud providers: again, we don't want to maintain both in-tree and external cloud providers, so in-tree must go away. That was 2018. In November 2018, we had alpha of CSI migration for the first volume plugins, and here we are, three and a half years later: CSI migration is GA for Azure Disk and OpenStack Cinder in Kubernetes 1.24. That means there is no in-tree code involved for these two volume plugins if you have Kubernetes 1.24. It took us three and a half years to get there.

So what actually is CSI migration? I already said we are moving code. We are not moving your data; the data stays where it is. We are not changing the API; the API objects stay exactly the same. So if you have a Kubernetes or OpenShift cluster with in-tree volume plugins, in-tree PVs, storage classes, StatefulSets, DaemonSets, everything, and you upgrade to a version where CSI migration is GA, nothing changes. You still have the same objects, the same PVs, the same storage classes. Just under the hood, we are redirecting all the storage calls to the CSI drivers, and they talk to the storage backend. So in theory, nobody should ever notice that any CSI migration is happening.
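To make "same API object, different code path" concrete, here is a sketch of what the translation layer does, using the OpenStack Cinder in-tree source as the example (the PV name is hypothetical and the volume ID is a placeholder; the CSI driver name `cinder.csi.openstack.org` is the real one):

```yaml
# In-tree PV as stored in the API. This object does NOT change during migration.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-cinder-example        # hypothetical name
spec:
  capacity:
    storage: 10Gi
  accessModes: ["ReadWriteOnce"]
  cinder:                        # in-tree OpenStack Cinder volume source
    volumeID: <volume-uuid>
    fsType: ext4
# With CSI migration enabled, the controllers and kubelet translate this
# in-tree source on the fly into the equivalent CSI source, roughly:
#   csi:
#     driver: cinder.csi.openstack.org
#     volumeHandle: <volume-uuid>
# and route the provision/attach/mount calls to that CSI driver
# instead of the in-tree code.
```

The stored PV keeps its `cinder:` field forever; only the in-memory translation and the code that acts on it change.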
We did our homework upstream: we did the tests, CSI, everything. The same in OpenShift: tests, CSI, everything, and we will support it, so we've got you covered. But on the other hand, I am a software engineer and it makes me kind of uncomfortable to throw this away. Here is an example, the Amazon cloud provider and its volume plugin: we are throwing away 17,000 lines of code, including comments, including unit tests, that is tested to death. It has run in production for many years, everybody knows how it behaves, everybody knows its corner cases, and we are replacing it with some other code in the CSI drivers. The CSI drivers are running in production today and are, again, pretty well tested; we know how they behave. It's just the translation layer between the in-tree volume plugin and the CSI driver that is not really well tested. We did test it, definitely, in our environment, but we know that many customers are crazy, or maybe inventive: they use Kubernetes and OpenShift in ways that nobody would have ever imagined. So if you are one of these crazy users, please test CSI migration.

The second thing is that CSI migration is a huge feature. In OpenShift, it is disabled by default while it's beta in Kubernetes, and it will get enabled by default only when it goes GA upstream. If you know Kubernetes upstream, when a feature goes GA there, it is enabled and you can't disable it. So if CSI migration is broken and you upgrade to a version where it is GA, you can't turn back, you can't turn it off, it's too late: you have a broken cluster. That's why I am actually here with this lightning talk. I want to encourage everybody to test it while it's Tech Preview in OpenShift. You can enable Tech Preview features very easily: you edit one CR that we call FeatureGate and add one line, featureSet: TechPreviewNoUpgrade. It will enable all Tech Preview features in your cluster, including CSI migration, including all the other Tech Preview features.
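The FeatureGate CR edit described above looks roughly like this (a sketch; double-check the OpenShift documentation for your exact version):

```yaml
apiVersion: config.openshift.io/v1
kind: FeatureGate
metadata:
  name: cluster                      # this CR is a cluster-scoped singleton named "cluster"
spec:
  featureSet: TechPreviewNoUpgrade   # enables ALL Tech Preview features at once
```

You can apply it with `oc edit featuregate cluster` and add the `featureSet` line by hand, or with something like `oc patch featuregate cluster --type merge -p '{"spec":{"featureSet":"TechPreviewNoUpgrade"}}'`.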
But there is a catch. The NoUpgrade suffix means that you can't upgrade that cluster, so please don't do this in production. This is for CI, for testing, for your lab environments, so you can try new features in OpenShift before they go GA, before you upgrade your clusters. When you enable TechPreviewNoUpgrade, it will enable the corresponding feature gates in Kubernetes, and while doing that, it will drain your nodes and restart the kubelets. So it will take some time: on a big cluster, quite a while; on small clusters, maybe five or ten minutes.

Here is the timeline: what we do both upstream and in OpenShift, and in the middle column, when you can start testing CSI migration in OpenShift. As I've said, Azure Disk and Cinder migration are GA in Kubernetes 1.24, and in OpenShift 4.11 they will be GA very, very soon, so please hurry with your tests if you are using one of those. AWS EBS, Google Cloud PD, and Azure File will follow in the next release, Kubernetes 1.25 and OpenShift 4.12, and the last will be vSphere in the release after that. So the timeline is pretty tight. It's really the best time to start testing before it's too late, and let us know how it works. This schedule is kind of optimistic; it is just the current upstream plan for future releases. We have shifted the GA graduation a couple of times already, but right now it really looks like it's going to happen. And of course, please read the release notes. For example, for vSphere, we are deprecating support for old vSphere versions, and OpenShift 4.13 will most probably require version 7.0.2, which is the current vSphere version. So if you have anything on 6.7 or 7.0.0, start thinking about upgrades.

Here are a couple of recommendations that I, with my upstream hat on, would give everybody.
If you are starting new clusters and a GA CSI driver is available there, then please forget about in-tree plugins and use CSI exclusively. You will not exercise this CSI translation layer at all, and our lives and your lives will be simpler. If you have an existing cluster with existing in-tree volumes and you have the possibility to move to CSI, please move to CSI. I know that in most cases it's not possible, because you would need to back up all the data and restore it somewhere else. But if you have the chance, please start using CSI, and in both cases, forget that in-tree ever existed. And if you have an existing cluster with in-tree volumes and you can't move them away, we are here for you; we built CSI migration exactly for you. Your cluster should work after upgrades, and we will support these in-tree PVs and in-tree storage classes for the foreseeable future; we are not planning to remove support for them. You can use them. You will just exercise one additional layer that could bring complications, and life is better without that layer. And again, I encourage everybody: please, please test CSI migration before you upgrade to a version where it is GA.

As a summary, I already told you: we are not moving the data and we are not changing the API. You can use the API as it is, you can use the existing PVs and existing storage classes, but please help us with the testing. And for repetition, here is the schedule again. I will be around if you have any questions about storage or CSI; I can answer anything. Okay, thank you very much.
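The "use CSI exclusively on new clusters" recommendation above comes down to which provisioner your storage classes use. A sketch for AWS (the StorageClass names are hypothetical; the provisioner names `kubernetes.io/aws-ebs` and `ebs.csi.aws.com` are the real in-tree and CSI ones):

```yaml
# In-tree provisioner: avoid on new clusters, this is what migration has to translate.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp2-intree        # hypothetical name
provisioner: kubernetes.io/aws-ebs
---
# CSI provisioner: prefer this, no translation layer involved.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp3-csi           # hypothetical name
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
```

PVCs that reference the CSI-backed class go straight to the CSI driver and never touch the migration layer.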