started as the clicker, but that one got a new battery. So it's actually not only me, but also the wonderful Carlos Torres, who's a local, a specialist solution architect in storage. And he even speaks Italian, so that's helpful. We'll start with a little bit of an overview of storage, and then Carlos will talk about what we had with OCS 3, which worked for OCP 3. And then I'll give you a little bit of an outlook on what's coming with OCS 4.2, which will be made available with OpenShift 4.2. All right, so with OpenShift and Kubernetes, you have a very broad choice of storage providers, and you see some listed here. If you were using OCS already, you were using GlusterFS there. It's quite a large list of options, and today we want to talk a little bit about why you should consider OCS for OpenShift. If you're looking at Kubernetes persistent storage, there are two things you can do. Very early in the Kubernetes days, people were still doing static provisioning of PVs. So you, as an admin, create PVs, and once a user creates a claim, a PVC, it gets matched with what's out there. Nowadays, we want to do dynamic provisioning. So once a user makes his claim, a PVC, we actually create the thing in the back end. And when the user frees that up, it's automatically reclaimed, deleted in the back-end storage. That's what we do today, and most things support that already. The basic storage need in a Kubernetes or OpenShift environment is most of the time registry storage. Most people that talk to us want to keep the registry on a persistent layer, because that way they can distribute it over all nodes. And that not only helps with retaining the registry when a node fails, but it also allows you to distribute it to other locations. Then, obviously, you want file storage for the containers to store anything, including databases.
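The static-versus-dynamic distinction above comes down to who creates the PV. The dynamic path can be sketched as a small Python helper that builds a standard Kubernetes PersistentVolumeClaim manifest; the storage class name used here is an illustrative assumption, not something named in the talk:

```python
# Minimal sketch of dynamic provisioning: a user submits a
# PersistentVolumeClaim (PVC); the provisioner behind the named
# storage class creates the backing volume, and deleting the claim
# reclaims it in the back-end storage.
import json

def make_pvc(name, storage_class, size_gi, access_mode="ReadWriteOnce"):
    """Build a Kubernetes PersistentVolumeClaim manifest as a dict."""
    return {
        "apiVersion": "v1",
        "kind": "PersistentVolumeClaim",
        "metadata": {"name": name},
        "spec": {
            "storageClassName": storage_class,  # selects the dynamic provisioner
            "accessModes": [access_mode],
            "resources": {"requests": {"storage": f"{size_gi}Gi"}},
        },
    }

# "ocs-ceph-rbd" is a hypothetical class name for illustration.
pvc = make_pvc("registry-claim", "ocs-ceph-rbd", 100)
print(json.dumps(pvc, indent=2))
```

A registry backed by such a claim survives node failures because the data lives in the storage layer, not on the node.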
But what's new now with 4.2: you also get block storage for containers. You can directly access block devices and write to them directly. You could even run Ceph on Ceph-provided block storage. And then there's one kind of storage that's not covered by OCS: ephemeral storage, where we're just temporarily storing information and throw it away when the pod is not used anymore. What we consider storage, we divide into three categories. We have the traditional storage arrays and appliances, which usually come with a vendor lock-in. You usually have these in your data centers, and you can attach them from the outside, obviously, to your OpenShift environment. Then you've got your point-solution storage offerings that are not necessarily Kubernetes-aware. But the most important thing for us is that they're usually limited to one environment. So either it's inside of your own data centers or it's inside of a single cloud environment. What we target with OCS is that we not only want to run in public clouds, but we also want to run inside of your own data center and make that a homogeneous experience for you, so that your data can move wherever you need it to be. And for us, that looks like this: you have your bare metal, virtual machines, containers, private cloud, public cloud, and your legacy storage needs, and everything is supported by the same storage environment. So that should be enough for a quick overview, and I'll hand it over to Carlos. Thanks, Chris. Well, as part of the Red Hat value proposition in our portfolio, we have a story for the present, and that means OpenShift 3.11. The story that I'm going to tell you is the 3.11 product based on Gluster. Then Chris will tell the story about the future, which will be 4.2, and it's based on Ceph. So regarding 3.11 and the overall story of container storage: as Chris mentioned before, we need storage for the infra part, and Red Hat provides the storage for the infra.
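The block access described above is requested the same way as file storage, just with a different volume mode: a PVC with `volumeMode: Block` hands the pod a raw, unformatted device via `volumeDevices` instead of a mounted filesystem via `volumeMounts`. A hedged sketch using the standard Kubernetes fields (the class name and device path are assumptions for illustration):

```python
# Sketch: raw block storage for a container. With volumeMode: Block the
# pod sees an unformatted device rather than a mounted filesystem, so
# the application can write to it directly.
import json

block_pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "raw-block-claim"},
    "spec": {
        "storageClassName": "ocs-block",  # assumed class name
        "volumeMode": "Block",            # default would be "Filesystem"
        "accessModes": ["ReadWriteOnce"],
        "resources": {"requests": {"storage": "50Gi"}},
    },
}

# In the consuming pod spec, the claim appears under volumeDevices,
# exposing a device path instead of a mount point (path is illustrative):
pod_volume_device = {"name": "data", "devicePath": "/dev/xvda"}

print(json.dumps(block_pvc, indent=2))
```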
And the infra means registry, metrics, and logging. Very important, because your registry holds your container images, and for your metrics and logging you are probably under an auditing process, so you need to keep that information safe. And then you have the storage for applications. Stateful applications: in the previous presentation there was a session about Kafka. There are multiple applications, like 3scale, that are stateful and therefore require persistent storage. Based on that, what is the proposition of Red Hat? How can you deploy your Red Hat storage inside or outside OpenShift? On the left side, we see a deployment model that is called storage for containers. It means that you are running your platform independently from your storage. So you have your storage, based on dedicated VMs, where you run the binaries of our storage product. Then you connect the storage to OpenShift through APIs. But both parts, OpenShift and OCS, OpenShift Container Storage, are independent. Then you have on the right the flavor that is called storage in containers. It means your storage becomes an application, and it's delivered inside OpenShift. You have binaries, but the binaries are in container and pod format and are completely managed by OpenShift. What's the difference between the two? From a regulatory point of view and for internal processes, you probably want to maintain independence between infra and application concerns. So there are dedicated storage teams and dedicated developer teams. On the left side, by keeping them independent, you address this requirement. On the other side, you have everything managed by OpenShift, so developers can manage the storage independently. So what about the architecture? This is the container flavor architecture. The storage is running as an application inside OpenShift. So you have a pod, and then you have the data plane.
The data plane of the current version, 3.11, is based on Gluster. The building blocks in Gluster are called bricks: you have file systems and nodes that are federated together and provide you the cluster, the storage part. Then you have the control plane. The control plane is the API that is integrated with OpenShift and enables the dynamic provisioning features and all the features around persistent volume claims. So basically, with version 3, we delivered new features version after version. This is a table about the last three versions, from 3.9 to 3.11: what are the features we delivered in these three versions? You can see that version after version we deliver new features. Regarding Kubernetes integration, we support block, file, and object storage, then read-write-many, read-write-once, read-only, dynamic provisioning, and PV resize. From an OpenShift point of view, in the last version you can manage the storage directly from the web console. You can install the storage from the playbook, the same playbook that you used to install OpenShift. Then the storage services that we provide: as I said before, multi-protocol storage, snapshots, geo-replication for DR. In terms of infrastructure, the solution is agnostic, so it runs everywhere OpenShift runs. In terms of support, the OCS 3 version, 3.11, is aligned with OpenShift. It means that, based on the lifecycle, OCS will be supported until the date that you have seen there, and there is the link for the public reference about the support lifecycle. And regarding the new OCP v4: OCP v4 has several requirements. The main requirements are related to operators. You have probably heard my colleagues talk about operators, a very interesting session, because they simplify the lifecycle of the applications inside OpenShift. And then the storage, again, must be aligned with this new way of managing the lifecycle through operators.
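The control-plane integration described above is wired together through a Kubernetes StorageClass pointing at the Gluster provisioner. A minimal sketch: `kubernetes.io/glusterfs` is the standard in-tree provisioner name, while the REST URL of the Heketi management endpoint is a placeholder assumption, not something stated in the talk:

```python
# Sketch of the StorageClass that enables dynamic provisioning
# against a Gluster trusted pool in OCS 3.x.
import json

gluster_sc = {
    "apiVersion": "storage.k8s.io/v1",
    "kind": "StorageClass",
    "metadata": {"name": "glusterfs-storage"},
    # Standard in-tree provisioner for GlusterFS volumes.
    "provisioner": "kubernetes.io/glusterfs",
    "parameters": {
        # Heketi REST endpoint that carves bricks out of the trusted
        # pool on demand; this URL is a placeholder.
        "resturl": "http://heketi-storage.example.com:8080",
    },
}
print(json.dumps(gluster_sc, indent=2))
```

Any PVC that names this class is then provisioned automatically, which is the "dynamic provisioning" feature the table refers to.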
Then in OpenShift version 4 you have to deal with the Kubernetes standard that is called CSI. CSI is an agreement between storage vendors where, finally, all the more important vendors agreed to create standard APIs to manage storage. It includes how to deliver storage classes, the ability to encrypt credentials, how to create multiple CSI drivers, and how to manage CSI per cluster. You can see in those tables, more or less, the APIs and API calls that are related to the CSI driver. So this is a new challenge for the storage industry: to align to those APIs. And this is good, because in the past everyone built their own driver. So different technologies, very difficult to integrate. Now we have a standard, CSI, and you need to be aligned with it. We are planning to deliver and to respect those standards in OCP 4.2. So what is the plan? Is OCS 3.11 supported on OpenShift 4? Unfortunately, no. But we have a new product version, OCS 4.2, which Chris will talk about now. But regarding the case where you are already an OCS 3.11 customer: you need to migrate your workloads, right? We have a solution to migrate the workloads from OCS 3.11 to OCS 4.2. The solution is a migration tool that is integrated in OpenShift. For more information, please keep in contact with us; we can provide you more information about it. So, Chris, next version, 4.2. Next version, 4.2. Thanks, Carlos. And I do see that some people have actually woken up and started listening, so that's wonderful. You've heard a lot about operators today, so I just want to quickly go over the framework again. The goal of an operator is not only to install something, but to actually help you in day-2 operations. So updates, backups, failover, and restore. You shouldn't have to worry about all these things anymore. I think most of you probably understand that by now. But what's also important is that it's a native application for Kubernetes.
We're not reinventing anything special here. And because OCP 4 wants us all to run everything as an operator, obviously OCS also runs as an operator. So what has changed now? We changed OpenShift 3 to 4, and consequently we also changed OCS from 3 to 4. And to spice it up, we completely changed the back end for OCS. As I told you this morning already, OCS 3 was Gluster-based. Now we base it on Rook for Ceph, and we base it on NooBaa for the Red Hat multi-cloud gateway, which will allow you to do cool things between clouds for your object storage. And as Carlos already said, you cannot use OCS 3 on OpenShift 4. That's unfortunate, but I can assure you that the wait is worth it, because with OpenShift 4.2 you will be able to use OCS. And if you were using OpenShift 3 already, there is a migration tool. This is the default migration tool that you would use anyway to port over your pods, and it will also be able to port over your persistent storage from the Gluster-based OCS to the Ceph- and NooBaa-based OCS. So the new technology looks like this: we have Rook with Ceph and NooBaa. Ceph with Rook already has an operator, and we are basically putting an operator on an operator, as you heard earlier, that will manage all the storage underneath. So why did we move from Gluster to Ceph? That's a question I get a lot, but it does make sense. Ceph has already been supported from the very beginning of Kubernetes as a community effort, and we heard from customers that they also wanted to have an S3 endpoint inside of their OpenShift clusters. Now, with Ceph and NooBaa, we can actually deliver on this demand very well. I'm not sure how many people know Rook already, but it's also a project that's community-driven. It didn't start inside of Red Hat, but we're now an active contributor in this community.
Its main purpose is to bootstrap a Ceph cluster, make it available for OpenShift and Kubernetes, do the dynamic provisioning that I talked about earlier, and also support lifecycle changes. So we will be able to do what we promise an operator will do: upgrades and backups and restores and all of this. The Rook architecture at first glance looks quite complicated, but the diagram on the left shows you the things you have anyway. And you now have new objects that you control via kubectl, or now via the OpenShift console with the UI. You can just request a new storage pool, and that will translate into a Ceph pool, for example. Or you can request file storage or object storage or block storage as well. That will then communicate with the Rook operator on the right, and the Ceph daemons will do the actual work in the back end. And then, either using the Flex driver or the CSI driver, you can actually attach and mount this storage onto your pods. With OCS 4.2, we will be going the CSI path, because that's now available in OpenShift 4.2, and it also allows us to very quickly develop new features for our storage. So that's just an overview. Down here, we have everything that we need anyway for Ceph: the OSDs that store the data, the monitors that contain the cluster information, the managers that allow monitoring and communication with the cluster from the outside, and the MDSs for the distributed file system. And using the Rook layer on top, the blue one, we can now export those volumes to the pods and make them available. Now, we've talked enough about Ceph. Talking about NooBaa, because a lot of people don't know it yet: it is our answer to providing a very enterprise-ready S3. Ceph already implements an S3 endpoint that a lot of people use and that has been proven to work at very high scale already. But NooBaa adds very nice enterprise features, especially when we want to work with multiple clouds in parallel.
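The "request a storage pool, Rook translates it into a Ceph pool" flow above can be sketched as a Rook custom resource. The field names follow Rook's `ceph.rook.io/v1` CephBlockPool API; the pool name, namespace, and replica count are illustrative assumptions:

```python
# Sketch: a custom resource the Rook operator watches. Applying an
# object like this (e.g. via kubectl/oc) makes the operator drive the
# Ceph daemons to create the corresponding pool in the back end.
import json

ceph_block_pool = {
    "apiVersion": "ceph.rook.io/v1",
    "kind": "CephBlockPool",
    "metadata": {"name": "replicapool", "namespace": "openshift-storage"},
    "spec": {
        # Spread replicas across hosts so a single node failure
        # does not lose data.
        "failureDomain": "host",
        "replicated": {"size": 3},
    },
}
print(json.dumps(ceph_block_pool, indent=2))
```

The same pattern applies to file and object storage: a declarative object goes in, and the operator plus the OSD/MON/MGR/MDS daemons do the actual work.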
One of these multi-cloud gateway functionalities that is very nice is that we can have active-active read-write access between clouds. So you have one endpoint that your application uses; in the back end, you can use multiple clouds that your data is distributed over, and you can define how it's distributed. We will call NooBaa, the product and the company, the RHOCS multi-cloud gateway, or, for short, just the multi-cloud gateway. And it will be included in the regular OCS package. This is just an overview slide: you have your apps in OpenShift up there, they're using S3, and they can use different buckets. And every bucket can have a different configuration, have multiple endpoints, whatever. So, summing this up: you will have OCS in OperatorHub soon, and then you can just install it from OperatorHub. You click on Storage, and either you select that you specifically want to see storage from Red Hat, then you will only see that, or you will see the other storage offerings as well. Once it's installed, you have access to monitoring and management, so you can do everything from the UI. You probably remember this slide; we saw it this morning already. So that's a healthy cluster, and an unhappy cluster where nodes failed, and you will have access to all of that. And it's included in the whole OpenShift metric system. It's hooked up to Prometheus, you can get alerts, so you have everything that you need for day-2 operations as well. So, OCS 4.2: file, block, and object support, Prometheus integration, and it's FIPS compliant if you need that. We support VMware and AWS right from the start and will add Azure and Google Cloud in later versions. And to sum this up again, this is a very similar slide to one we've seen earlier. If you use OCS, you can deploy your storage onto anything, anywhere you deploy, be it bare metal or VMs or inside of containers. And you can have not only the read-write-once but also the read-write-many persistent volume claims, and obviously also use S3 via the gateway.
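On the consumption side, the per-bucket configuration described above is typically requested through an ObjectBucketClaim: the app asks for a bucket, and the gateway provisions it against whatever backing stores its bucket class defines. A sketch using the community `objectbucket.io` claim API; the claim name, prefix, and storage class name are assumptions for illustration:

```python
# Sketch: an app requests an S3 bucket declaratively, the same way a
# PVC requests a volume. The gateway answers the claim with a bucket
# plus credentials delivered as a ConfigMap/Secret.
import json

obc = {
    "apiVersion": "objectbucket.io/v1alpha1",
    "kind": "ObjectBucketClaim",
    "metadata": {"name": "my-app-bucket"},
    "spec": {
        # A unique bucket name is generated from this prefix.
        "generateBucketName": "my-app",
        # Assumed class name; it selects the gateway's placement
        # policy, i.e. which clouds the data is distributed over.
        "storageClassName": "openshift-storage.noobaa.io",
    },
}
print(json.dumps(obc, indent=2))
```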
So, Carlos. That's a very hard slide, because I'm in pre-sales, so I have to tell you something about the SKUs. Basically, the SKUs will not change. But since this is a new architecture and we are offering the S3 multi-cloud gateway, you probably need to scale your workload. So I recommend you keep in contact with your Red Hat representative in order to evaluate, case by case, what you want to achieve. If you want to stay with the same workload, it will basically be the same in terms of SKUs. If you want to scale, of course, we need to make some architecture considerations. So, to summarize the facts, and thanks, Chris, for coming. Basically, when you are looking for storage to integrate in OpenShift, you need to evaluate: is my storage CSI-ready? Is it compliant with the industry-standard technology? With OCS 4 we adhere to this standardization. Then, if you are moving to cloud-native workloads, you want to target S3, because S3 is now mostly adopted in the industry and is even replacing file sharing, because it's the new and flexible way to share buckets. It's very simple to share data and ingest data through S3. Again, we offer NooBaa in the solution. So it's a real multi-protocol storage solution with file, block, and object. And basically, if you are already an OCS customer, the SKU will remain the same, so you can get access to both products. If you stay on OpenShift 3.11, you can get access to OCS 3.11. Otherwise, if you move to OCP 4.2, you can get access to the new product version. With that, thanks. Thanks for your time.