Good morning, good afternoon, good evening, wherever you are. My name is Gregory Touretsky. I'm Director of Product Management at Infinidat. You might have heard about Infinidat; we don't spend much on marketing, because our customers tell each other about us. We develop large-scale enterprise storage systems, and that's why we're joining this discussion about OpenShift storage integration. We have today over six exabytes of capacity deployed globally; by the way, that's an update from the end of last year, so we are now at an even bigger scale. Our customers are large enterprises: banks, insurance companies, CSPs, MSPs, and others. Every deployment we have is usually a petabyte and above, with many multi-petabyte deployments, and that's why we talk about these really large-scale installations. Our storage supports block and file: we provide Fibre Channel, iSCSI, and NFS connectivity, plus SMB, for our customers, and this is our main product.

The focus of Infinidat really goes along those lines. We offer something that meets the needs of customers who are interested in very large-scale deployments: multi-petabyte, very high performance. We're talking about millions of operations per second and tens of gigabytes per second of throughput per system, while offering capacity at a cost much lower than our competition. I'm not going to talk about the technical details of the storage system itself; that's something we would be glad to cover in a separate discussion, so reach out to me or to your Infinidat representative to talk more about it. I'll gear this more towards OpenShift and Kubernetes in general.

When we talk to our customers, mainly large enterprises that are starting to look more into containerized infrastructure and into Kubernetes in particular, they mention these challenges as part of their Kubernetes journey. First of all, persistence is not something that was defined from the very beginning in Kubernetes. Most of the workloads in the Kubernetes space, as you probably know, were stateless, and anything that required persistence was usually deployed outside of Kubernetes. However, this is changing: we see more and more customers starting to run stateful applications in Kubernetes, and it's not always simple for them to do this with highly reliable enterprise storage. Another challenge is scale: once you start using storage systems within your Kubernetes environment, how do you grow from a few persistent volumes to much larger counts, capacities, and performance levels? That becomes a challenge, especially for customers in traditional enterprise environments. Security is another concern mentioned by many of our customers, as is the ability to move data between multiple clouds. One of the promises of Kubernetes is the ability to shift workloads between on-premises and public clouds; Kubernetes can do this in a great way, but the limiting factor is really the ability to migrate data together with the workload. And in general, the number of solutions in this ecosystem is exploding, which also confuses many of our customers; it's a real challenge for them to find the best solution for their needs. This is also aligned with what we see in CNCF surveys.
Those surveys interview users and report that about 30% of them say storage is a challenge when they start Kubernetes adoption. We believe this number could be even bigger if those users were further down the road of migrating into Kubernetes and containerized environments. And this is exactly why we are starting to offer solutions for containers as well.

But before I go into those details, let me introduce one of our customers. I unfortunately can't mention the name, but this is a large consulting company, and what you can see here is the breakdown of their deployment. They're not the largest customer for Infinidat; they are small to medium on our scale, with about 18 and a half petabytes of storage capacity on InfiniBox across three data centers. What you can see here is the kind of workloads they run today. In general, InfiniBox storage is targeted at consolidation, so this customer is running a mix of VMware, AIX, Linux, and Windows, plus backup and some other applications, on their seven InfiniBox systems across those three data centers. They are now starting to introduce Kubernetes into that mix of workloads, and they started to use our CSI driver as one of the beta customers for it. One of the architects at this company gave us the feedback that they really like the CSI driver and the integration, and they expect to expand their usage of InfiniBox storage into containerized workloads as well.

So why do customers use InfiniBox storage? We offer solutions at very large scale. We can consolidate multiple workloads onto the same system, which simplifies the infrastructure for customers. We provide standard enterprise features at great scale: replication, both synchronous and asynchronous; snapshots that can be taken instantly, without any degradation or impact on performance; encryption of the data; and quality of service. We collect telemetry data from all the systems in the field and expose insights on performance and tuning recommendations to customers through our InfiniVerse tool. We have different purchasing models for the storage systems. All of those things together result in the very high adoption that brought us to over six exabytes deployed today.

What we are doing now is taking all those features and making them available to customers running Kubernetes as well. We launched general availability of our Kubernetes CSI driver this week. It basically provides great integration for our customers. It is free of charge for Infinidat customers, available with source code on GitHub, and we have container images for the driver on Docker Hub and in the Red Hat container catalog. You can see the screenshots here with both the driver itself and the operator for the deployment.

These are the features we support with the CSI driver. Customers using InfiniBox storage can manage multiple InfiniBox systems from the same OpenShift or Kubernetes cluster. They can do dynamic provisioning and deprovisioning of persistent volumes. Those volumes can offer both file system and raw block access, so customers running applications such as Oracle Database in Kubernetes may consume persistent volumes with a raw block interface.
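To make that concrete, here is a generic Kubernetes sketch of raw block access; it is not specific to our driver, and the storage class and image names are placeholders:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: oracle-data-pvc
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Block          # raw block device instead of a mounted file system
  storageClassName: ibox-block-storageclass   # placeholder name
  resources:
    requests:
      storage: 100Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: oracle-demo
spec:
  containers:
    - name: db
      image: my-oracle-image      # placeholder image
      volumeDevices:              # note: volumeDevices, not volumeMounts
        - name: data
          devicePath: /dev/xvda   # device path as seen inside the container
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: oracle-data-pvc
```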
We support instant cloning of persistent volumes: a customer may provision a new persistent volume claim from an existing one, which instantly creates a new PV without duplicating capacity, so only the changes the customer makes afterwards consume storage on the InfiniBox. We support resizing of volumes. We support snapshots; again, the snapshots are instant, and customers may restore data from a snapshot by creating a new PVC. We'll see all those things as part of the demo. Customers may also import existing datasets: if there was a static allocation before, or if a customer is migrating from legacy storage onto an InfiniBox array, we can take an existing persistent volume, import it into the CSI driver, and manage it within the CSI driver from that point in time.

InfiniBox is unified storage, as I mentioned earlier, so we support all those protocols for Kubernetes too. Customers may choose whichever protocol works best for their environment, whether it's Fibre Channel, iSCSI, or NFS. We also have a special flavor of NFS that we call NFS TreeQ, where we allocate a quota-limited directory of a file system as a persistent volume. This flavor is intended for customers who need a really large number of persistent volumes; we're talking about hundreds of thousands of persistent volumes per InfiniBox storage array using the NFS TreeQ method. We also see demand from customers for easier deployment mechanisms for the driver, so we offer both a Helm chart and an OpenShift operator, available on OperatorHub, for deploying the CSI driver.

Let's take a look under the hood at how the CSI driver works. A customer has a Kubernetes cluster with a master, or maybe a few master nodes, and some worker nodes, and will also have one or more InfiniBox storage arrays. InfiniBox provides separate endpoints for management access and for the data path. When the CSI driver gets deployed, there are a few entities we create within the cluster. We create a secret that holds the credentials for the InfiniBox, which later allows the driver to manage storage. We deploy the CSI controller as a Deployment on one of the worker nodes, and we deploy the CSI node component as a DaemonSet on each of the worker nodes. Whenever a persistent volume claim request comes in, the CSI controller talks to the management interface on the InfiniBox and provisions the persistent volume as requested by the PVC, with all the required configuration settings: size, obviously, and other things related to the type of provisioning and so on. When an application pod gets scheduled on one of the worker nodes, the kubelet communicates with the CSI node instance on that worker node, which may contact the management interface on the InfiniBox, for example to map a volume to the worker node if this is block access over Fibre Channel or iSCSI, or to export the persistent volume to that specific worker node if this is NFS. It may also format the persistent volume, if required, with a specific file system such as XFS, ext3, or ext4. Then, when the actual pod starts on this worker node, it can consume storage using this persistent volume.
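For illustration, a minimal sketch of what that credentials secret might look like; the key names here are assumptions, and the exact keys the driver expects are defined in its documentation:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: infinibox-creds        # referenced later from the storage class
  namespace: ibox              # the namespace the driver is deployed in
type: Opaque
stringData:
  # Illustrative keys -- check the CSI driver docs for the exact names.
  username: admin
  password: changeme
  hostname: ibox-mgmt.example.com   # InfiniBox management endpoint
```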
Talking a little bit about the CSI constructs, because this is important for understanding how things work when we get to the demo: there are parts that are handled by the storage provider, InfiniBox in our case or maybe other storage systems, and there are things that are defined within the Kubernetes cluster itself. The usual way users request storage in Kubernetes is that a developer defines a persistent volume claim specifying some details about the type of access to the persistent volume, the size, and the storage class to be used. The storage class defines which driver should be called to provision the persistent volume; Kubernetes calls that driver, which talks to the storage array and allocates a chunk of storage. A similar set of constructs was added later in Kubernetes for snapshot management. Very much like PVC, storage class, and PV, there are the concepts of volume snapshot, volume snapshot class, and volume snapshot content, which are used to provision a snapshot of a persistent volume within the storage provider, and we implement those constructs for the InfiniBox storage array.

If we zoom out a little from a single pod and talk more about multi-cloud deployment: I mentioned that some of our customers are interested in multi-cloud deployments and the ability to share data and workloads between on-premises and public clouds. In addition to the on-prem InfiniBox deployments, Infinidat also offers a fully managed service that we call Neutrix Cloud. This is a service that we manage, deployed in data centers adjacent to major public cloud regions. Customers can consume storage from Neutrix Cloud and pay for consumption without dealing with the actual physical infrastructure. Let's assume we have a customer running a Kubernetes cluster on-premises using InfiniBox persistent volumes on the backend. We can replicate those persistent volumes into Neutrix Cloud, and we can expose them to applications running in Amazon, Azure, Google, and other public clouds. So if a customer is running an EKS cluster, or running an OpenShift cluster on EC2, they may consume persistent volumes out of Neutrix Cloud, whether a replica from the on-premises environment or a new persistent volume provisioned directly from Neutrix. They can also access persistent volumes from Azure, Google Cloud, IBM Cloud, or other clouds that we support with the Neutrix environment. We also offer the option of multi-cloud access to the same persistent volume: if it's read-write-many storage like NFS, it can be accessed by pods running in Amazon, Azure, and Google at the same time. That enables some interesting solutions for customers, and the data can then be replicated back to on-premises and consumed by applications there.

I'll pause for a second before I go to the demo. Are there any questions so far?

There is one question: someone was hoping you would talk about syncing data between multiple locations. Is that part of the demo, or something you can talk to now?

Okay, I'm not going to cover this in the demo, so I'll cover it now. In general, Kubernetes itself does not handle the replication of data, so this is done outside of Kubernetes, and we offer several ways to replicate data using InfiniBox replication capabilities. For example, a persistent volume can be automatically replicated to another InfiniBox storage array, or to Neutrix Cloud as mentioned on the previous slides, and then the replica can be exposed as a persistent volume in a different Kubernetes cluster. As I mentioned at the beginning, one of our features is the ability to import an existing, pre-created persistent volume into the CSI driver and manage it from that point in time, and this also applies to a replica target. You can set up replication from one storage array to another; the replica becomes a kind of pre-created persistent volume, which can then be imported into a second CSI driver in a different Kubernetes cluster and managed there from that point in time.
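To sketch what such an import might look like in standard CSI terms, this is a statically defined PV whose volumeHandle points at the existing InfiniBox volume, plus a PVC bound to it; the driver name and volumeHandle format are placeholders, so consult the driver documentation for the actual convention:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: imported-replica-pv
spec:
  capacity:
    storage: 1Ti
  accessModes:
    - ReadWriteMany
  storageClassName: ibox-storageclass-demo   # placeholder
  csi:
    driver: infinibox-csi-driver              # placeholder driver name
    volumeHandle: "12345"                     # ID of the existing volume; format is driver-specific
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: imported-replica-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ibox-storageclass-demo
  volumeName: imported-replica-pv             # bind directly to the imported PV
  resources:
    requests:
      storage: 1Ti
```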
Okay, so I will switch to a demo, and I'll try to use a Jupyter notebook to run it. I hope it will work; I've never tried it before, but I think this might be a really good example of using notebooks.

I've been saying Kubernetes all the time, and it definitely applies to OpenShift, probably the most popular commercial distribution of Kubernetes that we see from our customers. We provide a certified operator, available on OperatorHub, for deployment of the CSI driver, so customers may deploy the InfiniBox CSI driver through OperatorHub and use it. The CSI driver supports any Kubernetes version starting from 1.14, or OpenShift 4.2. We also have an earlier solution that we released a couple of years ago, a dynamic provisioner for Kubernetes that works since Kubernetes 1.6. It is a pre-CSI implementation, so customers on older versions may use that.

Another note I wanted to make: CSI is an evolving standard, so new features are exposed with every Kubernetes release, and some of the features I'm going to cover here may not be available in older versions of Kubernetes. All the functionality we'll cover in the demo is available starting from Kubernetes 1.17 or OpenShift 4.4, which was released just last week, I think, or very recently. Some features might be available as experimental and need to be enabled through a feature gate, so refer to the documentation: depending on the version you're running, you may or may not be able to use everything I'm going to show now. I'm using kubectl in this demo; you can simply replace kubectl with oc for your OpenShift deployments, they work the same way. And as I mentioned already, the deployment of the driver can be done through the OpenShift operator or through a Helm chart, whatever the customer's preference is.

So let's start with the demo. We'll start by checking the cluster that we have. I'll run my kubectl get nodes command, and we see that my Kubernetes cluster here has three nodes running version 1.18 right now. Let's check that we have our driver deployed, and we do. I'm running this kubectl get pods command; I recommend deploying the driver in a dedicated namespace, and in this example I'm using a namespace called ibox for the InfiniBox CSI driver deployment. As I said earlier in the presentation, we have a single instance of the controller per cluster and one instance of the node component per worker node, and that's exactly what you see here: with this three-node cluster, we have one instance of the controller and three instances of the node component, each running on a separate worker node.

The next step for us is to create a storage class for InfiniBox. I'll use NFS transport as the example here; a sketch of such a manifest follows below.
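Along those lines, here is a minimal sketch of such an NFS storage class. The provisioner name and the InfiniBox-specific parameter keys are illustrative assumptions modeled on this walkthrough, not the driver's authoritative spelling (check the driver documentation); the csi.storage.k8s.io secret keys, however, are standard Kubernetes:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ibox-storageclass-demo
provisioner: infinibox-csi-driver      # placeholder for the driver's provisioner name
reclaimPolicy: Delete                  # delete the InfiniBox volume when the PV is released
allowVolumeExpansion: true             # enable online resize of PVCs
parameters:
  # Illustrative InfiniBox-specific keys -- check the driver docs for exact names.
  pool_name: k8s_pool                  # InfiniBox pool backing this class
  network_space: nfs_space             # set of data-path IPs used for NFS mounts
  storage_protocol: nfs                # could also be fc or iscsi
  provision_type: THIN                 # thin provisioning on the array
  nfs_mount_options: "hard,rsize=1048576,wsize=1048576"
  nfs_export_permissions: "[{'access':'RW','client':'*','no_root_squash':true}]"
  # Standard CSI secret references pointing at the credentials secret shown earlier.
  csi.storage.k8s.io/provisioner-secret-name: infinibox-creds
  csi.storage.k8s.io/provisioner-secret-namespace: ibox
```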
As I mentioned before, we can also do iSCSI and Fibre Channel, and NFS TreeQ for customers who want hundreds of thousands of persistent volumes. This is the standard definition of the NFS storage class for InfiniBox. The name of the storage class will be ibox-storageclass-demo. It refers to the provisioner, which is basically our CSI driver. We want the reclaim policy for the persistent volumes to be Delete. We specify that we support the volume expansion feature, so customers can use our driver to resize a persistent volume after it has been created. And then we provide some parameters relevant for InfiniBox storage. InfiniBox can define multiple pools that can be used for different applications, or simply to separate allocations into different chunks; every storage class you define in Kubernetes points at a pool, and here we specify the pool name on the InfiniBox. For NFS provisioning, we define a network space. This is another construct of InfiniBox storage: basically a set of endpoints that will be used to access data on the InfiniBox. We allocate several IPs, and in this example, for NFS access, those are essentially your NFS server IPs; the CSI driver will randomly choose one of those IPs every time a persistent volume is mounted. We provide some other parameters: we want thin provisioning for the storage; we use NFS as the access protocol; we may specify mount options that the worker nodes will use when mounting the persistent volume; and we can specify export permissions for the persistent volumes as well. Since we are talking about NFS here, I can for example allow access to all clients, or only clients within a specific subnet, or whatever other export rules you would expect for NFS. And we refer to the secret name that is used to provision storage within this pool on this InfiniBox.

Once we define this storage class and I run the kubectl create command, my Kubernetes cluster deploys the storage class, and I can see that the storage class is created and available.

The next step is to create a persistent volume claim. I define a persistent volume claim, call it ibox-pvc-demo, and I'll use a different namespace for these PVCs, demo in this case. The persistent volume claim has the ReadWriteMany access mode, so it can be shared between multiple pods if needed, and I'm asking for one gigabyte of capacity allocated through this storage class. So I run create for the PVC, and if I run kubectl get pvc for this PVC, I'll see that the PVC exists and is already bound to a persistent volume. So this persistent volume is... my demo gods are with me, I guess; we'll check it in a second. So the persistent volume... oh, I basically have to change the name, that's why it's showing me the wrong result. So the persistent volume has been created. This is the name of the actual file system on the InfiniBox that was created following the PVC create request. You can see that it is one gigabyte, ReadWriteMany, and bound to the claim.

Now that we have the persistent volume, I can go ahead and create my snapshot class, because I want to start taking snapshots. The snapshot class defines, again, which CSI driver should be used to manage snapshots, and I refer to our InfiniBox CSI driver. I create the snapshot class and check that it exists: we just created this ibox-snapshotclass-demo.
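For reference, hedged sketches of the two manifests from these steps; the API versions match the Kubernetes 1.17/1.18 era shown in this demo, where snapshots were still beta (newer clusters use snapshot.storage.k8s.io/v1), and the driver name remains a placeholder:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ibox-pvc-demo
  namespace: demo
spec:
  accessModes:
    - ReadWriteMany                    # shareable between multiple pods
  storageClassName: ibox-storageclass-demo
  resources:
    requests:
      storage: 1Gi                     # bump this value later to trigger online expansion
---
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshotClass
metadata:
  name: ibox-snapshotclass-demo
driver: infinibox-csi-driver           # placeholder for the driver's provisioner name
deletionPolicy: Delete
```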
And now I can go and create a snapshot. The snapshot is another construct in Kubernetes: I define a YAML file with the VolumeSnapshot kind. I name the snapshot ibox-pvc-snapshot-demo; again, this is a namespaced construct. I will use the ibox-snapshotclass-demo snapshot class to do this, and the source for the snapshot is the PVC we created in the previous step. If I run this command and create my snapshot, I can check its status, and I see that my snapshot has existed for five seconds. Now I want to check the volume snapshot content name for the snapshot; this is the internal name of the snapshot content that has been created. I can check it also from the volume snapshot content side and see that this volume snapshot content has been available for 27 seconds. Behind the scenes, the driver calls the InfiniBox API and creates a read-only snapshot of the source volume, stored on the InfiniBox. It consumes zero space, but it is instantly available for future restores.

Now let's assume we want to restore a PVC from the snapshot. I can define a new persistent volume claim; I'll call it ibox-snapshot-pvc-restore-demo-2. It uses the same InfiniBox storage class, and the data source for this PVC is the volume snapshot we created in the previous step. If I now create this new persistent volume claim as a restore, I can see that the PVC has been created. What happens behind the scenes? We take the snapshot that was created before and make a clone of it, a writable copy accessible to applications. So basically, we instantly created a copy of the original snapshot and made it writable and available to the customer. Again, this writable copy consumes zero space until the customer starts making changes, which makes the overall capacity allocation on the InfiniBox extremely efficient and easy to use.

We also allow the creation of instant clones without going through the snapshot stage: customers may define a new PVC that clones directly from the source PVC. For this PVC, I specify the data source to be the existing ibox-pvc-demo, as opposed to the snapshot in the previous step. I run this kubectl create command and see that my clone has been created and is also available for applications.

So let's see how it works with an application. I have my application pod definition here that launches a BusyBox image and mounts our ibox-pvc-demo PVC as /tmp/data, and I can now schedule this pod. If you remember the diagram I showed of how the CSI driver works: now the CSI node component on the relevant worker node has been called, it did its magic and exposed the PV to the worker node, and when the pod starts on that worker node, we can see that it's running. I can connect to this pod using the kubectl exec command and check that my /tmp/data is really pointing at the NFS mount on the InfiniBox, as I would expect. And we can see that it was done successfully: the CSI driver took one of the IPs from the network space on the InfiniBox, used the export path of the file system, and made it available under /tmp/data for the pod to consume. So this is kind of the conclusion of the demo.
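And for reference, hedged sketches of the manifests behind these last steps, under the same assumptions as before (v1beta1 snapshot API, placeholder names):

```yaml
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshot
metadata:
  name: ibox-pvc-snapshot-demo
  namespace: demo
spec:
  volumeSnapshotClassName: ibox-snapshotclass-demo
  source:
    persistentVolumeClaimName: ibox-pvc-demo   # snapshot the demo PVC
---
# Restore: a new PVC whose dataSource is the snapshot above.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ibox-snapshot-pvc-restore-demo-2
  namespace: demo
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ibox-storageclass-demo
  dataSource:
    apiGroup: snapshot.storage.k8s.io
    kind: VolumeSnapshot
    name: ibox-pvc-snapshot-demo
  resources:
    requests:
      storage: 1Gi
---
# Clone: a new PVC whose dataSource is the source PVC itself.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ibox-pvc-clone-demo
  namespace: demo
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ibox-storageclass-demo
  dataSource:
    kind: PersistentVolumeClaim
    name: ibox-pvc-demo
  resources:
    requests:
      storage: 1Gi
---
# Application pod mounting the original PVC at /tmp/data.
apiVersion: v1
kind: Pod
metadata:
  name: ibox-demo-pod
  namespace: demo
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]      # keep the pod alive for kubectl exec
      volumeMounts:
        - name: data
          mountPath: /tmp/data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: ibox-pvc-demo
```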
One other thing I wanted to mention is how this can be done in a more automatic way. One example is provisioning a MySQL database as a pod using the Helm chart for MySQL. If a customer wants to provision a MySQL database using a persistent volume from the InfiniBox, I can specify a reference to the storage class we created, and with that reference Helm will provision a pod with the MySQL database and create a persistent volume on the InfiniBox through the storage class we defined. Any questions about the demo before I go to the summary slide?

I have a quick one, and I apologize if you ran through this quickly. I think you mentioned that there is an operator for this CSI driver; is it in operatorhub.io, or where could people find the driver itself?

Yes, you can install the operator from OperatorHub: if you go into an OpenShift deployment and open the Operators, OperatorHub view, you can search for InfiniBox and you will find the InfiniBox operator. You can use it to deploy the driver, and then, assuming you have InfiniBox storage deployed in your environment, you can do all the things I showed here.

Okay, so it's in the OperatorHub catalog embedded inside OpenShift. But I was checking, and I think what I misheard was that I thought it might also be in operatorhub.io. So that's coming soon, hopefully? Yes, there you go, perfect.

Exactly, yep.

Okay, are there any other questions about the demo? So far nobody has any, so that means you've done a good job with the demo and the documentation. Thanks.

Awesome, awesome. So just to summarize: we are always thinking about customers' needs, especially customers who operate at very large scale. We also come from a cloudy perspective: even though we operate in many cases on-premises, we sell a lot of systems to service providers, in addition to our other enterprise customers in the financial, insurance, retail, and other industries. We provide solutions such as Neutrix Cloud that allow customers to use their storage not only on-premises but also from the public clouds. And we see growing interest from customers in extending the enterprise storage integrations they are used to into containerized environments; this is becoming more and more important, and we expect it to be even more critical for them in the future. What we are doing now is helping them address the storage aspects of Kubernetes and OpenShift adoption with the CSI driver. What you see at the bottom of the slide shows the scalability aspect of our solution: I mentioned that with NFS TreeQ we can go to hundreds of thousands of persistent volumes, and this is a screenshot from one of our deployments with over a hundred thousand persistent volumes and a single InfiniBox storage array behind the scenes.

I'm available for future questions. If there are questions now, I'll be glad to take them; if questions come later, I'm available at this email. I'm glad to talk about other Infinidat solutions, or about the CSI driver and OpenShift integration in particular.
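As a footnote to the MySQL example above, a hedged sketch of what that Helm-driven provisioning might look like; the chart name and value keys follow the classic stable/mysql chart conventions and are assumptions here, so check the chart you actually use:

```yaml
# values.yaml -- passed via: helm install my-db stable/mysql -f values.yaml
# (chart name and value keys are illustrative; see your chart's documentation)
persistence:
  enabled: true
  storageClass: ibox-storageclass-demo   # the storage class created in the demo
  accessMode: ReadWriteOnce
  size: 8Gi
```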