My name is Hiral Patel. I'm one of the founding engineers at DMRT. A little bit about DMRT: as the title of my presentation says, we provide purpose-built bare-metal hyperconverged infrastructure for deploying enterprise Red Hat OpenShift, or you can also deploy open-source Kubernetes.

As we all know, Red Hat OpenShift can be deployed onto any platform. It is agnostic of the platform it is deployed on: it can be deployed on virtual machines, it can be deployed into a public cloud, or, in this case, it can be deployed onto bare-metal hyperconverged infrastructure. By running OCP on a hyperconverged infrastructure, you get the best of both worlds. You can take advantage of all the OCP features, like the internal Docker registry, automated CI/CD pipelines, security policies, and so forth, and by running it on top of bare-metal hyperconverged infrastructure, you get the benefits of hardware offloads for both storage and network resources, along with guaranteed QoS for those resources. DMRT also provides enterprise-level storage services for disaster recovery and data protection.

Today we support deploying open-source Kubernetes or enterprise Red Hat OpenShift Container Platform onto our bare-metal HCI infrastructure. We expose the storage and network stacks through Kubernetes CSI and CNI plugins, which makes it generic enough for the end applications deployed on these platforms to provision these resources using Kubernetes constructs like storage classes, persistent volume claims, persistent volumes, and so forth. All the enterprise-level storage services for disaster recovery and data protection, like storage replication, mirroring, asynchronous replication, and snapshot backup, are exposed via CRDs, Kubernetes custom resource definitions.
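To make the storage-class flow concrete, here is a minimal sketch of how an application could provision storage through a CSI plugin using the standard Kubernetes constructs mentioned above. The provisioner name and the `mirroring` parameter are illustrative placeholders, not DMRT's actual identifiers.

```yaml
# Hypothetical storage class backed by a vendor CSI driver.
# The provisioner name and parameters are placeholders only.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-nvme
provisioner: csi.example.com   # placeholder for the vendor CSI driver name
parameters:
  mirroring: "enabled"         # illustrative parameter, not a real DMRT option
---
# The application requests storage with an ordinary PVC;
# the CSI plugin provisions the persistent volume behind it.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: fast-nvme
  resources:
    requests:
      storage: 100Gi
```

Both objects would be created the same way as any other Kubernetes resource, for example with `kubectl apply -f storage.yaml`.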
So it makes it easier for the end-user application: you only need to know one CLI and one infrastructure to provision these services using kubectl, whether on Kubernetes or on the OpenShift Container Platform.

Now let's take a look at why and how DMRT provides the hardware offload. First, why? As we all know, there are challenges when you try to deploy any application onto a hyperconverged platform: you want to achieve maximum CPU utilization for your end-user application, and you don't want any storage or network I/O to take CPU away from your applications. To achieve that, DMRT provides hardware offload for both storage and network, which means we do not take any CPU away from the end-user applications. End-user applications get about 95% CPU utilization. They also get guaranteed QoS for both storage and network resources, and because storage and network I/Os are offloaded at the PCI virtual function boundary, they are fully isolated: there is no noisy-neighbor problem, and they are protected from denial-of-service attacks. We also support offloading IPsec and SSL encryption and decryption into the hardware.

By running OCP on top of the DMRT D20 hyperconverged infrastructure, you can achieve 1 million IOPS per node. You get 95% CPU utilization for your end-user applications, so you can pack as many applications as you want into a smaller footprint. You can also take advantage of low-latency, fast NVMe flash storage. We also support NVMe over 10-gigabit Ethernet, which allows you to have data mobility across nodes in the cluster. It is also aware of high-availability clusters if they are deployed within 2 milliseconds of latency in the same cluster.
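As an illustration of the CRD-driven workflow described above, the upstream Kubernetes VolumeSnapshot API (`snapshot.storage.k8s.io`) shows the general shape of requesting a snapshot declaratively; DMRT's own CRDs for replication, mirroring, and backup would follow the same pattern, but with their own resource types and field names. The snapshot class and PVC names here are hypothetical.

```yaml
# Standard Kubernetes VolumeSnapshot resource; vendor CRDs for
# replication, mirroring, and backup follow the same declarative pattern.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: app-data-snap
spec:
  volumeSnapshotClassName: csi-snapclass   # placeholder snapshot class name
  source:
    persistentVolumeClaimName: app-data    # hypothetical PVC to snapshot
```

The point of the design is that this is applied with the same `kubectl apply -f` workflow as any other resource, so operators learn one tool for both application and storage lifecycle.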
Our storage stack is aware of high availability, and it provisions volumes in such a way that they are highly available across zones for a given application. That makes sure that if any zone or any node goes down, the persistent volume remains highly available and there is minimal downtime for the production workload running on this platform.

These are some of our customer and deployment numbers from our HCI infrastructure. I'll just take the Splunk example here. As we all know, Splunk aims to provide the highest rate from a data ingestion standpoint; they claim they can ingest 2.5 terabytes per day. We had a customer deploying an application onto their platform that was able to ingest 1 terabyte over the course of a day. When we moved their application to our HCI infrastructure, we improved their ingestion rate to 1 terabyte per hour, which means we made it 24 times faster than what they were actually seeing. There are other numbers here that you can take a look at, and we have many other case studies available on our website at dmant.com. If you have any more questions, please come visit us at our booth at KubeCon, S64. We are also present at one of the sponsor desks upstairs on the main deck. Thank you.