Thank you, Diane. My name is Erin Boyd. I'm a Senior Principal Engineer at Red Hat, working in the Office of the CTO. Currently I'm focused on hybrid and multi-cluster storage, and on the multi-cluster story we have coming out of Red Hat. I want to thank Diamante for setting this up by talking about hyper-converged infrastructure; that's one of our main focuses, and it's a powerful message for why we entered the realm of hybrid and multi-cluster storage.

So why would you want a hybrid setup, or even multiple clusters? Especially in AI and ML, you might have performance considerations where you want to run some of your workload on a very specific cloud. You need fault tolerance, so you want to back up between different zones, and maybe you need specialized hardware to run some of your workloads. The problem is that if you choose one vendor for all of those services, you're locked in. Then there's regulation, and the desire to collaborate. So you want to be able to run your workload wherever the services you need happen to be: they might be in GKE, they might be in AWS, and due to regulation you might need to run them on-prem. Those are the considerations that lead you to hybrid cloud, or to multiple clusters within your Kubernetes deployment, so you can run each workload where it runs best.

There are of course lots of different applications in AI and ML, so how do we share data between them? The typical four things we share are an object store, a database, a file system, or a queue. Almost every slide I've seen today has been evidence of that.
I've seen Postgres, I've seen Kafka, I've seen Splunk; a lot of talks have already shown that this is what we're using and how we use these systems between clusters. Today I want to focus on object storage and what we're doing in the community that's a little bit different.

Object storage is convenient in an AI/ML context in that you can have a bucket in something like S3 and have many different applications either feeding into or reading from that bucket. The API is easy to use, lots of different cloud vendors support it, and you can easily write your application to get and put objects from that bucket and to apply policies. However, if you're familiar with Kubernetes, there isn't really a standard for how we handle object storage today.

So, enter NooBaa. NooBaa is a project that was started a while ago, became open source in May of this year, and has an operator; you can find the NooBaa operator on OperatorHub.io. It gives users who want this hybrid cloud, multi-cluster capability a way to store data in an object bucket and set policies on it. The top layer gives you buckets, accounts, and permissions, the things you would expect from a typical S3-style deployment. The middle layer is where it differentiates itself from a normal bucket: you can set mirroring and tiering policies and spread the data however you want, based on data locality or performance needs. The bottom layer is the actual storage. So you're not tied to, say, the S3 service on AWS; NooBaa abstracts that out and acts as an interface between any S3 providers. On the back end, NooBaa also de-duplicates the data and can move over just sections of the data, so you essentially get replication, snapshotting, and backup as well.
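To give a feel for the de-duplication idea mentioned above, here is a minimal, hypothetical sketch of content-addressed chunking in Python. This is not NooBaa's actual implementation, just an illustration of the general technique: objects are split into chunks, each chunk is identified by its hash, and identical chunks are stored only once.

```python
import hashlib

CHUNK_SIZE = 4  # tiny for illustration; real systems chunk in KB/MB ranges

class DedupStore:
    """Toy content-addressed chunk store: identical chunks are stored once."""

    def __init__(self):
        self.chunks = {}   # sha256 hex digest -> chunk bytes
        self.objects = {}  # object key -> ordered list of chunk digests

    def put(self, key, data):
        digests = []
        for i in range(0, len(data), CHUNK_SIZE):
            chunk = data[i:i + CHUNK_SIZE]
            digest = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(digest, chunk)  # store only if new
            digests.append(digest)
        self.objects[key] = digests

    def get(self, key):
        # Reassemble the object from its chunk digests
        return b"".join(self.chunks[d] for d in self.objects[key])

store = DedupStore()
store.put("a.txt", b"abcdabcd")  # two identical 4-byte chunks
store.put("b.txt", b"abcdzzzz")  # first chunk shared with a.txt
print(store.get("a.txt"))        # b'abcdabcd'
print(len(store.chunks))         # 2 (four chunks written, two stored)
```

Because only the digests of changed chunks differ between versions, the same bookkeeping is what lets a system move just the modified sections when replicating or backing up.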
All of this is still tied into Kubernetes, because as I mentioned before, we don't really have a consistent API in Kubernetes for object storage. How many of you use persistent volumes and persistent volume claims today? Yes, a lot of people are very familiar with those. So what my team at Red Hat did is come up with CRDs for an object bucket and an object bucket claim. These concepts work just like your persistent volume and persistent volume claim, but are specific to the needs of object storage. This provides a consistent control path: I can create an object bucket claim just like I would a persistent volume claim, and dynamically create storage on my back end. That storage can then be served up to things like NooBaa or Azure, and you get a portability layer on your storage: your applications create an object bucket claim just like they would a PVC, and you can move them from cloud to cloud with the same consistency. How am I doing on time, Diane?

So what if I'm not using object data? What if I want to keep my application the way it is, using persistent data, like all of you who raised your hands? Rook is another great open source project that automates the installation of storage systems like Ceph or MinIO, and soon Longhorn. It provides an operator that takes the complexity out of the storage and gives you a consistent backbone across many different clusters. I believe in the Discover talk we heard earlier today they were talking about shared storage using things like NFS or EFS. Rook now also provides a plug-in for CephFS, so you have shared storage there too. Even if you're not using object data, you can still use the power of an operator to deploy your storage system consistently across many different clouds.

So what are we doing in the community?
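To make the PVC analogy concrete, this is a sketch of what an ObjectBucketClaim manifest looks like using the `objectbucket.io` v1alpha1 CRDs; the claim name and storage class name below are placeholders for illustration.

```yaml
# Sketch of an ObjectBucketClaim; "my-bucket-claim" and
# "my-object-storage-class" are hypothetical names.
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: my-bucket-claim
spec:
  # Ask the provisioner to generate a unique bucket name with this prefix
  generateBucketName: my-bucket
  # References an object storage class backed by a provisioner
  # such as NooBaa or Rook-Ceph's RGW
  storageClassName: my-object-storage-class
```

Just as a PVC is fulfilled by a dynamic provisioner, the claim is fulfilled by the object bucket provisioner, which hands the application the bucket endpoint and credentials it needs to get and put objects.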
I'm also part of the Kubernetes Storage SIG and the CNCF Storage SIG, and the community is working to make the way we use data more agile. Features like snapshots, cloning, and volume transfer are coming soon, and they will help enable hybrid cloud, because as you know, Kubernetes has always claimed to be completely stateless: we're agile, we can move anywhere, and that all falls apart when we start talking about persistent storage. So look forward to many of these features improving the way we can manage data within our systems.

And lastly, this ties back to what Diamante was talking about. When you have hybrid cloud and the challenge of managing different clusters, you need consistent administration across all of it for your applications. You have to have networking; without the network, you don't have distributed storage. You have to consider state, the portability of your application, and its placement. Having that consistent control plane for administration is important, and that's why platform matters. I think you've seen today how many people have leveraged OpenShift as the platform that ties all of this together: not only an end-to-end pipeline for data analytics, but a consistent user experience across all of your different clusters. It simplifies administration and lets you enforce things like quotas, which I believe the Discover talk touched on as well, and apply those policies across all of your clusters. So with that, hopefully I came in under five minutes. Thank you. You did an excellent job of that. Thank you very much. And so we're going to get Kyle to come up.