Hi everyone, I'm very happy to be here to share our story: how we bring data locality to data-intensive workloads in the Kubernetes environment. My name is Shouwei Chen, I'm a co-maintainer of the Alluxio project, and I also help drive the roadmap for the open source project. First of all, I want to address the challenges we actually see in the Kubernetes environment. Nowadays people deploy almost everything on Kubernetes, no matter whether it's a stateless application or even a stateful application. Because so many different things are deployed on Kubernetes, there are some common pitfalls we have observed. The first is that more and more data is being generated, and that data is stored in silos. For example, you deploy Kubernetes in the cloud, and at the same time you have a cloud-native stack in your on-premise data center, which forces you to copy and manage data all over the place. It's very costly: you have to build data pipelines, and that adds delays before your users can consume the data. The second problem is that different teams need access to this data, and that causes a lot of trouble. There is a constant need to support new applications, each with a different interface to access the data, because so many projects cooperate in the Kubernetes ecosystem. Integrating each of them takes time and a lot of effort to optimize before you can go to production. And on top of that, our tech industry is progressing very fast, creating new compute and storage technologies every three to eight years. With this kind of innovation in the cloud and Kubernetes environment, and with the adoption of hybrid or multi-cloud, the data stack has to adapt to multiple environments.
You can have different compute engines, like AI and big data analytics, and different storage, object stores and cloud storage; all of these are new, so it's very complex to manage all of them. With those demands in mind, we introduce a new layer between the compute engines and the storage systems in the Kubernetes environment. This new layer provides complete virtualization across all data sources to serve data to applications, which then do not need to care about the location of the data. You just consume the data instead of worrying about where you should put the data, when you should move the data, and where you should copy the data. The solution we have built is applicable across environments, whether in the cloud or on-premise; basically anywhere you deploy your cluster with Kubernetes, we can help you build your data locality there. To give a very simple picture here, this is part of the architecture of Alluxio in the Kubernetes environment. You can mount Alluxio as a volume through a FUSE interface, and at the same time Alluxio provides cluster-level data locality, with Alluxio workers running in your Kubernetes environment together with your compute. By bringing metadata locality and data locality into your Kubernetes environment, you can get much better performance from this kind of locality. At the same time, we provide the tooling for you to install Alluxio in your Kubernetes environment. The first thing we provide is a Helm chart, and we also have an operator if you want to do maintenance and operations for Alluxio in your Kubernetes environment. Because of the time limitation, we cannot go through very detailed use cases here, but feel free to scan this QR code if you're really interested in the Alluxio system.
There you can learn about the big data analytics use cases together with the AI and machine learning use cases, and how you can do fast data analytics on your data lake and fast large-scale AI/ML training with Alluxio in the Kubernetes environment. And also feel free to join our Slack channel; we are very active there and happy to answer any questions.