Hello, and welcome to FluentCon. Next, we will introduce the Fluent Operator, and at the end of the session there will be a deep-dive workshop where you can get more details on how to use the Fluent Operator.

I am Benjamin. I'm a senior architect at KubeSphere, and I'm also a member of Fluent and OpenFunction. Zhu Han is also a member of Fluent and KubeSphere, as well as a member of KubeEdge. Zhu Han added the Fluentd support to the Fluent Operator, and he also created the walk-through guide for the workshop. So thanks, Zhu Han.

Here is the agenda. First, a brief introduction to the Fluent Operator. Next, we will demonstrate different use cases of this project, and then the roadmap. At the end, there will be a Fluent Operator workshop led by Patrick in person.

This project was previously known as the FluentBit Operator. It was created and open sourced by the KubeSphere community in January 2019, and we made about eight releases while it was in the KubeSphere community. In August 2021, after some discussion with the Fluent community, we donated the project to the Fluent organization, and since then we have released about five versions. In the last couple of months, we have made some significant changes: we added Fluentd support, we changed the Fluent Bit CRDs from namespaced to cluster-scoped, and finally we renamed the project to Fluent Operator and released it as version 1.0.

This is how the original FluentBit Operator looks. We defined several CRDs and used them to control the deployment of a Fluent Bit DaemonSet, and also to generate the Fluent Bit configuration from the CRDs into a Secret and mount that Secret into each Fluent Bit pod. So this is the original project.

So why did we create the FluentBit Operator in the first place? Actually, before we created this project, there was the logging operator.
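To make the CRD-to-Secret flow concrete, here is a minimal, hypothetical sketch of the kind of resources involved. The API group, version, and field names here are assumptions for illustration; the project's own samples are the authoritative schema.

```yaml
# Hypothetical sketch: a cluster-level input that tails container logs.
# The operator renders CRDs like these into a Secret and mounts that
# Secret into every Fluent Bit pod of the DaemonSet.
apiVersion: fluentbit.fluent.io/v1alpha2
kind: ClusterInput
metadata:
  name: tail
  labels:
    fluentbit.fluent.io/enabled: "true"   # assumed selector label
spec:
  tail:
    tag: kube.*
    path: /var/log/containers/*.log
---
# A matching output; here simply printing to stdout for demonstration.
apiVersion: fluentbit.fluent.io/v1alpha2
kind: ClusterOutput
metadata:
  name: stdout
  labels:
    fluentbit.fluent.io/enabled: "true"
spec:
  match: "*"
  stdout: {}
```

The point of this split is that inputs, filters, and outputs are each small, composable Kubernetes objects, and the operator assembles them into one Fluent Bit configuration.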
But the logging operator has one problem: the logs have to go from Fluent Bit to Fluentd before reaching their final destinations. That means Fluentd is a mandatory choice in the logging operator, and that can cause problems in some cases. For example, if your cluster has hundreds of nodes and all the logs have to go from Fluent Bit to Fluentd, then Fluentd might become the bottleneck and a single point of failure. So in our opinion, Fluentd should be optional, not mandatory. That is one reason.

Another reason is that Fluent Bit doesn't support dynamically reloading its configuration. Whenever the Fluent Bit config changes, you have to restart or recreate the entire Fluent Bit pod to pick up the new configuration. That's not very convenient, because some data, like tail positions, will be lost after you restart the pod. For this reason, we added the Fluent Bit watcher to the FluentBit Operator. The watcher watches the Fluent Bit configuration, and whenever the config changes, it restarts the Fluent Bit process inside the same container, so the pod itself doesn't need to be recreated.

After we created the Fluent Bit watcher, we thought we should create a new project, and so we created the FluentBit Operator project. Now we have added Fluentd support as well. As you can see here, we changed the Fluent Bit CRDs from namespaced to cluster scope, and we added Fluentd CRDs too. Fluent Bit is deployed as a DaemonSet, and Fluentd is deployed as a StatefulSet that only receives logs from the network. It is stateful because Fluentd can use buffers to cache logs, and a user can define a buffer section in the CRD to enable this feature.

So this is the Fluent Operator. With the Fluent Operator, you can build the log processing pipeline as you wish. It's very flexible. For example, you can deploy Fluent Bit only.
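The buffer section just mentioned might look roughly like this in a Fluentd CRD. This is a hedged sketch: the API version and the exact buffer field names are assumptions, so check the operator's walk-through guide for the real schema.

```yaml
# Hypothetical sketch of enabling a buffer on the Fluentd StatefulSet.
apiVersion: fluentd.fluent.io/v1alpha1
kind: Fluentd
metadata:
  name: fluentd
spec:
  replicas: 1
  # A buffer section like this is what makes the StatefulSet stateful:
  # buffered log chunks are cached on disk, so in-flight logs can
  # survive a restart instead of being lost.
  buffer:
    enable: true        # assumed field name
```

The design choice here is that buffering is opt-in: without a buffer section, Fluentd behaves as a plain network-fed processor.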
You can use it as an agent to collect logs, do some simple filtering and parsing, and send them to the final destination. Or you can deploy Fluentd only: use Fluentd to receive logs from the network via HTTP or the Forward protocol, do some advanced processing, and forward the logs to the final destination. And of course, you can use Fluent Bit and Fluentd together, just like the logging operator: Fluent Bit acts as an agent and forwards logs to Fluentd, and Fluentd does some advanced processing and forwards the logs to the final destination. So you can build the log processing pipeline as you wish with the Fluent Operator. I think this is the most unique and differentiating feature of the Fluent Operator.

We have already added Fluentd support in the 1.0 release. Next, we will add HPA support to the Fluentd StatefulSet. We are also planning to add Fluentd and Fluent Bit metrics support in Q2 this year.

Finally, there will be a deep-dive workshop. Let's take a look at it. First, in the workshop, you have to clone this repository to your laptop. In the prerequisites, you will be guided to create a kind cluster and Elasticsearch and Kafka clusters, to install the Fluent Operator, and to deploy Fluent Bit and Fluentd by applying the relevant CRDs. There are many use cases for each mode I mentioned previously, and Patrick will lead you through this workshop. So I'm handing over control to Patrick. OK.
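For the combined mode, the hand-off from Fluent Bit to Fluentd could be expressed with a forward-style output, roughly like this. Again a hypothetical sketch: the Service name, namespace, and field names are assumptions for illustration.

```yaml
# Hypothetical: Fluent Bit forwards everything to the Fluentd Service,
# which does the heavier processing before the final destination.
apiVersion: fluentbit.fluent.io/v1alpha2
kind: ClusterOutput
metadata:
  name: to-fluentd
  labels:
    fluentbit.fluent.io/enabled: "true"   # assumed selector label
spec:
  match: "*"
  forward:
    host: fluentd.fluent.svc   # assumed Service name and namespace
    port: 24224                # default Forward-protocol port
```

Swapping this one output is all it takes to move between the Fluent Bit-only mode and the combined Fluent Bit plus Fluentd mode.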