Hello everybody, I'm Benjamin. I'm happy to be invited to FluentCon North America 2021. Today I'm going to share a session about the Fluent Bit Operator and our plan to evolve it into a more powerful Fluent Operator. I'm a senior architect at KubeSphere, and I'm also a member of Fluent and OpenFunction. Here is the agenda. We are going to talk about why we created the Fluent Bit Operator in the first place, give a brief introduction, walk through some use cases and analysis, and, most importantly, explain our plan to evolve the current Fluent Bit Operator into a Fluent Operator. We'll cover the roadmap, a few demos, and finally some community topics.

So let's begin. Why did we create the Fluent Bit Operator in the first place? A logging agent should be lightweight and have a low resource footprint, and Fluent Bit is a perfect fit for a logging agent, so we chose Fluent Bit. We then needed to control Fluent Bit and its configuration through the Kubernetes API, because we are developing on the Kubernetes platform. Someone might ask why we created another operator when there are already existing ones. Our opinion is that Fluent Bit itself is able to send logs directly to the final destinations, so Fluentd shouldn't be mandatory; it should be optional. Imagine you have hundreds of nodes, and all of them send their logs to Fluentd first, and Fluentd then forwards them to the final destination: Fluentd can become a single point of failure. It's more efficient and more straightforward if the Fluent Bit agents can send logs directly to the final sinks. That's the reason we created the Fluent Bit Operator in the first place.

Then we ran into an issue. Fluent Bit doesn't have a dynamic config reloading interface, which means whenever the configuration changes, the Fluent Bit pod has to be restarted for Fluent Bit to pick up the new configuration. That's not convenient in some cases, and there are already some issues and discussions in the links below.
So we had to solve that first. Here is what the Fluent Bit Operator looks like now. You can see that we have six CRDs. The first CRD, FluentBit, is used to define the Fluent Bit daemonset itself, and the rest of the CRDs are used to define the Fluent Bit configuration. As you can see, the Fluent Bit pods send logs directly to the final sinks, like Elasticsearch, Kafka, Loki, et cetera.

Let's take a look at the details of the CRDs. We have Input, Filter, and Output CRDs to define the corresponding Fluent Bit plugin configurations, and the FluentBitConfig CRD is used to select which plugins to use via label selectors, as shown on the right. As soon as the plugins are selected, the operator generates the Fluent Bit config into a Secret, and the Secret is mounted into each Fluent Bit pod. The FluentBit CRD is used to define the daemonset itself: the position database and which configuration to use. The operator watches all of these CRDs, and whenever any of them changes, it reconciles and generates a new Fluent Bit configuration.

One important issue we had to solve is the dynamic config reloading problem of Fluent Bit. The method we chose is to run a watcher inside the Fluent Bit pod: we created a fluent-bit-watcher component that watches the configuration files in the pod, and whenever a configuration file is created or changed, the watcher restarts the Fluent Bit process, which can then pick up the latest configuration. In this way, we solved the dynamic config reloading problem. The link below points to the code if you want to take a look at how it's implemented. And here is the Dockerfile: you can see that we compile the fluent-bit-watcher into a binary, use the official Fluent Bit image as the base image, and copy the latest configuration and the fluent-bit-watcher binary into that base image.
We then replace the original Fluent Bit entrypoint with the fluent-bit-watcher, so the Fluent Bit pod actually starts with the watcher, and the watcher starts Fluent Bit. That's how we solved the dynamic config reloading problem of Fluent Bit.

A user can customize the processing phases by choosing different plugins for each phase. For example, in the output phase, if you have Elasticsearch, Kafka, and forward plugins, you can enable just the Elasticsearch plugin by setting its enabled label to true and disabling the other plugins. This way, you keep the configuration for Kafka and forward without deleting it, and you can enable them again whenever you need. It's very flexible, and the same applies to the input and filter plugins. That's how the log processing pipeline can be customized.

Now let's look at some use cases. KubeSphere itself is a good use case for the Fluent Bit Operator: it uses Fluent Bit as the logging agent. We need to control the log settings from the web console, so the API server has to call the Kubernetes API to manipulate Fluent Bit and its configuration. That's one reason we created the Fluent Bit Operator: with CRDs, it can be controlled by the API server easily. We will have a demo later.

To collect Kubernetes logs, you set up the FluentBit CRD first. This CRD defines everything about the Fluent Bit daemonset, including the position database, and you can set the resource requests and limits, which configuration to use, tolerations, affinity, et cetera. Then you define which plugins to use in the FluentBitConfig CRD; this is done by configuring the input, filter, and output selectors, so you can select different plugins for your needs. To collect container logs, you define an Input CRD that specifies where to find the container logs, plus a memory buffer limit in case there are huge log volumes. And most importantly, you have to define a Filter CRD.
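To make the label-based selection concrete, here is a rough sketch of how the pieces could be wired together: an Input tailing container logs, an Output that can be toggled with a label, and a FluentBitConfig that picks up only the plugins whose labels match. The API group, field names, hostnames, and label key here are from my recollection of the operator and may differ between versions, so treat everything as illustrative:

```yaml
apiVersion: fluentbit.fluent.io/v1alpha2
kind: Input
metadata:
  name: tail
  labels:
    fluentbit.fluent.io/enabled: "true"
spec:
  tail:
    tag: kube.*
    path: /var/log/containers/*.log   # where container logs live on the node
    memBufLimit: 5MB                  # cap memory use under huge log volumes
    db: /fluent-bit/tail/pos.db       # position database for resuming
---
apiVersion: fluentbit.fluent.io/v1alpha2
kind: Output
metadata:
  name: es
  labels:
    fluentbit.fluent.io/enabled: "true"   # flip to "false" to disable without deleting
spec:
  match: kube.*
  es:
    host: elasticsearch-logging-data.logging.svc   # hypothetical service name
    port: 9200
---
apiVersion: fluentbit.fluent.io/v1alpha2
kind: FluentBitConfig
metadata:
  name: fluent-bit-config
spec:
  # Only Input/Filter/Output resources whose labels match these
  # selectors are rendered into the generated Secret.
  inputSelector:
    matchLabels:
      fluentbit.fluent.io/enabled: "true"
  filterSelector:
    matchLabels:
      fluentbit.fluent.io/enabled: "true"
  outputSelector:
    matchLabels:
      fluentbit.fluent.io/enabled: "true"
```

Disabling Kafka while keeping Elasticsearch would then be a matter of editing one label rather than deleting the Kafka Output resource.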
The Filter CRD adds Kubernetes metadata to the log messages. Another important thing is that this metadata contains some useless fields, so you can remove them, like the pod ID or the container hash, to save disk space. Then you define Output CRDs, such as Elasticsearch, Kafka, or Fluentd, to send logs to those sinks. This is what the final log message looks like: you can see in the metadata there is no pod ID or container hash; they are all filtered out. That's how we collect Kubernetes container logs.

We also have a need to collect not only container logs, but also the kubelet logs on each node. How can this be done through the Fluent Bit Operator? The first thing to note is that the kubelet is actually a systemd service, so we can use the Fluent Bit systemd input plugin to collect the kubelet logs. Most importantly, you have to define a filter, because we have a requirement to query both container logs and kubelet logs in one single logging console, which means the log messages have to be in the same format. So we use a Lua script to convert the kubelet logs to the same format as the container logs.

This is what the Lua script looks like. The most important part is here: it adds Kubernetes metadata, using the host name as the pod name and kubelet as the container name, and it puts the logs into the kube-system namespace. Finally, a timestamp is added. This way, we can query the kubelet logs together with container logs in one single console.

The same goes for the auditd logs. Users have a requirement to monitor what happens on a node, and auditd puts the audit information of a node into the audit logs, so you can specify the auditd log path.
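A hedged reconstruction of that conversion, written as a Lua script shipped in a ConfigMap; the function name, ConfigMap name, and exact systemd journal field names are my assumptions, and the real script in the operator repository may differ:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluent-bit-lua   # hypothetical name
data:
  systemd.lua: |
    -- Reshape a systemd journal record from the kubelet so it looks
    -- like a container log record: the hostname becomes the pod name,
    -- "kubelet" becomes the container name, and the record is filed
    -- under the kube-system namespace; a timestamp is added last.
    function add_time(tag, timestamp, record)
      local new_record = {}
      new_record["time"] = os.date("!%Y-%m-%dT%H:%M:%S.000000Z")
      new_record["log"] = record["MESSAGE"]
      new_record["kubernetes"] = {
        pod_name = record["_HOSTNAME"],
        container_name = "kubelet",
        namespace_name = "kube-system",
      }
      return 1, timestamp, new_record
    end
```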
Similarly, we define a filter using a Lua script to convert the auditd logs into the same format as the container logs, like this. The script is similar to the previous one: we put the node name into the pod name metadata, set the container name to auditd, and put the namespace to kube-system. That's how the auditd and kubelet logs are collected.

We also have a requirement to do log alerting, and there are several ways to do it. One looks like this: the logs are sent to Kafka, and you use some kind of Kafka consumers to analyze the logs from Kafka in real time, filter out the logs you are interested in, and send them to a notification channel. But this method is kind of heavy: you have to set up a Kafka cluster and a few consumers. A more lightweight way is to process the logs on the agent side. We can use a grep-style filter: we add the log pattern we are interested in, and whenever a log message matches, it is sent to a webhook, like this. In this way, we can do log alerting on the agent side, which is more efficient and lightweight.

So that's the use cases part. As we already know, Fluent Bit is lightweight and high performance, with zero dependencies and no single point of failure. But it also has some disadvantages: it has far fewer plugins than Fluentd, and its filters and parsers are less powerful than Fluentd's. So in some cases Fluentd is still needed, and we decided to add Fluentd to the existing Fluent Bit Operator and rename it to Fluent Operator.

Currently, the Fluent Bit Operator looks like what we have already discussed, and we are planning to add three more cluster-scoped CRDs. Why add cluster CRDs? Because Fluent Bit runs as a daemonset and collects cluster-wide logs, so it makes sense to add cluster-level CRDs to configure cluster-wide log collection.
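The agent-side alerting approach described above could be sketched roughly like this, with a grep filter keeping only matching records and an HTTP output posting them to a webhook; the pattern, endpoint, and resource names are all hypothetical, and the exact plugin fields may differ between operator versions:

```yaml
apiVersion: fluentbit.fluent.io/v1alpha2
kind: Filter
metadata:
  name: alert-pattern
  labels:
    fluentbit.fluent.io/enabled: "true"
spec:
  match: kube.*
  filters:
  - grep:
      regex: log (panic|OOMKilled)   # keep only records matching this pattern
---
apiVersion: fluentbit.fluent.io/v1alpha2
kind: Output
metadata:
  name: alert-webhook
  labels:
    fluentbit.fluent.io/enabled: "true"
spec:
  match: kube.*
  http:
    host: alert-webhook.monitoring.svc   # hypothetical webhook receiver
    port: 8080
    uri: /alerts
    format: json
```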
Most importantly, we plan to add Fluentd support. Fluentd will act as an optional destination of the Fluent Bit output. It's optional, not mandatory, and that's important: Fluent Bit can still send logs directly from itself to the final sinks, and Fluentd is just one of those sinks. Fluentd can do more log aggregation, with more powerful filtering and parsing, and then send the logs on to the final sinks.

The Fluentd part contains basically five CRDs. The Fluentd CRD itself defines what the Fluentd deployment looks like; the FluentdConfig and ClusterFluentdConfig CRDs contain information about the input and filter plugins; and the Output and ClusterOutput CRDs define where to send the logs. Once the Fluentd and Fluent Bit CRDs are all added, we will rename the project to Fluent Operator, which will be more powerful and flexible than the current operator.

Now for the roadmap. Almost three years ago, we created the Fluent Bit Operator, and we made about eight releases up until August this year. More and more contributors and maintainers have been participating in this project from all over the world, including the US, Australia, Europe, and of course China. After communicating with the Fluent community, we agreed to transfer this project to the Fluent organization; this happened in August this year. The 0.9 release was the first release after the transfer to the Fluent organization, and 0.12 is the latest release. We plan to add the Fluentd support and rename the project to Fluent Operator in Q4 this year. If you are interested in the development, I will share a link to the proposal; you are welcome to participate in the development of the Fluent Operator.

And here are a few demos. This is the container management platform. You can see the log collection configuration here: we already have the Elasticsearch and Kafka output plugin configs. Elasticsearch is actively receiving logs, and Kafka is disabled.
You can change the status here to active if you want to send logs to Kafka. If you edit the config in the demo, you will see that it is disabled here, and below are the Kafka configurations. Elasticsearch is enabled; of course, you can turn it off. And you can add more sinks, for example Fluentd.

Next, I will show a demo of how to query the kubelet logs. As we mentioned before, the kubelet logs are put under the kube-system namespace, and we use the pod field to store the node name. So if you search for "master", you can find some logs from the kubelet. But if you look at the last ten minutes of logs, you will find nothing, because currently the kubelet input is disabled. We have to enable it first, right here, by setting enabled to true. Below, you can see that it's taking effect: if you wait a few seconds, the kubelet logs will show up and start scrolling down. So this is a demo of how we enable the logs and the plugins. That's it for the demo.

If you are interested in participating in this project, you can find the GitHub and Slack information here, and here is the proposal for the Fluent Operator. Finally, I want to thank all the maintainers and contributors, especially the maintainers from DigitalOcean; they are very passionate and professional. So thanks, guys. That's all for my session. If there are any questions, I will be here to answer them.