So, can I welcome Mikhail Ostetsky to talk about Cilium? Okay, so I'm Mikhail Ostetsky. I'm working as a software engineer at SUSE, and I'm working mostly on Cilium. Today I will introduce the project and tell you about the updates since last year, because last year at FOSDEM there was also a talk about Cilium, so after a brief introduction to the project I will focus on what has changed in Cilium since then. So, I will start with the introduction, but to explain what Cilium is, I need to start with BPF, a mechanism in the kernel. BPF is the Berkeley Packet Filter, a virtual machine in the kernel which allows you to write programs that monitor syscalls or filter network packets. Such a program is written in C in user space, compiled by Clang to bytecode, and that bytecode is loaded into the kernel through the verifier and the JIT and executed in the virtual machine. The program deals with network packets or with syscalls inside the kernel. And Cilium is a program which takes advantage of BPF to implement networking for container runtimes. Cilium consists of a daemon, the agent, which runs on every node where we have container workloads. It also has an API and a CLI, so you can manage Cilium. It integrates with many orchestration systems and container runtimes. For example, if you just want to try Cilium on one machine, you can use it with Docker alone, but the most common use case is with orchestration systems like Kubernetes or Mesos. In most cases Cilium allocates IP addresses for containers, or pods if we are talking in terms of Kubernetes, and creates veth pairs for the network namespace. On top of that, it creates BPF programs which are attached to the veth interfaces and filter the traffic going to the containers. And especially in the context of Kubernetes, Cilium implements two concepts.
The first is the Container Network Interface. Kubernetes supports multiple network providers which implement the CNI specification, and Cilium is one of them. So Cilium can allocate the IP address and create networking for the network namespace according to the CNI specification. The Cilium agent is also watching the Kubernetes API for another concept, which is called network policies. In Kubernetes you can provide policies saying which pod can connect to which pod, or provide lists of IP addresses, for example. In general, network policies in Kubernetes are something like firewall rules for Kubernetes and for orchestrated containers. Cilium implements that concept, watches the Kubernetes API for it, and then generates BPF programs based on that data. Kubernetes has its own concept of network policies, but Cilium also extends it a bit and provides more features than pure Kubernetes network policies; I will tell you about that a little bit later. And now I will get to the part about what's new in Cilium. But before that, let me note a few facts. Before the previous FOSDEM, the last released version was 0.12. Cilium reached the 1.0 milestone a little after the previous FOSDEM, around April 2018. The newest version, released around November, is 1.3, and the version we are preparing to release after FOSDEM is 1.4. Since the 1.0 milestone, Cilium guarantees API stability and supports upgrades and downgrades between releases, which wasn't guaranteed before version 1.0. Now I will tell you about the features which were introduced between versions 1.0 and 1.3. The first thing, which is quite important, is integration with Envoy and Istio. Unfortunately my talk is quite short, so I won't be able to introduce the whole concept of Envoy and Istio in detail.
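To make the network-policy concept concrete, a minimal plain Kubernetes NetworkPolicy (all names and labels here are hypothetical) allowing one pod to reach another could look like this:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend   # hypothetical name
spec:
  podSelector:
    matchLabels:
      app: backend                  # applies to pods labeled app=backend
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend         # only pods labeled app=frontend may connect
      ports:
        - protocol: TCP
          port: 8080
```

When such an object appears on the Kubernetes API, the Cilium agent translates it into BPF programs enforcing the rule on the relevant endpoints.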
But in general, Envoy is a sidecar L7 proxy which is used to redirect traffic between services at layer 7, and it's commonly used in Kubernetes. Istio is a technology which implements the concept of a service mesh using Envoy, and a service mesh is something which guarantees that the traffic between the services in your cluster is secured and encrypted. It provides ingress functionality, so it exposes the services you have inside the cluster outside the internal cluster network, to the outside internet. And it also implements network policies on its own. Cilium integrates very well with Istio because it provides its own extensions to Envoy. On the slide you can notice that Cilium can defend against a compromised Envoy, and I will explain what I mean by that. Envoy and Istio support network policies and filtering, but only for IPv4 and TCP. So there is a chance that Envoy can be compromised through IPv6 or UDP traffic, and Cilium supports filtering that kind of traffic. So Cilium can still block some potential attack vectors associated with IPv6 or UDP. Envoy usually enforces network policies based on iptables, but Cilium extends the Envoy binary with its own L7 filter, which is based on BPF. There is also support for additional container runtimes in Cilium. Last year Cilium only supported, for example, Kubernetes clusters with Docker, and now Cilium is able to work with CRI-O and containerd as well. Cilium also provides many, many Prometheus metrics, so for example on the Grafana dashboard of your Kubernetes cluster you can see how many addresses were allocated, how many nodes are available, and so on. And there is also the concept of cluster mesh. If you have multiple Kubernetes clusters, Cilium can actually connect pods from different Kubernetes clusters; it provides IP connectivity for those pods and also security rules for that.
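To illustrate the L7 filtering mentioned above, here is a sketch of a CiliumNetworkPolicy with an HTTP rule; the selectors, names, and path are hypothetical, and this should be read as an illustrative example rather than a definitive reference:

```yaml
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: allow-get-public          # hypothetical name
spec:
  endpointSelector:
    matchLabels:
      app: api                    # policy applies to pods labeled app=api
  ingress:
    - fromEndpoints:
        - matchLabels:
            app: frontend         # only this workload may connect
      toPorts:
        - ports:
            - port: "80"
              protocol: TCP
          rules:
            http:
              - method: "GET"
                path: "/public"   # only GET /public is allowed at L7
```

Unlike an iptables-based enforcement point, this rule is enforced by Cilium's BPF datapath together with its Envoy extensions.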
And it's possible mostly because of one underlying mechanism: despite the fact that Cilium integrates with Kubernetes, it uses its own etcd cluster to register its agents and store all the data about the networking it provides. That's why Cilium can offer features which connect multiple clusters. For now that's done for pod connectivity, but later I'll tell you about another use case for it. There is also BGP support in Cilium. It's not implemented in the Cilium code base itself; it's rather done by integration with kube-router. kube-router is another CNI plugin and network provider for Kubernetes, but since the Cilium community had the idea to implement BGP support, and in kube-router you can disable the CNI functionality and only run the part which watches for service IPs in Kubernetes and advertises them in the BGP routing table, the integration works like this: Cilium runs as the CNI plugin and handles network policies, while kube-router runs in a mode without CNI support and only watches the Kubernetes API to advertise IP addresses. There is also support for Cassandra and Memcached as protocols, and by support for them I mean that you have an extended custom resource in Kubernetes which supports network policies in which you can filter, for example, queries to Cassandra. So you can filter out specific SELECT or INSERT or other database operations. You can also filter based on tables for specific labels in Kubernetes. For example, if you have some pod or deployment with a concrete label, you can say that it may operate on table A but not on table B. And there is similar support for Memcached, in which you can filter based on keys in Memcached and based on the operations which you can do on those keys. And now I will tell you briefly about two of the features coming in version 1.4.
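As a rough sketch of what such a Cassandra rule could look like: the exact field names of this experimental L7 schema are an assumption on my part here, so treat this as illustrative, not authoritative, and all selectors and table names are hypothetical.

```yaml
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: cassandra-allow-select    # hypothetical name
spec:
  endpointSelector:
    matchLabels:
      app: cassandra              # policy applies to the Cassandra pods
  ingress:
    - fromEndpoints:
        - matchLabels:
            app: reporting        # only this workload may query
      toPorts:
        - ports:
            - port: "9042"        # Cassandra native protocol port
              protocol: TCP
          rules:
            l7proto: cassandra    # experimental parser; field names assumed
            l7:
              - query_action: "select"
                query_table: "db1.table_a"  # allow SELECT on table A only
```

The Memcached support follows the same pattern, with rules matching keys and operations instead of tables and queries.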
There will be a feature of multi-cluster services, so you will be able to have a service which has backends across multiple Kubernetes clusters. And there is ongoing work on support for running on top of Flannel, and in the future maybe on other CNI plugins. So Cilium will be able to run as a chained CNI plugin. The first plugin in the chain will be Flannel; it will allocate IP addresses and create veth pairs. Then, on top of that, Cilium as a chained plugin will receive that information and create BPF programs for those veth devices. For now it's only for Flannel; maybe other CNI plugins will be supported in the next versions of Cilium. That's all I wanted to tell you today. Do you have any questions?

You mentioned etcd for the agents. How much workload does it add to etcd? Because I had some trouble with other programs which were adding too much etcd load. Did you benchmark it?

That's a good question. I have no concrete answer for that. Okay, so the question is about the workload on etcd, whether we've done benchmarks on how many Cilium nodes can be supported by an etcd cluster, and whether we did some scale tests. I don't remember the concrete numbers right now; I can follow up with you and look that up. Some benchmarks were done, and there is also planned work to further improve the scalability of Cilium, but I don't remember the numbers right now. Yes?

Okay, so the question is what experience I, or rather we as the Cilium community, have had using BPF. That's a very general question, and I'm personally not very engaged in maintaining the code for generating BPF programs specifically, but the experience of the whole Cilium community is that we have a performance gain. For example, we have benchmarks, which I can show later, showing that by writing custom filters for Envoy which implement L7 filtering and routing in BPF, we had a performance gain. So the overall experience
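The chained CNI setup described above is configured through a CNI conflist on each node. A sketch of what such a file could contain follows; the network name and the Flannel `delegate` settings are assumptions based on typical Flannel deployments, not a definitive configuration:

```json
{
  "name": "cbr0",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "flannel",
      "delegate": {
        "hairpinMode": true,
        "isDefaultGateway": true
      }
    },
    {
      "type": "cilium-cni"
    }
  ]
}
```

The runtime invokes the plugins in order: Flannel sets up addressing and the veth pair, then `cilium-cni` attaches Cilium's BPF programs to the resulting interface.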