What I want to talk about is Cilium. Andre in the last session talked about IPv6 and the networking side; I will focus on the network security side of this. So what is Cilium about? Cilium is about BPF. Who has heard about BPF, the Berkeley Packet Filter? All right, I see a couple of hands. For those who have not heard of it: I think most of you have used tcpdump, where you can monitor packets on network interfaces on the wire. When you specify a filter expression with tcpdump, what actually happens is that tcpdump will compile and generate a BPF program, load that into the kernel, and that program decides which packets to display when you run tcpdump. So most of you have been using BPF without even knowing it.

BPF was invented many, many years ago, 30 years ago. But it has been extended since, and it has become a revolution inside the Linux kernel. It has been revolutionizing tracing and profiling. If you've heard Brendan Gregg talk about this, you will have noticed that it is changing how we can do performance analysis. One example, and I'm just using one, shows how we can use BPF to generate histograms directly in the kernel. So instead of sampling everything to user space and then looking at the samples and deciding what to do with them, we can do that inside the kernel. Why was this even needed? Because the number of samples was too high to even export them to user space. This is why the tracing and profiling subsystem has moved to BPF.

But tracing and profiling is not the only field that BPF is revolutionizing. Another one is networking, and that's the one we focus on most. Some of you may have seen Daniel Borkmann's presentation yesterday. He talked about BPF in general and about XDP, the eXpress Data Path. XDP is a framework which allows us to run BPF programs at the network driver level of Linux, so very close to the actual hardware, the NIC.

What I'm showing here is an experiment that we've done. We measured an XDP BPF DDoS mitigation filter against an ipset-based DDoS mitigation filter. ipset is an iptables extension which allows you to match on a set of IPs or ports. We connected two machines back to back with a 10 gigabit network card and loaded filters for 16 million IP addresses. We then used one machine to send as many 64-byte packets as possible, and the receiver had to drop them as quickly as possible.

So let's look at the numbers real quick. For all the details, Daniel has shared his presentation, and we also have a demo recording where you can see the actual video of how this happens. The sender is able to generate 11.6 million packets per second. With ipset, we can only ever drop 7.1 million packets per second, so all the resources of the machine are not enough to even drop all of these packets. With XDP, we can easily drop all of them. So what happens if we load these 16 million rules while we generate traffic? How long does it take to load all of them? With iptables and ipset, this took over three minutes. With XDP, we were down to 31 seconds. But this is not the exciting part. The exciting part is here: what about the latency and the throughput of a machine while it's under a DDoS attack?
If you're running ipset-based filtering, the latency goes up to 2.3 milliseconds and the throughput drops to 0.014 gigabits, basically nothing. With XDP, the latency stays extremely low and we can still use a good portion of our bandwidth. In terms of handling TCP requests per second, the last metric: with iptables/ipset-based filtering, we can do a couple of hundred requests per second. With an XDP-based filter, we can still handle thousands of requests per second. So even though the machine was under a DDoS attack, it remains reachable with low latency and can actually handle workloads. This is how Linux in the future will be capable of protecting itself from DDoS attacks. That is one example.

The second example I want to talk about: Facebook published numbers at NetDev this year and basically announced, I would almost say, that they're switching their load balancers, their layer three / layer four load balancers, over from IPVS, which is a Linux load balancing technology, to BPF and XDP. So we're talking about this piece here: between the ECMP hardware-based load balancers and the L7 load balancers, they run an L3/L4 load balancer. And these are the numbers, and the numbers are mind-blowing. The lower bar is the IPVS throughput, the upper bar is the XDP BPF throughput, in packets per second. That's almost a 10x improvement, and this is amazing. Anybody who has been working in this field knows that 10x improvements don't come every day. Facebook is not sharing the absolute numbers, but they are sharing the performance delta between the two. This is not what I'm going to focus on; I wanted to give you an outlook into how BPF is changing the kernel and how we do networking, security and profiling. If you want to know more about this specific use case, the Linux networking maintainer David Miller gave a keynote talk about it this year, and there is a recording that I have linked in the slides.

This is not just changing software networking, though. At FutureNet a couple of weeks ago, all of the SmartNIC vendors announced that they're going to support, or are already supporting, BPF as an offloading engine. So as we write BPF programs as software engineers, SmartNICs in the future will be able to offload and run these at even higher speeds. The DDoS mitigation filter I talked about is just one example.

All right, so what about security? There are multiple projects. I want to mention one, which is Landlock. It's changing how we can do sandboxing, so it will be another low-level tool and framework with which something like the Docker runtime or the rkt runtime can containerize or sandbox applications. And obviously there is Cilium.

So how does Cilium revolutionize security? I want to give you an example of what we focus on, of something we figured is currently unsolved and needs to be solved, and I will take you through the full thinking process that we went through. If you look at how applications have been developed and deployed: many, many years ago we started with servers, and we would deploy maybe yearly. We would set up the server, it would be a mail server, a DNS server, a database. We would deploy yearly and apply security fixes. That time is long gone. We went on to virtualization, we would deploy VMs, and we would deploy more frequently. We're now entering this phase of microservices or service-oriented architecture.
We can call it whatever we want, but it's a world where application developers deploy multiple times a day. We've seen a lot of tooling improve, a lot of tooling that provides automation. There's been infrastructure-as-code tooling like Terraform, Ansible, CFEngine and so on. We see containers evolve, we see Kubernetes coming up. These are all tools which help us deliver and deploy applications quicker, with the eventual goal that we can disrupt other businesses because our application teams can move faster.

If you look at networking, though, we have seen the move from hardware appliances to servers during the VM move, but we have not seen much after that. If you look at current Kubernetes networking solutions, for example, even Kubernetes itself still maps to iptables. And iptables: I worked on iptables myself for many years; my background is Linux kernel development, I've done that for 15 years. I know what iptables has been designed for. It has been designed as a firewall for servers, so it filters on ports and IPs. And I'm using iptables here as an example; we could use any flow-based virtual switch here, it's all based on IPs and ports.

So why is that not enough? Well, if you look at these modern cloud native applications, they typically use protocols such as gRPC, REST, Kafka, and so on. And what you typically see is that most of the communication between these containers or microservices is over port 80, if it's REST or gRPC. Which means that as a network engineer, as you open up the port, you basically open up everything, right? All of a sudden, whoever can talk to a service can use all of its functionality. And this is a problem, and I will talk you through a specific use case of why.

In this example we'll look at Gordon. For those of you who don't know Gordon: Gordon is one of the mascots of Docker. Gordon is an intern, and he has a brilliant idea. He sees that the company is struggling to fulfill all of its hiring needs, and he's on Twitter all day, so he figures: why don't I write a microservice that will tweet out all of the job openings my company has? So he goes along and wants to create that microservice. In order to do that, he needs access to the data of all the job openings. So what does he do? He accesses an API which has this information. This API has a couple of endpoints. For those of you using Kubernetes: all of your services basically have this GET /health endpoint, which Kubernetes will call to figure out whether a pod is healthy. You can access GET /jobs to get the actual job postings. The database also stores the applicants that applied for each job, and you can create new jobs. This data might be backed by something like MongoDB or something else.

All right, so far so good. Gordon writes his microservice, and for his purpose he needs access to the GET /jobs API call to retrieve the job openings. And because Gordon is obviously a good citizen, he uses mutual TLS auth. Good thinking, Gordon; developer etiquette, super simple stuff. But does TLS buy us anything? TLS basically says: anything from this container to this container, from this app to this app, is encrypted. But it doesn't actually do anything at the API call level; we can still issue all of the API calls that we want.
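To make that API surface concrete, the endpoints might be sketched OpenAPI-style roughly like this; the exact paths and fields are illustrative, inferred from the description rather than taken from the slides:

```yaml
# Hypothetical sketch of the jobs API surface (paths inferred from the talk)
paths:
  /health:
    get:
      summary: Liveness check, polled by Kubernetes
  /jobs:
    get:
      summary: List all job openings       # the only call Gordon needs
    post:
      summary: Create a new job opening    # should not be reachable by Gordon
  /applicants:
    get:
      summary: List applicants per job     # sensitive, must not be reachable
```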
So let's dive into the networking level: how would we try to secure this there? If you apply something like a Kubernetes network policy, it will get translated into an iptables rule like this, which says: this tweet service container has this IP, so it can talk to this jobs API container, and it can do that on TCP port 80. This is how the rule will look; this is how your firewall will look. So this will allow the containers to talk, but at the same time it exposes all of the API endpoints. If the intern, for whatever reason, introduces a little bug and the application misbehaves, worst case it can tweet out all the applicants that applied for a job, which is definitely something that we don't want. On the other hand, the intern could also use this API to create a job, if he wants to stay at the company. Either way, this is definitely not least-privilege security.

So what can we do about this? This is the problem we're solving. We're saying: let's go back to the drawing board. What we want is something very simple. I want containers to talk to each other, pods to talk to each other, but I want to expose the least amount of API surface possible: least-privilege security at the API call level. So in this example, we allow the tweet service to talk to the jobs API service, but it can only do the GET /jobs API call. If it attempts any of the other API calls, we will block them. So even if the intern screws up, he cannot leak data such as the applicants, or create new jobs.

All right, sounds neat, right? You want a demo, and this is open source, so let's do a demo. The demo I'm about to show is Kubernetes based. Who is using Kubernetes or is planning to use Kubernetes? About half the hands. Does somebody have no clue at all about Kubernetes? Awesome, all right, I don't need to do a Kubernetes intro, because I would have really struggled to do that. But Kubernetes in a nutshell, in one sentence: it allows you to run containers at scale on multiple nodes, and it orchestrates all of this and takes away a lot of management burden.

So this demo has a theme. Some of you may remember this intro: "A long time ago, in a container cluster far, far away... It is a period of war. The Empire has adopted microservices and continuous delivery. Despite this, rebel spaceships, striking from a hidden cluster, have won their first victory against the evil Galactic Empire. During the battle, rebel spies managed to steal the Swagger API specification of the Empire's ultimate weapon, the Death Star." So this is the intro to our demo.

What I have here on my laptop is a VM running minikube, which is an entire Kubernetes cluster fitted into one VM. So I have a full cluster, right now with one node, my VM, and nothing running. Let's do a get pods; these would be the containers running, and I have nothing running. As a first step, the Empire wants to deploy a Death Star. What is a Death Star? The Death Star here is a service, which is basically the load balancing construct, not important here, and a deployment, which is a way of describing: I want to deploy a container, a pod. What's important here is that we use labels throughout this demo. The Death Star has labels: it belongs to the organization empire, and it has a class, deathstar.
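On the slide, that manifest looks roughly like the following; a minimal sketch along the lines of the public demo, with the image name and API versions assumed:

```yaml
# Sketch of the Death Star: a Service plus a Deployment; the labels are the
# important part for everything that follows
apiVersion: v1
kind: Service
metadata:
  name: deathstar
spec:
  selector:
    org: empire
    class: deathstar
  ports:
  - port: 80                   # the demo talks plain HTTP on port 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deathstar
spec:
  replicas: 1
  selector:
    matchLabels:
      org: empire
      class: deathstar
  template:
    metadata:
      labels:
        org: empire
        class: deathstar
    spec:
      containers:
      - name: deathstar
        image: cilium/starwars   # assumed image name for the Star Wars demo
```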
And then down here, you basically describe what type of container I'm running; I'm running a Star Wars container image. So let's deploy that. This is how you deploy in Kubernetes. Cool, so this is deploying now; the Death Star is getting constructed. We now want to have spaceships land on the Death Star. Spaceships are containers as well, so this is our definition of a spaceship: it's a container image, and that container has labels, organization empire and class spaceship. So let's create that as well. We can now get these, and they should be coming up. They're still creating.

In the meantime, while these are spinning up, we want to establish a policy. We want to allow spaceships to talk to the Death Star. How do we do that? In Kubernetes, you do this with a network policy, and a policy could look something like this. The policy is simple: it says this policy applies to all pods which have the labels class deathstar and organization empire, and you can talk to me if you have the label class spaceship. So there are no IP addresses; we define policy through labels. So I'm going to apply that over the Wi-Fi. Let's see... hm, it's not applying. Let's see why it's not coming up. All right, let's start over and try again. We're not even at the Cilium part yet; this is what you get with bleeding edge technology and a live demo. Let's try again. If it doesn't come up, it will not come up, and I'm not sure why. Maybe the Wi-Fi is very slow. What happens when you run a container is that it checks with the container registry whether it has your image; this is typically why Docker-related demos fail on stage when you don't have Wi-Fi. In case it keeps failing: we did this demo at DockerCon, and there's a video recording, so worst case I will refer you to that. It doesn't look like it will be coming up. All right, sorry about that. Let's go back.

So what is Cilium? Well, actually, let me talk you through what the demo would have shown you. It would have shown you that we can import layer three and layer four policies to have containers or pods talk to each other. But we also support importing layer seven policies. Some of you may have come by our booth and saw how we used layer seven policy to secure communication at the API call level.

So how do we do this? As I mentioned in the intro, Cilium is all about BPF. What does Cilium do? Cilium runs as an agent on all of your servers. In the Kubernetes case, you deploy it as a DaemonSet, so it runs as a pod on all your servers. It then generates BPF programs and injects them into the kernel. So what is BPF, and what can you do with it? BPF allows you to inject bytecode into the kernel and extend the kernel at runtime. While doing so, it goes through a verifier: the kernel ensures that you cannot crash the kernel, that the program runs to completion, and so on. So it's similar to a kernel module, except that you cannot crash the kernel, because everything goes through the verifier. It's basically the next generation of making the kernel extensible. After verification, the bytecode goes through a JIT, a just-in-time compiler, which takes the BPF bytecode and translates it into the instructions that your CPU understands. So the BPF program in the end is x86 or ARM instructions, and there's no overhead in terms of performance. So this is our data path, our kernel side.
The upper side is how we integrate with the rest of the world. We have a CLI, which allows you to retrieve debugging information and so on. We have a policy repository; this could be your Kubernetes control plane, so Kubernetes resources, or it could be a key-value store. We have plugins for Kubernetes, for Mesosphere, for Docker, for different container runtimes and so on. This is how we interact and integrate with the rest of the world. And we have the Cilium monitor, the monitoring component, which can listen to events that happen on the data path. For example, whenever Cilium drops a packet or a request because of policy, we generate an event through a kernel framework called the perf ring buffer. The perf ring buffer came out of this tracing and profiling revolution in the kernel, and it's a very fast data structure; we can expose millions of events per second through it. This is radically changing the visibility we can give into what's happening. Running tcpdump in a production environment is definitely not something that you want to do. Running iptables with -j LOG is something that you don't want to do. But this has low overhead; you can run it when needed, and when you stop running it, the overhead is gone. So it's something you can use to monitor and gain visibility into your production workloads.

A very nice property of this BPF code generation is that we can replace these programs at runtime without any disruption. We've done this a couple of times: we find a bug, we fix it, and we deploy it, and not a single connection is lost. How does this work? We compile a new program, it gets verified, it gets JIT compiled, and then the program is replaced in an atomic operation. None of the state is lost. This is really changing how we can do networking. It allows us to do hot-fixing, and if something is not working, we can compile in debug instructions on the fly. I'm a kernel developer; how do you debug kernels? You add printk statements, the printf equivalent, you recompile, you reboot the machine and try to reproduce. With Cilium, we can compile in these debug statements without rebooting, so while the problem is still occurring we can debug it live, or even hot-fix it live. This is a completely new way of doing kernel development, of doing networking development.

So, I talked a little about our Kubernetes integration. We integrate with the standard resources, and I'm listing them here. First, NetworkPolicy. NetworkPolicy was recently declared GA, part of the official Kubernetes resource API. With NetworkPolicy you can define layer three and layer four ingress policy: you can say this pod can talk to that pod, you can say you can talk to me on port 80, and so on. Right now you cannot define egress rules, but this is being worked on and will most likely be included in the next couple of releases. We also, as Andre mentioned in the last session, implement services, meaning pod-to-pod services.
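When I say services, I mean plain Kubernetes Services like the following; a minimal sketch with assumed names, nothing Cilium-specific:

```yaml
# A standard Kubernetes Service: kube-proxy turns each of these into iptables
# rules, while Cilium turns them into BPF hash table entries instead
apiVersion: v1
kind: Service
metadata:
  name: jobs-api          # assumed name, matching the earlier example
spec:
  selector:
    class: jobs-api
  ports:
  - protocol: TCP
    port: 80              # port the service is reached on
    targetPort: 8080      # assumed container port
```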
Typically services are implemented with iptables, and there's a distinct disadvantage to doing it that way: for every service you define, kube-proxy will inject about five iptables rules. And what are iptables rules? A sequential list of rules that every packet walks through. So as you scale up the number of services, it gets slower and slower and slower. With BPF, this is a hash table: the cost is exactly the same whether you have one service or 50,000 services. Even our policy enforcement is a hash table; the numbers look the same whether we have one rule or 5,000 or 10,000 rules. So we're redoing networking with the scope of microservices in mind, where you have hyperscale and you're eventually talking to hundreds of thousands of endpoints.

We also recognize pods. Why do we even need to know about pods? We look at pods and retrieve their labels, because that is how you define policy; we saw this in the first two minutes of the demo. You don't define a policy based on IP addresses; you say any container with the label foo can talk to any container with the label bar. From a policy perspective, you don't care whether you're running one container or 10,000 containers; it doesn't matter.

We integrate with nodes. Why do we need nodes? We have what we call a zero-configuration networking mode, where instead of using an external key-value store or something like that, we just use Kubernetes as the control plane. What does that mean? As Andre also explained last session: how do we know which nodes host which pod CIDRs, and which IP addresses are in use on another host? We use the Kubernetes control plane for this. Instead of inventing our own, we leverage Kubernetes.

And last but not least, NetworkPolicy does not let you express egress, and it doesn't let you express layer seven yet. We are working on extending it and making this a core property of NetworkPolicy. In the meantime, we offer a custom resource definition, previously called a third-party resource. That's the Kubernetes way of allowing evolution and development pre-standard: everybody can define these and use them, and what makes sense eventually gets into the standardized APIs. So this is how you can use Kubernetes with layer seven policy today.

Now, what about the actual multi-node networking? This is the big question that everybody asks: if I'm doing multi-node networking, should I use an encapsulation protocol or should I do direct routing? Cilium supports both. We have an overlay mode, which is the default, where you create a so-called overlay, a UDP encapsulation, between all of the nodes. It's basically a tunnel: you hide the pod IPs from your underlying network. This is easy and works out of the box, but there is a performance penalty. It's very simple to set up: you run the kube-controller-manager so that it allocates the node CIDRs, and Kubernetes automatically handles all of this. That's the only thing you have to provide; you start Cilium and you have multi-node networking. Easy, but with overhead. The typical use case is a POC, or when you don't care about the last percent of performance.
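For that zero-configuration overlay setup, the relevant piece is letting Kubernetes hand out a pod CIDR per node. A hedged sketch, assuming a kubeadm-managed cluster; the flags belong to the kube-controller-manager, and the CIDR value is illustrative:

```yaml
# kubeadm ClusterConfiguration fragment: Kubernetes allocates a pod CIDR per
# node, which Cilium's overlay mode picks up; no external key-value store needed
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    cluster-cidr: "10.217.0.0/16"   # illustrative cluster-wide pod CIDR
```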
The second mode, native routing, is the mode where you're running a routing daemon or you want to use the cloud provider's routing functionality. In this case, Cilium basically just gives the packet to the Linux routing layer: either the cloud provider knows what to do with it, or you're running a routing protocol and the routing protocol distributes all the routes. This is typically what you do post-POC, when you know what you're doing and you're setting everything up for production. It's faster, and the network actually sees the pod IPs. I won't go into more detail here, but the bottom line is that you can run both with Cilium.

So how are policies actually defined? We saw this in the first part of the demo. This is an L3, label-based policy. There's lots of information here, but what really matters are these two parts. This part of the policy says: this policy applies to all pods which have the labels deathstar and empire. And this part says: all pods with the label class spaceship can talk to it. So this is how you do connectivity on layer three, pod to pod, very simple.

What do you do if you want to, for example, limit access to external services? Say you have a microservice which uses stripe.com's services. You don't want that microservice to reach the entire world; you want to limit it to what it needs, least privilege. In this example, we're saying this policy applies to all pods with the labels spaceship and empire, and you can only talk to the external IP 8.8.8.8, Google's DNS server. So this policy would allow the pod to talk to the Google DNS server, but nothing else. Let me try to re-energize this. All right.

L4 policy, same principle: you have the selector, which selects the pods the policy applies to, and then you say you can only talk on port 80 TCP, so you cannot, for example, use a different port when you're talking outside. Actually, this is ingress, so incoming: you can only be talked to on port 80 TCP.

And then layer seven. This is the part where the demo would have been awesome, because it was Star Wars themed. This is an extension of the layer four rule, and it says you can do these two API calls only: you can talk on port 80 TCP, and you can only do a GET to /v1, or a PUT to /v1/exhaust-port if you have the HTTP header X-Has-Force: true set. Some of you may notice where the demo was going; some Death Star destruction would definitely have been going on. So this is how you can define layer seven policy.

How are these policies enforced? We talked about BPF; are we using only BPF for this? For the layer three and layer four part, it's all BPF; the kernel can do this today, right now. Layer seven policies we will be able to do in BPF as well, but right now we use a sidecar proxy for them. So what is a sidecar proxy? If you have two services talking to each other, you run a proxy as a sidecar next to each of them, and all communication goes first through the proxy, then proxy to proxy, and then proxy to service. This is also what is called a service mesh. Some of you might have heard of Istio, Envoy, Linkerd; some people have used Nginx and HAProxy for this before. What this allows you to do is provide networking functionality at layer seven. I can do HTTP load balancing; I can say, if my request to a backend fails, try another one. I gain visibility, latency data, tracing information and so on.
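Coming back to the Star Wars rule for a second: written out in a Cilium-style policy format, that layer seven rule might look roughly like this (the labels match the demo; treat the exact schema and API version as illustrative):

```yaml
# L7 extension of the L4 rule: spaceships may reach the Death Star on port 80,
# but only via these two API calls
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: deathstar-l7
spec:
  endpointSelector:
    matchLabels:
      org: empire
      class: deathstar
  ingress:
  - fromEndpoints:
    - matchLabels:
        class: spaceship
    toPorts:
    - ports:
      - port: "80"
        protocol: TCP
      rules:
        http:
        - method: GET
          path: "/v1"
        - method: PUT
          path: "/v1/exhaust-port"
          headers:
          - "X-Has-Force: true"    # only callers with the Force may do this
```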
So right now this service mesh technology is not focused on security; it's focused on load balancing, routing, and so on. But you can use the same technology to enforce security, and that is what we use it for. How does this look at the networking level? It means that all traffic goes out through the socket and the TCP stack down here, this is the kernel, and down here you have an iptables rule or a BPF rule that redirects everything back up to the proxy. The proxy does whatever it has to do, sends it out, it goes over the network, and it goes through a sidecar proxy on the other side as well. So from service to service, you're going through the TCP stack six times. That's three times the memory resources, and this is non-trivial: if you're running on, let's say, a public cloud provider, you may have to bump the instance size just because the memory needs are bigger, so your bill increases. The latency is hurt, because you're going through TCP stacks multiple times. Context switches, the switching back and forth between kernel and user space, add latency, and there's a ton of complexity. What was previously one connection is now three TCP connections, just to have two services talk to each other.

So can we do something about this? Can we turn this sidecar into a race car? This is where kproxy comes in. kproxy is a kernel proxy. It brings some of this sidecar functionality into the kernel, at the socket layer, which is what applications use to talk to each other when they run TCP. This is where we would look at the payload with BPF and make a decision, or add load balancing functionality, right at this layer. If you look at this picture, it's very simple: one or two TCP stack traversals, you go over the network, done.

Then the question came up: all right, what about SSL and TLS? What if my application does end-to-end encryption, how can I handle that? This is why this was not possible previously. But recently, kTLS, kernel TLS, was merged. What does kTLS do? kTLS allows the kernel to take over the symmetric encryption part. OpenSSL, the library, still does the handshake, which is where all the bugs in the code are; basically all the exploits that we saw over the last 15 years were in the handshake part. Once you have negotiated everything, you pass the key down to the kernel, and the kernel does the actual encryption, which is the expensive part. You gain about three or four percent of performance simply through this, and this is why some static content providers are interested in it. More importantly, this gives the kernel access to the clear-text payload even if the application is doing end-to-end encryption. Going back to this picture: the worst case scenario is that you're doing end-to-end encryption and the proxy actually needs to decrypt here. It will decrypt, look at the header, make a decision, and re-encrypt. It sends it over, and the other side decrypts, looks at the data, and re-encrypts. You're wasting a ton of resources; your AWS bill will basically at least double.

Go ahead, there's a question. So, yes and no. The question was: is the sidecar proxy just becoming a control plane here? The kernel will have limits in terms of the complexity it can handle. So what we're doing is saying: if we can handle it, we handle it in the kernel, and otherwise we can still punt it to the sidecar proxy in user space. So it's more like an offload in that sense.
Which means, exactly. And just to repeat the statement for the video recording: the kernel can opt in to what it handles, and everything else is still handled by the user space sidecar proxy. We can do this on a per-request basis, not just per connection. So if you have, let's say, a long-lived HTTP/2 connection, we can do this per request: even if just one of the requests inside the connection cannot be handled in the kernel, we can punt that one out to the user space proxy.

And it gets even more exciting, because we're introducing something called socket redirect. Going back to this picture: it allows us to jump from here straight over to here, socket to socket. Which means that if we have to punt to the user space proxy, we can save this very expensive hairpin down through the TCP stack and back up; we basically just copy over. So even when we cannot handle something in the kernel, we still save a ton of cycles. And we can also say: well, this is safe, we can delay the encryption, move the encryption from here over to here, so we don't have to decrypt and re-encrypt again. This is how we see the future of the service mesh enforcement data plane, and of layer seven functionality in general. Yeah, this is the socket redirect I just talked about; we're basically getting rid of this part below.

Just to give you an idea of the performance, and I'm leaking company information right here, this is coming directly from our internal Slack channel: John Fastabend is currently working on this, and he did a performance measurement. The number below, in case you cannot see it, is 5.5 gigabytes per second; that's one application talking to another application locally through TCP over loopback. The number above is socket redirect with a filter applied: 6.7, 6.6 gigabytes per second. We're actually faster with socket redirect, because we're not going through the TCP stack. So you gain policy, you gain layer seven functionality, and you're faster than you were before. The before and after is pretty simple.

So, to summarize: Cilium uses BPF to do networking, load balancing and network security, firewalling on layers three, four and seven. We can do label-based policy, as we saw, defining policy based on labels. We can do CIDR-based filtering, ingress and egress: you can say I want this legacy Oracle database to be able to connect to my microservice, or I only want to be able to contact stripe.com IPs. We can do layer four policy. And we can do layer seven policy: right now we support HTTP, and we're currently doing the work to transition over to Envoy as the sidecar proxy that enforces this. Envoy already supports gRPC and Mongo, so with this transition we will support HTTP, Mongo and gRPC, and then we'll add more protocols. The most likely candidate is definitely Kafka. If you look at Kafka and the potential for securing it, it's obvious: you have microservices sharing a message bus. Kafka has a concept of topics, where the same message bus can be used for different topics. It's obvious that you want to have a policy that says this microservice only needs access to this topic, so it is only granted access to this topic. Then you cannot steal data, you cannot steal messages. You can imagine a policy which says: this component is a producer, it only ever writes to the Kafka message bus, so it can only write. Another component is a consumer, so it can only take things off the message bus. These are obvious reasons why you would want to secure things at the Kafka level.
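As a hedged sketch of where this could go, a producer-only rule might look something like this in a Cilium-style policy; Kafka enforcement was still upcoming at the time, so the schema here is illustrative:

```yaml
# Illustrative producer-only Kafka policy: pods labeled app=producer may only
# produce to the "jobs" topic on the broker, nothing else
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: kafka-produce-only
spec:
  endpointSelector:
    matchLabels:
      app: kafka-broker        # assumed broker labels
  ingress:
  - fromEndpoints:
    - matchLabels:
        app: producer
    toPorts:
    - ports:
      - port: "9092"           # standard Kafka broker port
        protocol: TCP
      rules:
        kafka:
        - role: produce
          topic: jobs          # illustrative topic name
```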
We saw the load balancing bit, and we saw the performance numbers, both what we observed in our DDoS mitigation use case, which is very similar to being a load balancer, and in the Facebook use case: being able to compete with user space networking solutions, all in kernel, all well integrated.

I didn't talk a lot about dependencies, because we don't have many. The only dependency we have is an external key-value store; in the age of cloud native microservices, this is how you share state between components. A key-value store is a database where you store keys, each with a value associated with it. You can use etcd or Consul; these are the two options that we support.

Yes? So the question is: if I'm running Kubernetes, do I still need this? Yes, but you can use the Kubernetes etcd key-value store if you want to, even though I would not recommend that beyond a certain scale; etcd will have a certain scale limitation. And yeah, everything on this slide. So, I talked about Kubernetes: we come as a CNI plugin, but we also have a libnetwork integration, so if you're running Docker Swarm you can also run Cilium. Mesos recently added CNI support as well, so we support the Mesosphere ecosystem too.

If you got intrigued and want to try this out, we have a getting started guide, a tutorial which uses a Vagrant box. You can go to cilium.io/try and try this out, including layer seven, on Kubernetes, Mesos or Docker. And it's an open source project: we are on GitHub, feel free to star us. We have a Twitter handle where we share news, feel free to follow. I think I saw a sign saying one minute, and I think the coffee break is next, so we can use some time for questions.

All right, here we go. So the question is: what is the relationship between Cilium and Envoy? Envoy will be our primary sidecar proxy, which means it will be the default for handling layer seven policies. If we can do something in the kernel, for example for Kafka, which is a very simple protocol, we will definitely do it in the kernel, which means lower cost, higher speed, lower latency. We already have multiple people working on Envoy to prepare it for becoming our primary sidecar proxy.

Next question: can you use only the layer seven bits? The thing about security is that you want to make it very hard, ideally impossible, to bypass. The way we do this is by taking over the networking, because that way we can guarantee that we see everything. Right now there's no way to use just the layer seven part, but you're not the first to ask this question, and we're currently investigating how to do it. In that scenario you would not be installed through a CNI plugin anymore; you would basically run on top of another CNI plugin. What you can already do, for example, is use Calico's routing daemon together with Cilium; that would be compatible. Or you could run Flannel with Cilium on top; that would also be possible. But right now it's not generically decoupled yet.

More questions? Yep. So the question is: can I integrate this with Nginx or Apache? What do you mean by directly on the network? All right, so the question is: could I use the Nginx configuration interface to configure this? Is that the question?
Okay, so right now you can't. Right now we have an API that you can use to configure this directly, or you can use a Kubernetes resource file as we saw in the example; those are the current ways. I'm happy to explore how that could look; I haven't looked into that yet. So, we do have t-shirts, feel free to stop by, and please star us on GitHub. All right, thank you very much.