All right, I think we're going to go ahead and get started. I'd like to thank everyone who is joining us today. Welcome to today's CNCF webinar, What's New in Linkerd 2.7. I'm Taylor Wagoner, the Operations Analyst here at CNCF, and I'll be the host today. We'd like to welcome our presenter, Oliver Gould, the lead creator of Linkerd and CTO at Buoyant.

Before we get going, I'd like to go over a few housekeeping items. First, as an attendee of this webinar, you are not able to speak out loud, but you can communicate with us via the chat and the Q&A box. We ask that if you have questions, you ask them in the Q&A box rather than the chat window, so please direct your questions there. This is an official webinar of the CNCF and as such is subject to the CNCF code of conduct, so please don't add anything to the chat or to the questions that would be in violation of the code of conduct. Basically, please be respectful of all of your fellow participants and our presenter today. A reminder that the recording and slides will be posted on the CNCF webinar page later this afternoon, which is cncf.io/webinars. With that, I'd like to hand it over to Oliver to kick off today's presentation.

Thanks, Taylor. Hi, folks. For those of you who don't know me, I am Oliver, the creator of Linkerd. Today I'm going to go through some updates about the project in general, and specifically in the context of what we just released in 2.7 last month. At the end, I'll update you on the upcoming roadmap and hopefully give a little bit of a demo, depending on the time.

Okay, so, background of the project. We've been working on these problems since before we started working on Linkerd: my background was as an infrastructure engineer at Twitter, working on many of the same problems, but with a library-based approach rather than an application-based approach. That led us to start working on Linkerd in early 2016. About a year later, after getting some production users, we donated the project to the CNCF, which has been a great home for the project over the past three years now. Then at the end of 2018 we did a big overhaul of the project, sunsetting a lot of the JVM stuff that we started with and moving to a new approach, which is what we'll mostly be discussing today.

So let's start with what Linkerd does, and why anyone might use it. Kubernetes, and the cloud native approach in general, gives you lots of control and visibility into your workloads: whether they're running or not, and where they come from. What the container side of things doesn't give you is much insight into traffic, that is, how the application is actually behaving in that system. That's where Linkerd really comes in. First and foremost, we provide uniform golden metrics across all of your application stack: success rate, latencies, throughput, error rates, a wide range of things. With that, you can build things like service topologies, so you can view who your callers and callees are. You can instrument distributed tracing, and even ad hoc tracing, which I'll talk about in a bit. The other main draw for Linkerd is its automatic security feature set: we do things like transparent mTLS, and a whole lot there that is really valuable and works just out of the box. Excuse me, I work in San Francisco and there's an ambulance outside.
Additionally, all of that has to work reliably. If you're introducing something like a service mesh, it can't reduce the reliability of your system; in fact, there are lots of opportunities to enhance it. That's really where Linkerd comes in, and we do it in what we hope is a really operationally simple approach.

Quickly, if you're not familiar with the service mesh, which I guess there are still people out there coming around to, even though we've been working on this for a couple of years now: it really comes out of the microservice approach. The distinguishing trait of microservices is that they are small components that communicate over the network, usually in some sort of RPC fashion. Where the service mesh comes in is that we add a sidecar proxy, what we call the data plane, to every instance of the application. This allows us to add the functionality we were discussing before. That's all powered by a control plane, and that control plane is responsible for service discovery, as well as powering the visibility and observability systems and providing policy to the proxies so that they can enhance the application.

In Linkerd 2, this looks something like this, and of course this is an evolving picture. We have a set of control plane components that are all written in Go; we did that so we can be tightly coupled with the Kubernetes APIs, through libraries like client-go. We currently vendor things like Prometheus and Grafana so we can have an out-of-the-box experience that just works. And then we've written a data plane proxy in Rust; a big part of our investment over the past few years has been building a service mesh proxy that's purpose-fit for this use case. We use things like gRPC for all of the communication within the system, so this is as cloud native an approach as we could manage.

We went into this with a lot of lessons from Linkerd 1. Those lessons were that this had to be a zero-config experience: something you can drop into your Kubernetes cluster without changing the application, and start getting data. This can't be a six-months-to-a-year, we-have-to-go-adopt-a-service-mesh journey. It needs to be incremental, where we can start to get visibility and start to enhance security while we figure out all the things we want to do and can do with a service mesh. Part of being a lightweight addition to your cluster means it has to be really low overhead: we don't want to be adding a lot of memory or CPU overhead, and especially not latency. We don't want to make your application any worse, and as I said, I think we have opportunities to really improve latency. The third tenet is that it had to be Kubernetes native. We've really made sure we embrace Kubernetes primitives, that we're not adding more abstractions on top of Kubernetes but are building directly into Kubernetes. With that, we've integrated tightly with the ecosystem, through things like having Prometheus work out of the box since the beginning.

The control plane we talked about is about 200 megs RSS if we ignore the Prometheus component, and the data plane proxy is somewhere between 10 and 20 megs of RSS, typically with about one millisecond of p99 latency.
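For a concrete sense of that architecture, here's roughly what the control plane looks like once it's installed. Treat the listing as illustrative; the exact component names and counts vary by release:

```sh
kubectl -n linkerd get deploy
# Illustrative output for a 2.7-era install (names vary by version):
# NAME                     READY   UP-TO-DATE   AVAILABLE
# linkerd-controller       1/1     1            1
# linkerd-destination      1/1     1            1
# linkerd-identity         1/1     1            1
# linkerd-proxy-injector   1/1     1            1
# linkerd-prometheus       1/1     1            1
# linkerd-grafana          1/1     1            1
# linkerd-web              1/1     1            1
```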
There's a bunch more reading on how we got there at the links below, which you can get when this is posted online. So exactly how fast and small is it? We had the folks at Kinvolk run some tests about a year ago. Of course, when you have no service mesh at all, no sidecar proxies, you will get the best raw utilization. But when you do add a service mesh proxy, Linkerd is a great choice here, and I'm sure the others will improve, but we've really focused on making this as low overhead as possible.

Next, I want to walk through in more detail what some of these features are. When we talk about things like observability or security, or reliability for that matter, what does that mean, and what does it mean for your application?

First and foremost, what we find is that load balancing is the sharpest tool in the shed. I don't know if you're familiar with Kubernetes load balancing approaches, but typically the way services work is that this is all managed through iptables, which means we get connection-level load balancing. That's really not an efficient way to do load balancing, especially for modern applications, especially gRPC and HTTP/2 applications, where we typically have one connection for the whole lifetime of the process. So we really focus on doing request-level load balancing, which means we can efficiently utilize your Kubernetes deployments. We do this with a latency-aware load balancer, what we call a peak EWMA load balancer, where EWMA is exponentially weighted moving average. That allows us to make sure that if you have individual pods in your cluster that are slow, because they have noisy neighbors, or failing because of some bad configuration, the proxy is able to eliminate them from consideration for balancing, and we only send traffic to the healthy endpoints. This is all powered by Kubernetes primitives like services; we're not introducing new service discovery complexity here. And again, most importantly, this doesn't require any application change: just by adding a sidecar proxy, we transparently detect HTTP traffic and load balance it without anyone being the wiser.

This has real effects; it isn't just a theoretical performance improvement. Here is a test we ran with various load balancing algorithms, all of them request-level balancing algorithms. We see that by switching to an EWMA balancer, you can really improve success rate. If you had a timeout of, say, one second for your requests, a round-robin load balancer would give you a 95% success rate, while a better load balancing algorithm can push it up around three nines, which can be the difference between being paged and not paged in many on-call rotations. So it's a really important tool, especially as you deal with scaling your system and getting it to scale gracefully.
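As a rough sketch of the idea behind peak EWMA (the general shape, not necessarily Linkerd's exact cost function; the peak-biased decay is an implementation detail): each endpoint's latency estimate is an exponentially weighted moving average, and the balancer prefers endpoints whose current estimate, scaled by how much work they already have in flight, is lowest.

```latex
\mathrm{EWMA}_t = \alpha\, x_t + (1 - \alpha)\,\mathrm{EWMA}_{t-1},
\qquad
\mathrm{cost}(e) \approx \mathrm{EWMA}_t(e) \cdot \big(\text{outstanding requests on } e + 1\big)
```

Slow or failing pods accumulate a high cost and quickly stop receiving traffic, which is what drives the success-rate improvement in the test above.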
The other really important out-of-the-box feature for Linkerd is that we automatically establish mutual TLS between every node in the mesh. That's to say, if we have a sidecar on each side of a connection, that connection will get TLS without any application participation. And it's not really just about encryption. We talk to folks who say, well, I don't need TLS in my cluster because I trust my cloud provider, and I'm not dealing with health care data, so it really doesn't need to be encrypted in transit. And that might be true. However, TLS gives us more than that: it gives us a way to establish identity at the communication level. We can actually know which workload is talking to us, and we can know that in a cryptographically verified way. And if you extend this into how your provisioning system works, this can be part of a full zero-trust communication model.

Today, we bootstrap all of this from Kubernetes service accounts. So we use, again, Kubernetes primitives, and the identity bootstrapping within Kubernetes, to bootstrap service mesh identity. Once we do that, we rotate certificates, today once a day, though I think that could become much more frequent. The keys never leave the pod, so there's no central risk there, and we rotate these things very frequently.

Just in 2.7, and this was the banner feature for 2.7, we made it possible to bootstrap all of this with cert-manager. Previously, the Linkerd installation would generate trust roots and do some things for you, but that's not really a great way to run in production, where you want a real chain of trust around your certificates. cert-manager lets us integrate with things like Vault and various cloud providers, so that's a really exciting addition. It's a brand-new feature, and we've had a lot of questions about it over the past few weeks, so please come get involved there and tell us what other things you'd like to see in that space, because it's a really cool feature.

We do all this without conflicting with or imposing requirements on your ingress or application TLS. If you have ingress TLS, we will transparently proxy it through without re-encrypting it in the mesh; same thing for application traffic. That also means this mutual TLS is not intended to be used for those things: if you want to have TLS at another level, the mesh will proxy it and treat it as raw TCP traffic. If you're communicating in plain text, we'll do all this awesome mTLS for you. Today that is just for HTTP and gRPC traffic, but we're working on making it feasible for arbitrary protocols, and that work is in flight. And again, like most of our features, there are no application changes: you don't have to do anything to your code to start participating in this, other than a few annotations on your workloads.

Here's my favorite Linkerd feature that we never talk about, and I didn't find a good image for it, unfortunately. Another part of the transparent upgrading we do is that we communicate everything between meshed pods over HTTP/2, and again with mTLS. What that means is that if you have an HTTP application that would typically open many, many connections between pods, we can do all of it over a single connection. We can amortize the cost of TCP handshakes, and it means things like mTLS are not a significant performance overhead, because we don't have to do session establishment repeatedly: that happens just a few times per edge between two pods, and that's it. And again, no application changes; nobody knows that HTTP/2 is involved from your application's point of view. Whether you're using HTTP/2 or HTTP/1, we just merge it all onto one big fat pipe. It's great.
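To make the cert-manager integration from a couple of paragraphs back concrete, here's a minimal sketch of the shape of that bootstrap, based on the 2.7-era docs. Treat it as an assumption-laden outline: the cert-manager API version, the step CLI for minting the root, and the --identity-external-issuer flag are all worth double-checking against the current docs.

```sh
# 1. Mint a trust root (using smallstep's CLI here) and hand it to cert-manager:
step certificate create identity.linkerd.cluster.local ca.crt ca.key \
  --profile root-ca --no-password --insecure
kubectl create secret tls linkerd-trust-anchor --cert=ca.crt --key=ca.key -n linkerd

# 2. Have cert-manager issue (and rotate) the identity issuer certificate:
kubectl apply -f - <<'EOF'
apiVersion: cert-manager.io/v1alpha2
kind: Issuer
metadata:
  name: linkerd-trust-anchor
  namespace: linkerd
spec:
  ca:
    secretName: linkerd-trust-anchor
---
apiVersion: cert-manager.io/v1alpha2
kind: Certificate
metadata:
  name: linkerd-identity-issuer
  namespace: linkerd
spec:
  secretName: linkerd-identity-issuer
  duration: 24h
  issuerRef:
    name: linkerd-trust-anchor
    kind: Issuer
  commonName: identity.linkerd.cluster.local
  isCA: true
  keyAlgorithm: ecdsa
EOF

# 3. Install Linkerd so it reads issuer credentials from that secret:
linkerd install --identity-external-issuer | kubectl apply -f -
```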
Now on to some of the visibility features. We started with Prometheus from the ground up, and this is in direct contrast to the Envoy-based meshes: Envoy did not really start with Prometheus support, it started with StatsD support, and has been moving towards Prometheus support over the past couple of years. This is something we realized was really important for a Kubernetes-native mesh, so we started there. We do that in order to give every pod in your fleet a uniform level of visibility. Regardless of what language a service is implemented in, or whether it's written in house or is third-party software, we can get the same golden metrics: latency, success rates, request counts, failure counts, all of the interesting things about your traffic.

This is HTTP- and gRPC-aware, so we know what an HTTP success code is versus a failure code, and the same for gRPC, and we can annotate the metrics with lots of that metadata. In addition, we pull lots of the Kubernetes workload metadata from the discovery API, so when your proxy is talking to another pod, we can tell you exactly which pod it's talking to, what service it's part of, and a lot of the other Kubernetes-centric metadata there. We've also done work to make sure we give you raw histograms, which is maybe an esoteric feature, but what it means is that there's no averaging of latencies in the system. If you want to ask about p99 latency across your fleet, we can actually give you a fairly accurate p99 latency, rather than the average of p99 latencies, which is typically what you find in these solutions. This can all be hydrated with OpenAPI and gRPC specs: if you happen to be using an IDL, an interface definition language, you can use it to configure the metrics so we can give you per-route metrics and things like that. Again, no application changes; you just upload this data and the proxy will pick it up at runtime and go with it.

We did a bunch of work, I think last year, to integrate with OpenCensus. If your application uses OpenCensus with something like Jaeger, Linkerd can participate in that. In this screencap you can see another application running, with all of the Linkerd hops in that application. However, this does require application changes; that is just the nature of distributed tracing like Jaeger. Your application has to forward headers and has to do things. So it's a really cool feature, but it's not within Linkerd's wheelhouse of out-of-the-box awesomeness.

However, we do have something we call tap, which is an ad hoc tracing feature, and it can be used without any application change. The way it works is that at runtime, as your system is running, you can connect to pods through the control plane and say, show me requests that look like this. We can actually start to collect data from the fleet of pods at runtime to give you ad hoc tracing without any application change. And while that may sound scary from a security point of view, we've done work to make sure this is all RBAC'd and mTLS'd and validated, so you can actually set control over who may tap whom.

Another one of our awesome features is traffic split. This is something we've been working on with the folks behind SMI, the Service Mesh Interface. We're working with folks at Weaveworks, HashiCorp, the SuperGloo folks, and Microsoft, all different groups there, to define APIs that are core to the service mesh regardless of implementation. One of these is traffic split, which allows us to split traffic between services, and that lets you do things like canary or blue-green deployments.
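Here's what a minimal TrafficSplit looks like, as a sketch; the apiVersion and the service names (webapp, webapp-stable, webapp-canary) are illustrative, and weights in the early SMI spec were expressed as Kubernetes quantities:

```sh
kubectl apply -f - <<'EOF'
apiVersion: split.smi-spec.io/v1alpha1
kind: TrafficSplit
metadata:
  name: webapp-split
spec:
  service: webapp          # the apex service that clients address
  backends:
  - service: webapp-stable # 90% of traffic
    weight: 900m
  - service: webapp-canary # 10% of traffic, for the canary
    weight: 100m
EOF
```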
There are some really cool demos of this around Flagger, so if that's interesting to you, I would encourage you to go check out their demos. In addition to traffic split, we also have more APIs in SMI. Traffic telemetry is a uniform way to ask about these metrics independently of the service mesh, and that lets you build things like dashboards on top of this, whether you're using Linkerd or Istio or any of them. And we also have traffic policy, which is still in its alpha state, but is a way to set policies on which endpoints can talk to which services, et cetera.

Okay. Before I go further, are there any questions in the Q&A that we want to look at? It looks like there may be. Deepak asked: when should we choose Linkerd over Istio? I wasn't planning on talking about that explicitly. I think that Istio is a great solution for lots of complex policy problems, and we do find fairly consistently that folks who adopt Istio take a long time to adopt it. So if this is part of a longer architecture project, where you can really spend the time to get it right and really dig in and learn a lot of organizational things, Istio is probably a good approach; there's a lot of API surface area there, and it can do lots of things. Linkerd's focus on simplicity is meant to unblock you. If you're trying to get to Kubernetes today and you need visibility and security and reliability, Linkerd is something you can drop in, and then it will grow with you. We don't yet have some of the policy features that Istio has, but what we've focused on is what we think are the essential things you can't go without. Let's see, there are some other questions here; I'll get back to those towards the end, and I think the other question around Linkerd is much the same.

I'm going to take a quick deviation from my slides to show a demo, if that's acceptable to everyone. Okay. I have taken the liberty of deploying an app in Kubernetes in advance, and it's this typical bookstore-type application. I can see that it's all running and healthy, but that's about all I really know about the application. I could probably try to look through logs, but there's not a lot there. So I want to show you what Linkerd can do for this application in just a matter of minutes.

First things first, I'm going to install Linkerd, and rather than installing stable 2.7, I'm going to install the edge release, which we released yesterday. I kind of skipped over this earlier: we do edge releases weekly, so we have a very regular release process off of master, and then we release stable releases about every two months. We iterate quickly and QA the feature set on edge, and once we feel we have a stable feature set, we cut a stable release. So stable is about every two months, and edge is weekly. I've already upgraded to this week's edge release and can verify that; I don't have a server version yet. So the first thing I'm going to do, well, before I install it, is check my cluster. Sometimes your Kubernetes cluster can be configured in a way such that Linkerd will not just work in it, unfortunately; some cloud providers and some bare metal installs are especially affected by that. So we have this check that makes sure the cluster is in a good state.
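Concretely, the demo flow that follows boils down to three commands, shown here for reference (output elided; flags evolve between releases):

```sh
# Validate the cluster before installing anything:
linkerd check --pre
# Generate the control plane manifests and apply them:
linkerd install | kubectl apply -f -
# Verify that the control plane comes up (Prometheus can take a minute):
linkerd check
```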
My AKS cluster here is in a great state, so now I can go ahead and install. You can use things like Helm here; we've done a lot of work to make the Helm integration really good, and I would recommend it if you're going to do something in production. But today, because I'm doing a demo, I'm just going to apply the YAML directly. Okay: linkerd install, piped to kubectl apply. All right, so we created a couple of CRDs and then a whole bunch of config maps and things like that, plus a bunch of role bindings. This can also be split up, so that you can separate the role bindings and cluster role things from the user-level privileges.

Then we'll run linkerd check, and linkerd check will again make sure that everything starts successfully. We've validated that all of the configuration and credentials were uploaded properly, and now we're waiting for Prometheus to start. That shouldn't take too long, but we might get impatient. Let's see. Okay, there we go: that is good, and if we run it again, it will just work. Great. We see here that we've added, I don't know, ten small pods to the cluster.

And we can try to see stats for them. The output's not pretty, but what we find is that we have a bunch of stats in the linkerd namespace, but not yet in the default namespace. So: linkerd stat deploy. Nothing; we haven't meshed the books app or anything yet. All we've done is install Linkerd. We can see that Linkerd is running successfully, but we have no stats around the books app.

So how do we add Linkerd to the books app? Normally in production, you would go update some YAML file, check it into git, and roll that out. For demos, I'm just going to run this handy-dandy command: I've annotated the namespace so that Linkerd will be injected into everything in that namespace, and then I can do a kubectl rollout restart deploy, which will just restart them all. This will take a couple of seconds, probably. And see, where we used to have one container per pod, the new pods now have two containers: there's a sidecar container running with them. As the sidecar starts up, it generates a private key, then talks to the control plane using its service account token to validate its identity and get a certificate, and at that point they all start to communicate over TLS.

So, we see some of the old things are still in here. Okay, now we have data. We see that the traffic pod doesn't show anything, because these are all server-side stats. So we can go to the deployments, and now we see the outbound stats from those things as well, which is cool. We see the success rate's not perfect here, so we can do more introspection. But immediately, all we've done is add an annotation and restart the pods, and we now have traffic data: we can see success rates and we can see latencies.

Since the success rates are not perfect, we can do something like linkerd top. Which one was not healthy? Let's say the web app. Now we're using that tap functionality, that ad hoc tracing functionality, to connect to the web app process and start to look at exactly which requests are running, and we see latencies and counts of these things here. We can also use linkerd tap itself, which will just dump out raw request metadata. We can get really high-granularity JSON-structured data here, so you can start to script over this stuff if you want to; there are some cool jq things you can do there, for sure.
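For reference, the meshing and inspection steps from the demo look roughly like this; the jq filter at the end is a hypothetical example, since the exact JSON field names vary by version:

```sh
# Mark the namespace so the proxy injector adds a sidecar to new pods:
kubectl annotate namespace default linkerd.io/inject=enabled
# Recreate the pods so they pick up the sidecar:
kubectl rollout restart deploy -n default
# Golden metrics per deployment:
linkerd stat deploy -n default
# Ad hoc tracing as JSON, scriptable with jq:
linkerd tap deploy/webapp -n default -o json | jq 'select(.responseEnd != null)'
```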
And there's a bunch more in the CLI, but before we go further: this edges command will actually allow us to start to see who's communicating with whom, and whether it's secure or not. We see in this one case that the traffic pod had talked to an old web pod before it had any identity tied to it, so there was no security there, no TLS there. But all the other communication has been secured now, which is awesome.

Before I put this demo down, let's just open the dashboard; it's not just a CLI tool, we have a really nice web dashboard for all this stuff. Maybe here we'll get a better sense of what's going on. For one, we can get topologies. People incorrectly think that you need tracing to get topologies; we can do this all just with Linkerd's metrics, because we've annotated them with rich metadata. We can also click into the books app, and when we do, we again see its upstream and downstream services and their success rates. We also get a big live view of the calls, and actually which calls are failing. I can see that posting to the books JSON endpoint is really where the success problems are; also, reading this one book is a little hard too. That gives us a lot of actionable data: now I can actually go start to debug things. And we have the same set of features here that we have in the CLI.

Okay, so back to the slides. Well, actually, let's stop before we go further: any more questions about what we just saw? There might be some more things in the Q&A; I'll come back to the questions in a sec.

But the roadmap coming up: in 2.7 we just shipped a bunch of cert-manager features. 2.8 is coming up, hopefully towards the end of the month; it was going to be after KubeCon, but without KubeCon we might get it done earlier. That is going to be a bunch of stability and perf improvements. As I mentioned earlier, the Prometheus instance can become quite large in production, and most folks tend to have their own Prometheus installs and don't want to duplicate that with ours. So we're working on making it so folks can just plug their Prometheus into Linkerd and we don't have to run a second one. That'll be really cool.

A much bigger project we just started on is multi-cluster. That will allow us to start bridging clusters and doing cross-cluster routing and policy, which is super exciting. We've just shipped some of the first pieces of that into the code base, but it'll be slowly rolling out over the next couple of stable releases. It's a great time to come get involved with that, and we've written some blog posts on the parts of it that we understand well, so if that's exciting to you, please get in touch, because there's a lot more to talk about there.

A lot of my focus right now is on getting mTLS for everything. I said we do HTTP and gRPC by default, and I want to extend that to all TCP traffic, so that we automatically mTLS it. That will allow us to start doing things like traffic policy in a much better way. So that's happening soon; I'm expecting the next set of features there in the 2.9 time frame, and we're working steadily towards implementing the SMI traffic policy APIs.
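On that bring-your-own-Prometheus point from the roadmap: since each proxy already serves Prometheus metrics on its admin port, an external Prometheus can scrape the mesh directly today. Something like the following scrape config is the general shape; the exact relabeling (and the linkerd-admin port name) should be checked against the Linkerd docs:

```sh
cat >> prometheus.yml <<'EOF'
scrape_configs:
- job_name: linkerd-proxy
  kubernetes_sd_configs:
  - role: pod
  relabel_configs:
  # Keep only the proxy sidecar's admin port (4191), which serves /metrics:
  - source_labels:
    - __meta_kubernetes_pod_container_name
    - __meta_kubernetes_pod_container_port_name
    action: keep
    regex: linkerd-proxy;linkerd-admin
EOF
```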
Additionally, we get requests for exotic protocol support: things like Kafka and Redis, and other things, I'm sure. This is a big opportunity where I'm looking for folks to help contribute these things to the codebase. It's not technically challenging; it's really about figuring out what the value proposition is for each of these various protocols, and then working with us to integrate that into the system. All of this is on GitHub, all of this is open source. We have a ton of contributors from a ton of organizations, so if this is something that appeals to you, and we are lacking a feature or lacking docs you think we need, please come get involved and ask questions.

We're just in the process of launching a new RFC process, and the intent there is to make it easier for folks who aren't already deeply involved with the project to start proposing bigger changes. We find that some folks will just drop into issues wanting to think through large new features, and we think an RFC process is a slightly more structured way to deal with that. We have a great community on Slack, lots of questions and lots of people answering questions, so I encourage you to join our Slack. Our mailing lists are not so active, but they're good for information. We do regular, periodic community calls, and we've done things like formal security audits: Cure53 did a really great audit of our code base last year, and we're working with the community right now to do some auditing of the underlying TLS infrastructure we use, which is really, really exciting, and we couldn't do it without the CNCF, that's for sure.

Okay, all that said, I think I've talked enough in a big stream of thought, but hopefully there are some more questions now. Paul wants to know if we're hiring. That's a good question. We're always opportunistically hiring the right people for needs that we have, so talk to us, but I expect we'll be hiring more later in the year, or next year, after some of Buoyant's business things progress. Stay in touch for sure.

Kurt asks how to export the configs to GitOps-style deployments; I wasn't sure at first what style of deployment was meant, but I'm willing to learn. And Deepak asks: actually, in which cases is it not a good idea to use a service mesh, and does scale play any role in this decision? That's a really good question, and I could probably talk for 40 minutes on it. I think the short answer is that all mature microservice architectures end up having something like a service mesh; the question is whether it's decoupled from the application itself or integrated as a library. When I was at Twitter working on this, we used Finagle, which was a library, but it was effectively a service mesh: a rich, smart data plane that always talked to service discovery, that knew how to learn timeouts and policy information and all that stuff and apply it to the data path. So it's inevitable that you end up with something like that in a sophisticated microservice architecture, or really any microservice architecture. The trade-off then becomes whether you have the control to keep a uniform code base and the engineers to maintain that infrastructure, or whether it makes more sense to bring this in as a separate component that can be layered in. That's really the cost trade-off that has to be assessed there.
My hunch is that in the majority of cases where you do not have an existing data plane solution, a service mesh is going to be the cheaper approach in the long run, especially in terms of staffing. Those things can be quite complex, and if you have an extremely uniform service you may be able to get away with it, but those things don't last forever.

Oh, okay, so Kurt was asking about GitOps-style deployments. Yes, that's a big focus of ours right now. We have been focused on making the Helm integration nice, since Helm is kind of the de facto standard there right now. We think there are a lot of other opportunities for doing more integrations there, but for instance, the cert-manager integrations we did in 2.7 were done specifically so that we could support GitOps-style workflows, where we don't want folks to have to check in Linkerd's signing credentials; we want that to be managed by an external system, so cert-manager is a big part of that. We expect that to work very well, and if you find problems or have questions, come into Slack or GitHub, we'd love to help you solve them. And I think I'm about out of good answers, unless there are easier questions.

I don't know much about identity-aware proxying, but I think, reading into the question here, it's: with so many security services like Cloud Armor and IAP, the identity-aware proxy, how does using a service mesh like Linkerd help in securing the traffic? I would view those as different layers in the same complete solution. Linkerd is not really focused on dealing with ingress or user-facing traffic in any way. We of course have to proxy some of it, but we view that as mostly forwarding TCP connections to an ingress, and we want there to be smart ingresses that deal with OAuth and the various authentication systems you may need there. Where we see a service mesh really coming in is in the workload-to-workload, service-to-service communication: how do we extend the identity model to deal with services and not just people? So I view them both as essential components in a zero-trust solution. I hope that answers the question.

Okay, and if there are no new questions popping up, I think I have exhausted my voice. Oh, a question from Vivian: is it possible to expose Linkerd metrics without the use of the internal Prometheus in the current version, 2.7, or will this be part of the next release? So, Linkerd's metrics are automatically exposed by the proxies themselves: you're always able to connect to a proxy on port 4191 and curl the /metrics endpoint, and you'll get all the Prometheus metrics you want from there. You don't need a Prometheus installation to get metrics out of the system. To be able to use our dashboards, or run linkerd stat or any of those commands I demoed today, Prometheus is a necessary component, so for a lot of the workflow that we expect, you need something like Prometheus. But you can hack it yourself, I'm sure; it's all open source.
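To make that answer concrete, here's a quick way to eyeball a proxy's metrics without any Prometheus at all, using the booksapp's webapp deployment from the demo as the example target:

```sh
# Forward the proxy's admin port locally, then curl the metrics endpoint:
kubectl port-forward -n default deploy/webapp 4191:4191 &
curl -s localhost:4191/metrics | head
```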
Okay, another question, from Bailey Hayes: you've called out Helm several times; is the plan for GitOps changes Helm, or will we Kustomize? That's a good question. I think we are looking for people who are Kustomize experts to come help us fill out that story. I don't think Linkerd has too much of a horse in the race of how you do installs; we want to make sure we have the generic pieces exposed so that it's possible to go do anything you want. I know Thomas, who works with us, has done some stuff with Kustomize here, but I just don't know enough about it, and I would love folks in the community to get involved there and show us some good recommendations. I'd also point out that last week we saw a demo from someone using Terraform to manage these things, so it's not just Helm; it can be done in lots of different ways, but we're looking to the community for solutions there, for a large part of it. Good question.

Oh great, more and more questions, I love this. If the Kubernetes cluster uses Calico for networking, does Linkerd support it? Yes: we have a CNI plugin and we work well with other CNIs, so no blockers there. Let us know if you find any issues. And: is there a guide you can follow if you are running Linkerd in large deployments, 3,000-plus pods, for scaling the control plane? There are, I don't know, there probably are some things on the internet around that. However, I will also add that companies like Buoyant, which I work at, do offer support for very special installations and things that need a little higher touch. If that's not appealing to you, I would say come to GitHub or Slack and start the questions there, and we can find folks in the community who've been through some of these things who can help you. We're not a top-down community; there are a lot of folks figuring these things out on the ground with us, so use them, not just us, or not just me.

Okay, I think, Taylor, correct me if I'm wrong, we're roughly good on time here? Yes, we're great on time. Are there any last questions from anybody in the audience? Vivian has a question: will there be support for rotating CA certs? That's a hot topic of conversation, and a really good question. I am probably going to ask you to take that offline and come to Slack or GitHub, where there are multiple schools of thought on it, and we want to help your use case. There are some security concerns around making CA rotation easy, so we're trying to balance the security risks there with the difficulties around managing these configs.

Any other questions before I go? And again, come to Slack or GitHub and you'll find us there too; we have community meetings, probably towards the end of March. Any clients I can mention? So I will go back, oh, that's not what I meant to do, to the users slide up here. We have a big slide of folks who are using Linkerd; some of these are Linkerd 1, but many of them are Linkerd 2. Other than these that are up here, I don't want to out anyone's infrastructure plans, but again, if you come into Slack you'll find people at various companies who are happy to talk about what they're doing.

Kelly Burr has another really good question: any thoughts on giving Linkerd the ability to have an intermediate cert with a private key, so it can man-in-the-middle and inspect traffic for services that have to be TLS end-to-end, like Elasticsearch? That is a fantastic question. There's someone on the team who really, really wants to do it. It kind of scares me, because it's a little bit mischievous, but I think that'd be a really cool issue or RFC to open up. So if that's something that's interesting to you, I'm not going to say no to it; we just haven't done it yet. Good questions, really good questions, folks. Okay, I think the questions have stopped. I agree. Thanks, everybody, for all your great questions, and Oliver for an awesome presentation, and
thanks, everybody, for joining us today. The webinar recording and slides will be online later today at cncf.io/webinars if you'd like to download the slides and check them out yourself. We look forward to seeing everyone at a future CNCF webinar. Thanks so much, everybody. Thanks, Oliver. Bye-bye. Thanks.