The Cube presents KubeCon + CloudNativeCon Europe 2022, brought to you by Red Hat, the Cloud Native Computing Foundation, and its ecosystem partners.

Welcome to Valencia, Spain, and KubeCon + CloudNativeCon Europe 2022. I'm Keith Townsend, alongside Enrico Signoretti, senior IT analyst at GigaOm. Welcome back to the show, Enrico.

Thank you again for having me here.

First impressions of KubeCon?

Well, great show. As I mentioned before, I think we are really in this very positive mood of talking with each other, and people wanting to see the projects and the people that build the projects. It's amazing. A lot of interesting conversations on the show floor and in the various sessions. Very positive mood.

So this is going to be a fun one. We have some amazing builders on the show this week, and none other than William Morgan, CEO of Buoyant. What's your role in the Linkerd project?

So I was one of the original creators of Linkerd, but at this point I'm just the beautiful face of the project.

Speaking of the beautiful face of the project, Linkerd just graduated as a CNCF project.

Yeah, that's right. Last year we became the first service mesh to graduate in the CNCF. Very proud of that, and that's thanks largely to the incredible community around Linkerd that is just excited about the project and wants to talk about it and wants to be involved.

So let's talk about the significance of that. Linkerd is not the only service mesh project out there. Talk to me about the level of effort to get it to the point that it's graduated. You don't see too many projects graduating in the CNCF in general. So let's talk about the work needed to get Linkerd to this point.

Yeah, so the bar is high, and it's mostly a measure not necessarily of the project being technically good or bad, but of the maturity of the community around it. Is it being adopted by organizations that are really relying on it in a critical way? Is it being adopted across industries?
Is it having a significant impact on the cloud-native community? And so for us, the work involved in that was really not any different from the work involved in maintaining Linkerd and growing the community in the first place, which is: you try to make it really useful, you try to make it really easy to get started with, you try to be supportive and to have a friendly and welcoming community. And if you do those things, you kind of naturally get to the point where there's a really strong community full of people who are excited about it.

So from the point of view of users adopting this technology, are we talking about everybody, or do you really see large organizations with large Kubernetes cluster infrastructures adopting it?

Yeah, so the answer to that has changed a little bit over time, but at this point we see Linkerd adoption across industries, across verticals, and from very small companies to very large ones. One of the talks I'm really excited about at this conference is from the folks at Xbox Cloud Gaming, who are going to talk about how they deployed Linkerd across 22,000 pods around the world to serve, basically, on-demand video games. Never a use case I would have imagined for Linkerd. And at the previous KubeCon, the virtual KubeCon EU, we had a whole keynote about how Linkerd was used to combat COVID-19. So all sorts of uses, and whether it's a small cluster or a large cluster, it's equally applicable.

Well, as we talk about Linkerd and service mesh, we're obviously going to talk about security, application control, et cetera. But in this climate the software supply chain is critical, and we think about the open source software supply chain. Talk to us about the recent security audit of Linkerd.
Yeah, so one of the things that we do as part of being a CNCF project, and also, I think, as part of our relationship with our community, is we have regular security audits, where we engage security professionals who are very thorough and dig into all the details. Of course, the source code is all out there, so anyone can read through the code, but they'll build threat model analyses and things like that. And then we take their report and we publish it. We say, hey, look, here's the situation. So we have earlier reports online, and this newest one was done by a company called Trail of Bits. They built a whole threat model and looked through all the different ways that Linkerd could go wrong. And they always find issues, of course. It would be very scary, I think, to get a report that was like, no, we didn't find anything...

Yeah, it was clean.

Yeah, everything's fine, should be okay, I don't know. But they did not find anything critical. They found some issues that we rapidly addressed, and then everything gets written up in the report and we publish it as an open source artifact.

And do they give you advance notice, so that if something comes up you can act on the code before somebody else discovers the issue?

Yeah, yeah, they'll give you a preview of what they found. And often, it's not like you're going before the judge, the judge makes a judgment, and then it's off to jail, right? It's a dialogue, because they don't necessarily understand the project. Well, they definitely don't understand it as well as you do. So you are helping them understand which parts are interesting to look at from the security perspective and which parts are not that interesting. They do their own investigation, of course, but it's a dialogue the entire time. So you do have an opportunity to say, oh, you told me that was a minor issue; I actually think that's larger. Or vice versa: you think that's a big problem?
Actually, we thought about that, and it's not a big problem because of whatever. So it's a collaborative process.

So, Linkerd: you've been around a while. When I first learned about service mesh, Linkerd was the project I learned about. It's been there for a long time. You just mentioned 22,000 clusters. That's just mind-boggling.

Pods, 22,000 pods, yeah. Clusters would be great, though.

Yeah, yeah, clusters would be great too, but it's still 22,000 pods. Yeah, it's a big deployment. That's the big deployment of Linkerd, but it goes all the way down to the smallest sets of pods as well. What are some of the recent project updates, some of the learnings you brought back from the community and used to update the project?

Yeah, so a big one for us is on the topic of security. A big driver of Linkerd adoption is security, less on the supply chain side and more on the live traffic side. So things like mutual TLS, so you can encrypt the communication between pods and make sure it's authenticated. One of the recent feature additions is authorization policy, so you can lock down connections between services. You can say service A is only allowed to talk to service B, and I want to do that not based on network identity, not based on IP addresses, because those are spoofable and, as an industry, we've gotten a little more advanced than that, but actually based on the workload identity as captured by the mutual TLS certificate exchange. So we now give you the ability to restrict the types of communication that are allowed to happen on your cluster.

So, okay, this is what happened. What about the future? Can you give us some suggestion of what is going to happen in the medium and long term?

I think we're done, you know. We graduated, so we're just going to stop. What else is there to do? There's no grad school, you know. No, no. For us, there's a clear roadmap ahead, continuing down the security realm for sure.
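The identity-based, service-to-service restriction described above can be sketched with Linkerd's policy resources. This is a hypothetical example using the Linkerd 2.11-era `Server` and `ServerAuthorization` CRDs; the workload names (`service-a`, `service-b`), namespace (`demo`), and port are invented, and API versions may differ in your release:

```yaml
# Hypothetical sketch: allow only service-a to call service-b, keyed on
# mTLS workload identity rather than IP addresses. Check your Linkerd
# version's policy reference for exact apiVersions and fields.
apiVersion: policy.linkerd.io/v1beta1
kind: Server
metadata:
  namespace: demo
  name: service-b-http
spec:
  podSelector:
    matchLabels:
      app: service-b        # the workload being protected
  port: http                # named container port on service-b's pods
  proxyProtocol: HTTP/1
---
apiVersion: policy.linkerd.io/v1beta1
kind: ServerAuthorization
metadata:
  namespace: demo
  name: allow-service-a
spec:
  server:
    name: service-b-http
  client:
    meshTLS:
      # Authorize by the identity in the client's mTLS certificate,
      # derived from its Kubernetes service account.
      identities:
        - "service-a.demo.serviceaccount.identity.linkerd.cluster.local"
```

With these applied, connections to `service-b` from any meshed client other than `service-a` (and any unmeshed client) would be denied by the proxy, regardless of source IP.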
We've given you the very first building block, which is at the service level, but coming up in the 2.12 release we'll have route-based policy as well, so you can say this service is only allowed to call these three routes on this endpoint. And we'll be working later on things like mesh expansion, so we can run the data plane outside of Kubernetes: the control plane will stay in Kubernetes, but you'll be able to run the data plane on VMs and things like that. And then, you know, I like to make fun of WASM a lot, but we are actually starting to look at WASM and the ways that it might actually be useful for Linkerd users.

So we've talked a lot about the flexibility of a project like Linkerd. You can do amazing things with it from a security perspective, but we're still talking to a DevOps crowd of developers who are spread thin across their skill set. How do you balance the need for flexibility, which usually comes with more nerd knobs, with serving a crowd that wants ever-higher levels of abstraction and simplicity?

Yeah, that's a great question, and this is what makes Linkerd so unique in the service mesh space: we have a laser focus on simplicity, and especially on operational simplicity. You know, we can make it easy to install Linkerd, but what we really care about is: when you're running it, and you're on call for it, and it's sitting in this critical, vulnerable part of your infrastructure, do you feel confident in that? Do you feel like you understand it? Do you feel like you can observe it? Do you feel like you can predict what it's going to do? Every aspect of Linkerd is designed to be as operationally simple as possible. So when we deliver features, that's always our primary consideration, and we have to reject the urge.
We have an urge as engineers to want to build everything, you know, the ultimate platform to solve all problems, and we have to really be disciplined and say: we're not going to do that. We're going to solve the minimum possible problem with the minimum set of features, because we need to keep things simple, and then we need to look at the human aspect of that. And I think that's been a part of Linkerd's success.

And then on the Buoyant side, of course, I don't just work on Linkerd; I also work on Buoyant, which helps organizations adopt Linkerd, increasingly large organizations that are not service mesh experts and don't want to be service mesh experts. They want to spend their time and energy developing their business, right? Building the business logic that powers their company. So for them we have recently introduced fully managed Linkerd. Even though Linkerd has to run on your cluster, right? The sidecar proxies have to be alongside your application. We can actually take on the operational burden of upgrades and trust anchor rotation and installation, and you can effectively treat it as a utility and have a hosted-like experience, even though the actual bits, or at least most of them, have to live on your cluster.

I love the focus of most CNCF projects. You know, it's peanut butter and jelly, not peanut butter trying to become jelly. What's the peanut butter to Linkerd's jelly? Where does Linkerd stop, and what are some of the things that customers should really consider when looking at service mesh?

Yeah, that's a great way of looking at it. And I actually think that philosophy comes from Kubernetes. One of the reasons Kubernetes itself was so successful is that it had some clearly delineated boundaries. It said, this is what we're going to do, right? And this is what we're not going to do.
So: we're going to do layer three and four networking, right? But we're going to stop there. We're not going to do anything with layer seven. And that's what allowed the service mesh to emerge. So I guess, to extend your analogy, the bread of the sandwich is Kubernetes, and then Linkerd is the peanut butter. And the jelly, I think, is every other aspect of building a platform. The audience for Linkerd, most of the time, is platform owners, right? They're building an internal platform for their developers to write code on. And as part of that, of course, you've got Kubernetes, you've got Linkerd, but you've also got a CI/CD system. You've also got a code repository, whether it's GitLab or GitHub or wherever. You've got other tools that are enforcing various other constraints. All of that is the jelly, you know, in this analogy, which is getting complicated now, of the platform sandwich that you're serving.

So talk to us about trends in service mesh, as we think of the macro.

Yeah, yeah. It's been an interesting space, because, as we were talking about a little before the show, there was so much buzz, and then what we saw was that it basically took two years for that buzz to become actual adoption. Now a lot of the buzz is off on other exciting things, and the people who remain in the Linkerd space are very focused on: I actually have a real problem that I need to solve, and I need to solve it now. So that's been great. In terms of broader trends, one thing we've seen for sure is that the service mesh space is kind of notorious for complexity, and a lot of what we've been doing on the Linkerd side has been trying to reverse that idea, because it doesn't actually have to be complex. There's interesting stuff you can do, especially when you get into the way we handle the sidecar model. It's actually a wonderful model operationally.
It feels weird at first, and then you're like, oh, actually this makes my operations a lot easier. So a lot of the trends that I see, at least for Linkerd, are doubling down on the sidecar model, trying to make sidecars as small and as thin as possible, and trying to make them kind of transparent to the rest of the application.

William Morgan, one of the coolest Twitter handles I've seen, @wm on Twitter. That's actually a really cool Twitter handle. Thank you. CEO of Buoyant, thank you for joining the Cube again, Cube alum. From Valencia, Spain, I'm Keith Townsend, along with Enrico Signoretti, and you're watching the Cube, the leader in high-tech coverage.