All right, so I love DevOps. DevOps has been a foundational part of my life for about the last seven years. That's when I first heard the term and was introduced to the practices that come with it, though I'd been practicing some of those things before. So I can say with a lot of confidence that DevOps has made a pretty large impact on my life. But I think we're at a point where DevOps is changing, and DevOps is always changing, but it might be changing a little faster than we're used to. So DevOps is dead, right? I've heard proclamations like this. Maybe you've heard it too. People have said it. Why are they saying it? Is it really dead? Folks like Chris and Bridget have talked about these things, so I'm not going to belabor the point. But I'll frame up the discussion like this: we are at an inflection point. Tools and technology are changing very rapidly, and I think the nature of infrastructure and development is also changing very rapidly. That's led some folks to make proclamations like "DevOps is dead." I don't really believe that.

I was lucky enough to go to Config Management Camp in Belgium this year. Anybody else go to that? Or maybe FOSDEM? Raise your hands. Got one hand in the back. The cool thing about the event was that it was the fifth year it had been happening. To celebrate that, the founders of the main config management projects, that would be Adam Jacob from Chef, Luke Kanies from Puppet, and Mark Burgess from CFEngine, got together and did a series of keynotes where they reflected on what's happened in the config management space, where it's going, what we've accomplished, and where the technology is headed next. I highly recommend those talks; I have links to them at the end of this deck. But there's a theme that ran through those talks that really struck a chord with me, and I thought it would be very relevant for us here today at DevOps Days.

I think what's really happening is that DevOps is at an inflection point. Technology shifts are rapid, and they're only getting faster, it would seem. Every time we go through a major change of tool sets, we happen to leave behind the practices that taught us valuable lessons in the shift that came before. We have a tendency to lose those lessons of the past and not bring them forward with us into the new paradigm, into the new world. And I'll show you what I mean.

I've seen a lot of shifts in my day. I'm probably going to date myself here, but I remember VMware Workstation 1.0. Anybody else remember that? All right, cool. My people. This was when virtualization really went mainstream. Virtualization had been around for decades before this, but it was VMware that made it so easy to use. They packaged it up and made it super simple, and that's when it proliferated and went into the mainstream. Does that sound familiar to anyone? I think we're seeing that story again.
And what happened when we made this shift? In the days before it, I dealt with racks and racks of servers. I spent a bunch of time in data centers making Cat5 cable, setting up big iron, wheeling out a crash cart to troubleshoot when things went sideways. There was an entire body of practice around how we managed our data centers: how we ensured hardware resiliency and power redundancy, how we made systems bulletproof and always available, because that's how we built production applications back then.

But then came the big shift, and all of those rules went out the window when we moved to virtual. Data centers were now commodity. The real value was in things like live migrations, zero-downtime moves between machines, resource pools, capacity planning, golden image management; those things became what was really interesting. And oh my god, those physical hardware days were awful, right? Troubleshooting with the crash cart, resiliency with redundant power supplies. We were fools. What were we doing? Virtualization, that is the answer. That is the key. Through virtualization all of our problems will be solved, and we're never going to worry about those old problems again. That was the promise.

And then came the shift to AWS. We moved to the cloud. And oh my god, those virtualization days were terrible. We managed hardware on-prem. What were we thinking? How ridiculous. We were fools. The cloud is going to solve all of our problems, and we're going to forget all of those old lessons from the past, like resource planning and resource pools and capacity. But wait, how do we build resilient production applications in the cloud? How do we get reliable networking? How do we ensure redundancy? How do we get resilient storage? What happens when an entire data center, nay, an availability zone, effectively loses power, or just goes offline or becomes unavailable? What then? We're never going to have those old problems again, until inevitably we do. So why would we want to bring those old experiences into this new world?

What happens is that we reinvent solutions to the practices we eschewed when we made the shift into a new world. The technology changed, great, we don't need any of those old lessons. We seem to suffer from a kind of technology amnesia, or at least a very optimistic, maybe slightly blind, enthusiasm. But I think that enthusiasm exists because we want to believe. We want to believe that this new shift is the one that's going to get it right, the one that's going to move us forward, because all the other ones were wrong. We just want to start fresh, so we don't need those old broken things anymore, until inevitably we do. We still face the same problems, and so we reinvent solutions to long-known, persistent problems. All of those old problems are solved in this new world, so why do we need them? Networking, troubleshooting, resiliency, managing our fleet: we got this, right?
We don't need those old tools. We don't need DevOps anymore. DevOps is dead. I don't really believe that. In case you're wondering, you are in the right talk; I'm eventually going to talk about the service mesh. But I decided to change up this talk last night, because I think there's a really strong connection to the things we're talking about here at DevOps Days: how do DevOps practices play out in new technology and new frameworks? So I'm going to talk about what a service mesh is, the problems it solves, what it can do for you, and where it fits. But relevant to this group, I thought it might be worth sparking a few conversations about how we take the practices we know and love and plug them into new technology frameworks. We're going to look at a couple of examples as I go through the tooling and how it works. What I would love is if, over the next couple of days, we take some time to talk to each other about what those practices are, how they can fit, and how we codify the old lessons we've learned into these new tools. What are the new patterns? How do they play out? I think the service mesh is a good context for that conversation.

But I should do some introductions first. Hi, everyone. My name is George. I currently work at Buoyant. Buoyant makes the Linkerd service mesh and the Conduit service mesh. We are a very service-mesh-y company: a service mesh to solve your service mess. Try saying that fast three times. Prior to Buoyant, I was at a little company called Chef that you might have heard of. Chef convinced me to go into the vendor space; I was always an engineer before that, but I was so passionate about automation and config management and DevOps that I really wanted to push it forward. I was at Chef for five years. It was a really good run, and I love that company. Before that, I was an infrastructure engineer, mostly in the video game industry and in finance. I spent a little bit of time doing personal data mining; ask me about that sometime. But mostly I'm a sysadmin. It's who I am, it's who I've been, and it's probably who I will continue to be. That's my background: large distributed infrastructure problems, and how we fix them. That's what brought me to Buoyant. I think Buoyant is on the cusp of some very new, interesting technology practices, some cutting-edge patterns that I think we're all going to need. And that's what we're going to talk about today, or as my talk was called, "Service communication is a first-class citizen." So can you tell me what that means, George, maybe in some less buzzwordy ways? Yeah, hopefully. I think we can do that.

What I'm going to do is tell you how we got here by way of my own journey into this space. I was a microservices skeptic. I did not think that microservices were actually going to pan out, mostly because I had no idea how a number of management problems were going to be solved in this new kind of environment. It turns out that many of us didn't. But I think we're at a point now where we've matured the technology and the practice enough that we no longer do things like use the words container and microservice interchangeably. Containers are a packaging format. Microservices are an architecture pattern that unlocks new benefits.
What we do with this pattern is take applications that were centralized and turn them into many small distributed components. But what happens when you take that to its logical conclusion? When you have a monolith, and this is not uncommon, you have a web tier, an app tier, and a database tier: presentation, application, and data layers. It's pretty easy to predict where traffic is coming from and where it's going to; there are very few hops in between. Maybe you have a load balancer in there, and even then the relationships are a little webby, but generally speaking it's understandable. You can wrap your mind around what's happening. But what happens when you take that monolith and start decomposing it into much smaller parts? Different parts of your organization start to own different parts of the stack, usually different teams working on different schedules, with different priorities, different needs, and different ways of working. What happens when you take that to its logical conclusion? Can you really wrap your mind around what's happening in that world?

This next slide is a map of communicating services at Twitter, and the thing I love about this map is that it's from circa 2012. This is before Docker was a thing, before cloud native was a thing. What happened at Twitter is that they had such a massive influx of traffic that they needed to radically rearchitect the way their services communicated, so they went down the path of microservices. That's not unique to Twitter; all the web-scale giants of the era had to do it: Facebook, Netflix, a few others. But the reason I bring up Twitter is that Twitter solved part of this distributed application mess with a network library called Finagle, and Finagle was the predecessor to Linkerd, which was the first self-described service mesh.

Maybe let's look at this complexity problem a different way. If that death star diagram, with all those weird services in it, didn't present the complexity issue in a graspable way, let's look at it differently. When we build microservices, service-to-service communication suddenly explodes, and you introduce a whole new class of communication into your infrastructure. In that three-tier monolith diagram we saw earlier, it's pretty easy to predict where traffic is coming from and going to: it goes from the first tier to the second tier to the third. That's usually called a north-south pattern, because it moves from the top of the diagram down. But with microservices, you get a class of communication that goes east-west: a lot of communication between services within the same tier. And you usually introduce that class of communication into your infrastructure without thinking about it. It's just a network call; the network is always there, always available. But this new class of communication is now the fundamental determining factor for how your applications behave at runtime. And that's pretty critical.
And so what happens is that you start slamming into all sorts of new problems when you start running in production. There are different directions we could take this talk, but I'm going to dig into one problem in particular. Whenever I talk to folks about this, there's one problem that always seems to resonate universally, and that is the need for observability and visibility, for just understanding what is happening in your stack. There's a blind spot we have been living with in the service communication layer for years, and it's basically this: there's a lot you can infer about the state of service communication in your stack, but you can't really measure it directly. Let me show you what I mean.

If you're managing production applications, you probably have a dashboard like this, or somebody in your infrastructure is looking at one. Here we're looking at network monitoring stats, L3 and L4 type stats: things like bandwidth utilization, transmission failures, packet loss, things of that nature. And that's important data. You need that data; you need to know if your network is healthy. But just because your network is healthy, that doesn't tell you anything about the state of service-to-service communication. Those inter-service requests could be failing and you wouldn't know it unless you looked further up the stack.

There are different ways to look at that. Here we're looking at some data from Smokeping. You could use a latency monitoring tool and start getting closer to measuring service health. This breed of tool provides external availability data: it tells you how a service is responding to the rest of the world. But that's a very passive view. If you notice suboptimal performance, a spike in latency for example, do you know what actually caused it? You have to go back and triage it with an internal data source, maybe an event stream or a log for the related service, along with the stats we were looking at down at the L3 and L4 layer. You put all of that together and then you can sort of infer what happened: what was the problem, what was going on with my service communication?

We could use an in-band tool instead; so far these have been out-of-band tools. We could use a tool like tcpdump. I love tcpdump; you can do some great things with it. But in any reasonable production setting, using tcpdump to inspect traffic is like drinking from the fire hose. You can use tcpdump to filter for what's going on, but you need some bit of known data: a payload, a port, a destination, some piece of known information. With enough scaffolding you can build some pretty decent solutions, and I've done some good things with tcpdump in my day. But again, that's a pretty low-level inspection, and even when you do find the source of an error, you still have to triage it with all of those other data sources we've been talking about. It's still difficult to directly see what's happening inside your service communication in an effective way. So if you've managed production applications before, you know that troubleshooting dance. Anybody familiar with troubleshooting your apps that way?
Right. Yeah, a bunch of us have done that before. And generally speaking, this has been enough. It's been enough because troubleshooting service-to-service communication has been a relatively infrequent event: we know where requests are coming from and where they're going to, and we know where to look when things go wrong. There wasn't a need for a more elegant solution. Until this. Until microservices.

When your three-tier monolith becomes dozens or hundreds or thousands of distributed microservices, it's not always clear where requests are coming from or going to, and it's not even necessarily clear what the relationships between these services are. In that Twitter map there's a service called Gizmoduck, and Gizmoduck at Twitter was the user service. Pretty much every other service wants to talk to the user service: which tweets should be displayed, which notifications belong to this user, and so on. But as a developer on that platform, if I make a call to Gizmoduck and it fails, the only thing I know is that Gizmoduck failed. Maybe it's actually one of the underlying dependencies that failed; maybe Gizmoduck is calling out to something like my browsing preferences, and that service is what really failed. As someone interacting with this platform, do you really know what the problem was? Can you really wrap your mind around the fact that those interdependencies exist? How are they managed? How do you bring some sanity to that world? You don't even know what's failing.

When you start dealing with service dependencies, it's like dealing with any other class of dependencies, like resolving Ruby gem dependencies or Java library dependencies, except that when these things go wrong, you don't have standard out telling you what failed. It's like trying to troubleshoot those dependencies, except you need something like DTrace to dig in and figure out what's happening. So that's a problem, and it's a problem you very quickly run into in production. There's got to be a better way to do this.

Historically, some teams have dealt with this problem like so: you solve for that blind spot by building embedded custom monitoring solutions and control logic, some built-in telemetry and debugging tools, and you embed that into your services as a communication library. Which means: load this library, use it, route all requests through it, and let it manage your traffic flow. Then you embed that communication library into another service, and into another, and into another, and you keep multiplying this critical component into many disparate endpoints in your infrastructure. So now your infrastructure starts to look a little more like this. One of the main problems with that approach, aside from managing config and doing updates for hundreds of disparate endpoints, is that the code is very tightly coupled to the application: your network management code now lives as part of your app logic. So let's say your application is written in Node.js. Good luck supporting Go or Python.
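To make that concrete, here's a rough sketch of what one of those embedded communication libraries might look like. The names here (ResilientClient, call) are hypothetical, not from any real project; the point is simply that this logic lives inside each service's process, in each service's language.

```python
# Hypothetical sketch of an in-process communication library that every
# service must import and route its requests through.
import random
import time
import urllib.request


class ResilientClient:
    """Wraps every outbound request with retries, timeouts, and metrics."""

    def __init__(self, retries=3, timeout=2.0):
        self.retries = retries
        self.timeout = timeout
        self.stats = {"success": 0, "failure": 0}

    def call(self, url):
        for attempt in range(self.retries):
            try:
                with urllib.request.urlopen(url, timeout=self.timeout) as resp:
                    self.stats["success"] += 1
                    return resp.read()
            except OSError:
                # Back off and retry; real libraries add jitter, budgets, etc.
                time.sleep(0.1 * (2 ** attempt) * random.random())
        self.stats["failure"] += 1
        raise RuntimeError("request failed after retries: " + url)


# Every service embeds its own copy of this logic -- and a Go or Node.js
# service needs a separate, compatible port of the same library.
client = ResilientClient()
```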
If your applications are polyglot, that means you need to rewrite that networking library in another language, and then make sure the new library stays compatible with the old one. You can see where this is going and the management pains that come with it. There has to be a better way, and that brings us to the service mesh. That's the idea behind the service mesh: it exists to decouple that logic from your application code, and it provides constructs to monitor, manage, and control your production applications. Basically, it takes all of that logic that used to live beside your applications and pushes it down into the infrastructure layer, where it can more easily be globally managed.

How does that play out? What does that look like? It usually starts like this: you have a data plane of some sort, and the data plane is basically a series of interconnected proxies. When you look at this diagram, these little rectangles could each be a container or a pod or maybe a physical host, but per endpoint you have a proxy that you route all traffic through, and those proxies talk to every other proxy. Inside those proxies is where you have all of that rich telemetry and control logic. And when you, as a human, interact with the service mesh, you interact with the control plane. The control plane exposes primitives so you can tune all of the hooks exposed in those proxies and compose policy that defines how your apps should behave at runtime. You have control of that: you can compose it, you can set it, and the data plane reads it and alters its behavior accordingly. That's the high-level look at how a service mesh comes together.

And that brings us back to service-to-service communication as a first-class citizen. Being a first-class citizen basically means that this entity supports all of the operations that are available to other entities in your infrastructure. The cool thing is that all of our infrastructure is abstracted now, so we have these infrastructure objects, and everything you expect from those objects, you should be able to expect from service communication as well. Anything you are managing in your infrastructure, any production-grade object, should give you visibility, control, and management: you should be able to see it and know what it's doing, you should be able to change what it's doing whenever you need to, and you should be able to manage the events that influence the entire lifecycle of that object. That should be true of all objects in your infrastructure, and when I say that service communication is a first-class citizen, I mean it should have those properties as well. Let's look at how those play out, and let's dive back into some examples that I think are relevant for DevOps practices. In general, any service mesh should give you a few things: visibility, resilience, and some level of security. We're going to keep talking about visibility, since I started there. I put a couple of things on this slide.
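Before digging into visibility, here's a deliberately toy sketch of that data plane and control plane split: a per-endpoint proxy that reads policy from a control plane and records telemetry for every request it forwards. All of the class and method names are hypothetical; no real mesh exposes an API like this, this is just the shape of the idea.

```python
# Illustrative sketch only: a toy sidecar proxy consulting control-plane
# policy and recording per-request telemetry.
import time


class ControlPlane:
    """Where operators compose policy; the data plane reads from it."""

    def __init__(self):
        self.policy = {"timeout_s": 1.0, "retries": 2}

    def get_policy(self, service):
        return self.policy


class SidecarProxy:
    """One of these sits next to every endpoint and handles its traffic."""

    def __init__(self, service, control_plane, transport):
        self.service = service
        self.control_plane = control_plane
        self.transport = transport      # the actual network call
        self.metrics = []               # rich per-request telemetry

    def forward(self, request):
        policy = self.control_plane.get_policy(self.service)
        start = time.monotonic()
        try:
            response = self.transport(request, timeout=policy["timeout_s"])
            ok = True
        except OSError:
            response, ok = None, False
        self.metrics.append({"service": self.service,
                             "latency_s": time.monotonic() - start,
                             "success": ok})
        return response
```

The design point is the decoupling: the application just makes a call, the proxy applies whatever policy the control plane currently defines, and the telemetry comes for free, regardless of what language the application is written in.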
Generally speaking, there are a couple of different ways to approach a service mesh and some different options in the ecosystem, but any one of them should be able to give you very rich metrics. That means top-line metrics: things like request volumes, success and failure rates, maybe showing you which requests are failing and why; the kinds of things you would expect from a robust dashboard.

But perhaps a better thing to dig into, and a good, relevant example for some of the discussions I hope we have, is distributed tracing. This is a little bit of that same Gizmoduck example I was talking about. When you have dependencies between services, you might have a flow that looks a little something like this: a client in your infrastructure makes a request to a service, in this case a profile service, and that profile service might also make requests to other services, in a particular order, to fulfill the operation you asked it to accomplish. Here it's talking to some other dependent services: an auth service, a billing history service, and an auditing service. How are those calls happening? In what order? What's really critical in this stack? Even here, in this one example, you can sort of understand what's happening for this request and start wrapping your mind around it. But what happens when you multiply this by a dozen types of requests for every microservice, across hundreds of different microservices? Can you really wrap your mind around that and understand how it works?

Distributed tracing exists to give us a much better view of how those things come together. A trace span allows you to start understanding what some of those dependencies are. Here we see the same kind of call, but we see everything from beginning to end, including child dependencies, including any nested dependencies this task needed to perform, and we see them all in a time series. We understand how they performed, in what order, and how they relate to one another. This type of view is interesting because it allows developers, platform operators, anyone touching production, to more easily understand what's happening in your infrastructure. It's a kind of communication you get without necessarily needing to talk to one another; you can just see it, similar to infrastructure as code. What's happening here is that we're shining a spotlight into that blind spot that used to exist, in a way we couldn't manage before. Tracing exists on its own, but in a service mesh it's much easier to implement: because we're managing all traffic through that series of interconnected proxies, we can reassemble all of these spans into a good, manageable, viewable representation that helps you see and understand what's happening.

A service mesh also gives you constructs for resiliency. Visibility is not enough; we need a lot more in production, and that goes back to the fallacies of distributed computing: the network is reliable, transport cost is zero, the network is always available, bandwidth is infinite, the topology doesn't change.
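To make the span idea a bit more concrete before we get into resiliency, here is a minimal sketch of how spans might be recorded and stitched together by a shared trace ID. The function names and the in-memory span list are purely illustrative; real systems propagate context in request headers (for example the B3 or W3C trace-context conventions) and ship spans to a collector rather than holding them in a list.

```python
# Minimal, illustrative sketch of distributed-trace spans.
import time
import uuid

spans = []  # in practice spans are exported to a collector, not a list


def start_span(name, trace_id=None, parent_id=None):
    return {"name": name,
            "trace_id": trace_id or uuid.uuid4().hex,
            "span_id": uuid.uuid4().hex,
            "parent_id": parent_id,
            "start": time.monotonic()}


def finish_span(span):
    span["duration_s"] = time.monotonic() - span["start"]
    spans.append(span)


def handle_profile_request():
    root = start_span("profile")
    for dependency in ("auth", "billing-history", "audit"):
        child = start_span(dependency,
                           trace_id=root["trace_id"],
                           parent_id=root["span_id"])
        # ...call the dependent service, forwarding the trace context...
        finish_span(child)
    finish_span(root)


handle_profile_request()
# Grouping spans by trace_id and ordering by start time reconstructs the
# beginning-to-end view of the request and its child dependencies.
```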
And so we need to make sure that all of that network communication completes successfully somehow. Usually you do that with retries, and you can do retries in software load balancers today. But inside a service mesh, because it's focused on that session layer, on service-to-service communication specifically, the mesh tends to be request-aware. It can look at the type of request you're sending and ask: is this an idempotent operation? Can this request be safely retried? If it can, retry it. If not, just fail it.

But retries are actually a pretty expensive operation. You go into a loop: submit this request; if it fails, wait and retry; if it fails again, wait and retry. And if you have one service that's failing because there's an actual outage, chances are you have another request behind it that's also going to fail, and another one behind that, and another one behind that. Those retries stack up and become very expensive in terms of resources, to the point where the thing queuing up those requests can run out of its own resources and fail too. And if it fails, any services that depend on it will also continue to queue, potentially run out of resources themselves, fail, and so on, and the failure just ripples throughout your infrastructure.

So the service mesh gives you a bunch of other constructs to mitigate that. Things like deadlines: I don't care whether you've hit the retry count; after X amount of time the response is no longer useful to me, so just consider it a failure. You can also set retry budgets, which are interesting. Say you have a retry budget of 10% and 200 requests in flight at any given time. That means 20 of those requests can actually be retried and sit in that pending loop, and the rest of the failing traffic just fails. That's a way to ensure partial availability: when you experience a failure, let some of those requests complete successfully. Depending on what service level agreements you have, that may be exactly the right thing to do; it's a way to mitigate those failures and keep at least some partial functionality.

Circuit breaking is a little more of an advanced pattern. If you think of that interconnected series of proxies as one big circuit, all of the connections between them are closed by default, always communicating and sending traffic. If one of those proxies fails, if one of those endpoints goes offline, the circuit opens and it no longer receives requests. Another of the fallacies is that sending a network request has zero cost, and that's not really true, especially when you're sending thousands of requests per second. So rather than burn resources and calories trying to send a request to a thing you know is going to fail, nip it at the source: issue a failure and don't even try sending it. And when that endpoint eventually comes back online and becomes healthy, we close the circuit, bring it back in, and start sending traffic to it again.
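Here's a toy sketch of that retry-budget idea, using the 10% budget from the example, with a deadline layered on top. The class and function names are made up for illustration; a real mesh enforces this inside the proxy, not in application code.

```python
# Toy sketch: retries bounded by a deadline and a retry budget.
import time


class RetryBudget:
    """Allow retries only while they stay under a fraction of traffic."""

    def __init__(self, ratio=0.10):
        self.ratio = ratio
        self.requests = 0
        self.retries = 0

    def record_request(self):
        self.requests += 1

    def can_retry(self):
        return (self.retries + 1) <= self.ratio * max(self.requests, 1)

    def record_retry(self):
        self.retries += 1


def call_with_policy(send, request, budget, deadline_s=1.0, max_attempts=3):
    """Retries an idempotent request, bounded by a deadline and a budget."""
    start = time.monotonic()
    for _ in range(max_attempts):
        budget.record_request()
        try:
            return send(request)
        except OSError:
            # Deadline: past this point a late success is no longer useful.
            if time.monotonic() - start >= deadline_s:
                raise
            # Budget: if too much recent traffic is already retries,
            # fail fast instead of piling on during an outage.
            if not budget.can_retry():
                raise
            budget.record_retry()
    raise RuntimeError("retries exhausted")
```

With a 10% ratio and 200 requests recorded, roughly 20 retries are allowed to sit in that pending loop; everything beyond the budget fails fast, which is exactly the partial-availability trade-off described above.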
So more or less, a service mesh exists to mitigate cascading failures across your entire infrastructure, and that's one of the best places it can plug in in terms of resiliency.

Here's a good example, one I really want to dig into: security. Security is a big topic, and it operates on another fallacy, that the network is secure. Let me say this, because I put service authorization up on the slide and I should address it. The service mesh can express things like: service A can talk to service B, but not to service C. And if you can't talk to service C, don't even send the network request. That basically means we can do ACLs, similar to what we can do at layer 3 and layer 4, but at layer 5 and up. Cool. We can do ACLs.

But the second one, mutual TLS, I think is a very interesting case. I think we can agree that by default all traffic should be encrypted. That's the consensus, but it's not the world many of us live in, and I'm not going to ask anybody to raise their hands if that's true in their infrastructure. What I can tell you from my time as a consultant is that a lot of organizations punt on this, because managing secrets is hard. So you end up with organizations that either manage encrypted communication very well, or manage it inconsistently, or just don't do it at all. We should do it, and we want to do it, but we don't, because managing secrets is hard. And so there's an opportunity here. If you think about the way a service mesh comes together, that interconnected series of proxies, we have a really good opportunity: we control all of the endpoints, so we can terminate TLS on both ends very easily. And if we have constructs that help us manage certificates and rotate keys, great: that can just happen, and we've introduced an effective solution.

But here's where it comes back around to DevOps. We've just created a really hard delineation when we do that, a very clear line where roles and responsibilities differ. Rather than the shared model we have between devs and ops now, where we manage certificates either on our servers or in our apps or as part of a release process, where we collaborate on that and know where those things live, with this approach we very much push that into the infrastructure layer and make it a platform concern. So do your operations folks still need to know about TLS encryption and worry about it? That feels like a little bit of a wall to me. Are we actually creating harder boundaries? What changes? What's the new pattern of communication there? Should you continue to care about the fact that this is happening and being managed? I think you should. I love DevOps; I think you love DevOps. Those are valuable lessons we've learned about understanding operational concerns in the development stack, and vice versa. But how does this play out? I don't know. I think that's a good question. My gut says it feels like a wall, but I'm not sure it really is. Only time will tell.

So where do we go next? Those are some of the places where the service mesh can help you, at a very high level. But where do we go from here? A couple of questions.
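As a quick aside on that mutual TLS point before the questions: here's a minimal sketch, using Python's standard ssl module, of what terminating mTLS on both ends might look like. The certificate file paths and the hostname are placeholders, and the whole premise that a mesh issues and rotates these certificates for you automatically is the assumption being illustrated, not shown here.

```python
# Sketch of mutual TLS between two proxy endpoints; paths are placeholders.
import socket
import ssl

# Server-side proxy: present its own cert and require the client's.
server_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
server_ctx.load_cert_chain(certfile="server.crt", keyfile="server.key")
server_ctx.load_verify_locations(cafile="mesh-ca.crt")
server_ctx.verify_mode = ssl.CERT_REQUIRED   # this is what makes it mutual

# Client-side proxy: verify the server and present its own identity.
client_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
client_ctx.load_verify_locations(cafile="mesh-ca.crt")
client_ctx.load_cert_chain(certfile="client.crt", keyfile="client.key")

with socket.create_connection(("service-b.internal", 8443)) as sock:
    with client_ctx.wrap_socket(sock, server_hostname="service-b.internal") as tls:
        tls.sendall(b"GET / HTTP/1.0\r\nHost: service-b.internal\r\n\r\n")
```

The interesting part, from a DevOps point of view, is that none of this appears in application code at all once the proxies own it, which is exactly the delineation question raised above.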
One: do I need a service mesh? Here's the qualifier. If you are managing microservices in production, or you plan to, then yeah, you probably need a service mesh. If you have a lot of services intercommunicating within the same tier, that east-west traffic pattern, then yeah, you probably need a service mesh. If you don't, you can still implement a service mesh and there are some benefits, but you're probably not going to get a lot of bang for your buck. The juice might not be worth the squeeze.

And again, I think the pertinent question here is: how do we take the lessons we've learned from DevOps and apply them to these new tools, to these new frameworks that have some very concrete solutions to old problems? Can we port those ideas over? How do we build bridges between experience and innovation? Do we even need to do that? Or is this new shift going to solve all of those problems? This one is the one, and we're never going to experience any of those old problems again, so we don't need the lessons we learned from DevOps. DevOps is dead, right? I don't really believe that.

I think it's an interesting and exciting time for both the DevOps space and the cloud native space. One of the biggest challenges going forward, I think, is going to be figuring out how to build those bridges and how to bring those practices forward, and I hope that's something we can start discussing here at DevOps Days, because the only way forward is with everyone in this room. If you're having these pain points, if you're experiencing these issues, if you're going through that type of transition, join the user communities for these projects; there are a number of open-source service mesh projects. Let's keep figuring out what those patterns are and how we codify the things we have learned, because these tools are malleable, these practices are malleable, they are just starting to emerge, and the time to start shaping them is now. I was going to say long live DevOps, but the slide said it for me.

So again, that's basically what I wanted to put out there: how do we take those practices and make them work in these new paradigms? Because some of the solutions that are emerging pose some really interesting questions. That's it. Thank you very much. I originally told the organizers I was going to do a walkthrough of the various service mesh products; I just did a webinar on that on Wednesday, so if you're curious and want to dive deep into those details, I'll make these slides available. The link is at the bottom of the screen, and the links to the talks I referred to at the beginning are there as well. So thank you, and I'll be around if you want to chat. Thank you very much.