Let's kick this off. Thank you, everybody, for a fantastic day so far. I'm kind of overloaded with all of the awesomeness that has happened today. I've picked out my favorite so far, I think, but Louis, why don't you give us a preview? What do you think were some of your highlights for the day, aside from your talk? Actually, that was probably the one I enjoyed the most. You know, I kind of live, eat, and breathe service mesh, and so it was just really interesting to hear three different implementers of service mesh agree on so many things so vehemently. So that was definitely a highlight for me, covering a lot of the experience they'd had with the different deployments they had tried to support for customers and what they had learned from that. That's obviously always super interesting for me. Another thing that stood out for me was how they integrated a higher-level concept from the mesh as part of their GitOps. There was a feature where they would label deployments in Kubernetes to say that they wanted them to opt into SSO. SSO is not really a built-in feature of any mesh, but it was something they had built on top, and then they integrated it end-to-end with the developer experience in a really powerful, kind of bespoke way that delivered a great user experience. So that one stood out. Yeah, I think that's something we've all been trying to achieve with Kubernetes, and service mesh is kind of the next step, that common environment. The idea that they are taking service mesh and getting it into satellites and fighter jets and naval deployments is one of the coolest things for me, because you get this common API that you can then build on top of. Really exciting, you know, as we start to get this cloud-native world where the cloud isn't just an AWS data center anymore. It's anything, anywhere.
It's really cool. Right, and they were also being super nice about sharing all of these practices by creating templates for deployments and publishing them on Git for anyone else to use, right? So that was really nice to see; your tax dollars at work. It highlights something that I think is awesome about service mesh that we don't spend a lot of time talking about, which is that a lot of where we are today as a community is building out the primitives that you can go build on top of, like with Service Mesh Interface and traffic split. There are some awesome primitives that you can go do things with that we just haven't built on yet. Flagger is starting to do that with their work, and seeing the DoD release their configurations for those next-level concerns is just so exciting. Yeah. Those are the things for me, kind of the more user-experience side of things, though there was also a fair amount of detailed technical content. And we had two talks today about telco, which is a good segue into asking our third panelist what some of her highlights were for the day. And actually, I was not silent by design; it was just that my audio was echoing a lot. But now that we've got that fixed: I was actually excited to see a whole bunch of new talks. I mean, both of you made great points. I'll tell you the motivation for doing the telco talk in the first place. Traditionally, we always assume telco, edge, and enterprise are very different things, and we treat them like individual silos. Half my background is enterprise and half my background is telco, and actually, the problems to be solved are exactly the same. The idea was to introduce these use cases to show that there's really a common set of use cases we're all trying to solve. Sometimes the starting points might be different, and sometimes the technologies might be slightly different.
But when we really talk service mesh, like today in my slide, for example, I had about 12-plus implementations, including Kelsey's service mesh, and that was just half in jest. I think the key part is that service mesh is here to stay; that's what all these implementations show. Obviously, you've got Istio, which is immensely popular in the open source community, but each of these brings value to the community. And I think it's worth surfacing how telcos can actually tap into it, even if they have legacy systems, or if their stuff runs on VMs. The paradigm has to adapt to the use case and not the other way around. It also showed me another thing: that we need to keep expanding the bounds of what service mesh is and does. So that was the motivation behind doing the talk. And I just wanted to add my observation on the other talk, which was given by Gabor Retouary from Ericsson. I was actually excited to see it, because he picked up a really legacy thing, the Session Border Controller. It is as legacy as you can get, in some ways. He picked a protocol that is not HTTP-based: RTP over UDP. And then he started laying out how, when we look at it from a cloud-native lens, we are seeing everything as HTTP-, WebSocket-, or gRPC-based, but that is not the real world in telcos. So how do we bridge these things? Then he talked about the traffic profile, like one is long-lived, one is short-lived, and talked about KPIs. But then he went back to the commonness, which is that service mesh is nothing but separating the control and data plane of services. And if we can figure out how to bridge some of these nuances, every bit of what we talk about in service mesh, whether it's traffic control, security, observability, or scale-out, all of it is equally relevant. Obviously there's a lot more work to be done there, but I was super excited to see that talk. Me too.
In fact, we haven't implemented it yet in Linkerd, but by far the two most exciting protocols that I want to see are MySQL and Kafka, so that you bridge this traditional HTTP microservices world into a bunch of different paradigms, because in the end, we're all trying to solve the same problem. It just happens that protocols are unique to our problem spaces, which is a pretty cool way to look at it. Yeah, that was the thing. Sorry, Louis, keep on going. Oh, my audio is a bit laggy. Yeah, I mean, HTTP is obviously the early sweet spot for meshes. It's the most widely used protocol, and it's the protocol most used by the people that we targeted first, right? Kubernetes users building distributed apps in a microservices paradigm. HTTP, REST, JSON, gRPC and things like that dominate there. But yeah, MySQL, Kafka, other database protocols, Postgres, JDBC, right? This is a very, very long list. A lot of these protocols have a lot in common. There are, generally speaking, only a few different shapes of protocols out there: command-body-oriented ones, things that are more resource-oriented, and almost all of them have headers of some form. So one interesting challenge, and something we've been talking about in the Istio community, is abstracting the encoding details of the protocol a little bit from the routing and feature-extraction aspects, including those related to telemetry, so that you could design APIs for routing and other things, which is a big part of service mesh, and then have those APIs apply to many protocols as long as they can map into a certain set of well-known paradigms. It's an interesting design challenge, and it's not something that we've shipped or made a ton of progress on, but it's something that I think about a lot.
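To make that idea concrete, here is a rough sketch of what a protocol-agnostic routing rule could look like. All of the names and the message shape here are hypothetical illustrations, not any mesh's actual API; the only ideas taken from the discussion are that the path can be modeled as a special header and that exact/prefix/regex matching generalizes across protocols.

```python
import re
from dataclasses import dataclass
from typing import Dict, List, Optional

# A generic "message": any protocol that exposes header-like key/value
# pairs (HTTP headers, Kafka record headers, STOMP frame headers...)
# can be mapped into this shape. The path is modeled as just another
# header, here under the pseudo-header name ":path".
Headers = Dict[str, str]

@dataclass
class HeaderMatch:
    """Match one header by exact value, prefix, or regex."""
    name: str
    exact: Optional[str] = None
    prefix: Optional[str] = None
    regex: Optional[str] = None

    def matches(self, headers: Headers) -> bool:
        value = headers.get(self.name)
        if value is None:
            return False
        if self.exact is not None and value != self.exact:
            return False
        if self.prefix is not None and not value.startswith(self.prefix):
            return False
        if self.regex is not None and re.search(self.regex, value) is None:
            return False
        return True

@dataclass
class RouteRule:
    """A rule applies only if every one of its matchers applies."""
    matchers: List[HeaderMatch]
    destination: str

def route(rules: List[RouteRule], headers: Headers) -> Optional[str]:
    # First matching rule wins, as in most routing APIs.
    for rule in rules:
        if all(m.matches(headers) for m in rule.matchers):
            return rule.destination
    return None
```

Because nothing in the rule refers to HTTP specifically, the same `route` call could just as well be driven by headers extracted from a Kafka record or a STOMP frame, which is exactly the generalization being discussed.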
If I look at the Kafka protocol, or I look at STOMP or something else, I see a lot of similarities between it and other protocols that would make it a pretty good candidate for support. And if you look at a lot of the HTTP routing stuff that exists, a fair amount of it is pretty generalizable to other protocols, right? Match headers with regexes, prefix match, suffix match; path is really just a special type of header; query parameters are headers parsed from somewhere else. It's pretty easy to come up with generalizations. So I think that will be an interesting space, and if we lay the foundation for it well, then it might be easy to expand the universe of supported protocols pretty quickly. And those same things apply to policy and telemetry, right, those same sets of attributes. That's an area where I would certainly like to see some exploration by the community over the next year. Oh, 100%, me too. And it's always about the trade-offs there, in particular: how general can you get before you start losing fidelity? You don't want to do something that is 100% unique just to Kafka, because there are generic, important abstractions you can pick there, but on the other side, you don't want to lose all of the important information that you get out of the protocol. Like, if you start talking about policy, a policy that I would love to write someday is the ability to limit SELECT statements on MySQL at the service mesh level, which is a pretty interesting, super unique integration with the actual protocol. But how do you make sure that the extensibility is there without having to do a special implementation each time? I think this actually goes into, sorry about that, the other example I have here, which is: send SELECT requests to read replicas of MySQL and send INSERTs and UPDATEs to the master, right?
Lots of people who have scaled out SQL have written proxies to do just that, to deliver super popular use cases. In fact, there's a plugin that exists for Linkerd 1 where it'll do automatic sharding for you, which is really cool in my mind. This actually leads directly into one of the other talks, the WASM talk, which got me super excited. The ability to go and start writing and prototyping on top of the proxy itself with WASM opens up the world so that we can actually do a bit of experimentation to figure out what the generic implementations are and where the right abstractions are. Having that really powerful WASM tool, so that we can experiment and figure out what we want to make into a more concrete solution, is super-duper exciting to me. Actually, I would add to that, just adding to what you were saying, Thomas: we see there are these requirements for customization. Christian, who did the talk from solo.io, called out several of those needs, including, for example, that you need new wire protocols, or you want your own custom metrics, or maybe you want to implement custom security exchanges, maybe because you've not upgraded. Lots of things around header and message transformation, firewalls, and so on and so forth. I think WASM and custom filters, this level of customization, is also important to help vendors of products plug in to things without really going and touching the core framework of a service mesh. It gives them a way to put in their proprietary stuff in a way that preserves the paradigm and what it stands for, while also preserving a lot of the openness that comes from, for example, following the xDS protocol for talking to Envoy. So to me, that is very key. It's relatively new, but I was excited to see their demo. It was pretty nice. And then he talked about the WebAssembly Hub, which was an interesting concept.
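As a sketch of the select-to-replica idea just mentioned: the routing decision reduces to classifying the first SQL keyword of a statement. Everything here (the backend names, the keyword list) is hypothetical, and the keyword check is deliberately naive; real SQL-aware proxies parse the statement properly.

```python
import random

# Hypothetical backends; a mesh would discover these dynamically.
PRIMARY = "mysql-primary:3306"
REPLICAS = ["mysql-replica-0:3306", "mysql-replica-1:3306"]

# Statements that only read may go to a replica; anything else
# (writes, DDL, transaction control) must go to the primary.
READ_ONLY_KEYWORDS = {"select", "show", "describe", "explain"}

def choose_backend(statement: str) -> str:
    words = statement.strip().split(None, 1)
    keyword = words[0].lower() if words else ""
    if keyword in READ_ONLY_KEYWORDS:
        return random.choice(REPLICAS)  # naive load balancing
    return PRIMARY  # writes and anything ambiguous go to the primary
```

The same classification could drive policy rather than routing, for example rejecting any non-SELECT statement from a client that is only authorized to read, which is the kind of protocol-aware policy mentioned a moment ago.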
I think it was about the community coming together to share stuff. Broadly, I think WASM will open up use cases that we would otherwise have struggled to implement. Obviously, we need to make sure we make it production grade and we all put resources into it, but it's a great area for the community to collaborate on as well. Louis, I know you had some thoughts on this too. Yeah, I was going to say, it took me a second, but when I was watching the talk on security from VMware, in their diagrams they were using the WASM logo. I didn't get a chance to follow up with the speaker and ask a question on Slack, but I got the impression from their diagram that that integration use case is what they were doing. I know VMware has a broad portfolio of security products; they've certainly acquired a number of companies in that space over the course of the last 18 months, I think. So I figured it's an important thing for them to be able to bring that portfolio into their service mesh product, and it looked, at least from the diagram, like they were maybe thinking about using WebAssembly to do it. So that was kind of exciting. It was nice to see. But yeah, I agree. I think it helps with the experimentation phase, but also with moving from experimentation to implementation, and with being able to segregate functionality. There's a conversation I have fairly often with Matt Klein, which is that there are a lot of built-ins in proxies, right? And in Envoy, too. They have plugins for a variety of protocols that probably have low usage in aggregate. Those plugins represent a security risk, right? You're not actually using that code. And it would be quite a bit easier to get those plugins out of the core and into an ecosystem of stable but separate plugins that people can choose to use.
And, you know, for really big deployments, or very security-sensitive enterprises, only configuring and taking dependencies on what you actually use is a pretty important thing. So if WebAssembly can help fulfill that for us too, I think it will be very valuable. I think this actually loops back to Mitch's and my talk as well, in that Wasm also provides us, as service mesh implementers, a really unique tool to keep our service meshes light and fast. Because when users who have totally valid requests come and ask for something that's very specific to their environment, we can say, hey, that's a really awesome user request; why don't you implement it in Wasm yourself? This isn't something that needs to go into the core service mesh and potentially reduce security or increase the amount of tests and all of the rest of that. Because it's a small, pluggable piece, we have a lot more freedom in defining what the service mesh does and doesn't do, which is also super exciting. Actually, since we are talking about your talk, Thomas, we should talk more about it, because I personally really enjoyed it. I liked the fact that, obviously, there was you, but there was also Sabine there from the Consul side, and then we had Mitch from Istio. And it was interesting for me to see that you all brought some very similar and some very different points of view, and that each of you was very open to the others' points of view throughout the conversation. For example, Mitch brought up the whole issue of Istio control plane simplification, and you said an interesting thing, that service mesh is not really about technology, it's about people, which really resonated with me. It's really about that. Obviously, we need to make things simpler, but broadly, it is really about solving that problem.
And then I liked the fact that Sabine from Consul came at it from a different angle, around how they didn't want to be an APM product and how they went about that. And I saw the theme around Wasm in all of your conversations as well. It seems that you folks are also putting a lot of emphasis on the user experience, and I think broadly, as a community, in our own open source as well as managed implementations, we should keep that in mind. And there was a lot of mesh humor, I should say; I'm going to steal some of those lines. But I really liked the three perspectives. For example, when there was an issue, how does Istio deal with it versus Consul versus Linkerd, and I think you spoke about taps and Wireshark and so on and so forth. I just loved that talk for the fact that, I think, we spend too much time thinking about how different we are. I love the fact that you brought three different perspectives and raised some really good issues that all of us across the board can use and bring into our own implementations. So thank you for that talk. Louis, I know you liked that one. Sorry, go ahead, Thomas. I was going to say, you bet. I think the fun thing for me is how much we end up riffing on each other. A really great example is the original Istio init container code: we borrowed heavily from it on the Linkerd side of things, and now Istio has gone and picked up a lot of our check infrastructure, which is great. It's super fun for me, being part of the ecosystem, to see that when you do something that's really great, everyone goes and picks it up and adopts it. And that's awesome. It makes me so happy, because we can go and pick the things that are unique to us and double down on those, and have the rest of the community build out really great solutions that work for everybody. Awesome. Louis, I know you listened to that talk as well. Oh, yeah.
No, I mean, I very much enjoyed that talk. Yeah, I think there's a transition going on, kind of a crossing-the-chasm moment. The proof of that was that there were talks by people who had gone through real deployment struggles, persisted, become familiar with the tools, and gotten real, demonstrable value from the deployment. There was a talk about provisioning machine learning stuff from Splunk, which was a great, classic user-experience-and-value-derived kind of talk, where there was struggle in the middle. Our hero had to go through a process, but ultimately came out victorious. It's nice to see more of those kinds of talks at these types of events, and I think we're starting to see that frequency go up. And part of that is about people. Service mesh is kind of a terrible term, because it doesn't describe what it does, but then most terms in computing aren't descriptive. I think we're getting to the point where there's enough critical mass of people who understand the value that they're able to bring that value into their organization and explain it to other people who don't necessarily understand it yet. Yes. But when they can show the value in the context of their own business, that's when the light bulbs really start to go on, and I think we're starting to see more of that. I hate being in the realm of prognostication, but I feel a lot better about that than I did, say, at the beginning of 2020, despite it being 2020. So that was particularly encouraging for me. And the kind of conversation that you were all having about your different experiences also reinforced that for me a bit, because you were seeing the same things from users, you were explaining the same context and solutions, sometimes taking slightly different approaches to the same problem space, but mostly seeing the same problems and working hard to address them, because you were in engagements, right?
You were doing that work because customers were asking you for help, right? Which is a good place to be as a product, as a technology, as an open source project. So that was why I liked that talk so much, I think. 100%. In fact, I think it's almost the theme that we had today: real-world use cases that are concrete. Talking about telco and the interesting, unique problems that come from that, and why a common service mesh pattern can work there. The machine learning talk, making something that maybe was originally just for microservices work there, going through the hero's journey and figuring out how to have it all fit together. The multi-cluster talk was fantastic: how to make this a globally distributed system. Talking about the security pieces, what the DoD worked through. It very much was, here's a concrete application of these generic patterns and primitives that work for everybody, which gets me insanely excited on a regular basis. Yeah, I would just echo that. Great points. And maybe what we should do, since we've got five minutes, is recap our takeaways of the day. I think it would be great also, later, to talk to folks over chat and see what they thought of the day. But I'll just summarize mine. You touched on some of the talks we didn't necessarily dive into, but multi-cluster is important. In fact, everything is important: multi-cluster, multi-region, multi-cloud. Broadly, there was one interesting theme today, which is that everybody agrees service mesh is good for reducing the complexity of applications, but each one of us needs to manage the complexity of the service mesh implementations themselves. So we need to work on making the implementations simpler to consume, and we should think like a user and not like somebody who's building a distributed system or some piece of infrastructure. Debugging was another theme that came up in several talks.
I think, for example, Istio seemed to be the flavor of the day there, but there were also lots of interesting suggestions about doing better on the debugging side. Then, on the number of service mesh implementations, I think the market will decide, and the key takeaway for me is that service mesh is here to stay; we need to make our things simpler and better. On security, I heard the last statement in the DoD talk, which was that mesh is key to security. I love the fact that Tetrate and the DoD got together and solved the use case from a customer's perspective, the customer being the DoD in this case. Same thing on our side: Kunal, who joined my talk, drives our relationships with AT&T and other big telcos, and that's why we opened our talk with him, so he could give the customer view; he's like the voice of the customer here. Hopefully we keep that customer perspective. I would just say, as a community, we should really think about all of the significant collaboration opportunities that surfaced today. Sometimes I feel like we spend far too much time talking about our differences, and not so much about the things we could do together, and I think the latter far outweighs the former. And I would say we should keep expanding the definition of mesh. Let's support our open source communities, like Istio, Kuma, Linkerd, whatever is out there. Let's go support that. For us, we will continue to bring the best of Google into our managed solutions, like Traffic Director and Anthos Service Mesh, and with a lot of humility, because we're also learning in this journey from all of you and from customers. So we look forward to collaborating. And with that, I'll hand off to Thomas and Louis; I would love to hear what you took away from the day. I'll actually just put a little blurb in here, which is that we'd love to work with everyone in Service Mesh Interface to go and build out that API and the common patterns and use cases there.
So if folks want to jump over to the SMI Slack channel, chat with us there and show up at our community meetings. We'd love to work on expanding what those APIs mean and what the core problems service mesh solves are, and on moving that discussion forward. Since I'm talking, I'll give my two cents. Multi-cluster was a theme; that was a great takeaway. I love getting together and hearing, again, the use cases, what everybody's up to, and the differing perspectives. It's why we had the talk that we did: to chat about the trade-offs and understand them. I had one of the funnest days putting that together, just because it's so much fun to chat through how all the pieces fit together. And I have so many blind spots; it's fun to see the elephant from someone else's perspective, for sure. Louis, how about you? Yeah, so we all had the privilege of watching the talks yesterday, and so I got to spend some time thinking about what my takeaways would be. The first is common experience: the different service mesh implementations are mostly seeing the same set of requirements from their users. There's a lot of commonality there, which indicates that the base set of features that service meshes present is pretty well entrenched now in the minds of potential users, and to some degree, they're looking for consistency. I think we're just getting through the early adopter wave and into kind of a second wave of enterprise, and the third wave will probably be meeting all the needs of enterprise, because land and expand is a phrase often used in enterprise sales. A lot of these open source projects are going to be put under pressure to start moving out and covering more of those scenarios that are important to enterprises with the same set of properties, whether it's expanding the set of protocols or expanding the set of environments.
And possibly other things too; it's hard to speculate, because there are so many different integrations that enterprises want to do. That's great, and it indicates a degree of maturity, but it's also a big challenge. On customizability and extensibility: I used to work in a classical enterprise software company before I worked at Google, and customization often caused people to run screaming for the hills; they had worked before on systems like SAP, which were endlessly and infinitely customizable. So that challenge, the platform-versus-product challenge, and where to draw the line between those two things for these projects, is going to be hard. It's a great challenge to have, because it means you have things that people want you to do, but it's hard to get right. We'll probably get it wrong several times before we get it right. Istio has already had a couple of stabs at it in a couple of different ways, and we've learned from those mistakes. I expect that we'll keep doing that, and we'll see which patterns the industry is willing to accept and which ones it isn't. I think that's going to be a big part of 2021. That sounds great. I think we're just about out of time, but there were a few questions in the chat. Are we going to take those over on Slack, or are we going to try and do them now? So there was a question about the hardcore OpenStack stuff in telcos: what's motivating telcos to get into the mesh and Kubernetes projects? You talked a lot about telco; what has caused the change in the telco industry, and why is this relevant? I think one of the key things to remember is that service mesh is not tied to Kubernetes. Service mesh is agnostic to compute, right? A good service mesh implementation should support containers and VMs. So even if telcos have OpenStack VMs, at the service mesh level we should be able to support services that come out of OpenStack as well as ones that are orchestrated through your CaaS.
And I would say, in fact, the ability to apply policy uniformly across Kubernetes and VM-based services is one of the big reasons we can actually start helping telcos migrate towards, and then adopt, a service mesh architecture. In fact, in our talk, we spoke about a use case called CapgroDrain, which describes this exact thing: how do you manage your VM- or OpenStack-based deployments and your Kubernetes deployments by using the service mesh layer to treat them more consistently? So I would say that is the key, right? We should not assume service mesh is only for containers. It is compute-agnostic; it should work on containers, VMs, and bare metal. I think we answered that, or CapgroDrain was just agreeing with that statement about using WebAssembly for the VMware stuff that we covered. And then there was a pitch for SMI and federation between meshes, which is a very interesting subject. I think in your conversation with Consul there was some good discussion of some of the bootstrapping aspects, etc., but I think we're going to have to move this conversation over to Slack at this point. So I just wanted to thank everybody for their time. Thank you, everyone. I would, you know, go ahead, Thomas. All yours. Yeah, I just wanted to thank everybody for their time. It's always good to be involved in these events. It's definitely different being remote, but I found a lot of the content very informative, and it was great to see some of the dialogues, and the examples of user perseverance and success in particular. That's always heartwarming. And it's an opportunity to talk to peers in the industry and folks I don't see on an everyday basis. Likewise. Thank you, everyone.