Welcome everyone. My name is Pranava Dury. I'm an entrepreneur in residence at Greylock Partners, an early-stage venture capital firm located in the Bay Area. In the past I've worked as a founding engineer at multiple infrastructure unicorn startups, and most recently I was at AWS, where I led a brand-new service to over $200 million in run rate. Today I want to share where we see service meshes, the challenges I'm personally excited about, and where they're headed. So let's get started. First I'll talk a little bit about service meshes: why we have them, the motivation behind them, and their current state. Then I'll spend some time on where I see them going next and what I'm personally excited about.

To understand why service meshes are needed, we really have to start by looking at why organizations practice DevOps. Modern DevOps is practiced for three main objectives: agility, stability, and accountability/ownership. Agility, so that teams can move fast, deploy code quickly, react to bugs or incidents quickly, and iterate on what's deployed. Stability, so that when they're moving quickly they have mechanisms in place to ensure that what they're releasing passes a series of tests and whatever other qualifications are needed for a quality release. And accountability and ownership, so that for a given set of functionality, or a service, there is an owner who services that bit of code and is in charge of determining how it changes over time. One implementation of these three objectives is the microservice pattern that most organizations deploy today. Now, when we talk about microservices we really have to talk about complexity, especially when it comes to communication. As an example, Netflix as of this presentation has over 1,000 microservices.
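To make that scale concrete, here is a tiny back-of-the-envelope sketch (my own illustration, not something from the talk): with n services, the number of possible directed service-to-service call paths grows roughly quadratically, which is why reasoning about every dependency by hand stops working.

```python
def possible_call_paths(n: int) -> int:
    """Number of possible directed service-to-service call paths
    among n services (each service could, in principle, call any other)."""
    return n * (n - 1)

# At small scale the dependency graph is easy to reason about by hand...
print(possible_call_paths(10))    # 90
# ...at Netflix scale, it is not.
print(possible_call_paths(1000))  # 999000
```

The real dependency graph is of course much sparser than this worst case, but the operational question "who is allowed to call whom, and how?" still has to be answered for a space this large.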
Now, each microservice has to talk to other microservices, and there will be dependencies across them. When you have a thousand-plus microservices, these dependencies can grow immensely complex, and being individually aware of each dependency, and of how communication happens between services, becomes intractable at a certain point. Additionally, in a microservice architecture some services will inherently deal with more sensitive data than others. As a result you might want to rope off certain services, or dictate that certain services can only be accessed by other services that are privy to the data and the API they expose. Finally, we always have to assume that a service and the network can be compromised. These are the complexities that have shaped how service meshes have evolved to what they are today.

Let's start with the communication goals. The first goal is observability: for the traffic flowing through your network, you want to be able to monitor it and log it. Logs, and observability in general, are a great indicator of service health: do I have a service that is repeatedly failing requests? Next we have security. The goal is that all communication between two services, or any combination of services, is encrypted, so that in the event your network is compromised the communication still cannot be deciphered. In addition to the communication itself being encrypted, you also want to ensure that a service is who it attests to be: attestation and authentication, the idea that a service is in fact what it reports itself to be.
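As a minimal sketch of that observability goal, detecting a service that is repeatedly failing requests, here is some hypothetical Python over request logs. The log shape and the 50% error-rate threshold are my own assumptions for illustration; real meshes surface this via metrics pipelines rather than code like this.

```python
def error_rate(entries):
    """Fraction of requests in a list of log entries that failed (5xx status)."""
    if not entries:
        return 0.0
    failures = sum(1 for e in entries if e["status"] >= 500)
    return failures / len(entries)

def unhealthy_services(logs_by_service, threshold=0.5):
    """Return the services whose error rate exceeds the threshold."""
    return sorted(
        svc for svc, entries in logs_by_service.items()
        if error_rate(entries) > threshold
    )

logs = {
    "checkout": [{"status": 200}, {"status": 503}, {"status": 500}],
    "catalog":  [{"status": 200}, {"status": 200}],
}
print(unhealthy_services(logs))  # ['checkout']
```

The point of the mesh is that these logs and metrics come for free from the proxy layer, uniformly across services, instead of each team instrumenting this by hand.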
Finally, there's the notion of control: the idea that you should be able to set policies for how traffic behaves in your network. This can take the form of rate limiting, of defining circuit breakers, or of authorization and ACLs, saying that only certain services can talk to certain other services. That's the control aspect of the communication goals.

Now, an implementation of the goals we've just outlined gives rise to the service meshes we have today. So what is a service mesh? It's an architecture that enables the previous goals for container-based environments. The way service meshes have been built so far, they don't require businesses to change the underlying business logic. Instead they rely on proxies that sit between the business logic and the other services it's attempting to communicate with, and the proxies are where policies are implemented and enforced.

Let's talk a little more about that, starting with the mesh architecture. Modern meshes are broken into two planes, if you will: the control plane and the data (or proxy) plane. The control plane is where administrators and organizations define policies. Again, policies can be, for example, rate limits, circuit-breaking policies, or policies around which services are allowed to talk to which other services and on which endpoints. The control plane takes those user-defined policies and translates them into the appropriate configurations for the proxies in the data plane, and it is responsible for keeping all the moving parts of the data plane in sync with the user-provided configuration. I've already defined the data plane in talking about the control plane, but to restate it: the data (or proxy) plane is where policies are enforced. Now, a quick dive into the
different types of meshes we're seeing right now. There are two big patterns. There's the sidecar proxy, which is by far the most dominant type we're seeing, and then there's the node proxy. With a sidecar proxy, a proxy sits alongside each service instance. In Kubernetes parlance you have a pod, which will contain an instance of your business logic, and in the sidecar pattern a proxy sits alongside each copy of your service instance. Whenever that service instance attempts to talk to the network, it goes through the proxy, and all the policy enforcement set in the control plane gets enacted by the proxy sitting alongside the service instance. To use a tourist analogy, this is akin to every tourist having their own personal translator.

The other type of mesh we see is the node proxy. Here, instead of the proxy sitting alongside every service instance, it sits at the node level. Multiple instances can be running on a node, and they'll all make use of that single proxy running at the node level. To use the tourist analogy again, this is akin to having one translator per group of tourists. There are some advantages to the node proxy approach. One of them: if you want to upgrade the proxy your service mesh uses, in the sidecar model every single existing pod needs to be rotated to the new version of the proxy, with some impact to the business workloads you're running, whereas with a node proxy you still have to swap proxies out, but the number of instances impacted is reduced.

Next I want to talk about the Service Mesh Interface (SMI) and why it's so important. SMI is a standard interface for container service meshes. This matters because there are so many different implementations of service meshes out there, and one of the core issues that comes
up for any organization trying to adopt a service mesh is which one to use. Each has its benefits and its downsides. Not having lock-in to a single mesh is actually very useful for reducing an organization's friction in adopting a service mesh, because they know that if they use the constructs defined in the SMI spec, they can rotate to another service mesh with little effort. The Service Mesh Interface standardizes some common mesh use cases, for example traffic targets, route groups, and traffic metrics.

So now that we've looked at the types of service meshes, the mesh architecture, and where we're at right now, let's take a look at what I'm excited about.

SPIFFE is a standard for attestation of identity: for a given service, how does that service prove it is in fact who it says it is? Previously, SPIFFE was not a standard adopted by all meshes. The good news is that most meshes now have at least a partial implementation of SPIFFE, and I look forward to it becoming the de facto standard for identity in meshes going forward.

SmartNICs are another area of interest. SmartNICs are NICs that carry DPUs that can be programmed, and they're very exciting, especially when it comes to high-performance networking in containerized environments; you see this with the adoption of bare-metal Kubernetes. What's interesting about SmartNICs is that you can start offloading compute-intensive work, like crypto, onto the NICs themselves. This paves the way toward high-performance networking in Kubernetes, where you can really start unlocking speeds beyond 40 Gbps.

Next, hybrid cloud is here to stay, and the service mesh story really wouldn't be complete without a comprehensive play that also allows organizations to extend to on-prem deployments. We see plays like Kuma that are designed both for Kubernetes and container-native environments and to work with traditional VMs on-prem.

Going off the hybrid example: within an organization, different teams might have different requirements, and so they might be using different service meshes. When you have cross-functional teams that need to work together, it's good to have a unified mesh management system that allows certain policies to be enforced org-wide, and allows you to configure these meshes in an org-wide fashion. We see Mesh Hub, as an example, paving the way toward that, and I'm definitely excited to see how Mesh Hub will evolve in that sense.

Next, meshes rely on proxies. If you want a certain capability built out in Istio, for example, at the data plane level, and the proxy doesn't support it, Istio can't do it. So enhancing proxies in turn enhances the capabilities of meshes. WebAssembly is a really exciting development from that standpoint: it allows third-party developers to add filters, presently for Envoy, to enhance its capabilities. When you start allowing proxies to take on new capabilities, you can start adding functionality traditionally reserved for other areas of the network stack. I see WebAssembly as a very powerful enabler of future capabilities for service meshes. What I would also like to see is an open WebAssembly standard adopted across service meshes, not just Envoy but, for example, Linkerd. As a matter of fact, Linkerd has mentioned they're considering the possibility of interoperating with the WebAssembly standard that's been established for Envoy. This would lead to really cool use cases: in the same way SMI standardized policies, there could be an equivalent standard WebAssembly interface that multiple meshes can use.

Next, I think there's also a lot of opportunity in having existing security offerings vend WebAssembly extensions for service meshes. An example
of this: as an organization you might be using a DLP suite, and the DLP suite could vend a WebAssembly extension that you incorporate into your service mesh. In this way I think the future is very bright for service meshes in terms of being composable with the other services you use in your organization.

So that's what I'm looking forward to. If you have any questions, or suggestions for what you'd like to see, I would love to hear from you, so please feel free to reach out; I'm @thenava on Twitter. Looking forward to it, and thank you for your time today.