Very good. Looks like we've got quite a few attendees this morning. Good morning, everyone — or good evening; as I hold on and grip my coffee tightly, I have a hard time saying anything but good morning. Welcome to today's CNCF webinar on how to secure and monitor external service access with a service mesh. I'm Lee Calcote, founder of Layer5 and a Cloud Native Ambassador, and I'll be moderating today's webinar. We'd like to welcome Neeraj Poddar, the co-founder and engineering lead at Aspen Mesh — it's fantastic to have Neeraj with us today. Before we hand it over to him, we do have a few housekeeping items to note. This is a CNCF webinar; during the webinar you're not able to talk as an attendee, but questions are highly encouraged. There is a Q&A box at the bottom of your screen, so please feel free to drop your questions there and we'll get to as many of them as we can. I would love for someone to stump Neeraj — I've not seen that happen before, so that would be a first. That said, we are recording, and this is an official CNCF webinar, so it is subject to the CNCF code of conduct. Please don't add anything to the questions or to the chat that might violate that code of conduct; essentially, be respectful of your fellow participants and the presenter. With that, think of those hard questions, and I will hand it over to Neeraj.

Thanks so much for hosting us today, Lee — I'm looking forward to it. All right, thanks, Lee. Hello, everyone. I'm Neeraj, co-founder and engineering lead at Aspen Mesh. Thank you all for joining me today in exploring how to secure and monitor external service access with a service mesh. Before I get too deep into the how and the various ways you can achieve it, let's quickly explore why this is important. Why, as an organization currently running microservices in a Kubernetes environment, do you even care about external service access? The main reason is that you want to be protected from security breaches. If your organization holds any sensitive data — whether it's privacy data related to user information, emails, credit cards, or health information — you want to make sure that data is protected. At the same time, if you do have a security breach, you want to be able to react to it quickly. Protecting access to external services and monitoring it is one of the ways to make sure you're covered in those scenarios. Interestingly, as organizations move toward cloud native technologies and adopt microservices, there's an interesting phenomenon happening
where it gets more and more difficult for you to understand your security posture. You have more and more microservices coming up and going away, they're trying to reach out to external services, and you don't actually understand what's happening. In a way it's interesting: you're trying to gain agility as an organization, but you still want to stay out of the news when security breaches happen. At Aspen Mesh we have been calling this "agility with stability," which resonates a lot with our customers, and the comic strip here basically says that your strategy as a company should not be to appear in the news so many times because of breaches that your customers no longer care.

Moving on, this point about monitoring and managing your external services is also emphasized by the OWASP Top 10. If you're not familiar with OWASP, they list the most common top-ten vulnerabilities affecting applications, and if you look at them, there are three which directly relate to this category of external services. The first one is using components with known vulnerabilities. If you have applications which use libraries — whether they are open source or in some other form — and those libraries have vulnerabilities, you are open to breaches. Most often the way it works is that these vulnerable components look at your sensitive data and publish it to a public website. The example I have here is a recent Python incident where a fake version of the dateutil library was published on PyPI, which periodically listed the directories and contents of your filesystem and published them to a public website. As you can imagine, if an organization correctly monitored and secured external services, this could have been prevented.

Moving on: if you have insufficient logging or monitoring, it affects you in two ways. First, it decreases your ability to understand how your applications work, and at the same time, if there is an attack, your time to react and fix it increases. This doesn't apply just to external services, but it gets worse when you're trying to access a bunch of external services. And the last one is security misconfiguration. Most often people do think about security, but either it's too hard to achieve, so they mess it up and that leads to violations and breaches, or developers actively work around it because it makes their lives harder. So as we explore the various options today, we're going to make sure these options map back to these three items related to external services.

The last thing I want to talk about before we go into the details of how to achieve this is that most organizations are currently trying to achieve zero-trust security, and more often than not a lot of focus and attention is paid to how you get traffic inside your clusters — how you make sure it is authenticated, authorized, and encrypted. This is the other half of the equation: you have to pay equal attention to which external services you are consuming, because breaches can happen there too. So with that background in place, let's get started. I'm going to start off with a simple example of a microservice environment running in your Kubernetes cluster.
Here you have an application A which talks to application B and app C. They can be written in different languages — most of the time they are. Traffic comes in from the internet and hits application A. Application A reaches out to an external identity-management service, which is very common for organizations — something like Keycloak or Cognito for user management. And app C here reaches out to an external database, so you're using a managed SaaS service like DynamoDB or Bigtable to reduce your operational burden. Very common, and a very simple architecture. The reason I'm explaining this now is that we're going to use it as the presentation progresses and incrementally add complexity to it.

Okay. From an external-service point of view, if your current state is this and you are a security operator or an application developer in your organization, this is where you want to be. The desired state is that all the traffic to your external services is encrypted — you have to make sure you're not talking to an external service over HTTP; it should be HTTPS. The next thing is that any unauthorized access is blocked. What that means is, if app B has a new version deployed and suddenly it's trying to reach out to GitHub, you should probably block it — there's probably a known vulnerability that has crept in and now it's trying to publish maybe your sensitive data. And the third thing is observability and visibility for both of those scenarios: you want to track when your communication is going as expected and also when violations are happening and you have blocked them. Now, this visibility can be at different layers — you can have metrics, tracing, or logging, or all three. With this goal in mind, let's see how we can achieve it, and I'm going to focus on how you can achieve it with a service mesh.

To summarize, the goals for external service access are basically threefold. First, how do I know which external services I'm connecting to? This is the visibility, logging, and monitoring that you want. Second, how do you secure the access, meaning all the traffic is encrypted? And third, how do you block unauthorized access, so that any external service that should not be reached is blocked?
And then again, you have visibility in either of those scenarios: when the traffic is actually meant to go out, and when the traffic is blocked. So next, let's look at the different ways of achieving this, both with and without a service mesh, and the options that people traditionally have.

One way I've seen organizations achieve this goal is to embed the logic in application code. You can make sure your application code itself only talks encrypted, surfaces all the telemetry, and no matter what happens, won't reach out to some other third-party service that it shouldn't be talking to. You can also use open source libraries and other third-party tools and embed them in your application, so that you offload that functionality into some shared code. There are two basic problems with this approach. First, because you are embedding this capability in your application, if you have different languages — say Python, Go, and JavaScript — you now have to make sure you have consistent libraries, or you have to upgrade them all at the same time. Second, if you are using, say, the TLS stack in your applications and there are vulnerabilities in it, you have to rebuild all of those applications and deploy them again. It inherently increases the time you need to react and fix the issue at hand. The third method, which I highly recommend organizations use, is offloading this functionality to an infrastructure layer. Basically, you take this complexity out of your application and move it into an infrastructure layer. Today we're going to cover the service mesh, which is one of those infrastructure layers you can use. That has two benefits: application developers don't have to do this anymore, so they get to focus on what they really want to do, and you enable your operations team to configure security policies via configuration, not via code.

Alright, with that in mind, I'm going to do a quick recap of what a service mesh is, and then we'll move on to how a service mesh can help in this endeavor. So, what's a service mesh?
A service mesh is a transparent infrastructure layer that manages and handles communication between microservices. As part of handling that communication, it does two things: it allows developers to offload functionality, like I was saying, and focus more on the business logic, and at the same time it allows operators to get resiliency and security in their environments outside of the dev cycles. That means both of these personas can do their jobs independently and still successfully achieve the goals of the organization or business.

Typically a service mesh works by adding a proxy, and that proxy intercepts traffic coming in and out of your application. As the proxy intercepts the traffic, it can add a lot of advanced functionality. Depending on where the proxy is placed in your architecture, you have a lot of different options; today I'm going to cover one of those options, which is called a sidecar proxy architecture. If you haven't heard of this term, I'll quickly explain what it is. In a sidecar proxy architecture you insert a proxy as close to the application as possible. In the case of Kubernetes, the proxy is added as another container in the same pod your application is running in. The value you get from running them in the same pod is that they share the same networking namespace, so from an outside perspective it feels like you're talking to one entity. Internally, those images can come from different sources: the application image comes from the developers and the proxy image can come from the security team. The next thing we do is place some iptables rules, or intercept rules, so that all the traffic going in and out of the application is routed through the proxy first. In a simple example, if app A wants to talk to app B, first app A's traffic is intercepted by its proxy, the proxy in service A talks to the proxy in service B, and then the request eventually reaches the application B container. Having proxies at both sides — which we call bookended proxies — gives you the capability to enforce policies both at the client and at the server.

All the proxies together are what we call the data plane of a service mesh. Most service meshes also include a control plane. The control plane is responsible for looking at your Kubernetes environment and at the configuration an operator provides, and lowering that configuration to the data plane so the data plane knows what to do. This gives you a nice abstraction where you can replace the data plane if you want, while the application owners and the operators talk to the service mesh control plane and configure it via an API they understand.

Most service meshes provide functionality in three broad categories. First is traffic management, which is shaping the traffic as requests flow through the proxy. This includes things like circuit breaking and fault tolerance, or some advanced concepts like path-based routing and canary rollouts; today we're going to cover how you can shape traffic for external services. The second category is security.
As the proxies receive traffic, whether inbound or outbound, you can do authentication, encryption, and authorization — and today we're going to cover how you can block external services. The third category is observability. Depending on the type of proxying the proxy is doing — TCP proxying or HTTP proxying — it can surface a lot of metrics, tracing, and logging for you. The good thing about doing this outside your application is that you get consistency, and one of the key aspects of securing and operating a cluster is making sure you're playing on a level playing field where the information is consistent across all applications.

Moving on. Today I'm going to focus on service mesh, but particularly on how Istio enables this; this is my transition to talk more and more about how Istio allows you to do external service access and monitoring. To quickly explain: Istio uses a sidecar proxy architecture, and the proxies used are Envoy proxies. If you're not familiar with Envoy, it's a CNCF project — a high-performance proxy written in C++. This is the architecture of Istio, but it's really the architecture of most service meshes: you have a control plane and a data plane, and in Istio the data plane is Envoy.

Alright, with that, let's dig into the specific ways service meshes, and Istio in particular, can help you with external services. This is the updated version of the initial diagram: the same architecture with apps A, B, and C, but now a proxy is inserted. When app A wants to talk to the external identity-management service, it has to go through the sidecar proxy, and similarly, when app C talks to the external DB, it goes through the proxy. With this architecture in place, let's look at the options available to help us manage external services. In Istio there are four options, and I'm going to go into detail on each of them: allow any, restricted access with TLS passthrough, restricted access with TLS origination, and egress gateway with TLS origination. Some of the other service meshes may also provide some or all of these options — I'm not particularly sure, because there are so many service meshes out there — but Istio currently provides these four.

I'm going to focus on three parameters, which map back to the three OWASP items I listed earlier. We'll compare and contrast these options on how easy they are to configure — if it's hard, people are probably going to screw it up and you'll be more prone to breaches than you intended — on the level of visibility you get, and on how secure they actually are. A false sense of security is sometimes more harmful than having no security, so I want to be clear about what you actually get out of each option.

With that, let's move on to the first option: allow any. Allow any is the simplest option. Basically, it means the proxies are not going to block any traffic. If app A is talking with encrypted traffic, HTTPS, to the external identity service, it will be allowed.
Similarly, if app C is talking over plain HTTP to the external DB, it will also be allowed. That really means you don't have any security, in my view: the proxy is proxying things at a TCP level without actually enforcing anything. And because it's doing that, when app C wants to talk to GitHub, that's also allowed and not blocked. The reason I'm bringing this up as one of the options is that many, many people who run Kubernetes environments today don't actually monitor their external services at all, which means that when they add Istio on top, if you start restricting external services, it breaks their environment. So this option exists to ease them into adopting Istio.

Let's look at the pros and cons along those three parameters — configuration, visibility, and security — for this first option. Obviously it's very simple: there's essentially zero configuration, you just flip one setting and you get allow any. But, as I was saying, it's not at all secure, so for any organization focused on security I would say don't use this option. You do get some telemetry: we recently added support in Istio for getting telemetry for allow any, in the form of TCP metrics, so you can configure it and get destination IPs if you want. But that information is very limited, because if an attacker is really trying to leak sensitive data, they're probably going to keep changing their IP addresses. TCP-level metrics alone are not sufficient to enforce security here.

Next I'm going to show you how to configure allow any, and this is the format I'll use for the rest of the presentation: talk about an option, show you how to configure Istio for that option, and then show you some of the Envoy configuration so you can go to the source of truth and understand what's happening in your environment. For allow any, the first thing is to make sure the mesh config map deployed in your istio-system namespace says that allow any is configured — simple enough. The Envoy configuration for any of the pods that have the sidecar should then look like this: you see what is called a virtual outbound listener. You're looking at the config dump of Envoy; in Istio we configure Envoy with virtual outbound listeners. These are the default listeners that all traffic from the application gets routed to, and they are configured to use the original destination. That means if there's a more explicit listener for the traffic, that listener is used, but if there is no explicit listener for the traffic being received, the configuration of this virtual outbound listener kicks in. In the allow-any case, the virtual outbound listener is configured as a TCP proxy — that's why you only get TCP-level stats — and it points to a cluster called PassthroughCluster. Passthrough is a special virtual cluster in Envoy which tells it to forward the traffic as-is to the original destination; that's why this actually works. Customers and community users sometimes ask me how allow any works and how they can verify whether it's configured or not — this is how you should verify it: look at your virtual outbound listener.
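(For reference, a minimal sketch of that setting, assuming the standard mesh ConfigMap named "istio" in istio-system; only the relevant key is shown, and a real mesh config carries many more fields.)

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: istio
  namespace: istio-system
data:
  mesh: |-
    # Proxies forward traffic to any external destination they don't recognize.
    outboundTrafficPolicy:
      mode: ALLOW_ANY
```

To check what a sidecar actually received, something like `istioctl proxy-config listeners <pod-name> -o json` will dump the listeners it was given.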
If allow any is configured, your virtual outbound listener will always have the PassthroughCluster. All right, moving on to the next option. Clearly allow any is not a very secure option, so progressively we're going to look at more secure options and options that give you more visibility. The second option is restricted access with TLS passthrough. In this scenario, operators have to explicitly configure the mesh and tell it which hosts and services you are allowed to talk to. In Istio, by default, all the services within the Kubernetes cluster are allowed to talk to each other; when you turn this option on, you won't be allowed to talk to any external service unless you explicitly whitelist it. And as operators whitelist services, it doesn't make sense for them to whitelist HTTP services — you'll always whitelist HTTPS services. The trick here is that it's called TLS passthrough: the proxies are configured to look at the SNI header — the Server Name Indication header in the TLS handshake — and route traffic based on it. The application talking to the external identity-management service creates the HTTPS request itself, so the TLS handshake happens in the application, and the proxy looks at the SNI header and forwards the traffic. If the proxy sees a request whose SNI header is not in its configured list, it will reject that traffic. Similarly, app C can talk to the external database, and now you are more secure because only encrypted traffic can go through, and since GitHub is not part of your whitelist, it will be blocked. As you can see, we've added a fair amount of security here: the traffic is encrypted, and we're able to block any traffic that is not in our whitelist.

Let's evaluate the pros and cons of this option. On the pro side, you are fairly secure — you achieve two of the goals you wanted. On the con side, you obviously need more configuration now; I'll show you what configuration you need to make this work. Visibility is still limited: because the proxy is doing TCP proxying with SNI routing, you only get TCP metrics. To react quickly to any security or data breach, an organization should collect as much information as possible, and layer-7 information is the best you can get, so this option is secure but the visibility is not that great. The third point about security, which I list as a con, is interesting: in this option, both app A and app C are using the application's own TLS stack. That means if there's a vulnerability in that TLS stack, you have to rebuild your applications. For example, if this were deployed in your cluster and a vulnerability like Heartbleed came out, you'd have to rebuild the whole world. If, on the other hand, the proxy's TLS stack were used, the operators could simply deploy a new version of the proxy. So you get some security, but you also keep some exposure because of the way this works. An interesting thing about this option is that many of the users I talk to turn it on, especially in AWS environments, and then start to see failures. They say, "my services within the cluster talk to S3, for example, and I have configured S3 service entries, but they still fail."
So I'm going to quickly show you why that happens and what configuration you need. In Istio, if you want to configure TLS passthrough, the first thing is to switch that mesh config map option to registry only. Registry only says you are no longer allowed to pass through all the traffic the proxy sees; only traffic to whitelisted hosts is allowed out. When you do this, the virtual outbound listener I was talking about switches to a TCP proxy pointing at the BlackHoleCluster. The BlackHoleCluster is the opposite of the PassthroughCluster in Istio, and as the name says, it black-holes any traffic that hits it. So now you cannot access anything you want, only the whitelist.

Let's look at what configuration you need in your cluster as an operator to make this work. You need to create a service entry. A service entry in Istio is a way to augment the set of services the proxy is allowed to route to — like I was saying, normally you can only route to things within the cluster, and a service entry is a way to update that registry. In this case, I've told it you can talk to httpbin.org. There are two key things here: the port name is "https" and the protocol is HTTPS. This means we're going to do SNI routing and the application itself is going to make a request to https://www.httpbin.org. It's very important that you don't put HTTP here, because that's going to create vulnerabilities in your environment again. When you configure service entries, this is how your Envoy configuration will look: the Istio control plane creates additional listeners. In this case, the listener is for the wildcard IP on port 443; we configure an SNI match, which says if the server name is httpbin.org, activate this filter, and the filter is a TCP proxy which sends the traffic on to port 443. So this is fairly simple and gives you some amount of security.

Coming back to the AWS point I mentioned: if you're talking to AWS services like S3 and you just configure a service entry, it normally won't work. The reason is that the AWS SDKs usually talk to other things, like the metadata server or an STS service, to actually get tokens. So depending on how you're getting your credentials, you need to whitelist not only the AWS services your applications directly consume, but also things like the metadata server and the STS service. Just something to keep in mind — I have debugged a fair number of issues where people start adding these options and then everything is broken on AWS as well.
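(For reference, a minimal sketch of the registry-only setting and the passthrough-style whitelisting just described, following the standard Istio ServiceEntry API; the resource names are illustrative, and the second entry shows the AWS token-endpoint caveat with sts.amazonaws.com as an example.)

```yaml
# Mesh config flipped from ALLOW_ANY to REGISTRY_ONLY in the "istio" ConfigMap:
#   outboundTrafficPolicy:
#     mode: REGISTRY_ONLY
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: httpbin-ext            # illustrative name
spec:
  hosts:
  - www.httpbin.org
  location: MESH_EXTERNAL
  resolution: DNS
  ports:
  - number: 443
    name: https                # HTTPS protocol => SNI-based TLS passthrough at the sidecar
    protocol: HTTPS
---
# Workloads using AWS SDK credentials usually also need the token endpoints whitelisted.
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: aws-sts                # illustrative name
spec:
  hosts:
  - sts.amazonaws.com
  location: MESH_EXTERNAL
  resolution: DNS
  ports:
  - number: 443
    name: https
    protocol: HTTPS
```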
Alright, moving on to the third option, which is my favorite and the one I always recommend to users: restricted access with TLS origination. You are still whitelisting the services that are allowed out of your mesh, but you're doing TLS origination. TLS origination means you are deferring the TLS negotiation to the proxy: the application talks to the proxy over HTTP and the proxy upgrades the connection. If you're an operator, I know this feels less secure — why would you do that? Let me quickly show you why you actually want this option.

The way it works is the application talks over HTTP to the proxy, and the proxy upgrades the connection and does TLS. If you treat the pod boundary as your security boundary, the main thing you want is that your packets are never unencrypted outside the pod boundary — and this achieves it. Your traffic is always encrypted on its way to the external services; it's only within the pod that it's not. This actually benefits you, because now you can apply lots of advanced layer-7 policies, and the visibility you get from the proxy is now layer 7. And just like the last option, if you're trying to talk to GitHub and it's not in your whitelist, you'll be blocked.

If you look at the pros and cons here, the pros are a lot more than the last option. You get the same level of security, and you are able to apply layer-7 policies now — if you're familiar with Istio, you can use virtual services and destination rules and get retries, timeouts, all the things you want — and as an operator you don't have to rely on the application. From the visibility point of view, if you've configured the right options in Istio, you now get layer-7 or HTTP metrics, access logging, and tracing. That's a lot of visibility right out of the box without changing your application. The con is, as is usually the case with security and visibility, that if you're getting both, you're also getting a lot of configuration. Another downside, if you want to call it that, is that you now have unencrypted traffic between the application and the proxy. Again, most of the organizations and businesses I know can live with this, because the pod is the security boundary.

So this is how you configure restricted access with TLS origination in Istio. You still need a service entry like before, but now the interesting thing is you also add port 80 with protocol HTTP. Don't worry — the proxy is going to upgrade the connection, and the way it does that is with a virtual service which says: when the application tries to talk to httpbin.org on port 80, route it to a specific destination, configured via a destination rule, but at port 443. And that destination rule says: for port 443, use TLS mode simple. So the connection from the application comes to the proxy at port 80, but the proxy makes the connection out at port 443 using simple TLS.
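(For reference, a sketch of that ServiceEntry, VirtualService, and DestinationRule combination, following the standard Istio APIs; resource names are illustrative and www.httpbin.org stands in for your external service.)

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: httpbin-ext
spec:
  hosts:
  - www.httpbin.org
  location: MESH_EXTERNAL
  resolution: DNS
  ports:
  - number: 80
    name: http                 # the application speaks plain HTTP to its sidecar
    protocol: HTTP
  - number: 443
    name: https
    protocol: HTTPS
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: httpbin-ext
spec:
  hosts:
  - www.httpbin.org
  http:
  - match:
    - port: 80
    route:
    - destination:
        host: www.httpbin.org
        port:
          number: 443          # requests arriving on 80 are sent out on 443
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: httpbin-ext
spec:
  host: www.httpbin.org
  trafficPolicy:
    portLevelSettings:
    - port:
        number: 443
      tls:
        mode: SIMPLE           # the sidecar originates TLS toward the external service
```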
So that was all the Istio configuration you need. The control plane eventually lowers it to Envoy, and this is what the Envoy configuration looks like. We create a new listener at port 80; that listener has an HTTP connection manager filter, and that HTTP filter has a route configured for it. The way Envoy configuration works is that listeners have either TCP filters or, mostly, HTTP filters — or filters for other protocols if you use them. For HTTP, the HTTP filter typically points to a route, in this case route 80, and that route points to a cluster. So the chain is listeners, then routes, then clusters. The cluster for route 80 points to a special cluster here, the outbound TLS-origination cluster, and the configuration for that cluster says to use a common TLS context, which means it will use the well-known CA certs to verify the certificate of the server on the other side. As you can see, the amount of YAML — the amount of configuration you have to write to achieve this — is fairly high, but you get a lot of benefits, and with the right amount of automation, or using vendor products which wrap this for you, you can get the benefits without worrying about all the intricacies of the configuration. So I always recommend people use this option.

The fourth option is a pretty advanced one: using an egress gateway. An egress gateway gives you the capability of routing all the traffic to external services through a special gateway, which is like the inverse of the ingress gateway; it's a standard proxy. There are two use cases where I've seen people want to use it. Either you have stringent requirements where all the traffic to the external services you access has to go through special nodes, and those nodes have special monitoring and policy enforcement, or you have a policy that not all the nodes in your cluster can have public IPs, so you only allocate public IPs to some of the nodes. Now, this is going to be a lot of configuration, and I want to be clear: if you're using this, make sure you actually have those requirements; otherwise you're taking on a lot of pain for not a lot of benefit. Also, with egress gateways you have multiple options, but I feel that if you care about security enough to use an egress gateway, you should only be using it with TLS origination — otherwise you're actually giving up security compared to the last option.

Let's quickly look at the architecture. We're using TLS origination, so the app talks to the sidecar proxy unencrypted. The connection between the sidecar and the egress gateway is then encrypted, and this encryption is Istio mutual TLS. This is very important; some of the users I've seen who are currently using the egress gateway don't realize that you need mutual TLS to make this work — otherwise you have gaps in your security. Then the egress gateway again does the TLS upgrade toward the external service. So there are lots of moving pieces here and two kinds of encryption, but if you really want the benefits, you can do this. The pros are the same as for the last option, plus you get the public-IP control I mentioned, and the visibility is the same. For the cons, there's really a lot of configuration — I'll walk you through it quickly next — and from the security point of view, you really have to enable mutual TLS in your cluster, otherwise this link between the sidecar and the egress gateway will not be encrypted.
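(As a reminder of what "enable mutual TLS" can look like: in recent Istio releases this is a PeerAuthentication resource, while older releases used MeshPolicy/Policy for the same purpose; the sketch below assumes a mesh-wide STRICT setting applied in the root namespace.)

```yaml
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system      # root namespace => applies mesh-wide
spec:
  mtls:
    mode: STRICT               # in-mesh hops, including sidecar-to-egress-gateway, must use mTLS
```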
The configuration for this option: first, you need to enable the egress gateway and verify that it's running. You need the service entry, and this service entry should look very familiar — it's the exact same service entry as for option three. And now comes the main configuration overhead that comes with enabling the egress gateway. Here we have to write a Gateway resource; Gateway resources configure layer 3 and layer 4 for any gateway, whether that's egress or ingress. In this case we say the selector is the Istio egress gateway, and on port 80 we're doing protocol HTTPS. Even though the port is 80, we use protocol HTTPS because the sidecar proxies and the egress gateway are going to talk over mutual TLS; the egress gateway listener will do SNI sniffing, the TLS mode here is mutual, and the certs referenced are the Istio-issued certs. Then you have a destination rule, which says that whenever you're talking to the Istio egress gateway you should use TLS mode ISTIO_MUTUAL, and the SNI — the host to present — is httpbin.org.

All right, bear with me, a little more configuration. Next you configure a virtual service for the host httpbin.org, but you match on different gateways and take different actions accordingly. If you're matching on the mesh gateway — that is, the sidecar proxies — you route the traffic to the Istio egress gateway service. But if you're matching on the egress gateway itself, you send it to httpbin.org, and this is where you do the TLS upgrade, the TLS origination. The last piece of the puzzle is a destination rule that applies at the Istio egress gateway, which does TLS mode simple and upgrades the connection. As you can see, there's a lot more configuration, and there are plenty of opportunities to screw it up, but if you have some automation you might be able to get away with it. And again, only use this if you have those stringent requirements.
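(Pulled together, the egress gateway pieces look roughly like the sketch below. It follows the pattern from the Istio egress-gateway documentation of that era, with illustrative resource names, www.httpbin.org as the external host, and the stock istio-egressgateway deployment in istio-system assumed; the ServiceEntry is the same one shown for option three.)

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: istio-egressgateway
spec:
  selector:
    istio: egressgateway
  servers:
  - port:
      number: 80
      name: https-port-for-tls-origination
      protocol: HTTPS              # port 80, but mTLS between sidecars and the gateway
    hosts:
    - www.httpbin.org
    tls:
      mode: MUTUAL
      serverCertificate: /etc/certs/cert-chain.pem   # Istio-issued certs mounted in the gateway
      privateKey: /etc/certs/key.pem
      caCertificates: /etc/certs/root-cert.pem
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: egressgateway-for-httpbin
spec:
  host: istio-egressgateway.istio-system.svc.cluster.local
  subsets:
  - name: httpbin
    trafficPolicy:
      portLevelSettings:
      - port:
          number: 80
        tls:
          mode: ISTIO_MUTUAL       # sidecars speak Istio mutual TLS to the egress gateway
          sni: www.httpbin.org
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: route-httpbin-through-egress
spec:
  hosts:
  - www.httpbin.org
  gateways:
  - mesh                           # the sidecar proxies
  - istio-egressgateway
  http:
  - match:
    - gateways:
      - mesh
      port: 80
    route:
    - destination:
        host: istio-egressgateway.istio-system.svc.cluster.local
        subset: httpbin
        port:
          number: 80
  - match:
    - gateways:
      - istio-egressgateway
      port: 80
    route:
    - destination:
        host: www.httpbin.org
        port:
          number: 443              # the gateway forwards to the external service on 443
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: originate-tls-for-httpbin
spec:
  host: www.httpbin.org
  trafficPolicy:
    portLevelSettings:
    - port:
        number: 443
      tls:
        mode: SIMPLE               # the egress gateway originates TLS to www.httpbin.org
```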
Let me quickly summarize what this means. If you're going for more security or more visibility, sadly, more configuration always comes with it — that's just how life works. I would say allow any is the option you should not even consider if you want security. Restricted access with passthrough is a reasonable place to start, and then you should eventually try to be somewhere here — either restricted access with TLS origination or an egress gateway with TLS origination, depending on your needs. You also want to make sure, as a team or an organization, that you don't get stuck partway to your goals: whichever option you go with, have some automation or wrappers around it to simplify the life of your developers. And the last thing, which I really like about Istio, is that it was designed for incremental adoption. You can start at any of these pit stops and still reach the goal you want. If you're just starting out with Istio and you want allow any, that's perfectly reasonable. You can start capturing destination IPs from the TCP telemetry, then create service entries for those external services — you do have to do a reverse DNS lookup there — and then, once you have the service entries, flip to registry only. Now you have only encrypted traffic going out and you've blocked all unauthorized traffic. Then you can update the applications to use HTTP and eventually use TLS origination, and once you use TLS origination, you have security and visibility with a reasonable amount of configuration. I won't say it's trivial, but it's also not zero configuration. The important thing is that you can start at any of these pit stops and still get where you want to go. So with that, let's see — I have 15 minutes left. Lee, do we have any questions?

Oh, this is great. This is an awesome presentation, Neeraj. So we do — it looked like you were about hip-deep in YAML there for a while, so I'm glad you escaped. To all of our attendees, if you do have questions, please do ask them; put them into the Q&A at the bottom of your screen and we'll get to as many of those as we can. And Neeraj, we do have a few questions coming in. One attendee asks: can you mix the TLS configurations in any given cluster — for example, enabling TLS origination, an egress gateway, and TLS passthrough — or can you only select one at a time?

Absolutely, with Istio you can pick what you want. Now, again, you want to minimize your pain here: the more variety of options you use, the more you need to make sure those fine-grained configurations are correct. What I would recommend, if you really have this use case, is to configure your deny rule, your overall posture, first. If you want to block everything, make sure everything is blocked, then incrementally add things to the allow list, and then configure origination or mutual TLS. The second thing is to use tools like istioctl's auth and policy checks, and istioctl analyze, which give you a way to see whether your configurations conflict with each other. And the last thing is to go to the source of truth and look at your Envoy configuration — that's why I had those snippets in the slides; that's going to really tell you what's happening.

Very good, very good. Okay, the next question: for the options you've outlined, which of these would give you the opportunity to limit service A to an external service, but not service B?

Really good question — it feels like this was asked by someone who has worked with Istio. There are two things in Istio which are kind of weird and which we are trying to fix in the community. One is that some of these APIs have what we call a global effect: if you configure a service entry for any host, it currently applies to all the sidecar proxies across all namespaces. But you have an option called exportTo, which lets you limit the visibility of the service entry to a particular namespace. So if you are following namespace-level isolation, I would say put service A in namespace A and service B in namespace B, create the service entry in namespace A, and set exportTo to "." — that will make sure only services in namespace A can talk to the external service, and no services in any other namespace can. Please reach out to me if you're stuck with this; I have had quite a few people set up configuration this way. It's a little tricky, and we are definitely working on making it better.
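(A rough sketch of that scoping, assuming a hypothetical namespace "team-a" for service A: setting exportTo to "." keeps the entry visible only within its own namespace.)

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: httpbin-ext
  namespace: team-a              # hypothetical namespace holding service A
spec:
  hosts:
  - www.httpbin.org
  exportTo:
  - "."                          # visible only to sidecars in team-a; other namespaces stay blocked
  location: MESH_EXTERNAL
  resolution: DNS
  ports:
  - number: 443
    name: https
    protocol: HTTPS
```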
Another question: how would you configure egress gateway TLS origination with self-signed certs mounted on the gateway, without needing to mount those certs in the sidecar proxies?

Wow, people are really advanced here — I like it. Maybe you're going to get your wish, Lee. Let me see. The way I would do it is with destination rules — and again, I hear people when they worry about how much configuration this is; we will make it simpler. When you configure destination rules, you can specify which sidecars the rule should be enforced on: similar to service entries, destination rules can be scoped by specifying exportTo. So what I would do is define the destination rule in the same namespace as the egress gateway, scope it down, and configure that destination rule to use the self-signed certs; that way only the egress gateway gets those certificates. I think we can make it even better — the API is still evolving — but it's definitely doable by scoping it down. And you're right to ask this question: you don't want all the sidecars to get the certificates, and the way to achieve that is to scope it down.

And I was so hopeful that that was a stump. Very good. Some other questions have come through. One of those is about the ability to watch the seminar again in the future — yes, the seminars are recorded, the slides will be available, and we'll put the link in the chat. Very good. Another one: would we get the same security with gRPC?

Yes, you will, because gRPC is HTTP — it's just HTTP/2 — and in Envoy we already have the capability to understand gRPC. So you can definitely do gRPC TLS origination or TLS passthrough using these primitives.

Very good. We do have a little more time for some questions, so here's another one: how would you compare and contrast the options you've outlined with something like Istio mesh expansion, where you're adding an Envoy agent on an external service? Are there cases where mesh expansion would be a better alternative?

It's an interesting question, but the aim of mesh expansion is a bit different. When you're doing mesh expansion, you're trying to bring your VMs into your mesh, so that services within the Kubernetes cluster which are part of your service mesh can talk to other services which are outside the cluster. Now, assuming you've bought into the mesh expansion architecture, then yes, you have the right options: services within the Kubernetes cluster can talk to services outside, where the Envoy agent is deployed as a node agent, with either mutual TLS or plain TLS — and at that point I'd recommend just using mutual TLS, because that's the most secure way. The interesting thing you're trying to ask, I'm guessing, is: if that VM, or the application on the VM, is itself trying to reach out to an external service, how do you enforce it? I think the sidecar configuration options I gave should work, but to be honest, I haven't tried them. If you're really interested, please reach out to me — you have my email and Twitter there — and I'll definitely help you out.

Nice, fantastic. Okay, another one: compared to the other options here, the first one, allow any, looks really insecure. What are the use cases for it?

You are right — it is completely insecure, and that's why, if you're an Aspen Mesh customer or use our distribution, we don't even have it; we disable it. The reason we have it in the community is that the organizations that aspire to use Istio or reach zero trust are in different phases of that journey. Some people are just moving to Kubernetes, and if you start blocking external services for them at that point, things break. So we need to give them incremental adoption. Yes, it's not secure, but for many organizations security is a big concern without being the burning concern — they may need to make more progress toward getting more customers and gaining agility first.
So it's just provided as a way to eventually get to a more secure posture.

Makes sense, makes sense. For the developers out there, I'm not sure that security is ever a burning concern. You're right — there's always a trade-off here, right? It'll be a burning concern at some point. Very good. You have been sufficiently peppered with question after question, which is good. We've got another couple of comments coming through; maybe one more here. The question is: do you need to have policy enforcement enabled — istio-policy — to switch to registry only?

No. That's a really good question, and I'm glad somebody asked, because I missed that in the presentation. The options I configured and showed you are orthogonal to istio-policy. In Istio 1.4 we actually removed istio-policy as a default option. istio-policy used to be a separate component that ran as part of Mixer, where attributes were sent from the sidecars and policy was evaluated outside, at a centralized point of control. Going forward we are deprecating that, and policies will be enforced entirely within the sidecar. The registry-only option is purely a sidecar option, so if you configure registry only, the policy component is not involved at all. Similarly, when you configure destination rules and the other options, all of those apply directly to the sidecar.

Very good. Great. Well, Neeraj, this was a great presentation, with lots of good questions, and that's probably all the time we have for questions today. We were that close to potentially stumping Neeraj — I want to let everyone on the call know that I'm slightly disappointed, but there will be more chances. You can catch him at both his Twitter and his email here. Neeraj, will you be at the upcoming KubeCon?

I might be — I'm still working on the logistics. I think Aspen Mesh will be there with our booth, so please visit us at the booth, and we'll have someone from our team to answer any questions.

Very good, very good. Okay, well, thanks all for joining us today. The webinar recording and slides will be online later today, and we look forward to seeing you at a future CNCF webinar. Have a great day. Thank you, Neeraj. Thanks.