Welcome, everyone, to the session on using Envoy as an egress proxy for TLS-enabled traffic. Briefly introducing myself, my name is Amit Jain. I'm presently leading the application security and services team at VMware for modern and cloud native applications. Prior to VMware, I was a founder of Mesh7 and served as its CTO. At Mesh7, we developed an application security mesh for distributed cloud native applications, and we were acquired by VMware earlier this year. Joining me is Kiran Kumar. Kiran is a software architect at VMware. Kiran was part of the founding team at Mesh7 as well and has a strong background in application proxies as well as security solutions such as IDS and IPS. To start with, cloud native and modern application architectures have become the new normal. They offer tremendous benefits in terms of developer productivity and innovation agility. But of course, with great power comes great responsibility. In this case, it is about securing this new infrastructure against the new security challenges that are surfacing with modern applications. As we go from users to the workloads at the edge, we care about ensuring only legitimate traffic gets in and that it is appropriately load-balanced, rate-limited, et cetera. At the edge, the security controls are implemented using web application firewalls, API gateways, and Kubernetes ingress. Now, as we enter the microservices network, we need to worry about lateral movement of threats and least-privileged communication between services and data. In the microservices network, we enforce security controls using CNIs, Kubernetes policies, and, most recently, a service mesh at the application and API layer. And finally, modern applications are increasingly relying on external third-party services, such as Twilio and Box, for example, shared cloud services, such as S3, and legacy applications.
Now, as we consider external interactions, we need to ensure that only valid third-party domains are being connected to. In this session, specifically, we'll focus on the egress security aspect of securing modern cloud native applications. In the next few slides, we will walk through the challenges of securing egress and describe a solution to add egress observability and security to distributed cloud native applications. Now, let's consider the need for egress connectivity. Egress connectivity is a must-have for cloud native app functionality. As we covered briefly in the previous slide, cloud native apps are increasingly using external third-party services for different important functions, such as Box for file sharing or Twilio for communication. Applications are also increasingly using shared cloud services provided by the cloud service providers, such as S3 for storage and RDS for databases. And they are also reaching out to legacy apps that may be deployed on-prem or on non-containerized workloads such as VMs or bare metal. All of the above make egress connectivity a must-have for cloud native applications. At the same time, while egress connectivity is required, it also poses different types of security risks and challenges. For example, in attacks that deploy persistent backdoors, egress connectivity can be used to connect to a command-and-control server. A compromised workload may reach out to a malicious site to download malware such as ransomware. Egress connectivity can also be exploited for data exfiltration: a hacker may access a cloud resource, such as an S3 bucket, read the data, and upload it to an external server. Being able to observe and secure egress interactions is therefore very important for securing cloud native applications, but it's not easy.
One of the main challenges of securing egress is that most egress interactions are TLS encrypted. To be able to observe and secure these interactions, we require a proxy which allows us to do deep TLS inspection. And that requires special semantics, known as SSL man-in-the-middle, for intercepting and observing TLS traffic. And that's where Envoy's limitations come in. Envoy does support TLS termination, but it supports it with reverse proxy semantics: it can terminate the TLS traffic if it has the application's certificate and key. For egress TLS interception, though, we need forward proxy semantics based on an SSL man-in-the-middle process. The way it works is that the SSL interception proxy performs certificate rewriting, also known as trust translation, which we're going to cover in detail in the next slides. Envoy currently does not support SSL man-in-the-middle. In the longer term, the right approach would be to enhance Envoy to do egress TLS inspection with built-in SSL man-in-the-middle interception. For this session, though, we are exploring an alternative solution where we use another open-source proxy, called SSLproxy, and deploy it in conjunction with Envoy to achieve observability of egress TLS traffic. So let's take a look at SSLproxy. We are proposing, in this session, the use of SSLproxy along with Envoy to achieve egress observability. SSLproxy is a full proxy that intercepts TLS connections, decrypts the traffic, and diverts the traffic to other programs for processing and deep inspection of TLS traffic. It's open source and BSD licensed. It is used widely; for example, the UTMFW firewall project uses it to provide deep TLS interception for HTTP, POP3, and SMTP traffic. It supports many SSL/TLS protocol versions, up to and including TLS 1.3.
It supports TLS extensions such as SNI, and it is able to work with most cipher suites and key types, including RSA, DSA, and EC. Additionally, it supports advanced features such as user authentication. Also, when we actually intercept TLS, one of the key requirements is that the intercepting proxy needs to make sure that it is connecting to a valid backend server, and it needs to do things such as certificate validation. SSLproxy provides support for certificate validation: when it does the TLS interception, it ensures that the certificate presented by the backend server is valid, and if it finds it to be invalid, it terminates the TLS flow. Now let's take a look at how it works. As the application tries to connect out to an external TLS service, SSLproxy intercepts the TLS handshake in an inline manner. It then performs the TLS handshake with the destination server on behalf of the application and rewrites the server-provided certificate, forging an ephemeral certificate towards the application using a local root CA, which can either be user-provided or automatically created; we'll cover this in some more detail in the next two slides. This certificate rewriting enables SSLproxy to do SSL man-in-the-middle and to maintain two independent sessions, one towards the application and the other towards the backend service. And this allows SSLproxy to decrypt the data coming from the application and then re-encrypt it before forwarding it to the backend server. One of the main features of SSLproxy is that it can redirect decrypted TLS traffic to an external program for further processing and inspection, and the external program can process the traffic and then redirect it back to SSLproxy.
And this allows SSLproxy to be deployed as a proxy which provides access to TLS-encrypted traffic, which is the main feature we are using while deploying it with Envoy. Now, let's take a look at how it works. To support the redirection of decrypted traffic to an external program, SSLproxy uses a protocol called the SSLproxy protocol. The way it works is that SSLproxy inserts connection metadata into the first packet of the connection. So when SSLproxy receives encrypted traffic from the application, it decrypts it and forwards the decrypted traffic to the external program, and when it does so, it inserts metadata as part of the first packet on this connection. This metadata has the details of the source IP and port of the actual application, the destination IP and port where the application is trying to connect, and the SSLproxy IP and port: the address where SSLproxy expects the external program to send the traffic back once it is done processing it. SSLproxy can then receive that traffic, re-encrypt it, and forward it to the backend server. Now let's take a look at how we deploy this solution in conjunction with Envoy. When the application tries to connect out to external services over TLS, we use a traffic redirection mechanism such as iptables to redirect the traffic to SSLproxy in a transparent manner. SSLproxy intercepts the TLS handshake between the application and the backend server, and it performs another TLS handshake with the external service on behalf of the application.
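The metadata line described above can be illustrated with a small sketch. Note this is an approximation: the exact field order and framing of the SSLproxy protocol line are defined by the SSLproxy project, and the format below (a single "SSLproxy:" line prepended to the first packet, carrying the return address, client address, destination address, and an ssl/plain flag) is our paraphrase, not a normative definition.

```python
import re

# Approximate shape of the SSLproxy protocol line (field order assumed):
# "SSLproxy: [ret_ip]:ret_port,[src_ip]:src_port,[dst_ip]:dst_port,s\r\n"
HEADER_RE = re.compile(
    r"^SSLproxy: \[(?P<ret_ip>[^\]]+)\]:(?P<ret_port>\d+),"
    r"\[(?P<src_ip>[^\]]+)\]:(?P<src_port>\d+),"
    r"\[(?P<dst_ip>[^\]]+)\]:(?P<dst_port>\d+),(?P<mode>[sp])\r\n"
)

def insert_header(payload: bytes, ret, src, dst, ssl=True) -> bytes:
    """Prepend the connection metadata line, as SSLproxy does on the
    first packet it forwards to the external program."""
    line = "SSLproxy: [{}]:{},[{}]:{},[{}]:{},{}\r\n".format(
        ret[0], ret[1], src[0], src[1], dst[0], dst[1], "s" if ssl else "p")
    return line.encode() + payload

def parse_header(data: bytes):
    """Split the metadata line off the first packet; returns (meta, rest)."""
    m = HEADER_RE.match(data.decode("latin-1"))
    if not m:
        return None, data
    return m.groupdict(), data[m.end():]
```

The external program parses the line once, remembers the return address, and from then on sees only the plain decrypted payload.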
Now, upon receiving the TLS certificate from the external service, SSLproxy uses the external service's information from the certificate to generate an ephemeral certificate, signs it using a local root CA (which can be automatically created or user-provided), and forwards it to the application. So what goes to the application is a forged certificate which is issued by SSLproxy and which is based on the details received from the external service. The application therefore believes that it is communicating with the external service. This process creates two independent sessions: SSLproxy maintains one session with the application and another with the external service, and these two independent sessions allow SSLproxy to decrypt the information coming from the application, forward it to Envoy, and then re-encrypt it towards the external service. The way it works is that when the application now sends data to the external service, SSLproxy decrypts the data and forwards the decrypted data to Envoy using the SSLproxy protocol. Envoy receives the decrypted traffic, can provide observability and processing on the TLS traffic, and then sends the traffic back to SSLproxy on the IP and port specified in the SSLproxy header. SSLproxy re-encrypts the traffic and forwards it to the backend server. Return traffic follows the same path: the response comes from the external service; SSLproxy receives the response and, after decrypting it, sends it to Envoy; Envoy processes the response and sends it back to SSLproxy; SSLproxy then re-encrypts it towards the application, and the application receives the response. Now, to be able to use Envoy as an external program with SSLproxy, we needed to add support for SSLproxy protocol termination in Envoy, and this is done using a new listener filter in Envoy which we are calling the SSLproxy filter.
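The certificate rewriting ("trust translation") described above boils down to copying the identity fields of the real server certificate into a new ephemeral certificate signed by the local root CA. Below is a purely conceptual model of that step using plain data structures, not real X.509 handling; field names are ours, chosen for illustration.

```python
from dataclasses import dataclass, field

# Conceptual model of trust translation: the server's subject and SANs are
# preserved so the application sees the hostname it asked for, while the
# issuer becomes the local root CA and the key becomes one the proxy owns.
@dataclass
class Cert:
    subject: str                              # e.g. "CN=www.forbes.com"
    sans: list = field(default_factory=list)  # subject alternative names
    issuer: str = ""                          # who signed this certificate
    public_key: str = ""                      # key pair of the presenter

def forge_ephemeral_cert(server_cert: Cert, local_ca: Cert,
                         proxy_key: str) -> Cert:
    """Rewrite the upstream certificate: same identity, new issuer and key."""
    return Cert(
        subject=server_cert.subject,   # identity copied verbatim
        sans=list(server_cert.sans),   # so hostname checks still pass
        issuer=local_ca.subject,       # now signed by the local root CA
        public_key=proxy_key,          # proxy holds the matching private key
    )
```

This is why the application must have the local root CA in its trust store: the forged certificate chains up to that CA, not to a public one.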
What this new listener filter does is attach itself to the listener and react to the accept event; in the accept event, it subscribes to the socket file event for read data. Once it receives the read-data event, it looks for the SSLproxy header. Once it receives the SSLproxy header, it parses it and extracts the information about the client IP and port, the destination IP and port, and the return address of SSLproxy. It then uses this information to set the flow metadata so that the flow is seen as coming from the client IP and port, and so that the traffic is forwarded back to SSLproxy. Once it has processed the SSLproxy header, it disables itself by unsubscribing from the file event and continuing the filter chain. This means that all future interaction between the application and the backend service, all the data received on this flow, goes through the regular filters, in this case the connection manager filters, and the SSLproxy filter is no longer active in the path after it has processed the SSLproxy header. Now let's quickly go over how we use the above solution in an Istio deployment to achieve observability of TLS traffic. Deploying the solution in the Istio environment required modifications to the Istio sidecar. First, the Istio sidecar image is modified to include the SSLproxy binary and an associated script to set up and initialize SSLproxy. We also needed to add iptables rules for redirecting the application's TLS traffic to SSLproxy. And finally, we needed to integrate with the Istio control plane to add a new listener in the Envoy sidecar proxy that has the SSLproxy filter, captures the traffic from SSLproxy, and provides observability on it. We also added a new cluster in the Envoy sidecar proxy with the ORIGINAL_DST load balancing method. This provides transparent proxy semantics for the Envoy proxy.
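Returning to the listener filter's lifecycle described above (react to accept, wait for the first read, parse and strip the header, restore the addresses, then get out of the way), it can be sketched as a small state machine. This is a hypothetical Python sketch with invented names; the real filter is C++ code inside Envoy, and the header format here follows our earlier assumption about the SSLproxy protocol line.

```python
class SslproxyListenerFilter:
    """Sketch of the SSLproxy listener filter's behavior: consume the
    metadata line from the first read, record the restored addresses,
    then become a no-op so later data flows straight to the filter chain."""

    def __init__(self):
        self.active = True        # disables itself after the header is seen
        self.return_addr = None   # where processed traffic is sent back
        self.client_addr = None   # original client "[ip]:port"
        self.dest_addr = None     # original destination "[ip]:port"

    def on_data(self, data: bytes) -> bytes:
        if not self.active:
            return data           # pass-through once header is processed
        line, _, rest = data.partition(b"\r\n")
        # crude parse of "SSLproxy: [ret]:p,[src]:p,[dst]:p,s"
        fields = line.decode().split(": ", 1)[1].split(",")
        self.return_addr, self.client_addr, self.dest_addr = fields[:3]
        self.active = False       # "unsubscribe" and continue the chain
        return rest               # header stripped; payload continues on
```

After the first call, the filter no longer touches the stream, which mirrors the unsubscribe-and-continue behavior described in the talk.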
This briefly shows how we are starting SSLproxy inside the container. The initialization script performs two steps: first, it creates a temporary root CA for each pod, and then it initializes SSLproxy inside the sidecar. A few of the key parameters are described on the slide; these parameters provide basic configuration such as the listening port and the user ID for the SSLproxy process. For redirecting the application's TLS traffic to SSLproxy, we add a new iptables rule as shown on the slide. The rule is added in the ISTIO_OUTPUT chain before the traffic is redirected to the ISTIO_REDIRECT chain. Finally, we use the EnvoyFilter CRD to add a new listener and cluster in the Envoy proxy deployed in the sidecar. We needed to integrate with the Istio control plane, and the EnvoyFilter CRD allowed us to push a new listener to every sidecar. This new listener has the SSLproxy filter and allows Envoy to receive the traffic from SSLproxy and then process it. We also added a new cluster in the Envoy sidecar proxy with the ORIGINAL_DST load balancing method, which provides transparent proxy semantics for the Envoy proxy. Now we'll briefly demo the solution using the Bookinfo app in an Istio deployment, and I will hand it over to Kiran for the demo. Thanks. Hi, everyone. Today I'm going to give a demo on Envoy being used as an egress proxy for TLS-enabled traffic. In my demo setup, I've already installed the Bookinfo microservice application. In this setup, I've also modified the product page to fetch data from an external secure website, Forbes.com. In the modified sidecar image, the SSLproxy binary is packaged, and during initialization, SSLproxy is started with the required parameters. We have also added new iptables rules to redirect the TLS traffic to SSLproxy, which is listening on port 8443.
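The redirect rule described a moment ago is along these lines. This is an illustrative reconstruction, not the exact rule from the slide: the port numbers come from the demo, and while ISTIO_OUTPUT and ISTIO_REDIRECT are Istio's standard nat-table chains, the precise match conditions used in the talk may differ.

```shell
# Send the application's outbound TLS traffic to SSLproxy (port 8443 in the
# demo) before Istio's own ISTIO_REDIRECT chain can claim it. Inserting at
# position 1 of ISTIO_OUTPUT in the nat table makes this rule win.
iptables -t nat -I ISTIO_OUTPUT 1 -p tcp --dport 443 -j REDIRECT --to-ports 8443
```

Ordering is the important part: if Istio's redirect runs first, the TLS flow goes straight to the sidecar's outbound capture and SSLproxy never sees it.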
Now, I have already applied an EnvoyFilter CRD which adds a new listener as well as a new cluster. This is the EnvoyFilter CRD, which has the following configuration. This is the new listener that we add; it listens on port 10090, and it has a flag to use the SSLproxy protocol. This flag indicates that this listener expects the SSLproxy protocol header to be present on a new connection. Now I'm going to take a config dump of Envoy and verify that this configuration has been applied successfully. This is the listener that got added successfully. This new listener has the SSLproxy filter added. It receives all the decrypted traffic from SSLproxy, and using the SSLproxy filter, it terminates the SSLproxy header. It then forwards to the cluster, the external cluster, which has the ORIGINAL_DST load balancing policy enabled. This provides transparent TLS proxy semantics. Now, before I access the product page, I reset all the iptables counters to zero. Also, in Istio 1.7, all the telemetry data from the sidecar proxy is sent to the telemetry pod. I have already enabled the telemetry log entry, so I'll tail the log entries, and we can see what telemetry data Envoy sends to the telemetry pod. I'm tailing the logs here so that we can see the telemetry data sent by the Envoy process. Now I'm going to access the product page. As I explained earlier, the product page fetches quote data from the external secure site and then displays it. So now let's access the product page. As you see, this quote section is actually the data that has been fetched from the external secure site, Forbes.com. Now, looking at the iptables output with verbose counters, you can see that the client's initial request to the external secure site was redirected to SSLproxy, which was listening on port 8443.
As explained earlier in the presentation, this data is then decrypted by SSLproxy, and the decrypted traffic is passed to Envoy for further analysis. Now, looking at the telemetry log, it shows the destination IP, the destination host information, the URL information, the source workload information, the user agent, et cetera. With this information, we can enable policy enforcement as well as analytics and other required processing. Thank you, Kiran. Just very quickly, about the open items and next steps: we plan to open source the SSLproxy listener filter and submit it for review and merge upstream to the Envoy community. We plan to work with the Istio community to make this part of Istio, and that will require us to enhance the sidecar as well as integrate with Istio's ServiceEntry paradigm for TLS configuration. Of course, in this demo we only covered the observability part; we plan to also integrate with traffic management and authorization policies to be able to make them applicable to external TLS services. The final piece is around the root CA. SSL man-in-the-middle requires the application to trust the certificate issued by SSLproxy, and that requires the root CA to be added to the trusted CA list for the application. This requires a standardized way of doing it, so that is another item to be discussed and defined to make the solution operational. And of course, in the longer term, we would like to bring SSL man-in-the-middle natively into Envoy itself. Thank you. We appreciate your time and hope you enjoy the rest of the conference. Thank you.