I'd like to thank everyone who's joining us today. Welcome to today's CNCF webinar on how to choose the right proxy architecture for microservice-based application delivery. I'm Karen. I'm a CNCF ambassador and community manager on the Azure Container Compute team, and I'll be moderating today's webinar. We'd like to welcome our presenters today: Pankaj Gupta, senior director at Citrix on the cloud native application delivery team, and Miko Desini, director on the cloud native application delivery team at Citrix. Just a few housekeeping items before we get started. During the webinar, you are not able to talk as an attendee. If you look at the bottom of the Zoom window, there's a Q&A box. Please go ahead and use that to drop in your questions, and we'll get to as many of them as we can at the end. With that being said, I will now hand it over to Pankaj and Miko to start off today's presentation. Thank you, Karen. Thank you, audience, for joining the webinar today: how to choose the right proxy architecture for microservice-based application delivery. I think this is the most critical decision you will make as you advance toward microservices-based applications. As you modernize your applications, you also have to consider modernizing your application delivery infrastructure, which is built on application delivery controllers (ADCs), previously known as load balancers. Today you will learn the various architecture choices: how to choose the right architecture for north-south traffic, how to choose the right architecture for east-west traffic, which is the traffic between or among the microservices, and many other parameters for choosing the right architecture. Citrix is an active member of the Cloud Native Computing Foundation, and we are very proud of and thankful for our partnership and association with CNCF. We have a pretty packed agenda. 
We start with the importance and the challenges of choosing the right architecture, and this is the only place you will find a very comprehensive view of the various architectures available in the marketplace. You need not sift through tons of information; you will find in a single place the various architectures and the pros and cons of each one. After that, we do a quick recap of layer four and layer seven load balancing and the fine differences between them. This is a critical criterion for considering your architecture, and some of the architectures we will be talking about use kube-proxy for east-west traffic, which is primarily a layer four load balancer. We have a snapshot of the four architectures, and then we do a deep dive into each architecture across seven key attributes, which your extended stakeholders care about. Toward the end, we are going to touch upon Citrix solutions at a glance. I am Pankaj Gupta, senior director for cloud native application delivery, and along with me I have Miko Desini, who is a product management leader at Citrix. Our emails are on the first slide; if you need more information, feel free to reach out to either of us. As you build your business-critical applications on microservices or Kubernetes environments, you have to choose the right architecture. Choosing the right architecture will determine how well you can deliver the experience to your users, what kind of visibility you get, and what kind of security you get. And choosing it is not easy, because you have multiple stakeholders: developers, the platform team, the networking team, the SecOps team, the DevOps team, site reliability engineers, and many more, and each has unique needs. So your architecture should be able to address the unique needs of each stakeholder. 
You also have to load balance not just north-south traffic, which is traditionally done for three-tier web applications, but also east-west traffic, offering the same visibility and the same security. East-west traffic, in very simple words, is the traffic between the microservices, between the containers, or among the pods. The biggest challenge is going to be the trade-offs between benefits and complexity, and each organization is on a different journey in terms of cloud-native skill set: some are experts and some are novices. We are going to provide you a simple framework to choose the right architecture. Make no mistake, architectures are complex and they are evolving, because technology is changing pretty fast and a lot more open-source innovation is coming. So you can take the decision wisely. A quick recap of layer four and layer seven load balancing. In simple words, layer four load balancing is very simple, very primitive: it is based on just IP addresses and ports, it is HTTP- and HTTPS-blind, and it doesn't have the capability to look into or rewrite the payload, or in simple words, the ability to do content switching. Layer seven load balancing, on the other hand, is very feature-rich. It has a tremendous capability to look inside the traffic, the packet, or the payload, and you can apply advanced load balancing techniques based on the URL or client information like browser, OS, or device. It truly takes advantage of looking into the HTTP or HTTPS packet. And as you know, most microservices use HTTP, HTTPS, or APIs as the standard protocol, so layer seven load balancing is really designed for the applications of today and tomorrow. 
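As a rough illustration of the difference: a layer four decision can only use addresses and ports, while a layer seven decision can inspect the parsed HTTP request itself. Here is a minimal Python sketch; all pool names and fields are invented for illustration, not any vendor's API:

```python
def l4_route(src_ip: str, dst_port: int) -> str:
    """Layer 4: the only routing inputs are IPs and ports."""
    # The proxy cannot see URLs, headers, or cookies at this layer.
    return "web-pool" if dst_port in (80, 443) else "default-pool"

def l7_route(path: str, user_agent: str) -> str:
    """Layer 7: content switching on the parsed HTTP request."""
    if path.startswith("/api/"):
        return "api-pool"        # route API calls to the API microservices
    if "Mobile" in user_agent:
        return "mobile-pool"     # client-aware routing (browser/OS/device)
    return "web-pool"

print(l4_route("198.51.100.7", 443))           # -> web-pool
print(l7_route("/api/orders", "Mozilla/5.0"))  # -> api-pool
```

Note that the layer four function has no way to send `/api/` requests to a different pool than `/static/` requests; that content-switching ability is exactly what layer seven adds.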
Deep packet inspection capability in layer seven load balancing also gives you a better user experience, because you can offer advanced session persistence. In layer four, if you are doing session persistence based only on the IP address, which can change over time, you may not deliver the best user experience, while layer seven load balancing takes advantage of cookies: even if the IP address changes, you can still offer a great experience. How are resources allocated for load balancing decisions? In layer seven, it is based on very detailed, customizable health checks, versus layer four, where it's just a ping or a TCP handshake only. The advanced packet lookup capabilities of layer seven also help you offer better security. These slides will be available later; if you want to go into the details of layer four versus layer seven load balancing, you will find this information handy. We will be discussing four architectures today, which we have plotted on two axes: one is complexity and one is benefits. In the bottom corner, you see two-tier ingress, which is the simplest to deploy; the north-south load balancing is split into two tiers of ingress, represented here by the green and the blue. We'll be discussing a variant of that, which combines these two tiers of ingress into a unified ingress. And the nirvana, the most advanced and most feature-rich architecture, is service mesh, which has emerged as a North Star architecture for many companies. We are going to do a very detailed dive into that. Service mesh is very complex, too. So if you want all the benefits of the service mesh architecture but with much less complexity, we'll be talking today about the Service Mesh Lite architecture. These four architectures help you move to cloud native at your own speed. 
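To make the session-persistence point above concrete, here is a hedged Python sketch contrasting source-IP persistence (the layer four approach) with cookie-based persistence (the layer seven approach); the cookie name and backend list are invented for illustration:

```python
import hashlib

BACKENDS = ["pod-a", "pod-b", "pod-c"]

def persist_by_ip(client_ip: str) -> str:
    """L4-style persistence: hash the source IP to a backend.
    If the client's IP changes mid-session, the session moves."""
    digest = hashlib.sha256(client_ip.encode()).hexdigest()
    return BACKENDS[int(digest, 16) % len(BACKENDS)]

def persist_by_cookie(cookies: dict) -> tuple[str, dict]:
    """L7-style persistence: pin the session with a cookie,
    so it survives a client IP change."""
    backend = cookies.get("lb-backend")
    if backend not in BACKENDS:
        backend = BACKENDS[0]                       # first request: pick one
        cookies = {**cookies, "lb-backend": backend}
    return backend, cookies

# A roaming client changes IP but keeps its cookie jar:
first, jar = persist_by_cookie({})
second, _ = persist_by_cookie(jar)
# The cookie keeps the session pinned to the same pod.
```

The design point: the cookie travels with the client, so the persistence key is independent of the network path, which is why the talk calls this out as a user-experience advantage of layer seven.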
You can choose the right architecture by balancing between benefits and complexity, and whether you are a cloud-native pro or a cloud-native novice, this framework will help you choose the right architecture for today's and tomorrow's needs. Choosing the right architecture is not easy, because you have diverse stakeholders. At the center of these stakeholders is the platform team, which is responsible for running and managing the Kubernetes platform. They bring platform governance, they are responsible for operational efficiency and developer agility, and they are the connecting tissue among the multiple stakeholders. Starting with DevOps: DevOps teams' top cares are faster releases, faster deployment cycles, automation, and advanced rollout techniques like canary and progressive rollout. Developers care about user experience, faster troubleshooting, and how their microservices are discovered and how traffic is routed to them. Site reliability engineers' primary focus is application availability, which requires extremely good observability, quicker incident response, and post-mortems of incidents so those incidents don't occur again. The NetOps team, who have traditionally managed and run application delivery controllers, or load balancers as some people call them, are responsible for network policy and compliance, and for controlling and managing the network. And DevSecOps is all about application and infrastructure security, where automation is becoming more and more important. So we just saw at least six stakeholders, each with unique needs, and that gives you a blueprint you can use to evaluate these four architectures. These are the seven attributes on which we'll be evaluating the four architectures, starting with application security, which is a critical requirement for DevOps and, to be honest, everyone in the organization. 
Observability is very critical for SREs; continuous deployment for DevOps teams as well as developers; scale and performance for every team member. The emergence of open-source tools has really accelerated the pace of innovation, so we'll be talking about how the different architectures enable open-source tool integration; many of these tools come from the CNCF. In the last two years, Istio has emerged as a North Star open-source control plane, and we'll be touching upon how each architecture fares for Istio unified control plane integration. And the most important one: what kind of IT skills are required for each of these architectures. So you will really look at four architectures, how they fare on each of these seven attributes, which are directly linked to your stakeholders, at the granularity of east-west and north-south traffic. Let's dive deep into our first architecture, two-tier ingress. The bottom line of the two-tier ingress architecture: it is the simplest and quickest to production. Whether you are a cloud-native novice or an expert, it is still the simplest and quickest to production. The reason is that there is much less learning for the platform and networking teams; they retain their existing knowledge and administrative control. Let's go a little bit deeper. In two-tier ingress, for north-south traffic, your ADCs or load balancers are split into two parts, and each does a different function. On top, you see the green ADC, which is managed by the networking team and is responsible for all security policies like WAF and SSL; it is also used for layer four load balancing of north-south traffic. You can use Citrix ADC or a similar product for that. Inside the dashed box, which is the Kubernetes or container environment, you see a blue load balancer, which is managed by the platform team and responsible for north-south layer seven load balancing. So for north-south, you have two load balancers. 
One is green, one is blue; each is managed by a different team and does a different function. And you can run authorization on either the green or the blue load balancer. For traffic between the microservices or Kubernetes nodes, this architecture uses the standard open-source kube-proxy load balancer, a very basic layer four load balancer whose capabilities are essentially limited to round-robin load balancing. It has the benefit of simplicity but lacks many features, as we are going to see pretty quickly. Why is this also the simplest and quickest to production? Because the same green ADC on top can also be used to do layer four to layer seven load balancing for your existing monolithic applications. Using the same green load balancer to load balance your monolithic applications as well as to do layer four load balancing for your cloud-native or microservices-based applications gives you a better return on investment and a faster transition as you move from monolithic applications to microservices-based applications. Let's look at the seven attributes we talked about before. Starting with application security: load balancers or application delivery controllers have been very effective at providing excellent protection for north-south traffic, and that benefit continues in this environment, but kube-proxy has a lot of limitations in terms of application security for communication between the microservices. It doesn't even offer basic network policy and segmentation. That means you have to add complexity with products like Project Calico or equivalent. For observability, every packet of north-south traffic passes through the green or blue ADCs, so they have the ability to see every packet and every transaction. That means you get excellent telemetry and observability, and you can build very insightful dashboards. 
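The round-robin distribution mentioned above is the heart of what kube-proxy does for a Service's endpoints. A minimal Python sketch of the idea (this shows the algorithm only, not kube-proxy's actual iptables/IPVS implementation):

```python
import itertools

class RoundRobinBalancer:
    """Cycle through a fixed endpoint list, one pick per request."""
    def __init__(self, endpoints: list[str]):
        self._cycle = itertools.cycle(endpoints)

    def pick(self) -> str:
        return next(self._cycle)

lb = RoundRobinBalancer(["10.1.0.2:8080", "10.1.0.3:8080"])
picks = [lb.pick() for _ in range(4)]
# Requests alternate evenly between the two endpoints:
# ['10.1.0.2:8080', '10.1.0.3:8080', '10.1.0.2:8080', '10.1.0.3:8080']
```

Notice that the balancer never looks at the request: there is no URL, header, or cookie awareness, which is exactly the layer four limitation discussed earlier.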
Kube-proxy was not originally designed for very detailed observability or telemetry, so that shows up as a limitation in this architecture. Many application development teams are moving toward continuous integration, continuous delivery, and continuous deployment, and want to use advanced traffic control techniques like progressive rollout, canary, automated canary analysis, the blue-green deployment model, and faster rollbacks. All of that functionality is available here for north-south, but kube-proxy has limitations integrating with the newer tools, so you lack that functionality for east-west. For scale and performance, north-south scalability is extremely good; you can also scale out by clustering multiple ADCs for north-south traffic. For east-west, kube-proxy has three different modes. Many customers have been using iptables mode, which I believe is also the default mode, but it lacks scalability. So Kubernetes has developed a newer mode called IP Virtual Server (IPVS), which gives you much higher scalability. You can find information about the scalability of the individual kube-proxy modes in the kube-proxy documentation, but our recommendation is: if you are using kube-proxy for east-west traffic, IPVS is the mode to go for. Of course, it is a little more complex to deploy than the standard iptables mode. For open-source tool integration, like Prometheus, Grafana, Spinnaker, EFK, and many others, you can integrate extremely well for north-south traffic, but kube-proxy doesn't have that many APIs and doesn't support many of the modern open-source tools, so you are limited there. If you are considering or planning to move to Istio in the future, this architecture can support Istio for north-south. Many ADCs, including those from Citrix, have the gateway functionality and Istio communication APIs included, but currently kube-proxy is not Istio-enabled. 
And the most important one is the IT skill set: this requires minimum training for the platform and networking teams, and both can move at their own speed. To summarize, this is still the simplest and quickest way to production for cloud-native experts and novices alike. It gives you excellent security, observability, continuous deployment, scale, and integration with open-source tools and technologies for north-south traffic, but very limited functionality for east-west traffic, which is the communication between the microservices, the containers, or the nodes. Next, we are going to look at a variant of two-tier ingress, which is unified ingress. The only difference is that you combine the two tiers of ingress for north-south traffic into a single ingress, represented here by the brown ADC. It is still managed by the platform team, but here the critical requirement is that the platform team has to be very, very network-savvy. There are a few benefits here. You reduce one ADC hop or tier, so it is simpler than the previous version, but the platform team has to be very network-savvy. You also reduce one hop of latency. Many of our customers are deploying and considering this for their internal applications, but it also makes them future-proof if they later want to deploy external or customer-facing applications that require WAF or SSL to be added. Features and functionality for security, observability, continuous deployment, scale, and open-source support are very similar or exactly the same as in the previous architecture, so we're not going to go into the details. The differences between the various architectures have been highlighted in italics on this slide. So the only difference from a benefits perspective from the previous slide is the IT skill set required: for this architecture to succeed, the platform or infrastructure teams have to be very, very network-savvy. 
This is simple, provided your platform teams are very network-savvy. It again gives you excellent north-south capabilities; for east-west, you are at par with the previous architecture. Let's move to the gold star or North Star architecture, which is the most advanced and most modern, and really addresses a lot of the functionality teams are looking for in north-south and east-west traffic: the service mesh. Undoubtedly, service mesh offers the best observability and the best security. If you're looking for very secure traffic among the microservices, this is the architecture to go for. If you're looking for very fine-grained traffic management among the microservices, this is the architecture to go for. But it is complex, too. The complexity comes from attaching mini ADCs or mini load balancers as a sidecar to every microservice pod. There are benefits and disadvantages to that. The advantage of having a sidecar attached to each pod is that it has excellent visibility, because all traffic flows through the sidecar, but it also adds complexity. Another benefit is that you can offload some common functions from each microservice to the sidecar, like retry logic when a certain microservice doesn't respond, circuit breakers, and encryption built into the sidecar for encrypted communication between the microservices. Let's look at it in more detail. By the way, the north-south is very similar to what we've seen in the first architecture. And for the sidecar, you have the choice to deploy Citrix CPX as a sidecar, or you can deploy Envoy or other similar products. For application security, it's pretty consistent with the previous two architectures for north-south, but you get excellent protection for east-west because of the sidecar. You can enforce policy, you can enforce egress or ingress policies, you can do rate limiting, and you can run mutual TLS between the pods or microservices. 
You can run encryption between them. And if you want to build super secure applications, whether in finance, defense, or many other high-security areas, this is the architecture to go for. For observability, having a sidecar attached to each pod, with visibility of every application traffic flow, gives tremendous visibility and observability, and you can build amazing actionable insights from it. It also offers excellent advanced traffic control for east-west traffic, like canary deployment, automated canary analysis, and progressive rollout, where you split the traffic between the newer and older versions of the code, for example 20% to the newer one and 80% to the old, and once you feel comfortable, you switch that ratio. So it gives tremendous continuous deployment capability through advanced traffic control, by integrating tools like Spinnaker for east-west traffic as well. East-west traffic here inherently benefits from the scalability of a distributed architecture, but there are a few things to watch out for. This architecture is only as good as your sidecar. If each sidecar adds six milliseconds of latency, you are adding two hops of latency, which is equivalent to 12 milliseconds; but if your sidecar, like Citrix CPX, takes only one millisecond, then your performance is far better. So here the quality of the sidecar matters tremendously. One more caveat to be aware of: if you are building a very large microservices-based application with many pods, these sidecars tend to add up in CPU and memory, because each requires defined memory and CPU. The total CPU and memory for hundreds of sidecars, or in some cases thousands, can add up pretty quickly. It has excellent open-source tool integration for north-south as well as east-west. 
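The 20/80 progressive-rollout split described above boils down to weighted random routing. A small Python sketch of the idea (the version names and weights are illustrative, not any mesh's configuration syntax):

```python
import random

def canary_route(weights: dict[str, int], rng: random.Random) -> str:
    """Pick a service version with probability proportional to its weight."""
    versions = list(weights)
    return rng.choices(versions, weights=[weights[v] for v in versions])[0]

rng = random.Random(7)                      # seeded for reproducibility
counts = {"stable": 0, "canary": 0}
for _ in range(10_000):
    counts[canary_route({"stable": 80, "canary": 20}, rng)] += 1
# Roughly 80% of requests land on "stable" and 20% on "canary";
# shifting the weights (50/50, then 0/100) completes the rollout.
```

In a real mesh the weights live in routing configuration and the proxy applies them per request; automated canary analysis then watches error and latency metrics to decide whether to keep shifting the weights or roll back.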
It also integrates very well with Istio APIs for east-west traffic, but please be aware that if you add the Istio control plane for authorization, each packet has to go to the Istio Mixer, which may add one more hop. Istio Mixer is going through a modernization, so watch this space for how the next version of Mixer evolves and how scalable it becomes. Last but not least is the IT skills requirement: as you see, it is very complex, and it is a steep learning curve for the platform and networking teams. To summarize, if you need the best observability, the best security, and the best integration with open-source tools, including those from the CNCF, for east-west traffic, this is the architecture to go for, but please be aware of the complexity. But what if you really want all the service-mesh-like benefits and features, but much simpler? That is the fourth architecture we are talking about here: Service Mesh Lite. Many of our customers who are not ready for service mesh yet but want all the benefits of service mesh, like secure traffic between microservices, fine-grained traffic management, and observability, go for this architecture. In this architecture, all the communication between pods or microservices for east-west traffic passes through a centralized application delivery controller in a Kubernetes form factor. It could be a CPX or it could be Envoy, and having a centralized load balancer gives you policy control, security, and fine-grained traffic management, and you can implement policy and observability much more easily than in the previous architecture. Of course, you have to select the right memory size and CPU resources for the purple ADC shown here, and your vendor, or Citrix, will be able to help you with that. In terms of features and functionality, it's very similar to the service mesh. The only difference is that if you want to run encryption between the microservices, you cannot offload it to a sidecar; that has to be done in the application itself. 
Otherwise, every other policy or security function you saw in the pure service mesh you can still implement here. Observability and continuous deployment are exactly the same as the previous architecture from a benefits perspective. For scale and performance, it is still very highly scalable, with one additional benefit versus the pure service mesh: you reduce one hop, because instead of two sidecars between two pods, you have one single centralized load balancer between them. The benefits of open-source tool integration and Istio integration are pretty much the same, but the IT skill set is the game changer here. It requires minimal training for the platform and networking teams; the training or learning requirements are pretty similar to the first architecture, two-tier ingress. Another added benefit: if you already have a two-tier ingress architecture, you can very easily transition it to this Service Mesh Lite architecture and take advantage of better security, better observability, and better continuous deployment for east-west traffic. Or, if you are a cloud-native novice and want to get your feet wet with new microservices-based applications, you can start with two-tier ingress and have an easy roadmap to transition to Service Mesh Lite, taking advantage of enhanced application security, observability, and many other features for east-west traffic as well. This gives you a good framework to consider and choose the right architecture for you. The question is: what will be your architecture choice? There are no right and wrong answers. It purely depends on your team's skill set and what trade-offs you are willing to make between benefits and complexity. If you're looking for the simplest, quickest way to production, two-tier ingress is the preferred choice, but you lack the advanced capabilities for east-west traffic. 
If you have network-savvy teams, unified ingress seems like a great choice, but you still lack east-west intelligence. If you're looking for the best observability and the best security, the North Star, service mesh, is the architecture of choice, but it's complex. And if you want very similar features to service mesh but much simpler, Service Mesh Lite is the architecture. Now you are armed with four architectures, evaluation criteria across seven attributes, and details about how each attribute and architecture fares for north-south as well as east-west traffic. With that, I'm going to request Miko Desini, my colleague, to give a quick glimpse of the Citrix cloud native solutions and their principles. Thank you, Pankaj. This is Miko Desini; I'm in product management at Citrix for ADC. So let me talk to you about how we've approached the cloud native marketplace for our customers. What we've really done is look at the customer requirements and the different stakeholders in these teams, and what we find is that the goal is to insert cloud native architectures in a fairly efficient manner, and that's how we came up with this. First, architecture flexibility: essentially, we want to give our customers a way to leverage their existing NetScaler footprint, their Citrix ADC footprint, and tie that into a Kubernetes infrastructure in a very seamless way. To do that, we've developed four different modes of potential deployments and made them available to our customers, which gives them an option to start in one place and then move on to another environment as they grow and expand their architectures or change their topologies. Second, we looked at what our stakeholders really need. 
There are platform team folks, app developer folks, DevOps teams, and the ADC team. We've discovered that, first, it's important to insert into different types of Kubernetes platforms, so one thing we've done is make sure that we can tie into different Kubernetes distributions, enterprise-grade Kubernetes, and managed Kubernetes services, and tie into different CNIs, so that we can plug into these environments and make it easy for existing NetScalers and ADCs to plumb into these Kubernetes platforms. As well, for the DevOps teams and developers who are new to ADCs, we expose the Citrix ADC features as Kubernetes-friendly APIs, so they can plug them in as part of their YAML configurations when they deploy their applications, for example. The third bit was performance and scale. We've seen this time and time again: when new architectures are deployed, there's a need to expose the architecture to outside traffic and then to scale it over time to production grade, so what we've done is make sure that customers can do that in an efficient way. They can leverage the scaling capability of their footprint of Citrix ADCs, using existing clustering technology, for example, which is a scale-out technology, and be able to quickly adapt to the changing microservices footprint and app topology as containers come and go and new containers are spun up and auto-scaled, so that the ingress devices can change configurations on the fly and adapt to these new app architectures. 
In terms of API security, ADCs are used heavily for securing the traffic into a perimeter, and there's a variety of tooling around that: part of it is WAF, part of it is SSL. Making that available to our customers inside the Kubernetes cluster is a key point here: being able to insert such policies so we can protect the APIs that are being exposed by these clusters, and providing things like rate limiting, so that if particular thresholds are breached, rate limiting is applied. We also have very secure SSL profiles that have received A-plus grades from top companies that do this grading, for example, and we make those profiles available for traffic inside the cluster as well as outside the cluster; in this case, I'm referring to the Kubernetes cluster. And finally, being able to show what's happening in the cluster, what's happening to the apps, and what's happening to the traffic, so that our customers can easily troubleshoot problems. Since the ADC is in the path of the traffic and we collect a very rich set of data, we can expose this in a service graph as part of our ADM, which I'll explain later, that then allows customers to view their microservices, look at traffic flows, look at layer 7 data such as HTTP errors, latency, and bandwidth, and identify hotspots. So next slide, please, Pankaj. I mentioned tooling, so what we've done is look at the tools our customers are using, and we've identified that there are really a few sets. One is monitoring and logging, and being able to plug our rich data into those tools. 
So these are: tying into Prometheus and Grafana; tying into the EFK stack, as I mentioned, Elasticsearch, Fluentd, Fluent Bit, Kibana; tying into distributed tracing with Zipkin as a tracer; plugging into multiple types of CNIs like Calico, Canal, and Flannel, for example; supporting things like Helm; and of course supporting gRPC, for very modern app architectures with HTTP and gRPC support. We're also seeing the need to provide value to the CI/CD pipeline, so we've done integrations with Spinnaker, where we can provide Spinnaker with our rich set of data to tell it that an app is working well and therefore to continue a deployment, or that an app is not working well because it has a bunch of errors and therefore to roll back a deployment. That's all available, along with plugging into the primary enterprise-grade Kubernetes distributions as well as cloud-based Kubernetes services like I mentioned here: GKE, EKS, Azure Kubernetes Service, and OpenShift, for example. And now we're working on Istio support, because we believe that's an important future migration. Next slide, please. 
So the cornerstone of our implementation for cloud native starts with the ADCs: the load balancers and the proxies. With our approach, we have multiple form factors that we make available and make Kubernetes-aware, and we tie them into new workflows, for example the CI/CD tooling that developers require, so that you can leverage these load balancers in a Kubernetes environment. Let's start with the proxies. As I mentioned, we have multiple types of proxies in different form factors that offer layer 4 to layer 7 functionality. There are virtual load balancers like the VPX, available both on-prem and in the cloud. We have container-based load balancers like the CPX, available as a sidecar and as an ingress device. We have recently announced a bare-metal NetScaler called the BLX, for bare-metal use cases and more modern deployments. And we have a rich set of hardware load balancers, the MPX; many customers have actually used the MPX for unified ingress deployments. And we have a multi-tenant version called the SDX, where you deploy multiple virtual load balancers onto a hardware platform; that's also being deployed by our customers, leveraging their existing footprint as ingress devices for their clusters. What ties this all together is that we have a very common code base, so a function available in one load balancer form factor is available across all form factors. That simplifies tooling: it allows our customers to use their existing operational tooling to insert Kubernetes into their environment, whether that's security profiles, monitoring, or layer 7 policies, for example. And we've developed a platform called Citrix ADM, which is both a cloud service and an on-prem deployment, that ties this all together. ADM is a product of products: it's a control plane, it provides monitoring, analytics, and observability, and it controls existing NetScaler 
footprints and modern net scaler footprints and modern app architectures next slide I think that's far visit and I think we're on time and team you will we published a lot of our code and trial versions of our products at github.com if you've not checked out yet github.com slash the tricks I strongly encourage you to point your browsers to github we have an excellent material about you can try CPX you will see the best practices how to integrate Prometheus and many other open source tools which also have a lot of detail and sample configuration for these architectures and if you want to learn more about Citrix cloud native solutions you can point browsers to our website and if you need a much more detailed conversations or a discussion please reach out to Miko or myself we have emails there in the first slide which will be available later and we can do a very detailed deep dive discussion with your teams back to you Karen awesome thanks Miko for a great presentation we now have a few minutes for questions if you have questions you'd like to ask please drop them in the Q&A button at the bottom of the screen and we'll get to as many as we have time for so right now we have two related questions the first one is someone's asking about so they've been using ambassador for north south traffic and Istio for east west traffic well it does involve oh sorry I got answered it says another question is in the case of mesh light how do you ensure that all east west traffic goes through the central load balancer and not P2P is it by using a combination of RBACS, Calico network policy and console service discovery can you provide some more information people would like to address that sure so when you deploy the app you point the app so for example the second tier app when you deploy the app you point it back to the central load balancer the load balancer in this case the ingus load balancer is a service for other traffic so that will push it to the central load balancer and then 
this would be consistent with Calico policies, making sure that you don't infringe upon any other Calico network policies you may have in place. I actually have a description of this on GitHub; I can point you to our Citrix GitHub page, which shows you how this is deployed.

Great, there's another question, asking what type of vendor sidecars are out there in production. That's a good question. Sidecars in this case usually refer to service mesh. I can't say that they're in production, but I have customers talking to us about service mesh deployments, potentially for 2020, and clearly they're looking at Istio and Envoy as the model. I've also seen some of our customers using, for example, Consul plus Envoy; actually, much of it is Envoy, but also Consul plus another sidecar. This is an evolving space. Just to summarize, there's an open source option in Envoy, but if you're looking for something more vendor-supported, Citrix does offer CPX as a sidecar, and if you want to do a trial or get more information, reach out to us privately and we're happy to share more insights, set up the trial, or answer more questions on that.

Anyone have any more questions? Okay, there's another question: Envoy has a port conflict issue on TCP; we found it really difficult to ensure unique TCP ports for our Kubernetes services; do you have any ideas? Actually, Krutik, could you email me offline? I think you have several technical questions, and I'd like to be able to talk to you directly. But yes, we have ideas. First of all, CPX has a full TCP stack, so we have a lot of control over TCP and UDP, and of course HTTPS and HTTP/2 as well. One strength of the Citrix stack is its multi-protocol support, so I have better ways of controlling TCP ports, and I'd like to talk to you in more detail about how we can do that for you. That is actually one use case for NetScaler, for Citrix ADCs: we insert into applications, supporting legacy apps, for example, because of our ability to control these TCP ports and because of our embedded TCP stack.

So just to summarize, Citrix ADCs for Kubernetes environments, including sidecars, support many legacy protocols that are not available in other offerings in the marketplace, and we can walk you through or share more insights on that; that's a huge differentiation. We also see a lot of customers who have very similar requirements to support protocols beyond what parallel offerings in the marketplace support, and that comes from supporting various legacy and emerging protocols over many years with Citrix ADC, which was previously known as NetScaler. If you want to reach out to me, it's pankaj.gupta@citrix.com, and for Miko it's miko.desini@citrix.com.

Great, thanks, Pankaj and Miko, for a great presentation. It looks like that's all the time we have for now. Thank you for joining us; the webinar recording and slides will be online later today, and we're looking forward to seeing you all at a future CNCF webinar. Have a great day. Thank you. Thank you.