My name is Pranav Anamit and I'm joined by Jakob to talk about how users can deliver their modern apps anywhere, both publicly and privately. The most recent trend we've seen is developers adopting new approaches in order to deliver their services faster. They've moved from the private data center to one public cloud, the hybrid cloud. But now users and developers are adopting multiple public clouds as well as the edge, so they can deliver their apps and services faster by leveraging the best of each cloud: maybe ML from Google, some service from AWS, some service from Azure, taking the best of each so they can build new services faster. Similarly, in the previous decade users took legacy apps and virtualized them into VMs; now developers are adopting new application architectures such as microservices, containers and even serverless in order to develop and deliver new services faster. And the third big trend was the move to agile. That has now become continuous: continuous integration, continuous deployment, continuous delivery, as well as continuous verification of the application, a constant, iterative software development process. All of this is being done by developers in order to deliver their services faster. However, delivering an app is not just developing the app. Actually delivering the app so that it can be consumed by a user involves multiple point products and multiple teams, and these teams communicate using support tickets. Because there are so many point products, nearly seven of them, there are multiple iterations you need to go through to deliver the app to the user, and each of those iterations takes days to weeks to get right.
Meanwhile, your app is not static; it is continuously changing. Because of the whole continuous integration, delivery, deployment and verification cycle, the APIs keep changing, so this whole process keeps iterating and takes longer as a result. What is the end impact? The developer is developing the app faster; they really are going fast. But because they're going fast while the current tools are siloed point products, the individual teams, both DevOps and network ops (network and IT, also called infrastructure and operations), are getting overwhelmed and burdened by the constant changes in the application, and this negates all the efficiency the developer gained by adopting a modern app approach. Let's look at why it takes so long to deliver an app, and how long it actually takes. First, to deliver the app you obviously need some form of app management, such as your own Kubernetes cluster or a virtual machine. If you have a Kubernetes cluster, you need an ingress controller so that you can direct traffic into the cluster from outside. Once you have that, you need an application firewall, an app firewall, which protects the traffic coming into the application. Next, you need to expose the app publicly, and you want to scale both your application firewall and your application, so you need a load balancer in front to scale horizontally. The load balancer is also where you terminate the SSL connection and decrypt the traffic. And in front of the load balancer, you want protection; you don't want it overwhelmed by malicious traffic.
So you add a network firewall to block malicious traffic coming into the load balancer. In front of that, you need a router, because the traffic has to reach you in the first place: you need a way to advertise the routes, the app reachability, to the outside world. Everything so far sits in your data center. In front of it all, you also need DDoS protection, because you don't want to saturate the link between the user and your data center. You want DDoS mitigation that sits away from the data center, blocking malicious traffic and only letting good traffic through, ensuring that link doesn't get saturated. So you need all of these components before the app you developed can be exposed to the internet and users can talk to it. Now let's look at the time this takes. Creating a Kubernetes cluster and deploying an ingress controller has become pretty fast, mainly because of infrastructure as code: there are recipes and charts available, so each takes about two to three days. But when you get into the other components, the app delivery components, it slows down. An app firewall takes time to tune in order to reduce false positives: figuring out what needs to be blocked, whether you enable only the OWASP rules or something more, which rules to turn on. Then the load balancer: programming the certificate, provisioning it, creating a virtual IP, all of that takes time as well, about one to two weeks. Same with the network firewall and the router.
Each of these takes about one to two weeks. And DDoS takes longer, because you're working with a component outside your data center that isn't modern in the sense of click-to-configure or configuration as code; it takes about two to three weeks. So the total time to deliver the app is about seven to 12 weeks, and in this case all we're doing is delivering the API publicly. Now let's look at how long it takes to deliver the app privately, meaning I have an app or API and I only want my partner app, sitting in another cloud, or my business partner to access it privately from their own data center. Say I have an app in my private data center and I want it to talk to another app sitting in the public cloud, and only to that app; I don't want the app or API exposed publicly on the internet. To do that, all the other components remain the same. In addition, you need a private connectivity link between your private data center and your public cloud, plus a VPN so the two can be on the same network. The private link could be, for example, Direct Connect or ExpressRoute, essentially any private layer 2 connectivity between the two. You could do a VPN over the internet, or a private link with a VPN on top of it; there are multiple architectures. But all of this takes an additional three to four weeks, so delivering an app or API privately takes about nine to 15 weeks. If you look at this, why does it take so long to deliver an app, and is this right for a modern app?
The developer is sitting there going: I'm gaining efficiency, I'm going faster, I'm adopting new architectures, but you're telling me all of that efficiency is negated by the existing mechanisms of delivering an app. So what is the right approach for developers, DevOps, and the network, IT, and infrastructure and operations teams when developers are building modern apps? We call it a distributed cloud approach, and the key differences between a distributed cloud approach and the current approach are what I'm describing on this slide. First, when you are delivering modern apps you cannot work with multiple point products. As I said, application creation is a continuous process; everyone is trying to move faster. If you have multiple point products and silos where you have to go through them one after another using support tickets (please issue me an IP address; please open this port; please configure a network firewall rule for this IP address; oh, by the way, my app died, came back up, and everything changed, please issue a new IP address, and so forth), it will not work for a modern app. You need an integrated stack of these application delivery components that can move fast in response to changes in the apps and APIs, because these are continuously changing. Second, the app no longer lives in just one location, one data center. It lives in multiple locations: a data center, a public cloud, a network cloud, maybe the edge. You need the same infrastructure components everywhere you want to distribute your application, so the number of places where you manage these infrastructure components is increasing. It's not just one; it's many.
Therefore an appliance-based model, where you manage each appliance one by one, will no longer scale. Now you're talking about deploying across multiple regions, multiple availability zones, multiple clouds, multiple places in the network (cloud, network and edge), and in the case of edge, many thousands of edge sites. You cannot scale that with an appliance-based model. You need a distributed fleet operations model, where you manage the entire thing as a fleet: you define your intent once, say this is how it should work, and the system takes care of distributing the configuration to all the locations where the application exists. And the critical companion to that is control-plane-based management. You need a control plane to distribute state and reachability; you cannot rely on a management-plane approach of going to every appliance one by one. Third, once you've configured all this, how do you manage it? With seven different products, you have seven different panes of glass where you need to go and look at seven different things. You cannot troubleshoot a problem fast because you have to visit all seven. Even if you take all the metrics and logs and put them in a centralized system, there still isn't a way to thread a request from one tool to the next. How do you uniquely trace a single request from the router all the way to your app? What is the key on which you can join all these different tables? The fourth thing is lifecycle management. The current tools are operationally complex because they're all appliance-based. That worked fine when you had one or two locations.
When you're talking about many locations, it's very hard to do lifecycle management manually. You need a fully managed, SaaS-based operations approach with full lifecycle management built in; that's what you should look for. Lastly, if you want to scale to many locations, you cannot depend on a heavyweight form factor, hardware or a very thick machine, in order to deploy anywhere. You need something deployable across many thousands of clusters. That is what developers, DevOps, network and infrastructure teams, and IT teams should look for when considering app delivery for their modern applications. Now that I've shown you the key tenets of a distributed cloud approach, let's look at how it would work. How would you deliver a modern app publicly using this approach? Earlier I showed a chain of components, such as the ingress controller and the WAF. The first thing you would do is replace those components with a distributed cloud architecture, or build a new application using the distributed cloud approach; and whether you get rid of your existing infrastructure components or keep them, the distributed cloud approach can work over the top of them. Second, the distributed cloud approach should have its own network where these applications can be advertised publicly, which makes this easier. You could have deployed your app anywhere: in an existing Kubernetes cluster, in a VM, and so forth. The approach should discover which apps have already been deployed. Once it discovers them, it should have a control plane. Again, this control plane is critical.
This control plane advertises application reachability. It's not just describing layer 3 routing; it's describing application-layer reachability: to reach API one, not IP address one, but API one, here is how you reach that API. It also distributes app health. Once you have the control plane, the next thing is to advertise the app. In this case we're advertising the app publicly, so the distributed cloud approach should let the user choose where to advertise it; here, for example, on a public PoP using an anycast virtual IP, which directs traffic from the users to that point. Fourth, a distributed cloud approach should give you the ability to distribute app delivery functions closer to where the data is generated or where the user is. For example, you should be able to distribute SSL offload closer to the user, which improves application performance. You should also be able to distribute web application firewall functions closer to the user, so you're protecting your applications far away from your data center: all the malicious traffic is blocked at the edge rather than being backhauled to your data center and blocked there. And lastly, to keep performance good, you should have a persistent connection from the front end all the way to your origin, so you're not doing multiple TLS setups for your application.
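From a client's point of view, the edge SSL offload and persistent origin connections just described can be sanity-checked with standard tools. This is only an illustrative sketch; `app.example.com` is a placeholder for an app advertised on an anycast virtual IP, not a name from the talk.

```shell
# Probe TLS termination and session reuse toward an anycast-advertised app.
# -reconnect makes s_client reconnect several times and report whether the
# TLS session was reused (resumed sessions mean fewer full handshakes).
openssl s_client -connect app.example.com:443 -servername app.example.com \
  -reconnect < /dev/null 2>/dev/null | grep -E '^(New|Reused)'

# Time the TLS handshake vs. time-to-first-byte. With SSL offload at a
# nearby PoP, time_appconnect stays low even when the origin is far away.
curl -so /dev/null \
  -w 'tcp: %{time_connect}s  tls: %{time_appconnect}s  ttfb: %{time_starttransfer}s\n' \
  https://app.example.com/
```

Running the same two commands from vantage points in different regions is a quick way to confirm anycast behavior: each client should land on a different PoP while the certificate presented stays the same.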
That is how you would deliver an app publicly using a distributed cloud approach. Now let's take the same example but deliver the app privately, from one site to another, using the distributed cloud approach. Again, keep your existing app delivery components as they are; don't get rid of them. You deploy a distributed cloud application gateway at each site. First, this application gateway should discover what apps are configured. Second, you need private connectivity: look for solutions that incorporate private connectivity to all the different components, both your private cloud and your public cloud, with private links to each, so you don't have to set it up yourself; the solution comes with that. All you're doing here is discovering your app, and in your control plane you advertise the app's reachability to the site where you want the app to be consumed. In this case you want it consumed from this one site, so you advertise the app privately to that site only. This is done using policy: the app is now reachable only from that site, not from the internet, so traffic coming in from anywhere else will not, and should not, be allowed by any solution offering this. And you can again choose to distribute your app delivery functions closer to where the traffic is generated; in this case you can distribute SSL termination or the web application firewall at the remote site, so that malicious traffic is blocked there and not backhauled. Lastly, because this is a completely private connection that doesn't go over the internet, it reduces the risk profile of the traffic.
This is a completely private layer 2 connection, so you're not exposed to the vagaries of the internet, such as congestion and so forth. Now let's look at the benefit of this distributed cloud approach and its impact on the different teams. With the distributed cloud approach, and Jakob will show you how long it takes, we go from the order of weeks to the order of minutes and hours, a significant improvement in the time taken. For DevOps this means they can deliver new services faster and actually keep up with the developers. Instead of app delivery taking, say, three or four months, which means you can only ship three or four services in a given year, you can now deliver a new service almost every week or two. So you can go from three services to 11, which means you're bringing more top-line revenue to the organization. The second benefit: with locations such as private cloud, public cloud, and so forth, you normally need different teams to manage each location. The customer who went through this had a team of 25, roughly 15 QA and 10 DevOps, to manage those different locations and point products. By using just one product across all locations, they could significantly reduce that operational team of about 25, which saved them a significant amount of money as well. The third component is troubleshooting, the ongoing day-two side of operations: because you had to go to multiple portals, it took about five to six hours to troubleshoot every incident.
By using a single pane of glass and a single distributed cloud approach, they reduced the troubleshooting time from about 300 minutes down to 15 minutes per incident. That saves a significant amount of time, so they can focus less on troubleshooting problems and more on delivering new services and the top line, on high-value items rather than infrastructure-level components that don't add business value to the company. So that is the business impact of a distributed cloud approach compared with the existing siloed, multiple-point-product approach. I've spoken a lot about this in slides, so I'll hand it over to Jakob so he can show you the distributed cloud approach in practice and how easy it is to configure. Okay, let me share my screen. Can you hear me and see the slide? Yes. Okay, so I've prepared a short demonstration of what Pranav explained; I want to show you some real stuff, how hard or how easy it is to do, and how it looks. I decided to pick a simple app. I didn't want a complex microservice for this demonstration, so I took a common application, WordPress and MySQL. In my demo setup I have a vanilla Kubernetes cluster running in a virtual machine in a private data center, using Calico as the CNI plugin. So it's a really vanilla topology with two pods: one WordPress, one MySQL.
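For reference, the vanilla starting point described above can be reproduced with two Deployments and two Services along these lines. This is a hedged sketch, not the exact manifests from the demo: the image tags, password, and NodePort choice are illustrative.

```shell
# WordPress + MySQL in the default namespace of a plain Kubernetes cluster.
kubectl create secret generic mysql-pass --from-literal=password='changeme'

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  selector: {matchLabels: {app: mysql}}
  template:
    metadata: {labels: {app: mysql}}
    spec:
      containers:
      - name: mysql
        image: mysql:5.7
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom: {secretKeyRef: {name: mysql-pass, key: password}}
---
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  selector: {app: mysql}
  ports: [{port: 3306}]
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
spec:
  selector: {matchLabels: {app: wordpress}}
  template:
    metadata: {labels: {app: wordpress}}
    spec:
      containers:
      - name: wordpress
        image: wordpress:5.7-apache
        env:
        - name: WORDPRESS_DB_HOST
          value: mysql
        - name: WORDPRESS_DB_PASSWORD
          valueFrom: {secretKeyRef: {name: mysql-pass, key: password}}
---
apiVersion: v1
kind: Service
metadata:
  name: wordpress
spec:
  type: NodePort      # pods unreachable from outside (VXLAN), so expose via NodePort
  selector: {app: wordpress}
  ports: [{port: 80}]
EOF

kubectl get svc       # should list the wordpress service that gets discovered later
```

The NodePort Service matches the "pods are isolated" situation discussed later in the demo: with VXLAN encapsulation the pods themselves aren't directly routable, so traffic enters via a node port.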
Doing this in the normal world, as Pranav explained, and I have plenty of experience with it, usually the right-hand part is handled by DevOps people, people like me, and it's pretty doable and can be pretty fast: you set up a VM with Kubernetes, deploy the NGINX ingress controller, use cert-manager with Let's Encrypt to get certificates automatically, and so on. My issue was always the left side: if you are in a big company you need to create a support ticket for the network team, and if you're lucky it takes three days, but it can also take two weeks to set up the firewall and everything there, and it's not very flexible and is hard to debug; I'll show that too. To map out what you will see in my demo: I basically won't need the left side. I will focus here, with my vanilla WordPress, because most people today already have Kubernetes running; they want DDoS protection for their site, they want a web application firewall, and they want central visibility through one console. That is my goal in this demo. So here I have a private data center, this is the Volterra global backbone with a few global PoPs and distributed anycast, where I am going to advertise the site, and here is the end user. As a first step I will discover the app. In another virtual machine I decided to run VoltMesh as a VM; you can run it as a pod in Kubernetes as well, but in this case it's a separate virtual machine. I will load the kubeconfig and it will auto-discover all the services in this cluster. Then, because I have a distributed control plane, I immediately have reachability anywhere and I can expose the application locally or globally, in a different data center, very easily; you will see it, and I'm getting all of this for free. Next, I will advertise it on anycast so it will be accessible
from the closest routable point, and then I will configure SSL offloading and the web application firewall. We could also do rate limiting or service policies for specific ASNs; there are many combinations of what we can do. Then the user will access this and end up at the WordPress in my private DC. So now let's go ahead and start. This is my terminal, and I really have just a single-node VM. If we take a look, I am using Calico as the CNI plugin, really the most popular plugin people use, so nothing special, a very simple setup. In the default namespace I deployed WordPress and MySQL, and I created a Service for the WordPress. Now let's take a look at how we configure VoltMesh through the console. This is the Volterra console, which brings the central view. I am based in Prague, so I deployed the VoltMesh virtual machine in the DC Prague site, and I have visibility: I can see the health and basic information about this node, and I can go inside and see how much traffic is going through it. That's the system view: I can see the number of nodes (it's a single-node VM), the IP address, and utilization metrics for CPU and memory. In our console we call this a site, and my site is DC Prague. To start with the first step, I need to discover the WordPress service and all the other services that are there. I go to the service discovery section, and in service discovery I create a new config for DC Prague. I need to say where I want to discover: we have the concept of a virtual site if you would like to discover from multiple sites with a single config, or I can choose one site. In this case I choose my DC Prague, and I want to discover on the local network, since there is local connectivity between the VoltMesh node and the Kubernetes cluster. Today we support Consul and Kubernetes; I'm using Kubernetes, so I keep Kubernetes and hit configure, and
now it is asking me to provide a kubeconfig. I can either give it as a clear secret in plain text, or we have a mechanism to encrypt the secret so that it can only be decrypted in the target location and cannot be decrypted centrally. In this case I'll go with a clear secret. This is my kubeconfig, with the private IP where I am sitting, so I will just copy it and paste it here. One more option I can choose is whether the pods are isolated or reachable. In the case of Calico, since VoltMesh supports BGP peering, I could even peer BGP between the Calico route reflectors and my VoltMesh node, but since I did a really basic Calico deployment, it is using VXLAN encapsulation and I cannot reach the pods, so I keep "pods are isolated", which means it will use a NodePort for the service. I could configure some other things like publishing web services, but that's not needed for this demo, so I just save it. If I refresh, you can see it discovered four services, and one of them is wordpress.default, matching exactly what we saw in my terminal. So now I have discovered the endpoint, and it's time to create the HTTP load balancer. Before I create the HTTP load balancer, I have to configure the origin pool, because WordPress requires a special route config to be able to work. The origin pool basically says where my WordPress is running, where it is served. So I create an origin pool named wordpress: it is a Kubernetes service, wordpress.default, the site is Prague, I am discovering it on the outside network, and the port is 8080. I could create health checks and TLS settings; in this case it's insecure on the Kubernetes side, the service is exposed on port 80, so I will keep it as is. Now, WordPress requires a route host rewrite to be able to work, and this is really WordPress-specific, so before this demo I created the route object, and now I just add the origin pool to it, which should be easy to do. Okay, and it's actually already there, so we don't need to do anything. Now let's create the HTTP load balancer. I name it wordpress, and for the domain I decided to use delegated domains. What that means: in the system namespace we have the ability to delegate a domain. This is my domain, which I own; you can delegate it to us, and then we automatically provision Let's Encrypt certificates and manage the NS records. It's an optional thing, so before the demo I created the NS record and delegated my domain; now I can create any HTTP load balancer and automatically get certificates from Let's Encrypt and everything. The way it is configured: I can say here that I want automatic certificates, and then it uses the delegated domain, or I can bring my own certificate and upload the key and cert. I will use automatic, and the domain will be wordpress on my delegated domain. Now I want HTTP-to-HTTPS redirect, and I will use the route config which we prepared, so for the routes I just choose that object. Almost the last point is where I want to advertise the configuration. In this case we want to advertise it on the internet so it will be globally available, but there are other options: you can advertise it locally in the cluster or in a specific location. Then we also want to configure the WAF. To configure the WAF I have two options: either I know exactly what I want, I am someone who understands web application firewalls very well, so I define my own rules and reference them, or I use the WAF intent. In this
case, I want to show you the WAF intent. The way I do it is I create a new WAF called wordpress, and I can simply choose PHP and WordPress, and it automatically pre-configures the right application profile for me. And this is important: I can choose whether to block an attack when it is detected or just raise an alert. I will demonstrate an attack, so let's choose block, and you will see how it blocks my attack when we try it from the laptop. So this is the WAF configuration, and that's it; I think we can just save it. What happens now is that it starts the DNS domain verification, and in a few minutes our load balancer should be ready and we can actually try it. So the domain challenge is verified, and in a few seconds the certificate should be valid; the vhost is ready, this is my domain, and we can actually try it. Yes, and this is my WordPress site. You see how quick that was; I have a valid certificate, and I even have the redirect, it should redirect me. You can all try it; let me find the chat window and send the link to everyone, so you can open the site and we will see some traffic. Now I will also start generating some traffic myself, so it will be visible there that someone is coming to the site. Okay, I'm generating traffic and the site is running. What I've just shown you is how hard, or rather how easy, the configuration is: it really took about 15 minutes, including the explanation, and we have easily configured WordPress running somewhere in a private data center, without a direct public IP or anything; it's just sitting somewhere on an internet connection, and that's all that is needed. Now we can go back and take a look at the troubleshooting part and the visibility. This is important. (Sorry, that was my Google Home assistant; it went crazy and started playing some music.) Okay, so in the normal world it is very hard to troubleshoot, and I have plenty of experience with this: even getting the latency between all these places, or the logs, means you need to build a solid monitoring and logging stack where you send all those logs so that you have good visibility, and it takes time to build. Troubleshooting hard problems can take even three hours to figure out where the actual problem is: is it on the firewall, because some IP address is blocked there, or is it in the WAF? Tuning WAF rules is sometimes a complicated exercise; finding the right rules can take days or even weeks, especially when you have a custom application. So all of this is very hard to do. And I'm not saying that with this it's easy; it is also hard, but you have everything in one central place. It's not that you no longer need troubleshooting and this just magically works always, no; but you have a single page where you can see all the logs, all the metrics, latency between locations; all this information is there. You can do integrations with notifications to your external systems, and you can send logs to your Splunk or Datadog; we have all these integrations, and it is much faster to debug such a problem. To show you what I mean, let's do a quick overview, so you see the data we are getting and how the visibility part looks. You saw the config part; now we can take a look at the application traffic. We can see that traffic is now flowing from the public network; it looks like people started opening the site, because it's coming from Singapore, San Jose, London, Amsterdam and Paris, all our PoPs, and then it goes to DC Prague. So we can see it coming from literally all these places right now. That's the application traffic visibility.
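When tuning the WAF rules mentioned above, it also helps to reproduce a block by hand and correlate it with the security events in the console. A minimal sketch; the hostname is a placeholder for the demo's delegated domain, and the exact response codes depend on the WAF mode:

```shell
HOST=https://wordpress.example.com   # placeholder, not the domain from the demo

# A benign request should pass, while an obvious SQL-injection probe should be
# rejected (typically HTTP 403) when the WAF is in blocking mode.
curl -s -o /dev/null -w 'benign:    %{http_code}\n' "$HOST/"
curl -s -o /dev/null -w 'injection: %{http_code}\n' "$HOST/?id=1%27%20OR%20%271%27=%271"
```

The blocked request should then appear as a security event carrying the rule ID that fired, which is exactly the identifier you would reference if the rule turns out to be a false positive.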
Now I'm going to go inside the HTTP load balancer itself, onto the WordPress application, and what we see here is pretty nice: the average latency between the client and our global backbone load balancer is 25 milliseconds, then it takes 140 milliseconds to reach the actual virtual machine running in the DC, and then less than one millisecond to go to the application; that is the total upstream path. We can see that we have 14 unique visitors in total in the last five minutes, which operating systems they are using, and the top clients; the top one is Paris, which is probably my traffic generator reaching Paris. We also get the browser type, TLS versions, and top ASNs, so I have all this information.

Now I can read the metrics. I can see the request rate, which right now is one request per second over the last hour, because we just launched, and I can see the traffic; the majority of it goes to Paris and San Jose. The interesting part is the requests. Here I'm getting a sampled request stream, and I can filter it by response code, by country, by ASN, by top source IP; these are my fingerprints of the traffic. Let's take a look at some US requests that are coming in. Here is one of the client IPs, and we can see the latency for this particular client, the duration, the type of OS, and also the country, which instance it reached, the ASN, and the city. This is the basic information I get from every request. Very easily I can filter out the 403s and keep just the 200s, and navigate through all the requests that are coming in.

We have more features, like API endpoint discovery with machine learning, but that would have to run for more than an hour to learn something and display it. So now let's take a look at the application firewall and the security events. What I'm going to do is generate an attack. WordPress has an open-source scanner called WPScan, and you can run it
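The code and country filtering just shown can be sketched as a simple predicate over the sampled requests (the field names and sample values are assumptions, not the product's schema):

```python
# Sketch of request filtering over a sampled stream: keep only the 200s,
# or slice by country/ASN, the way the UI filters described above work.
samples = [
    {"code": 200, "country": "US", "asn": 7922, "client_ip": "203.0.113.7"},
    {"code": 403, "country": "FR", "asn": 3215, "client_ip": "198.51.100.2"},
    {"code": 200, "country": "FR", "asn": 3215, "client_ip": "198.51.100.9"},
]

def filter_requests(samples, **criteria):
    """Return samples matching every given field, e.g. code=200, country='FR'."""
    return [s for s in samples
            if all(s.get(k) == v for k, v in criteria.items())]

print(len(filter_requests(samples, code=200)))                # 2
print(len(filter_requests(samples, code=200, country="FR")))  # 1
```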
and it can tell you what version is running and what is exposed, and potentially you can try to hack the system. So let's see. I will run a basic enumerate, and you can see that even the basic scan immediately gets aborted with a 403, and the tool itself says this might be due to a WAF; our WAF is blocking the traffic, so the scan sees nothing. Now we can try to send a random user agent, and this time the scan will go through. It will not discover much, but we should actually see it attempt an SQL dump and try some basic WordPress URLs that people usually probe when attacking, and we should see the alerts immediately; it should all be blocked.

Let's take a look. If we refresh with the last one hour selected, in a few seconds we should start seeing the alerts... see, this is me. There is already a security event; we can see the event which happened just now, and you can see that the WAF mode was block, it hit two counts, and this is the rule ID that was hit. The rule ID is also useful when you want to disable some rules because you are getting false-positive blocks: you immediately see which rule ID blocked your access. You also have the information that it was actually the WPScan agent that tried to scan the site. If we refresh, we should see more alerts coming from the second run; that other run was trying to do the SQL dump on the upload path, and I see it immediately. In the app firewall you see the last five security events coming in, and you can filter them and easily see what is happening.

So this is a simple example where I showed you how, in about 15 minutes, you can set up a globally distributed application running in your DC with WAF and DDoS protection, and it is very easy to do. Now I'm going to pass it back to Pranav for the final slide. Yeah, so I think we can open it up for questions, since we are
almost at the top of the hour. I can open it up for any questions that you may have; as a reminder, you can just add your questions to the Q&A box at the bottom of the screen, and we have about three minutes left. There was one question that was interesting: do apps need to be modified in order to use this? Let me answer it for everyone's benefit: the apps don't need to be modified at all. The apps are what you want to develop; you can write them in any language, and they can be anywhere: containers, VMs, anywhere. What we do, as I said, is discover the app, using either DNS-based mechanisms, Kubernetes discovery, or Consul discovery; there are different discovery mechanisms. Once the app is discovered, the controller takes it and advertises the reachability of the app to different locations, both public and private. Any changes happen in the control plane, so they don't affect your app; the developer doesn't have to change the app to account for any of this. I think that was the one question.

Anything else? There were no other questions that I could see in the chat. Is there any other question that was asked earlier? Let's see here: what is the advantage of distributing load balancing and SSL termination to the network edge? One of the big advantages of distributing application delivery functions to the network edge is that, firstly, these functions are compute intensive; they take up a lot of CPU cycles. SSL termination, decrypting the SSL traffic, takes up a lot of CPU cycles, so distributing it away from the data center conserves CPU cycles in your data center or your
public cloud; that's one, it reduces your cost. But the more important thing is that distributing the application delivery functions closer to where the user is actually improves application performance. Setting up a secure connection from the client to the server is roughly a six-message dance, and doing that over long distances, where the latency is high, reduces performance. If you do that setup closer to where the user is, and then keep a persistent connection from the network edge to the origin server, all your chattiness stays in-country, for example, and over the long haul you are not doing this constant handshake, which improves application performance. So distributing application functions, the WAF and so on, closer to the user improves application performance, reduces risk, because the attacks are blocked at the edge, and reduces your cost.

OK, thank you, Pranav. We unfortunately have to wrap it up now; any closing comments? No, just thank you everyone for joining. If you have any questions, feel free to reach out to Jacob or myself; we are on Twitter, on LinkedIn, on email, and we are happy to answer any of your questions. These are our handles; feel free to reach out, and we can answer any questions offline too. Wonderful. All right, thanks so much everyone, and enjoy the rest of your day. Bye bye. Thank you. Bye bye.