Yeah, my name is Amit and I work on the TME/product management team at Avi Networks. We are a solution based on the SDN principle, wherein the control plane and the data plane are bifurcated. You have the control plane, where you go and do all the configuration via the UI and the REST APIs, and then you have the service engines, which are your load balancers, doing the L4 to L7 load balancing for your applications.

This is totally irrespective of any cloud, so it has nothing to do with any particular hardware it runs on. If somebody wants a service engine or an application hosted on Azure, you go and do that on Azure. You want AWS, you want to do it in OpenStack, you want OpenShift, those work too; when you go to create a cloud, these are the kinds of options you're going to see. And if somebody doesn't want to use any kind of cloud, you can use bare metal: you just deploy our service engine plus the controller on bare metal and you can get going.

Let's say somebody wants to do OpenStack. This is the day-zero configuration. You just go and select the tenant you want this spun up in, the network you want your controller and service engines spun up in, and the other configuration in terms of allowed address pairs and security groups. That's the day-zero config. Say somebody wants to integrate with another SDN vendor, like Contrail: you just go and put in the Contrail endpoint URL. You want to do it with Nuage Networks for integration with that SDN, you just key in those details and you're good to go. So from an SDN integration perspective, the day-zero config is pretty simple; you don't have to do anything else apart from this.

So this is OpenStack. What we'll see is that we have a controller spun up in OpenStack, and in this tenant you can see the legacy LBaaS v2 that people want to deploy, where we've gone ahead and configured the LBaaS provider driver as Avi. Let's go and create a load balancer; these are the two servers I'm going to do load balancing on. What happens, from the perspective of a person who's in operations or management, is that the load balancer he created in LBaaS v2 gets created on Avi. It's still getting created here, just updating, there's a little bit of latency; and if you see here, yeah, it gets created.

In the other sense, let's say a native operator doesn't want another UI. He can just go and create virtual services from within the Horizon UI itself, because you can embed Avi's iframes, which is what you see here.
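For anyone who'd rather script that day-zero setup than click through it, here's roughly what the cloud creation looks like against the controller's REST API. This is a minimal sketch using the avisdk Python package; the controller address, credentials, and the openstack_configuration field names (including where the SDN endpoint would go) are illustrative assumptions to check against your controller's /api/cloud schema.

```python
# Sketch: day-zero OpenStack cloud setup via the controller REST API,
# using the avisdk package (pip install avisdk). Field names below are
# illustrative; verify them against your controller's /api/cloud schema.
from avi.sdk.avi_api import ApiSession

# Hypothetical controller address and credentials.
api = ApiSession.get_session("controller.example.com", "admin", "password")

cloud = {
    "name": "openstack-cloud",
    "vtype": "CLOUD_OPENSTACK",
    "openstack_configuration": {
        "auth_url": "http://keystone.example.com:5000/v3",  # hypothetical Keystone endpoint
        "username": "avi-svc",
        "password": "secret",
        "admin_tenant": "demo",           # tenant the SEs get spun up in
        "mgmt_network_name": "avi-mgmt",  # network the controller and SEs use
        # An SDN integration (Contrail, Nuage) would just add the vendor
        # endpoint URL and credentials here; that is the whole day-zero config.
    },
}

resp = api.post("cloud", data=cloud)
print(resp.status_code, resp.json().get("uuid"))
```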
What you were seeing there, you can also see here: you can go and create virtual services within Avi itself. And for a given virtual service, you can see multiple things. In terms of analytics, you can see the client RTT, the server RTT, what is actually happening. So if you run into a scenario where people report back to you that your application is a bit slow, you can actually see those analytics.

Let's go to logs. If you open up one of these logs, it tells you everything. It tells you that this particular request is coming in from this particular IP address, which is in China; that it's been sent from this particular operating system, which is a Mac; what TLS version and what ECDSA certificate type it's using; what happened in terms of the response the server sent back; the service engine, which is the load balancer, that is serving it; the server connection IP; what's being sent in the URI, everything. And it shows which load-balancing mechanism was selected for this request. It can also give you a bifurcation of what traffic is coming from macOS, iOS, Windows, Windows 7, or unrecognized clients. So the client characteristics alone are decent enough to go and verify that.

And let's say you want to see whether you're suddenly experiencing a DDoS attack. Let's go across to this service. If you look here, the system recognizes this and generates an event whose name is a DDoS attack. What's happening is we're simulating a SYN flood attack, and the SYN flood is coming in from these particular rogue IPs. So you can see that there's an attack happening on that particular application, the type, and the description saying it's been initiated from these rogue IPs, and you can take action on top of it.

And let's say there's an event where you want the load to be shared across multiple virtual services or multiple service engines. You can just go and scale out, and this particular virtual service is now going to be hosted across two load balancers. That's just a manual trigger; you also have automatic triggers you can specify. Say there's an event lasting from Tuesday to Thursday, and you're going to observe a surge of traffic from Tuesday to Thursday. You can write a rule which says that from Tuesday to Thursday these are the service engines that are going to host my traffic, and bang on Thursday night at 12 o'clock, scale them back in. So the second SE which just got created will scale back in, you keep hosting on one SE, and you save on resources.
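Everything we just clicked through, pulling those logs and triggering the manual scale-out, is exposed over the same REST API the UI uses. Here's a rough sketch with the avisdk Python package; the /analytics/logs query parameters and the scaleout action payload are assumptions to confirm for your controller version, and the virtual service name is hypothetical.

```python
# Sketch: fetch recent application logs for a VS and trigger a manual
# scale-out. Endpoint params/payloads are assumptions; check your version.
from avi.sdk.avi_api import ApiSession

api = ApiSession.get_session("controller.example.com", "admin", "password")
vs = api.get_object_by_name("virtualservice", "demo-vs")  # hypothetical VS name

# Application logs: the same entries the UI shows (client IP, OS, RTTs, URI).
logs = api.get(
    "analytics/logs",
    params={"virtualservice": vs["uuid"], "type": 1, "page_size": 10},  # type=1 assumed: app logs
).json()
for entry in logs.get("results", []):
    print(entry.get("client_ip"), entry.get("uri_path"), entry.get("response_code"))

# Manual scale-out: host the VS on one more service engine.
api.post("virtualservice/%s/scaleout" % vs["uuid"], data={"vip_id": "0"})
```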
You don't want a situation where, because of one particular event, you have to do the laborious work of bringing up new hardware, configuring it, and then pulling it back out afterward.

Yeah, so this is one more thing I wanted to show: we also support BGP, with native support for IPv4, IPv6, and dual stack. So if somebody wants to peer over BGP on the front end, or on the back end using SNAT, you can do that as well, and it's a pretty straightforward config. These particular IPs that you see are the IP addresses of your next-hop peers, and on the service engine side, let's say it's this IP, 192.168.111.85, this is where you're going to peer. These are the data vNICs on which you're going to listen for BGP advertisements.

So let's say somebody wants to create a BGP VS. This is the learning of networks it does from Neutron: it just goes across to Neutron and fetches the subnet and the network. There's already a pool, but let's say we just want to create a pool; it's pretty straightforward. You can attach specific health monitors, say a ping, a TCP, or an HTTP one. And this is where the good part comes in. When you were trying to create a load balancer in OpenStack, it's pretty restrictive: you just have a couple of options, HTTP, TCP, or ping. But let's say somebody wants to write a health monitor for himself, say an HTTP one he wants to do by himself. You can just write some custom scripts and get going with that as well.

So we just create this and go here; we have two servers in the back-end network. In terms of analytics, like we were saying, you can have active and passive insights, or no insights. And let's say somebody wants to stream his logs to a third party: we integrate with Splunk. So if you want to send your logs to an external server, you just key in the IP address and specify what kind of logs you want to send, matching logs, significant logs, or non-significant logs, and they get sent across to that particular server and can be seen from that side.

And like I was saying, this is just the vanilla case. If somebody wants to advertise his front-end VIP, you can do that; you can use that particular VIP with SNAT, or advertise the back-end SNAT addresses via BGP as well.
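Coming back to the custom health monitor for a second: an external monitor is really just a script you upload. Below is a minimal sketch of one in Python, under two assumptions worth verifying in the docs for your version: the SE hands the target server to the script via IP/PORT environment variables, and any output on stdout marks the server up while silence marks it down. The /healthz path and the "ok" token are hypothetical.

```python
#!/usr/bin/env python3
# Sketch of a custom external HTTP health monitor. Two assumptions to verify
# against the docs for your version: the target server is passed in the IP
# and PORT environment variables, and printing anything to stdout marks the
# server UP, while producing no output marks it DOWN.
import os
import urllib.request

ip = os.environ.get("IP")
port = os.environ.get("PORT", "80")
url = "http://%s:%s/healthz" % (ip, port)  # /healthz is a hypothetical path

try:
    resp = urllib.request.urlopen(url, timeout=3)
    body = resp.read(256).decode("utf-8", "replace")
    # Mark UP only on a 200 whose body contains the token we expect.
    if resp.status == 200 and "ok" in body.lower():
        print("server up")
except Exception:
    pass  # stay silent so the server is marked down
```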
So in the meantime, while that gets created, we can look at GSLB: we also do global server load balancing, across multiple sites. Here it's pretty much straightforward. You just go and specify the respective GSLB pools you want and the amount of traffic that needs to be sent across to each one of them. You can specify priority, and you can specify the algorithm type, say round robin, consistent hash, or geo. This part is for the other site, where you specify the controller's name and the respective virtual services you've created there, plus the geolocation source you want; if you want to use a configured source, you can specify that in terms of latitude, longitude, and a tag.

We also support pool groups. Some people want their respective pools clubbed together in a pool group fashion. Let's say you select this one; oh, sorry, it should not be pool one, it should be pool two. So you now have two pools with four servers, made into a pool group. You can use that concept as well.

We also support IPAM, with integration with third-party IPAMs as well. There are a couple of them here, and you can see it's a pretty exhaustive list: we have one for AWS, internal ones for Azure and GCP, and one for Infoblox. The OpenStack IPAM you see is actually used for Mesos clouds; it's something we support for Mesos clouds, not for OpenStack in particular. In this particular scenario, if you see here, you just specify the respective IP addresses, and then your IP addressing is handed out from these particular pools. You then go and tie that particular IPAM type to a cloud, so this cloud is going to use that IPAM. Plus you have a DNS profile as well, so if somebody wants to host a DNS virtual service, you can actually do that too. This is an IPv4 DNS VS, and it's pretty straightforward: it uses the System-DNS profile with UDP per-packet, and it's listening on port 53. So if somebody sends DNS requests, they're going to land here and get load balanced.

We also have support for things like server auto-scaling, where you have triggers or events for CPU-based or other metrics-based auto-scaling of servers. Let's say you specify a trigger such that for X traffic the number of servers is going to be two, but for X plus Y it's going to be four. You can specify those particular policies by creating an auto-scaling policy. Let's say you give it the name test, you specify an alert, just a log event, as the scale-out event; so if you want the scale-out of that particular auto-scale group to happen on this particular trigger, you use this trigger, and you can play with the trigger as well. If it's a metric, you select a metrics rule, and you can see there's a bunch of metrics you can use: L4 server goodput, open connections, average pool bandwidth, anything. There are tons of metrics you can use here for your auto-scaling.
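Coming back to pool groups for a moment: the same two-pools-into-one-group operation maps to a single small API object. A minimal sketch with avisdk follows; the members, pool_ref, and ratio field names are assumptions to check against your controller's /api/poolgroup schema, and the pool names are hypothetical.

```python
# Sketch: club two existing pools into a pool group over the API.
# The 'members'/'pool_ref'/'ratio' names are assumptions; verify against
# your controller's /api/poolgroup schema.
from avi.sdk.avi_api import ApiSession

api = ApiSession.get_session("controller.example.com", "admin", "password")
pool1 = api.get_object_by_name("pool", "pool-1")  # hypothetical pool names
pool2 = api.get_object_by_name("pool", "pool-2")

poolgroup = {
    "name": "demo-poolgroup",
    "members": [
        {"pool_ref": api.get_obj_ref(pool1), "ratio": 1},
        {"pool_ref": api.get_obj_ref(pool2), "ratio": 1},
    ],
}
print(api.post("poolgroup", data=poolgroup).status_code)
```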
Yeah, and for customers who want it, we have high availability for the controllers. This particular UI that you see is served by the controller, which is managing everything, and you can have high availability for the controller with a quorum if you want. So you'll have three nodes which are part of a controller cluster, and then a controller cluster IP, which is something like VRRP, where you have a virtual IP it's going to listen on. Let's say at one particular time the .231 node is the serving leader and it goes down: the cluster elects and punts in another node as the leader, but that cluster IP is where the controller will always be available. So you have high availability there.

Plus you have high availability on the service engines, which are the load balancers. We support these types here: N+M, active/active, and then the legacy active/standby, where you can use a floating interface IP on which it's going to listen for those requests.

I think that's all I had. If there are any questions, you can let us know.
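For reference, the three-node controller cluster described above can also be formed over the API in one call. A minimal sketch, with hypothetical node addresses and a payload shape you'd want to confirm against /api/cluster on your controller:

```python
# Sketch: form the three-node controller cluster behind one cluster VIP.
# Payload shape and addresses are assumptions; confirm against /api/cluster.
from avi.sdk.avi_api import ApiSession

api = ApiSession.get_session("10.10.25.231", "admin", "password")  # hypothetical leader node

cluster = {
    "name": "avi-cluster",
    "virtual_ip": {"type": "V4", "addr": "10.10.25.230"},  # hypothetical cluster VIP
    "nodes": [
        {"ip": {"type": "V4", "addr": "10.10.25.231"}},
        {"ip": {"type": "V4", "addr": "10.10.25.232"}},
        {"ip": {"type": "V4", "addr": "10.10.25.233"}},
    ],
}
print(api.put("cluster", data=cluster).status_code)
```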