Hello everyone and welcome to the OpenStack Infrastructure Summit. We are excited to be part of this. I am Saurabh Sureka, and with me I have Ritu and Hunter. We are part of the OpenStack team at A10 Networks, and today we will be talking about the amphorae of application delivery and security with Octavia. On the agenda, we want to walk you through the journey of LBaaS, which has had its own journey since OpenStack's inception. We want to talk about the LBaaS-plus amphora that we bring in, which enables both application delivery and security for OpenStack. Ritu is going to provide an architecture overview and cover interoperability, deployment, and best practices using Octavia, and if time permits we will have a demo that demonstrates the service orchestration and the per-app analytics that we've enabled on OpenStack. A quick introduction to A10 Networks and who we are: we've been around for about 15 years, we have about 8,000-plus customers, and we deliver and secure critical workloads at hyperscale. We work closely with the leading service providers, the web-scalers, and the large enterprises to help with their application delivery and security needs. We are known in the industry for our best price-to-performance ratio, enabled primarily by the performance of our flexible architecture. We've been associated with OpenStack since 2014 through the Neutron project (Neutron was known as Quantum before that). We've benefited from the community, so thanks to the developers out there, and we've also worked closely with our customers to bring our services onto their OpenStack infrastructure. Now, the LBaaS journey: LBaaS was integral to Neutron, which as you know is the networking component. With Octavia it has been forked out into its own project. Octavia brings in the concept of the amphora. When I heard the term first I was mesmerized by it.
So I looked it up: the word is derived from ancient Greek, where the amphora was used primarily to both store and transport products like wine, back in the Bronze Age, around maybe 2000 BC. If you are looking to win technical mindshare, what better name to have than the amphora? So with Octavia, LBaaS is decoupled from core OpenStack development. This is important both from a development and from a deployment point of view, and the amphora can run as a virtual machine, as bare metal, or as a container. This supports different types of application workloads: depending on your use case you can run the load balancer as a VM, on bare metal, or as a container, and it truly enables at-scale load balancing with the multi-tenancy that today's application workloads require. With the A10 solution we bring in LBaaS-plus; it's basically our shiny amphora for your OpenStack infrastructure. A typical mid-to-large OpenStack deployment will have a few hundred applications, and we enable three key things for those application workloads. The first and foremost is availability: we make sure that your applications are always available. The second is security, both for the applications and for the underlying infrastructure. And third, the important point, is observability: the ability to have a real-time view into your application traffic at a per-app, per-user level. To expand on this, when I say availability it's really about enabling five nines of availability, and five layers of availability, for your OpenStack infrastructure. Availability can mean making sure your application is fault tolerant, making sure the application infrastructure is fault tolerant, and likewise the load balancer, the load-balancing infrastructure, and the namespace that is associated with the load balancer.
If you have deployed your applications across multiple regions or multiple clouds, we provide fault tolerance across all of that, and this enables the BC/DR (business continuity / disaster recovery) type of use cases. If you want intelligent traffic management for your applications, where the infrastructure closest to a user serves that user, you can enable it with our service. We also enable blue-green deployments. This is important when your dev team wants to push out changes to a select set of users and you want to ensure a smooth rollout for your application. And thirdly, custom application health checks can be enabled right at the HTTP layer, at the TCP/UDP network layer, or even at the application protocol layer. Talking about security, we enable a defense-in-depth posture for the application workloads, starting from secure end-to-end connectivity between the user and the application, enabled by SSL/TLS and IPsec; this covers either site-to-site or client-to-site connectivity. Then we enable application access management, which provides single sign-on and multi-factor authentication through our integration with identity providers (IDPs). The third thing we enable is DDoS protection, both at the network layer and at the application layer, and finally protection from bot or malicious traffic. We also have a web application firewall that protects your application from common vulnerabilities, such as cross-site scripting attacks. Talking about observability, we enable real-time observability at a per-app, per-flow level. This is done without any kind of agent in between: we've built the smarts into the load balancer, along with a streaming capability to our controller.
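The custom health checks described above can be sketched at two of those layers. This is a minimal, illustrative sketch, not an Octavia or A10 API: the hostnames, ports, and the `/healthz` probe path are assumptions.

```python
import http.client
import socket

def tcp_check(host, port, timeout=2.0):
    """Layer 4 health check: the member is healthy if the TCP port accepts a connection."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def http_check(host, port, path="/healthz", expected=(200,), timeout=2.0):
    """Layer 7 health check: the member is healthy if the probe URL returns an expected status."""
    try:
        conn = http.client.HTTPConnection(host, port, timeout=timeout)
        conn.request("GET", path)
        healthy = conn.getresponse().status in expected
        conn.close()
        return healthy
    except OSError:
        return False
```

A real monitor would run these probes on an interval and require several consecutive failures before marking a member down, which is the behavior the provider driver exposes as configuration.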
This enables quite a lot of new use cases around predictive analytics, actionable insights, and, importantly, alerts and notifications that can be configured easily. All of this is API-driven, so if you want to integrate it with your SIEM vendor or with other components of your infrastructure, you can easily extend it yourself. Our LBaaS can run as high-performance hardware, as a VM, on bare metal, or as a container. We bring in high performance with our integrations around SR-IOV and DPDK, so you can run our software at 100 Gbps as a single instance, and you get the same feature functionality across multiple clouds. We realize that there is a need to automate the workloads as well as the deployment and configuration of the load balancer, so we have strong support for an automation toolchain. The other thing we hear from our customers is that they are looking for a self-service portal for their workloads, and they have two primary user personas: NetOps and DevOps.
The NetOps team is primarily responsible for the lifecycle management of the load balancer, and the DevOps team is responsible for their view into the application traffic, things like configuring VIPs or just looking at application health checks. We enable both of those use cases with Harmony Controller. So if you have hundreds of applications and you are on the DevOps side, you can zoom in to your specific application and get real-time visibility into 200-plus parameters that can be used for advanced analytics as well as for alerts and notifications. As a simple use case, if you are running into an application error, you can easily configure a webhook and post a message into your Slack channel, so that it alerts you the moment the incident happens, and this is enabled on OpenStack as well as across the hybrid cloud. With that I'm going to pass it over to Ritu. Ritu, over to you. Thank you, Saurabh. We're going to start off with a few slides to introduce what Octavia is as part of the OpenStack releases, and then we'll walk you through the A10 Octavia provider driver. Octavia is an operator-scale load balancer as a service which has been available in OpenStack since the Liberty release. On the back end, Octavia uses the amphora driver for scaling and on-demand load balancer creation; multiples of these amphorae can each be deployed as a virtual machine, a container, or on bare metal. What this brings to the table is that the amphora driver truly supports different form factors for load balancing, and horizontal scaling is what differentiates Octavia from its predecessor, Neutron LBaaS. Basically, it adds more machines to the pool of available resources, making it extremely supportive of multi-tenancy for customer SLAs and security associations.
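The webhook-based alerting described above can be sketched in a few lines. This is a hedged illustration, not Harmony Controller's actual API: the payload shape, the metric names, and the webhook URL handling are all assumptions (Slack incoming webhooks do accept a JSON body with a `text` field).

```python
import json
import urllib.request

def build_alert(app, metric, value, threshold):
    """Build a Slack-compatible webhook payload for an application alert."""
    return {"text": f"ALERT {app}: {metric}={value} crossed threshold {threshold}"}

def send_alert(webhook_url, payload, timeout=5):
    """POST the alert as JSON to the configured webhook URL; returns the HTTP status."""
    req = urllib.request.Request(
        webhook_url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return resp.status
```

The same payload could just as easily be pointed at any HTTP endpoint your infrastructure exposes, which is the "extend it yourself" point made above.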
Regarding interaction with the OpenStack components: Octavia interacts with Nova, Neutron, Barbican and so forth through the provider driver interface, which is basically a superset of the LBaaS v2 APIs. It brings in the flexibility of flavors for different-sized VMs and also makes it easy to create custom provider drivers for integration into Octavia, which makes it well suited to modern cloud deployments. Next slide please. Here I'm summarizing the Octavia architecture into three key components. First, the amphora: each individual load balancer is called an amphora, and what's natively available as part of Octavia is an Ubuntu VM which spins up with HAProxy. Then you have the controller, which is essentially the brain of Octavia and runs five daemon services: the controller worker, the health manager, the housekeeping manager, and so forth. Each amphora comes up with a network interface on the load-balancing network, which essentially provides a direct connection into the tenant network so that you can access the tenant servers. Next slide please. As part of the A10 provider driver for Octavia, what we've done is consume the natively available Octavia APIs and customize them for deploying, configuring, and managing A10 load balancers; in this case A10 load balancers get deployed as amphorae. Our focus with the provider driver is to keep it highly scalable and highly available, and to make it easy to automate these configurations on the A10 load balancer side. To make this provider driver easy to manage and deploy, we've built out our own three microservices, which run as daemons: specifically the Octavia worker, health manager, and housekeeping manager, and we've also attached them to the A10 controller worker.
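To make the health-manager role concrete, here is a hedged toy sketch of the idea, not Octavia's actual implementation: amphorae emit periodic heartbeats, and any amphora silent for longer than a failover threshold is reported as down so the controller can rebuild it. The class name, threshold, and IDs are all illustrative.

```python
class HealthManager:
    """Toy heartbeat tracker illustrating the health-manager role in the controller."""

    def __init__(self, threshold=10.0):
        self.threshold = threshold  # seconds of silence before an amphora counts as down
        self.last_seen = {}         # amphora id -> timestamp of last heartbeat

    def heartbeat(self, amphora_id, now):
        """Record a heartbeat from an amphora at time `now` (seconds)."""
        self.last_seen[amphora_id] = now

    def down_amphorae(self, now):
        """Return the amphorae that have been silent longer than the threshold."""
        return sorted(a for a, t in self.last_seen.items()
                      if now - t > self.threshold)
```

In Octavia proper this failure detection is what triggers the failover flow that replaces a dead amphora with a freshly booted one.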
For SSL-based secrets we leverage Barbican with the A10 Thunder, and I'll go into the deeper use cases and functionality that A10 Thunder provides, but in this specific case it's SSL termination, or what we call SSL offload. Next slide please. Okay, so the A10 Octavia provider driver is now available, and it's also publicized on the OpenStack web page. The link here is our GitHub page, where you can step through the documentation and also download the package; we have more resources and links at the end of this presentation, and this is just one of them. Next slide please. I'm going to focus on some of the top functionalities that we've enabled in the A10 Octavia provider driver as of the September 2020 release; of course there is a lot more functionality that we are planning to add. At the top of the list of what we've enabled today is layer 4 network load balancing, configuration, and automation through the provider driver. Then layer 7 application load balancing through the provider driver, and layer 7 HTTP rules and policies, which get configured as aFleX policies and bound to the load balancer. Then advanced health monitoring, which can be application based or network based, for your backend servers and services, with the speeds and feeds required for it, again natively available through the provider driver. Then SSL offload, or SSL termination, using Barbican, as I mentioned, for storing the SSL secrets; that is another key functionality and one of the most popular use cases for A10 load balancers through the OpenStack integration. The next one would be hierarchical multi-tenancy. This is a key feature, as we already discussed, of what Octavia brings to the table.
So what we've done is enable hierarchical multi-tenancy, which spans your domain, project, and tenant. This is achieved through layer 3 virtual partitions within the A10 load balancer for hardware deployments, and similarly via individual VMs, if required, in the case of A10 vThunder (virtual Thunder) deployments. For high availability there is of course the A10 active-standby mode of deployment, and we also provide something called VCS, the A10 load balancer clustering, to aggregate resources and give you more throughput, more aggregate performance, and HA redundancy and failover. With VLAN support we provide a single aggregator which can be split into multiple partitions, and you can then differentiate your customer engagements through the VLANs. All of this comes through the A10 Octavia provider driver, which can integrate with the A10 bare-metal or hardware solution, the virtual solution called vThunder, or even our container solution. We also have the A10 Octavia config file, which gets into the more intricate A10-specific configurations. Next slide please. With this we'll move on to the demo, and for the demo Hunter will join us as well. I'm Hunter Thompson, the engineering lead for A10's OpenStack team. I'm going to be giving a demo today covering how to configure and orchestrate A10 Thunder devices, among other things, in the OpenStack environment, but before we get there I'm going to show just the very basics of Octavia. First up we have the load balancer, which is just the IP address; it's essentially your VIP. Then we have the listener, which is actually sitting there listening for the requests that come through; in this case we're going to have an HTTP listener sitting there on port 80.
We have two backend member servers; those sit inside the sales pool, which will use the round-robin load balancing algorithm to direct traffic between them. Also today we're going to be showing the A10 Harmony Controller, just to show the analytics and the traffic flowing through to those backend servers. In this section I'm going to show the config really quickly: we have the controller worker, our health manager, and the housekeeper, which are the Octavia microservices running there in the background. Here I'm creating the OpenStack load balancer itself; in this case we're going to be creating the VIP on this provider network, which is just directed at VLAN 111. Now I'm going to show that we have our amphora created on the admin project, and if you look at the right side of your screen you can see the journal entries where it's logging all of the commands and information being sent over. So now I'm going to hand it over to Ritu, who's going to cover Harmony, as we are about to start sending requests from our client to the VIP and then on to the servers, so we can get some analytics off that. Thank you, Hunter. We are going to log into Harmony using a regular web browser. A10 Harmony can run as a VM within your OpenStack host network, and that's what we've done here. As I'll show you, it is a per-app-visibility, deep-analytics platform which plugs into the A10 load balancers of all form factors, across virtual, hardware, and containers, and gives you deep analytics. As you see here, we're looking at the cluster where we have two of the A10 Thunder devices clustered; Thunder 1 is the master and the active device. We'll go into the corp A10 partition to view deeper analytics.
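The round-robin behavior of the sales pool reduces to a simple rotation over its members. A minimal sketch (the member addresses are made up for illustration, and a real pool would skip members a health monitor has marked down):

```python
import itertools

class RoundRobinPool:
    """Minimal round-robin pool, mirroring the demo's two-member sales pool."""

    def __init__(self, members):
        self.members = list(members)
        self._cycle = itertools.cycle(self.members)

    def next_member(self):
        """Return the member that should receive the next request."""
        return next(self._cycle)

# Two backend servers, as in the demo
sales = RoundRobinPool(["10.0.0.11:80", "10.0.0.12:80"])
```

With two members, consecutive requests simply alternate between them, which is the traffic pattern the Harmony dashboards show later in the demo.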
In the analytics dashboard, as you log in you'll be able to see the inventory, and in this case we have the infrasummit demo. Looking into the analytics here, the analytics is divided across every layer of the packet's transit. First you see the client tab; then, within the Thunder cluster, you have the cluster tab; and if you have enabled the web application firewall, you can see the WAF analytics in there. ADC service is what we've enabled, which is the server load balancer, with the web analytics and details that we'll tap on and look further into. You can also record and monitor your server and application health. First we're looking at the client page. You can see the request locations, the request methods, and the response codes in detail; as you hover over them you see more details, and you can click through to get the list of incoming requests and dig further into it. I'll show you a dashboard which does just that. There's the peak time at which you received the most requests; in this case we seem to have sent a lot more requests at 14:00 today. Within the operating systems, you see that we have sent requests from macOS, Windows, Linux, Android, etc. I think Hunter is also going to send some curl commands for iOS; tell us more, Hunter. Yeah, I'm going to simulate an iOS request here, and then we have to wait on Harmony to refresh, since it polls every once in a while. While that's happening, we're going to go back to the L7 policy. In the L7 rule I had said that if we see a header with "reject", then we will reject and drop the request with a Bad Request response, and we can see that happen there. If we change it up and send a good request, it goes through just fine. And now, after refreshing, we can see that iOS has been added.
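The L7 rule Hunter describes reduces to a simple predicate evaluated at the listener before the request reaches the pool. A hedged sketch of that logic (the "reject" header name and the 400 response are what the demo shows; the function itself is illustrative, not the provider driver's API):

```python
def apply_l7_policy(headers):
    """Return an HTTP status to reject with, or None to forward to the pool.

    Mirrors the demo's rule: any request carrying a 'reject' header
    is answered with 400 Bad Request instead of being load-balanced.
    """
    if any(name.lower() == "reject" for name in headers):
        return 400  # Bad Request: drop the request at the listener
    return None     # no match: hand the request on to the pool
```

In the real deployment this rule is expressed as an Octavia L7 policy, which the A10 provider driver renders as an aFleX policy bound to the listener.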
Thank you, Hunter, that was really useful. Now we're looking at the browsers: we've been sending some curl commands alongside Firefox, and curl shows up as unknown, and Chrome of course. Here you can see the throughput. To dig deeper into per-packet analytics, we also show that here in the minimized tab, which we've pulled up, and as you can see the granularity is extreme. For troubleshooting this is an extremely useful tool: you can filter across the browsers, the client operating system, the device, the client IP, the URLs, the request type, etc., and it gives you all the packets that were logged with the selected details. Looking at the applications, you're able to tell the response time of the application and the application latency, which gives you a very detailed view of your application health, something that helps app developers dig deeper into the top domains and URLs that were targeted, the application servers, the server health, and the server response time itself. That brings me to the end of this demo.
All right, thank you, Ritu and Hunter, and thank you for the demo. This is the final call-to-action slide. If you are looking at the OpenStack Octavia project and would like to partner, we are here to help you. A lot of users are moving from Neutron LBaaS to Octavia, so we'll be more than happy to engage with you all, especially if you are looking for high-performance VNFs or for cloud-native solutions. A lot of people are looking for consistent application delivery and security across hybrid cloud infrastructure, and we'll be more than happy to collaborate on that. Below we have some important links: the email ID for anything open source is opensource@a10networks.com; the GitHub page, which is actively updated and visited, has the A10 Octavia project, the provider plugin, and the driver that Ritu talked about; there's the listing on the OpenStack page; and finally the PyPI package, if you want to just download and install it in your environment. So again, thank you everyone, this is what we have for today. Thanks for joining.