Okay, thanks for joining me here. I'm Philip Volgaerts. I've been working for Avi Networks for a few months now, and I've actually been busy in the load balancing space for the last 15 years. So it was really interesting to see how applications have been developing, how architectures have been evolving, and how load balancing fits into this story. If you want to hear more, feel free to reach out.

When I was asked to talk in front of the OpenStack community, it was pretty new to me. And when I first delved into the environment, I saw that a lot of big names are actually using OpenStack in production. What struck me was that most of these customers were telling us they still need to solve some long-standing issues before they can actually go into production. That's why load balancing, load balancing as a service, availability, and scaling are such a huge thing to look at.

The last survey we looked at showed that the performance of a cloud is still very important: consolidating everything, reusing CPUs, memory, and so on. But CPU and memory utilization alone are not a real indication of whether an application is performing well; there's still the entire network chain. And at the end of the day it's about user experience. If you deploy a new web app, the only thing you're measured on is whether customers stay on the website, whether they get very good performance and a good end-user experience. So performance, and security for that matter, are very important aspects when you deploy next-generation applications.

To live up to the promise of software-defined and web-scale principles, where you can deploy and automate things, have multi-tenancy, and so on, we also need a very good approach, a good architecture, and a good set of tools to deliver enterprise-grade application services. By application services I mean load balancing, security, SSL offload, Layer 7 switching, and so on. Layer 7 load balancing functionality and SSL offload are things that are not going away. It's not because you move to the cloud that you don't need load balancing anymore; on the contrary, load balancing becomes even more important. We see migrations from standard apps to containerized apps, and nearly everything needs to be scaled out or load balanced. That's the reason we entered this space: to provide load balancing and security services on cloud platforms.

So why another load balancer, when there are already quite a lot of them on the market? First of all, if you look at the open source tooling available for load balancing, for example NGINX or HAProxy, which is basically packaged with OpenStack, these are very good tools. They are robust, they are stable. But sometimes you need more advanced functionality: SSL offloading, new types of certificates, higher key lengths, content-based routing, things like that. That's typically the reason you would move to what we call legacy appliance solutions, hardware-based load balancers. And quite often they offer way more tooling than you would actually need. The problem is that if you want to integrate legacy load balancing equipment into a cloud, you run into a lot of issues, because it typically doesn't run in the cloud. It sits beside it. And it's not because something is running in a VM that it can actually offer cloud-native functionality. That's the reason we came up with a new solution.
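Since Layer 7 switching keeps coming back in this story, here is a toy illustration of the idea: routing on HTTP attributes such as the URL path instead of only on IPs and ports. This is a hand-rolled sketch of the concept, not any product's implementation, and the pool names are made up.

```python
# Toy Layer 7 content switching: pick a backend pool from the HTTP
# request path. Pool names here are made up for illustration.
def choose_pool(path: str) -> str:
    if path.startswith("/api/"):
        return "api-pool"        # dynamic requests go to the app tier
    if path.startswith("/static/"):
        return "static-pool"     # images/CSS go to the cache tier
    return "web-pool"            # everything else hits the web tier

# A Layer 4 balancer sees only IP:port and cannot make this distinction.
assert choose_pool("/api/users") == "api-pool"
assert choose_pool("/static/logo.png") == "static-pool"
assert choose_pool("/index.html") == "web-pool"
```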
If you look at today's workloads and apps, they still run on bare metal, or in VMs, and more and more in some kind of container environment, whether in a public or a private cloud. Because your app can live anywhere and is able to migrate from any place to any other, you basically need these services across all of these architectures. Not to mention the way apps are being deployed: the big monolithic apps, which are still there, move to multi-tier apps, where for example your database, your web tier, and your middle tier are separated, and on to microservices, where you basically have lots of very small containers, each doing one very specific task, and all these container services talk to each other. I've seen multiple demos today where they scale out a web application by simply typing a compose scale command to go from five to twenty nodes. But being able to scale out an application isn't just a matter of adding instances and suddenly having ten web servers. Those ten web servers need to be available, we need to be sure they are reachable, and we need to make sure they each get their share of the load. So it introduces a lot of challenges we need to address.

The solution at Avi is basically a distributed architecture: a load balancer that can run in all of these kinds of environments, whether that is OpenStack, whether that is DC/OS, whether that is some other location. So you have a uniform, small-footprint load balancing and security instance that can run everywhere you might potentially need it. On top of that, it is all centrally managed. If you need to integrate all these tiny load balancers into a cloud, there's a centralized management platform, which is basically one instance, or a cluster of instances, or even a container for that matter, which talks to all the other ecosystems. So it's truly living up to the promise of software-defined application services. First of all, you have a data plane which takes care of everything that makes your app faster. On the other hand, you have a control plane which integrates with, for example, OpenStack. That means that the moment you push the button in OpenStack to deploy an app, we can pick up on that deployment and its configuration and do the load balancing for you. So there's a pretty broad spectrum of applications and tooling that can work in coexistence with the solution, and that's simply because of one fact: it has an open API, so you can go and take a look at it.

So where do we add on top of these tools? First of all, there's what you might expect from an enterprise-grade load balancer: HTTP and HTTPS load balancing, in-depth health checks, and things like that. Then other things like content switching, caching, and compression. These are important, first of all, because your apps need them; second, because you need to offload your servers; and last but not least, because they guarantee a very good end-user experience. That's why you need these application-level tools.
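To make the control plane idea more tangible, here is a rough sketch of driving such a central controller over a REST API from Python. The controller address, endpoint paths, and payload fields are illustrative assumptions, not the documented API.

```python
# Minimal sketch: authenticate against a central controller, then declare
# a virtual service. The control plane, not the operator, decides where
# the data-plane instances end up. All names below are hypothetical.
import requests

CONTROLLER = "https://controller.example.com"   # hypothetical address

session = requests.Session()
session.verify = False                          # lab setup, self-signed cert

resp = session.post(f"{CONTROLLER}/login",
                    json={"username": "admin", "password": "secret"})
resp.raise_for_status()

virtual_service = {
    "name": "web-vip",
    "services": [{"port": 443, "enable_ssl": True}],
    "pool": "web-pool",                         # assumed payload layout
}
resp = session.post(f"{CONTROLLER}/api/virtualservice", json=virtual_service)
resp.raise_for_status()
print("virtual service created:", resp.json())
```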
So how do you go about scaling? If you deploy an app and it's running on two web servers, why would you need a very high-capacity load balancer? It just takes resources; it's sitting there. So the load balancing itself, once it's deployed, also lives up to these web-scale principles. You can start out with a small load balancer, and if the app needs to scale out, you can scale out the load balancing very easily. It's fully automatic: in OpenStack it's a matter of clicking a button, or launching an API call, or a policy for that matter, and it will do it for you.

Then there's the security part, of course. I've been working in the web application security space for a very long time, and web apps don't get more secure; the way apps are developed and code is written isn't really changing. So security is still a very important factor. Not to mention SSL and web application firewalling; all these things are a bare necessity when you deploy your web apps. DDoS protection is another fairly important thing that is gaining a lot of attention. Quite often we get involved with customers whose sole problem is that the web apps their business relies on are highly critical, and these new types of web apps are simply under attack. So you need tools to accommodate that. And that's basically the main difference between a simple tool that does what it says and something that lives up to enterprise-grade functionality.

Something we added is geared more toward containers. I learned today that a lot of container deployments, Kubernetes or DC/OS for that matter, actually live inside an OpenStack cloud. That introduces a big security issue, because these containers live somewhere on those hosts and there's very little protection between containers talking to each other. Our solution sits between these containers and can automatically learn which containers are talking to which. So it can turn on what we would call microsegmentation, not just between VMs but between containers. It's a very good feature, and it's basically built into how the product works within the cloud.

As I said, end-user experience is very, very important. You might have the biggest cloud and the fastest machines, but if an app is slow, there can be a lot of reasons. It can be CPU, it can be memory, it can be disk I/O, but it can also be latency, the network in the data center, the virtual network, the internet connection; it can be related to location or to device, phones being slow for that matter. So what we added on top is a lot of analytics and visibility. Since we separated the entire data processing part from the analytics, the analytics live on the controller. If you need a lot of analytics, just scale out the controller, and it will give you an in-depth view of how an application is actually behaving. Based on that information, if you for example see that app latency increases, we can talk to OpenStack, or to Docker, or to DC/OS for that matter, and ask it to scale out the web app until the situation is back to where you expect it to be.
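In essence that is a closed control loop: measure an application-level metric, compare it to the objective, and scale until it recovers. A minimal sketch, assuming hypothetical helpers for the metric fetch and the orchestrator call:

```python
# Closed-loop app autoscaling in miniature. The two helpers are stand-ins
# for the controller's analytics API and the orchestrator (OpenStack,
# Docker, DC/OS, ...); both are simulated here.
import random
import time

LATENCY_SLA_MS = 250        # assumed objective, for illustration
CHECK_INTERVAL_S = 30

def get_p95_latency_ms(app: str) -> float:
    # Hypothetical: fetch the 95th-percentile response time for `app`.
    return random.uniform(100.0, 400.0)

def scale_out(app: str) -> None:
    # Hypothetical: ask the orchestrator for one more backend instance.
    print(f"scaling out {app}")

while True:
    if get_p95_latency_ms("web-app") > LATENCY_SLA_MS:
        scale_out("web-app")            # repeat until latency recovers
    time.sleep(CHECK_INTERVAL_S)
```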
As I said, it's fully centrally managed, with a full REST API, because that's really a basic requirement. So how does it work, technically? I'll take the OpenStack example. You deploy the controller, and the moment this controller is up and running, you give it credentials to go to Keystone, and that's it. From that moment, the controller will learn everything it needs to learn about the environment, and the moment you use load balancing as a service, for example for an app, it will do all of this for you: it will deploy the load balancers where you actually need them, put them in the right networks, and deploy the config. As I said, the controller can do that on bare metal, in VMs, in containers, or in public cloud. We have cloud connectors, so even if our controller were sitting outside of OpenStack, it can still talk to OpenStack to do this kind of thing. So the controller can really take care of a hybrid cloud. And once all these load balancers are deployed, they send their metrics out, and the controller does all the nice graphing and trending.

So briefly, how does it work in OpenStack? As I said, you simply deploy the controller. The controller has a REST API behind a user interface. It talks to Keystone to import all the roles, the tenant information, and so on. Once all this information is pulled in, the moment somebody deploys a load balancer, by means of our REST API or by means of Horizon for example, it will go talk to Nova to deploy the nodes, or service engines as we call them. We do all the networking for you, we place them automatically in the networks, we can pick up an IP address automatically, and for some customers we even register that IP address in DNS. So it is a really easy way to deploy load balancing without all the hassle of configuring VLANs and things like that.

There are basically two modes of installing it, or three if you count running the controller outside of OpenStack. You can deploy it in a scenario where you have a provider tenant which does all the load balancing and analytics; you manage it, and if you need load balancing, you deploy it there. Or you can deploy it with an admin tenant that your OpenStack API talks to, and we deploy the load balancers into the individual tenants. As I said, it's multi-tenant, which basically means that if somebody logs in to a tenant, he will only see his own config, and that's also the only thing he will be able to change. It's all the same controller, and it does all of that automatically for you.

One of the last slides before I jump into the demo: a very neat piece of functionality is autoscale. As I said, the controller continuously measures how apps behave, and if it were to detect that the load balancing engines are, let's say, maxed out, because we got an immense hit on this app and we need more CPU, at that moment our controller is aware of it and can instruct OpenStack to spin up extra load balancers. In this case, these four load balancers act as one big scalable load balancer. And if the load isn't increasing anymore and you need the resources for something else, you can take these load balancers away gracefully, not by just killing them, so that you go back to the original footprint.
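Conceptually, the Keystone-and-Nova part of that flow looks something like the following. This is a sketch using the openstacksdk, not the vendor's actual connector; the auth URL, image, flavor, and network choices are placeholders.

```python
# What a cloud connector conceptually does: authenticate via Keystone,
# discover the environment, then boot a data-plane VM (service engine)
# through Nova. All credentials and names below are placeholders.
import openstack

conn = openstack.connect(
    auth_url="https://keystone.example.com:5000/v3",
    project_name="admin",
    username="lb-controller",
    password="secret",
    user_domain_name="Default",
    project_domain_name="Default",
)

# 1. Learn the environment: networks, subnets, and so on.
networks = list(conn.network.networks())

# 2. On a load balancing request, deploy a service engine into the
#    right tenant network.
image = conn.compute.find_image("service-engine")   # assumed image name
flavor = conn.compute.find_flavor("m1.small")
server = conn.compute.create_server(
    name="service-engine-1",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": networks[0].id}],
)
conn.compute.wait_for_server(server)
print("service engine running:", server.name)
```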
Something we will add pretty soon, in a matter of days, is predictive autoscaling. We try to figure out the trends: does your big traffic spike appear once a day, once a week, on weekends? That way we can proactively scale out the environment, and when the capacity isn't needed anymore, scale back in. Totally automatic. And if one of the nodes goes down, or one of the load balancers crashes for whatever reason, the controller is totally aware of the situation, will immediately talk back to OpenStack, and will restore the cluster to how you defined your SLA.

That is how we scale the load balancers, but we also measure how the app itself is performing. So we have a fairly good idea of what the app performance is, what the throughput is, what the app response time is. That is something you don't get out of CPU or memory figures; it's really in-depth, HTTP-related information. You can use that information by means of an autoscaling policy: the controller knows we need more resources, goes to OpenStack, by means of Heat or something else, and asks it to scale the app. Once the app is scaled out, our controller will pick up on it and add the new node to the cluster automatically.

I have a few backup plans, simply for demo reliability, so let's start and you can see it. So basically, this is the controller, which is deployed in OpenStack. You can see the built-in tenants: it took all these tenants, which you can see on the left, and pulled them out of Keystone, so I can see which services were deployed in which tenant. I've never touched this controller's configuration; it's just there, so you can see what it has done. This entire config was actually done by means of the LBaaS driver, the LBaaS plugin we made for OpenStack. The moment somebody deploys through LBaaS as a service, they just configure it there, and the driver pushes it to the controller. Of course, LBaaS will give you the features LBaaS promises; whether you use LBaaS version 1 or, shortly, LBaaS version 2, you will get some extra functionality, but in the end there can be much more to it than just the functionality offered by LBaaS. Still, one scenario is that you just use LBaaS, so you're not relying on a separate API. What we added is the ability to see the statistics in Horizon: I can go into this VIP, and you can actually see how this app is behaving. I'll use another app to show you more details, but this is what you get, and you never touched anything; the only thing you did was install the controller and point it at Keystone. If you want to use LBaaS, you can, and it will continue to work. If you need specialized functionality, you can also deploy directly on the controller instead of through LBaaS.
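For reference, the plain LBaaS path looks roughly like this from the user's side; a sketch against the Neutron LBaaS v2 API with python-neutronclient, where the credentials and the subnet ID are placeholders.

```python
# Creating a load balancer through Neutron LBaaS v2; the configured
# driver (here, the vendor's plugin) translates these objects into
# controller configuration behind the scenes.
from keystoneauth1 import identity, session
from neutronclient.v2_0 import client as neutron_client

auth = identity.Password(
    auth_url="https://keystone.example.com:5000/v3",  # placeholder
    username="demo", password="secret", project_name="demo",
    user_domain_name="Default", project_domain_name="Default",
)
neutron = neutron_client.Client(session=session.Session(auth=auth))

# Load balancer on a tenant subnet...
lb = neutron.create_loadbalancer(
    {"loadbalancer": {"name": "web-lb", "vip_subnet_id": "SUBNET_ID"}}
)["loadbalancer"]

# ...a listener for HTTP traffic...
listener = neutron.create_listener(
    {"listener": {"name": "http", "loadbalancer_id": lb["id"],
                  "protocol": "HTTP", "protocol_port": 80}}
)["listener"]

# ...and a round-robin pool the backends will join.
neutron.create_lbaas_pool(
    {"pool": {"name": "web-pool", "listener_id": listener["id"],
              "protocol": "HTTP", "lb_algorithm": "ROUND_ROBIN"}}
)
```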
Again, we're still linked to OpenStack. So if I create a load balancer over here (I'll switch to the right), right now I'm using the API of the controller itself. You will see that the moment I create one, even when I use the basic setup, I can pick things, and this information is now being pulled out of OpenStack. You pick the networks and basically say, okay, deploy it in this network, and you can also ask it to show all the VMs deployed in each network. As you can see, I'm in the wrong tenant apparently, but it actually shows the VMs which are available in the tenant, to make it easy. And if I go to the network topology, you can see, when it loads, where the load balancer is. This is our load balancer, and these are the servers and the clients which are generating the traffic. These are all examples of the full integration.

Now, I have some five minutes left. I'll show you a few more capabilities which can make your life fairly easy when it comes down to understanding whether an application is really working as expected. As I said, it's not only CPU, memory, and disk. For an application which is deployed by means of this solution, as I've shown you, we gather all the statistics and do a lot of number crunching on them. If you look here on the right, you can see that things seem to be okay. You get all the information you would need. Connections are fine and pretty low, but if you pay attention, you actually see that the system detected that the app is responding, but that we have quite some errors at the request level, not even the connection level. So this gets my attention, and I could trigger an alert so that I get an SMS or whatever, but I can also go into the logs, which I already opened here. Let me refresh that page quickly.

At this moment I'm seeing all the errors which were detected by the system. These can be network related, application related, or whatever. I could also ask it to show me all the logging related to traffic which is fine. So at this moment I'm looking at a bunch of logging information which is all indexed and analyzed for me. Let's try to figure out what the problem is with these requests. I can ask the system to show me a breakdown of how this looks, and it will tell me that about, what is it, 97% is okay and the remaining 3% is actually resulting in errors. Anything which is in blue I can filter on, so I can do the same operation, and it will tell me that these are 404s, or 4xx. So right now I'm digging through, I think, 70,000 logs, and I know this amount of errors is related to response codes; you can see most of them are 404s. Again, to figure out what server, what application is causing the errors, I can simply ask the system to show me all the URLs that cause this mistake. So it's a missing image somewhere, and I can pretty easily figure out on which backend server as well. If I had more time I would show you, but this application is actually doing Layer 7 content switching with about 10 nodes behind it, so it really pinpoints whatever you have in one place.
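That drill-down is essentially successive filtering and grouping over indexed request logs. In miniature, with made-up records:

```python
# Reproduce the demo's drill-down on a handful of fake request logs:
# overall error share first, then a breakdown of the failures by URL.
from collections import Counter

logs = [
    {"url": "/index.html",   "status": 200},
    {"url": "/img/logo.png", "status": 404},
    {"url": "/img/logo.png", "status": 404},
    {"url": "/api/items",    "status": 200},
    {"url": "/index.html",   "status": 200},
]

errors = [r for r in logs if 400 <= r["status"] < 500]
print(f"{100 * len(errors) / len(logs):.0f}% of requests are 4xx")

# Group the failing requests by URL to pinpoint the culprit,
# e.g. a missing image referenced from every page.
for url, count in Counter(r["url"] for r in errors).most_common():
    print(url, count)        # -> /img/logo.png 2
```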
I have a minute left, so I can show you something else which is pretty intriguing. We have an application over here; we call it "scaleout" because we're going to scale it out. If I look at it, I can see that response times were okay and so on, but this time I'm seeing a pretty high percentage of failed connections. I could go into the logs and try to figure out what's wrong, and if I do that, I can figure out fairly easily that this load balancer, this one, is being hit by one to two gig of SSL traffic. It's handling about one to two gig, it's handling five, six, seven, how much is it? It's hitting maximum CPU on one vCPU and one gig of RAM. So performance is pretty okay, but in the end you see that we have some errors.

To solve this, I would have been alerted already, maybe twenty times, because I can fairly easily put a threshold on how much CPU it can tolerate. I can go into this application and, automatically or manually, hit the scale-out button, and what is happening right now is that it's talking to OpenStack. What happened is that we deployed a second load balancer in a matter of a few seconds. It's sitting in the same networks, and it's handling half of the traffic right now. If the problem weren't solved at this point, I could fairly easily do exactly the same thing and put a third load balancer into the game. As I said, this is fully orchestrated, there is no magic behind it, and it can be done fully automatically, based on all the metrics we gather. I've shown it on the load balancers, but we can basically do the same for the applications.

That's my story. If you have any more questions, feel free to ask; I'll be around for the rest of the day, so feel free to pass by. Thank you.