Hi, thank you for coming to the happy hour. I'm Prasad Vallanki, CEO of OneConvergence. We're a startup based in Santa Clara, California. What we are demoing and talking about is our network virtualization and service delivery platform. It is a software solution that plugs into OpenStack: it allows arbitrary networks to be created through a Neutron plugin, and on top of that we provide what we call network services. Where we are particularly innovating is around network services and their creation and chaining. So our solution is a network and service controller with a plugin into Neutron, and it works on existing edge switches and OVS. We don't write our own kernel modules or anything like that; we work with existing software, including the OVS vSwitches and any of the overlay daemons that already exist in the Linux ecosystem. We also work with the existing top-of-rack switch infrastructure, so we make no assumptions about anything in the network. What we do is separate the problem into two areas: network virtualization and what we call service virtualization. In network virtualization, we create overlay networks using any of the existing overlay protocols, such as VXLAN or NVGRE, or, in the future, Ethernet- or MPLS-based overlays as they come into play. That allows us to create what we call multi-tenant network domains. But where we are looking to go much further is the service area. While we allow networks with arbitrary topologies to be created, we are trying to make it possible for arbitrary services to be inserted and chained, in a way that creates a service overlay on top of the network overlay. As services get chained, it becomes much more difficult to build that service overlay so that traffic is engineered from one service to the next.
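To make the service-chaining idea concrete, here is a minimal Python sketch of a service overlay as an ordered chain of services sitting on top of a network overlay. All names here (Service, ServiceChain, the overlay identifier) are illustrative assumptions, not the actual product API.

```python
from dataclasses import dataclass, field

@dataclass
class Service:
    """A network service attached to a tenant's overlay (illustrative)."""
    name: str
    kind: str  # e.g. "router", "firewall", "load-balancer"

@dataclass
class ServiceChain:
    """An ordered service overlay layered on a network overlay."""
    overlay: str                              # e.g. a VXLAN segment ID
    services: list = field(default_factory=list)

    def insert(self, service, position=None):
        """Insert a service at an arbitrary point in the chain."""
        if position is None:
            self.services.append(service)
        else:
            self.services.insert(position, service)

    def path(self):
        """The order in which tenant traffic traverses the services."""
        return [s.name for s in self.services]

chain = ServiceChain(overlay="vxlan-1001")
chain.insert(Service("edge-router", "router"))
chain.insert(Service("fw-1", "firewall"))
print(chain.path())  # ['edge-router', 'fw-1']
```

The point of the sketch is that the chain, not the underlying network, defines the traffic path, which is why inserting a service at an arbitrary position forces the traffic engineering between hops to be recomputed.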
In looking at the problem, we're also looking at how we make it distributed. In some cases, where north-south traffic can arbitrarily become east-west traffic, we distribute the services accordingly. One of the areas where we are bringing cloud technologies into this is elastic scale-out: depending on the load, the CPU, or the resource usage of a particular service, how do you scale out? And when multiple services are inserted, the challenge is how you engineer the traffic and scale out at the same time. We are innovating quite a bit in that area while providing the whole overlay network. Our solution plugs straight into OpenStack. We depend entirely on the OVS vSwitches and the existing protocols; as I said before, we are taking a very standards-based approach. What I want to present here is actually two showcases. One is inserting multiple services into a network, where traffic will be chained from a router to a firewall. The other is the elastic scaling of these services. So let me switch to the demo. I've been having a lot of problems with the wireless here, so hopefully everything will work out. What I'm showing here is our dashboard. Right now it is integrated into Horizon as a tab, though it can also run as an independent UI. It gives you an overview of the network, and we can go to the tenants. What I'm going to show is the creation of this particular network. Here a firewall and a router are being inserted into a network, driven by a template. We follow the template formats supported by OpenStack; we use the AWS CloudFormation format.
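As a rough illustration of the template-driven workflow just described, here is a toy CloudFormation-style template built as a Python dict: one document describes the topology to create plus the services to insert into it. The resource names and `Example::*` types are hypothetical placeholders for illustration, not actual Heat or CloudFormation resource types.

```python
import json

# Hypothetical template: create a tenant network, then insert a router
# and chain a firewall behind it. Types are placeholders, not real ones.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Tenant network with a router and a firewall inserted",
    "Resources": {
        "TenantNet": {
            "Type": "Example::Network",
        },
        "EdgeRouter": {
            "Type": "Example::Router",
            "Properties": {"Network": {"Ref": "TenantNet"}},
        },
        "Firewall": {
            "Type": "Example::Firewall",
            # Chain the firewall behind the router in the service overlay.
            "Properties": {"InsertAfter": {"Ref": "EdgeRouter"}},
        },
    },
}

print(sorted(template["Resources"]))  # ['EdgeRouter', 'Firewall', 'TenantNet']
print(json.dumps(template["Resources"]["Firewall"], indent=2))
```

Launching such a template would build the whole topology in one shot, which matches the "start the template and the whole network gets created" flow shown in the demo.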
We create these templates so that you can launch one and the whole network gets built: the topology is created, and then the service is inserted behind the router. I'm hoping this will work given our network here. These are the various templates we support; we can extend them, and people can create their own custom templates. Here I'm launching the firewall service whose topology I just showed. Again, I'm hoping the network cooperates, because we are logging into our data center in California over VPN, which is where I've been having problems all day. OK, there you go: the firewall services are being created, and the create is in progress. While that runs, I'll show something else. The next part of the demo is the elastic nature of our load balancing: as a service gets inserted, we scale it out based on both resource usage and traffic. Here is a template we pre-created, and what I'm inserting is an Nginx-based web application firewall. Our technology is independent of which services get inserted; we take a type of service, insert it, chain it, and scale it out. Here I'm just taking a bare Nginx load balancer. It could be anything, and we're working with several partners so that existing appliances can be inserted into these networks; this is just one example. Over here, you can see the service is running, and this is the traffic pattern. We've been running a traffic pattern inside our data center, and as the traffic increases or changes pattern, instances get created, and they scale up and scale down with the traffic.
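The scale-out behavior just described can be sketched as a simple control rule: map the observed traffic to a desired instance count, clamped between a floor and a ceiling. The function name, per-instance capacity, and bounds below are assumptions for illustration, not the product's actual scaling policy.

```python
import math

def desired_instances(traffic_mbps, capacity_mbps_per_instance=100.0,
                      min_instances=1, max_instances=4):
    """Traffic-driven scale-out (illustrative policy): run just enough
    instances to carry the observed load, within [min, max] bounds."""
    needed = math.ceil(traffic_mbps / capacity_mbps_per_instance)
    return max(min_instances, min(max_instances, needed))

# As the traffic pattern varies over time, the instance count follows it.
for mbps in [40, 180, 320, 90]:
    print(mbps, "Mbps ->", desired_instances(mbps), "instance(s)")
```

In the demo the instance count goes 1, 2, 3, 4 as traffic rises and falls back down as it drops, which is exactly the behavior a clamped rule like this produces.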
And you can see the auto-scaling here in this traffic pattern: there are instances going up and down, with the count going 1, 2, 3, 4. The traffic on the port is shown here as well, and this is the traffic pattern, which varies over time. That variation is reflected in the instance creation. Similarly, the number of ports being allocated in the network follows the same pattern as the services. What we're doing is monitoring resource usage at the service level, using CPU and memory, and from that we calculate what needs to be done in terms of scaling and adding service instances. Overall, we are providing a technology where you can insert arbitrary services, chain them together, and scale them based on various parameters; today we look at network traffic, but that can be customized to something else later. All of this, of course, is available at our booth, where we can run through the whole demo, and you're welcome to come by. The other thing I'll mention is that right now we are a company in semi-stealth mode. Our product is not released yet. We are working with a few early customers to make it more robust and to validate a lot of what we are doing, and in a few months we will release the product. So that's the status of our company and our product, and that's all I had in terms of the demo today. You're welcome to visit our booth, where we can show you what we have. Thank you very much for coming by.