Hi folks, thanks for coming. I'm Roshan Gurupati from OneConvergence, and my talk is on policy-driven network service delivery. At OneConvergence we believe it is all about network services and all about policy: how do you insert and chain services, and how do you scale out and accelerate them? There are a lot of vendors out there with overlay-based network virtualization solutions, and we think network overlays are the right way to go, because they abstract the underlying physical network and provide a clean model that works on legacy networks as well as SDN-enabled physical fabrics. But network overlays are only one aspect of the problem that those vendors are solving; we believe a lot more is needed on top. So on top of network overlays, our solution introduces the concept of service overlays, for network services: VPN, firewall, load balancer, IPS, IDS, the whole network service layer. That means service orchestration and lifecycle management, insertion and chaining of any of these kinds of services, and elastically scaling a service out, for instance distributed load balancing with elastic scale-out. When we say auto-scaling or scale-out, we mean horizontal scaling in the cloud. Acceleration is the other aspect: how you scale each instance of the service up.
So auto-scale is horizontal scaling, while Accelerate is how you scale up using intelligent NICs like Cavium's LiquidIO: offloading the vSwitch, VXLAN encap/decap, and even some network service functions such as crypto, SSL, and DPI. This is where we as a company are adding a lot of value; this is the problem that needs to be solved. It is not just about network overlays. And then you need a policy to drive both the network overlays and the service overlays. We are one of the key contributors to Group-Based Policy (GBP), because we believe in higher-level abstractions that are application-deployment friendly. GBP is in OpenStack; we have put out a white paper, and it is an open-source effort with a number of companies contributing. We use that model to drive the network and service overlays. That is our solution. A little about the policy-driven approach: the user expresses intent with higher-level abstractions, without having to deal with low-level ports, networks, subnets, and so on. Things become very flexible, easy to define and easy to change, as I'll show in the demo. The intent can be defined in a policy template, or through the GUI. The GBP framework is in OpenStack, and we have our NVSD driver for group-based policy; the policy gets rendered through that driver to our network and service-overlay controllers, which then realize the networks and services, all without the user specifying the low-level details.
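As a rough sketch of what expressing that intent looks like, here is a minimal Heat fragment in the style of the OpenStack GBP Heat resources. The resource and property names follow the GBP Heat automation plugin as I recall it, but they vary by release, so treat everything below as illustrative rather than definitive:

```yaml
heat_template_version: 2013-05-23
description: >
  Sketch of GBP-style intent: a classifier, an allow action, a rule set,
  and two groups wired as provider and consumer. Names are illustrative.

resources:
  http_classifier:
    type: OS::GroupBasedPolicy::PolicyClassifier
    properties:
      name: http
      protocol: tcp
      port_range: 80
      direction: in

  allow_action:
    type: OS::GroupBasedPolicy::PolicyAction
    properties:
      name: allow
      action_type: allow

  http_rule:
    type: OS::GroupBasedPolicy::PolicyRule
    properties:
      name: allow-http
      policy_classifier_id: { get_resource: http_classifier }
      policy_actions: [{ get_resource: allow_action }]

  web_to_app:
    type: OS::GroupBasedPolicy::PolicyRuleSet
    properties:
      name: web-to-app
      policy_rules: [{ get_resource: http_rule }]

  # The app group provides the rule set; the web group consumes it.
  # Networks, subnets, and ports are derived by the driver, not declared here.
  app_group:
    type: OS::GroupBasedPolicy::PolicyTargetGroup
    properties:
      name: app-group
      provided_policy_rule_sets:
        - policy_rule_set_id: { get_resource: web_to_app }

  web_group:
    type: OS::GroupBasedPolicy::PolicyTargetGroup
    properties:
      name: web-group
      consumed_policy_rule_sets:
        - policy_rule_set_id: { get_resource: web_to_app }
```

Note that nothing in the template names a port, network, or subnet; those are rendered by the policy driver, which is the decoupling being described.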
We abstract services into three types based on insertion mode: TAP, L2, or L3. We provide APIs for insertion and chaining, usable in the declarative model defined by GBP: the GBP API maps to our NVSD policy API through our driver, so it fits into that framework. Heat can be used for orchestration and automation, and we support the elastic scale-out I mentioned. Another thing we are doing on our platform is integrating a number of open-source and leading OEM network services, so anyone deploying can mix and match open-source and leading vendor services. For instance, we have already integrated HAProxy, pfSense, Snort, and Suricata on the open-source side, as well as leading OEM services such as F5's ADC, and there are a number of others we will be announcing soon. That is very powerful: you can mix and match, or, as a cloud service provider, provision services based on what each customer subscribes to. I'll show you a demo of how we can realize a topology like this very simply, using the group-based-policy framework with our product as I described, without specifying a lot of the lower-level details. I mentioned three insertion modes: the IDS here is a TAP. This topology is a little different from the demo, but we are showing an L2 service insertion and an L3 one, where the L3 service is a web application firewall. Now let us go through the demo; I'm playing the video.
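To make the insertion-and-chaining idea concrete, here is a sketch in the same GBP Heat style of a redirect into a two-node service chain (an IPS followed by a WAF). The resource names follow the GBP service-chain resources, but the service-type strings and the node config files (`suricata_node.yaml`, `waf_node.yaml`) are assumptions for illustration:

```yaml
resources:
  ips_node:
    type: OS::GroupBasedPolicy::ServiceChainNode
    properties:
      name: suricata-ips
      service_type: FIREWALL                     # type strings vary by driver
      config: { get_file: suricata_node.yaml }   # assumed node template

  waf_node:
    type: OS::GroupBasedPolicy::ServiceChainNode
    properties:
      name: waf
      service_type: FIREWALL
      config: { get_file: waf_node.yaml }        # assumed node template

  # The spec orders the nodes; traffic traverses the IPS, then the WAF.
  ips_waf_chain:
    type: OS::GroupBasedPolicy::ServiceChainSpec
    properties:
      name: ips-waf-chain
      nodes:
        - { get_resource: ips_node }
        - { get_resource: waf_node }

  # A redirect action whose value is the chain spec; attached to a policy
  # rule, it steers matching traffic through the chain.
  redirect_to_chain:
    type: OS::GroupBasedPolicy::PolicyAction
    properties:
      name: redirect-to-chain
      action_type: redirect
      action_value: { get_resource: ips_waf_chain }
```

A TAP-mode service such as the Snort IDS would be inserted out of the traffic path (a copy), whereas the chain above sits inline, which is the L2/L3 distinction from the talk.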
Okay, in this particular demo we are showing Snort doing a copy to a TAP, i.e. acting as an IDS, and then a redirect action to a service chain consisting of Suricata, an IPS, and an L3 service, a WAF. This is the Heat template; it is pretty simple. You define a service chain node, and then you define these very high-level GBP constructs. We create the different groups, a web group and an app group, and a provider and a consumer: the consumer is the source, the provider is the destination. This is the GBP UI in OpenStack. You can see everything is clean now: none of these services exist yet, everything is empty. We are going to launch the Heat template, and you will see the resources from the template get created here. These are the service providers; they have not been created yet: the WAF, Snort, Suricata, as shown in the picture. Now we run it in Heat to create all those resources, and you can see the resource list that was created. Back in OpenStack you can see all the instances that are up now; earlier it was clean, none of this existed. You could also have created this from the UI or our CLI instead of Heat. Now we are in our UI, the NVSD UI, showing some of the same things: the services are running, and these are the connectivity groups, the web group and the app group. Now the UI is showing the service routing being set up.
Service routing is our terminology; in Heat it is more like a redirect. Similarly with connectivity groups: in GBP they are called policy target groups, PTGs. PTGs are our connectivity groups, and Cisco has endpoint groups; different names for the same idea. Now you can see that all these ports got created without us actually specifying any of them. This is the list of ports that were created; earlier it was all empty. From that very high-level definition, all of these things got created. These are the consoles of the various services that are running. We will generate some traffic and show it being redirected, copied to Snort, and passed through the IPS and the WAF; the various screens are showing all of that. So from scratch, with very high-level constructs, we created the topology I showed you, and now we are running traffic and you can see it being copied and the services working. These are the instance consoles for the various services. Next we will remove one of the endpoints from the connectivity group, and you will see that it stops copying traffic to Snort. So we are deleting that endpoint now. This shows how easy it is to add and delete, with everything decoupled from the physical topology and without providing all the details. Now that the endpoint is removed, we run traffic again, and then you can add it back in: you decide whether traffic is copied to Snort or not simply by adding and removing endpoints. Now we are adding an endpoint back.
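The endpoint add/remove shown in the video corresponds to creating or deleting a policy target in GBP terms. As a sketch under the same caveats (names illustrative; `web_group` is assumed to be an `OS::GroupBasedPolicy::PolicyTargetGroup` defined elsewhere in the stack), removing this one resource and re-running a stack update is what detaches the VM's port from the group and stops the copy to Snort:

```yaml
resources:
  web_pt_1:
    type: OS::GroupBasedPolicy::PolicyTarget
    properties:
      name: web-pt-1
      # 'web_group' is assumed to be a PolicyTargetGroup declared in the
      # same template; the target's port is created and wired by the driver.
      policy_target_group_id: { get_resource: web_group }
```

Adding the resource back re-attaches the endpoint, which is why the change is a small template edit rather than a reconfiguration of the physical topology.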
To recap: we started with a set of resources created through Heat, sent traffic, and then affected a change by deleting an endpoint and adding it back. We can do all of that very easily. Having this high-level policy model, abstracted and decoupled from the physical topology, makes changes very easy. Another very interesting capability is layered policies: the infrastructure team defines its policy, and the application deployment team layers its policies on top, within what the infrastructure policy allows. There is a lot more to the product, but what we are showing here is policy, based on the GBP framework, driving these network services. Another important thing I would like to point out is Accelerate. We are working with Cavium, using LiquidIO intelligent NICs, and we have a demo for that, though I'm not showing it here. It makes it very easy to redirect service chains to, for instance, a Snort instance running in a VM on a server equipped with one of these intelligent NICs, where the vSwitch is offloaded, the VXLAN encap/decap is offloaded, and potentially other functions like crypto and DPI could be offloaded as well. We can create service chains and direct traffic to those instances too. We are a Cavium partner, and we show those kinds of things as well: affecting these kinds of changes on the fly. That is what we are about. Any questions? Otherwise I'm pretty much done. You can stop by our booth and look at the demo, and we can offer free licenses for evaluation POCs.
If you're interested, contact us. We are also interested in establishing partnerships: we are enabling more and more network services from leading OEMs and working with system integrators, and we would be happy to discuss all of that. Thank you.