Good afternoon, everybody. My name is Hussein Kizal and I have Rameen Vishary with me. We're from Nuage Networks, and we'll share with you a little bit about Nuage, what we do, and how you can use Nuage for application deployment with service insertion in an OpenStack environment. Nuage Networks is based in Silicon Valley. We are a venture of Nokia focused on bringing the scale and performance that service providers get from proprietary hardware to a virtualized, software-defined networking solution in the data center. The value proposition is that we're an open, high-performance, scalable SDN solution that works with any workload, whether virtual, physical, or container-based, anywhere in one data center or across multiple data centers, over any physical infrastructure. You're not tied to one particular vendor. And we've been a member of the OpenStack community, with code in Icehouse, Juno, Kilo, Liberty, and of course Mitaka. So just to give you an overview of the solution: the solution comes in three virtual components. The topmost layer is the management layer, called the Virtualized Services Directory (VSD). That's where the policy management happens, where you do the service insertion, and where you create predefined networking templates that you can reuse for your applications. You can deploy an application X number of times using the same template, and then only have to go back and modify the template. That simplifies the security and network administrators' jobs by giving them control, while giving application developers the flexibility to work within a protected environment. It has a northbound API that supports OpenStack and all of the distros from Red Hat, Canonical, IBM, and Mirantis. We also support CloudStack as well as VMware vCenter. So we have plugins.
Some of our customers have their own custom-built code that sits on top and uses our APIs. The second layer is the control plane, which is based on our 7x50 platform, a high-end performance router whose control plane we virtualized. So you get BGP, all of the routing protocols, and you get the scale: as the number of servers increases, you keep adding controllers that are federated. You're not bound to one data center; you can span across data centers. You can peer with traditional routers, and that's how you grow your data center and workloads. The lowest layer is the VRS, an OVS-based virtual routing and switching instance that sits on the hypervisor. We support all of the hypervisors: Hyper-V, KVM, Xen, as well as ESXi. So you have one solution that covers all of your needs, whether you're using one hypervisor today and moving to another, or you have different groups with different needs. You have one solution that offers you anything on the hypervisor, can scale on the control plane, and provides you that programmability on the management layer. Now, networking is only one piece of it. Obviously, in the data center there are multiple solutions, whether it's security, load balancing, or IPAM. So we have a rich partner ecosystem of market leaders. On the cloud consumption layer, we've certified with all of the OpenStack distros, as well as security appliances and application delivery controllers. On the networking side we have Arista, Dell, HP, obviously. And recently we've added partners such as Radware, Check Point, Brocade, and Nokia VitalQIP, and we've certified with IBM Cloud Orchestrator. So what's important here is that we're focusing, obviously, on market leaders.
We want to make sure that when you select Nuage, we still offer you freedom of choice, whether it's a hardware appliance or a virtual appliance, so you'll be able to build best-of-breed solutions. And what makes it easy for us is that we expose the programmability of the network by open sourcing our VSPK, where partners can leverage that programmability to integrate with Nuage. You can build your own connectors to pull information, analytics, or stats from Nuage and push them to those devices. It's available on GitHub for people who want to take it for a spin. About six months ago, we started a certification process where we invite partners to certify their products and solutions on Nuage, so they can go to customers with the confidence to say that the integration has been certified by Nuage. At the last OpenStack Summit we had certified Palo Alto Networks, Fortinet, vArmour, Guardicore, and CounterTack, and we've recently added Check Point, Radware, Brocade, and Avi Networks. The certification process is self-initiated: we create an environment, we create a test plan, the partner self-certifies and submits the results, and we approve them and share that on our website. And now we're going to switch to a live demo, and Remy is going to take over. Hi, everyone. You don't need this one? Can you switch to the... No, the previous one, just one slide, and then... So what we want to show you in this demo is that you can quickly spin up an application on OpenStack using Nuage Networks, using Python scripts or Ansible playbooks. In this case, we'll deploy an application composed of three app servers and one database using Ansible, and we'll load balance everything using LBaaS v2. We have our own implementation of LBaaS v2, so we'll load balance all these web servers.
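The LBaaS v2 workflow the demo drives (one load balancer, one listener, one pool, one member per app server) can be sketched as the sequence of REST calls below. This is a minimal sketch, not the actual demo scripts: the endpoint paths follow the standard Neutron LBaaS v2 API, while the resource names and the `<...-id>` placeholders are illustrative, since real scripts would chain the IDs returned by earlier calls.

```python
def lbaas_calls(vip_subnet_id, member_ips, member_port=80):
    """Return the ordered (method, path, body) Neutron LBaaS v2 REST calls
    for one load balancer with an HTTP listener, a round-robin pool, and
    one member per back-end server IP."""
    calls = [
        ("POST", "/v2.0/lbaas/loadbalancers",
         {"loadbalancer": {"name": "app-lb",
                           "vip_subnet_id": vip_subnet_id}}),
        ("POST", "/v2.0/lbaas/listeners",
         {"listener": {"name": "app-listener", "protocol": "HTTP",
                       "protocol_port": 80,
                       # placeholder: use the ID returned by the first call
                       "loadbalancer_id": "<lb-id>"}}),
        ("POST", "/v2.0/lbaas/pools",
         {"pool": {"name": "app-pool", "protocol": "HTTP",
                   "lb_algorithm": "ROUND_ROBIN",
                   "listener_id": "<listener-id>"}}),
    ]
    # One member per app server, added to the pool created above.
    for ip in member_ips:
        calls.append(
            ("POST", "/v2.0/lbaas/pools/<pool-id>/members",
             {"member": {"address": ip,
                         "protocol_port": member_port,
                         "subnet_id": vip_subnet_id}}))
    return calls
```

Because members are plain pool entries keyed by IP and port, the same last step works later for containers: only the member creation calls are repeated with the container IPs.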
And the traffic that goes from the web servers to the DB will get redirected to a firewall. That type of architecture can be implemented with any partner certified with Nuage: you can take any security vendor, plug in their firewall, and do the exact same thing, steering the traffic to the firewall. It could be a virtual firewall or a physical one; it could be a physical load balancer, a virtual one, or even a containerized load balancer. So let's switch to the demo. Just to give you an overview, this is our OpenStack environment. We have already deployed the firewall, and the firewall connector will push policies and information to the firewall through our REST API. The firewall connector pulls information from our management platform and pushes objects like IP addresses, subnets, and zones to the firewall. So directly from the firewall, you'll be able to reference objects that you have, for example, in OpenStack. If you configure a security group in OpenStack, you'll get the same security group in your firewall, because it synchronizes everything. We've also already deployed the network topology, as if an OpenStack administrator had already given you a blueprint of what the application should be. So what we'll focus on is just deploying the application on top and configuring the load balancer to load balance the traffic. In the first part, we'll check that the blueprint is already configured, and then, on the OpenStack platform, we'll deploy a new Heat template. Everything that was already deployed was deployed using only Heat templates; we didn't use any manual commands. What we want to show is that using basic Heat templates, you can deploy your topology, steer the traffic to your firewall, and deploy your load balancer, so you can do everything using Heat templates and Nuage Networks. So now our demo application is currently deploying.
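A Heat template for one app server attached to the pre-built blueprint network could look like the fragment below. This is a sketch under assumptions, not the template from the demo: the parameter name `app_net` and the image/flavor values are placeholders, and the real templates would declare three app servers and the DB.

```yaml
heat_template_version: 2015-04-30
description: One app server attached to an existing blueprint network (sketch)

parameters:
  app_net:
    type: string
    description: ID or name of the network pre-created by the administrator

resources:
  app_port:
    type: OS::Neutron::Port
    properties:
      network: { get_param: app_net }
  app_server:
    type: OS::Nova::Server
    properties:
      name: app-server-1
      image: ubuntu         # placeholder image
      flavor: m1.small      # placeholder flavor
      networks:
        - port: { get_resource: app_port }
```

Because the server attaches to a network the administrator already created, the template stays within the blueprint: the application developer deploys workloads, while the topology and security policy stay under the administrator's control.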
And if we go to Instances, we'll see that the DB server is getting deployed, and we're starting to boot some application servers. So our application is currently deploying. And if we go back there, what we'll do is create a new load balancer using the LBaaS v2 API. We wrote our own Ansible modules to talk to the LBaaS v2 API, and we'll open source those modules, because they only use the LBaaS v2 API. Then we'll create new members based on the Heat template we just deployed, so we'll add the application servers to the load balancer. So if we switch to the application, which is not really fancy, but the Mongo server is still booting, that's why. To explain why it's a little slow to deploy: we are using OpenStack plus Nuage to deploy OpenStack-plus-Nuage labs on top, so we have multiple nested environments for partners to certify on. All our partners get the same thing: we give them a full OpenStack-plus-Nuage environment so they can do whatever they want in the environment to get certified. That's why we built a platform on top of OpenStack to deploy labs for our partners. So our application is deployed. And normally, if we refresh, we'll see the server change every time we hit refresh. It depends, because there is a lot of JavaScript too, but it's getting load balanced. So far we haven't done anything really fancy; you could do this using plain Neutron. The second part of the demo is to show you that you can also have container workloads, not only VMs; I think most of you, maybe, are planning to have containers in the data center. What we want to show you is that without touching anything, neither the topology nor the policy, you'll be able to plug containers directly into the same network you defined in OpenStack and load balance them using the same load balancer. So we'll go through the same steps.
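The container step the demo describes, retrieving each container's IP and adding it to the existing pool, can be sketched as a small helper that turns `docker inspect`-style records into LBaaS v2 member payloads. This is an illustrative sketch: the `NetworkSettings.Networks` layout matches the Docker Engine inspect output, while the subnet ID and port values are placeholders.

```python
def container_member_payloads(inspect_results, subnet_id, app_port=80):
    """Given `docker inspect`-style dicts, return one LBaaS v2 member
    payload per container, using the IP the container received on the
    Nuage-managed network (the same subnet the VMs are on)."""
    payloads = []
    for info in inspect_results:
        # Each container is attached to a single network in this demo,
        # so take the first (only) network entry.
        networks = info["NetworkSettings"]["Networks"]
        ip = next(iter(networks.values()))["IPAddress"]
        payloads.append({"member": {"address": ip,
                                    "protocol_port": app_port,
                                    "subnet_id": subnet_id}})
    return payloads
```

POSTing each payload to the existing pool's `/members` endpoint is all that is needed: the load balancer, listener, and pool created for the VMs are reused unchanged.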
And I will just go to our VSD UI. We see that we have our app network, and we have some VMs, which are the app servers, the load balancer, as well as the firewall. And if we go to policies, we'll see a bunch of policies that are mapped to security groups in OpenStack. As soon as we plug a VM into a security group in OpenStack, we'll see the VM appear directly in our policy group. And those policy groups will be synchronized to any type of firewall; basically, all our partners leverage our API to retrieve this information. So if you take, for example, a Palo Alto Networks firewall, you'll have all these objects in your firewall. If you take a Fortinet firewall, same thing, or Check Point, exact same thing. Just to show you quickly how we can see the rules: in this case, we have developed a small firewall based on iptables and driven by a REST API, and we are pushing all the information from our VSD, which is already taken from OpenStack, to the firewall. You can see that you have all the zones, subnets, and policy groups, and you'll be able to create, for example, a new rule based on a specific group. For example, I come here and select that I want to allow traffic from the App security group to the DB security group, and, for example, only port 27017, because it's a MongoDB server. And we have this specific policy group; if we click on the small info icon, we'll see that we currently have three app servers, which are the three servers we deployed the first time. On the other side, we are currently deploying the containers application. So all the containers are deployed; we just deployed ten containers of the same application. And we are retrieving the IP addresses and reconfiguring the load balancer using those IP addresses. So we'll create a new member, like we did for the VMs, for every container that was spawned on a Docker host.
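The iptables-based demo firewall expands a policy-group rule (allow App group to DB group on one port) into concrete per-IP rules, since iptables itself has no notion of policy groups. A minimal sketch of that expansion, assuming the connector has already pulled each group's member IPs from the VSD; the field names and rule layout are illustrative, not the demo connector's actual schema.

```python
def iptables_rules(policy, groups):
    """Expand one policy-group rule into per-IP iptables commands.

    `policy` is e.g. {"from": "app", "to": "db", "port": 27017};
    `groups` maps each policy-group name to its current member IPs
    (as synchronized from the VSD / OpenStack security groups).
    """
    rules = []
    for src in groups[policy["from"]]:
        for dst in groups[policy["to"]]:
            rules.append(
                "iptables -A FORWARD -s {}/32 -d {}/32 -p tcp "
                "--dport {} -j ACCEPT".format(src, dst, policy["port"]))
    return rules
```

The key point is that the rules are regenerated whenever group membership changes: when OpenStack (or a Docker host) adds a workload to a security group, the connector re-pulls the group and re-expands the same high-level policy, so the firewall follows the cloud without manual edits.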
We are not only supporting Docker hosts; we can support Mesosphere, Kubernetes, anything that can run Docker containers. We don't have any really tight integration: the only thing we have to install is our VRS on each hypervisor or Docker host, and we'll be able to connect your container or your VM to the same network. So if we come back to the VSD, we see that we have a lot more vPorts in this policy group, which are both the VMs and the containers. And if we go to the design view, we'll see that in the app subnet we have both containers and VMs. So we have used the same subnet defined in OpenStack to deploy your containers as well. If we go there and refresh, we'll see that now it's not AppServer or something; it's the ID of the container that was spawned. So if we refresh, we'll see that we're load balanced across all the Docker containers we just deployed. We didn't change anything in OpenStack; the only thing we did was deploy Docker containers on a separate Docker host and connect them to the same network. So what does this mean? I'll give you a use case. In the partner program, as you see, the solution is flexible: multiple CMSs, multiple hypervisors, multiple partners, and there are only three of us on the team. So we actually use that platform, Nuage on OpenStack, using Heat templates, to spin up these environments where partners can self-certify. The use case is very simple. We have different stakeholders, engineers, sales, marketing, who all want to use this interface, and not everybody can install OpenStack, obviously, or configure it. So this takes away a lot of the legwork, which otherwise leaves it uncertain at the end whether you're successful or not. It saves us a lot of time and, obviously, a lot of money. And what's good about it is that if you have shadow IT, with people running to AWS, environments like this help bring them back.
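Listing VMs and containers side by side in the same policy group, as shown in the VSD view, comes down to grouping vPorts by workload type. A small illustrative sketch: the `name`/`type` fields and the `"VM"`/`"CONTAINER"` values are modeled on the vPort records a VSPK query would return, but are assumptions here, not the exact schema.

```python
def split_vports(vports):
    """Group vPort records from one policy group by workload type, so
    VMs and containers attached to the same subnet can be listed
    separately (field names are illustrative)."""
    vms = [p["name"] for p in vports if p["type"] == "VM"]
    containers = [p["name"] for p in vports if p["type"] == "CONTAINER"]
    return vms, containers
```

Because both workload types surface as vPorts on the same subnet, every downstream consumer (policy groups, the firewall connector, the load balancer) treats them uniformly, which is why the demo needed no OpenStack-side changes to add containers.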
And this is the next slide. We track usage, and obviously we did an ROI analysis: we got a return on investment within three months. And we have a lot of users using this, because it's so easy and so trivial to use. We are at the Nokia booth; if you want to come and see the demo and interact with it, we're more than happy to show you. We have sessions on Wednesday, and you can come to the booth for the full schedule. And we have giveaways right next to that screen: umbrellas, in case it's going to get wet outside, and t-shirts. Thank you very much. Thank you.