Good afternoon. Welcome to Vancouver. My name is Martin Horville and this is Jonas Vermeulen. I'm the Principal Solutions Architect for Australia and New Zealand, and my colleague Jonas is Product Line Manager for Nuage, covering Europe, Middle East and Africa. We're presenting today on Nuage Networks enterprise-grade networking and OpenStack: how IT needs to deliver networking with high availability, scalability and interoperability across multi-site environments, seamlessly with existing heterogeneous infrastructure and vendors, and how to interconnect private clouds with external private clouds too. That's really what we're talking about today. As an introduction, I'd like to first set the parameters around what we believe the enterprise wants; we've gathered this from a lot of our customers. Essentially, what drives the business is faster time to market, lower cost and higher quality, reduced operational expenditure, and ubiquitous technology that is easy to manage, maintain and consume. The technology trend driving this business change throughout the market is IT as a service: self-service from a catalogue, on demand, an operational-expenditure model for chargeback, and pools of resources that can easily be adjusted. We see that with platform-as-a-service environments, with the availability of integrated applications in shared environments (even application platform as a service), and with short-cycle provisioning, moving from longer implementation cycles to much shorter ones. The enterprise is complex. There's existing hardware, hypervisors, platforms, apps that can't be virtualized, platforms such as mid-range systems, multiple data centers, remote branches, remote workers; the list goes on. It's a very complex environment.
Add to that pressure from the business to perform; hidden IT, Amazon workloads for example, pushed out by people in marketing or other areas of the business without central IT even knowing what's going on; reporting and compliance; and, in many cases, a limited set of highly skilled staff to implement all this. What we see enabling all of this is essentially a simplification of that complexity. Scalability, abstraction, flexibility and extensibility are, we think, the key approaches to delivering successfully to enterprises with new technology like OpenStack private clouds and hybrid cloud. This enables consumption by the enterprise, and consumption is what they're seeking, as I mentioned: IT as a service. This is the complexity we see in data centers, and we need to eliminate it to enable consumption as a service. What does OpenStack deliver to the enterprise? Faster turn-up for the business, and efficiency. It minimizes costs; we see that with examples like PayPal. DevOps: a single set of APIs enabling short cycles across the business, not just within IT, with DevOps applications being implemented in other divisions as well. An open ecosystem of vendors, freedom of choice, and much stronger enterprise support from vendors; we see this with some of the distributions from Red Hat, Canonical, et cetera. Networking environments are highly complex. How do we manage them going forward, with this new way of doing business and this new way of doing technology? We believe a policy-based approach to networking is the key. It starts with policy templates on the left, which we can define once and then use many times across the organization, like copy and paste in Word. With some of the work we've been doing with other vendors on Group-Based Policy, which is an open-source project and part of OpenStack as well, we see a set of commonalities that we can define with that abstraction.
Users, application types and business rules. This allows flexibility with simplicity within a complex environment. We can then apply those policies out to the environment, as I said, many times, in the same way you would copy and paste. We can also capture the complexities of those environments within the abstraction: three-tier applications with external web servers, middleware and databases, and all the rule sets, load-balancing and firewall requirements, all set as a template. As I said, design once, use many times. Across the data center, enterprises can then deploy these services in multiple data center environments, across the WAN, and in different silos within the organization, whether they're using the Xen, KVM or ESXi hypervisor, and in fact whether they're using OpenStack, CloudStack or vSphere. Within that complex environment we have things like IP address management, DHCP, DNS, load balancing, firewalls, new east-west traffic flows, and an edge that needs to be secured. How do we handle all that? Through this policy templating engine: we can define all of those capabilities and components in the template once, and then apply it many times. The enterprise can then leverage that ubiquitous policy engine across all of its locations globally, in multiple data centers and, as I mentioned, with different cloud platforms. We gave a presentation and demo yesterday, and we have another one this afternoon, around this. This is one of our reference architectures that we actually have running, using different cloud platforms; this one is within OpenStack, using different distributions of OpenStack, completely federated across the environment. So the themes we're going to address, and that Jonas will continue with, are abstraction, scalability, flexibility and extensibility. I'll hand over to Jonas. Thank you, Martin.
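The define-once, use-many-times idea above can be sketched in a few lines of code. This is purely illustrative (the class and field names are ours, not Nuage's or Group-Based Policy's API): a template captures the application-specific structure and rules, and each environment gets its own instantiation of the same design.

```python
from dataclasses import dataclass

@dataclass
class PolicyTemplate:
    """A network policy defined once and stamped out many times (hypothetical model)."""
    name: str
    tiers: list   # e.g. ["web", "middleware", "db"]
    rules: list   # (src_tier, dst_tier, port) tuples allowed to communicate

    def apply(self, environment):
        """Instantiate the template in a target environment (dev, test, prod, a remote site)."""
        return {
            "environment": environment,
            "template": self.name,
            # environment-specific names; the rules themselves never change
            "subnets": {t: f"{environment}-{self.name}-{t}" for t in self.tiers},
            "rules": list(self.rules),
        }

# A three-tier application: external web servers, middleware, database.
three_tier = PolicyTemplate(
    name="b2c-site",
    tiers=["web", "middleware", "db"],
    rules=[("external", "web", 443), ("web", "middleware", 8080), ("middleware", "db", 5432)],
)

# Design once, use many times: the same template applied to several environments.
dev = three_tier.apply("dev")
prod = three_tier.apply("prod")
```

The point of the sketch is that only the environment label and the derived subnet names differ between instantiations; the rule set is identical everywhere it is applied.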
So, in the remainder of the session, we want to go into more detail on each of those bigger themes and give you a few examples of what enterprises need, or what technical challenges they have, and how we can address some of that with OpenStack, or with the combination of OpenStack and Nuage. For abstraction, that means: how do you model your networks, and how do you reuse a policy as you transition your application through its lifecycle, from development, where you do things in isolation, through test and QA to production? We're going to see how you can realize the design-once, reuse-many-times philosophy. Obviously, that also brings us to the aspect of scalability. When you design a cloud, you're going to see a lot of endpoints that have to be managed, that have to be governed under that policy, so scalability is very important in the context of networking. For flexibility, the example we want to develop is how you connect up "X as a service". In an enterprise environment you see a mixture of legacy firewalls and load balancers that are non-virtualized; you see some innovation coming up where they are virtualized, and sometimes also fully distributed. How do you connect all of that up, and what are the models we see prevalent in an enterprise context? Finally, we want to talk a little about extensibility. How do you stretch your cloud from one site to multiple sites, and to public sites as well? It builds on the same philosophy as Keystone federation, where you have one database, one source of identity. We're going to look at how you can do that from a networking point of view. Can you stretch your networks? Can you maintain one source of policy that you can apply across multiple cloud deployments?
Let's start with how we enable abstraction and service velocity across these different environments. I want to take a step back and explain a little about what we see development environments looking like. Typically, projects are done in a very isolated way: small project teams of developers, developing their application, and they shouldn't harm anyone. They may even want to run multiple versions of it. They create their little bubbles so they can develop in isolation, without any communication with anything else. What that effectively means on the networking side is that once I develop my application, I want to extract my configuration, my network policy, and use it later on; otherwise I have to redo it again later. I will also get a lot of distributed routers for this. They don't have to be distributed, but typically, to make effective use of your infrastructure and to avoid having to configure your hardware routers, you want them distributed, fully in software, so you no longer have a dependency on a centralized node or a hardware router doing the routing for those developer instances. You also typically see an overlap of IP space between them, because you're running different versions of the software, and as a developer you don't want to care about which particular IP space you're working in; you work in isolation anyway. Now, once you move on to a test environment, you move away from that isolation. You're effectively working in an environment where all applications are bundled up in one bigger bubble and can all communicate with each other. So you still want to make use of the policy you defined before, and reuse it in your test environment. From a routing perspective, it also means I'm moving away from a lot of small routers.
It becomes one big, very large distributed routing instance. I also move into an environment with a unique IP space, because here I actually want to test my communication with monitoring tools, maybe with a centralized DNS or IPAM, and I want authentication running. There is a lot of shared infrastructure that I want to validate in this test environment. Production is essentially just a copy of the previous stage, so again it's going to be very important to reapply that same policy; the only real difference from test is that the number of instances could be different. When we then thought about how you can reuse policy between those different environments, it's actually not so trivial anymore, because your network infrastructure and your subnet schemes are going to be quite different between the two; your IP addressing will be different. So you have a choice. Either you modify your cookbooks, or your deployment scripts, so that they know about the new IP addressing scheme. Or, the alternative we see, you use an external system to define your topology and enforce policies. Effectively, you can use Group-Based Policy, or another policy system, applied to the environments or to the routers defined in OpenStack. This is an approach we have developed for a number of our customers, and what you see here is an example for a test environment. On the left side you see how you model a full test environment from a networking perspective. It is this one big distributed router, one logical router, that connects up all the different projects. Those could be things like a B2C site, a consumer-analysis project, a stock application, and each of those projects has its own policy around it.
The important thing is that all these projects can communicate with each other, yet each can still have its own policies, for instance security groups to limit its east-west traffic flows. Tenants see only their own subnets when you work like that; tenants in OpenStack typically see their own routers and their own subnets. If I use a system like this, the only thing I need to do is map the subnets defined in a particular project into the tenant context of OpenStack, and as such a particular tenant could see its developer subnets, its test subnets and its production subnets, yet the routing and policies are segregated and defined independently of each other. What I can then effectively do with the policy I've developed and fine-tuned in my developer instances is pull it out: I make an export of it, per application; I can store it somewhere centralized; I can share it with others on GitHub; I can export it to another site. Then, in the next phase of my application's development, I reapply it in my test environment, I merge it, I fine-tune it, and I get approval, because that's still important in an enterprise context, before it actually moves to production. Essentially, I get a definition of a policy, of my network structure, of all the security rules, that is very specific to an application and that I take from development through test to production. There is no variation between those three environments, and there is always a system that checks and enforces the policy itself. In this example, you could also mimic those environments in other locations as well, other data centers, et cetera? Absolutely, absolutely. That's the nice thing about having a centralized catalog of your policy definitions.
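The export-and-reapply workflow just described can be sketched as a simple round-trip. This is our illustration, not an actual Nuage or OpenStack export format: the application-specific rules travel unchanged, and only the environment-specific pieces (here, the subnet CIDRs) are supplied at reapplication time.

```python
import json

# Hypothetical application policy, as fine-tuned in a developer instance.
policy = {
    "application": "b2c-site",
    "security_rules": [
        {"src": "external", "dst": "web", "port": 443, "action": "allow"},
        {"src": "web", "dst": "middleware", "port": 8080, "action": "allow"},
    ],
}

# Export: store it centrally, share it on GitHub, ship it to another site.
exported = json.dumps(policy)

def reapply(exported_policy, environment, subnet_map):
    """Re-instantiate an exported policy in a new environment.

    The security rules are reused verbatim; only the environment name and
    the subnets (drawn from that site's unique IP space) are new.
    """
    p = json.loads(exported_policy)
    p["environment"] = environment
    p["subnets"] = subnet_map
    return p

# Reapply the dev policy in the test environment with test's own addressing.
test_env = reapply(exported, "test", {"web": "10.1.0.0/24", "middleware": "10.1.1.0/24"})
```

The same call with a different environment name and subnet map would produce the production instantiation, which is the "no variation between the three environments" property described above.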
They're managed by configuration-management tools, and what we see is that you typically have a centralized lab environment where your R&D sits for software development, but the actual deployment of the application can happen in different places and different data centers, so they can take this one validated design and reapply it in multiple locations. In the meantime you might wonder what such a policy actually looks like, for instance from a security perspective. This could be an example: I have two applications, my B2C site and my stock application, which each have their own policy around which ports need to be opened and which ports are allowed, and in the environment that combines all these applications together, I still want a single list covering all of them. What we see is that network administrators want the capability to define a top policy list that holds all my infrastructure policies. These infrastructure policies would, for instance, allow the network or operations people to always access the VMs themselves: they always have to manage them via SSH, and they want to make sure it's secure there, so they drop Telnet sessions. These rules always have to be applied, regardless of what the application looks like inside. Now, as I add applications, every application wants its own separate definition of the policy that describes it, so this is an example for the B2C site, which could have different ports that it opens up or drops, and I have a different definition for the stock application. Finally, there is also a bottom list that says anything not explicitly allowed is, for instance, dropped. Now, this whole list has to be compiled and applied out on every hypervisor that hosts a VM running in this test environment.
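The layered list just described can be sketched as follows. This is our interpretation of the compilation step, not Nuage's actual engine: infrastructure rules sit on top, each application contributes its own block in the middle, and a default drop closes the list; the flattened result is what gets pushed to every hypervisor hosting a VM in the environment.

```python
# Top list: infrastructure rules that apply regardless of the application.
infrastructure = [
    {"port": 22, "action": "allow"},   # operators always reach VMs via SSH
    {"port": 23, "action": "drop"},    # Telnet sessions are always dropped
]

# Middle: one separately defined block per application.
applications = {
    "b2c-site":  [{"port": 443,  "action": "allow"}],
    "stock-app": [{"port": 8443, "action": "allow"}],
}

# Bottom list: anything not explicitly allowed is dropped.
default_drop = [{"port": "any", "action": "drop"}]

def compile_ruleset(infra, apps, tail):
    """Flatten the three layers into the single ordered list applied on each hypervisor."""
    compiled = list(infra)
    for app, rules in apps.items():
        compiled += [dict(rule, application=app) for rule in rules]
    return compiled + list(tail)

ruleset = compile_ruleset(infrastructure, applications, default_drop)
```

Because the ordering is fixed (infrastructure first, applications next, drop last), adding or removing an application only touches its own block; the infrastructure and default rules never need to be restated per application.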
So it's a process that runs in the background, but essentially the whole definition of this policy can happen up front: you can save it, you can manage it in your configuration-management tools, and you can take it from one location to another. You can also use it, for instance, to do backouts or rollbacks. Suppose you have an issue in your production environment: you can take the policy that is valid at that moment, take it out, and reapply it in your test environment. You then have a direct replica of your current network topology and policy in place in your test infrastructure. Make sure you understand what the behavior should be, fine-tune it again, and then reapply it. (There are some hidden slides here.) So that was a bit about abstraction and velocity. The next thing we want to address is the kind of flexibility we see people wanting when they deploy firewalls, deploy load balancers, or set up VPN as a service. Obviously, the default option when you're using OpenStack is to use what comes out of the box: Firewall as a Service, Load Balancer as a Service, VPN as a Service. The way that is deployed, effectively, is that you have a network node, and in there certain namespaces where your HAProxy is running, or your north-south firewall, or your VPN service. Now, this is a software implementation, and it's automated, but in an enterprise environment they may want a different model: they may want to use an existing firewall or an existing load balancer, and what we effectively enable with Nuage Networks is to also connect up those centralized appliances through a gateway.
It can still be a software gateway, or it could be a hardware gateway, but effectively you want to ensure that the virtual firewalls, or the virtual contexts you're defining in your load balancers, are tied into the networking configuration of your tenant. Essentially, in this example, if you have a tenant network, the green one, you want to make sure it links into the virtual context of the physical firewall or load balancer. We did some of that integration with F5, and with Palo Alto on the firewall side, and what we're using on the OpenStack side to model this connectivity between the tenant network and the VLANs on the physical load balancer or firewall is provider networks. There are also projects under Layer 2 Gateway which effectively make this a bit easier, so we can make a direct mapping; that's something we see coming up in Kilo or Liberty. So that's the centralized, non-virtualized case. Now, as innovation kicks in, we obviously want to distribute our network functions. Distributing network functions means they're no longer centralized in a network node: I can start distributing, for instance, my load-balancer function. In this example I no longer need a network node, because the function is being distributed. I'm still using the Load Balancer as a Service module to deploy a new load balancer for a particular tenant, to configure the pools and to configure the VIP, but the instantiation of that load-balancer function happens in a very distributed fashion. The example here we developed together with Avi Networks; it's something you can also see at our booth later on, or I think at their booth this afternoon at quarter past four, where you can see the system working end to end.
The last option is to look at the firewall side of things, where we're seeing more of a push to use a distributed agent framework that is multi-tenant. Essentially, what we then allow is for a network agent to run alongside our OVS, our VRS, inside every compute host. It can inspect the traffic, and in case something special has to happen with that traffic, we can drop it, redirect it, or at least do something special with it. These agents can take the form of a VM, or just a process, or a container such as Docker. Among the functions we see sitting in that type of agent, we've implemented a few at Nuage ourselves: the proxy ARP and DHCP function, where you effectively capture DHCP and a local agent answers it, rather than it being centralized; the same with the metadata agent, the metadata service; and we're also doing a storage proxy for Swift, so if you want to access storage, you effectively capture that request and, instead of going over an overlay network to reach your storage, you break it out and access it directly from the hypervisor, via the agent. Or you can apply more security-related functions, Layer 5 to Layer 7 work. So overall, what you see is that you have a number of options to connect up access to services; whenever you deploy your OpenStack environment, have a look at them and consider what works best in your environment.
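The dispatch decision made by such a per-hypervisor agent can be sketched very simply. The names here are ours, not Nuage's API: well-known request types are answered or broken out locally by the agent, and everything else stays on the normal overlay path.

```python
# Request types the local agent handles itself, as described above.
LOCAL_SERVICES = {
    "dhcp": "answered by the local proxy DHCP agent",
    "metadata": "served by the local metadata agent",
    "swift": "broken out to storage directly from the hypervisor",
}

def dispatch(request_type):
    """Decide whether a request is handled by the local agent or forwarded.

    Illustrative only: a real agent would classify actual packets, not strings.
    """
    if request_type in LOCAL_SERVICES:
        return ("local", LOCAL_SERVICES[request_type])
    return ("overlay", "forwarded on the tenant overlay network")
```

The benefit is the one named in the talk: a DHCP or metadata request never has to traverse the overlay to a centralized node, because the agent sitting next to the vSwitch answers it on the spot.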
Now, lastly, we also want to look at how we can connect clouds to other sites. Where we come from is that larger financials, in fact a lot of enterprises in any industry, don't have just one data center; they have multiple. The question is what you do with applications that sit across those data centers. Do you replicate all your infrastructure across the two and manage them very separately, or do you try to have at least some kind of synchronization or replication between them, so that some of the burden is taken away and things happen behind the scenes to make sure that users and networking get synchronized and accessible from other sites? This is exactly the use case we're working on from a networking perspective. On the user side of things, that's something we get with Kilo, and a bit earlier already, with identity federation. On the networking side, you have to think: how can I federate my network? Is there a possibility to access my resources sitting in another site? Can a VM in site one ping a VM in site two, or in a public cloud, without having to go over a breakout gateway or a particular VPN service? Would I be able to define a service that stretches across those networks? Can I define a subnet in my other site that still uses the same routing instance as my primary site? Can I even define, for example, a security group or a policy that takes in resources from site one and site three and treats them as one, because they are all, for instance, database servers, with one policy that says nobody can access the database servers except this particular firewall? This is the problem space we wanted to work in and that we are solving with Nuage, and we
have actually enabled that since our first release two years ago, by having a centralized policy engine to deploy the policy and control the networking in every site with our Nuage plugin. What that means is that I can indeed have a network that stretches across multiple OpenStack instances. I can have a Layer 2 subnet that stretches across them; I can have a single routing instance that routes between site one, site two and a public cloud; and I can really go into my Nuage engine, see all my VMs and my network topology, and map them into every OpenStack environment. Now, we also saw that while this topology is great, people sometimes want to make sure there is no shared infrastructure between those sites. They really want to make sure that, from top to bottom, from their provisioning stack down to the data path, everything gets replicated, so that they have absolute independence between site one and site two. What we therefore allow is something we call federation of policy. In this particular deployment, we effectively deploy the Nuage VSD, the directory, in every site. The local directory, the local policy engine, is then responsible for resolving your local network requests, but in case you have a stretched network, in case you want to access a resource from a remote site, then at the time your VM starts up it contacts the home VSD, the main VSD, to ask: what is my policy now, what are the networks I need to talk to, what are the security policies I need to comply with? This is what we call federated policy. In both models, this one and that one, I'm now able to stretch my subnets; I can have subnets created in a public environment that attach to a private instance; I can have VMs communicating between them; and I can have my security policies applied across sites. So overall, I think, it's thumbs up:
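The federated-policy lookup just described can be sketched as follows. This is a hypothetical model of the behavior, not Nuage's implementation (the class name `Vsd` is just illustrative): a local policy engine resolves networks it owns itself, and defers to the home site's engine only for stretched networks.

```python
class Vsd:
    """Illustrative policy directory: one per site, optionally pointing at a home site."""

    def __init__(self, site, home=None):
        self.site = site
        self.home = home          # the main VSD holding the master policy, if any
        self.local_networks = {}

    def define_network(self, name, policy):
        self.local_networks[name] = policy

    def resolve(self, network):
        """Called at VM start-up to fetch the policy for its network."""
        if network in self.local_networks:
            return self.site, self.local_networks[network]
        if self.home is not None:          # stretched network: ask the home VSD
            return self.home.resolve(network)
        raise KeyError(f"unknown network: {network}")

# Site 1 is the home site and owns the stretched database network's policy.
home = Vsd("site-1")
home.define_network("stretched-db-net", {"allow": ["firewall-1"]})

# Site 2 runs its own VSD for local requests, federated back to site 1.
remote = Vsd("site-2", home=home)
remote.define_network("local-net", {"allow": ["any"]})

# A VM booting in site 2 on the stretched network receives site 1's policy.
site, policy = remote.resolve("stretched-db-net")
```

Local requests never leave the site, which gives the top-to-bottom independence mentioned above, while stretched networks still see one consistent source of policy.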
we can actually establish distributed multi-clouds that can all talk to each other using one network policy. That's effectively how we realized the picture you saw at the beginning, a stretched cloud ranging across the Americas, Europe and Asia-Pacific. So let's go to the conclusions. We identified a number of needs at the very beginning. We have abstraction: we need to make networks consumable, and what you saw here is that we can solve some of that by being able to define network policies on an as-needed basis. We have scalability. We run a lot of tests, and there are some videos we can share with you, but we run a lot of our scalability tests in Amazon, with containers, because with containers in VMs we can start them up very easily, and we run those scalability tests ranging from 20k containers up to 100k containers now. We see that ramping them all up takes us on the order of minutes, and the reason we can do this so fast is that we have a distributed control plane that calculates how all these containers have to talk to each other and what network is needed to tie them all together. That is basically why we believe we have a very scalable solution. I also showed you some of the flexibility we enable: as you move your cloud forward and start thinking about services like VPN, load balancers and firewalls, you may have different desires than others, and you need that flexibility to connect them all up. And lastly, I showed you how we can federate the network across multiple sites. With that, we want to thank you for your time, and we'd like to open up for questions. So the demo that Jonas was talking about, which actually shows the reference architecture that
we had up on the screen earlier, is being shown at the Avi Networks booth, T9, at 4:15 this afternoon. So if you want to have a look at that in operation, and it's actually implemented in customer data centers across different locations, you can see it there. But if you have any questions, we'd be more than happy to answer them. Yes, we absolutely will. We enable any workload in any public cloud environment via a software-based virtual routing and switching component that sits in a VM. Therefore, unlike many other vendors, you don't need access to the underlying hypervisor in that scenario, and you don't need any other hardware or software in the back end of that cloud provider. That VRS-G module, the virtual routing and switching gateway module, can then bring in all the information from, say, the core data center in your private cloud; or in fact, as we have done with SoftLayer, you can run the entire solution in SoftLayer, as it's all software-based. It actually gives us some other benefits as well, because typically in a public cloud there are quite some restrictions on the networking you can build up there: it's very difficult to assign multiple IPs, to have a lot of vNICs, or even to run some Layer 2 multicast; it's typically blocked. So by deploying our virtual switch in a VM, with, for instance, containerized workloads, we overcome some of those limitations you see in public clouds, and that's been one of the key arguments why people look to deploy the Nuage vSwitch in that public cloud environment. The other advantage is that you're assimilating your APIs: as a developer you can write to a single set of APIs in any environment, whether it's your private cloud, your existing data center or a public cloud. Obviously, our plug-in for OpenStack further enhances that, so we see a like-for-like between the two APIs there.
Any other questions? That means I'll be able to do the equivalent of having one security policy across, say, several Amazon VPCs or OpenStack projects? That's definitely a use case you can implement with that. I can say that I want all of them to always share and inherit a few policies? Absolutely. The only requirement is that you're using the Nuage plug-in for each of those OpenStack cloud environments. Our OpenStack plug-in interacts with the default OpenStack security groups, so when you first configure it, you can decide whether you want OpenStack to be the master, if you will, or Nuage to be the master, and then it flows through depending on how you set it up. To be honest, there are a number of advanced network topologies that you're just not capable of modeling in OpenStack at the minute. For those more advanced topologies, you'd make your network topology in Nuage: you make your subnets, you say these are the things that need to be stretched, this is the one policy I want applied everywhere, and then you map your network topologies, your subnets, into the OpenStack instances you want to bind them to. That's how it's used in most of our customer cases, by the way. Any other question? Okay, thank you for your time. Thank you very much.