Hello everyone, welcome to this session. Please feel free to have a seat. This session will be about Neutron networking in private clouds and, more specifically, about what we hear from talking to customers in Europe: what are the typical requirements you see in larger enterprises that want to stand up OpenStack, and what do they expect from networking, what type of features do they expect there? Now, in essence it's not going to be too different from public cloud. There are definitely a lot of similarities in what enterprises want to do with their private clouds. Business departments want that agility, they want APIs, and they want isolation so that they don't do something wrong to another application, or interfere with a neighboring department or another division. And from an owner perspective, if you look at the ones who are going to run such a cloud in a bigger enterprise, they too are very much incentivized to keep costs low. So they're looking at lowering the cost of servers, they're looking at the Open Compute Project, they're looking at optimizing hypervisors and at a very high level of automation. All these things are really very much the same in enterprises: they want to adopt the same model that has been so popular in the big clouds, like Amazon's and Google's, and they're now trying to mimic that to run their own internal private clouds. You might ask: if they all like that public cloud so much, why go private? And it's no surprise, it's all about information protection, control, and the compliance requirements they have to live with. Now, this is all good at a high level. What we're really interested in is what that boils down to from a networking perspective.
What enterprises are actually telling us is that one of the key differences is the type of scale they expect in a VPC, or as part of a tenant. One of the major differences is that applications don't run purely in isolation; they actually communicate a lot with each other. They don't have just one single interface to the internet; they communicate with a lot of neighboring applications. What that means is that the number of subnets and the number of VMs within one tenant environment is going to be a lot higher. That can have an impact on the number of subnets you have to support and on the way you do your security groups. The other thing we're hearing, obviously driven by how you sell the proposition of a private cloud internally, is how you can support the many legacy applications that are out there. Because if you're not catering for legacy applications, the real business units are going to have very little incentive to actually move to this new type of private cloud. At the same time, there are existing operational processes and existing environments. What I want to focus on is what you typically have to integrate with; one example I'm going to elaborate on is how you integrate with IP address management solutions. And finally, I also want to address a little bit how you can still respect existing organizational rules, the rules of existing departments. Because essentially, when standing up a new private cloud, we all want to do the new thing, but you have an existing corporate structure and it's not so easy to actually change that.
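To make that scale point a bit more concrete, here is a tiny back-of-the-envelope sketch (my own illustration, not a figure from the talk) of why expressing reachability as pairwise security-group rules gets expensive as the number of subnets in one tenant grows:

```python
# Back-of-the-envelope sketch: if every one of N subnets in a tenant must be
# allowed to talk to every other subnet, and reachability is expressed as
# pairwise allow rules, the rule count grows quadratically with N.
def pairwise_rules(num_subnets: int) -> int:
    """Ordered subnet pairs that need an allow rule in a full mesh."""
    return num_subnets * (num_subnets - 1)

for n in (10, 100, 1000):
    print(f"{n} subnets -> {pairwise_rules(n)} rules")
# 10 subnets -> 90 rules, 1000 subnets -> 999000 rules
```

This is why a default-open routing domain, rather than an explicit rule per subnet pair, becomes attractive at the scale discussed here.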
So instead of fully changing that, instead of fully adopting a new culture, some enterprises actually want to see how they can map the current organization onto the new way of working, and I'm going to give a few examples of what we are doing there with Nuage Networks. That should all give you a bit of a feeling. To give you a tagline: I have a feeling we're not in Kansas anymore. We all know where we are, but now we're going to leave the public cloud behind and see how things run in real life. So let's start with tenant scale. What I've drawn up here is basically what you see in a public cloud. If you as an enterprise go to Amazon, the first thing you pretty much do is stand up your VPC; you then interface to the Internet through a certain gateway or route, and you basically have a fully isolated networking environment. This is pretty much how all VPCs run, one next to another: you have your own VLANs, your own addressing space, and if you have a public VM you can have a floating IP to reach the outside world. And this is really the same model that OpenStack has adopted. There too we have tenants and external networks; a VM that needs external connectivity gets a floating IP, and that floating IP is routed on the network node to go out to the Internet, right? Now, as I said earlier, this might not be the desired way of working in your enterprise. The desired way of working may actually be that there are lots of different little bubbles, lots of different applications that each interact with each other, and as a whole the only separation that is really necessary is between lifecycle phases, or between business domains, right?
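As a toy illustration of the floating-IP model just described: conceptually it is a 1:1 translation between a public address and a VM's fixed address, applied at the network node. The addresses below are made up, and real deployments implement this with NAT rules on the network node, not application code:

```python
# Toy model of floating-IP NAT as performed on an OpenStack network node:
# a 1:1 mapping between a public (floating) address and a VM's fixed address.
# Addresses here are documentation ranges, purely illustrative.
floating_ip_map = {
    "203.0.113.10": "10.0.1.5",   # a web VM in the tenant network
    "203.0.113.11": "10.0.1.6",
}

def dnat(public_ip: str) -> str:
    """Inbound: rewrite the floating IP to the VM's fixed IP."""
    return floating_ip_map[public_ip]

def snat(fixed_ip: str) -> str:
    """Outbound: rewrite the fixed IP back to its floating IP."""
    reverse = {v: k for k, v in floating_ip_map.items()}
    return reverse[fixed_ip]

print(dnat("203.0.113.10"))  # -> 10.0.1.5
print(snat("10.0.1.5"))      # -> 203.0.113.10
```

The point the talk makes next is that this mapping lives on one network node, which is exactly what turns it into a bottleneck and into an awkward fit for tenant-to-tenant traffic.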
So you might have dev, QA, and production, or you might have business units 1, 2, and 3, but inside, everything could still be open. Inside, you actually want to make use of a kind of distributed routing process; you want to make communication as smooth and as automated as possible, but there is no real necessity to give a different VPC, or a different router so to speak, to every one of those applications. The other thing that is quite different: it's not like you typically have overlapping IP space in enterprises. You could, but usually you have an IP scheme where you really want to know that you can address this host with that particular IP address and that particular host name. So whether a VM is deployed in dev, QA, or production, you always want a single pointer to reach it. And because of those bigger bubbles I mentioned, you immediately end up at a much higher scale. Numbers we have seen here mean having to support tens of thousands of VMs and over a thousand subnets; it's not a ten-server setup. Obviously it's phased: if you really want to make the step to private cloud, you typically start off with about 50 to 100 hypervisors and grow from there, but from a networking point of view, those are the numbers we're seeing it has to scale to. The other thing you see here is that every bubble has two external connections, and I'm going to expand a little on that: how do you go to the internet from each of those bigger applications, these bigger lifecycles, and how do you go to a management or shared network?

So when we started off with that and went into discussions with some enterprises, the first thing was to investigate whether floating IPs could be used or reused in some shape or form. Now, the thing is, if you have your little application modeled as a tenant, as a project, and you deploy routers in there and want to go out, that's where you would use your floating IP. But if you have lots and lots of those applications, you may wonder: what is then the sense of a floating IP, what is its real meaning? It would be something artificial you bring into the discussion to make it work, and it doesn't feel right if you have to introduce a new IP, or new routing, just to communicate between tenants. The other thing is that with floating IPs, from an optimization perspective, you're effectively always going through the bottleneck of the network node. You could think, okay, that's definitely the case with Icehouse, and you might think that's resolved in general, but what we have to realize is that if you grow your data center to a bigger scale, you typically don't have layer 2 fabrics, and you can't really break out at every hypervisor with a specific floating IP. So essentially you would still have to go to that one network node where you can expose your public IP, which is in a different range than your server or hypervisor address. And lastly, there are also some limitations around multiple floating IP subnets: as you grow bigger, you want to use more and more floating IP subnets, and that's actually not so trivial to set up either.

So we looked further, and we thought about using shared networks. Shared networks are on the other side of things: you can attach VMs from every tenant to a shared network, that's the whole idea, and then they can communicate with each other. But what you then run into is, first, how to do the routing between different shared networks; you kind of have to deploy a separate router in another tenant, one that isn't visible to the tenant itself, because you only want the tenants to hang their VMs onto that subnet. The biggest problem, though, is that shared networks are visible to each and every tenant. There is no way to map a shared network to only specific tenants; they're effectively exposed to everyone. And then, how do you even scale up to that high number of VMs if all the VMs need to talk to each other?

So the way we at Nuage address this type of use case in the end is to have a routable domain, a router per lifecycle, and within such a routing context, by default, all VMs and all subnets can talk to each other. What you can then do is create a hierarchy in this routing context, per tenant and per project, and map into each project the subnets you want to expose there. So in the example here, the router responsible for routing between all developer instances has a number of subnets, and you can map those subnets to a specific tenant. The idea is that the developers, the members of that tenant, only see those subnets and can only deploy their VMs in those subnets, and the same applies to other tenants. If some of these instances want to talk to each other, they can; they just go via the router instance, and we always implement that routing instance in a distributed way. Essentially you get an overlay tunnel, a VXLAN tunnel, that goes from the hypervisor hosting the VM of tenant Y to the VM of tenant X. This distributed routing also solves another thing: the way you connect into the management networks, the shared networks, and the public internet is by peering, by advertising the routes, or by static routes, whatever you prefer in your environment, to say: if you want to reach the subnets of this developer instance, you first go to the cloud's distributed routing, and it will sort out how to go further up to the tenants, or the other way around: every subnet that is created for a particular tenant gets automatically advertised to the other side.

Now, sometimes people still want certain isolation between tenants, and that brings me to security groups. If you do want to implement policies that say which projects or tenants can talk to each other and which cannot, or which hosts within a subnet cannot talk to another host, you typically go to security groups. That's what is possible today with OpenStack, and it's really a vport-oriented way; it's very much what you do with Amazon and with OpenStack. You can define that those machines, those vports, are all Windows machines and can talk to the Active Directory servers, or that those web machines can talk to the application machines. That's very useful if you want to set security rules within your tenant environment, as a user yourself. But there is also another set of requirements from a more global perspective: if you want to say that tenants cannot talk to each other, that projects cannot communicate with each other, or you want to do it on a subnet-to-subnet basis, that's another matter, and that's where we also see a lot of requirements from enterprises. Security groups within a tenant are very easy in OpenStack; this global model is something they would also like to see. With that, this is more or less what I wanted to say about scale: instead of having many small applications that each work independently, in enterprises we see a lot of need for bigger domains, bigger environments where VMs and applications can talk to each other, and your distributed routing, your network topology, should be able to cater for that much bigger scope, that much bigger range of VMs and subnets. The second case
was all about supporting legacy applications, and I have three examples. It's all to do with convincing the business units to make use of this new private cloud. How do you do that? Do you tell them they have to start from scratch and redevelop all their applications, start with Docker and all of that, or are you going to give them a hand? So what I'm going to list here are three things where we think some easy help on the network side can convince business units to actually adopt an OpenStack cloud paradigm. The first one: you are only validated or only supported, for instance, on VMware. I'm not saying it's only VMware, there are others, but it's typical that certain applications are only validated on a particular hypervisor. What you then get is a heterogeneous environment where you basically want networking that can span multiple server groups, each with its own underlying hypervisor. The second would be how to support multicast applications. Multicast is probably something from the past, something you don't see so much anymore in new types of applications where everything is nicely unicast, but the reality is that quite a few application front ends, or applications in general, still use multicast for internal synchronization, or you have video broadcasts that have to be distributed. For those you actually need a solution that can send and receive layer 3 multicast and make as good a use as possible of your fabric and your overlay structure. The last one is around voice and storage applications. Voice applications typically want to be first-class citizens; they want to be prioritized a little more. So for voice applications you typically see requirements around making sure quality of service is set correctly, either trusting whatever the application has set or applying an override, and making sure the other guys in the network behave well, so they end up with rate limits. And lastly, for storage applications: suppose you have a newer application that uses object storage. Instead of trying to access Swift through a network node, instead of really going through a gateway to reach that common Swift backend, they are looking at options for doing a local breakout from the hypervisor, getting the object from the backend, and passing it on to the VM. With all those things, you see there are legacy applications, and there is a set of network goodies that come out of that to support them.

Case 3 is about how to integrate into existing environments. There are a lot of different systems to integrate with, it could be OSS, but here the focus is on IP address management. An IPAM system typically consists of DHCP and DNS, with a lot of nice analytics tools on top as well. Essentially, in the current environment, all the servers send out their DHCP requests, DHCP gets relayed through the fabric to the server, the DHCP server does the IP address allocation and passes some special options back through the fabric, and there is also the DNS registration, so that later on that host can always be identified, mapping IP to hostname. This is what enterprises are used to; they have all the operational tools around lease management and so on to operate that very well, and they are quite hooked on it. So let's have a look at what we can do from an OpenStack perspective. With OpenStack today, if you want to start up a VM, your DHCP agent is going to hand out the IP address, but the address allocation itself is actually not done by the running dnsmasq process; it's done in the OpenStack framework. Essentially, every time you start up a VM, during the scheduling process your IP address is allocated and then provisioned into dnsmasq, and
later on, when that VM boots up, it goes to dnsmasq to retrieve it. There is at this moment no core project that really handles DNS registration; there is an incubation project, Designate, which would provide a huge step forward, but at the moment it is where it is. So how can we make the two things work together? This is my normal infrastructure with DHCP and DNS, and it could actually be as simple as deploying a DHCP relay. DHCP relaying is already available in dnsmasq; it's just a matter of configuring it correctly. What is then needed is effectively a synchronization between your IPAM system and Neutron about which subnets are to be made available in OpenStack, or which subnets are being used from IPAM, so a synchronization on the subnet pools. Then there is a synchronization on the IP address allocation: typically one side would still select the IP address but wouldn't hand it back to Neutron directly; at that moment the DHCP server knows very well which IP address will be used for the VM, and when that VM finally starts up, the DHCP relay agent can retrieve it from the DHCP server. So it's just a few simple tweaks, but it could make a huge difference in making this more acceptable in an enterprise environment.
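The division of labor described above can be sketched as a small simulation. The class names and flow here are my own simplification, not Neutron or Nuage code: the external IPAM system owns the subnet pool, the allocation, and the lease table (where DNS registration would also hook in), and a relay answers the VM's DHCP discover from the IPAM rather than from a local dnsmasq:

```python
# Sketch, with made-up classes: the enterprise IPAM/DHCP/DNS system owns IP
# allocation; the DHCP relay on the OpenStack side only forwards discovers
# to it and hands the resulting lease back to the booting VM.
import ipaddress

class ExternalIpam:
    """Stand-in for an enterprise IPAM system (DHCP + DNS + lease tools)."""
    def __init__(self, cidr: str):
        self.hosts = iter(ipaddress.ip_network(cidr).hosts())
        self.leases = {}  # MAC -> IP, as the DHCP server's lease table

    def allocate(self, mac: str) -> str:
        ip = str(next(self.hosts))
        self.leases[mac] = ip  # DNS registration would also happen here
        return ip

class DhcpRelay:
    """Relays a VM's DHCP discover to the IPAM instead of a local dnsmasq."""
    def __init__(self, ipam: ExternalIpam):
        self.ipam = ipam

    def handle_discover(self, mac: str) -> str:
        # Existing lease wins; otherwise the IPAM picks the address.
        return self.ipam.leases.get(mac) or self.ipam.allocate(mac)

ipam = ExternalIpam("192.0.2.0/24")
relay = DhcpRelay(ipam)
print(relay.handle_discover("fa:16:3e:00:00:01"))  # first host: 192.0.2.1
```

The key property, as in the talk: the address of record lives in the enterprise IPAM and its operational tooling, while OpenStack only needs to agree on which subnets are in play.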
There is also an optimization to that, and especially with Juno I think it's going to get even better: instead of having dnsmasq and doing the DHCP relay here, you might as well put the relay there, and at that moment you have a fully distributed DHCP relay agent that takes any DHCP discover from here and sends it out over unicast to the DHCP server, so that's another piece you have distributed. The last case is around the roles of existing departments, and there is no rocket science about the correct answer here. The reality is that from a top-level point of view, people want to deploy their applications faster and faster, sometimes no matter the cost. It's like what the guy from BMW said yesterday during his keynote: it's all about doing things fast, fast, fast. So that's the top-level directive, but then the question becomes: how do we fit it in with existing change processes, with existing groups and responsibilities where each one has its role to play? You have the apps people, the network people, the security people. What we saw in one particular case, for instance: the network team were the only ones handing out subnet ranges, security were the ones administering users and permissions and setting the ACLs, and compute were the ones controlling which VM goes where. Now, when going with an OpenStack cloud, you can actually do away with all of that and go much faster; the question is whether you want to go immediately for this revolution, or whether there is a more evolutionary path. When looking at OpenStack, what you can do today is create nice member roles in Keystone: you can define multiple roles, and you then have to manipulate some of the policy files to restrict what every role can do. It actually works quite nicely, because you can work at a very granular level, almost down to the API level, so to speak: create routers
and so on. You can say only this role, only this person, can do this, and then you just assign your members to the role. So we can actually mimic that whole organization; we can map it into OpenStack if we want to. The problem when we do that is that we're effectively still maintaining the same chain of activities: you still have a security guy, you still have a network guy who has to say yes, I'm alright with that, and only then can the person deploying the VM get into action. So that's what we see as the status quo. What we have done in Nuage is to try a kind of cookie-cutter approach, where you can do templated designs. With a template you can capture one project, or one whole lifecycle application; you put it all in a template, and depending on the environment you sometimes include subnet definitions as part of that or exclude them, include ACLs or exclude them, it depends a bit on how the organization works internally. But effectively the outcome is that you get a template that has been approved by security, that the networking guys are happy with, and that is good to go for the application guys to start their application deployment. It doesn't matter how small or how big their deployment is, or whether they want to add more of their own subnets; once you have defined that cookie, they can color it in themselves and make their own nice application out of it.
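The Keystone-role idea above can be sketched with a toy rule evaluator. This mimics the shape of an entry in a Neutron policy file, but it is not the real oslo.policy engine, and the role names are examples of how an enterprise might map its departments:

```python
# Toy policy evaluator: each API action is gated by a rule naming the role
# allowed to call it, similar in spirit to OpenStack policy.json entries.
# Role names here are illustrative (a network team, a security team, members).
policy = {
    "create_router": "role:network_admin",
    "create_security_group_rule": "role:security_admin",
    "create_port": "role:member",
}

def enforce(action: str, user_roles: set) -> bool:
    """Return True if one of the user's roles satisfies the action's rule."""
    rule = policy.get(action, "")
    if not rule.startswith("role:"):
        return False  # unknown action or unsupported rule form: deny
    return rule[len("role:"):] in user_roles

print(enforce("create_router", {"network_admin"}))  # True
print(enforce("create_router", {"member"}))         # False
```

This is the "status quo" mapping the talk describes: granular, but it preserves the same approval chain, which is what the template approach then tries to collapse into a one-time sign-off.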
With that, I want to thank you for your time. What I've tried to bring here is a number of small items that we think could be nice additions to the way OpenStack works, and it's also a kind of common call to action: let's actually work together on getting some of those things into OpenStack so we can bring adoption in private environments up to an even faster and better level. Thank you for your time. Do you guys have any questions? Yes, this one: you want to see how the coloring in works, how it all fits together? So in your template you can choose how much network specificity to put in, but you start at the router level. You can have a routable domain with certain security zones already defined, so you would say: in this router I have a DMZ zone, a web zone, a DB zone in the back. You can then attach security rules to those zones, so you can say the web zone can talk to the application zones over these types of ports, and these are the quality-of-service parameters to apply to whatever subnets are attached. This is what you could give as a package, or maybe call it an instance; this is what you instantiate and give to the application owner. He can then add his own subnets, so he can do a little bit of his own network management if he wants to, or you can do it for him as an administrator beforehand; it depends a bit. But essentially, as I said, everything in that template, the permissions, the ACLs, and by the way also the permissions for who can deploy in which zone, are all part of it. So once that instance is handed over, it doesn't mean everyone can just deploy as before in that instance; they're actually subject to the rules and the permissions they have been given. If you want to really see it, it is a
policy model, yes, absolutely. There are different words for it; we started off with templates, and then all of a sudden, at the beginning of the year, policy became the nice word, so policy-based networking, absolutely. If you want to have a look, feel free to come to the booth and see it for yourself. Any other questions? Thanks, all.