OK, welcome everybody. My name is Hugo Trippaers from Schuberg Philis, an outsourcing company in the Netherlands, and I'm also one of the project management committee members of the Apache CloudStack project. I got involved with CloudStack a couple of years ago, about two years ago, when we were building our own cloud solution and we had some problems with our networking. And we came across this company called Nicira, which was able to solve our networking problems. We'll go into detail a bit more later on. There were a lot of networking problems, but there was no real cooperation between, at that time, Citrix, who was doing a lot of work on the CloudStack distribution, and the Nicira people. So in short, we needed somebody to develop the integration. My colleagues who were building the cloud found me willing to do it. So I spent a lot of time doing integration work with CloudStack on SDN networking, particularly on the Nicira side. But I've ever since been involved with a lot of the SDN stuff inside CloudStack. So I'm mainly a user of the SDN stuff. I'm not really into the whole technical details, though by now I am familiar with some of it. This is more a talk from a user perspective about how CloudStack interacts with software-defined networking. So I will be telling a little bit about networking in general in CloudStack, a little bit about how I, as a user, see software-defined networking, and then a little bit about the details: which providers are there in CloudStack, how are they used from a networking perspective, and what actually is the user perspective on software-defined networking. So let me take five minutes to discuss CloudStack networking. Are the people here already familiar with CloudStack? Yeah? Sort of, sort of. OK. The quick rundown is that CloudStack can basically function in two different ways when you purely look at it from the networking perspective. There's basic networking, where isolation is done using security groups.
This model is the typical model of the Amazon-style clouds. Everything is in one big broadcast domain, and you secure your machines using security groups. Very easy, open, transparent, although there are actually some hacks involved in making it really secure. You have to filter out requests, et cetera, et cetera. But it's a flat networking area, most often directly connected to the internet. CloudStack also offers a different model, which is advanced networking. And advanced networking is where you really get your own network. So you get your own private broadcast domain, you get your own IP space, and you're routed to the internet through a central router. It can be the virtual router that's provided with CloudStack, or it can be any of the devices that we currently support, like the NetScalers, et cetera. In the beginning, SDN in CloudStack was really designed to solve a problem in the advanced networking. So we didn't focus on the basic networking. We really only focused on the advanced networking bit. And we will get into detail about that later. And ever since, it really took off. We started with just a Nicira integration, and by now we have about eight different implementations of software-defined networking ready to go in CloudStack, in various stages of supporting all the advanced features. So what's the problem? This is the problem. We have this very nice cloud orchestration system. And we're using CloudStack here in the talk as an example, but it basically goes for any type of cloud orchestration management. It can do a lot. It can talk to your hypervisors. It can automatically provision stuff. It can do a lot of different things. But there's also one problem. There's this guy. This guy is sitting there in the data center. And he has this nice laptop with a USB-to-serial converter. And he's plugged it into some network device. Or if he's lucky, he actually has a telnet session into the network device, or an SSH session. And he's creating VLANs.
Well, I don't know about you, but I find creating VLANs and configuring ports to be in a particular VLAN a very boring job. So we need to think of something new. And this is where network virtualization comes into play. Actually, being able to control your network from a software-controlled layer seems like a lot better idea. And in fact, if you expand on that idea a bit, you actually get to a point where you can not only control the network, you can actually create virtual devices. And that's what network virtualization is about. It's about creating virtual networks that, from the outside, look like any regular network you would just have created using your own VLANs or whatever. But actually, they're just virtual constructs. You're basically decoupling the entire logic of the higher-level functions, like the broadcast domain, from the actual important parts of how traffic goes from A to B. So, same question: who's managing the network? Looks a lot better, doesn't it? One of the questions I get asked very often is: what do we do with the network administrators? Is SDN obsoleting the need for network administrators? No way. We're using SDN a lot, and we haven't fired a single network operator yet. The trick here is that we think it's important that people focus on the thing they do best. I mean, it's the same with Linux. Do the Linux kernel developers really want to make sure that the ls command works to absolute perfection, or do they want to work on the really interesting core stuff? They want to work on the really interesting core stuff. Same goes for network administrators. Do they want to configure ports the entire day? Do they want to keep on configuring VLANs on a system? No, they don't. What do they want? They want to focus on the stuff that really makes their life interesting: how to fully optimize that fiber connection over 200 kilometers, how to get the last bit of forwarding power, or forwarding speed, out of the new switch they've got.
Those are the interesting problems. And it's my personal opinion that with software-defined networking and network virtualization, we can move a lot of the non-interesting stuff, like configuring ports, to automation tools, like cloud management systems, and leave the network administrators to do the really interesting work. So this is sort of what it means, defining a common lingo: decouple the control plane, which decides what data is going where, from the data plane, which handles how the data actually gets there. And that's, again, the point about what you want your network administrators to do. It makes network management easier, because you can focus on different levels. No longer does the network administrator need to have a complete overview from all the way down at the ports all the way up to what the application is doing with the network. You can easily segregate and say: OK, I'm going to leave this bit to the automated tooling, and this bit is what I'm going to focus on. And it provides an API. And this is where we as system administrators are happy: anything with an API, we can do stuff with. I usually do stuff with it that nobody ever intended we could do with it. And that's where it starts to get really interesting. So, software-defined networking. A few years ago, when I started with it, when I got in touch with the people from Nicira, we were talking about network virtualization as a way of virtualizing a broadcast domain. And it was really simple. I could create a logical switch. And on a logical switch, I could create a logical port. Done deal. I got my switch. And it's virtual. Nice. Nowadays, stuff is changing. We've got APIs. We've got software-defined stuff that does our networking for us. So what's stopping us from creating distributed firewalls?
The firewall is then no longer a single point in your network where all the traffic flows through, which in most typical networks is a huge bottleneck for the actual speed you can get from the network, but really a virtualized device that does the firewalling in a distributed way. Wire speed? 10 gigs, 100 gigs, I don't care. Load balancer, same thing. Do we need a static device somewhere in the network, again a bottleneck in that network? No. We can have it distributed. So this is what a lot of the software-defined networking vendors and the people working in that area are currently working on: more advanced services. I think VMware coined the term software-defined data center a while back. I mean, if we have an API and if we can control it from software, there's no limit to the amount of stuff we can do. Now, there's a nice track about OpenDaylight here at the conference, so you should all look into it if you're interested in this kind of thing. So, on to CloudStack. There were a couple of people here who already use CloudStack. Maybe they can point out to me where software-defined networking is integrated in this beast. It's a trick question. It's not visible. The thing we wanted to do with software-defined networking in CloudStack is that we wanted to hide it from the user. I mean, the whole point is that you shouldn't need to know about software-defined networking to be able to use it. We built it into the core of CloudStack, really into the core networking functionality. And to a user, it's not interesting that software-defined networking is being used. You shouldn't even have to worry about it. I mean, no user ever worries about VLANs, except having to tell their network administrator to go and create one. So there should be no special setup, no special sauce, no magic to perform to actually use software-defined networking.
So in CloudStack, we really made an effort to hide it as much as possible and make it a generic thing that everybody would be able to use without having to think about it too much. So when did it all start? Actually, before I joined the project. Pre-ACS, for people who don't know, is a term we use in CloudStack to indicate the time before the software was donated to the Apache Foundation. At the moment, CloudStack is a top-level Apache project. Before that, it passed through a lot of different companies, from VMOps to cloud.com, and was then bought by Citrix. And actually, we had some kind of software-defined networking in there already. I mean, we were using Open vSwitch already, and there was the option of using GRE tunnels on XenServer. So you could create your own traffic flows, and the software would hide it. It was implemented in a way that really made it work, but nothing really along the ideas and current concepts we now have of software-defined networking. This is actually changing; there are a lot of people working on this. But this was really the first thing we had in CloudStack, and it used the GRE tunnels from Open vSwitch to create the isolation. And the nice part was that it was entirely controlled by CloudStack. You did not have to do anything as an administrator to use it. So it was really easy. It didn't need any kind of external components. Any XenServer would do. There's some trouble with GRE, though. Most notably, the fact that most network cards aren't able to do a lot of offloading for GRE. So in modern networks, 10 gigs and up, it's really difficult to get really good performance with the GRE tunnels. And if you create a full mesh of tunnels, it doesn't scale. But currently, there have been some changes. One of the students working on Apache CloudStack for Google Summer of Code has made a lot of improvements. He ported the feature to KVM. He made some improvements to the tunnel generation logic. So we're actually in far better shape.
And the next thing that's up on the list is making sure that we integrate with OpenDaylight. Basically, we want to have an open source, free-to-use type of SDN in CloudStack so everybody can use it without having to resort to the current commercial vendors. Not that they're bad. That's nice. Where did my screen go? There you go. Scared of commercial stuff, I think. So this is Nicira NVP. When we released version 4.0, which was shortly after I got involved with the project, we were able to release the first version of the Nicira plug-in. At that point, the Nicira plug-in wasn't able to do much more than just create logical networks and logical switches. Looks really simple, but it's actually quite an involved process if you look at what Open vSwitch is doing. They push new flows. There's a central controller in the network with an API. You talk to the controller, and the controller distributes the flows to the entire network. It distributes them to the hypervisors and to some other devices in the network, like the service nodes, which you need for Nicira. The real upside for us, why we really went for Nicira, next to the API and the ease of integrating it with CloudStack in the beginning, was also that they used the STT protocol. Basically, a real ugly hack on TCP. I mean, we all know that TCP is very fast because it's offloaded by a lot of the network interface cards at the moment. STT is just stateless TCP. Basically, you stop doing the whole SYN/ACK stuff. You stop doing the acknowledgments. You stop doing a lot of stuff. You just send packets, and they happen to have a TCP header. The big upside is that it's offloaded by the network card. The big downside is, if you try to run this through a firewall, see what it does with your connection table. It will kill it. And the other downside, if you call it a downside, is that you can't use the run-of-the-mill Open vSwitch.
Because it needs to support the STT protocol, you actually need a custom version of Open vSwitch. Lucky for us, most of the people working on Open vSwitch are, or used to be, Nicira employees. So there was a lot of good integration. It didn't really deviate from the standard Open vSwitch. It was just the same old Open vSwitch we all know and love with some pieces added onto it. So for us, it was a good choice at the time. And actually, it still is. Then things got out of hand, and we started looking at what we could do next. So the thing we did is we looked at Nicira, and we said: OK, guys, what are your next innovations? And they said: well, we're going to focus on Layer 3. Layer 3 means for us firewall, NAT, and the VPC. For people not familiar with it, the CloudStack VPC is a construct in CloudStack, the virtual private cloud. It's sort of like building your own multi-tier infrastructure within the cloud, completely separate, with its own policy engines, et cetera. So: multiple networks tiered together with policies between them. And this allowed us to do more interesting stuff. Suddenly, we were not only able to create networks, where we still had to rely on the CloudStack virtual router to do all the routing and all the interesting NAT stuff for us, but we could create advanced networks completely based on software-defined networking. So we had the isolation, which we already had in 4.0. And we suddenly were able to add NAT capabilities, like static NAT, port forwarding, source NAT. We were able to add some basic firewalling support. I'm calling it basic because at that time, it was stateless firewalling. Not really useful, but good enough to have some kind of port isolation, especially since we, as a company, run an internal cloud. We're not selling cloud services. We're using it for internal IaaS services. So that was good enough.
And we had support for the VPCs, which is something we use a lot to separate applications from each other. And lo and behold, in 4.1 we found BigSwitch. BigSwitch was already there, was already looking at CloudStack. But because I had integrated the NVP plugin, there was suddenly a platform, a way of doing it, that allowed other vendors to jump on the bandwagon and get their stuff ported to CloudStack as well. So we had the second plugin, which is the BigSwitch plugin. So, skipping past the Nicira Layer 3 support: BigSwitch. They are using the same setup as the Nicira plugin in CloudStack. And actually, yeah, it is quite similar. There are a lot of similar things. They have some different ideas about how stuff should work, and I didn't investigate in depth, but I really liked the solution. And what I really like about them is that they really focus on keeping everything open, or at least using open source stuff. They're a very open company. I mean, Nicira has some closed-source bits, and I believe BigSwitch is mostly open source. Not free to use, but at least open. And they work actively with a lot of the open standards. So there's a lot of interaction between them and other people in the community. And the basic support was, again, isolated networks, and they had some extra features, like being able to use the DHCP, et cetera. Then we hit our next milestone release, and more people jumped on the bandwagon. We got Midokura with MidoNet. They are a Japan-based company also doing network virtualization, and they're really focusing on cloud-style deployments. So they had a lot of built-in features for cloud-style deployments. And we got Stratosphere SSP. But Midokura's MidoNet really has a nice setup that includes everything, with Layer 3 support, static NAT, routing, et cetera. It's basically the whole advanced networking concept of CloudStack in a virtualized form.
The downside was a bit that it only worked with the KVM hypervisor, and that it is a complete solution. So you will have to make certain choices. With Nicira, we can say: OK, I just want to use the network isolation, and I leave the DHCP and the DNS to the virtual router. In this case, it's a packaged solution. You either use it with all their features or not. So there wasn't much ability to mix and match different services. That was a bit of a downside, but other than that, it's a really nice thing. And especially if you're using KVM, it's a really nice solution, and people seem to be very happy with the bandwidth and the capacity that it can handle. So there's a lot of good stuff out there about the Midokura MidoNet solution. The next one: guys, I really tried to get some information on this one. It just didn't work. I mean, if anybody can translate that for me, I'd be really happy to know. But my Japanese isn't really good. They seem to have a nice solution. I've been looking through the code, and it provides all the functions that you would expect from a software-defined networking provider that provides isolation solutions. But other than that, even with Google Translate, I didn't really get that much information from their website. The feedback I got from a few people in Japan is that everybody's really happy with them, that they're still actively supporting this plugin, and that people are using it. They have a lot of customers in Japan, I believe. So it's a good thing, and I'm really happy that they put support into CloudStack. Other than that, I really can't tell you too much about this one. And here we are at the current state of technology. We've added VXLAN. Actually, it was quite a surprise for me as well. We never heard anybody about VXLAN, or there were only a couple of questions about when we were going to support VXLAN.
And then suddenly somebody showed up at the previous CloudStack Collaboration Conference and said: OK, I've got a patch. Here it is. If we integrate this, we have VXLAN support. So that was a really nice surprise. Suddenly we are able to say: hey, we have support for VXLAN. I haven't been looking into it in that much depth, but it sounds very promising. It solves one of the real problems, the limited number of available VLANs. So you can create a lot more networks. And one of the really fun things about VXLAN: it's a proposed standard, and people are jumping on the bandwagon. So there are going to be network cards supporting the VXLAN standard, meaning that we can have faster forwarding and better support for it. And yeah, well, open standards are good. I mean, open standards mean interaction. And especially in the cloud era, interaction with everybody is key. So more open standards, yay. Again, it's pretty new. It's still in master. It's expected to be released with the 4.3 release. So yeah, the future is looking good. There are not that many software-defined networking providers out there yet, but more are coming every day. OpenDaylight is doing a lot of good stuff, so we're hoping to get integration with that. We already did some work on it, but we need to be doing a lot more work on this, which is why it's great that I'm here at this conference, so I get to attend all their talks as well. And of course, the next big thing is getting more features in. If you look at the current state of software-defined networking in CloudStack, depending on the different types of providers, depending on the different types of networking, they all support pieces of the functionality of CloudStack. CloudStack is becoming, well, it is, a really big project, and it has lots and lots of features. And having support for each and every feature in CloudStack is proving to be quite a challenge.
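Going back to the VXLAN point for a moment: the VLAN limitation it solves is easy to quantify. A VLAN ID is a 12-bit field in the 802.1Q header, while a VXLAN network identifier (VNI) is 24 bits, so the segment ID space grows from roughly four thousand to roughly sixteen million:

```python
# VLAN IDs are a 12-bit field; VXLAN network identifiers (VNIs) are 24 bits.
vlan_id_space = 2 ** 12    # 4096 segments (a few IDs are reserved in practice)
vxlan_id_space = 2 ** 24   # 16,777,216 segments

print(vlan_id_space)    # 4096
print(vxlan_id_space)   # 16777216
```

That difference is why a single VXLAN-backed zone can hold far more isolated guest networks than a VLAN-backed one.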
And you see all the software-defined networking vendors and all the people working on the plugins actually struggling to get all the features in. So there are features about security: firewalling, security groups, et cetera. One provider will support them because they have some easy out-of-the-box support. For the next provider you need to rewrite a whole bunch of stuff, because they have some security support but it's on a different level. And we need to follow the technology. I mean, a lot of this stuff is really bleeding edge. So a few of the interesting things that we want to support are listed here. The virtual private cloud is the thing we keep working on. It seems to be a key feature for CloudStack. A lot of people are interested in it, and we need to be able to support it completely virtualized. So we can do the isolated networks, but currently we're still depending on the internal virtual router to actually do all the routing and all the security. And especially with the VPC setup, it would be really great if we could use the software-defined networking vendors' higher-layer APIs, like the Layer 3 APIs, to do routing, firewalling, security, load balancing, et cetera. So the more they progress with the features in their software-defined networking stacks, the more we will keep implementing them in CloudStack to make sure that eventually we can use all this. Common configuration and setup: that's more of a personal pet peeve. I mean, there was a lot of trial and error in implementing software-defined networking in CloudStack. It's sort of finalized how we wanted to do it; the basic concept is fixed, but now we need to get some kind of streamlined configuration. We need something easy and fast, so administrators can easily understand how to add software-defined networking. Security groups: yeah, that's the Layer 3 type of security, with MAC filtering, address filtering. More advanced capabilities for the basic networking support. That's really gonna be tricky.
And the most interesting thing for me is really the configurable on-ramp/off-ramp. With the switch vendors now jumping on the software-defined networking bandwagon, they are starting to provide solutions to do on-ramp/off-ramp. I mean, who doesn't have a big legacy installation in his or her data center at the moment? It's all nice that we can move to software-defined networking, but what do we do with the old stuff? We have really old stuff in our data center as well. And if you want to provide, for example, cloud bursting, you need to link those two networks. You can do it with IP routing, so you put some servers here, some servers there, and you set up some routing in between. But for some legacy applications, it's actually a lot better if you really have the same network. But if one piece of the network is virtual and the other piece is a real physical VLAN somewhere, how do you deal with that? Well, there are a couple of solutions. Nicira has a solution, or I should say VMware NSX, actually. Arista is working on some solutions where on one port you have your virtualized network with all the trunks, where you can configure a virtual network, and it will link itself to one of the VLANs that exist in the current data center, making it easy to shift machines and services between cloud and non-cloud. That is especially nice for either burst capacity or migration scenarios. And of course, we're all waiting for what's next. We don't know. There's a lot of talk among the SDN providers. Like I said when I started my talk, there's a lot that can happen in that space. Software-defined data centers: we probably need to do something with that. I mean, we're a cloud orchestration system, so if there are software-defined data centers, we need to be part of it.
So we have to follow the technology and the way of thinking of the software-defined networking vendors and the people really thinking about that stuff, and make sure that we have support for it. And I think it's gonna change a lot for CloudStack in the near future. I mean, currently we're still depending on a lot of physical infrastructure, and that will change. We're already seeing some kinds of integration, like the integration with UCS. That's one step in the direction of really automatically creating your hypervisors. Now it's hypervisors on which you create virtual machines, but if we just get stacks and stacks of hardware in the data center, we can create hypervisors on demand when we need them. That's yet another scaling issue we can deal with, and it's all automated by API. And that's what software-defined anything is basically about. So let's focus on how it actually works inside CloudStack. We're gonna dive a bit into the internals of CloudStack and see how we did the initial implementation. First of all, configuring CloudStack is not the hardest part of setting up software-defined networking. I mean, software-defined networking requires some understanding of your network. It requires either commercial software or some tooling that you would probably need. You need to know a bit about Open vSwitch. You need to get some gear from somewhere. So the hardest part is actually making your network ready and getting all the software and all the APIs ready to run. I mean, if you wanna work with OpenDaylight, you have to get the OpenDaylight project, set up your controller, et cetera, et cetera, before you can even start configuring it in CloudStack. So by no means is CloudStack the really hard part of setting up software-defined networking. But since there are so many vendors out there, I'll just focus on what we do inside CloudStack. So first of all, we have the concept of a physical network in CloudStack.
It doesn't even need to be a physical network, but for now it's called a physical network. It's a way for us to identify which types of networks there are, where you can connect to in a physical space. And more importantly, it's about what kind of isolation method you can use. And the isolation method is currently used to identify how you want to separate your guest traffic from one another. I mean, I have two guests, I have two tenants on my cloud; how am I going to separate those networks from each other? The default is currently still VLAN. That's the default, and it's actually used for some of the internal networks in CloudStack by default, like the management network or the public internet network. And then there's the guest setup. So in this case, I have two physical networks: one for management, which is also used for the public internet connection, and one for guests. And on the guest network, I was able to select the STT isolation method, which in this case enables me to select the Nicira NVP provider in CloudStack. I could have chosen any of the other options, like BigSwitch or MidoNet. They're all in there, and you can select them. This means that every guest network created will be routed to that particular provider, and then you can set up and configure that provider. This is what the configuration would typically look like. You give it the IP address of the controller, you give it the authentication credentials. There are some other interesting bits; every SDN provider has its own type of configuration that it needs. In this case, for the VMware NSX, you need to create a transport zone, which mimics the cloud zone. So you have a zone in your cloud, and you have a zone in the NSX solution. And there's the Layer 3 gateway service. Again, that's a particular device in the VMware NSX solution, which is used for on-ramp/off-ramp traffic and the Layer 3 routing. So you fill in those IDs, and that basically enables a provider.
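To make that a bit more concrete, here is roughly what those two steps look like against the CloudStack API: createPhysicalNetwork takes the zone and the isolation method, and the Nicira controller is then registered against that physical network. This is only a sketch in Python; the UUIDs, addresses, and credentials are made-up placeholders, and in practice you would send these requests signed, for example with a client like CloudMonkey:

```python
# Hypothetical parameter sets for the two CloudStack API calls described
# above. All IDs, addresses, and credentials are placeholders.
create_physical_network = {
    "command": "createPhysicalNetwork",
    "zoneid": "zone-uuid-placeholder",
    "name": "guest-physical-network",
    "isolationmethods": "STT",   # could also be VLAN, GRE, VXLAN, ...
}

# Register the SDN controller that will back this physical network.
add_nicira_device = {
    "command": "addNiciraNvpDevice",
    "physicalnetworkid": "physnet-uuid-placeholder",
    "hostname": "192.0.2.10",                  # NVP controller address
    "username": "admin",
    "password": "secret",
    "transportzoneuuid": "tz-uuid-placeholder",  # the NSX transport zone
}

print(create_physical_network["isolationmethods"])  # STT
```

The point is simply that the isolation method on the physical network and the registered device together decide which provider every guest network on that physical network will be routed to.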
And the provider is a concept through which the CloudStack orchestration engine can ask for services, like: prepare me a network, or release a network, or add a port to the network. The next thing is the offerings. For people not familiar with CloudStack: CloudStack has this as-a-service model where basically the handover layer between administrators and users is all types of offerings. An administrator can define an offering, whether it's a disk offering specifying how many gigabytes somebody can use, or a compute offering, like how many CPUs and how much memory you get. There's also the network offering. And a network offering details, if you create a network based on that particular offering, what kind of services you would get. Would this network have a DHCP service, yes or no? Would this network have some kind of DNS support? Would it have static NAT? Would it have a firewall, a load balancer, et cetera? So these are the offerings that you can create. The new thing in the offering here is the selector box for virtual networking. And virtual networking is the thing we introduced to be able to indicate that this particular network needs some kind of software-defined networking. And actually, when you select it, you can select multiple providers there. Again, this is due to the architecture of CloudStack. This is separate from what we selected in the physical networking, where we just said: OK, I want to use this isolation type. This is where we really select: OK, when I create a new network, this is the provider used to do all the work. You can set some other stuff here, like quality of service, et cetera. And then, yeah, create the offering. How did this slide get here? OK, so now it's on to the domain of the magic. We go to the gurus.
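Before moving on to the gurus, here is roughly what such a network offering boils down to: a list of services, each mapped to the provider that implements it. This is an illustrative sketch, not the exact wire format of createNetworkOffering (the real API flattens the per-service provider list into indexed parameters), and the offering name is made up:

```python
# Sketch of a network offering mapping each service to a provider.
# "Connectivity" is the service that represents the virtual-networking
# checkbox; here it goes to the SDN plugin, the rest stays on the
# CloudStack virtual router.
network_offering = {
    "name": "sdn-isolated-network",            # hypothetical name
    "displaytext": "Isolated network with SDN connectivity",
    "guestiptype": "Isolated",
    "traffictype": "Guest",
    "serviceproviderlist": {
        "Connectivity": "NiciraNvp",
        "Dhcp": "VirtualRouter",
        "Dns": "VirtualRouter",
        "SourceNat": "VirtualRouter",
    },
}

print(network_offering["serviceproviderlist"]["Connectivity"])  # NiciraNvp
```

This per-service mapping is why you can let an SDN provider do the isolation while the virtual router keeps doing DHCP and DNS, or hand everything over once the provider supports it.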
If we've created a network offering and we want somebody to be able to use it, we give them access to that particular network offering, and sooner or later they're gonna either create a virtual machine, or they're gonna just consciously select and say: OK, I want to create a new network using this offering. The first thing that gets hit is the network guru. And a network guru is not a wise old man typing in VLAN commands anymore. We're now in the software-defined era. So it's actually a piece of software that does stuff. And the network guru is the guru that really creates the network. This used to be a placeholder thing in the VLAN world, because CloudStack never needed to create a VLAN itself. That was what the network administration department was for, so CloudStack didn't have to do anything for it. But in this case, we actually have to do some work. So here we are going to call one of the APIs of the software-defined networking vendors and ask them to set up a network and give us back some kind of ID so we know which one it is. All networks in CloudStack have what we call a broadcast URI, and a broadcast URI consists of a type of networking, like a logical switch or a VLAN, and the identifier which specifies it. In the case of VMware NSX, it's a UUID; in the case of a VLAN, it's a simple integer number. So we always know that this network is linked to this particular type of construct somewhere. And the guru stores that in the database and makes it available for use. The next thing we need is the element. And the element is a different beast from the guru: where the guru deals with a network as a complete concept, the element is just concerned with creating a single port, or actually with a virtual machine. When a virtual machine starts, you plug a network interface card into it, and the network interface card is plugged into a network. And this is where the element comes in.
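The broadcast URI described above is literally just a URI: the scheme names the isolation type and the rest carries the identifier. A minimal sketch, using Python's standard URL parser and made-up identifiers:

```python
from urllib.parse import urlparse

def parse_broadcast_uri(uri):
    """Split a CloudStack-style broadcast URI into its scheme (the
    isolation type) and its identifier (a VLAN number, a logical
    switch UUID, ...)."""
    parsed = urlparse(uri)
    return parsed.scheme, parsed.netloc

# A VLAN-backed network: the identifier is a simple integer.
print(parse_broadcast_uri("vlan://1234"))
# An NSX-backed network: the identifier is a UUID (placeholder here).
print(parse_broadcast_uri("lswitch://6e1ab0d8-1111-2222-3333-444455556666"))
```

That one field is how CloudStack remembers, per network, which external construct it is linked to, whichever provider created it.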
The moment somebody plugs a network interface card into a network, the elements wake up, and the elements can deal with all kinds of stuff. For example, elements deal with load balancing, elements deal with firewalling, elements deal with setting up the DHCP address and the DNS entries. So we added a new element specifically for software-defined networking, or actually we implement one element for every software-defined networking vendor we have at the moment. And the element is responsible for making sure that there's a port connection. That can be as simple as specifying the virtual LAN number if you're using traditional VLANs, or as complex as really going into an API and telling some software-defined networking provider that it needs to connect this particular port, giving this UUID, to the software-defined networking switch. This is actually where implementing it got really interesting as well, because this is also where you need to deal with your compute. With the old style of networking with VLANs, everything was in the network. You just had to connect to the network and you were done. Here we have interaction between the compute area, where we need to configure the VM specs and provide the networking details for them, and at the same time deal with network configuration. And since we're using virtualized networking, it really integrates with the compute stack. So we need all kinds of settings on the compute stack: there are properties we need to supply when we create the network interface cards on the virtual machines, et cetera. So this is where we had to do some new stuff for CloudStack to allow for this integration. There's a lot we can improve there, but for now at least the software-defined networking vendors have some kind of code in the compute area so we can set the correct settings, and the network element will actually prepare the network to receive those settings.
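The guru/element split can be sketched as two small interfaces: the guru creates the network as a whole and hands back its broadcast URI, while the element wires one NIC's port into it. The interface and method names below are simplified stand-ins for CloudStack's real plugin API, and the controller is a toy in-memory fake:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Simplified stand-ins for the guru/element split (names are illustrative,
// not CloudStack's real plugin API): the guru creates the network as a
// whole, the element connects one NIC's port to it.
public class PluginSketch {
    interface NetworkGuru {
        // Ask the SDN backend for a new network; return its broadcast URI.
        String implementNetwork(String offeringName);
    }

    interface NetworkElement {
        // Called when a NIC is plugged in: create the port on the backend.
        void prepareNic(String broadcastUri, String nicMac);
    }

    // A toy in-memory "SDN controller" standing in for a vendor API.
    static class FakeSdnController implements NetworkGuru, NetworkElement {
        final Map<String, List<String>> ports = new HashMap<>();
        private int counter = 0;

        public String implementNetwork(String offeringName) {
            String uri = "lswitch://net-" + (++counter);
            ports.put(uri, new ArrayList<>());
            return uri;
        }

        public void prepareNic(String broadcastUri, String nicMac) {
            ports.get(broadcastUri).add(nicMac); // register the port
        }
    }

    // Round trip: create a network, plug one NIC, report the result.
    public static String demo() {
        FakeSdnController sdn = new FakeSdnController();
        String net = sdn.implementNetwork("sdn-offering");
        sdn.prepareNic(net, "02:00:4c:5f:00:01");
        return net + " has " + sdn.ports.get(net).size() + " port(s)";
    }

    public static void main(String[] args) {
        System.out.println(demo()); // lswitch://net-1 has 1 port(s)
    }
}
```

The split matters because the two calls happen at different times: the network is implemented once, but ports are prepared every time a virtual machine starts.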
And if you've configured everything correctly and every step works, the moment you boot up the virtual machine, the network will see the new network interface card and connect it, physically or virtually, to all the right networking providers, giving you the access you need. So to summarize, it's the network interface card which is the linchpin between the virtual machine and the software-defined networking implementations. There are some flags we need to set on the hypervisor to enable it to be found. So the hypervisor does some of the work: it talks, for example, to Open vSwitch, and it sets some properties which identify that particular port on the software-defined network. On the other end, the networking part of CloudStack will actually prepare the same network and set all the settings, so the moment this port gets added to the network, it will recognize it and say, hey, I know this port, I know this virtual machine, I have some configuration somewhere in my network for this machine, so I know which communication should happen and I can build my flows, or whatever it is the particular software-defined networking vendor uses. It's not really a common way of doing things: some set a UUID on the network interface, some basically just detect the MAC address that's used. So there are some tricks involved in getting everything to work, and it's actually quite difficult to get it all working completely. And then, if you're completely lucky, everything works. So in this case, I have a guest network, actually barely readable, I think. Well, you'll just have to trust me on this: there is a network created here. And the same network is created as a logical switch on the VMware NSX side, or actually this is just the NVP manager showing me. So in the entire virtual world there now exists a network, and there exist a few ports, and they're all created. So the world is a happy place when this happens.
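That recognition step, where the controller matches a newly seen port back to a machine it was told about, can be sketched as a lookup keyed by whatever identifier the vendor uses, whether that's a UUID property set on the vif or simply the MAC address. All names here are hypothetical:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the recognition step: the CloudStack side pre-registers the
// identifier the vendor will see (a UUID set on the vif, or just the MAC
// address); when the hypervisor attaches the port, the controller looks
// it up and applies the configuration prepared for that virtual machine.
public class PortRecognitionSketch {
    static final Map<String, String> expectedPorts = new HashMap<>();

    // Done by the orchestrator while preparing the network.
    public static void preRegister(String identifier, String vmConfig) {
        expectedPorts.put(identifier, vmConfig);
    }

    // Done by the controller when the port shows up on the switch.
    public static String onPortSeen(String identifier) {
        // Known port: return the prepared configuration (flows, policies).
        // Unknown port: nothing was prepared, leave it unconfigured.
        return expectedPorts.getOrDefault(identifier, "unknown-port");
    }

    public static void main(String[] args) {
        preRegister("02:00:4c:5f:00:01", "vm-42-flows");
        System.out.println(onPortSeen("02:00:4c:5f:00:01")); // vm-42-flows
        System.out.println(onPortSeen("02:00:de:ad:be:ef")); // unknown-port
    }
}
```

Both sides have to agree on the identifier in advance, which is exactly why the talk calls this the tricky part: each vendor picks a different key.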
And again, the real concept and the real goal here is to make sure that users will not have to worry about this stuff. We run this particular system at our office. Even our network guy usually doesn't log in to the network manager; every time he has to go in, he has to look up the password, which means he doesn't do it that often, which is a good sign. It basically just works. So we're really happy with it. And that's basically all there is to it. I mean, there's a lot that can be said about software-defined networking, and I think a lot will be said, especially the next few days here with the OpenDaylight people and all the other talks about software-defined networking. But from a CloudStack perspective, this is what we have. So if you're interested in learning more, join the OpenDaylight talks here. If you're interested in knowing more about CloudStack, please join us in Amsterdam for the CloudStack Collaboration Conference. And if you want to know more about this, I'll be around here at the CloudStack booth, or you can contact me through one of those things: I'll be on Freenode, Twitter, or whatever. Questions? David? So I've got a few questions for you. You said one of the future challenges is handling security groups in SDN. Is that layer-three type of routing and actual decision-making still relevant when you have things like Open vSwitch on every node, on every hypervisor now? Is there some thought to replacing that ebtables functionality? Yeah, good question. So the question is: what's the relevancy of the security groups as currently implemented in CloudStack when you have software-defined networking and Open vSwitch on all the hypervisors? Actually a really good question. It depends on what we want to do with basic networking. As I explained in the beginning, we have the advanced and the basic networking, and in the basic networking you need some way of separating your VMs from each other.
And actually it can be done with the same concept as advanced networking: you can create a logical switch for each and every machine. But then again, there are a lot of people who are used to Amazon-style deployments and still want to use them. And it's actually a fairly easy and simple way to provide some security. So do we need security groups as they're currently implemented? Probably not. That's going to go away. We probably no longer need ebtables or any of the magic that we currently do on the hypervisors to be able to support security groups. The new style of security groups, as I currently envision them, is going to be more like port access security. You have a machine, and you know which kind of traffic is allowed access into that machine. And that's something that's really helpful in a lot of cases. Even with advanced networking, it might still be beneficial to be able to say, okay, this machine has a port-based firewall. As I was saying, we now have distributed firewall capacity within the software-defined networking area. It means that we can do advanced types of security. If I know which application I deploy on a certain machine, I can, together with that application, push a security policy that needs to be on that machine. And that security policy could include stuff like which IP addresses are allowed to connect to me, which ports are allowed to be open, et cetera. That type of security, for me, is part of the new style of security groups, where we can really have fine-grained security and say, okay, this particular group is an auto-scaling group of a particular application, say MongoDB, some application, it auto-scales, and every new machine that gets created gets a typical port security. Not on the host, because that means we have to rely on something that people might have access to, but on a deeper layer: we're going to enforce it in the networking layer because we have this distributed capacity. Did that answer the question? Yeah, it did.
My next question is, you've been running NVP in production for a while now. Yes. Would you make the same choice again, especially now that you've seen a lot of the other technologies get implemented in CloudStack? Would you still choose NVP or something else? And any goods or bads that you've really discovered out of NVP? Yeah, good question. Would I still make the same choice for running NVP in our data center as we do now, knowing the current state of affairs? Yes, I would. And there are quite a few different reasons for that choice. First of all, I really like Nicira's technology and their involvement with the software-defined networking area. They're really at the forefront of the revolution. They've been acquired for a huge amount of money by VMware, so it's going to integrate with a lot of VMware's stuff as well, which for me means that there's a certain and bright future for the NVP product. On the other hand, the relationship with the NVP people is quite good. I'm regularly in touch with those guys, and we really get fed the latest tidbits about what's happening in the SDN space. So that's worth something as well. The technology is good, but probably if I had chosen a different SDN vendor from the beginning, I would have stuck with that vendor. There's not too much differentiating power at the moment to really say, okay, I really prefer this vendor over that vendor. And we went with XenServer, so we run our entire cloud on XenServer at the moment. So NVP was a logical choice, because NVP is pretty well supported on XenServer. Had we gone the KVM route, we might have made a different choice, because other software-defined networking solutions are better supported on KVM. So yeah, watch this space, I would say. Currently it's the best choice and we're sticking with it, but it might change in the future. There's a lot happening. Any more questions? No?
Okay, in that case, thanks for your attention. Please drop by the CloudStack booth if you have any questions you think of later on. Thank you.