All right. Welcome, everyone. My name is Toby Ford. I work at AT&T, if you didn't know that already. I'm an architect working on our Domain 2.0 program, and I've spent a lot of time recently working on networking, for obvious reasons. Networking for us, when it comes to both the infrastructure and the network functions that we're trying to virtualize, has become very complex. There are a lot of variations that we've tried over the last four or five years, a lot of different combinations of controllers and data planes. And it's really come to a point where we need something different. So today we're going to talk about what that different thing is: this notion of Gluon. I have a panel of esteemed guests I'd like to welcome to the stage: Jeff from Ericsson; Vince from Nokia, who also works in our foundry; Ian Wells from Cisco; and Marco from Contrail. All right, guys. Thank you very much.

First, I'd like to start with the use cases. Why are we here? What are the requirements? Jeff, can you help me describe some of the use cases that we're trying to deal with when it comes to what we're doing with Gluon?

Yeah, absolutely. These use cases are really focused around how we take the existing stuff that we have out running in the network right now — your purpose-built firewalls, for example, or maybe CPEs, or any other kind of service that you have out running in the network. Maybe this is even other data centers. How do we take these services and connect them in an efficient way? In most use cases, most of what we're doing right now when it comes to working in the cloud, we really just need a port that's going to connect to some kind of TCP-type service. And that works great. But there are 99 other networking protocols out there that we need to find some way to deal with. And this is what we're trying to address with Gluon.

Thank you. So Marco and I have been working very hard over the last period of time to deliver what we think of as SDN into production. Marco, can you tell us a little bit about what makes the problem hard, especially when it comes to operationalizing it and integrating it into our systems?

Sure. I think the best approach is to ground it in an actual use case and deployment that we've painfully spent a lot of time analyzing and actually getting out into production. If you look at what a typical mobile EPC deployment looks like, you're literally looking at dozens of VMs, and each VM would require at least 10 ports, with hundreds of IPs in a single subnet. On top of that, you need multi-tier topologies in order to satisfy the EPC application connectivity requirements. On top of that, you throw in the need for policy-based traffic steering. And then throw on top of that a little sprinkle of "I need to be able to support hundreds of thousands of flows at tens of millions of packets per second." Oh, and by the way, it all has to be carrier grade, because it's the difference between showing a carrier name or "no service" on my cell phone.

Now, take all that. If you focus on exactly how you enable all of it, there are various elements: orchestration, performance, security, monitoring, operations. But if you focus on the connectivity part — Neutron specifically, the networking part — Neutron was required to do various things, right?
Everything from being able to enable brownfield connectivity of this application into the existing infrastructure — where all the RANs were hanging off, and the Gi LAN out to the internet. That required things like BGP L3 VPN. Then you need multi-tenancy, for security and various other reasons — that's another L3 VPN component. Then throw on top of that the fact that these applications are brownfield, so you just can't ignore the need for them to interact with the network both today and after they're migrated: things like being able to run BGP at a tenant level. And then add the need for scaling out L2 and L3 connectivity between the various tiers of these applications that run and combine to form an EPC element. There are a lot of different connectivity needs from just this single use case.

So how did we approach it? Of course, we enabled as much as we could with Neutron. Neutron has an existing set of APIs that are fairly specific to what OpenStack was originally used for. But as you can clearly see, elements such as L3 VPN, BGP, and service chaining — those requirements were lacking there. So we addressed the needs immediately, either through what we could do with the Neutron APIs, as a subset or superset, or through the APIs that our controller offered. Mid-term, we're adopting other Neutron Big Tent projects such as networking-bgpvpn, but even that is lacking, right? So that was a short-to-mid-term approach to solving these requirements. And then it was very clear that long term we had to look at something else, which is why we were very keen on looking at Gluon as a potential opportunity to address some of these long-term requirements.

Thanks, Marco. And one particular way of thinking of it, just to get it distilled: when we look at the API set we're working with in Heat, for example — for the most recent thing that we're virtualizing, the MMSC — a small percentage, less than 10%, of the APIs available to the Heat developers is actually core Neutron. The rest is Contrail extensions to the API set. That works, and it's helped to cover what Marco described. But imagine, with the program that we're trying to run with Domain 2.0, we want that level of competition and interoperability — and this model is a deterrent to that. And that's why we have such a diverse group on this panel helping to solve this problem.
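As a concrete illustration of the interim approach Marco describes — driving the networking-bgpvpn extension through Neutron's REST API — here is a minimal sketch. The resource paths follow the networking-bgpvpn API; the endpoint URL, token handling, route target, and IDs are placeholder assumptions for illustration.

```python
# A hedged sketch of the interim answer: creating an L3 BGPVPN via the
# networking-bgpvpn extension on Neutron's REST API, then associating an
# existing tenant network with it. Values below are made up.
import requests

NEUTRON = "http://controller:9696/v2.0"  # assumed Neutron endpoint
HEADERS = {"X-Auth-Token": "TOKEN", "Content-Type": "application/json"}

# Create an L3 VPN that stitches the tenant into the existing MPLS core.
bgpvpn = requests.post(
    f"{NEUTRON}/bgpvpn/bgpvpns", headers=HEADERS,
    json={"bgpvpn": {"name": "epc-gi-lan",
                     "type": "l3",
                     "route_targets": ["64512:1"]}},
).json()["bgpvpn"]

# Associate an existing tenant network with the VPN.
requests.post(
    f"{NEUTRON}/bgpvpn/bgpvpns/{bgpvpn['id']}/network_associations",
    headers=HEADERS,
    json={"network_association": {"network_id": "NETWORK_UUID"}},
)
```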
So Vince, why is Neutron not enough?

Yeah, thanks for the question. It's good you brought up a use case about the mobility network; I have another kind of example that will lead into why Neutron is not enough. We were pretty new to OpenStack. We started with Havana, and back then we were standing up just simple VMs on a flat network. Then we got Icehouse and we started doing three-tier applications, and at that point an SDN controller came in that we started working with. Around that time we heard terminology like SFC — basically chaining, chaining of VMs. And the use case that was given to us, when we asked what that is, was: well, imagine two VMs stood up on a flat network, pinging each other, and then you want to put another VM between them and muck with iptables to stop the pings — like a pretend firewall. And I thought, you know, that's not a real-world use case.

So we in the foundry decided we'd build a real-world use case. Luckily, in our location we actually have AT&T AVPN — a VPN service between our foundries, for test and development. What that gives you is that networks created in one location get propagated to the other locations, so you can reach various networks. So we yanked out the hardware-based CPE and we put in a virtual router — a Nokia virtual router — and then we decided we wanted to hook that to our cloud. So we built an OpenStack environment and created some Neutron networks, and there was no convenient, easy way of connecting those Neutron networks to our tenant, to our cloud. But we had an SDN controller, and it had features — you were talking about the APIs available from the SDN controller — so we could do it using our SDN controller. So we built an application — I think it was WordPress or something — and we could reach it from other locations. Then we added internet service and we could reach it from the internet. And now we basically had an unprotected web server, and it became our first-ever service chaining use case: we wanted to be able to spin up a VM in that same cloud, between the WordPress server and the internet, that would do firewalling functions. And we wanted to spin up another VM that would do identity management, so that when we were on the corporate network we could use our LDAP database to authenticate. And of course we went into Neutron and we couldn't figure out how to do it. Our SDN controller had that capability, so we were able to do it.

So these features are now coming into Neutron, but when I heard about Gluon, it sounded like a way that we could have prototyped those services earlier and built them. In fact, our first demo was a Layer 3 VPN, which we built pretty quickly. So I've seen an evolution of services from the flat Layer 2 network, to Layer 3, to BGP and service function chaining — and there's multicast and a whole list of services — that I think Gluon will enable us to prototype faster for OpenStack, and still work with Neutron where we need to work with Neutron.

Thanks, Vince. So I don't think it was very long ago that we were just talking about Gluon for the first time, and now it's a real thing. Jeff, can you tell me a little bit about what it is exactly?

So Gluon — what is it? Well, we've got several different types of implementations, but we're not so much focused on the implementation as long as we can figure out a better way of bringing in these types of requirements. We're on our second iteration of a different implementation now. Gluon is a port arbiter: when the bind request comes in from Nova, it's Gluon that's responding to that port-to-VM bind request. Then south of that we've got these little guys called protons. Protons are networking APIs that we have collectively come together on across five different companies. The first one that we put together, as Vince mentioned, was our L3 VPN proton, and what it gives us is a common set of APIs to do the things we need to do with L3 VPNs. When we create an object, it gets pushed down into etcd — that's our distributed database — and it works really well with all of the vendor-proprietary SDN controllers that we're working with. Each controller's shim is monitoring etcd, and when an object gets created it can act on it. So regardless of what APIs each one of us is using, we never had any issues interoperating when we're using Gluon.
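A minimal sketch of the shim pattern Jeff describes: Gluon writes proton objects into etcd, and each vendor's shim watches for changes and translates them into calls on its own SDN controller. The key prefix, object layout, client library, and the translate function below are illustrative assumptions, not Gluon's actual schema.

```python
# Sketch of a vendor shim: watch etcd for L3VPN proton objects and realize
# them on a proprietary controller. Key layout is an assumption.
import json
import etcd3  # pip install etcd3

client = etcd3.client(host="127.0.0.1", port=2379)

def translate_to_controller(obj):
    # Placeholder: here a real shim would call its controller's own API
    # (Contrail, ONOS, Nuage, ODL, ...) to program the VPN.
    print("realizing L3VPN object:", obj.get("id"))

# Watch every object created under the (assumed) L3VPN proton prefix.
events, cancel = client.watch_prefix("/proton/net-l3vpn/")
for event in events:
    if event.value:  # a create/update, as opposed to a delete
        translate_to_controller(json.loads(event.value))
```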
At our first demonstration we had four different SDN controllers all running in the exact same OpenStack instance, and we were all able to interoperate. And we did this in a very, very quick amount of time — I think it was a couple of weeks before we had everything up and running, with VMs all connecting and talking to each other across VPNs.

Thank you. So at the top of this picture there's an arrow that has no connection. Are you suggesting that maybe we're going to just replace Neutron? Ian, what do you think of that?

So we've iterated on this. Fundamentally, what Neutron is — or what it's turned into, which would be a better way of putting it — is a Layer 2 API. It basically says there are networks which, because we've defined them to have subnets, are Layer 2 networks, and you're basically sending Layer 2 traffic from one port to another across those networks. That's great if what you need is Layer 2, and it's actually absolutely fine for many cases. Lots of what you normally want to do with OpenStack is really just getting applications to talk to each other, and Layer 2 works just fine for that. We're trying to do things that are generally more complex. They involve not only the networking APIs we've been talking about here — like service chaining, like L3 VPNs — but also all of the ones that we don't know about yet. Toby is one service provider. If I take three service providers and put them in a room together, they will all talk about how their network protocols are much better than each other's, because they all do it differently. So we know this will expand. The problem we run into with Neutron is that because it's that Layer 2 API, we struggle to extend it by adding other APIs to do different jobs. We care about ports, but we no longer care about subnets, and we don't have these Layer 2 domains that bridge lots of ports together.

So the first model we thought of is: well, Neutron is, in this model here, a proton — we're not good at physics, I'd like to point out. It's one API, and we want to use it along with the others. Neutron's great, we like it. It solves our control-plane problem when we're actually trying to tell these network functions what to do; Neutron's a great API for that. It's just not very good for talking to the world about doing specialized tasks in a service provider environment. But we're kind of iterating on this and trying to work out what we can do. And Armando's sitting there, right? He's the Neutron PTL; he finds this faintly amusing as we sit here and struggle, but I think we're getting there. Yes, absolutely. So we've talked with Armando — he knows what we're trying to do and he sympathizes — but the problem is that Armando is defending a user base within OpenStack of lots of people who use the Layer 2 API, and it's enough for them. These other APIs are effectively additional cruft that you would install and never use, and they would get in your way. So we were trying to find a way of implementing them separately. What we couldn't do at the time was really extend Neutron to do these extra APIs, because they're very specialized and, as I say, they conflict with the API that's already in Neutron.
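To make Ian's point concrete, here is a rough sketch of the mismatch: Neutron's core model hangs everything off an L2 network, while an L3 VPN proton has no L2 network or subnet to hang a port on. The field names below are illustrative simplifications, not the actual Neutron or Gluon schemas.

```python
# Why the Layer 2 API conflicts with service-provider APIs: the shapes of
# the two object models. Illustrative only.
neutron_l2_model = {
    "network": {"id": "...", "name": "net0"},          # an L2 broadcast domain
    "subnet":  {"network_id": "...", "cidr": "10.0.0.0/24"},
    "port":    {"network_id": "...",                   # a port MUST live on
                "mac_address": "..."},                 # an L2 network
}

l3vpn_proton_model = {
    "vpn_instance": {"route_targets": ["64512:1"],     # no network or subnet
                     "route_distinguisher": "64512:1"},
    "vpn_port":     {"vpn_instance": "...",            # a port binds to a VPN,
                     "device_id": "vm-uuid"},          # not to an L2 domain
}
```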
As things are evolving, we've been learning from what we've done. We actually have a session in half an hour where we're going to talk to the Nova guys about the interface between Nova and Neutron — which binds Nova and Neutron very tightly together today — to make it a little simpler. And these things mean that at some point we may find that we can actually bind these APIs into Neutron, instead of setting them separately aside in a different component. But we're basically experimenting to see what the right way is to get these APIs there for the people that need them, and to make sure the people that don't need them aren't affected by them at all.

Thank you. All right, so moving a little away from the technology aspect of this: there have been statements that we're not really doing this out in the open, or that there's been a secret effort here. How would you address that concern? Marco, let's take that one.

I mean, to my knowledge, throughout the whole process — I remember we first kicked off a session discussing Gluon and its requirements, coming from most of the telcos, in OpenStack Tokyo. Then there was a clear understanding that we would drive these use case requirements in OPNFV, with OPNFV NetReady being there in order to communicate those cases. At the end of the day, people need to be grounded and understand what it is people are asking for, so they understand exactly why they're doing the things they're doing. And obviously there were mailing list discussions and PTL engagement. So from my point of view, that's as open as we can get. If the question is could we be more open — I would say, if that's not a form of collaboration and there's another form, then please let us know. But that's my opinion on it. I don't know if the other panelists have any comments.

Would anyone else like to add something to that? Well, I would say the problem we've had is that we've been trying to do this with a small community who understand their needs, right? The problem we run into here is that we're trying to do weird and specialized network APIs for weird and specialized cases, and so we've put together a community of, effectively, networking specialists to try and deal with that. As for the code, we've been trying to put it up on GitHub. But where I think we've failed, and where we're trying to improve, is making sure we're a proper OpenStack project that is clearly expressing what it's doing and sharing its progress with the community.

So beyond what was just described, what do you think is the right mechanism to get these capabilities into OpenStack? I've got a short answer and a long answer, and the short answer is that we need to provide contributors to OpenStack who get to know the system. Going back to the openness question, just very briefly: I use OpenStack now, and it's very open — there's a lot of stuff out there, and even I don't know what half of it is. So being open is not sufficient for people to actually know what's going on. But speaking about contributions, I'll talk again about my team. We started with deploying OpenStack; we used CloudBand and then various other installers, just to get to know OpenStack. And it was becoming important, so we wanted to figure out, well, how do we contribute? It was actually the Women of OpenStack — I think maybe in Vancouver — who had a how-to-contribute documentation session, and that's where I learned about the Gerrit process. But then we discovered that documentation wasn't really the area where my team, as developers, excelled.
So we wanted to figure out how to fix bugs — it's all about contributing. I think the first bug I took I actually couldn't duplicate in the main branch, though it did exist in an earlier branch, so without even getting it assigned to me, it got closed. Other people on the team had a bug in Neutron — sorry, in Nova — and really what it involved was a lot of communication back and forth between the developers and the people who'd written the bug: to understand how to duplicate it, to suggest changes, and then, when they finally got the fix in, to discover that they hadn't written any test cases. So it was a continuous learning process; you have to put in time, and you have to engage, to understand that. And then finally we had a contribution, and we've had various other bug fixes since then.

Then we started participating in OPNFV, which is another kind of open environment, related to telco use cases, and that led us to the Gluon project. Since being in that, we've learned from other OpenStack projects how to contribute. We saw how networking-sfc did it: they created a project within OpenStack so all the code was visible. So since we did our demo at OPNFV, we've similarly created a project in OpenStack — openstack/gluon — where all the code is available to be seen. We're showing it. We've got great people like Ben setting up communication channels to Armando and others, to explain the use cases that we're trying to address and to get feedback. And we're getting great feedback — feedback such as: you've got to build momentum; you've got to get a set of customers that really want these features; you can't just put in a project that looks cool but doesn't deliver something customers need; people are going to want to develop code that gets used. And the last thing about how to get this in is reaching out to the other operators. We work very closely with AT&T, and we believe that in OPNFV a lot of networking use cases have been presented that are meaningful to multiple service providers. So get out there, look at those use cases, figure out if they will help your business. We happen to think Gluon might be a way of accelerating the development of those features, getting them into OpenStack, and making OpenStack better for our telco customers.

Thanks, Vince. Yeah, I think, as you're mentioning, this is actually a good representation not only of getting a diverse group together to solve a problem, but also of some of these new things that we're trying. OPNFV itself — and what we're intending to use OPNFV for — is to solve some of these problems we see: if there's an integration issue here, or we're not able to cover the use cases there, we can spin up an effort that brings the right focus to the problem and then solves that issue. And this has been a great example of a very diverse team working together to do that. We don't have everyone here, but essentially these are four very competitive networking vendors working together to solve my problem. And others have taken it further: the Huawei and ONOS folks have taken it in a different direction, but in a good way, augmenting what we're doing. So, to all of you, in that vein — beyond the technical and process aspects of working together — can you describe whether it turned out to be easy or difficult working within this diverse group, and within the constraints of AT&T's world?
So I'll kick this one off with how we started. Was it Tokyo where we were first presented with the Gluon concept? I think I met Ian there, probably later over beers. It was kind of compelling — I saw it, saw the APIs, and thought: that's a cool demo. But we weren't ready to start working with it at that point. I think at the next conference there was a working group on NetReady, or Gluon, and we joined that. It was a group of us competitors sitting in a room, and we decided to architect a solution to some of the NetReady use cases and drew up an architecture on the board. Some people had previous technologies that they wanted to use, and we understood the use cases. So we actually drew up, as I said, an architecture on the board, and I think it was Ben — I'm going to call Ben out again — who said, okay, so who's going to code it? Who's signing up to code each piece? And my two developers who were with me looked at me, and I kind of looked back and nodded. Kamal, sitting over here, said he'd sign up to do the shim layer — he had familiarity with our SDN controller and could write a shim layer that could call its APIs and set up these networks — and Tom said he'd do the etcd piece and some other components, and the other people on the team signed up. People signed up to say, well, if we have a demo, we'll stand up a Pharos lab that you can host the demo on. People basically signed up in that meeting to do work, and then what actually happened is: they did the work. We had people documenting it. We had a lot of collaboration tools that we used — I'm going to call out Etherpad, because of the instant, joint collaboration to work on an idea. We had lots of face-to-face meetings, lots of teleconference meetings, and we built, I think, respect for each other's skills. And, as was mentioned earlier, we put together a demo that we took out to OPNFV Berlin in just over a month. Just an amazing effort.

So I think I started working on this back at the ONOS conference early in the year. That was when we all started working on the API piece, defining what that was going to look like, and it went really well. But I think where we really started picking up a lot of traction was when we started working in the Pharos lab — common ground, where everybody got to bring all their stuff together, do our testing, do our interoperability. That was another big piece of it: making sure that, as independent vendors, we could all do a full BGP mesh across SDN controllers, and that we could all spin up and talk to each other, working in the same environment. So the Pharos lab really helped.

So I think the thing that's impressed me is really the speed with which we've not only come up with new ideas but turned them around. And that's been basically standing around a whiteboard or a notepad or whatever and saying, what if we did this? And I'd like to mention Tom here, actually, because Tom's the one who's been turning the ideas into usable code in no time flat. I have no idea how he does it, but it's fantastic. We started with "we need new APIs"; what we ended up with is an automatically generated API. We define it in a file, and an API just pops up out of thin air from that file. It's been fantastic, really. Everybody's just been pitching in and solving things.
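For flavor, here is a toy version of the model-driven approach Ian describes: declare the API objects in a data file, and generate the REST endpoints from that declaration. This is a from-scratch Flask sketch under assumed names, not Gluon's actual generator or its model format.

```python
# A minimal sketch of model-driven API generation, loosely in the spirit of
# Gluon's proton specs. The model layout and names here are illustrative.
from flask import Flask, jsonify, request

# Hypothetical proton model: one entry per API object type.
MODEL = {
    "vpn_instance": {"fields": ["id", "name", "route_targets"]},
    "vpn_port":     {"fields": ["id", "vpn_instance", "device_id"]},
}

app = Flask(__name__)
store = {name: {} for name in MODEL}  # in-memory stand-in for etcd

def register(name, spec):
    # Generate list/create endpoints for each object declared in the model.
    def collection():
        if request.method == "POST":
            obj = {f: request.json.get(f) for f in spec["fields"]}
            store[name][obj["id"]] = obj
            return jsonify(obj), 201
        return jsonify(list(store[name].values()))
    app.add_url_rule(f"/proton/{name}", name, collection,
                     methods=["GET", "POST"])

for name, spec in MODEL.items():
    register(name, spec)

if __name__ == "__main__":
    app.run(port=2705)  # port number is arbitrary
```

Add a new object type to MODEL and a new endpoint appears with no further code, which is the property the panel is praising.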
So I'll make the obvious comment: it definitely helps when you have a very large customer that says "fix my problem, or else." Once you take that into account — the whole process definitely wasn't a Silicon Valley episode, but we did meet and have a lot of meetings in the Bay Area and various other locations. And I think over time we built a professional level of respect for one another, and then we actually started building momentum and solving real problems. I think that fed on itself, and then everyone really just got excited about it, and things progressed from that point. So that, on top of what everyone else just said, of course.

Yeah, thanks, guys. I want to recognize two people: specifically Ben, who's done a great job at forming this project, getting it all to work, orchestrating this — he's done such an awesome job — and then also Margaret, wherever she is. Slave driver for everyone.

Yeah, so what's next for Gluon? Ian, you want to take that one? Well, I can try. The answer to that is: we're working on it. I think we've recognized what we need to be able to do, and this is where we differ from Neutron, right? Neutron is trying to keep its APIs as stable as possible while making sure it irons the bugs out, because there's a large audience of people and it's important that we don't make their lives hell. So backward compatibility of the APIs is very, very essential to what's going on there. Whereas you can hear, from the number of ideas we've been throwing about just as we've been talking here, that we don't honestly know exactly where this is going to take us. So we want to be able to make APIs as fast as possible, so we can experiment with them and show them to people, and they can say "that's wrong, that's not how we're going to do it," and we can argue over it and come to a conclusion about what the new APIs should look like for our needs. And I think we've accepted now — it took a little while, honestly — that we need to find a path for that, and the components we use to do it are not as important as the fact that it's possible for us. I'm hoping that as we take this forward we can take it into Neutron, and we can work with the Neutron team and get a system together so that Neutron is a framework for adding the APIs that we need — the other networking APIs that we're going to want in the future. It's looking good at the moment, actually. And, as I say, we're now involved with the Neutron team, trying to get one or two of the bits solved that would not only improve Neutron and Nova's interactions but also improve the possibilities for what we're trying to do in the future.

Yeah, so I think that's the question — you're right. What we're saying here is that we can make APIs at the drop of a hat, and that is absolutely a blessing and a curse. If what it means is that you've got four companies represented here, and a customer asks for a feature, and we implement it with four entirely different APIs, then we've doomed ourselves — that's absolutely true. But the model we need to adopt there is that code does not have to enforce consistency of APIs. What has to enforce consistency of APIs is responsibility. We should get together, we should argue and debate, and get the APIs done — but we can also do that now by example, right?
We can try the APIs, and people can demonstrate with the code that the APIs work, or alternatively that they don't work well — but then standardizing the API, choosing the one that you're going to standardize on, is something we can do by talking to each other. Not necessarily by committing code to a repository, but by actually having a conversation and making sure we all agree that the one we choose might have its weaknesses, but it's the best one we can do. That much is true. Migration is a problem if we go there, and similarly backward compatibility is a problem, and we've given some thought to this, I have to say, but we're not solving all the world's problems at once. We are making progress towards a goal, and step one is being able to bring these APIs up; then we can have the debate. We do absolutely recognize that standardization is key in this.

So binding a port here is literally that interface between Nova and Neutron where you're attaching a port. In fact, the one thing that all the APIs here have in common is an attachment point — a port. There's never been a debate that that has to exist, because you have to attach these networks to virtual machines to get traffic in and out. So binding a port is a function on a port, which exists today in Neutron. So you're saying, effectively, that ports in one kind of networking might look very different from ports in another kind of networking, and literally the only thing they would have in common is "I can attach a virtual machine to them." I see your point, and I agree — and again, that's a point for future discussion. The point of this is not to define it up front but to learn from our experiences. I think you're illustrating all of the pitfalls, and it's useful that we have them in front of us and can say: these are the things we have to avoid doing. No, it's good. That's an important part of the discussion.
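A sketch of the idea surfaced in this exchange: the port and its bind operation are the only contract every proton shares, while everything else about a port can differ per networking type. The class and field names below are illustrative, loosely modeled on Neutron's VIF binding attributes, not actual Gluon code.

```python
# Sketch: the one thing every proton has in common is a port that can be
# bound. All names are assumptions for illustration.
import abc

class BoundPort:
    def __init__(self, port_id, host, vif_type, vif_details):
        self.port_id = port_id        # the attachment point
        self.host = host              # compute node requesting the bind
        self.vif_type = vif_type      # e.g. "ovs", "vhostuser", "sriov"
        self.vif_details = vif_details

class PortBackend(abc.ABC):
    """Anything that can terminate traffic for a port: a vendor SDN
    controller shim, a Neutron core plugin, a ToR switch driver."""
    @abc.abstractmethod
    def bind(self, port_id, host):
        """Answer Nova's bind request: where to drop the traffic."""

class OvsBackend(PortBackend):
    def bind(self, port_id, host):
        # A VM bind: plug into the local Open vSwitch bridge.
        return BoundPort(port_id, host, "ovs", {"bridge": "br-int"})

class BareMetalBackend(PortBackend):
    def bind(self, port_id, host):
        # An Ironic-style bind: configure the switch port the NIC cables to.
        return BoundPort(port_id, host, "other", {"switch_port": "eth-1/1/7"})
```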
Briefly — I love the technical conversation, but where we came out of it was that we saw the evolution of the APIs, we realized there would be new APIs, and we see new API services going into Neutron. We came out of it thinking this is possibly an easier way of doing it. Maybe there are even better ways of doing it within Neutron, but we expect there will be new networking. For us it's very important — I think especially as competitors in the SDN controller space — that we get this right. What is it? The tide comes in and it lifts all boats. So we're looking to get good features for the OpenStack community, so the OpenStack community can have a bigger set of customers — and particularly what we're looking for is new telco customers, and features for them.

I will say, really quickly, that we did take some of the largest SDN vendors out there, and we did all come together to decide collectively what this API should look like. So we're all working on this next kind of networking technology here, and we're doing it with at least a good portion of the right people to play with it and see what these APIs should look like. So I think we're at a good starting place.

Exactly. So the next effort that I'm working on, beyond Gluon — Gluon will continue — but the next step for me is taking this kind of momentum and addressing the problem of network APIs. I believe very strongly that we have to start from a reference implementation that works, that demonstrates something that's actually usable, rather than the standards approach. This is the foundation of why I think OpenStack is so strong, and why Gluon demonstrates the opportunity this way. For me, there's been a lack of responsibility, and this goes back to — I don't have my favorite slide here — but really, when it comes to agile programming or any kind of development, there's a yin and yang. You have to innovate, you have to iterate, you have to move forward, you have to cover our needs; but at the same time there has to be time to come back, take a re-look at what you're doing, and refactor. You know it's agile-washing if you're not planning a significant reduction of technical debt. And in that way, we've spent so much time in the network industry just expanding in every direction without any mature re-look: okay, why do we have four different groups with their own SFC variations, three different IETF standards for SFC, the Neutron SFC work — why are there so many? Some force has to be there to bring it all together. So my next effort is maybe, as we've been saying, the nirvana stack; but in essence it's: can we come up with an agreed-upon model for the APIs, can we agree on what the thing is that we need, and then at least come to some standard view of what's needed — because we have not had that. The small amount that's in Neutron today, the piece that DefCore has defined as the standard, is solid, and that's good, but we have to go much farther than that. If it's always only 10%, then it isn't doing what I need.

All right, any other questions or commentary from the audience? Amit. So I don't think you have to worry about that, because we pretty well took all the controllers and showed they fully operate — interoperate, yeah — with Gluon, and we have that running in a demo down at our booth. Just to elaborate on that: our SDN controllers have an extensive set of APIs, and the shim layer is the thing that, for the most part, is calling those APIs. Each of our controllers has different features as well. So Gluon is a place where you can prototype APIs — align people on the same features, perhaps — but in terms of using multiple SDN controllers, I think we all expose APIs that can be used by Gluon, and we demonstrated that.

Yeah, I'm glad you mentioned that, because we didn't talk about one aspect of it: the operational aspect of running an OpenStack cloud with hundreds of sites and multiple SDN controllers. The technical debt of having to migrate from one version to another, or from one SDN controller to another — these things are very difficult. And what was really exciting about the OPNFV Summit demo was the possibility that I could actually have a running VM and have its networking construct switched out underneath it, and it still runs. So Gluon is a work in progress, but what we've demonstrated right now is that you can actually create different networks using different SDN controllers that have different features. When we did our OPNFV demo we had four SDN controllers running at the same time, and we were running the same service on all of them — but we could have been running different services on different SDN controllers.

So — interesting question.
I think this comes back to what you were talking about earlier in terms of versioning of APIs. What you can look at here is: if you've got a choice of APIs — say, two revisions of the L3 VPN API — then maybe the thing you're trying to do is discover the API you want to use based on the features that you require. So I think there are possibilities here for making sure that the API you get given, the endpoint you get given, supports the features you need, and that you'd therefore be directed to the SDN controller that does the job for you. You know what we look like now, so you can always come and find us after the event.

All right, one more question. Yes. So, that's also a very interesting question, and I have to admit to not being the expert on Ironic — and obviously the OpenStack containers story there is yet to be fully defined, I think. But effectively what we're talking about here is binding a port, and binding a port, from the SDN controller side or the Neutron side either way, is a matter of saying: I will drop the traffic in the place that you have told me to drop it. Now, if you're binding a port to a virtual machine, that effectively means you're putting it into OVS, or you're putting it on a bridge, or a vhost-user interface — there are many options. It could also mean an SR-IOV port, of course. If you're binding it to a bare-metal machine, it effectively means pumping it out of a switch port that's connected to the right interface on your machine. The point is that that's something to do with the binding; it's not necessarily something to do with the API. So, theoretically, as long as the binding negotiation — and again, this comes back to the next meeting we're having — as long as that binding negotiation exists with multiple possibilities, and you can negotiate the traffic being dropped into a container or a virtual machine, then it's just a matter of the SDN controller supporting the binding you like.

All right, thank you very much. Thank you to the panel. Thank you to the crowd. Appreciate your time.