There we go. Okay, sorry about that. So I think we'll get this session started. We have no slides; anyone who wants to clap, there we go. Yes. Okay, so this is a panel on the plug-in decomposition process that we did in Neutron, so I thought real quick we'll just introduce who's up here. I'm Kyle Mestery, the PTL of Neutron; I'm also the chief technologist for open source networking at HP, and I'm going to moderate this.
I'm Armando Migliaccio, I also work for HP.
I'm Gary Kotton, I work at VMware on the Nova and Neutron drivers.
I'm Sukhdev Kapur, I'm part of the SDN team at Arista Networks.
My name is Maru Newby, I work for Red Hat; I'm just a general-purpose Neutron contributor.
Okay, so actually, let's do this: how many people are aware of the Neutron plug-in decomposition? Oh, okay. Salvatore, put your hand down, we know you are. So, actually, a good number of people. Okay, so Armando, why don't you give us a quick overview of what it is, so we can level-set everyone.
Yeah, I mean, obviously I won't be able to do it justice in only a few minutes, but these efforts started at the beginning of Kilo, when we recognized that the code base had grown, maybe went past a certain critical mass, and we realized that we had to do something. Obviously Neutron has experienced enormous growth in terms of plug-ins and drivers; we were at 50 in Kilo.
Wow.
Yeah, so that's indeed a big number. So we realized that the way we started the development process didn't actually keep pace with that rate of growth, and we had to do something. So we sat down and figured out how we could change the code structure, as well as the development, testing, and documentation process, in a way that we could cope with that growth. And what we did was leverage the way Neutron is extensible and pluggable. As many of you may know, we have monolithic
plug-ins, we have service plug-ins, we have ML2 mechanism drivers, we have agents. And we figured that all these components that were contributed by vendors or other open source communities could be considered components of their own, that didn't necessarily need to be part of the same code base, and we went through the process of fragmenting that code base so that it could be managed in silos. In the end result, the end-to-end system still looks like a Neutron system, but that helped us empower each individual team to be more in charge of their own destiny. That was as far as the code structure, the composition, was concerned. Obviously we also had to revise the way contributors brought their feature requests forward, the way they dealt with defect management, packaging, testing, continuous integration, and so on. And obviously there was the aspect of getting consensus on this; there was an overwhelming consensus, I would say, last cycle, and we went through the process of taking all these mechanism drivers and plug-ins, bringing them out, and having them live a life of their own. We'll share experiences throughout this panel, and I personally think, being the one looking after this process in Kilo, which by the way hasn't finished yet (in this summit, as a matter of fact, we'll go through the process of understanding what's left to be done), I felt it was a fairly positive experiment. But yeah, this is an opportunity to gather input from the crowd and, obviously, from my colleagues here, to see what went well, what went wrong, and what we do in this forthcoming cycle to improve things even further.
So I forgot
to mention: there are two mics, so hopefully we can make this... well, I thought there were two mics. Okay, there's one mic, I lied. So we can make this interactive; if people have questions, please feel free to line up. We have a bunch of other questions, but I think next: Gary, would you like to take it on and tell us, because you went through the plug-in decomposition with the VMware driver, right?
Yeah, that's correct. With us, we had requirements to provide two plug-ins for the vSphere drivers, so we'd actually developed those internally in the Icehouse cycle, and in Juno we were planning on upstreaming them. That's where we started interacting with Armando on how to go about that, and basically that's when he raised the idea of the decomposition. Initially I was pretty much opposed to that, and I'll explain later, when we talk about the pros and cons (there are two of us, by the way). So basically, once the decomposition was decided upon, we were fortunate enough, where we work, to actually work over the Christmas break, so while everybody was away we were able to upstream all of our code, get the CI up and running, and get everything done. There were two major challenges. One was the actual creation of the project in Stackforge, where one of the obstacles was trying to get all of the git history into the project; thankfully, I think Doug had written a script which enabled us to do that, and that was really great. Another challenge was that our back end provided distributed virtual routing; in the Juno cycle that was a layer-3 extension, and it was made an official API in Kilo, so we had to do a number of tweaks along the way. But thankfully, after debating how we were going to upstream the stuff, whether we do it on a patch-by-patch basis or just throw all of the code out there, the fact that we're
able to be the owners of our own destiny enabled us to actually have a plug-in up and running, upstreamed, within a couple of weeks. We'd invested two or three months internally developing all of this, but the actual process of upstreaming it and getting it out into the community was relatively smooth. The major challenge we had was the review process over the database migrations in the Neutron tree, where we got some great input, and basically that enabled us to springboard and get our show on the road.
So, Sukhdev, you also went through this as well. I think you had a note, and some of the prepared questions were around the ML2 sub-team as well: how you worked together with them, and what worked and didn't work.
Yeah. The ML2 team was quite concerned about it: the way the tree is set up for ML2 drivers, everybody shares, everybody cooperates, and so forth. So even during the Paris discussion there was a lot of apprehension, a lot of people were opposing it, and I was one of them; I was not very keen on moving forward with it. But as things started to progress (part of it was a fear of the unknown, right, when we started), as things started to clear up, it turned out that it was a really, really smooth process. It was much simpler than what we had thought it would be, and not only were we able to do it rather quickly, we were able to help a lot of people in the ML2 subcommittee itself. So for three or four months, every week we would discuss this, we would help each other, we were showing samples ("here is how I did it"), people were sharing their experiences, expressing where they were stuck, where help was needed, and so forth. So, a few things I want to mention. One thing for which I want to really thank the infra team is helping set up the Stackforge framework; that was pretty smooth. In the beginning it was a little bit tricky, but
once it got going it was a cookie cutter at that point, right? And another thing which I want to really commend is the core team, and the way they responded to issues. For example, I can share my personal story: I had a couple of patches that needed to be backported, but they needed to be merged into the master branch first, and the core team wouldn't approve them. So I'm chasing these guys, and nobody's willing to approve them, and the deadline for the backport is going away, so I was frustrated like hell. So I bring it up with the core team, and then people jumped in as they realized; immediately, I think, Armando came forward, he fixed this, he changed the policy, because nobody knew what the policy was for how we were going to allow backports while the decomposition was going on. So everything was resolved, everything fell into place. This was a classic example of real teamwork, that's what I would say.
Okay, so, Maru, then I have a question for you as well, and it revolves around testing a little bit. I always put you on the spot a bit with this, but are there any concerns with the testing aspects of moving the code out? When the code was in-tree, we had unit tests; you could debate the coverage aspects, but they were there. We had third-party CI, but it wasn't in the gate, so we weren't running every patch through a certain set of tests. So are there any concerns around this, now that we've decomposed these?
I'm not actually sure I'm the right person to be answering that; my participation was largely in getting the process started, and Armando did most of the work of actually implementing it. But one of the motivations for allowing vendors to do their work outside the tree is that they don't necessarily need to hold to the same standards of testing, code
quality, or any of that stuff that the Neutron project itself does. The core stuff is relied on by any number of different pieces, but a given vendor may have different internal standards, they may have all kinds of non-Neutron testing that they perform, and they may not need comprehensive testing of the integration part. So the goal, and I think it was achieved, is that because vendors are working outside the tree now, we don't require them to have a certain quality of unit tests or a certain style of code. The only thing they're required to have, really, is third-party CI, and really that's only an informational thing: it's telling them when we break them. And for some people it will allow action: if you have a close connection to the company, if you're a developer in the company and you see a Neutron patch breaking third-party CI, you can act on that. But really, it's just an indication. We require it mainly because it's an indication that you are integrating early and often, not because we actually need it to pass all the time; it just has to show you're making a best effort.
Right. So one thing I wanted to mention: we've heard a lot about companies and vendors here, but this also applies to open source projects. I've been involved with the OpenDaylight project and the OVN project, and they're decomposed as well. So, Armando, what do you think? This has affected the open source projects as well, and we've actually seen a lot of interesting collaboration there; I think the same benefits apply to both.
Yeah, and that's obviously a good point, because OpenDaylight, as a matter of fact, was one of the proof-of-concept drivers that we looked at to try out how this process would work in practice. And not only have these efforts helped existing drivers and plug-ins go at a much faster
pace in their development, they also enabled new drivers and new plug-ins to be brought on board in line with the rest of Neutron's existing drivers and plug-ins, just as Gary mentioned before. And I think that's been beneficial, because it meant that requests for new plug-ins and drivers wouldn't sit in the review queue for ages because the review team didn't have the manpower to go through the process of vetting and validating all of them. By enabling them to contribute only a slim integration layer for their plug-ins and drivers, it became a lot smoother a process. So for instance, you mentioned OVN; that's an open source initiative that I think has benefited greatly, because the faster pace of development allowed it to gain a lot larger mind share than it might have had if we hadn't had this process in place. So yeah, I would say it was a positive experience for these projects too.
Yeah, I kind of disagree a bit with that. I think the OVN project is a good example: if that was in-tree in Neutron, it could have been exposed to more people, and more people could have been contributing. Just a few of the pain points that we have with our external plug-in: when all of the plug-ins were in-tree, if somebody made some kind of API change or some internal change, that would affect the unit tests immediately, and whoever was making the change to the Neutron code base would, in order to get the gating to pass, have to fix all of the tests. What we found over the last few weeks is that our plug-in breaks every three or four days. What we do now is, when patches are proposed to Neutron, we run the unit tests on the external plug-in, and
then we're able to detect immediately if and when the plug-in is broken. For example, two or three weeks ago somebody dropped all of the context support from all of the unit tests, and our plug-in broke for a few days, so it took us a long time to get that fixed. One of the pain points that I feel is that the community has been a bit fragmented. In the past, if I was proposing a patch, maybe not everybody would understand it, but some people would actually look at it and give valuable comments. That's the trade-off: I'm able to do things a lot faster and at my own speed, but I'm not getting any input from anybody in the community. And one of the things Maru spoke about, the hardening of tests and code quality and various standards to live up to: internally we try to keep up to those limits, but there are certain watchdogs that are no longer around to help us ensure that quality, that level of acceptance of patches within the community. So it's all basically a trade-off: the upside is that we're able to upstream a complete, fully baked Neutron plug-in; the downside is that we don't have external people involved in the project. It's really a double-edged sword.
Yes and no. I think the OVN project is a good example in that, if it's not on somebody's radar in the Neutron project, they won't see a patch that's proposed; they're simply not interested enough. Otherwise you can subscribe to a project and be involved in that development process.
I don't think that's that simple for somebody who's new to the community, but that's my two cents.
No, I'm going to have to agree with Armando somewhat on this, because even when we were in-tree, if I submitted a patch, anybody who had an interest in it would spend time and review it, but most people will ignore it. But
the only difference is that I had to chase the cores to approve it. So it went from five to ten days if I'm lucky, maybe ten days, twenty days, to get it merged in; now it takes me five to ten minutes. I've got patches in Nova which have been in review for nine months.
Really?
So really, the challenge is about velocity of merging versus how much of the shared stuff you can have.
I think I wanted to say something here too. I think Gary's concern is entirely correct: if we separate things out of the tree, then their visibility is going to be less for people who don't know where to look, and it does have a tendency to fragment the community. We do need to make up for that; we do need to have, maybe, a landing page that highlights all the important projects, or some way of repairing that damage. And maybe, if you're suggesting we're not doing a good enough job of that, that's something we can work on.
So I think this is actually a good segue, but I wanted to say: there was a session on the new OpenStack governance right before this, the "big tent" as it's been called, and to some extent maybe how we're trying to help raise awareness of this is that we have, like, the Neutron tent. A lot of these plug-in back ends that were spun off onto Stackforge, and existed on Stackforge for the Kilo cycle, are now actually being proposed back into Neutron. So the repositories are coming back into Neutron, and they're not in the same tree. In other words, it's a way to try to keep the review velocity but keep them under the same tent. As the PTL it's more work, because now these are under me: I have to pay attention to them, I have to make sure they're there. But it's a way we can try to at least bring back some of that cohesion while leaving the velocity of reviews with the teams.
So that was our
biggest concern in Portland when we argued about it; that was the key reason. Now you lose that visibility of what is where. So the way I go about it is: I go look for the requirements file and I try to find it from there, in which directory the requirements file is; then I try to chase that, and I go look at Stackforge, and I'll find it. Eventually I'm able to get to where I want to get, but it's a little bit harder, it's a pain in the neck. So it's a question of which evil you'd rather have, right? I mean, when you have a release coming out and the customers are looking for a feature you want to put out, here you're at the mercy of ten, fifteen days to get the code merged, versus being able to put it out the next day. So it's a trade-off.
Yeah, one of the things that we did was to keep the Stackforge plug-in in line with upstream: when the stable Kilo branch was cut for Neutron, we cut the stable Kilo branch for the Stackforge project, and we backport critical fixes to it. So for whoever is using stable Neutron, we can tell them: listen, you can use the VMware NSX repository, this is the repo that you take, and we ensure that it doesn't break with the stable repository. That's one way of trying to keep in line with the project.
So that actually brings up a good point. These decomposed back ends, a lot of them have already released to PyPI, and for those who aren't familiar, that's the place for Python packages to be released so they can be downloaded using pip. So have you both gone through this? You've released things to PyPI? You haven't done it; you did, right?
Yeah, I've done it. So that's another interesting point: how do you manage the release? Here you have a Neutron release, and on the other side you have a Stackforge project with a completely independent release system. How do you
align them? That's something we'll only get better at with time. It does give us flexibility. Presently, the way we're doing it is we are aligning: when Neutron goes stable Kilo, we create our stable Kilo, we version it appropriately, and we push it up. So now when somebody is running stable Kilo and they go and pull our package, it's going to match. And if there are bugs, backporting and so forth, to some degree it gives us a lot of flexibility; we can tell them, hey, pick this version when you're applying the patch. So, I mean, I started out very apprehensive about the whole thing; I had all those concerns, you know: nothing is visible, I'm going to be lost, where do I get the stuff, and all that. That drawback is there, there's no denying it; it's a little more painful to look for things. But the benefit, which we have already started seeing, is tremendous. In the last several weeks we have made so many changes: boom, five minutes, you're merged, you're up there, you're testing, you release the code. That's a huge plus.
Again, if anyone has a question, feel free to raise your hand and we'll try to take it; otherwise you can head over to the microphone. Oh, okay, we'll come back to you, yeah, if there's time. Okay, so I'll repeat the question for everyone: does the decomposition apply to L2 agents, or where is the line drawn? Armando, did you want to answer that?
So, agents may not necessarily live on their own; they may work in conjunction with a plug-in or a driver, and the idea is that yes, you can take the agent code and keep it co-located with your driver or plug-in. The process in Kilo was that you would contribute the slim interface belonging to the core Neutron tree, just the agent entry point. We will need
to revise that in Liberty, but the idea is that yes, the same principles and practices that were applied to plug-ins and drivers would just as well apply to agents.
For me, the confusing part is that the reference implementations are still kept in the tree, and sometimes, with all of the ML2 drivers and plug-ins, it's a bit confusing what should belong in-tree and what shouldn't. That's something which I think is going to be resolved in the L cycle.
I think so, yeah; we'll likely decompose the reference implementation as well. So there's a question over here, and then we'll come over to Sal.
So, at the beginning there seemed to be some consensus that this was a good idea. For maybe some other OpenStack projects, or other open source projects that have a plug-in kind of architecture or external contributors, is there any document anywhere, anything that has the bullets, the suggestions, of how I should do this? If I've got my own OpenStack project, what can I take away, what can I learn, other than just coming and listening to you guys? Because I get the feeling there's more to it. Is the spec that Armando wrote that detailed?
Yeah, I mean, there are a couple of pointers. Obviously there's the spec proposal; I can share the URL somehow after this session. And after the blueprint specification got approved, we went through the process of documenting the steps for contributing new drivers and new plug-ins according to this new model, and what steps to take to bring the existing plug-ins and drivers to embrace it. That came in the form of developer documentation that's been part of the Neutron core source repository. Obviously those steps are heavily dependent on how Neutron is arranged, but I would imagine that it could be used as a platform to generalize, and
then take those same concepts and maybe apply them to a different open source project. So obviously, if there is an interest from some other open source project in taking the same steps as we did, we can work with the core team, myself included, to go through the process of generalizing those steps and seeing how they could be applied to a new context.
For reference, I should just say that I don't actually think that a lot of what we're doing specifically is relevant to other projects, because of the circumstances behind why we chose to do what we did. Initially, when Neutron was started, it was a low enough overhead to have the vendor stuff in the tree, and having the vendors close to the community was actually really beneficial for growing the community; it just reached a point where it wasn't sustainable anymore. So for another project to consider what we've done, they would have to evaluate all the criteria we used for making the decision, not just whether to split or not because of the benefits gained; there's a whole bunch of social issues behind it.
And I think there's an API aspect to it too, right? If we had a really tight internal API...
Yeah, the other concern is the way we were architected. You're right, it's not entirely social; a lot of it was just the nature of how we've implemented plug-ins, and the tight coupling made it really difficult to have an easy separation. For a project that wants to have future options, I think maintaining a separation between something that a vendor would maintain and something that would be more core, if you do that early, would make this kind of step much easier, and it wouldn't be nearly as painful as we've experienced.
Okay, now we can come to Sal.
I would like to take a cut at the second part before you take this. I actually forgot the first part, yeah,
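[Editor's note: the "thin shim in tree, vendor code out of tree" arrangement the panelists keep referring to works because Neutron loads drivers through setuptools entry points (via the stevedore library), so the core tree only needs to know a group name while the vendor package advertises its driver under that group. The sketch below illustrates the idea only; the group and driver names are hypothetical, not Neutron's actual ones.]

```python
# Sketch of entry-point-based driver discovery, the mechanism that lets
# out-of-tree plug-in packages register themselves with a core project.
# The group name "example.ml2.mechanism_drivers" is illustrative.
from importlib.metadata import entry_points


def discover_drivers(group):
    """Return a mapping of driver name -> entry point for a given group."""
    try:
        eps = entry_points(group=group)        # Python 3.10+
    except TypeError:
        eps = entry_points().get(group, [])    # fallback for older Pythons
    return {ep.name: ep for ep in eps}


# No vendor package advertising this hypothetical group is installed
# here, so discovery yields an empty mapping.
drivers = discover_drivers("example.ml2.mechanism_drivers")
```

[An out-of-tree package would declare something like `example.ml2.mechanism_drivers = mydriver:MyDriver` in its own packaging metadata; installing it then makes the driver discoverable without touching the core tree, which is what lets vendors release independently.]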
but this is the second part, right: third-party CI. I'd like to take specifically that part. I don't even know if there was a question; I think there were a few things. One part I took out of it was third-party CI: we have all these plug-ins that have been decomposed (even when they were in-tree; now they're out of tree, and they're coming back under Neutron), so how do we guarantee the quality of the plug-ins? Or, when they're in the tree, what do we consider quality? I wanted to touch on the third-party CI part because we've been doing this for a year in Neutron, with varying degrees of success, and I think part of the reason is that it's been a bit of an adversarial process; at least that's my perception. I think we need to do a better job of providing what people need; maybe what we're requiring them to run isn't good. In some cases people have developed third-party CI systems that were not exactly like the OpenStack system, and they had trouble with that from a cultural position. I think we need to relax those constraints: if someone's running third-party CI and they've done it whatever way they want, as long as it is reporting the results and it's actually testing the plug-in, we should be fine with that. We're actually going to be talking about that in the design summit, and Sal, I expect you to be there.
So I'll try to have a stab at the first part of your question. If I understood it correctly, you were asking about the next steps, what we're going to be doing as far as DB schemas and API extensions are concerned. When we started looking at this effort in Kilo, we realized that the timeline was incredibly tight; obviously the cycle in the second half of the year is usually shorter, due to holidays, and
we figured that if we wanted to go all the way through and take vendor plug-ins and drivers entirely out of tree, it was probably going to be a step too far. So, to the question of in or out, we figured the answer was: well, maybe we can get something in the middle, and aim for it in time for the Kilo release so that we didn't break the world. And we figured that golden mean was: we take the back-end code entirely out of tree and keep the DB schemas, API extensions, configuration files, and a few other things (a tiny shim layer of the plug-in or driver) in the tree, and get on with it. That process proved itself successful, somewhat, and what we're going to look for in the Liberty time frame is to enable drivers and plug-ins to go all the way and take the extensions, DB as well as API, out of tree. There are certain mechanisms that we need to work on in order to enable that to be more "official", quote unquote, and at this summit we'll take the opportunity to look at those aspects. Did I answer your question?
Thank you very much.
So, along those lines: we've decomposed plug-ins and drivers at the ML2 level, the core plug-in level; what about service plug-ins? Because, for example, we merge new service plug-ins in-tree, and we haven't gotten there. Are we looking at doing that in Liberty?
Well, as a matter of fact, in Kilo we also split the service plug-ins, the advanced services, out of tree, thanks to that effort, and as far as I understand they're going to go through the same process and look at their drivers and plug-ins and adopt some of the practices that we spearheaded for them. So I think, again, as Maru was
saying, there are contexts where this can be applied successfully. If your architecture is done in a certain way, this model of operating lends itself pretty well; some others may not benefit the same way. I guess that's a judgment call, and it's to be made by the core team, the PTL of the project, and so on.
Just to say that I don't honestly think that getting to some end goal with regards to vendor decomposition is necessarily a huge priority; I think we need to weigh it against everything else. Vendors get most of the benefits from having a degree of autonomy: they get to release independently, and they don't have to go through the same core-review hurdle to get their code merged. So maybe it'll be worth doing that this cycle, and maybe it won't be. I'd like to hear from different vendors; I'd like to hear from you two, actually: whether you think it's justified to go full bore and try to get everything out of the tree, or whether you're most of the way there and you're happy with that for now.
So I have already taken the rest of the service plug-in out of the tree, so most of the rest of the stuff is out of the tree, and I think it works just fine. Once you understand the process and once you're familiar with it, I don't see any reason to leave most of the stuff in the tree, unless you feel compelled, like with the DB, for instance: I don't know how to deal with the DB migration scripts and so forth in Stackforge, so that's why that's sitting there. But anything which is the back end, anything which talks to it over REST, is completely out of the tree.
For us, I think, for the most part we're happy. The database migrations are always challenging, and keeping the internal code quality up to standard is something we're striving for. We've got three CIs running at the moment, two posting upstream, one we have yet to organize the bits
to post upstream too, but we're running on all of the patches, we're running the unit tests, so I'd say we're relatively happy. It's just that we need to understand the database model moving forwards. I think the extensions are also challenging, whether Neutron is going to move to a micro-versioning model or stay with the extension model, and I think that will have its impact on external plug-ins; we've just got to roll with the punches and see how the community evolves with that.
Okay, so I think we've only got five minutes or a little bit less left. So one thing I wanted to ask (okay, we'll get to you): just one more thing, because we presupposed that one of the reasons we did this was to increase review velocity, and certainly I think we've heard that it's increased review velocity outside. But I'm curious: has it increased review velocity for the Neutron core? So you'd like to ask our panelists.
Well, I think it's actually had a bit of an adverse effect on the reviews internally in Neutron. I made a bit of a storm on some changes in the API model, where certain attributes were added to the main API, and I think in the past, maybe, if the core reviewers were more focused on the core project instead of the ancillary projects, then maybe those issues could have been dealt with at the core. But that's just my perspective on it. I feel that maybe people are reviewing a little less because they're more focused on what's their bread and butter, and less on the community stuff.
Yeah, I mean, speaking from my own experience, I felt the full brunt of the force of this process, reviewing lots of code being decomposed. So to me this cycle was kind of atypical, because we had to go through the process of kick-starting it, which led to lots of churn. But once that calmed down a bit, I've seen, I've had
a positive experience: I've seen my review dashboard getting more focused and obviously easier to digest on a daily basis, so that, to me, was very beneficial. And Sukhdev stopped bothering me all the time... well, he still does, but that's fine, it's okay, I've learned to cope with it. But I would say this would be a more relevant question in the Liberty cycle, once we are... that's why I was saying that, to me, Kilo was an atypical release cycle, because we had to bootstrap the process. Speaking for Armando, I think he was probably overburdened in this release cycle, but Liberty will start to show the fruits. That's the way... yeah, I'm looking for anyone to buy me a beer. Okay, I think we've reached the time limit, so thank you to the panelists here, and hopefully everyone learned a bit about how we're trying to help scale neutron development.

Good afternoon, and welcome to the neutron futures panel. We're going to be talking today with a set of startup founders in the neutron space. We've talked about technical issues around neutron throughout the day in various forms; now we're going to deal with some of the larger architectural questions, think about where we're headed with neutron, and look at some of the activity that's taking place in the startup space as well: all of the venture activity, a lot of the interest in neutron from the broader technology space, and where we're moving forward. With that, we'll do a set of brief introductions. I'm Eric Hanselman, chief analyst at 451 Research; we're an emerging-technology analysis firm, for those of you that don't know us. And to my right... I'm Dan Mihai Dumitriu, CEO and co-founder of Midokura, and we're one of the network virtualization overlay players in the space. I'm Scott Sneddon, and I'm a principal architect at Nuage Networks. I'm not actually a founder, and Nuage actually isn't a
startup; we are a venture under a big company called Alcatel-Lucent, but we are fairly arm's length, and we try to act like a startup and try to be cool like a startup. I'm not a founder, but I was one of the first hires, so I guess I get credit for that. (We love you anyway.) I'm Pere Monclus, co-founder at PlumGrid, again another overlay-based SDN solution; we bring a comprehensive solution, with security, to OpenStack. Thanks, and I'm Rob Sherwood, CTO of Big Switch Networks. Somewhat differently from all my panelists, we actually provide network virtualization for the physical network, and we'll happily pass all of their traffic above us. All right. I'd also like to keep in mind that, while I've got a boatload of questions for these guys, we also want to get questions from you as well, so get your thinking caps on as we start digging into this. I'll be opening it up for questions; specifically, we've got a microphone over on this side, so if you've got questions, line up at the mic and we can take them as they come up. I wanted to start off with an acknowledgement that today happens to be an interesting anniversary. We talk about disruption in networking and what's happening, and this is the 35th anniversary, a significant anniversary, of the Mount St. Helens eruption, which, if you're from the Northwest, and even if you're not, you probably know about. Mount St.
Helens, and one of the questions about neutron disruption is: is this going to be that sort of volcanic shift taking place in the marketplace today? There's disruption, and then there's disruption. So I want to throw this out to the panelists: is neutron that level of disruptive? Are we displacing networking broadly? Is this opening up new vistas, or where are we? And let's take this from neutron back to its Quantum roots, a little retrospective. Well, I guess neutron is just a way to express the need for applications to create networks on demand. Now, the disruption, or the non-disruption, may come from whether our existing physical networks are ready to change as the workloads need. Usually, when you had a single organization essentially using networking on their own premises, you could change it as much as you wanted, because in the worst case you could only disrupt yourself, so the danger was minimal. Now that OpenStack exists, with projects like neutron, with networks on demand, the question is: is this notion that instantiating a network could bring the network down acceptable or not? And if you think about it, a lot of the network virtualization concepts are about how to create a safe environment that can be feature-rich, secure, on-demand, elastic, and dynamic, and that eliminates those risks. So on the question of whether neutron is the disruption or not, I would say neutron is the catalyst that requires this elasticity and this on-demand creation of networks, and now different solutions, different vendors, are going to provide an answer to that. Yeah, I don't think we're fundamentally changing how networking is done. We're still moving packets around; it's still largely IP-based. We have new protocols, we have new control planes, we have new ways of expressing those networks, but we're still passing packets largely the same way we've always done, maybe using software a little more than
chips, maybe using APIs and programmability instead of CLIs to provision, but we're still moving packets. So maybe not quite that big a crater, but probably a new vista, I'd say. I actually would say that what's cool about neutron is that it forces the issue of automation, which I think has been largely ignored, and that is the thing that I think is different and fundamental. To Scott's point, are we actually doing networking differently? Not really. But turning networking from a CLI problem into a DevOps problem, which is really what neutron is a vehicle for doing: I think that's actually pretty cool. Yeah, you know, the problem with these panels is that they wind up agreeing with each other too much. But in all seriousness, I think it's not so much like a volcano erupting, but maybe more like a glacier melting: it melts slowly for a while, and then suddenly the rate of change increases. I think what's really changing is that, exactly as my fellow co-panelists have said, the workload needs are different, and the traditional networking concepts don't really work as they should, either in terms of functionality or scale or fault tolerance and such, and that's driving us to create new solutions. But over time, the value is going to shift from the box to the software, and that's happening in multiple ways: the functionality is moving closer to the edge of the network, closer to the workload, to the host, as well as within the switches themselves; the disaggregation of the black-box switch into operating systems and hardware platforms is happening, and I think that is also influenced by the need for automation. (I love you too. I'm tired, I apologize; just blame it on that, exactly.) So I think the value is shifting. It's not going to be a big eruption; it's not going to kill profit margins
overnight, but over time it's definitely going to bring very, very large changes. Well, I'll disagree with some of the panel, and for a couple of the reasons you guys mentioned. Automation, which we haven't really done in networking, in some cases really just scares the pants off a lot of networking people. The automation piece is something we maybe got kind of good at in a small way, automating a few VLANs here and there, but if you take a look at most enterprise deployments, it's really not out there in any meaningful fashion, and neutron is a shift in mindset; it's a shift in maturity in terms of operations. So let me follow up on that, on something that Dan said. (See, now he's got a much bigger time-zone shift than you did, so you've got to cut him some slack here.) I absolutely agree that value is moving into the software, but a lot of people, when they hear "into the software", hear "into the vSwitch", and a lot of the reason why networking people traditionally fear automation is that their networking stack, their physical stack, is this scary black box that could fall over at any time because you sneezed at it. (Or it could erupt.) Yes, or it could erupt; you've seen that error code. In my mind, yes, value is being pulled out of the switch hardware, but there's actually a huge ecosystem opening up on the switch side for software, and that software stack is no longer as scary, because you can poke it, you can prod it, you can actually do DevOps and automation on that side as well, and I think that's a different dimension that's getting unlocked because of things like neutron. It is indeed, and there are multiple angles. As Rob was mentioning, from an automation point of view you can always automate physical and virtual, vSwitches and physical switches; the question is always what fundamental value you are trying to provide, and to whom.
And if you think about it traditionally, when people thought about infrastructure, you had compute, storage, and networking groups, and within compute we could argue it has been automated and virtualized with VMs and vCenters and KVMs and things like that. Now what happens when you bring in networking? Networking is fundamentally something slightly different, because it's not an entity per se: it moves packets on behalf of somebody who requests a service. If you think of it like that, then what you do is create applications on demand. We're talking about OpenStack, where you're going to create environments that may live in your private cloud, may live in your public cloud, may span multiple environments, and the question is always: should the network be static and physically attached to a specific site, or should it be able to follow the application? And there are different values that come from different angles. For example, when you come from physical networks, the notions of SLAs, multicast, and high-bandwidth connectivity are physical properties that must come from a properly orchestrated physical environment. But then you're going to have things like security, and an application that goes into the hybrid cloud with some sort of federation, so that it spans, in a secure way, maybe encrypted, from your private cloud to your public cloud. So what we've seen is the shift I was mentioning between what goes into the vSwitch versus what goes into the network, and I don't think people should take it as one versus the other, in the sense that different values come from different environments. For example, from the edge you can even encrypt the traffic end to end; that doesn't mean the physical network doesn't have to provide some sort of path with a specific SLA. Now the question is what features you put in each environment, in the so-called overlay versus the underlay, which at PlumGrid we usually call VNI, virtual network infrastructure,
versus physical network infrastructure. People have to understand that the tension between overlays and underlays is not one against the other, because all software runs on top of some sort of wire. If we start thinking of it in a different way, virtual network infrastructure versus physical, then people have to associate the proper value with each layer, especially when thinking about what happens with public cloud, with hybrid cloud, with multi-site locations, and then the whole thing starts to make sense in a different way from the current understanding that one layer has to do everything. So we're moving through a transition in which the overlay capabilities maybe enable that shift in the underlying physical environment. Certainly, when you attach a connectivity property, or a networking property with security, close to the application, then when the application moves around you can track what the application needs in a much better way. But again, you need both components: for the properties the application expects, the overlay definitely enables that, but the SLA may come from a physical characteristic of the network. I see Rob chomping at the bit over there; go for it.
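The virtual-versus-physical layering being described can be caricatured in a few lines of code: the overlay contributes tenant isolation (an encapsulation header carrying a virtual network identifier) and application-level properties like encryption, while the underlay contributes a transport path with an SLA. This is only a toy sketch of the division of responsibilities; all the names are hypothetical and do not correspond to any vendor's API.

```python
from dataclasses import dataclass


@dataclass
class OverlayFrame:
    """A tenant packet wrapped with overlay metadata (think of a VXLAN-style VNI)."""
    vni: int          # virtual network identifier: which tenant network this belongs to
    encrypted: bool   # overlay-level property: end-to-end encryption
    payload: bytes


@dataclass
class UnderlayPath:
    """A physical path; SLA properties live here, not in the overlay."""
    max_latency_ms: float

    def carry(self, frame: OverlayFrame) -> dict:
        # The underlay forwards opaque outer packets; it never inspects the
        # tenant payload, it only guarantees transport properties.
        return {"vni": frame.vni,
                "bytes": len(frame.payload),
                "sla_latency_ms": self.max_latency_ms}


# Two tenants share one physical path; isolation comes from the overlay
# header (the VNI), while the latency guarantee comes from the underlay.
path = UnderlayPath(max_latency_ms=5.0)
a = path.carry(OverlayFrame(vni=1001, encrypted=True, payload=b"tenant-a data"))
b = path.carry(OverlayFrame(vni=1002, encrypted=False, payload=b"tenant-b"))
print(a, b)
```

The point of the sketch is the one made above: the underlay never inspects tenant payloads and the overlay never promises latency; each layer carries the value it is actually positioned to provide.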
I'm trying to decide how much of this conversation to monopolize; I could go at this all day. So, in my mind, there are a number of nice properties that are useful to implement in software, and there are a number of nice properties to implement in hardware, and I think we agree on that. Now, whether you implement those nice properties with an overlay is completely irrelevant. What is an overlay? A tunnel header. Is it really fundamentally, architecturally different from an MPLS tag, a VLAN tag, or some other marker? It's metadata to the network that says, "I've already thought about this a little bit; let me pass some of that on." What gets really complicated is when these things start having different brains. If you look at, for example, how the ML2 plugin works (and the ML2 plugin, don't get me wrong, is an incredibly useful thing), it's a hack, right? It means I'm going to have one brain for the physical network and one brain for the virtual network, and that's fine, because that's the state of a lot of deployments right now. But if you actually had one brain to manage it all, for example something that managed both the physical hardware and the virtual hardware, that starts to look like a much more, in my mind, sane network architecture, and that's completely independent of whether it's an overlay or not. All right, so step to the mic if you can get to it; I realize it's sort of clunky getting people up here, and I should let you guys talk at some point. I appreciated that last comment, but I'm not sure I agree with it wholly. All of your products can give me the ability to take a virtual machine and plug it in, but when I look at what I used to be able to do in the physical network: how do I implement a multi-tenant service node in a VM that can legitimately connect to thousands of tenant networks, using any of your four products, with a standard API, that doesn't require me to have an NDA with you
and that works out of the box, right now? Yeah, there are ways to achieve that now in OpenStack. In particular, looking towards Liberty and beyond, the VLAN-aware VM and NIC-type work and some of those things start to get you there through a nice, clean, standard OpenStack API. But can we get all four of you to commit to one of those? We're working on it. So, isn't that neutron? No... well, you mentioned two things, right? One is how do I create some entities, some connectivity, that scales multi-tenant. If you follow the neutron API, you can create networks, create routers, create projects, create floating IPs, create everything. Now, if you treat these as the common layer, because you may or may not want to get stuck with a specific vendor, it's up to the four people at this table to essentially make sure that you are satisfied with our products; but why not neutron as an API? We're fine... I mean, honestly, if we had logical VLANs in a service node we'd be pretty happy with that, but we haven't seen that be a point of progression in any one of these things, as a fundamental use case for the network. It didn't have to be a fundamental use case before, with the physical network; it just was. But now we have logical networks defined where no one cared. So, from what I've been seeing over the last couple of days as I've been exposed to this work, there hasn't been a lot of pull for that particular use case from cloud providers, because the multi-tenancy isn't necessarily at the VM or service-node layer. But we're starting to see it in these NFV use cases more and more, and I spent half a day in a meeting with a very large US telco and a few of our partners in the industry talking about exactly this. The commitment to that VLAN-aware NIC type or VM, or whatever the work stream is called (ping me after and I'll go look it up), I think is going to happen, and I think you're going to see that, because the large telcos are starting to look at OpenStack as a viable platform
and are starting to push those requirements more and more. I will say, and I'm happy to follow up on this afterwards, that the APIs into our controller, which are published APIs, are part of our neutron implementation. You talk about VLAN awareness, you talk about multi-tenancy: you can create multiple logical routers, and there's a piece called the system router to which you can connect these logical routers. But the requirement was to use the same API across all four of you, because that's what I used to have with Ethernet; that's what I used to have with my switch API for Ethernet. Well, let me be more precise than that: I had a way to do multi-tenancy to a network edge device, yeah, with four different CLIs from four different vendors, right? I get it, but it's not necessarily four different APIs. There are core neutron plugin vendors in this building that you cannot do that with. It is not uncommon to find a core network vendor where I can't have one IP address assigned to multiple ports, still. This is true, and the fact is that, unfortunately, it's not just the four of us who need to agree to make changes to the neutron API. We can all stand up here and say, "Let us show you how to do it through our extension," but that's not the standard API that you want. So it's actually a separate point to say that, in my mind, neutron is actually the lower bar, which is to say: as I come up with a function that I think makes me better than other people, particularly as an open API, people start to look at it and say, okay, that would be really useful as an inclusion for neutron. And now we know one way of doing it; once we know another couple of ways of doing it, then we can create a standard interface for everybody to drop into their plugin, and I think that's actually the right way to move. Certainly that's the open-source way of moving
a standard forward. So, a question from the back: Rob, you mentioned ML2, and the ways we've looked to address this within neutron. Is that a capability we should be focusing on, or something we need to move beyond? Pluses and minuses for ML2, and sort of where we are. So here we have a notion, maybe even following up on the discussion about the API: an API is an abstraction, right? You want a service to be performed, multiple networks, reused IP addresses, whatever, and we start building the onion from an API that we call the neutron standard. Then we say, well, what about a way to plug in different vendors, and this could be a plugin, an ML2 driver, and so on. What usually happens, especially with different vendors, is that everybody creates their own ML2 driver or plugin and tests and certifies their solution. But in networking you have two approaches. One is you create an interoperability environment where everything has to mesh with everything; we come from a world of networking where people are obsessed with the little elements, like a switch, a router, a firewall, the DHCP, the DNS, and so on. Now, when we go to this new virtualized world, if I just focus on providing a switch or a VLAN, who's going to make sure that somebody else's router interoperates with mine? Here you have a proliferation of drivers; each vendor could provide a certification for its own functionality, but as an environment you don't have any guarantees that it works. The other extreme is you go to a plugin from somebody that has more than one component, and now all these multiple components work together, because that's the certification aspect. So I would say that, regardless of plugin or driver, the question is what the community needs in terms of interoperability testing, or the functionality matrix that a new definition
of a networking vendor in a cloud environment has to provide. Because if the goal of the cloud is to jump-start a cloud in hours, not days, and now you say I have to bring in 25 networking vendors with 25 little drivers that have to interoperate together, is this the right way of doing clouds for the future, or is this the way we did things in the past? This is where we have to rethink a little bit the notion of what networking means for the cloud, not only by defining plugins and drivers, but starting with: do we have the proper API? And then, what type of operational tools and visibility, and what kind of differentiation do you expect from vendors? Maybe this notion of 25 vendors in the same deployment is not realistic anymore; that's the other thing we have to try to understand. I have two comments on that. I mostly agree with what you said, and basically, with respect to ML2 and the drivers for the different layers, in some cases I think it just doesn't make sense to use multiple components for layer 2, layer 3, and some of layer 4. In probably all of our solutions, the base functionality includes this multi-layer aspect, so it doesn't make sense to plug somebody else's router into one of our solutions, typically. That said, with the services on top, like layer 4-7 services, particularly the layer 7 stuff, we're not going to do everything under the sun, so we do need an integration point, and indeed that's where interoperability testing is very important, and that's probably what neutron should be focusing on, rather than trying to build stuff from scratch, in my opinion. Michael, with American Airlines. One of the things I'm hoping to hear more about is how you guys plan on talking, for enterprises, to legacy things. We have really good Oracle salespeople who like to sell Exadatas and Exalogics; you have fabulous sales
engineers with F5 or Citrix who like to sell their products, because we need to do the SSL offload. We have this legacy world of things that we have to connect to, and we need to be able to connect to them with policies, with guarantees, and we may have to manage MTU sizes. I haven't really heard you guys talk about how we would deal with the real world of intermixing, where we have to get outside of just the OpenStack-controlled environment. Some people call it legacy; I call it shit that works. All it took was somebody to ask the question, and we can talk about anything up here. I will say, folks at Big Switch from our team who are here are working on things like the external port extension, where you say "here's a physical port connected into my OpenStack environment": that's good bootstrapping towards that. Some of the things like the LBaaS v2 and the firewall-as-a-service stuff are at least modest first attempts in that direction, and I actually think one of the things that's great about OpenStack is that it provides a forum for us all to get together and say, all right, let's at least agree on the minimum subset of how we get this stuff to work. Yeah, from our customer use cases, from what we've seen (of course, we can talk theoretically about what should and shouldn't be possible), there are two different patterns. One is where the users run all the legacy stuff, like F5 for example, entirely outside of the cloud, all the way out in front, and that's not great, but it works for now, until that vendor provides some sort of virtual form factor of their product, or until they move to something else, which might ultimately be the answer in that case. The second is where they try to use basically the layer 2 gateway service, either in a hardware VTEP switch or in a software gateway, to get things
out of, and back into, the virtual network, which is not ideal, but again, it does work, until a more elastic version of those layer 7 products comes out. It's a transition step, I think. But I think the fundamental question here, if you ask at least the four panelists, is that this is the role we provide to our customers, in the sense that one thing is what you can do with OpenStack and neutron (and OpenStack is going with containers and everything, across the board), but then you're going to say: I have a specific set of assets that are not even under the control of OpenStack, and I have to onboard them. I bet if you ask four people you're going to get four answers, because our job is to answer these questions, right? What happens then is an interesting phenomenon, because as soon as you start using solutions that solve your needs but deviate, because beyond the specified neutron API different vendors have to provide extensions, at that point you have this dual management model. So I would say that, a lot of the time in the neutron community, we have a lot of people working for the vendors themselves, and sometimes the voices of customers asking: is OpenStack a closed environment, such that all the APIs and all the use cases we should think about are OpenStack? People would say no, because we bring in physical assets and Docker containers, and people have to account for that. But what about everything else? This is maybe where the notion of a user saying, look, it's not only about the world inside OpenStack, but what about bridging to the other side of the wall? And I would say you probably will not find that much resistance from commercial solutions, because that's what we are good at: solving
today's needs, because we cannot wait for standardization to come up with the proper answer. But the point is how we agree on some ways for the community to advance towards those use cases. Yeah, I mean, we're what, three years into this adventure that we call SDN? Maybe four years, three years since the Nicira era. We're realizing really quickly that not everything is a greenfield; in fact, almost nothing is a greenfield, and we have to figure out ways to address these things. All of us as vendors here have a solution that answers your question: I've got a box I can sell you today that takes care of what you want it to do, and it will somehow nicely interact with your OpenStack environment, if not become a part of it. The Ironic work and things like that which are happening are a good step in the right direction, but the OpenStack community has kind of taken the view that the world is OpenStack and not necessarily anything else, and there is a lot of other stuff out there. There are bare-metal assets, there's a big Oracle database, there's that big company down here in Seattle that does cloud reasonably well. We have to figure out a way to interoperate with those things and to leverage those assets as well within OpenStack, and I'd like to see more community involvement from that point of view. That's an issue we're going to get back to in just a minute, but first we're going to take the next question. Great, thanks. I'm going to bring the conversation a little bit back to startups and strategies and so forth, away from product specifics. I know that your companies have all been around longer than neutron, and you sort of joined the neutron ecosystem as neutron emerged, and in the subsequent three years there are now
separate cloud networking ecosystems forming, specifically around Docker, around Kubernetes, Mesosphere, and other software-defined data center stacks. So my question to all of you, or any of you, is the extent to which you focus on neutron-style networking versus addressing some of the requirements being introduced by these other ecosystems, which are distinctly different, targeting different use cases and deployment patterns. I guess in my mind these cases are not so distinctly different; rather, with the right software interfaces, it's actually not too hard to address all of those use cases. At least for my company, OpenStack is the first thing out of the door for us, the first thing we want to support, but we also support other things that are coming down the line, like the ones you mentioned. At the end of the day, everybody is looking at multi-tenancy, everybody is looking at virtual networking, everybody is looking at how I manage overlapping IPs and how I integrate third-party services that have physical ports in my network. If Docker does it three levels of vSwitch deep, that's not so incredibly different. That would be my answer. Though maybe it's a bad idea, OpenStack is a framework that kind of gives us a shortcut to some of those, right?
There's this new ecosystem around Docker; if I can run Docker under OpenStack, then I've got a way to present a network service to it. We have a lot of customers asking us for Hyper-V and Microsoft support; I can really easily run my Nuage stuff on Hyper-V, but all that .NET stuff is a pain, so if I can get you to run Hyper-V under OpenStack, then I have a simpler way of offering a service there that we can start to integrate. But you're right, these things are always changing. We also do VMware and CloudStack, because that's the reality, and these things are changing, and we have to stay ahead of them and figure out where we focus resources and where we keep moving the ball forward. And we're vendors, and we're coin-operated, so you guys are the ones telling us where to go. To be fair, I think the Docker and Kubernetes and such use cases have emerged very, very recently, and Docker has only just started doing their whole libnetwork abstraction thing, and of course we jumped onto that to be able to support it well, but that hasn't been deployed yet. I guess possibly the big difference between the deployment patterns, as Scott said, of OpenStack, CloudStack, and VMware, which are all kind of similar, and Docker, could be that Docker is much more developer-focused, so it needs to include the developer's desktop to some extent. That's a little bit different from just running everything in the cloud, as it were, but we'll see what happens. And you have to think that all of us have a networking solution that essentially provides a set of fundamental
contracts, like connecting a port to something, and that something could be physical, could be virtual, could be a container itself. The next question is how you orchestrate that in order to provide the proper automation, and this is where, today, we're at an OpenStack summit discussing how OpenStack is used, through neutron and our plugins or drivers, to operate our networking solutions. We don't create networking solutions predicated upon a specific stack, in the sense that networking has always been about connecting things, and things live across different environments: physical, virtual, containers, anything. So the question becomes more whether the market is going to have a clear leader, like what happened with OpenStack four years ago. At least when we started (you were saying that we all predate neutron), there were many open-source cloud environments, and today you have one; now in the container world the same thing is happening, multiple are emerging, and eventually there's going to be some consolidation, because the industry will not be able to handle five or six different open-source ways of orchestrating containers. That's where the maturity of the market will come, and it will be much easier to see what the proper orchestration stack is; maybe it's OpenStack, or maybe OpenStack provides the proper integration to it. So we always decouple the technology, what our networking solutions can do, from how they get operated, and today we are discussing how they get operated through OpenStack. There's no limitation why, from OpenStack, you couldn't connect containers and VMs, and we have a demo about that at the summit, but I guess everybody has one, right? So: technology versus orchestration, and
we have to bring all these things together in a way that is beneficial to everybody.

All right, well, that actually heads us down the road to some discussion about the venture interest in this environment, and really what's happening in networking broadly. It's not only the companies that you all represent but a whole set of folks, probably a number of them here, who are in this environment, which is, well, I don't know if you'd say overheated, but certainly very hot in terms of investment and interest. Is that reasonable? Is it justified? Is networking really worth all this?

Well, I don't think it's overheated, for sure. In my opinion, first of all, if we're going to talk about SDN, that's such a broad umbrella, and a lot of different products are ripe for disruption, you know, in the venture parlance, right? Switching appliances of all sorts, anything with software stuffed into a box, is ripe for disruption now that people don't necessarily want to run physical boxes anymore and things are becoming elastic. So I think the level of venture interest is quite appropriate, given the size of the opportunity, if not too low, as we look for more use cases beyond just the data center and the cloud, into the WAN and the SD-WAN ventures that have come along.

Yeah, I think the investment is justified, and we'll continue to see some interesting things happening, especially because, as we were saying, networking is kind of the glue that puts everything together, and with the current models reinventing how hosting happens, how private clouds and public clouds are built, there's a lot of innovation that is ready to disrupt the ecosystem.

There's the old adage, you know: put your money where your mouth is. There's a lot of money being thrown at this problem, and
I actually think that just means there's a huge change going on in the ecosystem. If you've heard the quote "the network is in my way," I think a lot of people have finally come to terms with the idea that the thing that's actually causing them to not do as much business as they want is not the CapEx cost, and it's not the OpEx cost, although it would be nice to get those to go down; it's actually the business agility. If you could actually solve that networking piece, your company could do more of what it does to make money, and I think that's the thing that's actually turning the flywheel from Sand Hill Road to make this happen.

I used to live on Sand Hill Road, so I'll put that plug in. We've talked a little bit about the community and where we are in terms of overall engagement. I know there were a lot of requests made of all of you, things that people would like to see. What would you like to ask back of the community around Neutron? What do we need to be doing, and where do we need to be going? If the Neutron team were here, what would you fire at them directly?

If I could return to the question the gentleman in the front asked: OpenStack, and Neutron specifically, is a great place to do third-party integration. It's one of these things that to my mind should be totally commodity, and it actually just benefits everyone to have more APIs, like load balancing as a service, firewall as a service, things like that, or multi-tenancy. So the more you can say "this is what I want to have happen; here's the magic button, and when I push it I want you to implement it so that it does the right thing," and tell us what the magic button looks like, the more we can make that happen. So I guess, in terms of making faster evolution on the APIs, to address the community's needs,
well, as members of the Neutron community, we found out that it's slow; you probably felt the same way, right? And one of the reasons is that the community always wanted to have a reference implementation based on Open vSwitch or whatever, something that's actually core in Neutron itself, in OpenStack. So, my cheap plug for our open source project here: we open sourced part of our software last year, a very significant part, precisely so that we could make a pure open source implementation of certain features our own way, without actually going through the Open vSwitch plugin or ML2 or any of that stuff, because we believe we have the technically correct path. So I think that part of what we could have in terms of community engagement is: if people want an extension of some sort that can be put into our open source implementation, please contribute it, and we'll push up an extension API for it as well. More open source is good; that seems to be what this community has been demanding.

Yeah, the ask I have of the community is, as you're involved with the PTLs and with the development community, ask them to keep pushing Neutron as a reference framework, not a reference implementation. That's one of the things that we think is holding Neutron back from advancing: we spend so much time within Neutron trying to develop a widget that passes packets, and that's not always the easiest problem to solve. Some of us have been investing a lot of money over a lot of years building things that do that well; let us do that well. Let the people that have a nice open source implementation build that and do it well; use that in your labs, in your testing, and in your production environments, and let's push the OpenStack community to be a framework that we can develop to, and let us add value there where we can, instead of
focusing on actually implementing the thing in a network node or something like that.

Well, you know, I think we have to weigh the pros and cons of some of the things that we try to do in Neutron and maybe step away from a few of them. I guess the other point, beyond what has been said already, is the notion that until now Neutron has been focusing on a relatively small number of APIs that provide certain features. We have to start thinking that when people deploy clusters at scale, you have to start focusing on operational tools: what kind of APIs and abstractions should be provided to understand how to troubleshoot, how to monitor, how things perform? A lot of us provide a lot of differentiation in terms of operational tools and features that give you visibility into the network. The question is, beyond Ceilometer, which basically reports a bunch of stats, how can we bring a set of elements into the Neutron project that allow us to understand much more of what's going on in this important layer of the stack?

I want to make sure that we leave a chance for any further questions, if anybody has them, but I've got a couple of closing questions as we start to wind down here. We're here at the official coming-out party for Kilo, on the cusp of the launch of Liberty. What do you like, or what should everybody here know about Kilo that they may not know already, on the networking side of things, and what do you most want to see in Liberty?

So I looked at this after you sent us that question, to really put some thought into it, and I didn't see a widget that really got me excited. The thing that got me excited was the actual decomposition of the
plugins that happened. So, along the theme of what we've been saying, that's stepping away from making it a product and making it more of a platform that lets outside things contribute in a more meaningful and rapid way.

And on top of that, I would add the notion that Kilo, with all the discussions about the Big Tent and the OpenStack core, probably signifies a move toward a model where, from release to release, an upgrade is not seen as a major, disruptive event, but rather as a gradual thing where you can upgrade components as needed. This maturity is needed for the industry to essentially graduate from the fast-paced innovation cycles to a much more supportable environment, especially for vendors like us; a six-month, completely disruptive release cadence creates challenges for the industry. So how do we grow from Kilo to a much more sustainable, more mature model for the enterprise?

I'll actually withhold my answer so we can get to the questions.

Okay, thanks. It's not really a question, but when James Hamilton said "the data center network is in my way," it wasn't a call for disruption or anything like that; it was a request that networks become invisible. People in this room care a great deal about networks; people outside, not very much, and they'd rather never deal with them. And it seems like there's a huge amount of inertia toward making what you are doing more complex, whereas people building apps want it all to be simpler.

Absolutely agreed, and if you think that anything I said contradicted that, then I must have misspoken. At the same time, you know, I feel like that's the real benefit of Neutron: as you're saying, I want a network that does this, here's a set of buttons, make all the other complexity go away.

I know Tom, so I know the gist of this question. I think what he's really referring to is that
it seems like the Neutron community keeps talking about adding more and more APIs into Neutron, and it's unclear that all of those APIs are actually necessary. My interpretation is that there's some set of APIs that are needed for application developers to express the needs of their workloads, and there's a bunch of other stuff that's for the operators, and there's not a clear separation between the two. My personal feeling is that those operator-specific things are going to be very hard to standardize, because that's where we all differentiate, to be honest, so there's not a huge amount of incentive to standardize them.

And with that, we don't have any more questions, so I think we will wrap it up. Any final closing statements from any of you, or are we good to go? Remember that we're standing between everybody and free beer on the expo floor. Well, that was exactly my thought. So with that, if you'll join me in thanking the panelists: thank you for spending time with us.