I think we've got a set of mics; we'll pass them back and forth and sort of do that. Good afternoon and welcome to the Neutron Futures panel. We're going to be talking today with a set of startup founders in the Neutron space. We've talked about technical issues around Neutron throughout the day in various forms; now we're going to deal with some of the larger architectural questions, think about where we're headed with Neutron, and think about some of the activity that's taking place in the startup space as well: all of the venture activity, a lot of the interest in Neutron from the broader technology space, and where we're moving going forward. So with that, I want to do a set of brief introductions. I'm Eric Hanselman, chief analyst at 451 Research. We're an emerging technology analysis firm, for those of you who don't know us. And to my right. I'm Dan Dumitriu, CEO and co-founder of Midokura, and we're one of the network virtualization overlay players in the space. Scott Sneddon, I'm a principal architect at Nuage Networks. I'm not actually a founder, and Nuage actually isn't a startup; we are a venture under a big company called Alcatel-Lucent. But we are fairly arm's length, and we try to act like a startup and try to be cool like a startup. And I'm not a founder, but I was one of the first hires, so I guess I get credit for that. We love you anyway. Pere Monclus, CTO and co-founder at PLUMgrid, again another overlay-based SDN solution that we bring, a comprehensive solution with security for OpenStack. Thanks, and I'm Rob Sherwood, CTO at Big Switch Networks. Somewhat different from all my fellow panelists: we actually provide network virtualization for the physical network, and we'll happily pass all of their traffic above us. All right. And I'd also like to keep in mind, while I've got a boatload of questions for these guys, we also want to get questions from you as well.
So get your thinking caps on as we start digging into this; I'll be opening it up for questions specifically. We've got a microphone over on this side, so if you've got questions, line up at the mic and we can take them as they come up. I wanted to start off with an acknowledgement that today happens to be an interesting anniversary. We talk about disruption in networking and what's happening, and this is the 35th anniversary, so a significant anniversary, of the Mount St. Helens eruption, which, if you're from the Northwest, and even if you're not, you probably know about. And one of the questions about Neutron disruption is: is this going to be that sort of volcanic shift taking place in the marketplace today? There's disruption and then there's disruption. So I want to throw this out to the panelists. Is Neutron that level of disruptive? Are we displacing networking broadly? Is this opening up new vistas, or where are we? And take us from Neutron back to its Quantum roots, a little retrospective about where we're heading. So, thoughts? Well, I mean, I guess Neutron is just a way to express the need for applications to create networks on demand. Now the disruption, or the non-disruption, may come from: are the existing physical networks ready to change as the workloads need? And usually what happens is that when you have a single organization using networking on its own premises, you could change it as much as you wanted, because worst case, you could only disrupt yourself. So in that case, the danger would be minimal. Now, when OpenStack comes into existence with projects like Neutron, with networks on demand, the question is: is this notion of having a network where a mistake could bring the network down acceptable or not?
And if you think about it, a lot of the network virtualization concept is about how to create a safe environment that can be feature-rich, secure, on-demand, elastic, and dynamic, and that eliminates those risks. So on the question of whether Neutron is the disruption or not, I would say Neutron is the catalyst that requires this elasticity and this on-demand creation of networks, and now different solutions and different vendors are going to provide an answer to that. Yeah, I don't think we're fundamentally changing how networking is done. We're still moving packets around; it's still largely IP-based. We have new protocols, we have new control planes, we have new ways of expressing those networks, but we're still passing packets largely the same way we've always done, maybe using software a little more than chips, maybe using APIs and programmability instead of CLIs to provision, but we're still moving packets. So maybe not quite that big crater, but probably a new vista, I'd say. I actually would say, what's cool about Neutron is that it forces the issue of automation, which I think has been largely ignored, and that is the thing that I think is different and fundamental. To Scott's point, are we actually doing networking differently? Not really. But turning networking from a CLI problem into a DevOps problem, which is really what Neutron is a vehicle for doing, I think that's actually pretty cool. Yeah, you know, the problem with these panels is that we end up agreeing with each other too much, but in all seriousness, I think it's not so much like a volcano erupting, but maybe more like a glacier melting.
So it's slowly, slowly melting for a while, and then suddenly the rate of change increases. But I think what's really changing is that, exactly as my fellow co-panelists have said, the workload needs are different, and the traditional networking concepts don't really work exactly as they should, either in terms of functionality or scale or fault tolerance and such, and that's driving us to create new solutions. But over time, what's happening is that the value is going to shift from the box to the software, and that's happening in multiple ways: the functionality moving closer to the edge of the network, closer to the workload, to the host, as well as, within the switches themselves, the disaggregation of the black box into operating systems and hardware platforms. And I think that is also influenced by the need for automation. Like Rob, I'm sorry. Love you too. I'm tired, I apologize. He missed the time zone change, just blame it on that. Exactly. You know, so I think the value is shifting, so it's not going to be like a big eruption, it's not going to kill profit margins overnight, but over time it's definitely going to make very, very large changes. Well, I would take the case, and I'll disagree with some of the panel, for a couple of the reasons you guys cited, which is that automation is something we haven't really done in networking, and I think in some cases it really just scares the pants off a lot of networking people. You know, the automation piece is something that we maybe got kind of good at, automating a few VLANs here and there, but if you take a look at most enterprise deployments, it's really not out there in any meaningful fashion, and Neutron is a shift in mindset; it's a shift in maturity in terms of operations. Do you mind if I follow up on that? On something that Dan said. I don't know. See, now he's got a much bigger time zone shift than you did, so you've got to cut him some slack here.
So I absolutely agree that value is moving into the software, but a lot of people, when they hear "into the software," they hear "into the vSwitch." And a lot of the reason why networking people traditionally fear automation is because their networking stack, their physical stack, is this scary black box that could fall over at any time because you sneezed at it. Or it could erupt. Yes, or it could erupt. You've seen that error code. I have. And I guess in my mind, yes, value's being pulled out of the switch hardware, but there's actually this huge ecosystem opening up on the switch side for the software, and that software stack is no longer as scary, because you can poke it, you can prod it, you can actually do DevOps and automation on that side as well. And I think this is a different dimension that's getting unlocked because of things like Neutron. It is indeed. And there are multiple ways. I mean, as Rob was mentioning, from an automation point of view, you can always automate physical or virtual: vSwitches, physical switches. The question is always, what's the fundamental value that you are trying to provide, and to whom? And if you think about it, traditionally, when people were thinking infrastructure, you had compute, storage, and networking groups. And now, within compute, we could argue that it was automated and virtualized with VMs and vCenters and KVMs and things like that. Now what happens when you bring in networking? Networking is fundamentally something slightly different, because it's not an entity per se. It enables somebody. It essentially moves packets on behalf of somebody that requests a service. If you think of it like that, then what you do is you create applications on demand.
We are talking about OpenStack, where you're going to create environments that may live in your private cloud, may live in your public cloud, may span multiple environments, and the question is always: should the network be static and physically attached to a specific site, or should it be able to follow the application? And there are different values that come from different angles. For example, when you come from physical networks, the notion of SLAs, multicast, high-bandwidth connectivity: these are physical properties that must come from a properly orchestrated physical environment. But then you are going to have things like security, and your application that goes into the hybrid cloud with some sort of federation, where your application spans, in a secure way, maybe encrypted, from your private cloud to your public cloud. So what we've seen is this shift that Dan was mentioning, about what goes into the switch versus what goes into the network. And I don't think people should take it as one versus the other, in the sense that different values come from different environments. For example, from the edge, you can even encrypt the traffic end to end. That doesn't mean that the physical network doesn't have to provide some sort of path with a specific SLA. Now the question is, what features do you put in each environment, in the, what's it called, overlay versus the underlay? Which in our jargon we usually call VNI, virtual network infrastructure, versus physical network infrastructure, because people have to understand that the tension between overlays and underlays is not one against the other, because all software runs on top of some sort of wire. So if we start thinking of it in a different way, virtual network infrastructure versus physical, now people have to associate the proper value with each layer, especially when thinking about what happens with public cloud, with hybrid cloud, with multi-site locations.
And then the whole thing starts to make sense in a different way from the current understanding that one layer has to do everything. So we're moving towards a transition where the overlay capabilities maybe enable that shift in the underlying physical environment? Definitely. When you attach a connectivity property, or a networking property with security, close to the application, then when the application moves around, you can track what the application needs in a much better way. But again, you need both components: from a feature point of view, the properties the application expects. The overlay definitely enables that, but the SLA may come from a physical characteristic of the network. I see Rob chomping at the bit over there. Go for it. Trying to see how much of this conversation I can monopolize. He and I could go at this all day, right? So in my mind, there are a number of nice properties that are useful to implement in software, and there are a number of nice properties to implement in hardware, and I think we agree on that. And now, whether you implement those nice properties with an overlay is completely irrelevant. So what is an overlay? A tunnel header. Is it really fundamentally, architecturally different from an MPLS tag, a VLAN tag, or some other kind? It's metadata to the network to say: I've already thought about this a little bit, let me pass some of that on. And what gets really complicated is when these things start having different brains. If you look at, for example, how the ML2 plugin works: the ML2 plugin is, don't get me wrong, an incredibly useful thing, but it's a hack. It means that I'm going to have one brain for the physical network and one brain for the virtual network.
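Rob's point that an overlay is just "metadata to the network" is concrete at the packet level: an 802.1Q VLAN tag carries a 12-bit segment ID, a VXLAN tunnel header carries a 24-bit VNI, but both are just a few bytes of context prepended to the frame. A minimal sketch of the two encapsulations (illustrative only, not any panelist's product):

```python
import struct

def vlan_tci(vid: int, pcp: int = 0) -> bytes:
    """802.1Q tag control info: 3-bit priority, 1-bit DEI, 12-bit VLAN ID."""
    assert 0 <= vid < 2**12
    return struct.pack("!H", (pcp << 13) | vid)

def vxlan_header(vni: int) -> bytes:
    """8-byte VXLAN header (RFC 7348): flags byte 0x08 (VNI present),
    reserved bits, then the 24-bit VNI in the upper bits of word two."""
    assert 0 <= vni < 2**24
    return struct.pack("!II", 0x08 << 24, vni << 8)

# Both are just metadata ahead of the inner frame; the practical
# difference is namespace size: 4k VLANs vs ~16M VXLAN segments.
print(len(vlan_tci(100)), 2**12)       # 2 4096
print(len(vxlan_header(5000)), 2**24)  # 8 16777216
```

Seen this way, choosing overlay versus underlay is less about the header format and more about which "brain" assigns and interprets those IDs.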
And that's great, because that's the state of where a lot of deployments are right now. But if you actually had one brain to manage it all, for example, something that managed both the physical hardware and the virtual hardware, that starts to look like a much more, in my mind, sane network architecture. And that's completely independent of whether it's an overlay or an underlay. All right, so you stepped in. Mike, if you have a question, can you get to the mic? I realize it's sort of clunky here getting people over. I should let you guys talk at some point. So, I appreciated the last comment that was made; I'm not sure I agree with it wholly. All of your products can give me the ability to take a virtual machine and plug it in. But when I look at what I used to be able to do in the physical network: how do I implement a multi-tenant service node in a VM that could legitimately connect to thousands of tenant networks, using any of your four products, with a standard API that doesn't require me to have an NDA with you? Actually, that works out of the box right now. Yeah, you're supposed to be able to achieve that. Now, in OpenStack in particular, I think looking towards Liberty and beyond, some of the VLAN-aware VM and NIC types and some of those things start to get you there through a nice, clean, standard OpenStack API. Can we get you all four to commit to one of those? We're working on it. So isn't that in Neutron? No. Well, you mentioned two things, right? One is, how do I create some connectivity entity that scales? A multi-tenant entity. Absolutely. If you follow the Neutron API, you can create networks, create routers, create projects, create floating IPs, create everything. Now, if you treat this as the common layer, because you may or may not want to get stuck with a specific vendor, it's up to the four people at this table to essentially make sure that you are satisfied with our products. But why not Neutron as an API? We're fine with that.
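The "create networks, create routers, create floating IPs" flow maps directly onto Neutron's v2.0 REST API. A hedged sketch of the request bodies involved (resource paths and fields follow the Neutron v2.0 API; the names and IDs are made-up placeholders, and no HTTP is actually sent here):

```python
# Sketch of the Neutron v2.0 REST calls behind "networks on demand".
# Each helper returns the (path, JSON body) you would POST to the
# Neutron endpoint; names and IDs below are illustrative placeholders.

def create_network(name):
    return "/v2.0/networks", {"network": {"name": name, "admin_state_up": True}}

def create_subnet(network_id, cidr):
    return "/v2.0/subnets", {"subnet": {"network_id": network_id,
                                        "cidr": cidr, "ip_version": 4}}

def create_router(name):
    return "/v2.0/routers", {"router": {"name": name, "admin_state_up": True}}

def create_floating_ip(external_net_id):
    return "/v2.0/floatingips", {"floatingip": {"floating_network_id": external_net_id}}

path, body = create_network("web-tier")
print(path, body["network"]["name"])  # /v2.0/networks web-tier
```

The questioner's point is that these resources are the common layer; anything beyond them, like trunking thousands of tenant networks to one port, historically lived in vendor extensions.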
I mean, honestly, if we had logical VLAN trunking into a service node, we'd be pretty happy with that, but we haven't seen that be a point of progression in any one of these things as a fundamental use case for the network. It didn't have to be a fundamental use case before with the physical network; it just was. But now we have logical networks defined where no one cared. So, from what I've been seeing over the last couple of days as I've been exposed to this work, there hasn't been a lot of pull for that particular use case from cloud providers, because the multi-tenancy isn't necessarily at the VM or service node layer, but we're starting to see that in these NFV use cases more and more. And I spent half a day in a meeting with a very large US telco and a few of our partners in the industry talking about exactly this. The commitment for that VLAN-aware NIC type, or VM, or whatever the work stream's called, ping me after and I'll go look it up, I think is going to happen. And I think you're going to see that, because the large telcos are starting to look at OpenStack as a viable platform and are starting to push those requirements more and more. So, I mean, I will say, and I'm happy to follow up on this afterwards: the APIs into our controller, which are published APIs, are part of our Neutron implementation. You talk about VLAN-aware, you talk about multi-tenancy: you can create multiple logical routers, and there's a piece called the system router, to which you can connect these logical routers. But the requirement was to use the same API across all four of you, because that's what I used to have with Ethernet. That's what I used to have with my switch. Well, you never had an API for Ethernet. Well, let me be more precise than that: I had a way to do multi-tenancy to a network edge device. Yeah, with four different CLIs from four different vendors.
Right, I get it, but it's not necessarily about four different APIs: there are core Neutron plug-in vendors in this building that you cannot do that with. It is not uncommon to find a core network vendor where I still can't have one IP address assigned to multiple ports. This is true, and the fact is that, unfortunately, it's not just the four of us who need to agree to make changes to the Neutron API, you know? Yeah, I mean, it would be very easy. We could all stand up here and say, yeah, let us show you how to do it through our extension, but that's not the standard API that you want. Kyle, are any of the PTLs here? And we're safe, all right. Let's change direction. It is actually a separate point to say, in my mind, Neutron is actually the lower bar, which is to say: as I come up with a function that I think makes me better than other people, particularly as an open API, people start to look at it and say, okay, that would be really useful as an inclusion in Neutron. And once we know one way of doing it, once we know another couple of ways of doing it, then we can create a standard interface for everybody to drop into their plug-in. And I think that's actually the right way to move. I mean, certainly that's the open source way of moving a standard forward. So, actually, can you pass the mic back? So, Rob, you mentioned ML2 and ways that we've looked to address this with Neutron. Is that a capability that we should be focusing on now? Is there something we need to move beyond? Pluses and minuses for ML2, and sort of where we are. So here we have the notion, and maybe even following up on the discussion about the API: an API is an abstraction, right? You want a service to be performed: a multi-tenant network, reused IP addresses, whatever. And now we start building the onion from an API, what we call Neutron, that has a standard. Now we say, well, what about a pluggable way to bring in different vendors?
And this could be a plug-in, an ML2 driver, and so on. What usually happens, especially with different vendors, is that everybody is going to create an ML2 driver or a plug-in and test and certify solutions. But in networking, you have two approaches. One is you create an interoperability environment where everything has to mesh with everything. And we come from a world of networking where people are obsessed with the little elements: a switch, a router, a firewall, DHCP, DNS, and so on. And now, when we go to this new virtualized world, what happens is that if I just focus on providing a switch or a VLAN, who's going to make sure that somebody else's router interoperates with mine? And here you have a proliferation of drivers, where each vendor could provide a certification for its own functionality, but as an environment, you don't have any guarantee that it works. The other extreme is you go to a plug-in from somebody that has more than one component, and now all these multiple components are going to work together, because that's where the certification aspect gets easier. So I would say that, regardless of plug-in or driver, the question is: what is the need from the community's point of view in terms of the interoperability testing, or the functionality metrics, that a new definition of a networking vendor in a cloud environment has to provide? Because the goal of the cloud is to jumpstart a cloud in hours, not days. And now you say you have to bring 25 networking vendors, who are going to work with 25 little drivers that are supposed to interoperate together. Is this the right way of doing clouds for the future, or is this the way that we did things in the past?
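For context, the driver layering described here is what operators actually wire up in Neutron's `ml2_conf.ini`: type drivers pick the segmentation technology, and mechanism drivers pick which backends, vendor or open source, realize each network. A typical open source example (the values here are illustrative, not a recommendation for any particular deployment):

```ini
[ml2]
# How network segments are encoded on the wire
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
# Which backends get a chance to bind each port
mechanism_drivers = openvswitch,l2population

[ml2_type_vxlan]
vni_ranges = 1:10000
```

Every entry in `mechanism_drivers` is consulted for every port, which is exactly where the "who certifies the combination?" question comes from.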
And this is where we have to rethink a little bit the notion of what networking means for the cloud: not only defining plug-ins and drivers, but starting with, do we have the proper API, and then, what type of operational tools and visibility, and what kind of differentiation do we expect from vendors? And maybe this notion of 25 vendors in the same deployment is not realistic anymore. So that's the other thing that we have to try to understand. I see the question there, but I want to give Dan a chance to... I have two comments on that. I mostly agree with what you said, and basically, with respect to ML2 and the drivers for the different layers, in some cases I think it just doesn't make sense to use multiple components for layer two, layer three, and some of layer four, right? I mean, in probably all of our solutions, the base functionality includes this multi-layer aspect, right? So it doesn't make sense to plug somebody else's router into one of our solutions, typically. That said, with the services on top, like layer four to seven services, particularly layer seven stuff, there we're not going to do everything under the sun, so we do need an integration point. And indeed, that's where interoperability testing is very important, and that's probably what Neutron should be focusing on, rather than trying to build stuff from scratch, in my opinion. Michael, from American Airlines. One of the things I'm hoping to hear more about is how you guys plan on talking, for enterprises, to legacy things. We have really good Oracle salespeople who like to sell Exadatas and Exalogics; you have fabulous sales engineers with F5 or Citrix who like to sell their products because we need to do the SSL offload. We have this legacy world of things that we have to connect to, and we need to be able to connect to them with policies, guarantees, and we may have to manage MTU sizes.
I haven't really heard you guys talk about how we would deal with the real world of intermixing, where we have to get outside of just the OpenStack-controlled environment. Some people call it legacy; I call it shit that works. Well, it just took somebody asking the question, and we can talk about anything up here. I will say, folks at Big Switch, Kanzi from our team is here, are working on things like the external port extension: a way to say, here's a physical port connected into my OpenStack environment. That's good bootstrapping toward that. Some of the things like LBaaS v2, some of the firewall-as-a-service stuff: those are at least modest first attempts working in that direction. And I actually think one of the things that's great about OpenStack is it provides a forum for us all to get together and say, all right, well, let's at least agree on the minimum subset of how we get this stuff to work. Yeah, from our customer use cases, like what we've seen, of course we can talk theoretically about what should be possible and what shouldn't be possible. What we've seen are two different patterns, basically. One where the users are running all the legacy stuff, like F5, for example, entirely outside of the cloud, just all the way out in front. And that's not great, but it works for now, until that vendor provides some sort of virtual model, a virtual form factor of their product, or until they move to something else, which might be the answer ultimately in that case. And the second pattern is where they try to use basically the layer two gateway service, either in the VTEP, the hardware switch, or in a software gateway, to get things out of and back into the virtual network, which is not ideal, but again, it does work until a more elastic version of those layer seven products comes out. So it's a transition step, I think.
But I think here the fundamental question, if you ask at least the four panelists here, is that this is the role we provide to our customers, in the sense that one thing is what you can do with OpenStack and Neutron. And OpenStack is going, with Ironic and the containers and everything, kind of across the world, but then you're going to say: I have a specific set of assets that are not even under the control of OpenStack, and I have to onboard them. So if you ask four people, you're going to get four answers, because our job is to answer these questions, right? What happens then is an interesting phenomenon, because as soon as you start using solutions that solve your needs, they deviate, because we have a specified Neutron API where maybe different vendors have to provide extensions, and at this point you have this dual management model. So I would say that what happens a lot of times in the Neutron community is that you have a lot of people working from the vendors themselves, and sometimes the voices of customers asking: is OpenStack a closed environment, where all the APIs and all the use cases we should think about are OpenStack? And people would say no, because we bring physical assets and Docker containers, and we would have to create that. But what about everything else? And this is maybe where you get the notion of users saying: look, it's not only about the workloads that you have in OpenStack, but what about bridging to the other side of the world? And I would say you will probably not find that much resistance from commercial solutions, because that's what we are good at, solving today's needs, because we cannot wait for standardization to come up with the proper answer. But the point is, how do we agree on some ways for the community to advance towards those use cases? Yeah, I mean, we're three years into this adventure that we call SDN, maybe four years, three years since Nicira. Thank you.
And yeah, well, so we're realizing really quickly that not everything's a greenfield. In fact, almost nothing is a greenfield, and we have to figure out ways to address these things. All of us as vendors here have a solution that answers your question; I've got a box I can sell you today that takes care of that. What you want it to do is somehow nicely interact with your OpenStack environment, if not become a part of it. The Ironic work and things like that that are happening are a good step in the right direction, but the OpenStack community has kind of taken the view that the world is OpenStack and not necessarily anything else, and there is a lot of other stuff out there. There are bare metal assets. There's a big Oracle database. There's that big company down here in Seattle that does cloud reasonably well. We have to figure out a way to interoperate with those things and to leverage those assets as well within OpenStack, and I'd like to see more community involvement from that point of view. You raise an issue that we're going to get back to in just a minute, but first, we're going to take our next question. Great, thanks. I'm going to bring the conversation a little bit back to startups and strategies and so forth, away from product specifics. I know that you've all been around, all your companies have been around, longer than Neutron, and you sort of joined the Neutron ecosystem as Neutron emerged. In those subsequent three years, there are now separate cloud networking ecosystems forming today, specifically around Docker, around Kubernetes, around Mesosphere, and other software-defined data centers. So my question to all of you, or any of you, is the extent to which you focus on Neutron-style networking versus addressing some of the requirements that are being introduced by these other ecosystems, which are distinctly different, targeting different use cases and deployment patterns.
I guess in my mind, the use cases are not so distinctly different, or rather, with the right software interfaces, it's actually not too hard to address all of those use cases. So at least for my company, OpenStack is the first thing out of the door for us, the first thing we want to support, but we also support CloudStack, we also support VMware, and we also support other things that are coming down the line that you mentioned. And at the end of the day, everybody's looking at multi-tenancy, everybody's looking at virtual networking, everybody's looking at how do I manage overlapping IPs, how do I integrate third-party services that have physical ports in my network. And if Docker is doing it with three levels deep of vSwitch, that's not so incredibly different. That would be my answer. Though maybe a bad idea, but... But OpenStack as a framework kind of gives us a shortcut to some of those, right? There's this new ecosystem around Docker; if I can run Docker under OpenStack, well, then I've got a way to present a network service to that. We have a lot of customers asking us for Hyper-V and Microsoft support. I can really easily run my new stuff on Hyper-V, but all that .NET stuff is a pain. And so if I can get you to run Hyper-V under OpenStack, then I have a simpler way of offering a service there that we can start to integrate. But you're right, these things are always changing. We also do VMware and CloudStack, because that's the reality. And these things are changing, and we have to stay ahead of them and figure out where we focus resources and where we keep moving the ball forward. And we're vendors, and we're coin-operated, so you guys are the ones that are telling us where we go. To be fair, I think the Docker, Kubernetes, and such use cases are very, very recently emerging. And Docker has just recently started doing their whole libnetwork abstraction thing. And of course we jumped onto that, to be able to support it well. But that hasn't been deployed yet.
I guess possibly the big difference between the deployment patterns, as Chris said, of OpenStack, CloudStack, VMware, which are all kind of similar, and Docker, could be that Docker is much more developer-focused, so it needs to include the developer's desktop to some extent. That's a bit different than just running everything in the cloud, as it were. But we'll see, we'll see what happens. And you have to think that we all have a networking solution that essentially provides a set of fundamental constructs, like connecting a port to something. And that something could be physical, could be virtual, could be a container itself. The next question is how you orchestrate that in order to provide the proper automation. And this is where you say: today we are at the OpenStack Summit, so as such, we are discussing how OpenStack is being used, through Neutron and our plugins or drivers, to operate our networking solutions. But we don't create networking solutions that are predicated upon a specific stack, in the sense that networking has always been about connecting things. And things live across different environments: physical, virtual, containers, anything. So the question becomes more: when is the market going to have a clear leader, like what happened with OpenStack? Four years ago, at least when we started, and we all predate Neutron, there were many open source cloud environments. And today, from an open source point of view, you have mainly one, right? Now in the container world, what's happening is that multiple are emerging. And eventually there's going to be some consolidation, because the industry will not be able to handle five or six different open source ways of orchestrating containers. And this is where the maturity of the market is going to happen, and it's going to be much easier to see what's the proper orchestration layer. And maybe it's OpenStack, if OpenStack provides the proper integration for that.
So always decouple the technology, what our networking solutions can do, from how they get operated. And today we are discussing how they get operated through OpenStack. There's no limitation that stops you from connecting containers and VMs from OpenStack. PlumGrid has a demo about that at this summit, but I guess everybody has one, right? So think technology versus orchestration, and we have to bring all these things together in a way that is beneficial to everybody. All right, well, actually we headed down the road with some discussions about the venture interest in this environment and really what's happening in networking broadly. It's not only the companies that you all represent, but a whole set of folks, and probably a number here, who are in this environment, which is pretty, well, I don't know if you'd say overheated, but certainly very hot in terms of investment and interest. Is that reasonable? Is it justified? Is networking really worth all this? Well, I don't think it's overheated, for sure, in my opinion. And first of all, if we're going to talk about SDN, I mean, that's such a broad umbrella, right? A lot of different products are ripe for disruption, in the venture parlance, right? Switching, appliances of all sorts, right? Anything where it's software stuffed into a box, right? This is ripe for disruption, now that people don't necessarily want to run physical boxes anymore, as things move to elastic models. So I think the level of venture interest is quite appropriate given the size of the opportunity, if not too low. And as we look for more use cases beyond just data center and cloud, and into some of the WAN-focused SDN ventures that have come along, yeah, I think the investment's justified, and we'll continue to see some interesting things happening. Especially because, as we were saying, networking is kind of the glue that puts everything together.
And with the current models of reinventing how hosting happens, how private clouds and public clouds work, there's a lot of innovation that is ready to disrupt your systems, essentially. There's the old adage, put your money where your mouth is. There's a lot of money being thrown at this problem, and I actually think that just means there's a huge change going on in the ecosystem. You know, if you've heard the quote, "the network is in my way." I think a lot of people will finally come to terms with the idea that the thing that's actually causing them to not do as much business as they want is not the CapEx cost, it's not the OpEx cost, although it would be nice for those to go down. It's actually the business agility. And if you could actually solve that networking piece, your company could do more of what it does to make money. And I think that's the thing that's actually forcing the flywheel from Sand Hill Road to make this happen. I used to live on Sand Hill Road, but that's not good. And we talked a little bit about the community and where we are in terms of overall engagement. I know there were a lot of requests made of all of you, things that people would like to see. What would you like to ask back of the community around Neutron? What do we need to be doing? Where do we need to be going? If Al was here, or some of the other team, what would you fire off at them directly? If I could return to the question the gentleman in the front asked: OpenStack, and Neutron specifically, is a great place to do third-party integration. It's one of these things that in my mind should be totally commodity, and it actually just benefits everyone to have more APIs like LBaaS or firewall-as-a-service or things like that, or the multi-tenancy. So the more you can say, this is what I want to have happen, here's the magic button, and when I push it, I want you to implement it so that it does the right thing.
Tell us what the magic button looks like, and we can make that happen. So I guess in terms of making faster evolution on APIs, which is how to address the community's needs, well, as members of the Neutron community, we found that to be a bit slow. Probably you guys felt the same way, right? And one of the reasons is that the community always wanted to have a reference implementation based on Open vSwitch or whatever, something that's actually core in Neutron itself, in OpenStack. And so, well, this is my cheap plug for our open source project here. We open sourced part of our software last year, a very significant part, precisely so that we could make a pure open source implementation of certain features in our own way, let's say, without actually going through the Open vSwitch plugin or ML2 or any of that stuff, because we believe we have the technically correct path. So I think that part of what we could have in terms of community engagement is, if people want to have an extension of some sort that can be put into our open source implementation, please contribute it and we'll push up an extension API for that as well. So more open source is good; it seems like that's what this community has been demanding, right? Yeah, I mean, the ask I have of the community is, as you're involved with the PTLs and with the development community, ask them to keep pushing Neutron as a reference framework, not a reference implementation. I mean, that's one of the things that we think is holding Neutron back from advancing: we spend so much time within Neutron trying to develop a widget that passes packets. That's not always the easiest problem to solve, and some of us have been investing a lot of money and a lot of years building things that do that well. Let us do that well; let the guys that have a nice open source implementation build that and do that well.
You guys use that in your labs and in your testing and in your production environments, and let's force the OpenStack community to be a framework that we can develop to, and let us add value there where we can, instead of focusing on actually implementing the thing in a network node or something like that. Stepping away from code first. Well, I think we have to weigh the pros and cons of some of the things we try to do in Neutron, and maybe step away from a few of them. I guess the other point, beyond what has been said already, is the notion that until now Neutron has been focusing on a fairly small number of APIs that provide certain features. We have to start thinking that when people deploy clusters at scale, you have to start focusing on operational tools, and on what kind of APIs and abstractions should be provided to understand how to troubleshoot, how to monitor, how things perform. A lot of us provide a lot of differentiation in terms of operational tools and features that give you visibility into the network. The question is, beyond Ceilometer, which basically reports certain stats, how can we bring a set of elements into the Neutron project that allow us to understand much more of what's going on in this important layer of the stack? All right, and I want to make sure that we get that chance for any further questions if anybody has them. But I've got a couple of closing questions as we start to wind down here. We're here at the official coming-out party for Kilo, on the cusp of the launch of Liberty. What should everybody here know about Kilo, on the networking side of things, that they may not know already? And what do you most want to see in Liberty? So I went and looked at this after you sent us that question, to really put some thought into it. And I didn't see a widget that really got me excited. The thing that got me excited was the actual decomposition of the plugins that happened.
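The "reference framework, not reference implementation" argument, and the plugin decomposition just mentioned, amount to a familiar design pattern: the framework defines an API contract and pluggable backends supply the packet-passing widget. The sketch below uses hypothetical names (NetworkBackend, VendorBackend), not Neutron's real plugin interfaces, purely to illustrate the shape of that separation.

```python
from abc import ABC, abstractmethod

class NetworkBackend(ABC):
    """What a reference *framework* defines: the contract, not the datapath."""
    @abstractmethod
    def create_network(self, tenant: str, name: str) -> str: ...

class VendorBackend(NetworkBackend):
    """A vendor (or open source) driver supplies the actual implementation."""
    def __init__(self):
        self.networks = {}
        self._next = 0

    def create_network(self, tenant: str, name: str) -> str:
        self._next += 1
        net_id = f"net-{self._next}"
        self.networks[net_id] = (tenant, name)
        return net_id

# Framework code only ever talks to the abstract contract, so any backend
# that honors it can be swapped in without touching the framework.
def provision(backend: NetworkBackend, tenant: str, name: str) -> str:
    return backend.create_network(tenant, name)

backend = VendorBackend()
net_id = provision(backend, "tenant-a", "web")
```

Decomposing the plugins out of the main tree, as happened in Kilo, moves the real system closer to this shape: the contract stays in Neutron, while implementations evolve on their own schedules.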
And so that's, I think, along the theme of what we've been saying: stepping away from making it a product, and making it more of a platform that lets outside things contribute in a more meaningful and rapid way. And on top of that, I would add the notion that Kilo probably signifies, with all the discussions about the Big Tent and OpenStack core, how we move to a model where going from release to release is not seen as a major, disruptive upgrade, but rather where there are gradual changes and you can upgrade components as needed. Because this maturity is needed for the industry to essentially graduate from the fast-paced innovation cycles to a much more supportable environment, especially for vendors like us. I mean, a six-month, completely disruptive release cadence creates challenges for the industry. So how do we grow with Kilo to a much more sustainable model, a more mature model for the enterprise? I'll actually withhold my answer to take the question from the gentleman, that's okay. Thanks, it's not really a question, but when James Hamilton said the data center network is in my way, it wasn't a call for eruption or disruption or anything like that. It was a request that networks become invisible. People in this room care a great deal about networks; people outside, not very much. They'd rather not ever deal with them. And it seems like there's a huge amount of inertia toward making what you all are doing more complex, whereas people building apps want it all to be simpler. I absolutely agree. If you think that anything about what I said contradicted that, then I must have misspoken. At the same time, I feel like that's the real benefit of Neutron: you're saying, I want a network that does this, these are the set of buttons, make all the other complexity go away. I know Tom, so I know the gist of his question.
I think what he's really referring to is that it seems like the Neutron community keeps talking about adding more and more APIs into Neutron, and it's unclear that those APIs are all necessary. My interpretation is that there's some set of APIs that are needed for application developers to express the needs of the workload, and there's a bunch of other stuff that's for the operators, and there's not a clear separation between the two. And my personal feeling is that those operator-specific things are going to be very hard to standardize, because that's where we all differentiate, to be honest, right? So there's not a huge amount of incentive to standardize them. All right, with that, we don't have any more questions, so I think we will wrap it up. Any final closing statements from any of you, or are we good to go? Remember that we're standing between everybody and the free beer on the expo floor. Well, that was exactly my thought here. So with that, if you'll join me in thanking the panelists. Thank you for spending time with us.